from Michael John Neill

There are times to search for specific years and times when we should not. Many searches at Ancestry.com allow users to include more than just a specific year as part of their search. Before you mindlessly enter some search terms and click "search," think about the year and the range you entered.

If you know (reasonably) that a relative immigrated to the United States in 1850, you may want to search for 1850 immigrations allowing for an error of plus or minus two or five years, depending upon how reliable you think the 1850 date of immigration is. It may be necessary to broaden the search even more. If you are searching for someone in the 1860 census whom you think was born in 1840, you may want to search for them as being twenty years of age, plus or minus a few years, again depending upon how reliable you think the year of birth is. The older a person is, the more likely their age is to be incorrect.

Newspapers may be a little different. Some would run a "days beyond recall" column, where items from 20, 30, or even 50 years earlier were re-published in the newspaper. Consequently, a death notice from 1890 may appear in a 1940 edition of the paper. Do not assume that a reference to your ancestor 40 years after his death cannot be his; it may be that the paper is rerunning part of an earlier notice. Death notices typically do not appear thirty years before a death, but they may occasionally appear thirty years after.

Think about the record you are searching. How accurate does that date need to be? And is it possible that the range of years you are searching needs to be larger than you think?
INDRUSIAK, Camila dos Santos and ROCKENBACH, Sheila Petry. Prevalence of phonological deviations in 4- to 6-year-old children from a kindergarten school in Canoas - RS. Rev. CEFAC [online]. 2012, vol. 14, n. 5, pp. 943-951. Epub Feb 14, 2012. ISSN 1982-0216. http://dx.doi.org/10.1590/S1516-18462012005000011.

PURPOSE: to determine the prevalence of phonological deviations and phonological processes according to gender and age. METHOD: statistical and descriptive analysis of the phonological evaluation of Yavas, Hernandorena and Lamprecht (2002), carried out with 60 kindergarten children aged 4 to 6 years in municipal schools in Canoas - RS. The sample consisted of children whose parents did not report any auditory or neurological alterations or syndromes in the questionnaire; it was not specified whether the children had previously received phonological treatment or had phonetic deviations. RESULTS: the prevalence of phonological deviations was 55%, and most affected children were male. The highest prevalence of phonological deviations was found among 5-year-olds. The most frequent phonological processes were consonant cluster reduction (46.7%), final liquid deletion (40%) and liquid substitution (30%). The prevalence of phonological processes was similar across genders, except for the unstressed (atonic) syllable process in males. CONCLUSION: the high prevalence of phonological deviations shows the need for public health and prevention programs for human communication. Keywords: Prevalence; Speech; Language and Hearing Sciences; Language.
What does Anuk mean? The name Anuk is of Native American Inuit origin. The meaning of Anuk is "polar bear". Anuk is generally used as a boy's name. It consists of 4 letters and 2 syllables and is pronounced A-nuk.

The Given Name Anuk

Anuk is an enchanting and simple name. Daring yet delightful, the name is a great blend of character and flair. Although unique, your elegant little Anuk is sure to make it a memorable one. Anuk falls into the animal name category. In the U.S. in 2015, fewer than 5 boys and fewer than 5 girls were given the name; the year before, the numbers were the same. Want to see how Anuk sizes up? How it compares to some other names? Then check out the Anuk Name Popularity Page.

Anuk Related Names

Want to know how your name choice may affect your child? Then take a look at the Numerological Report For Anuk. It may give you some insight about your new baby. Children named Anuk are often described as lumbering and merry.

Anuk Name Fun

Would you like to fingerspell the name Anuk in American Sign Language? Then just follow the diagram below. Be creative with the name Anuk. Just for fun, see the name Anuk in Hieroglyphics, learn about ancient Egyptian Hieroglyphics and write a Hieroglyphic message. Learn about nautical flags and see your name or message written in nautical flags, on the Anuk in Nautical Flags page.
March 8, 2013

In an article for the Huffington Post, alum Sharon Stapel ('98), Executive Director of the NYC Anti-Violence Project, discusses the new Violence Against Women Act (VAWA), which was signed into law by President Obama on March 7, 2013. This new version of VAWA is the first federal legislation that explicitly prohibits discrimination based on sexual orientation and gender identity. "This is a cause for celebration," Stapel writes. "It is a model that we can build on, and it is a victory for LGBT people everywhere."

President Obama signs VAWA into law. On far right, alum Sharon Stapel ('98). Photo credit: Day One (http://www.dayoneny.org/)

Read Stapel's article on the Huffington Post here. Watch video of President Obama signing VAWA into law and acknowledging the contribution of Sharon Stapel, courtesy of Distinguished Professor Ruthann Robson on the Constitutional Law Prof Blog.
Set a Local Domain to Ease Local Development

If you're a web developer you probably do a fair amount of development on your local machine, using either the built-in Mac OS X Apache server or, in my case, something like MAMP. Because a local web server like this is really handy for testing, you can make your local development life a bit easier by setting a local domain, and we'll show you how to do that. For what it's worth, we're covering this for Mac OS X, but you can set local domains like this on a Linux or Windows PC too; as long as the computer has a hosts file, you can use a local domain with this same trick.

You're going to need to modify your hosts file in order to do this. It's not difficult, but it does require the command line. From the Mac Terminal, type the following:

sudo nano /etc/hosts

This will bring up the /etc/hosts file in the nano editor; it will look something like this:

# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
127.0.0.1 localhost

Setting the Local Domain Name

Next is the important part: you'll want to add the hostname you'd like to use locally (in this case, we'll use the name local.dev) to the end of that file on a new line, in the following format:

127.0.0.1 local.dev

Save the changes to the /etc/hosts file by hitting Control-O, and then Control-X to exit. Now you can access your local domain via the web browser, ftp, or whatever other means just by accessing "local.dev". You may need to flush your Mac's DNS cache for the change to take effect, and some apps, like Safari or Chrome, may require a quick relaunch too.

You obviously don't need to pick "local.dev" as your local domain, and you can actually use the localhost IP to test live domains this way without taking them live, which allows you to preserve links when testing a site, spider, crawler, or whatever else you're working on.
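If you set up local domains often, the edit can be scripted. The helper below is a hypothetical convenience function (not from the article, and the function name is my own): it appends a hosts-style entry only if the hostname is not already mapped, so re-running it won't duplicate lines. In real use you would point it at /etc/hosts with sudo.

```shell
# Hypothetical helper, not from the article: append an entry to a
# hosts-style file only if the hostname is not already listed there.
add_host_entry() {
  local file="$1" ip="$2" name="$3"
  # Match the hostname at the end of a line, preceded by whitespace.
  if ! grep -qE "[[:space:]]${name}\$" "$file" 2>/dev/null; then
    printf '%s\t%s\n' "$ip" "$name" >> "$file"
  fi
}

# Against the real file this would be run as root, e.g.:
#   add_host_entry /etc/hosts 127.0.0.1 local.dev
```

Note the hostname is interpolated into a regular expression, so a dot in "local.dev" matches any character; that is fine for a sketch like this, but a stricter script would escape it.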
very good report from THRAKINET (in Greek) – I made a short summary below

Not one, not two, but 15 dead loggerhead sea turtles (Caretta caretta) have been washed up on the beaches of Alexandroupoli, North-East Greece. Most of the lifeless bodies had injuries likely to have been caused by fishermen's nets. In the last 24 hours alone, two dead turtles have been found in the regions of Appollonia and Evthomo. According to the Mayor, 15 turtles have been found dead since the end of last April. He cites fishermen's nets and boat propellers as the most common causes of death.

Caretta caretta is considered an endangered species and is protected by the International Union for Conservation of Nature. In Greece, caretta caretta is protected by ARCHELON, the Sea Turtle Protection Society of Greece. ARCHELON treats over 50 injured or sick turtles every year at its Rescue Centre at Glyfada (Athens).

Biggest threats: fishermen's nets, plastic garbage, boat propellers, speedboats

Fishing gear is the biggest threat to loggerheads. They often become entangled in longlines or gillnets. They also become stuck in traps, pots, trawls, and dredges. Caught in this unattended equipment, loggerheads risk serious injury or drowning. Turtle excluder devices for nets and other traps reduce the number being accidentally caught.

Nearly 24,000 metric tons of plastic is dumped into the ocean each year. Turtles ingest a wide array of this floating debris, including plastic bags, plastic sheets, plastic pellets, balloons and abandoned fishing line. Loggerheads may mistake the floating plastic for jellyfish, a common food item. The ingested plastic causes numerous health concerns, including intestinal blockage, reduced nutrient absorption, suffocation, ulcerations, malnutrition or starvation. Ingested plastics release toxic compounds, including polychlorinated biphenyls, which may accumulate in internal tissues. Such toxins may lead to a thinning of eggshells, tissue damage or deviation from natural behaviors.
Satellites Top the "Must Have" List

Satellites, satellites, satellites! Participants in a Space Foundation survey have said loud and clear that they cannot live without satellites and the benefits they provide for navigation, communications, weather forecasting and logistics. The survey asked two questions about the impact of space:
- Which space-based invention has the biggest impact on society?
- What space invention could you not live without?
In both cases, roughly half the respondents cited various uses of satellites:
- 24 percent felt that the Global Positioning System (GPS)*, a satellite array managed by the U.S. Air Force that accurately pinpoints location and is most popularly used for navigation, has had the most impact on our lives - and 29 percent said they couldn't live without it
- Another 22 percent said that satellites* - especially communications satellites - have the most impact, and the same percentage said they couldn't live without them
Reflecting the broad and pervasive impact space research and development has had on our lives, survey respondents cited many other life-changing technologies, including:
- Medical imaging* (12 percent)
- Discoveries made by telescopes, especially the Hubble Space Telescope (8 percent)
- Computers and microelectronics (4 percent each)
- Aircraft safety*
- Cordless tools
- Space-age lubricants*
- Pressure-relieving foam mattresses*, such as Tempurpedic®
- Strong, light-weight materials*
In addition to GPS and satellite technology, the "gotta have it" list included:
- Pressure-relieving foam mattresses* (8 percent)
- Smoke detectors (6 percent)
- Cordless tools*, microwave technology and microelectronics* (4 percent each)
- Automotive oil derivatives*
- Space-age batteries
- Water filters*
The Space Foundation has long recognized the impact of space on life on Earth through two programs - Space CertificationTM, a marketing program for products originally developed for space, and the Space Technology Hall of Fame®, a
prestigious recognition program that highlights extraordinary space innovations and their impact. "It's not surprising that many of the inventions our respondents cited either carry Space Certification or have been inducted into the Space Technology Hall of Fame," said Kevin Cook, Space Foundation director - space awareness. "What's gratifying is that many people know that the technologies that make their lives easier and better originally came from space development - and that the technologies might not be there if there were not active space programs."

About Space CertificationTM

Products and services that display the Space Certification seal are guaranteed to have stemmed from or been dramatically improved by technologies originally developed for space exploration, or to have significant impact in teaching people about the value of space utilization. Developed and administered by the Space Foundation, the Space Certification program provides a marketing edge for Space Certification partners, demonstrates how space technologies improve life on Earth and makes space more interesting and accessible to everyone. Space Certification products and services have been scrutinized by the Space Foundation, working closely with NASA, the European Space Agency (ESA) and other organizations engaged in space research and development. For more information, go to www.SpaceCertification.org.

About the Space Technology Hall of Fame®

The Space Foundation's prestigious Space Technology Hall of Fame® honors innovations by organizations and individuals who transform space technology into commercial products that improve life on Earth.
Since the program was established in 1988 to increase public awareness of the benefits of space exploration programs and to encourage further innovation, 65 technologies have been inducted, including energy-saving technologies, life-saving medical devices, health improvement technologies, satellite and telecommunication technologies and practical commercial devices. Additional information about the Space Technology Hall of Fame®, including a complete list of inducted technologies, is available at www.SpaceTechHallofFame.org.

Many Top Responses Recognized by the Space Foundation

Many of the technologies survey respondents mentioned are recognized by the Space Foundation's Space Certification and/or Space Technology Hall of Fame programs. They are marked with an asterisk in this article.
Guest post by Uma Lele

How quickly and how well are developing countries transforming their economies, and with what effects on inter-sectoral growth and distribution? A group of us studied structural transformation, looking at evidence from 109 countries over 30 years covering the period 1980 to 2009, with a particular focus on China, India, Indonesia and Brazil. We later followed up on the five members of the East African Community.

Economists following Kuznets, Chenery and Syrquin, Johnston, Mellor and Timmer have described structural transformation (ST) as consisting of: (1) a declining share of agriculture in gross domestic product (GDP), (2) a declining share of agriculture in employment, (3) rural-urban migration, (4) growth of the service and manufacturing sectors, and (5) a demographic transition with a reduction in population growth rates, and have noted that India's transformation is stalled. The turning point is reached when the share of employment in agriculture declines at a faster rate than the share of agriculture in GDP. Differences in labor productivity between the agricultural and non-agricultural sectors disappear in the final stages of structural transformation. Before labor productivities among sectors converge, a huge and often even widening gap occurs between labor productivities in the agricultural and non-agricultural sectors. Those differences explain inter-sectoral income inequalities and the concentration of poverty in the agricultural sector. Timmer and Akkus noted in their earlier analysis of 86 countries that the turning point for today's developing Asian countries is taking longer and occurring at a higher income than was the case for the industrial countries, because of the sheer sizes of the Asian labor forces in agriculture.
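The turning-point condition described above reduces to a simple comparison between two declines. The sketch below is purely illustrative (not from the paper), and it interprets "declines at a faster rate" as a comparison in percentage points between two observation years; a proportional-rate reading would divide each decline by its starting share.

```python
# Illustrative sketch of the structural-transformation turning point:
# agriculture's employment share must be falling faster than its GDP share.
# Shares are fractions of the total economy (e.g. 0.45 for 45%).
def past_turning_point(emp_share_then, emp_share_now,
                       gdp_share_then, gdp_share_now):
    emp_decline = emp_share_then - emp_share_now
    gdp_decline = gdp_share_then - gdp_share_now
    return emp_decline > gdp_decline

# A stylized example: employment share falls from 45% to 35% while the
# GDP share falls from 25% to 20%.
print(past_turning_point(0.45, 0.35, 0.25, 0.20))  # True: 10 pts > 5 pts
```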
Kuznets had explained the narrowing of income inequality at the later stages of industrial development in advanced countries in terms of their gradually increasing progressive policies, increased saving and investment in the new entrepreneurial class, and technological change which uses skilled labor. All these factors today explain the growing income inequalities in developed countries. We developed some new insights.

- In Asia, value added per worker has been increasing both in agriculture and non-agriculture. The growth in labor productivity in both sectors is spectacular in China, less so in Indonesia, and the least in India. It is clearly a result of more liberal policies and increased savings and investments, including foreign direct investment and outsourcing to Asia. Income distribution measured by Gini coefficients has worsened in all three countries, in the same rank order as the changes in labor productivities.
- In sharp contrast, per capita value added in the non-agricultural sector has been declining in the rest of the developing world.
- In Latin America, value added per worker in agriculture has increased substantially and the continent has emerged as a major agricultural exporter, and yet agriculture has been shedding labor fast. In the non-agricultural sector, value added per worker shows a secular decline. Lowered Gini coefficients in Latin America are often seen as a sign of progress. But if they are the result of declining value added per worker in the non-agricultural sector, they may be less a cause for celebration.
- In Africa, neither value added per worker in agriculture nor in the non-agricultural sector has increased much. This should be a cause of concern for both growth and poverty reduction.
- In short, all is not well with the growth story in emerging countries.
- In Asia, internal terms of trade have moved in favor of agriculture relative to non-agriculture. In the rest of the developing world they have shown a deterioration against agriculture, perhaps because incomes outside agriculture are not increasing much and are not creating as much demand for food as in Asia.

Figure: Ratio of value added per worker in non-agriculture relative to agriculture, in the world, 1980-2009
Figure: Terms of trade (deflator for agriculture / deflator for non-agriculture [industry + service]) (in US$) by region, 1980-2009

Based on a paper by Uma Lele, Manmohan Agarwal, Peter Timmer, and Sambuddha Goswami. Uma Lele is a former Senior Advisor at the World Bank.
This Harvard-produced video has gone viral, and then some, having clocked more than 3,000,000 views. We’ve watched the pendulum balls swirl, moving almost impossibly from pattern to pattern, and we’ve remained dazzled all along. But the mechanics behind this choreographed action haven’t really been brought to the fore. So let’s turn to Harvard’s web site to understand how this kinetic art works: The period of one complete cycle of the dance is 60 seconds. The length of the longest pendulum has been adjusted so that it executes 51 oscillations in this 60 second period. The length of each successive shorter pendulum is carefully adjusted so that it executes one additional oscillation in this period. Thus, the 15th pendulum (shortest) undergoes 65 oscillations. When all 15 pendulums are started together, they quickly fall out of sync—their relative phases continuously change because of their different periods of oscillation. However, after 60 seconds they will all have executed an integral number of oscillations and be back in sync again at that instant, ready to repeat the dance.
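The arithmetic in Harvard's explanation pins down each pendulum's length. A minimal sketch, assuming ideal simple pendulums (point mass, small swings, g = 9.81 m/s²) and using the standard period formula T = 2π√(L/g) with T = 60/n seconds for a pendulum making n oscillations per cycle; the actual device's lengths will differ somewhat:

```python
import math

def pendulum_length(oscillations, cycle_seconds=60.0, g=9.81):
    """Length (m) of a simple pendulum that completes `oscillations`
    full swings in one cycle of the dance, from T = 2*pi*sqrt(L/g)."""
    period = cycle_seconds / oscillations
    return g * (period / (2 * math.pi)) ** 2

# The 15 pendulums make 51 (longest) through 65 (shortest) oscillations
# per 60-second cycle, so their lengths decrease monotonically.
lengths = [pendulum_length(n) for n in range(51, 66)]
print(round(lengths[0], 3), round(lengths[-1], 3))  # ~0.344 m down to ~0.212 m
```

The ratio of longest to shortest length is only about 1.6, which is why the apparatus looks so compact despite producing such intricate patterns.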
Nancy J. Kaser, MSN, RN, ACNS-BC
Aniko Kukla, MSN, RN

As the prevalence of obesity skyrockets worldwide, the search for successful weight-management strategies follows. For select individuals, surgical intervention is the most appropriate weight-management intervention for sustained weight loss. Surgical procedures, such as the Roux-en-Y gastric bypass, sleeve gastrectomy, and laparoscopic adjustable gastric banding, bring about both dramatic weight loss and marked improvement in obesity-related conditions such as diabetes, arthritis, hypertension, and obstructive sleep apnea. In this article the authors will address the incidence of obesity and the criteria for weight-loss (bariatric) surgery; describe the preoperative evaluation and selection of the appropriate surgical procedure; discuss postoperative complications and required nursing care; and give readers a preview of future options for surgical weight loss.

Citation: Kaser, N., Kukla, A., (January 31, 2009) "Weight-Loss Surgery" OJIN: The Online Journal of Issues in Nursing, Vol. 14, No. 1, Manuscript 4.

Keywords: bariatric surgery, bariatric-surgery nursing care, laparoscopic adjustable banding, nursing role in bariatric surgery, obesity, Roux-en-Y gastric bypass, sleeve gastrectomy, surgical intervention for obesity

The increasing girth of people in the United States (US) is evident at every turn. Recall your last trip to the store, out to dinner, to the park, or at work, and ask yourself how many overweight people you observed on these occasions. While the statistics related to overweight and obese children and adults are no longer shocking, they do continue to be quite worrisome. It is unfortunate that what we truly know about obesity is sparse in comparison to the rate at which it is spreading.
While energy balance is certainly an important factor in weight management, only recently have we come to appreciate that obesity is really a very complex disease that involves a wide variety of factors, including metabolic, environmental, social, behavioral, and psychological factors (Hensrud & Klein, 2006). The association of obesity with chronic diseases, such as heart disease, hypertension, sleep apnea, degenerative joint disease, gastroesophageal reflux disease, asthma, and depression, is well documented and reinforces the benefit of achieving and maintaining a "normal" weight (Harrington, 2006; Hughes & Dennison, 2008; Sheipe, 2006). Bariatric surgery provides dramatic improvement in these chronic conditions (See Figure 1. Chronic Conditions Improved After Bariatric Surgery [pdf], which is used by permission of the Cleveland Clinic Foundation.) Traditional diet, exercise, and behavior modification programs produce short-term results, but have limited long-term (greater than 5 years) success for obese persons (Wadden, Butryn, & Byrne, 2004). Bariatric or weight-loss surgery is the appropriate option for some of these obese persons.

The Incidence of Obesity

The National Health and Nutrition Examination Survey (NHANES) revealed that in 2005-2006, 33.3% of men and 35.3% of women were obese (Ogden, Carroll, McDowell, & Flegal, 2007). These numbers do not include the many persons who are merely overweight (Ogden et al.). Equally alarming is the percentage of obese children, estimated by the Centers for Disease Control and Prevention (CDC) National Center for Health Statistics (2006) to be 17%.
Obese and overweight persons are those whose weight is greater than what is deemed healthy for their given height. The Body Mass Index (BMI) uses a person's height and weight to measure the degree of obesity. A BMI calculator is available at the Department of Health and Human Services (DHHS) National Heart, Lung and Blood Institute (NHLBI) (1998) website, www.nhlbisupport.com/bmi. Table 1 lists the BMI classifications.

Table 1. BMI Classifications
Underweight: < 18.5 kg/m2
Normal weight: 18.5 – 24.9 kg/m2
Overweight: 25 – 29.9 kg/m2
Obesity (Class 1): 30 – 34.9 kg/m2
Obesity (Class 2): 35 – 39.9 kg/m2
Obesity (Class 3): ≥ 40 kg/m2

The U.S. Department of Health and Human Services (DHHS) National Institutes of Health (NIH) Clinical Guidelines (1998) addressing weight-loss surgery indicate that surgery is an appropriate option, and poses an acceptable operative risk, for people who have a BMI >40, or a BMI >35 along with comorbid conditions, such as cardiovascular disease, sleep apnea, uncontrolled type 2 diabetes, and/or physical problems interfering with performance of daily activities. Additional criteria include failure of medically supervised, nonsurgical weight-loss programs; absence of uncontrolled psychotic or depressive disorders; and absence of current alcohol or substance abuse. The ideal candidate is highly motivated, well-informed, and has supportive family and social environments (Brethauer, Chand, & Schauer, 2006). Also of paramount importance is the patient's capacity to understand the lifestyle changes required for a safe, successful postoperative course, including a lifelong commitment to revised eating patterns, vitamin supplementation, and regular monitoring by their healthcare provider.
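The BMI arithmetic behind Table 1 is simple enough to sketch. The snippet below (my own illustration, not from the article) computes BMI as weight in kilograms divided by height in meters squared and maps it to the standard NHLBI classes:

```python
# Illustrative sketch of the BMI calculation and the Table 1 cutoffs.
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(b):
    """Map a BMI value to the standard NHLBI classification."""
    if b < 18.5:
        return "Underweight"
    if b < 25:
        return "Normal weight"
    if b < 30:
        return "Overweight"
    if b < 35:
        return "Obesity (Class 1)"
    if b < 40:
        return "Obesity (Class 2)"
    return "Obesity (Class 3)"

# e.g. a 120 kg person who is 1.70 m tall:
print(round(bmi(120, 1.70), 1), classify(bmi(120, 1.70)))  # 41.5 Obesity (Class 3)
```

By the guideline criteria quoted above, a BMI over 40, or over 35 with comorbid conditions, is the threshold at which surgery becomes an appropriate option.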
Once the patient qualifies for surgery, a thorough preoperative assessment takes place to optimize the patient's health status, reduce operative risk, and identify potential barriers to the desired outcome. Depending on the patient's health status, this process may take several weeks to several months. In the first step, the patient completes a comprehensive questionnaire that provides the bariatric team, consisting of surgeons, dietitians, psychologists, nurses, and bariatricians (physicians who specialize in the medical management of weight loss), a "snapshot" of the patient's lifestyle. Included are questions about medical, surgical, and psychological history; food intake and eating habits; activities of daily living; mobility; and activity tolerance. The next step is a detailed physical exam by the surgeon, which may prompt further evaluation by specialists in the areas of cardiology, pulmonology, endocrinology, anesthesia, or vascular medicine. In anticipation of surgery, specialists adjust treatment regimens for the most effective management of chronic conditions, thereby optimizing the patient's physical status. The patient also completes a battery of diagnostic tests that establish baseline values and examine preoperative function. Included are complete blood counts, electrolytes, renal and hepatic function, chest x-ray, and electrocardiography. In patients with known heart disease and poor exercise tolerance, dobutamine stress echocardiography may be required (McGlinch et al., 2006). Due to the prevalence of obstructive sleep apnea in the obese patient, a sleep study is commonly performed to identify patients who require treatment. Preoperative assessment by a member of the anesthesia team is also important due to challenges in airway security, vascular access, and the heightened anesthesia risk associated with the obese patient (Benotti & Rodriguez, 2007).
The preoperative phase also includes visits with the dietitian to learn about the postoperative bariatric diet. The patient gains experience in reading food labels, identifying eating cues, keeping food diaries, determining nutrition content, and establishing portion sizes (Sullivan, Logan & Kolasa, 2006). These visits focus on the dietary changes necessary to achieve and sustain weight loss while maintaining an adequate intake of protein and nutrients.

The psychologist also meets with the patient to assess general competency; readiness for change; commitment to weight loss; mental status; the presence of substance abuse, including tobacco; and/or an underlying eating disorder, such as binge eating. Patients are required to quit smoking prior to surgery (Ide, Farber, & Lautz, 2008). Should a psychological issue be discovered, the appropriate course of treatment ensues. The patient becomes eligible for reassessment and reconsideration for surgery upon completion of the treatment and demonstration that stability has been achieved.

The nurse plays a key role throughout this evaluation phase. During office visits the nurse provides and reinforces key points required for successful surgical weight loss. The nurse facilitates the patient's progression through each phase of the program while providing ongoing education. Most importantly, the nurse sets expectations for the entire experience. The nurse provides detailed patient education regarding preoperative preparation (Walsh, Albano, & Jones, 2008) and postoperative care, dangerous warning signs, when to call the office, and community resources. Relationships develop as the nurse and patient get to know each other over time. The nurse serves as a resource person who can answer questions of any nature and is a continuing support across the continuum of preoperative, postoperative, and follow-up care.
Additionally, many bariatric-surgery programs require the patient to attend nurse-led group educational sessions and support group meetings prior to surgery (Fox, 2007). These serve as a method of reinforcing education about surgical procedures, activity, and diet changes, while developing supportive peer relationships that are very valuable through all phases of the surgical experience.

Choosing a Surgical Procedure

Bariatric surgery is the only weight-loss method proven to achieve lasting, long-term results in the fight against obesity (Barth & Jensen, 2006). Through education and discussion, weighing the pros and cons, and considering the risks and benefits, the patient and surgeon choose, on an individualized basis, which surgical procedure will be performed.

Laparoscopic Adjustable Gastric Band

The surgical procedures most commonly performed today work on two principles: restriction and malabsorption. Procedures such as the laparoscopic adjustable gastric band (LAGB) and the laparoscopic sleeve gastrectomy (LSG) are successful simply because they restrict the amount of food the patient is able to consume at any one meal without interfering with digestion. Laparoscopic gastric banding has gained favor in that it is the least invasive of the restrictive procedures. It results in early and prolonged satiety, it is adjustable, and it is fully reversible. An inflatable gastric band is placed around the upper stomach, creating a small gastric pouch and a narrow outlet to the stomach (See Figure 2. Laparoscopic Adjustable Gastric Band [pdf], which is used by permission of Cleveland Clinic Foundation). At first, the pouch will fill with only an ounce of food; over time this will stretch to hold approximately 4 ounces (Gabriel & Garguilo, 2006).
The band has tubing attached to a small subcutaneous port through which saline is added or removed to adjust band size for optimal results (Deitel, 2007). The patient is required to eat very small meals, chew food thoroughly, and eat slowly; otherwise epigastric discomfort and vomiting will result. Generally, patients will accomplish peak weight loss of 44% to 68% of excess weight over a two- to three-year period (Brethauer et al., 2006).

Laparoscopic Sleeve Gastrectomy

The laparoscopic sleeve gastrectomy (LSG) is a procedure being used both as a primary intervention and as the first procedure of a staged intervention for the patient who is super obese (BMI >60 kg/m2) or at very high risk (Braghetto et al., 2007). (See Figure 3. Laparoscopic Sleeve Gastrectomy [pdf], which is used by permission of Cleveland Clinic Foundation.) Because this procedure is simple and straightforward, it can be accomplished in a relatively short period of time, making it more feasible in the extremely large and/or high-risk patient. This procedure also offers an option to the select group of patients in whom the Roux-en-Y gastric bypass is contraindicated. The sleeve gastrectomy is primarily considered a restrictive procedure; however, excision of the ghrelin-producing portion of the stomach provides an added benefit, namely early satiety (Gumbs, Gagner, Dakin, & Pomp, 2007; Tucker, Szomstein, & Rosenthal, 2008). Ghrelin is considered the “hunger hormone,” in that it stimulates appetite; a reduction in the amount of ghrelin results in a decreased appetite. An additional benefit of the LSG procedure, as indicated by Braghetto and colleagues (2007), is that the complication of dumping syndrome is avoided because the pylorus is preserved; the likelihood of nutritional deficiency is also decreased (Gumbs et al.). Dumping syndrome is triggered by food or liquid rapidly entering the intestine, resulting in nausea, cramping, diarrhea, and/or dizziness (Gallagher, 2005).
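Weight-loss outcomes such as the 44% to 68% figure above are reported as percent excess weight loss (%EWL): the weight lost divided by the patient's excess weight, where excess weight is the difference between actual and ideal body weight. The sketch below is an illustration only; the convention of taking "ideal" weight as the weight corresponding to a BMI of 25 kg/m2 is an assumption for the example, not a standard stated in this article.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def percent_excess_weight_loss(start_kg: float, current_kg: float,
                               height_m: float, ideal_bmi: float = 25.0) -> float:
    """%EWL = (weight lost / excess weight) x 100.

    'Excess' is measured against the weight at ideal_bmi -- an assumed
    convention for this illustration.
    """
    ideal_kg = ideal_bmi * height_m ** 2
    excess_kg = start_kg - ideal_kg
    return (start_kg - current_kg) / excess_kg * 100

# Example: a 1.65 m patient who started at 120 kg and now weighs 90 kg.
print(round(bmi(120, 1.65), 1))                          # 44.1 (morbidly obese)
print(round(percent_excess_weight_loss(120, 90, 1.65), 1))  # 57.8 (%EWL)
```

For this hypothetical patient, a 30 kg loss corresponds to roughly 58% of excess weight lost, which falls inside the 44% to 68% range cited for the band.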
Roux-en-Y Gastric Bypass

The Roux-en-Y (rü-en-wi) gastric bypass (RNYGBP) is a procedure that employs both mechanisms, restriction and malabsorption, to achieve weight loss. Food intake is limited by dividing the stomach to create a 15-30 ml pouch, which is then connected to a loop of small intestine. Connecting the gastric pouch to the small intestine allows food to bypass the distal stomach, duodenum, and a portion of the jejunum, thus achieving malabsorption (See Figure 4. Roux-en-y Gastric Bypass [pdf], which is used by permission of Cleveland Clinic Foundation). Although the RNYGBP can usually be performed laparoscopically, some individuals will require an open approach. This procedure is contraindicated in patients who have a history of Crohn's disease or multiple abdominal surgeries, or who are heavy smokers (Tucker et al., 2008), require anti-inflammatory medication, or have a history of inflammatory bowel disease (Braghetto et al., 2007). This procedure is the most common weight-loss surgery performed in the US and constitutes nearly 80% of all bariatric procedures (Brethauer et al., 2006).

Postoperative Surgical Care and Complications

The keys to preventing postoperative complications for the bariatric-surgery patient are a careful and thorough baseline assessment and close surveillance. Regardless of whether the approach is open or laparoscopic, bariatric surgery constitutes major abdominal surgery, and the potential exists for the typical postoperative complications of hemorrhage, surgical-site infection, sepsis, atelectasis, and pulmonary embolism.
Complications directly related to bariatric-surgery procedures are divided into two categories, namely early complications and late complications. Of the early complications (those occurring prior to hospital discharge), one of the most serious is an anastomotic or staple-line leak. Brethauer and colleagues (2006) explained that changes in patient status can be subtle and added that tachycardia is often the only presenting sign of an anastomotic leak. As intraperitoneal irritation progresses, patient presentation is characterized by complaints of increasing pain, hiccups, restlessness, and tachycardia (Barth & Jensen, 2006). The condition of the patient with a leak may rapidly deteriorate as peritonitis, sepsis, and respiratory distress ensue. Therefore, any element of suspicion (tachycardia, fever, tachypnea, oliguria, or an increasing oxygen requirement) warrants a call to the surgeon in anticipation of orders for a Gastrografin swallow x-ray, a computed tomography scan with contrast, and/or a return trip to the operating room. The incidence of leakage after laparoscopic RNYGBP ranges from 0% to 4.4% (Brethauer et al.). Pulmonary embolism is the second most common cause of mortality in bariatric-surgery patients; the rate of occurrence is 2% (Chand, Gugliotti, Schauer, & Steckner, 2006). Diligent postoperative monitoring is required to facilitate early detection of this critical complication. Prevention of pulmonary embolism will be discussed in greater detail below. Obese patients are at risk for wound complications, including dehiscence, infection, and slow healing. This is due to the poor vascularity of adipose tissue, increased wound tension, greater intra-abdominal pressure, and the frequent presence of diabetes (Fox, 2007). Rhabdomyolysis (RML), although a rare occurrence, is also considered an early complication of bariatric surgery. It involves muscle damage, degradation, and necrosis due to prolonged muscle compression and associated ischemia (Barth & Jensen, 2006).
The presentation may be subtle complaints of hip, shoulder, or buttock discomfort, along with numbness, bruising, swelling, and/or weakness (Tanaka & Brodsky, 2007). The primary diagnostic indicator is elevation of the serum creatine phosphokinase (CPK) level; elevation to five times the normal level (>1000 IU) is indicative of rhabdomyolysis (Lagandre et al., 2006). In severe cases, acute renal failure ensues as the kidneys become overwhelmed by the high concentrations of myoglobin in the urine (Tanaka & Brodsky, 2007). Factors that predispose a patient to rhabdomyolysis include a BMI > 40 kg/m2, a surgical procedure lasting four hours or more, decreased functional status (American Society of Anesthesiologists Classification II-IV), and diabetes (Lagandre et al.). In describing nursing care of the bariatric-surgery patient, Fox (2007) defines late complications as those which occur once the patient has been discharged from the hospital and has recovered from the surgery. These complications can be further classified according to the surgical procedure performed. In the patient who is post-sleeve gastrectomy, late complications are few and have been identified as weight regain and gastric sleeve dilatation (Tucker et al., 2008). Late complications following the LAGB include reflux esophagitis, band slippage, adhesions, port complications, cholelithiasis, pouch dilatation, and individual intolerance to foods. Late complications following the RNYGBP include adhesions, small bowel obstruction, marginal ulceration, stomal stenosis, cholelithiasis, pouch dilatation, depression, intolerance of sweet and/or fatty foods, dumping syndrome, nausea, vomiting, and diarrhea. A critical point is that with comprehensive patient education and close postoperative follow-up, complications may be detected early, minimized, or avoided altogether. The need for compliance with postoperative instruction and ongoing monitoring by a healthcare provider cannot be overstated.
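The CPK threshold and predisposing factors cited above can be expressed as a small worked example. This sketch is illustrative only, not a clinical tool: the ~200 IU "normal" upper limit and the helper names are assumptions; only the >1000 IU (five-times-normal) threshold and the four risk factors come from the text.

```python
# Assumed reference upper limit for CPK (IU); the article cites only the
# "five times normal" / >1000 IU threshold per Lagandre et al. (2006).
CPK_NORMAL_UPPER = 200.0
RML_THRESHOLD = 5 * CPK_NORMAL_UPPER  # 1000 IU

def cpk_suggests_rml(cpk_iu: float) -> bool:
    """True when the CPK level exceeds the cited rhabdomyolysis threshold."""
    return cpk_iu > RML_THRESHOLD

def rml_risk_factors(bmi: float, op_hours: float,
                     asa_class: int, diabetic: bool) -> list[str]:
    """Return which of the predisposing factors named in the text are present."""
    factors = []
    if bmi > 40:
        factors.append("BMI > 40 kg/m2")
    if op_hours >= 4:
        factors.append("procedure lasting 4 hours or more")
    if asa_class >= 2:
        factors.append("ASA Classification II-IV")
    if diabetic:
        factors.append("diabetes")
    return factors

# Hypothetical patient: CPK 1450 IU, BMI 52, 5.5-hour procedure, ASA III, diabetic.
print(cpk_suggests_rml(1450))            # True
print(rml_risk_factors(52, 5.5, 3, True))
```

For this hypothetical patient all four predisposing factors are present, and the CPK value exceeds the cited threshold, which in practice would prompt notification of the surgical team.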
Early Postoperative Nursing Care

The keys to the specialized nursing care of the bariatric-surgery patient, too, are a careful and thorough baseline assessment and close surveillance. The customary postoperative nursing measures, namely pain management, wound care, venous thromboembolism prophylaxis, pulmonary toilet, early and frequent ambulation, line and drain maintenance, fluid balance, nutrition therapy, continued education, and emotional support, are of paramount importance (Harrington, 2006), as described below.

Pain Management

Adequate pain management improves patient mobility and lessens pulmonary morbidities (Farshad & Bell, 2004). Patients are more willing and able to use incentive spirometry, cough, and deep breathe every hour when their pain is manageable. During the initial 24-48 hours, patient-controlled analgesia (PCA) is frequently employed to achieve pain control (Gallagher, 2004), with a switch to liquid oral agents when the patient is able to tolerate oral intake (O'Leary, Paige, & Martin, 2007). It is not uncommon for this group of patients to have a history of chronic pain due to debilitating joint and back conditions, resulting in the chronic use of narcotic pain medication. In these cases, achieving pain management can be challenging, and the input of pain management specialists may be required (O'Leary et al., 2007).

Wound & Skin Care

In the uncomplicated patient with a lower BMI, skin and wound care are straightforward. These activities include monitoring the surgical site(s) for bleeding or hematoma development, observing for signs of infection, and keeping the dressings clean and dry. Excessive intra-abdominal pressure (such as that which occurs during vomiting) can add strain to incision lines. Hence treatment of nausea and prevention of vomiting are important during the postoperative phase (Fox, 2007).
An abdominal binder is helpful in adding abdominal support as well. With increasing size comes the need for greater attention to skin care. Patients who are less mobile will need assistance with turning and repositioning. The large patient is prone to skin breakdown due to pressure from surgical-drain and Foley-catheter tubing that is allowed to become lodged against the skin or in skin folds. These devices should be repositioned every two hours as the patient is turned. Arms, legs, or skin folds resting against side rails for prolonged periods of time have also been found to develop skin erosion (Fox, 2007). Skin should be kept clean and dry. Intertrigo and fungal infections in skin folds under the breasts and/or on the back, abdomen, thighs, and perineum can be minimized by placing absorbent fabric, gauze, or silver-impregnated textile products within the fold (Barth & Jensen, 2006). The use of powder or talc should be avoided because they tend to clump and contribute to irritation. Plastic-lined underpads create excessive heat and perspiration; they should be abandoned in favor of a fabric pad, or one that is specially formulated to wick away moisture. It is best to apply tape to the skin sparingly, as the epidermal layer is stretched thin and susceptible to skin tears from tape. In the case of a complicated wound, consulting the wound nurse specialist, or a related specialist if a wound nurse specialist is not available within the institution, is recommended.

Venous Thromboembolism (VTE) Prophylaxis

It is well documented that the bariatric-surgery patient is at great risk for an embolic event.
The following reasons for this risk have been offered:
- Polycythemia from chronic respiratory insufficiency and stasis from immobility (Gabriel & Garguilo, 2006)
- Increased abdominal pressure and decreased mobility (Clark, 2007)
- Carbon dioxide insufflation that exacerbates the already elevated intra-abdominal pressure, resulting in increased pressure in the femoral veins secondary to vena caval compression (O'Leary et al., 2007)
- Elevated levels of fibrinogen and plasminogen activator inhibitor, as well as antithrombin III deficiency and decreased fibrinolysis (O'Leary et al., 2007)

The institution of VTE prophylaxis can potentially be a life-saving measure for the obese patient. It is recommended that sequential compression devices be applied in the operating room, prior to the administration of anesthetic agents. The administration of either unfractionated or low-molecular-weight heparin preparations, early postoperative mobilization, and frequent ambulation comprise the triad that is considered “VTE prophylaxis.” In the event that the patient has a history of venous stasis, endothelial damage, and a hypercoagulable state (Virchow’s triad), and/or prior VTE or pulmonary embolism, consideration should be given to the preoperative placement of a filter in the inferior vena cava (IVC) that lessens the migration of emboli from the lower extremities to the heart, lungs, or brain (Miller & Rovito, 2004; Ojo, Asiyanbola, Valin, & Reinhold, 2008). The stable patient should be encouraged to sit and dangle the legs over the bedside in the immediate postoperative period. If this is well tolerated, activity can gradually be progressed to out-of-bed walking in the halls a minimum of three to four times daily, beginning by the end of postoperative day one. In the non-ambulatory patient, hourly leg exercises and range-of-motion activities are encouraged until the patient recovers his or her baseline level of activity.
Pulmonary Care

As previously mentioned, turning, coughing, and deep breathing (incentive spirometry) are the tried-and-true nursing interventions for pulmonary care in the postoperative surgical patient. In the obese patient, additional assessment, interventions, and monitoring are required. Due to the changes in pulmonary function associated with obesity, obese patients have a reduced functional residual capacity; when they are faced with respiratory distress, they have little reserve capacity. Through careful assessment and monitoring for slight changes, complications can be minimized. Because the administration of anesthetic agents and narcotic medications can contribute to respiratory depression, continuous pulse oximetry, supplemental oxygen, and cardiac monitoring are employed for the first 24-48 hours after surgery. Auscultation of breath sounds can best be accomplished by positioning the patient in a reverse Trendelenburg position leaning forward, or sitting on the side of the bed. A side-lying position is best tolerated if the head of the bed remains elevated 30-45 degrees. It also is helpful to lift or move skin folds in order to detect breath sounds more clearly. The patient with obstructive sleep apnea who uses continuous positive airway pressure (CPAP) or bilevel positive airway pressure (BiPAP) at home will need to continue its use while hospitalized. Some facilities encourage patients to bring their masks and equipment from home for maximum fit and comfort. However, it is wise to check the policy for the use of personal medical devices at one’s institution before suggesting this to a patient.

Psychosocial and Emotional Support

It would be remiss to conclude this look at postoperative care of the bariatric-surgery patient without discussing psychosocial and emotional support. It is difficult to do justice to this important topic in the limited space available here.
Reto (2003) provides a comprehensive look at providing this support; caregivers are encouraged to read this helpful article. The typical bariatric-surgery patient can tell stories reflecting a lifetime of social typecasting as a result of being overweight. In society today, obesity is often linked to the misperceptions of laziness, uncleanliness, low intellect, ineptness, and lack of willpower (Reto, 2003). As a result of these experiences, patients undergoing bariatric surgery continue to feel misunderstood and mistreated by caregivers (Fox, 2007). Unfortunately, it has been noted that fat bias and prejudice are perpetuated by all persons, including bedside caregivers. It is important for the nursing staff to be aware of their own feelings and fears about excess weight. To help nurses examine their personal attitudes, Puhl (2006) recommends that nurses ask themselves the following questions:
- Do I make assumptions about a person’s character, intelligence, professional success, health status, or lifestyle behaviors based only on weight?
- Am I comfortable working with people of all shapes and sizes?
- Do I give appropriate feedback to encourage healthful behavior changes?
- Am I sensitive to the needs and concerns of overweight and obese individuals?
- Do I treat the individual or the condition?

Through education about obesity, heightened awareness of self-presentation, and tools for implementing therapeutic interaction, caregivers can be positive agents in the bariatric-surgery patient’s quest for lifelong change. Many times it is the small acts that make the biggest difference. Approaching the patient in an unhurried manner, making eye contact, appropriate touch, and positive reinforcement of small successes all help to create a positive outcome for patients and staff alike.
Obese patients commonly demonstrate low self-esteem and limited socialization, which can result in manipulative behaviors (Barth & Jensen, 2006). It is important that caregivers consult the team of bariatric psychologists for assistance in developing strategies to deal with the more difficult patient. Studies have demonstrated that over 50% of bariatric-surgery patients have a concurrent diagnosis of anxiety, depression, or another psychological disorder. For these patients, resuming the administration of anxiolytics, antidepressants, and/or antipsychotic medications as soon as possible after surgery is imperative to avoid the negative effects associated with withdrawal. It is recommended that, for these patients, a treatment plan to address these concerns be developed prior to surgery.

Future Options for Surgical Weight Loss

The range of benefits resulting from successful bariatric surgery is substantial, including improvement in a range of health conditions, reduction in certain cancer risks, improvement in the quality of life, and “cure” of some diseases, such as diabetes. Despite these benefits, less than one percent of potential bariatric-surgery patients decide to undergo bariatric surgery. The main reasons for not undergoing the surgery are that health insurers do not cover the procedure and that patients (along with the insurers) are apprehensive due to the perceived potential risks and complications (Shikora, 2008). New and novel techniques, and the re-emergence of some older ones, are surfacing today in the treatment of morbid obesity. Metabolic surgery is the new term to describe the surgical procedures performed to treat metabolic conditions, such as type 2 diabetes, hypertension, high cholesterol, non-alcoholic fatty liver disease, and obstructive sleep apnea.
The American Society for Metabolic and Bariatric Surgery (ASMBS) (www.asmbs.org) is instrumental in the development of the guidelines and recommendations regarding metabolic surgeries (ASMBS, 2007). New procedures that are more appealing to patients are being developed; these procedures are less complex and result in fewer and less severe complications (Schauer, Chand, & Brethauer, 2007a). One new concept evolving in this field is that of the endoluminal procedure (surgery performed entirely within the lumen of the gastrointestinal tract). Another is the transgastric procedure (a transluminal procedure performed within the peritoneal cavity). These procedures combine endoscopy with minimally invasive surgery. They have great potential to be less complex, less costly, and ambulatory in nature (Schauer, Chand, & Brethauer, 2007a). The primary benefit is that they eliminate the need to enter the peritoneal cavity, thus reducing the physical discomfort and pain associated with the traditional bariatric procedures. Other benefits are reduced recovery time and no visible scarring because of the use of natural orifices such as the mouth, anus, or vagina (Madan & Martinez, 2008). This new field, recently named Natural Orifice Transluminal Endoscopic Surgery™ or NOTES®, is outlined by the Natural Orifice Surgery Consortium for Assessment and Research (www.noscar.org). This consortium was formed to provide guidance, oversight, and evaluation of NOTES® techniques and related research. These emerging endoluminal technologies (See Table 2) can be used as primary weight-loss procedures or for preoperative weight loss in staged procedures.
They can also be used as revisional procedures for stoma or gastric-pouch size change, such as utilizing the injection of sclerosing agents or endoscopic “sewing machines” (Madan & Martinez, 2008; Schauer, Chand, & Brethauer, 2007a).

Table 2. Available and Emerging Endoluminal Technologies

Endoluminal suturing and stapling
- Mechanism of action and clinical application: “Sewing machine” devices for partial- and full-thickness sewing; endoscopic stapling devices.
- Problems/complications: Durability is a concern, as mucosa-to-mucosa suturing may not hold, although no complications are reported in the literature. Stapling carries a risk of staple-line leakage; although it is serosa-to-serosa stapling, gaps have been reported in the staple line, and long-term staple-line dehiscence is another problem. Engineering obstacles and challenges remain.

Injection or prosthesis
- Mechanism of action and clinical application: Tube/stent placement in the duodenum for the purpose of malabsorption (the duodenum and first part of the jejunum are bypassed via an endoscopic bypass sleeve). Studies are underway on Type 2 diabetes resolution with this procedure.
- Problems/complications: Some report problems with the anchors that hold the device in place and mucosal tearing with the removal of the device. The only device available in this category is not FDA approved.

Intragastric balloon
- Problems/complications: Weight is regained with balloon removal. Vomiting, reflux, hypokalemia, renal dysfunction, and intestinal blockage occur.

Gastric electrical stimulation
- Mechanism of action and clinical application: Electrodes implanted to slow down gastric emptying; vagal nerve down-regulation to slow digestion and gastric emptying and to curb hunger.
- Problems/complications: Gastric perforation and lead dislodgement can occur.

Sclerotherapy
- Mechanism of action and clinical application: Injection of a sclerotherapeutic agent for endoscopic reduction of stoma size.
- Problems/complications: Possible stenosis that would require dilatation of the stoma.

Table 2 is based on the following sources: Madan, A. K., & Martinez, J. M. (2008, November). Natural orifice bariatric surgery: An update. (R. Rosenthal, Ed.) Bariatric Times, 5(11), 34-38. Schauer, P., Chand, B., & Brethauer, S. (2007a).
New applications for endoscopy: The emerging field of endoluminal and transgastric bariatric surgery. Surgical Endoscopy, 21, 347-356. Schauer, P. R., Chand, B., & Brethauer, S. A. (2007b). The emerging field of endoluminal and transgastric bariatric surgery. In P. R. Schauer, B. D. Schirmer, & S. A. Brethauer (Eds.), Minimally Invasive Bariatric Surgery (pp. 395-405). New York: Springer.

Other innovations focus on techniques that are already established. The sleeve gastrectomy is routinely performed using five to seven trocar sites. Reavis, Hinojosa, Smith, and Nguyen (2008) report a case of a laparoscopic sleeve gastrectomy that was performed through a single laparoscopic incision, with the gastric specimen removed through the same incision. Although this innovation requires only one incision, four cm long, it can be used only in patients with a lower BMI. Other surgeons have focused on decreasing the scarring that results from these laparoscopic surgeries. Kim, Kim, Lee, and Lee (2008) report on a minimal-scar laparoscopic adjustable gastric banding, noting that use of this technique results in a natural-looking and nearly invisible scar around the umbilicus. The access port is placed above the umbilicus, allowing easy localization for band adjustments. While these innovations are exciting, they are still in their initial phases of development. Clear guidelines and more research are needed before they can be marketed to patients and insurers. As procedures evolve, continuing education will be necessary so that research teams and nurses in the field can keep pace with the ever-changing institutional and administrative requirements.
Most significant, though, will be the heightened need for an increased focus on pre- and postoperative education if the evolution of weight-loss surgery moves this surgery into the realm of outpatient surgery.

Helping bariatric-surgery patients make monumental lifestyle changes in the quest for permanent weight loss can be a most rewarding endeavor. The treatment of obesity requires a dedicated, multidisciplinary team in order to achieve consistent, sustainable outcomes. The surgeons, dietitians, psychologists, nurses, and bariatricians must carefully orchestrate each step in the process so as to provide optimal care of the bariatric-surgery patient. Each client is unique and presents his/her own set of challenges. The time interval from initial contact, through surgical clearance, to the actual surgery date may be several months. Postoperative care and monitoring will continue for 12-18 months or longer. Care coordination and communication are of key importance. It is the nurse who guides the patient through a variety of medical, physical, and emotional challenges. Witnessing a patient’s transformation and ultimate success is what makes the effort worthwhile. Weight-loss surgery is not an “easy fix” or “once and done” solution to morbid obesity; however, it is one tool available to improve the health and longevity of obese patients.

Nancy J. Kaser, MSN, RN, ACNS-BC

Ms. Kaser received her MSN degree from the Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, Ohio. She is currently working with surgical patients as a Clinical Nurse Specialist in Adult Health at the Cleveland Clinic Foundation (CCF). In this role she focuses on the care of patients in the Endocrinology and Metabolism Institutes. Specific responsibilities include assisting with the orientation of nurses on the Metabolic Surgery Unit and both facilitating and speaking at the CCF Bariatric Nurses Education Day. Ms.
Kaser also serves as an adjunct nursing faculty member at Notre Dame College and Cuyahoga Community College in the Cleveland, Ohio, area.

Aniko Kukla, MSN, RN

Ms. Kukla is a Clinical Instructor in the Department of Nursing Education and Professional Practice at the Cleveland Clinic Foundation (CCF). She facilitates both the orientation program for new nurses and the ongoing education program for staff nurses on the Metabolic Surgery and Surgical Telemetry units at CCF. She has been working with the bariatric population for the last three years. She received her nursing education in Serbia, Hungary, and the United States. Her areas of expertise include metabolic surgeries and pediatrics. She is certified as a pediatric nurse practitioner.

References

American Society for Metabolic and Bariatric Surgery. (2007). Statements, guidelines, action items. Retrieved December 3, 2008, from www.asmbs.org/Newsite07/resources/asmbs_items.htm

Barth, M., & Jensen, C. E. (2006). Postoperative nursing care of gastric bypass patients. American Journal of Critical Care, 15, 378-388.

Benotti, P., & Rodriguez, H. (2007). Preoperative preparation of the bariatric surgery patient. In H. C. Buchwald (Ed.), Surgical Management of Obesity (pp. 102-107). Philadelphia: Saunders.

Braghetto, I., Korn, O., Valladares, H., Gutierrez, L., Csendes, A., Debandi, A., et al. (2007). Laparoscopic sleeve gastrectomy: Surgical technique, indications and clinical results. Obesity Surgery, 17, 1442-1450.

Brethauer, S., Chand, B., & Schauer, P. R. (2006). Risks and benefits of bariatric surgery: Current evidence. Cleveland Clinic Journal of Medicine, 73(11), 993-1008.

CDC, National Center for Health Statistics. (2006). Prevalence of overweight among children and adolescents: United States, 2003-2004. Retrieved September 10, 2008, from www.cdc.gov/nchs/products/pubs/pubd/hestats/overweight/overwght_child_03.htm

Chand, B., Gugliotti, D., Schauer, P., & Steckner, K. (2006).
Perioperative management of the bariatric surgery patient: Focus on cardiac and anesthesia considerations. Cleveland Clinic Journal of Medicine, 73, S51-S56.

Clark, W. (2007). Laparoscopic gastric bypass using the circular stapler. In H. C. Buchwald (Ed.), Surgical Management of Obesity (pp. 1208-1213). Philadelphia: Saunders.

Deitel, M. (2007). A synopsis of the development of bariatric operations. Obesity Surgery, 17, 707-710.

Farshad, A., & Bell, R. (2004). Assessment and management of the obese patient. Journal of Critical Care Medicine, 32(4), S87-S91.

Fox, K. (2007). Nursing care of the bariatric surgery patient. In H. C. Buchwald (Ed.), Surgical Management of Obesity (pp. 406-417). Philadelphia: Saunders.

Gabriel, S., & Garguilo, H. (2006). Nursing made incredibly easy: Bariatric surgery basics. Retrieved August 8, 2008, from www.nursingcenter.com/prodev/ce_article.asp?tid=622767

Gallagher, S. (2004). Taking the weight off with bariatric surgery. Nursing2004, 34(3), 58-63.

Gallagher, S. (2005). The challenges of caring for the obese patient. Edgemont, PA: Matrix Medical Communications.

Gumbs, A., Gagner, M., Dakin, G., & Pomp, A. (2007). Sleeve gastrectomy for morbid obesity. Obesity Surgery, 17, 962-969.

Harrington, L. (2006). Postoperative care of patients undergoing bariatric surgery. MEDSURG Nursing, 15(6), 357-363.

Hensrud, D. D., & Klein, S. (2006). Extreme obesity: A new medical crisis in the United States. Mayo Clinic Proceedings, 81(10, suppl), S5-S10.

Hughes, S., & Dennison, C. R. (2008). Cardiovascular nurses called to don public health hats. Journal of Cardiovascular Nursing, 23(6), 536-537.

Ide, P., Farber, E. S., & Lautz, D. (2008). Perioperative nursing care of the bariatric surgical patient. AORN Journal, 88(1), 30-58.

Kim, E., Kim, D., Lee, S., & Lee, H. (2008). Minimal-scar laparoscopic adjustable gastric banding (LAGB). Obesity Surgery, [epub ahead of print].
Lagandre, S., Arnalsteen, L., Vallet, B., Robin, E., Janey, T., Onraed, F., et al. (2006). Predictive factors for rhabdomyolysis after bariatric surgery. Obesity Surgery, 16, 1365-1370. Madan, A. K., & Martinez, J. M. (2008, November). Natural orifice bariatric surgery: An update. (R. Rosenthal, Ed.) Bariatric Times, 5 (11), 34-38. McGlinch, B., Que, F. G., Nelson, J. L., Wrobleski, D. M., Grant, J. E., & Collazo-Clavell, M. L. (2006). Perioperative care of patients undergoing bariatric surgery. Mayo Clinic Proceedings, (pp. S25-S33). Rochester. Miller, M., & Rovito, P. F. (2004). An approach to venous thromboembolism prophylaxis in laparoscopic Roux-en-Y gastric bypass surgery. Obesity Surgery, 14, 731-737. Ogden, C. L., Carroll, M. D., McDowell, M. A., & Flegal, K. M. (2007). Obesity among adults in the United States - no statistically significant change since 2003-2004. U.S. Department of Health and Human Services. Centers for Disease Control and Prevention. National Center for Health Statistics. Data Report, 1, 1-8. Ojo, P., Asiyanbola, B., Valin, E., & Reinhold, R. (2008). Post-discharge prophylactic anticoagulation in gastric bypass patients - how safe? Obesity Surgery, 17, 791-796. O'Leary, J., Paige, J. T., & Martin, L. F. (2007). Perioperative management of the bariatric surgery patient. In H. C. Buchwald (Ed.), Surgical Management of Obesity (pp. 119-130). Philadelphia: Saunders. Puhl, R. (2006). The stigma of obesity and its consequences. Retrieved November 25, 2008, from: http://18.104.22.168/search?q=cache:YdQobHQfCMYJ:www.obesityaction.org/resources/oacnews/oacnews3/Stigma%2520of%2520Obesity.pdf+The+stigma+of+obesity&hl=en&ct=clnk&cd=1&gl=us Reavis, K. M., Hinojosa, M. W., Smith, B. R., & Nguyen, N. T. (2008). Single-laparoscopic incision transabdominal surgery sleeve gastrectomy. Obesity Surgery, 18, 1492-1494. Reto, C. S. (2003). Psychological aspects of delivering nursing care to the bariatric patient. Critical Care Nursing Quarterly, 26 (2), 139-149. 
Schauer, P., Chand, B., & Brethauer, S. (2007a). New applications for endoscopy: The emerging field of endoluminal and transgastric bariatric surgery. Surgical Endoscopy, 21, 347-356. Schauer, P. R., Chand, B., & Brethauer, S. A. (2007b). The emerging field of endoluminal and transgastric bariatric surgery. In P. R. Schauer, B. D. Schirmer, & S. A. Brethauer (Eds.), Minimally Invasive Bariatric Surgery (pp. 395-405). New York: Springer. Sheipe, M. (2006). Breaking through obesity with gastric bypass surgery. The Nurse Practitioner, 31 (10), 13-23. Shikora, S. (2008, September). Emerging technologies: The introduction of new technologies in bariatric and metabolic surgery - are you ready to embrace them? (M. Bessler, Ed.). Bariatric Times. Retrieved November 17, 2008, from http://bariatrictimes.com/2008/09/20/emerging-technologies-the-introduction-of-new-technologies-into-bariatric-and-metabolic-surgery%E2%80%94are-you-ready-to-embrace-them/ Sullivan, C. S., Logan, J., & Kolasa, K. M. (2006). Medical nutrition therapy for the bariatric patient. Nutrition Today, 41 (5), 207-214. Tanaka, P., & Brodsky, J. B. (2007, November/December). Rhabdomyolysis following bariatric surgery. (C. Hutchinson, Ed.). Bariatric Times. Retrieved November 17, 2008, from http://bariatrictimes.com/2007/12/17/rhabdomyolysis-following-bariatric-surgery/ Tucker, O. N., Szomstein, S., & Rosenthal, R. J. (2008). Indications for sleeve gastrectomy as a primary procedure for weight loss in the morbidly obese. Journal of Gastrointestinal Surgery, 12, 662-667. United States Department of Health and Human Services / National Institutes of Health, National Heart, Lung, and Blood Institute. (1998). The evidence report on the identification, evaluation, and treatment of overweight and obesity in adults (No. 98-4083). Washington, D.C.: National Institutes of Health. Wadden, T. A., Butryn, M. L., & Byrne, K. J. (2004). Efficacy of lifestyle modification for long-term weight control. 
Obesity Research, 12, 151S - 162S. Walsh, A., Albano, H., & Jones, D. B. (2008). A perioperative team approach to treating patients undergoing laparoscopic bariatric surgery. AORN Journal, 88 (1), 59-64. © 2009 OJIN: The Online Journal of Issues in Nursing Article published January 31, 2009 - Advocating for the Prevention of Childhood Obesity: A Call to Action for Nursing Bobbie Berkowitz, PhD, RN, FAAN; Marleyse Borchard, MPH (January 31, 2009) - Essentials of a Bariatric Patient Handling Program Marylou Muir, RN, COHN; Gail Archer-Heese, BEd, O.T.Reg (MB) BMR (January 31, 2009) - Obesity in Older Adults Ann Mabe Newman, DSN, APRN, CNE (January 31, 2009) - Obesity: An Emerging Concern for Patients and Nurses Susan Gallagher Camden PhD, MSN, MA, RN, CBN (January 31, 2009)
<urn:uuid:2318c42a-9e09-4102-8150-f5fa3991de53>
CC-MAIN-2016-26
http://nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Vol142009/No1Jan09/Weight-Loss-Surgery.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.885659
10,603
2.53125
3
I get a kick out of the editorials and short essays by Calvin Johnson, which you can find at such places as Lew Rockwell and the Conservative Free Press. Given the last few posts on the mythology of black Confederates I thought it might be nice to share another little story. Yes, I am beating a dead horse, but if this blog can help to correct this skewed view of the past then my time on this site will be worthwhile. In this essay, Johnson examines the history of the monument to Confederate soldiers, which is located on the grounds of Arlington National Cemetery. The monument was commissioned by the United Daughters of the Confederacy to mark the graves of 267 Confederate soldiers. Designed by Moses Ezekiel, it was unveiled in 1914 and included a dedication speech by President Woodrow Wilson. Here is what Johnson has to say about the monument itself: Around the start of the 20th century this country also honored the men who fought for the Confederacy. This site of men who fought for “Dixie” is located in section 16. There is an inscription on the 32.5 foot high Confederate monument at Arlington National Cemetery that reads, “An Obedience To Duty As They Understood it; These Men Suffered All; Sacrificed All and Died”! Some claim this Confederate Monument at Arlington may have been the first to honor Black Confederates. Carved on this monument is the depiction of a Black Confederate who is marching in step with the White soldiers. Also shown is a White Confederate who gives his child to a Black woman for safe keeping.[my emphasis] What exactly is Johnson referring to? The photographs below are close-ups of the friezes included around the perimeter of the monument. You can see what appears to be a black man marching in rank with Confederate soldiers as well as a female slave who is about to take charge of what must be her master’s children. This is a wonderful example of why the study of memory is so important to our understanding of the Civil War. 
To understand this statue and the choices of the sculptor we must understand the historical context in which it was dedicated. Monuments and other public spaces dedicated to historic events are as much about the time in which they were built as they are about the event in question. The year, 1914, places us right at the height of Jim Crow. The images helped to justify the emphasis within Lost Cause narratives on loyal slaves and a war that was supposedly fought simply for states’ rights. Wilson’s presence at the dedication is also important given his order at just this time to segregate federal office buildings along racial lines. In other words, this is not simply a monument to commemorate the lives of Confederate soldiers, but part of an attempt to shape a certain version of the past that worked to minimize the theme of emancipation and distance the Confederate experiment from the preservation of slavery altogether. The enforcement of white supremacy by legal means helped to ensure that African Americans would be unable to shape their own emancipationist legacy of the Civil War, which in turn helped to perpetuate the political monopoly that whites enjoyed through the 1960s. Unfortunately, Calvin Johnson doesn’t really understand what he is looking at.
<urn:uuid:5a76a681-e064-4b51-801f-a1dea156da75>
CC-MAIN-2016-26
http://cwmemory.com/2009/04/19/calvin-e-johnsons-neo-confederate-fantasy-land/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968821
645
3.5625
4
Semantic Web Stack. The Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake, illustrates the architecture of the Semantic Web. Overview The Semantic Web Stack is an illustration of the hierarchy of languages, where each layer exploits and uses the capabilities of the layers below. It shows how the technologies that are standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized. Semantic Web technologies As shown in the Semantic Web Stack, the following languages or technologies are used to create the Semantic Web. Hypertext Web technologies The bottom layers contain technologies that are well known from the hypertext web and that, without change, provide the basis for the Semantic Web. Standardized Semantic Web technologies Notes Resource Description Framework (RDF): Concepts and Abstract Syntax. W3C Recommendation 10 February 2004 New Version Available: "RDF 1.1 Concepts and Abstract Syntax" (Document Status Update, 25 February 2014) The RDF Working Group has produced a W3C Recommendation for a new version of RDF which adds features to this 2004 version, while remaining compatible. RDF/XML Syntax Specification (Revised) Abstract The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web. This document defines an XML syntax for RDF called RDF/XML in terms of Namespaces in XML, the XML Information Set and XML Base. The formal grammar for the syntax is annotated with actions generating triples of the RDF graph as defined in RDF Concepts and Abstract Syntax. The triples are written using the N-Triples RDF graph serializing format which enables more precise recording of the mapping in a machine processable form. 
The mappings are recorded as test cases, gathered and published in RDF Test Cases. 1 Introduction. RDF Vocabulary Description Language 1.0: RDF Schema. Abstract The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web. This specification describes how to use RDF to describe RDF vocabularies. This specification defines a vocabulary for this purpose and defines other built-in RDF vocabulary initially specified in the RDF Model and Syntax Specification. Contents. RDF Semantics. W3C Recommendation 10 February 2004. RDF Test Cases. W3C Recommendation 10 February 2004 New Version Available: "RDF 1.1 Test Cases" (Document Status Update, 25 February 2014) The RDF Working Group has produced a W3C Recommendation for a new version of RDF which adds features to this 2004 version, while remaining compatible. Please see "RDF 1.1 Test Cases" for a new version of this document, and the "What's New in RDF 1.1" document for the differences between this version of RDF and RDF 1.1. Editors: Jan Grant, (ILRT, University of Bristol) Dave Beckett, (ILRT, University of Bristol) Series editor: Brian McBride (Hewlett Packard Labs) Please refer to the errata for this document, which may include some normative corrections. See also translations. Copyright © 2004 W3C® (MIT, ERCIM, Keio), All Rights Reserved. Abstract. RDF Primer. The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. This Primer is designed to provide the reader with the basic knowledge required to effectively use RDF. It introduces the basic concepts of RDF and describes its XML syntax. It describes how to define RDF vocabularies using the RDF Vocabulary Description Language, and gives an overview of some deployed RDF applications. It also describes the content and purpose of other RDF specification documents. 
The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. RDF is intended for situations in which this information needs to be processed by applications, rather than being only displayed to people. RDF is based on the idea of identifying things using Web identifiers (called Uniform Resource Identifiers, or URIs), and describing resources in terms of simple properties and property values.
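The triple model the Primer describes can be sketched with plain data structures. The following is a minimal, hand-rolled illustration, not W3C sample code: the example URIs and FOAF-style property names are assumptions, and the serializer emits only a simplified subset of the N-Triples format mentioned above (URIs in angle brackets, literals in quotes, one statement per line ending in a period).

```python
# Minimal sketch of the RDF data model: each statement is a
# (subject, predicate, object) triple. Subjects and predicates are URIs;
# objects are either URIs or literal values. The URIs below are hypothetical.

def to_ntriples(triples):
    """Serialize triples in a simplified N-Triples style: URIs in angle
    brackets, literal objects in double quotes, ' .' terminating each line."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

triples = [
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.org/alice", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/bob"),
]
print(to_ntriples(triples))
```

An RDF/XML parser conforming to the syntax specification quoted above would yield exactly this kind of triple set from an equivalent XML document; the graph, not the XML, is the real content.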
<urn:uuid:ab45a527-ec7b-4db4-b889-382eaff3d3e3>
CC-MAIN-2016-26
http://www.pearltrees.com/t/semantic-web/w3c/id511609
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.857631
930
3.65625
4
Acorn's range of 8-bit machines includes the Atom (on the picture), the BBC models A, B and B+, the Electron, the BBC Master and the Master Compact. The Atom (1979) had a 6502A/1 MHz CPU, 8 KB ROM (max. 16), 2 KB RAM (max. 12), 8 colors and 3 voices. Several peripherals were developed for it, e.g. a 5.25" FDD and a network called EcoNet (which allowed up to 250 Atoms to be linked). |Atom Emulator 1.33||Acorn Atom emulator for DOS (freeware)||Author's homepage| |Games on the author's page| The adventure started in 1981, when the British Broadcasting Corporation (BBC) wanted to host a series of programs introducing the British public to computers. Acorn Computers Ltd. scooped the contract. The first model, the BBC model A (on the picture), was limited to 16 KB of RAM and was tape-based, while the BBC model B doubled the amount of RAM and added a floppy drive or a network adaptor. |BeebEm for Windows 1.02||Good BBC Model B emulator for Windows (freeware)||Author's homepage| |Horizon 1.1||Commercial BBC model B emulator for Windows 95 (crippled demo version)||Authors' homepage| |BeebEm DOS 1.2C||BBC Model B emulator for DOS (freeware)||The BBC lives| |Norwegian FTP archive of BBC software||ROM images| |The BBC B was quite expensive, so Acorn decided to produce a smaller, more limited "little brother", called the Electron. As technology improved, Acorn introduced its "big brother", the BBC Master. It also came in several models: the Master 128 and 512, with 128 and 512 KB of memory respectively.| |ElectrEm beta 9b||Acorn Electron emulator for Windows/DOS (freeware)||Author's page| |Stairway to Hell||ROM images|
<urn:uuid:7a5175db-6df2-4ce6-9788-4214612a03e7>
CC-MAIN-2016-26
http://www.oocities.org/siliconvalley/9723/acorn.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00027-ip-10-164-35-72.ec2.internal.warc.gz
en
0.816186
443
2.59375
3
Old Capitol’s history began as the seat of the territorial government of Iowa. It became the University’s first permanent building in 1857 when the state legislature moved to Des Moines. In addition to being the administrative center of the University, at various times it was also the home of the law school, the library, a museum, a dormitory, and even a gymnasium. The story of Old Capitol intersects with some of the most defining moments in the nation’s history. Abraham Lincoln was eulogized on its steps on April 19, 1865. A hundred years later, another moment of turmoil—the protests over the Vietnam War—engulfed Old Capitol. It is the heart of the University, its pivot, and the image conjured up when remembering the high bluffs and city above the Iowa River. Despite Old Capitol’s popularity, it has had its detractors. In 1939, the rabidly anticlassicist Frank Lloyd Wright famously called the building his least favorite on campus, adding, “all of your buildings are very bad . . . and they are destructive of me and my work.” He advised the University to “forget your sentimentality for Old Capitol else you are doomed to destruction.” Wright was advocating for contemporary design. Yet Old Capitol remains the focus of collective memory and the point of departure for architecture on campus, having inspired the Beaux-Arts Classicism of the Pentacrest buildings, the dome of Boyd Law Building, and the axes along which the various campuses are organized. Old Capitol itself has also been refined and redefined over the years, with a near total rehabilitation from 1921 to 1924 that added the west portico, an element included in the original design but never built. Owing to a lack of space, and after 110 years and fifteen University presidents, the Office of the President was moved in 1970 from its location in the southeast corner of the first floor to Jessup Hall. 
Old Capitol was rededicated as part of the 1976 Bicentennial celebrations, this time restored to its original character as territorial seat and home of state government. The 2006 renovation, made more extensive than originally planned by a November 20, 2001, fire that destroyed the lantern (cupola) and dome, has even more fully revived the building’s nineteenth-century character. A late example of Greek Revival architecture, Old Capitol reiterates on a more modest scale the state capitol in Springfield, Illinois (also designed by Rague) and a distinguished succession of state capitols (Ohio, Tennessee) going all the way back to Thomas Jefferson’s Virginia state capitol at Richmond (1799). The walls of Old Capitol are composed of porous Iowa limestone, giving the building a rough-hewn quality. The portico columns, pediment, bell housing, lantern (cupola), and dome are all wood painted to imitate stone. Owing to its prominent porticoes, Old Capitol is a Doric building. This choice was both symbolic and aesthetic—the fluted Greek Doric order, and its associations with the Parthenon and Athenian democracy, conveys efficiency, modesty, and good government. The façade walls are articulated with the even sparer Doric pilasters. Frugality and moral rectitude are the order here, relieved only by the Corinthian capitals of the lantern columns, modeled on the Choragic Monument of Lysicrates, a fourth-century BCE work in Athens. The dome, recently regilded in gold leaf, captures the sun to become the focal point of the building and the entire campus. The results of the 2006 project are also visible in the detailed work done to restore Old Capitol with greater historical accuracy. Because no drawings existed from the building’s construction, architectural historians pieced plans together from fragments. 
Some changes were made—the original wood-shingled roof, which had been replaced first with slate, then with asphalt shingles, was restored with standing-seam metal cladding—but Old Capitol today is as close to its original design as it has been since the nineteenth century. Inside, the inversely rotated stairway has been retained, and the building’s bell—destroyed in the fire—has been replaced by one from the same period. The new interior color scheme, more in keeping with the mid-nineteenth century, has also been introduced; in place of sober white walls from the 1970s, Old Capitol is warmed by lavender, rose, and azure walls. Burnished and reopened in May 2006, it again greets visitors and looks westward across Iowa, as it has since 1842. As a “nationally important example of Greek Revival architecture,” Old Capitol has been designated a National Historic Landmark. Old Capitol is accessible to people with disabilities.
<urn:uuid:a7130a30-9ce8-48c4-ae4e-4583cd9ca3a7>
CC-MAIN-2016-26
http://maps.uiowa.edu/oc
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00112-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966354
985
3
3
Skin Cancer (Melanoma) Data Rhode Island Numbers - About 240 Rhode Islanders were diagnosed with melanoma of the skin on average every year from 2007 through 2011. - An average of 33 people died due to melanoma every year from 2007 through 2011. - Rhode Island males get melanoma of the skin at about one and a half times the rate of females from 2007 through 2011. - Males are, however, about twice as likely as females to die from melanoma from 2006 through 2010. - The male death rate for 2007 was too low to be publicly reported, as was the female rate for 2008. By Race / Ethnicity - White Rhode Islanders get melanoma of the skin at a rate about 36 times that of blacks from 2007 through 2011. - White Rhode Islanders accounted for nearly all (99.5%) of melanoma deaths from 1999 to 2010. - Older Rhode Islanders are more likely to get melanoma of the skin. The rate of new cases is highest for Rhode Islanders over 80. - The death rate from melanoma of the skin also increases with age and is highest for persons over 80. Rates are age-adjusted and based on US Census population estimates computed for individual calendar years. Death rates are not shown for years in which there are fewer than 10 deaths of the type observed, following the rule of the National Center for Health Statistics. For example, there were fewer than ten deaths from melanoma of skin among Rhode Island women in 2008, so the death rate for that year has been suppressed.
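The NCHS suppression rule cited in the note above can be written as a small check. This is an illustrative sketch, not Health Department code: the function name, the per-100,000 scaling, and the population figures are assumptions; only the fewer-than-ten-deaths threshold comes from the text (published rates are also age-adjusted, which this crude-rate sketch omits).

```python
def reportable_death_rate(deaths, population, min_count=10):
    """Return a crude death rate per 100,000 population, or None when the
    count falls below the NCHS-style suppression threshold (fewer than
    10 deaths are not publicly reported, per the note above)."""
    if deaths < min_count:
        return None  # suppressed, like the 2008 female melanoma rate
    return 100_000 * deaths / population

# Illustrative figures (the population numbers are assumptions):
print(reportable_death_rate(33, 1_050_000))  # an average melanoma year
print(reportable_death_rate(9, 530_000))     # prints None: suppressed
```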
<urn:uuid:5682c4ec-5c79-4b93-9ff9-5feb14169163>
CC-MAIN-2016-26
http://www.health.ri.gov/data/cancer/skin/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00182-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952752
321
2.578125
3
School Library Journal. Gr 2-4-Each of these brief titles discusses the history of the site and its significance. In Carlsbad, readers learn how limestone and water formed the caves, while Lascaux and Qumran focus on the discoveries of prehistoric wall paintings and the Dead Sea Scrolls at these locations. Thousand Buddhas explores the ancient, decorated cliff-side cave temples of western China. Each title has a simple map. While these books offer enough information for rudimentary reports, the texts are dry, the layouts are uninspired, and the color photographs are of average quality. Mediocre fare.-Michael Giller, South Carolina Governor's School for the Arts and Humanities, Greenville Copyright 2003 Reed Business Information.
<urn:uuid:a183b185-001a-4339-8d4b-24092f2b4ec2>
CC-MAIN-2016-26
http://www.barnesandnoble.com/w/qumran-caves-brad-burnham/1112238774?ean=9780823962594&itm=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396100.16/warc/CC-MAIN-20160624154956-00136-ip-10-164-35-72.ec2.internal.warc.gz
en
0.856189
173
2.921875
3
The “Big Splash” global school experiment on water quality to be launched on World Water Day This year’s edition of World Water Day, on 22 March, which will be officially celebrated in Cape Town (South Africa), will see the launch of a global chemistry experiment involving the testing of water quality by school children all over the world. The ‘Big Splash’ is one of the highlights of the day’s events in Cape Town, which have been organised with all of the United Nations agencies that are members of UN-Water, including UNESCO. It will kick off a global chemistry experiment, “Water: a Chemical Solution.” For the event, 1,000 students aged 15 to 18 in the Cape region of South Africa will test water quality, from 22 to 25 March, measuring salinity and acidity, and learning how water is filtered and distilled. This initiative, launched by UNESCO and the International Union of Pure and Applied Chemistry (IUPAC) as part of the International Year of Chemistry, aims to raise the awareness of primary and secondary school students of the importance of water as a vital resource. Students will be able to register the results of their tests in an interactive on-line map. After its launch in South Africa, the experiment will be available to interested schools all over the world. Agnès Bardon, UNESCO Division of Public Information. Tel: +33 (0) 1 45 68 17 64. [email protected]
<urn:uuid:14a45b3f-13da-42b1-ba69-9ceb4572bc61>
CC-MAIN-2016-26
http://www.unesco.org/new/en/media-services/single-view/news/the_big_splash_global_school_experiment_on_water_quality_to_be_launched_on_world_water_day/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00199-ip-10-164-35-72.ec2.internal.warc.gz
en
0.920714
319
2.9375
3
Perhaps now is the time to remind Americans of the significance of state governments. While being heard by Congress seems like a monumental task, Americans can talk to—and be heard by—their local lawmakers. Local governments are best suited to address local issues, and together, a chorus of state lawmakers can elicit real change in a state’s direction and hold sway over the federal government. The United States Constitution was designed with local governments in mind. The 10th Amendment gives power to the states and to the people and reminds all Americans they do have a seat at the table. And, while the federal government was shut down, state governments were open and fully functional. States have bills to pay, promises to keep and their constituents to serve. In 48 states, the law requires a balanced budget, and states’ taxation policies, education policies, environmental policies and other policies give them their own unique signatures. The beauty of state governments is not their similarities, but their differences. These differences give Americans the ability to make decisions on what is best community by community, from deciding in which state to live to which state to own a business. State governments provide choice for the American people, and those choices are what allow Americans to make decisions best suited for them. State governments are representative of the government our founders envisioned: of the people, by the people and for the people. The first priority of state governments is to answer their residents, not the federal government. While it is true state lawmakers have come to impasses—Connecticut’s government shut down in the early nineties—the effects of a state shutdown are felt immediately and painfully, making state government shutdowns a particularly distasteful course for lawmakers. This shutdown marked the 18th time the federal government has shut down since 1976. 
Certainly it seems the federal government is willing to use the American people as a bargaining chip. Perhaps it is time to remind the federal government of its purpose. If government is emblematic of the people it serves, then our federal government does not hold the American people in high regard. While there will always be differences in opinion, state governments show that regular compromise can be achieved and that opposing parties can work together. It is time the federal government looked to the states as models of leadership. Piscopo is a state representative from Connecticut and serves as the National Chairman of the American Legislative Exchange Council.
<urn:uuid:47fae5b1-a190-404d-93af-8645b0d90119>
CC-MAIN-2016-26
http://thehill.com/blogs/congress-blog/politics/329639-look-to-state-governments-as-models-of-leadership
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00034-ip-10-164-35-72.ec2.internal.warc.gz
en
0.971492
484
3.34375
3
Visual Anthropology and the Anthropology of Art — Etnohistori.org. When I hear the word “culture”… For more than a hundred years anthropology has been spreading sweetness and light. And now that the results are in—now that even the strangest customs from the remotest places have been recognized as truly human and entirely natural—it is plain that the popular verdict has been an enthusiastic assent. Its ethical understandings are widely regarded as benign. Its politics are as congenial to the liberal imagination as they are to the radical mind. Its broader implications have been sympathetically received by a wide range of people who have gladly melded its doctrines with their own. And if there is any one thing which explains this congeniality and appeal it is the persuasive conception of ‘culture’ which anthropology has bestowed upon the world. “Whenever I hear the word culture, I reach for my gun” “Wenn ich ‘Kultur’ höre, nehme ich meine Pistole.” –which is often attributed to Hermann Wilhelm Göring (photo), commander of the Luftwaffe in Nazi Germany. …I reach for my gum (Babe Ruth?) ANTHROPOLOGISTS for JUSTICE and PEACE. What do we make of Occupy Wall Street? - Open Anthropology Cooperative. For those with an interest in moving the global economy in more humane directions, the story of the hour is the Occupy Wall Street movement that has now spread worldwide. Its eruption has spurred a discussion on Anthro-L from which I take the following exchange. I believe that many of us these days don't believe that evolving Capitalism forward, in a more humane direction, means we have to give up the good life. We certainly don't have to all ware the same colored cloths or have stagnant, planned central economies--that would be boring. We can have more fun and variety--without having to worry if our small business fails or our last paycheck stops we might die for lack of basic medical care. 
Would you place doing away with multinational corporations as we now know them and financial institutions too big to fail in the reform or revolutionary category? Why not make films in Indigenous languages? - Transient Languages & Cultures. 2006 saw the release of several films with actors speaking endangered languages - Mel Gibson's Apocalypto (Mayan) and Rolf de Heer's film Ten Canoes, set in Arnhem Land, and with actors speaking mostly in Ganalbingu (and see Anggarrgoon on it). Leaving aside the pleasure of hearing actors speak in Indigenous Australian languages, I liked Ten Canoes - it was funny, it gave an idea of the good and the bad about small societies - you're looked after, but you have reciprocal responsibility, and NO privacy - everyone knows what you're thinking. The filming was beautiful - both the recreation of early photos, and the shots of the light on the water and the tangled trees in the swamp. And the authors worked hard to "make a film that would not only satisfy local tastes and requirements but would also satisfy Western audiences used to Western storytelling conventions. " Good eh? For example, "Paddling a new canoe" by Paul Gray in the Herald Sun (8/1/07) criticises these decisions. Hate tourism « Culture Matters. Scott Atran: A Memory of Claude Lévi-Strauss. In 1974, when I was a graduate student in anthropology at Columbia University, I wanted to organize a discussion of universals with people whose ideas I wished to know more about than I thought I could get from their writings. At the time, I was working for Margaret Mead as one of her assistants at the American Museum of Natural History, so I asked her how I might go about getting my wish. She said "talk to these people and see if they'll meet. " So I went to see Noam Chomsky in Cambridge, Jean Piaget in Geneva, and Jacques Monod in Paris, and they agreed; but I wondered if Levi-Strauss would because he seemed so aloof. 
Margaret licked her lips and laughed: "Well, that's his look, aloof and frail, but he's more playful than he lets on and he'll outlive me by thirty years if a day. Just tell him I sent you. " I ran from La Bastille to the College de France on Rue des Ecoles and up the steps to knock on his door. When I started there was still no science of mind. Human Evolution, Special Traits, Human Origins.
<urn:uuid:464f866f-14af-4dc8-95d6-7139aefe242e>
CC-MAIN-2016-26
http://www.pearltrees.com/trancebutton/anthropology/id3870888
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962812
957
2.640625
3
The students were encouraged to identify, question and cross social boundaries during lunch periods last week. According to School Counselor Kim Nelson, the activity took place in the cafeteria because that is where social boundaries are typically evident. “Mix it up is a positive step that schools can take to help create learning environments where students see each other as individuals and not just members of a separate group,” Teaching Tolerance Director Maureen Costello said. Students were given a guide of questions to ease the process of communicating and making new friends. “When people step out of their cliques and get to know someone, they realize just how much they have in common,” Costello said. Nelson added that when students interact with others who are different from them, biases and false perceptions can disappear.
<urn:uuid:10e25cee-44e9-4e9c-8f5a-67f85f30f2e6>
CC-MAIN-2016-26
http://www.nj.com/salem/index.ssf/2012/11/students_at_mary_s_shoemaker_s.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00004-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966967
174
3.234375
3
Southern Africa could be facing a heightened risk of malaria this year, the World Health Organization warns. The WHO has urged countries to hand out mosquito nets The WHO says that the climate phenomenon La Nina has caused unusually wet conditions in the region, which could raise infection levels. The WHO has urged countries to raise awareness and distribute anti-malaria drugs and insecticide-treated nets. Malaria is one of the main causes of death in southern Africa, killing an average of 400,000 people each year. "Malaria is a climate sensitive disease and for this time of the year we have experienced uncommonly heavy rainfall and flooding in parts of southern Africa," said Joaquim Da Silva, WHO's Malaria Epidemics & Emergency Officer in the region. Further heavy rainfall has been forecast until February. La Nina originates in the eastern Pacific Ocean, but its effects reach around the globe, making wet regions wetter and dry ones drier. Mr Da Silva said the phenomenon could also raise the risk of flooding in river systems in Angola, Zambia, Zimbabwe, Mozambique, Namibia, Botswana and South Africa.
<urn:uuid:ed161dd3-a8ae-4dd7-a24e-c0209c4915ef>
CC-MAIN-2016-26
http://news.bbc.co.uk/2/hi/africa/7169421.stm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00130-ip-10-164-35-72.ec2.internal.warc.gz
en
0.943021
233
3.078125
3
To Teach Social Skills, Educators Turn To TV. A unique program at one Pennsylvania school is using the power of television news to teach social skills to youngsters with Asperger’s syndrome. Students in the Asperger’s support program at Worrall Elementary School outside Philadelphia produce “Action 7 News,” using a green screen to bring everything from Major League Baseball to world events down to size. While the kids have fun producing the broadcast, the program is pure therapy, say their teachers and therapists. Standing in front of a camera helps the students learn to speak clearly. It also gives them a chance to play back their reports and analyze their own presentation. Meanwhile, reporting also offers the kids an opportunity to understand that issues are not always black and white. The program appears to be paying off. For the pint-size reporting staff, however, the best part of producing the newscast may be showing off their TV skills to the rest of the school. Saxon Publishers. State-of-the-Art Science Program Grades K–8 Science Program Combining interactive write-in texts, hands-on activities, and a full digital curriculum, ScienceFusion provides multimodal learning options to build inquiry and STEM skills, preparing students for success in future science courses and careers. GCSE Geography Revision. Serious Games For An Active Classroom. Serious Games challenging us to play a better education Promethean Announces Partnership with BrainPOP Atlanta, June 22, 2007 -- Promethean, a global leader in interactive learning, announced an exciting new partnership with BrainPOP, the world's leading producer of online, animated educational content for grades K-12. Both sites' content will be optimized to integrate seamlessly with Promethean Activclassroom technology, allowing teachers to develop and deliver more dynamic, engaging and effective lessons. 
"Using Promethean and BrainPOP together provides the resources that any teacher needs to engage interest, meet learning styles and differentiate instruction for all students," said Jill Meeker, a Fulton County, GA elementary school teacher who uses both technologies in her classroom. "I know the impact of this partnership will be tremendous." All animated topics are developed in accordance with national education standards (NCTM, NSES and NCTE). THC Classroom — History.com TV Episodes, Schedule, & Video. Discovery Educator Network - A Community of Educators. Biography - DiscoverySchool.com. Augmented Reality Game Lets Kids Be the Scientists. President Barack Obama may have urged Americans to celebrate science fair winners as if they were Super Bowl champions during his 2011 State of the Union address, but American students still struggle with science. Now, researchers hope to ignite kids' interest in science by drawing them into an activity long loved by children: computer games. On April 4, scientists at the Massachusetts Institute of Technology and the Smithsonian Institution plan to launch a first-of-its-kind "curated game" — funded by the National Science Foundation — that's designed to give middle-school students a peek into the process of science. The game, called "Vanished," is an environmental mystery game with a science-fiction twist, said Scot Osterweil, a game developer and creative director of MIT's Education Arcade. It's also an "augmented reality" game, meaning kids will do real-world experiments and activities that mesh with the fiction of the game. Collaborative game play Doing science online. GCSE Bitesize - Religious Studies. Vanished. Vanished is a "curated game," a format derived from alternate reality games (ARGs) for museums, being developed by Education Arcade for the Smithsonian museums in Washington D.C., with NSF funding. 
The game ran from April 4 through May 22 2011, and targeted middle school age kids in informal settings like afterschool programs. The ARG aspects of the game included going to museums and interacting with real world places and objects as kids solved puzzles to unravel a fictional interdisciplinary science mystery that touched on life sciences, environmental sciences, paleontology, archaeology, geology, anthropology, math, the arts, and language arts. Players collaborated online and in-person while receiving help from MIT students who acted as facilitators and conferenced with Smithsonian scientists. The project staff hopes to have changed students' conception of the scientific method to one where they view scientific problems as interesting mysteries to be solved. GCSE Physics Revision. U.S. Studies - American History (Grades 6 - 8) Should we assign homework? LOYOLA PRESS A Jesuit Ministry : Home.
<urn:uuid:d687b11f-7ac3-4800-bf38-a26bd3e3dfa3>
CC-MAIN-2016-26
http://www.pearltrees.com/jenovesia/educators/id4142633
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00179-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940574
949
3.40625
3
An international team of astronomers has discovered the largest known planet orbiting another star. The planet has a large radius but a low density. The "transiting" planet - meaning one that passes in front of its parent star as seen from Earth - is about 70% larger than Jupiter. But the gas giant has a much lower mass than Jupiter - the biggest planet in our Solar System - making it of extremely low density. Details of the work are to appear in the Astrophysical Journal. The new exoplanet, called TrES-4, is located in the constellation of Hercules and was discovered by a team working on the Transatlantic Exoplanet Survey (TrES). It is mostly made up of hydrogen. TrES-4 circles the star GSC02620-00648, which lies about 1,435 light-years away from Earth. Being only about seven million km (4.5 million miles) from its parent star, the planet is also very hot, about 1,327°C (1,600 K; 2,300°F). Because of the relatively weak pull exerted by TrES-4 on its upper atmosphere, some of the atmosphere probably escapes in a curved comet-like tail. "TrES-4 is the largest known exoplanet," said lead author Georgi Mandushev, from the Lowell Observatory in Flagstaff, US. It is so big, in fact, that its size is difficult to explain using current theories about superheated giant planets. "We continue to be surprised by how relatively large these giant planets can be," says Francis O'Donovan, a graduate student in astronomy at the California Institute of Technology (Caltech) which operates one of the TrES telescopes. "But if we can explain the sizes of these bloated planets in their harsh environments, it may help us better understand our own Solar System planets and their formation." Its density of 0.2 grams per cubic centimetre is so low that the planet would, in theory, float on water. By definition, a transiting planet passes directly between the Earth and the star, blocking some of the star's light and causing a slight drop in its brightness. 
"TrES-4 blocks off about 1% of the light of the star as it passes in front of it," said Dr Mandushev. "With our telescopes and observing techniques, we can measure this tiny drop in the star's brightness and deduce the presence of a planet there." Planet TrES-4 makes a complete revolution around its parent star every 3.55 days, so a year on this planet is shorter than a week on Earth. The TrES is a network of three 10cm telescopes in Arizona, California and the Canary Islands. In order to accurately measure the size of the TrES-4 planet, astronomers used the 0.8m telescope at the Lowell Observatory in Arizona, the 1.2m telescope at the Whipple Observatory, also in Arizona, and the 10m Keck telescope in Hawaii.
<urn:uuid:d770e5be-c0bf-4b84-89fb-09f0dccea1ad>
CC-MAIN-2016-26
http://news.bbc.co.uk/2/hi/science/nature/6934603.stm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00171-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946674
625
3.46875
3
"Could cut greenhouse gas emissions by 86 percent" "Keeping in line with the Energy Independence and Security Act of 2007 (EISA), the Environmental Protection Agency (EPA) revealed that it will need 800 million gallons of biodiesel in the United States domestic market in 2011. The EISA "expanded" the Renewable Fuels Standard (RFS2), which has volume requirements for Biomass-based Diesel, undifferentiated Advanced Biofuels and Cellulosic Biofuels. Biodiesel is the only commercially accepted U.S.-made Advanced Biofuel that fits the description of an undifferentiated Advanced Biofuel and Biomass-based diesel, and it can cut greenhouse gas emissions by as much as 86 percent when made from animal fats, agricultural oils, and waste greases. "We applaud EPA for this announcement and for reaffirming the common sense notion that we should displace petroleum diesel fuel with Advanced Biofuels like biodiesel," said Manning Feraci, Vice President of Federal Affairs for the National Biodiesel Board. Biofuel producers are concerned with whether these production levels can be reached due to biodiesel prices being much more expensive than regular diesel. Producers would like to see Congress pass a $1 per gallon biodiesel tax once again, since it expired last year, in order to make biodiesel more affordable."According to my quick & dirty analysis based on a Thus, most of the 800 million gallons of biodiesel required for next year will need to come from "agricultural oils," which the EPA claims in this press release will cut greenhouse emissions up to 86%. Apparently, the EPA is unaware of another of it's press releases which states: "It is important to note that biofuel production and consumption, in and of itself, will not reduce GHG or conventional pollutant emissions, lessen imports or consumption of petroleum, or alleviate pressure on exhaustible resources. 
Biofuel production and use must coincide with reductions in the production and use of fossil fuels for these benefits to accrue. These benefits would be mitigated if biofuel emissions and resource demands augment, rather than displace, those of fossil fuels. Economic Costs of Biofuel Production Biofuel feedstocks include many crops that would otherwise be used for human consumption directly, or indirectly as animal feed. Diverting these crops to biofuels may lead to more land area devoted to agriculture, increased use of polluting inputs, and higher food prices. Cellulosic feedstocks can also compete for resources (land, water, fertilizer, etc.) that could otherwise be devoted to food production. As a result, biofuel production may give rise to several undesirable developments: 1. Land use patterns may change, resulting in GHG emissions. Biofuel feedstocks grown on land cleared from tropical forests, such as soybeans in the Amazon and oil palm in Southeast Asia, generate particularly high GHG emissions. 2. Even when feedstocks are not directly grown on forests or native ecosystems, higher crop prices can encourage the expansion of agriculture into undeveloped land, leading to GHG emissions and biodiversity losses. 3. Biofuel production and processing practices can release GHGs. Fertilizer application releases nitrous oxide, a potent greenhouse gas. Most biorefineries operate using fossil fuels. The magnitude of the total GHG emissions resulting from biofuel production and use, including those from indirect land use change, might even exceed those generated by fossil fuels in some circumstances. 4. The quantity of food brought to market might decrease, resulting in higher food prices and possibly more malnutrition. 5. Water quality could suffer as rising prices for agricultural commodities induce more intensive agricultural practices (e.g., greater use of inputs such as fertilizer). Increases in irrigation could unsustainably deplete aquifers. 6. 
Air quality could also suffer if the total impact of biofuels on tailpipe emissions plus the additional emissions generated at biorefineries increases net conventional air pollution." And that's just what the ever-trustworthy EPA has to say. Here's a small sampling of additional analyses to consider: 1. "As large-scale biofuels subsidies and mandates are enacted in the future, more and more forests, grasslands, etc., will be cleared, either directly or indirectly, releasing their tremendous stores of carbon (soils and plant biomass contain almost three times as much carbon as the atmosphere). When properly accounting for these land-use changes, Searchinger et al. (2) estimated that rather than reducing GHG, corn-based ethanol doubles emissions for over thirty years and results in increased emissions for 167 years. Switchgrass-based biofuels, even if grown on U.S. corn fields, would still increase GHG emissions by 50% over thirty years." 2. Lose-Lose on Biofuels? The EPA’s analysis suggests that the switch toward renewables will significantly increase various emissions. 3. Palm oil biodiesel can increase greenhouse emissions by 2,000% 4. New study claims ethanol and biodiesel may actually boost GHG emissions. Based on a case study for the state of Vermont: if every single ounce of waste grease from the 812,623 restaurants in the USA were somehow recycled in a 1:1 ratio to biodiesel, the maximum yield of biodiesel would be 940 million gallons, just slightly above the EPA requirement for next year. This doesn't take into account the massive amounts of fossil fuel and the non-existent infrastructure required to gather, transport, and convert the waste grease.
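The waste-grease figures in item 4 are easy to sanity-check. A quick sketch, where the per-restaurant yield is simply backed out of the post's own totals rather than being an independent number:

```python
# Rough reproduction of the post's waste-grease arithmetic.
# The 812,623 restaurant count and the 940M / 800M gallon figures come
# from the post; the per-restaurant yield is derived, not independent data.

restaurants = 812_623
max_yield_gal = 940_000_000        # hypothetical best case: all grease recycled 1:1
epa_requirement_gal = 800_000_000  # 2011 biodiesel mandate

per_restaurant = max_yield_gal / restaurants
margin = max_yield_gal / epa_requirement_gal - 1

print(f"implied yield per restaurant: ~{per_restaurant:,.0f} gal/year")
print(f"margin over EPA mandate: {margin:.1%}")
```

Even in this best case, the implied margin over the mandate is only about 17.5%, which is the post's point about how thin the waste-grease supply is.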
<urn:uuid:140a2cb8-e595-4fc7-82b6-6f5ac17efa1a>
CC-MAIN-2016-26
http://hockeyschtick.blogspot.com.au/2010/07/schizophrenic-epa-wants-more-biofuels.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00056-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931271
1,131
2.9375
3
Chronic Exposure of Imidacloprid and Clothianidin Reduce Queen Survival, Foraging, and Nectar Storing in Colonies of Bombus impatiens. In an 11-week greenhouse study, caged queenright colonies of Bombus impatiens Cresson were fed treatments of 0 (0 ppb actual residue; I, imidacloprid; C, clothianidin), 10 (14 I, 9 C), 20 (16 I, 17 C), 50 (71 I, 39 C) and 100 (127 I, 76 C) ppb imidacloprid or clothianidin in sugar syrup (50%). These treatments overlapped the residue levels found in pollen and nectar of many crops and landscape plants, which have higher residue levels than seed-treated crops (less than 10 ppb; corn, canola and sunflower). At 6 weeks, queen mortality was significantly higher in 50 ppb and 100 ppb colonies, and by 11 weeks in 20 ppb–100 ppb neonicotinyl-treated colonies. The largest impact for both neonicotinyls, starting at 20 (16 I, 17 C) ppb, was the statistically significant reduction in queen survival (37% I, 56% C), worker movement, colony consumption, and colony weight compared to 0 ppb treatments. Bees at feeders flew back to the nest box, so it appears that only a few workers were collecting syrup in the flight box and returning the syrup to the nest. The majority of the workers sat immobilized for weeks on the floor of the flight box without moving to feed at sugar syrup feeders. Neonicotinyl residues were lower in wax pots in the nest than in the sugar syrup that was provided. At 10 (14) ppb I and 50 (39) ppb C, fewer males were produced by the workers, but queens continued to invest in queen production, which was similar among treatments. Feeding on imidacloprid and clothianidin can cause changes in behavior (reduced worker movement, consumption, wax pot production, and nectar storage) that result in detrimental effects on colonies (queen survival and colony weight). 
Wild bumblebees depending on foraging workers can be negatively impacted by chronic neonicotinyl exposure at 20 ppb.
<urn:uuid:0000ddcf-dbf8-4910-8e51-9f1d3c5af44b>
CC-MAIN-2016-26
http://www.beesource.com/forums/showthread.php?294751-Chronic-Exposure-of-Imidacloprid-and-Clothianidin-Reduce-Queen-Survival-Foraging-an&p=1075273
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937644
502
2.734375
3
Springboks hold a special place in the heart of many South Africans, because it is the country's national animal and the name of their national rugby team. A magnificent sight in the bush is most certainly watching the springbok jump. Their name literally translates to "jump antelope". When they jump they arch their back and their feet come together, off the ground, as they leap gracefully through the bush. This is a bit of a playful taunt to predators, signalling that the springbok is not only aware of the predator's presence, but too fit and fast to be hunted down. When springboks engage in this activity it is known as pronking. It is also used to startle predators by leaping suddenly out of the grass, also serving as an alarm to other animals that predators are close by. Pronking is also used to attract potential mates, by showing off how fit and healthy they are. There are occurrences of black springboks in the bush as well. This distinction in colour is merely due to an excess of melanin, which gives them a darker appearance. The black springbok usually does not live as long, because their darker colour makes them more conspicuous during the day and thus they cannot protect themselves by blending into their environment. This melanistic aspect is also found in leopards and cheetahs. When the opposite occurs, the animal is known as an albino – an example being the white lion. Springboks are abundant at Inverdoorn and even black springboks have been spotted.
<urn:uuid:91095b7e-f9ad-4784-8780-6b523a5be3a1>
CC-MAIN-2016-26
http://www.inverdoorn.com/gallery/item/springboks
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00065-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972443
320
2.796875
3
Electromagnetism is difficult to learn, and applying these physics principles in the real world is even tougher! Once you understand how voltage, current, resistance, and wattage work together, applying these principles will help you start building your own circuits. Here we will discuss some of the basics of soldering your own circuit board, but make sure you understand the principles behind what is discussed here beforehand. It will make learning how to solder your first circuit board much easier! Soldering is a useful tool to have in your back pocket no matter what industry you are in. Whether you are an IT technician, a plumber, a roofer, or any other kind of contractor, soldering can save your tail in several different situations. First and foremost, we must consider the soldering iron that we are going to use. For most circuit boards, it is recommended to use around a twenty-five or thirty watt iron. These irons come with both thick and thin tips. While for some projects you will use thick tips, for your first basic circuit board it will be best to use a thin-tip iron. Now that we have figured out what iron to use, let's consider what type of solder to use. For your first project, it is recommended to use Kester solder, preferably about 1 millimeter in diameter. If this is your first experience soldering a circuit board, the Kester 331 is recommended because it is easy to wash off with just your everyday faucet water. Use a copper circuit board, and try to make sure that it is clean. Dirty or drab copper will make soldering much more difficult. If however you must use a less than spectacular piece of copper, try to clean it off with a basic pink eraser. It is absolutely critical that your copper board is as sparkling as possible, because this is what will help you get a good soldering connection! 
A dirty or muddy board will have more random particles or debris that will get in the way of creating a strong connection. In the same vein, your soldering tip should be as sharp and clean as possible, as a dirty tip will also get in the way of making a good connection. During the soldering process make sure that your soldering tip is consistently shiny and never black or dirty. Have a towel with you the whole time to constantly wipe off your tip. Once you have completed all of these steps, you are ready to solder your very first circuit board. Touch your shiny tip against one side of the wire (it does not matter which side you do first), while pressing on the board and the other side of the wire, thus conducting similar amounts of heat into both the pad and the wire. In the next step, you will repeat this same process, but with the other side of the wire. If this step is done correctly, the copper from the board and the heated wire will fuse together and melt the solder rapidly. The heat should flow quickly into the solder joint. Heating any part of this circuit for too long will damage either the board or your wire, thus destroying your first soldered circuit board. When you are ready to clean up, use a solder sucker. Remember that soldering a circuit board can be extremely difficult, and that the only way you can really get good at this process is practice! Go over these tips as you try to solder your first board. As you become more advanced, you will find that you prefer a different type of solder, for example. Good luck!
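The voltage/current/resistance/wattage relationships mentioned at the top reduce to two formulas: Ohm's law, V = I × R, and electrical power, P = V × I. A quick numeric sketch, using an assumed 120 V mains supply and the article's recommended 25 W iron rating:

```python
# Ohm's law (V = I * R) and electrical power (P = V * I), the relationships
# the article says to understand before soldering. The 120 V mains value is
# an illustrative assumption (US mains); the 25 W rating is from the article.

mains_voltage = 120.0          # volts (assumed)
iron_power = 25.0              # watts, per the article's recommendation

current = iron_power / mains_voltage    # from P = V * I
resistance = mains_voltage / current    # from V = I * R

print(f"current drawn:      {current:.3f} A")   # 0.208 A
print(f"element resistance: {resistance:.0f} ohm")  # 576 ohm
```

The same two formulas let you size resistors, estimate heat dissipation, and check whether a component is within its power rating.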
<urn:uuid:e67c8172-4e3b-4fab-a824-90a38695e834>
CC-MAIN-2016-26
http://electronicsystemlevel.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00058-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947811
725
3.203125
3
C.A. Johnson, R.C. Williamson The linden borer (Saperda vestita Say) is a native insect found throughout northeastern North America that is a serious pest of basswood and linden trees. This fact sheet presents the signs and symptoms, the life cycle, and effective means of control of the linden borer.
<urn:uuid:cb983172-b260-41be-9375-94d04cbdafb0>
CC-MAIN-2016-26
http://learningstore.uwex.edu/Linden-Borer-P485.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00171-ip-10-164-35-72.ec2.internal.warc.gz
en
0.860668
76
2.734375
3
Available for: iPad, iPhone, Android, Mac, and Windows. The Holy War, an allegorical work by John Bunyan, portrays the continuous conflict between man and the devil. The city of Mansoul is attacked by Diabolus using ruse, intrigue and force. The battle is long and bitter, but through the intervention of Prince Emmanuel victory is secured, in spite of the weaknesses and failings of the inhabitants of Mansoul. The story succeeds in retaining the beauty of Bunyan's style, presenting scriptural truth and telling an exciting story which will be enjoyed by young and old alike. Bunyan (1628–1688) was a popular preacher as well as a very voluminous author, though most of his works consist of expanded sermons. In theology he was a Puritan, but not a partisan; nor was there anything gloomy about him. He was no scholar, except of the English Bible, but that he knew thoroughly.
<urn:uuid:03f5f085-69a5-40a2-b182-0fadd0e0b8a2>
CC-MAIN-2016-26
https://www.olivetree.com/store/product.php?productid=16766
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977656
198
3.0625
3
This city was the first one to have a woman police officer. Los Angeles. Alice Stebbins Wells was a social worker who joined the LAPD in 1910. She became the first policewoman in history with full arrest powers. London did not appoint its first policewoman until 1914.
<urn:uuid:9ad30b1a-f072-4698-96c0-c1ea70412706>
CC-MAIN-2016-26
http://www.historynet.com/quiz130413.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404405.88/warc/CC-MAIN-20160624155004-00169-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961056
106
2.75
3
Making Classroom Time More Valuable With Edmodo Posted by: Lucia Giacomantonio This post is part of our Edmodo Spotlight series which highlights Edmodo teachers, schools and districts. If you are interested in being featured, please complete this form. Heather Landrum is a 7th grade ELAR teacher at Hillsboro Jr. High in Hillsboro, Texas. Making Classroom Time More Valuable I have 134 7th grade writing students, and with only 47 minute periods, there is not enough time in the day to get it all in! Edmodo has given me time that extends beyond the learning day. Students are able to access resources, PowerPoint, reading selections, and assignments from their mobile device or computers at home; this helps them stay ahead of where we start class the next day. I also teach two class periods that are reading/writing combined. Edmodo has proved effective as a learning platform for these students to practice reading strategies and skills, without taking time out of our already full class period. Facilitating Peer Conferences Currently I’m using Edmodo to give my students more time to have peer conferences. Students are grouped by class periods and they post their stories in their group. Their classmates will offer feedback and suggestions for improving effectiveness of the piece. We have a specific response framework that we use when responding to our classmates’ writing. - Reply to at least 3 peers - Reply with a specific positive, and then a “what if” - Students are also expected to reply to anyone that comments on their writing Using Edmodo for peer writing engages my students in the revision stage of the writing process by providing a novel and different way for them to read and respond to one another’s writing. It also allows them to continue their collegial conversations even after the class period is over. Often, students even go beyond the required three responses because they enjoy getting to read what their classmates have written. 
Utilizing Shared Folders I have many favorite features in Edmodo, but my favorite would have to be Folders. I have uploaded PowerPoints, writing mentor texts, assignments, videos, and web links into Folders. My students have access to everything we use at school right at their fingertips; no more lost papers or keeping up with copies. Edmodo folders keep those references right where I need them – with students! Advice for Teachers Getting Started With Edmodo Give yourself time to play with it and learn how to use it before you introduce it to all of your students. I first launched Edmodo with two of my seven class periods. This made it easier for me to manage, troubleshoot, and get used to than if I had started with all 134 of my students at the beginning of the year. I would also advise you to make using Edmodo meaningful. I added folders and resources that compelled my students to log in and use Edmodo. It wasn’t just another form of social networking for them.
<urn:uuid:219f6b98-295b-47e4-a319-c94eea7a3125>
CC-MAIN-2016-26
http://blog.edmodo.com/2013/03/27/making-classroom-time-more-valuable-with-edmodo/comment-page-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00107-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961086
633
2.515625
3
This report reviews user-oriented generalized reservoir/river system models. The terms reservoir/river system, reservoir system, reservoir operation, or river basin management "model" or "modeling system" are used synonymously to refer to computer modeling systems that simulate the storage, flow, and diversion of water in a system of reservoirs and river reaches. Generalized means that a computer modeling system is designed for application to a range of concerns dealing with river basin systems of various configurations and locations, rather than being site-specific customized to a particular system. User-oriented implies the modeling system is designed for use by professional practitioners (model-users) other than the original model developers and is thoroughly tested and well documented. User-oriented generalized modeling systems should be convenient to obtain, understand, and use and should work correctly, completely, and efficiently. Modeling applications often involve a system of several simulation models, utility software products, and databases used in combination. A reservoir/river system model is itself a modeling system, which often serves as a component of a larger modeling system that may include watershed hydrology and river hydraulics models, water quality models, databases and various software tools for managing time series, spatial, and other types of data. Reservoir/river system models are based on volume-balance accounting procedures for tracking the movement of water through a system of reservoirs and river reaches. The model computes reservoir storage contents, evaporation, water supply withdrawals, hydroelectric energy generation, and river flows for specified system operating rules and input sequences of stream inflows and net evaporation rates. The hydrologic period-of-analysis and computational time step may vary greatly depending on the application. 
Storage and flow hydrograph ordinates for a flood event occurring over a few days may be determined at intervals of an hour or less. Water supply capabilities may be modeled with a monthly time step and a period-of-analysis several decades long, capturing the full range of fluctuating wet and dry periods including extended drought. Stream inflows are usually generated outside of the reservoir/river system model and provided as input to the model. However, reservoir/river system models may also include capabilities for modeling watershed precipitation-runoff processes to generate inflows to the river/reservoir system. Some reservoir/river system models simulate water quality constituents along with water quantities. Some models include features for economic evaluation of system performance based on cost and benefit functions expressed as a function of flow and storage.||en
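The volume-balance accounting the abstract describes can be sketched in a few lines. This is an illustrative toy model under simple assumptions (a single reservoir, per-period volumes in consistent units, excess above capacity spilling downstream), not a sketch of any particular modeling system; the function name and figures are invented for the example.

```python
# Minimal volume-balance reservoir simulation (illustrative sketch only;
# real reservoir/river system models add operating rules, routing, water
# quality, and multi-reservoir coordination on top of this accounting).
def simulate_reservoir(inflows, withdrawals, evap_rates, capacity, storage0):
    """Track storage through time with a simple volume balance.

    inflows, withdrawals, evap_rates: per-period volumes (same units).
    Returns (storages, spills), one entry per period.
    """
    storage = storage0
    storages, spills = [], []
    for q_in, q_out, evap in zip(inflows, withdrawals, evap_rates):
        # Volume balance: new storage = old + inflow - withdrawal - evaporation
        storage = storage + q_in - q_out - evap
        storage = max(storage, 0)            # storage cannot go below empty
        spill = max(storage - capacity, 0)   # excess passes downstream
        storage = min(storage, capacity)
        storages.append(storage)
        spills.append(spill)
    return storages, spills

# Example: a 100-unit reservoir starting half full, over two periods.
storages, spills = simulate_reservoir([60, 10], [5, 5], [0, 0],
                                      capacity=100, storage0=50)
# storages == [100, 100], spills == [5, 5]
```

The same accounting loop works at an hourly step for flood events or a monthly step for multi-decade water supply studies; only the input series change.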
<urn:uuid:1a15f7bb-70a5-403f-ac5a-b2ce70dc90ca>
CC-MAIN-2016-26
http://repositories.tdl.org/twdl-ir/handle/1969.1/6092?show=full
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00165-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93907
512
2.65625
3
According to the Bureau of Labor Statistics, jobs for dentists are expected to show overall growth of 21 percent between 2010 and 2020. This growth is due to an increase of patients needing dental care. In May 2010, the median annual salary for dentists was $146,920. For this profession, you must graduate from an accredited dental school and pass both written and practical exams. Dentists use a variety of tools to care for patients, including drills, mouth mirrors, probes and scalpels. These tools also help them evaluate dental health and diagnose any diseases that the patient may have. Dentists must have steady hands, especially when using different tools to examine a patient's mouth. They must be comfortable using these tools for common dental procedures including filling a cavity, repairing fractured teeth and removing tooth decay. Dentists need to make sure that all dental equipment is handled properly and safely at all times. Dental Design Skills Dentists need good observational skills to fit a denture or a bridge in a patient's mouth. Common dental appliances include braces, retainers and space maintainers, and dentists have to make sure that dental appliances are specifically designed to fit the patient's mouth. They need to pay attention to the shape, space and color of a patient's teeth, especially during dental procedures such as root canals and teeth-whitening treatments. Dental students are required to take a variety of science classes focusing on anesthesia, radiology and anatomy. They must be knowledgeable about all aspects of a patient's mouth to diagnose health problems. Dentists must also know all technical terms relating to dental care, including the many different types of teeth and their functions in a patient's mouth. Dentists use specific software programs to maintain patients' medical records, health insurance and office files.
One example is Alphadent, a practice management program that helps dentists organize appointments and medical records. Another program, Dentimax, helps dentists handle billing and scheduling, as well as digital imaging of patients' teeth. Depending on the size of the office, dental assistants are responsible for using this software on a daily basis; however, dentists should also be comfortable using the computer, especially programs giving them access to patients' records.
<urn:uuid:baa26b06-35de-4ad3-8e59-36113e2e3285>
CC-MAIN-2016-26
http://work.chron.com/technical-skills-dentist-16308.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00020-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951521
466
3.25
3
STEP THREE: BE INFORMED If a disaster strikes, you may have to provide care. First assess the scene by evaluating: - Is the scene safe? - What happened? - How many injured people are there? - Are there bystanders who can help? When providing care, begin by checking the person for life-threatening conditions.
<urn:uuid:aa4a4cd8-e83c-42f4-98b8-5f0a72d41c08>
CC-MAIN-2016-26
http://www.redcross.org/flash/brr/English-html/check.asp
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00151-ip-10-164-35-72.ec2.internal.warc.gz
en
0.926429
73
2.6875
3
10th Anniversary: President's Emergency Plan for AIDS Relief Recognizing the 10th Anniversary of the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) Office of the Spokesperson June 18, 2013 Today, Secretary Kerry marked the 10th Anniversary of the creation of the historic U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) by announcing that the millionth baby will be born HIV-free this month due to PEPFAR-supported prevention of mother-to-child transmission (PMTCT) programs. The Secretary also announced that a new PEPFAR analysis shows that there are 13 countries that have reached the programmatic “tipping point” in their AIDS epidemic. Through PEPFAR, as of September 30, 2012, the U.S. directly supported more than 5.1 million people on antiretroviral treatment (ART). This number is up from 1.7 million in 2008 – a three-fold increase in only four years. In FY 2012, PEPFAR programs supported antiretroviral drugs (ARV) to prevent mother-to-child transmission for more than 750,000 pregnant women living with HIV. Thanks to this effort, an estimated 230,000 infant HIV infections were averted in 2012 alone. PEPFAR also supported HIV testing and counseling for more than 46.5 million people in 2012. One Million Babies Born HIV-Free This month, the one-millionth baby will be born HIV-free because of PEPFAR support – something unimaginable ten years ago when the program began. Antiretroviral drugs can prevent mother-to-child transmission of HIV. The earliest PMTCT regimen decreased the likelihood that a mother would transmit HIV to her baby from 35 percent (with no PMTCT intervention) to 24 percent. Today, we have far more efficacious regimens and we have learned how to implement them more effectively.
For example, under Option B+, the same combination of ARV medications used to treat adults living with HIV will be offered to all HIV positive pregnant women for life, reducing the likelihood that a mother will transmit HIV to her infant to less than five percent. In addition, Option B+ has the distinct advantages of maintaining the mother’s health, providing lifelong reduction of HIV transmission to uninfected sexual partners, and preventing mother-to-child transmission in future pregnancies. Successful implementation of this approach across countries with high HIV burdens can help achieve the commitment made by President Obama on World AIDS Day in 2011 for the United States to support six million people on ART and provide antiretroviral drugs for 1.5 million pregnant women living with HIV by the end of 2013. 13 Countries Have Reached the Programmatic Tipping Point in Their Epidemic One way of measuring progress toward the goal of an AIDS-free generation is to compare the number of annual new adult HIV infections with the annual increase in adults on treatment. By reducing infectivity through effective treatment and rapidly increasing coverage of ART, it is possible to bring the number of annual new adult HIV infections below the annual increase in adults on ART – thereby achieving the programmatic “tipping point.” When the Obama Administration released the PEPFAR Blueprint for Creating an AIDS-Free Generation last World AIDS Day, seven countries were at this programmatic tipping point. According to a new PEPFAR analysis, 13 countries are actually at this tipping point. This remarkable progress is thanks to the combined and coordinated efforts of all partners involved in the fight against global AIDS. Through PEPFAR, we are firmly committed to help countries in moving toward and beyond this tipping point. But we cannot do it alone. This is a shared responsibility. 
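The programmatic "tipping point" described above is a simple comparison: annual new adult HIV infections fall below the annual increase in adults on antiretroviral treatment. A minimal sketch of that check, using made-up figures rather than real country data:

```python
# Illustrative check of the programmatic "tipping point": a country crosses
# it when annual new adult HIV infections fall below the annual increase in
# adults on antiretroviral treatment (ART). Figures below are invented.
def past_tipping_point(new_infections, art_this_year, art_last_year):
    """Return True if the epidemic has passed the programmatic tipping point."""
    art_increase = art_this_year - art_last_year
    return new_infections < art_increase

# Example: 40,000 new infections vs. an ART rollout that added 55,000 adults.
print(past_tipping_point(40_000, 255_000, 200_000))  # True
```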
PEPFAR Key Populations Challenge Grants At the International AIDS Conference last July, Secretary Clinton announced the creation of a $20 million Key Populations Challenge Fund (KPCF) to support country-led plans to expand a high-impact, comprehensive package of HIV prevention, treatment, and care services for key populations, which include men who have sex with men (MSM), people who inject drugs (PWID), and sex workers (SW). HIV disproportionately impacts key populations. For example, some studies have shown that MSM were 19 times more likely to be living with HIV than people in the general population; and that SW were 13.5 times more likely to be living with HIV when compared to other females of reproductive age in the general population. Globally, 16 million individuals report injection drug use, and an estimated three million PWID are living with HIV. Secretary Kerry announced today that six countries (and two regional programs) will be awarded funds. The countries are Cambodia, Ghana, Nepal, Senegal, Swaziland, and Zimbabwe. The regional programs include PEPFAR’s Asia and Central American regions. These funds will be leveraged as PEPFAR works hand-in-hand with partner country governments and civil society to strengthen sustainable programs and interventions for key populations. PEPFAR Heroes Award As part of the 10th anniversary commemoration, PEPFAR is launching the “PEPFAR Heroes: Giving Hope, Saving Lives” contest. The contest seeks to highlight outstanding individuals who have demonstrated extraordinary commitment and passion in serving people and/or communities living with and affected by HIV, and to convey the partnership of the American people with the people of partner countries in creating an AIDS-free generation.
<urn:uuid:4a23644b-52ec-4571-b4a4-176251d9d5e0>
CC-MAIN-2016-26
http://www.scoop.co.nz/stories/WO1306/S00480/10th-anniversary-presidents-plan-for-emergency-aids-relief.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.933875
1,148
2.515625
3
Exposure to a second language at an early age has the potential to lay a foundation for future learning. It offers children new ways of thinking about the concept of language and other cultures in the world around them. Preschoolers and Pre-Kindergartners at Jack and Jill Childcare receive weekly Spanish instruction in a relaxed environment through fun, creative activities that promote language recognition and literacy. We will be taking our Spanish exposure to a new level for all toddler and preschool classrooms. All preschool and toddler staff are being trained on how to incorporate Spanish into daily routines. Amy Timm is a Preschool Spanish Educator who will be training our staff each quarter. Amy will teach us how to add new Spanish vocabulary into daily routines such as large group time, small group time, meals, and even potty training! She will use our curriculum themes to develop vocabulary just for our kids! Each week, you will receive a list of Spanish vocabulary words that we are working on in the classroom so that you can learn right along with us! We will combine our Spanish & Sign Language Curriculum for faster, more effective learning for the children, teachers, and parents! Jack and Jill Childcare teachers make extensive use of sign language with infants and toddlers, as it can help decrease the amount of frustration children have in communicating their needs before they are able to speak. Instruction in American Sign Language is also offered on a weekly basis in most classrooms. All infant and toddler staff will participate in sign language training. We will be adding new vocabulary signs for our teachers to share with their children and families each week depending on the curriculum theme. Each week, you will receive a list of Sign Language vocabulary words that we are working on in the classroom so that you can learn right along with us! Sign language vocabulary is used every day in all of our classrooms.
The increased training that our teachers will be receiving will allow all of our teachers to increase their knowledge of sign. We will also provide our parents with an opportunity to learn the signs that we use through classes and printed resources. Jack and Jill Childcare is excited to provide our preschool children with a FREE music in motion curriculum unlike any other program offered at area centers! Music in Motion is a learning adventure of music and movement with songs and activities promoting fitness and creative expression. The class includes creative movement, fun with props, dance expression, and tumbling. Classes invite creativity and offer activities that support musical, cognitive, language, social, emotional, and large/small motor development. Click here for more detailed information. A talented dance instructor offers on-site dance instruction during the day once a week to students at Jack and Jill Childcare for an additional fee. This is a convenient way to introduce your children to music and movement without having to schedule a separate lesson after a busy day. Click here for details.
<urn:uuid:26f8c82e-d2a5-483f-bd08-396b4def16bd>
CC-MAIN-2016-26
http://www.jackandjilledu.com/childcare-programs/curriculum-enhancements/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956712
579
2.875
3
Traditionally, the name Viennese Waltz refers to a very specific musical genre: the relatively fast waltzes of the Romantic era in Vienna. The music is usually written in 6/8 time at a tempo of 29-30 measures per minute, although it is sometimes written as a fast 3/4 at 58-60 measures per minute. It is almost always instrumental, written for orchestras of varying sizes. The most well-known of all composers of Viennese Waltz music is Johann Strauss, responsible for such celebrated works as the Blue Danube and Tales From the Vienna Woods. But the music of Johann Strauss and similar composers of the 1800's only accounts for a fraction of the music which is popular these days for dancing the Viennese Waltz. Dancers are enjoying many different styles of fast 6/8 waltz, much of which is not Viennese at all. The music can be instrumental or vocal, Classical, Celtic, Country or even a "Top 40" hit.
<urn:uuid:2eebcb15-32cd-4ae2-9c82-8bc037c279e8>
CC-MAIN-2016-26
http://www.ballroomdancers.com/Dances/dance_overview.asp?Dance=VW
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934846
210
2.625
3
The music which makes you say, "Gosh, I wish I'd written this!" The music it is most important to your development as a composer that you study is the music that you love. (This is apparently not obvious to a lot of people. It wasn't obvious to me. So I make it explicit.) Now, if you're starting out, you may want to put that on hold until you've picked up some analytic skills on music that it is easier to do that with. This of course depends on just what your favorite music is. If you think that Clementi's piano sonatina Op 36 No 1 is the pinnacle of human artistic expression, I have great news for you. What makes music "easier" to analyze is not really a property of the music itself, but rather its proximity to the type of music that the analytic terms and concepts were designed to describe. A quarter century ago I took my college's "Harmony and Counterpoint I" class, because it was the pre-req for all the subsequent theory and composition classes. I want to share with you what the instructor said to us that first day. She said, in essence: In this class we're going to cover the basics of what is termed Common Practice Era (CPE) voice leading. The CPE is basically the core period of classical music. We're teaching you to write harmony in this style of music not because it is superior to others (because it is not), not because it is more important than others (because it is not), not because it is more sophisticated than others (because it is not). We're teaching you to compose in this style because this style has a rigorous, thoroughly worked out method of composition that is easy to teach. And for that reason, classes on all styles of music typically use the terminology and concepts from CPE voice leading, even when the rules of those other musics are completely different. That is why this class is a pre-req: to give you the musical language needed to participate in those classes.
So I'd like to suggest that if you're totally new at this, and if you haven't covered that material yet, that you do likewise. That you take -- either in person, or through self-study with textbooks and/or the internet -- an intro CPE voice leading class, and analyze whatever they tell you to, however they tell you to. Then you take those tools, and start applying them to the music you love -- though if it is music that is from a different genre, then you will have to adapt them. You will have to treat the rules you learned as hypotheses to test. You will have to take what you learned, and ask, "how closely does this piece that I love follow these rules?" Because the ways that they don't are part of what gives them their characteristic sound. If you've already covered the basics of CPE theory, and you're ready to start applying that learning outside the CPE, come back with a question about analyzing a specific piece and we'll see what we can do.
<urn:uuid:99fdd8c3-8e6b-4544-a32e-75091fb8ae10>
CC-MAIN-2016-26
http://music.stackexchange.com/questions/17226/which-scores-are-best-for-those-who-are-new-to-analysis
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00033-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979785
637
2.625
3
The Federal Trade Commission has noted that “concern previously centered on the belief that biotechnology patent protection was too strong” and “would actually obstruct commercialization on new products, thereby hindering follow-on innovation. This problem has yet to materialize. The reasons for this are numerous and are often straightforward matters of basic economics. In addition to licensing being widely available, researchers make use of a variety [of] strategies to develop working solutions to the problem of access, including inventing around, going offshore, challenging questionable patents and using technology without a license.” Indeed, the FTC recently concluded that concern that patenting upstream technology, or “research tools,” would actually obstruct commercialization of new products and hinder follow-on innovation in biotechnology “has yet to materialize.” In 2007, David Adelman from the University of Arizona and Kathryn DeAngelis from Piper Rudnick published a detailed study of more than 52,000 biotechnology patents granted in the United States between January 1990 and December 2004. In the words of the two authors, their study described “the general trends in biotechnology patenting including patent counts, patent-ownership patterns, and the distribution of biotechnology patents across distinct areas of research and development.” They concluded that “this analysis finds few tangible signs of patent thickets that define the anticommons.” The National Research Council of the National Academies of Science found that “the number of projects abandoned or delayed as a result of technology access difficulties is reported to be small, as is the number of occasions in which investigators revise their protocols to avoid intellectual property complications or pay high costs to obtain access to intellectual property. 
Thus, for the time being, it appears that access to patents or information inputs into biomedical research rarely imposes a significant burden for academic biomedical researchers.” A 2005 survey of scientists involved in biomedical research found that “patenting does not seem to limit research activity significantly, particularly among those doing basic research.” Only one percent of a random sample of 381 academic scientists reported a project delay of more than a month due to patents on materials necessary for their research, and none reported abandoning a research project due to the existence of patents. (Id. at 17; see also Walsh et al., View from the Bench, 309 Science 2002 (2005).) An earlier study found that patents “rarely precluded the pursuit of worthwhile projects.” It noted that “for a given project, usually fewer than a dozen outside patents require serious consideration, and the number of licenses required is much fewer, often none.” (Id.) When requested, licenses were often available at minimal or no cost. (Walsh, Patents & Access, at 17.) “Thus, not only are barriers or delays rare, but costs of access for research purposes are negligible.” (Id.) Indeed, the evidence in the Myriad case showed that a vast amount of BRCA-related clinical and experimental research has been conducted since Myriad’s patents issued. Another study confirmed this and concluded that “the present analysis and accompanying observations do not point to the existence of a wide patent thicket in genetic diagnostic testing.”
<urn:uuid:3b4d1ffe-1644-4849-8455-11f5308d97b1>
CC-MAIN-2016-26
http://www.biotech-now.org/public-policy/patently-biotech/2011/06/gene-patents-stifle-research
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00039-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95495
664
2.609375
3
Minnesota is a leading producer of wild rice in the United States. Wild rice is native to hundreds of Minnesota lakes and streams. It's been harvested by American Indians for generations. Farmers started planting wild rice about 50 years ago. Researchers are looking for ways to make wild rice more productive. That means more profit for farmers. But the research angers many American Indians who say wild rice is a sacred plant. Ojibwe lore says Indian people live in Minnesota because of wild rice. Eons ago a series of prophecies led the Ojibwe westward from the Atlantic coast. "We consider it to be sacred, because it's a gift from the creator", says White Earth elder Earl Hoaglund. "It was foretold in those prophecies that as the ice melted we were to move westward and food would be provided for us on the water. And that's what happened. When we moved into the Wisconsin, Minnesota areas that rice was already there, growing." About a dozen adults and children slide canoes into a small lake on the White Earth Reservation. The kids are learning traditional ricing. First, they throw tobacco on the water to offer thanks to the creator. Then they crawl into the canoes and paddle into a rice bed. Two people sit in each canoe. One paddles slowly through the rice. The other uses small poles called knockers to shake rice into the canoe as an experienced ricer offers advice. Joe LaGarde smiles as he watches the kids catch on. LaGarde has harvested wild rice since he was a small child. He says entire families riced together. They gathered enough to last a year. If the crop was good they might sell a few pounds. LaGarde says things changed when farmers began growing wild rice in paddies. It was no longer a gift. It became a commodity. "The hard part when you're talking with people, researchers, they look at things as a way of making money," says LaGarde. "And we look at things as being equal, not there to take and dominate or make money off of." 
Wild rice is a cash crop for dozens of Minnesota farmers. They call it paddy rice. They grow it in flooded fields. Minnesota produced about $40 million worth of paddy rice last year. California farmers grow even more. John Gunvalson has one of the largest wild rice operations in Minnesota, near Bemidji. He started growing wild rice decades ago, before it caught on with California farmers. Now, it's hard to compete with the California farms. "They have a climate where they don't get very much wind. They just have a perfect sunny climate without wind and storms, and they get at least double what the average Minnesota grower gets, probably close to triple," says Gunvalson. Wind is a big problem for wild rice farmers. It knocks some of the ripened rice off the plant. In the wild, that's good. In fact it's essential. That's how the plants re-seed. But it's trouble for farmers. Sometimes half the rice falls off before a farmer can harvest it. Raymie Porter is trying to change that. Porter is a biologist at the University of Minnesota's experiment station in Grand Rapids. He breeds wild rice for farmers. He's growing strains of wild rice whose seeds don't fall off the plant as easily as natural wild rice does. Raymie Porter has a genetic map of wild rice to work from, but he's not doing any genetic engineering. He's doing what plant breeders have done for centuries. If he sees a plant with characteristics that he likes, he collects the seed from that plant. "We've got the seed in storage now, and we'll plant those out next year and repeat the cycle," says Porter. "We just kind of do that over a number of years, and we make progress that way, pretty much traditional breeding." But some Ojibwe people are upset by the research.
They say wild rice is not like wheat. It's sacred, and humans should not try to improve on what the creator made. "See what's going to happen there is if we start changing wild rice and it gets into our wild rice beds, it's going to change our wild rice into a hybrid and it's going to lose all immunity it's built up through thousands of years," says Joe LaGarde. Researchers say that's not likely. Raymie Porter says the traits he's breeding for would disappear if cultivated rice gets mixed with natural rice. For example, if a rice plant in the wild holds onto its seeds, birds will eat the seeds, and that plant won't reproduce. Porter says cultivated wild rice that escapes probably won't survive. "If you had a seed brought in by a duck from a cultivated paddy into a natural stand, that plant might grow one season, but it's just not going to be favored by natural selection, so it's going to be maintained at a much lower percentage, or maybe disappear entirely," says Porter. But that doesn't satisfy many Ojibwe. They say people should not alter wild rice in any way. Jill Doerfler is a University of Minnesota student who grew up on the White Earth Reservation. She says wild rice is not a farm crop, and that's something non-Indian people just don't understand. "Western society has a hard time understanding other societies have different values," says Doerfler. "We're not concerned about yields and uniformity and things like that. And that view is just as valid and just as useful as the western view of uniformity and western agriculture." Not all Ojibwe agree. Two of Minnesota's 44 rice farms are owned by the Red Lake Band of Ojibwe. But many Indian people fear for the future of what they call the sacred Manoomin or Good Berry. They're especially worried about the science of wild rice, afraid someone is eventually going to produce genetically engineered wild rice.
Raymie Porter, the U of M researcher, says Ojibwe people are entitled to their beliefs, but he says no group of people should expect their religious beliefs to shut down scientific inquiry. "We're looking for more knowledge about wild rice. And I suppose their argument would be, with more knowledge comes the possibility that knowledge could be used for things that they don't want it to be used for, and that's always a concern, I suppose," says Porter. "But our concern is not to limit knowledge. We think more knowledge can actually benefit a greater number of people." Porter believes his research might benefit wild rice in lakes and rivers. The information can help the biologists who are trying to protect natural wild rice. And the new strains of cultivated wild rice give farmers hope for bigger crops and a brighter future. But many Ojibwe people see the wild rice research as the theft of something sacred.
<urn:uuid:9e249c95-7fba-4391-b499-798e978dcd1a>
CC-MAIN-2016-26
http://news.minnesota.publicradio.org/features/200209/22_gundersond_wildrice-m/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404382.73/warc/CC-MAIN-20160624155004-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974994
1,492
3.34375
3
Impact on the Environment during the Ganesh Festival Deities, emotions, and rituals hold an important place in Indian culture, and especially so among the Maharashtrian people. In recent years, much concern has been raised about the way the Ganesh Festival is "celebrated." Noise pollution is caused by the loudspeakers used during the "aratis," or ritual prayers, that organizers put up during the festival, and it has become a source of worry for people living nearby. Worship should come from the heart; Lord Ganesha is not pleased by the loud sounds used in processions. Another impact is air pollution from the frequent use of firecrackers, which release heavy smoke and are, in the end, nothing but a waste of money. Lord Ganesha never asked anywhere for loud sounds and heavy smoke to celebrate His arrival, His stay, or His departure processions. The natural environment is also harmed by the use and immersion of idols made from Plaster of Paris (POP), which do not disintegrate after immersion in water bodies. This runs against the basic concept behind the idols, which were traditionally made from earth taken near one's home, reflecting the cycle of creation and dissolution in Nature. POP dissolves slowly and releases toxic materials into the water in which the icon is immersed, and the chemical paints used contain mercury and cadmium, which are harmful to the natural flora of the water bodies, to the creatures living in the water, and ultimately to people who consume the water. Precautionary methods to prevent pollution during the Ganesh Festival 1. Do not use loud sounds during processions and prayers, and advise other people not to do so either. 2. Do not use firecrackers that create heavy smoke. After all, Lord Ganesha would never like this; this is our environment, and we must protect it for future generations and for future Ganesh Festivals. 3.
Insist on buying Ganesh idols made from natural clay (shaadu maati), which do not cause pollution after immersion. 4. Insist on natural colours for the Ganesh murti (idol). 5. A better option is a Ganesh murti made of stone, brass, or another metal, according to your aesthetic taste, which can be used for the festival every year. 6. If you do wish to use a Plaster of Paris (POP) idol, do not immerse it; have it repainted every year before the festival. 7. Get creative! Icons made of paper pulp (papier-mâché) can be just as light as POP ones and much more attractive, but be mindful of paper consumption as well. 8. Encourage people to immerse Ganesh idols in tanks rather than natural water bodies, so that other living creatures also remain safe and healthy.
Short for ultra high frequency, UHF is one of the two standard ranges of electromagnetic waves set aside for broadcast television in the first half of the twentieth century. In the United States, the Federal Communications Commission reserved a specific spectrum of radio waves to provide access to local television stations. Today, those same bands are still in use and serve several other functions as well. UHF television broadcasts fall within the ultra high frequency portion of the spectrum, which covers a range from 300 megahertz to 3.0 gigahertz. Initially, three bands were set aside for television broadcasts. The range of 54 to 88 megahertz provided room for channels one through six, and 174 to 216 megahertz covered channels seven through thirteen; both of these lower bands actually lie in the VHF (very high frequency) range, which is why channels two through thirteen broadcast using VHF technology. The third band, from 470 to 890 megahertz, carried the UHF channels fourteen through eighty-three, and UHF broadcast television continued on it for many years. The advent of mass cable television, and more recently Internet technology, has made it possible for broadcast television to continue without relying strictly on the traditional UHF band. However, UHF radio technology is not completely outmoded; UHF radio waves are still active and have a place in today's world. Besides carrying broadcast television to areas that do not have access to cable, UHF frequencies are used by other communication devices. Cell phones, for example, often make use of a limited range of UHF signals in the 316 MHz to 3.16 GHz portion of the spectrum.
Hans Sachs Poster Collection

Peter Sachs, the heir of Dr. Hans Sachs, a dentist who formed the greatest pre-war collection of posters, obtained a decision from the Berlin Administrative Court in February 2009 declaring that he owned one of the 4,000 posters that had long been held in a museum in East Berlin (now the Deutsches Historisches Museum). There was no dispute that the Gestapo had seized the collection in the summer of 1938 on the orders of Josef Goebbels, or that Hans Sachs, having survived the war, received some compensation for his collection from the German government. An earlier attempt by Peter Sachs to recover the posters had been rejected by a government mediation panel, which decided that, since Hans Sachs had received some compensation, his son should not recover the posters, even though his father had believed the collection was destroyed and, living in the United States, could never have obtained information about the collection from the German Democratic Republic. The decree of the Administrative Court was appealed. In late January 2010, the Berlin Court of Appeals found that Sachs had good title to the poster but no longer had a remedy for its recovery, because restitution claims could no longer be made under a special restitution law. In March 2012, the German Federal Court of Justice ruled that Peter Sachs, as the rightful owner of the poster, could obtain its return under generally applicable German civil law.
Wassily Kandinsky, Phalanx Exhibition (1901) (Claimed)

Court Rulings & Decisions
- Nazi Looted Art: German Historical Museum must return the Sachs Poster Collection to the Heirs, Federal Supreme Court Press Release (English version), March 16, 2012
- Sachs Decision of the Federal Court of Justice, March 16, 2012
- Nazi Looted Art: German Historical Museum must return the Sachs Poster Collection to the Heirs, Federal Supreme Court Press Release (German version), March 16, 2012
- Decision of the Federal Court of Justice, English Translation, March 16, 2012

Press & Scholarly
- Eigentum ist nicht gleich Besitz, Patrick Bahners, Frankfurter Allgemeine Zeitung, p. 35, February 19, 2010
- Ownership is not Possession (English translation of the above), Patrick Bahners, Frankfurter Allgemeine Zeitung, p. 35, February 19, 2010
- What's Another Word for Injustice?, Marilyn Henry, The Jerusalem Post, February 6, 2010
- Nazi-Looted Posters Should Stay in Berlin, Court Says, Catherine Hickley, Bloomberg.com, January 28, 2010
- Einstein's Dentist, Goebbels, and Me -- His Great-grand-daughter Reports, Suzanne Glass, The Times, January 28, 2010
- Jewish Heirs Are Left with the Law, Gunnar Schnabel, Frankfurter Allgemeine Zeitung, March 3, 2009
- Sachs Poster Collection -- Injustice Remains Injustice, Gunnar Schnabel, Tagesspiegel, February 27, 2009
- "Legal venue where there was no legal venue" (English translation), Peter Raue, Tagesspiegel, February 24, 2009
- Peter Sachs Wins Battle for £4 Million Poster Collection Seized by the Nazis, David Charter, Times Online, February 12, 2009
- Berlin Court Rules in Favor of Heir in Nazi-Looted Poster Suit, Catherine Hickley, Bloomberg.com, February 10, 2009
- Looted Art -- In the Name of My Father (English translation), Kerstin Kohlenberg, Die Zeit, January 15, 2009
- Press Release from Attorneys Representing Peter Sachs, February 18, 2010
- Museum Appeals the Court Decision Affirming Sachs's Claim (English translation), May 11, 2009
- German Government Press Release (English translation), Bernd Naumann, March 13, 2009
- German Government Press Release (in original German), Bernd Naumann, March 13, 2009
Illinois Mulls Building Greenhouse Gas Pipeline

NEW YORK - The governor of Illinois is mulling whether the state should help build a pipeline to combat global warming by carrying the greenhouse gas carbon dioxide from planned clean coal plants to aging oilfields. Carbon dioxide can be captured at power plants and then pumped through the proposed pipeline to be entombed deep underground. The carbon-capturing technology can be added to plants that gasify coal, which a handful of utilities are planning to build, or to those that run on natural gas. Gov. Rod Blagojevich's energy plan, released earlier this year, calls for 10 coal gasification plants over the next 10 years. Blagojevich is also trying to gauge corporate interest in building a network of CO2 pipelines with the state. "Constructing a carbon dioxide pipeline is a big part of our plan because it will allow us to build coal gasification plants and use the CO2 they emit to extract more oil without contributing to global warming," Blagojevich, a Democrat who was up for reelection on Tuesday, said in a statement on Monday. U.S. energy companies have been pumping small amounts of CO2 from natural deposits into depleted oil and natural gas fields to boost fuel output since the 1970s. That is about as long as oil production has been declining in the world's largest energy consumer. The Department of Energy said last spring that injecting carbon dioxide from power plants could quadruple U.S. oil reserves. Blagojevich said pumping CO2 into aging Illinois oilfields could nearly double the amount of oil produced in the state annually. It could also be used to push out methane from coal beds to provide about seven years' worth of natural gas for the state, he said. Experts say the extra expenses of carbon capture, transport, and burial could add as much as a fifth to household energy bills. Some of that cost could be eased through trading of carbon credits.
Blagojevich also said Illinois was joining the Chicago Climate Exchange, a voluntary exchange in which active members are legally bound to cut emissions or buy credits representing them. If CO2 becomes a regulated commodity in the United States, something President George W. Bush has fought, surplus quantities could also be injected into underground saline formations, the governor said.
<urn:uuid:876926b8-4e09-4ecc-a27e-f1646e17ab33>
CC-MAIN-2016-26
http://www.enn.com/climate/article/5410
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00080-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966936
473
2.53125
3
Romeo and Juliet Act 3, Scene 1 Mercutio and Benvolio enter talking in the public square. Tybalt arrives looking for Romeo. Romeo arrives and Tybalt instigates trouble by calling him a villain. Romeo responds by saying that he loves Tybalt (for now they are cousins through marriage). Mercutio interferes with their arguing and picks a fight with Tybalt. Romeo and Benvolio try to break up the fight, but Mercutio ends up getting stabbed by Tybalt. Benvolio helps Mercutio to a nearby house. Mercutio keeps yelling: "A plague o' both your houses!" Act 3, Scene 1, line 90 Benvolio returns and tells Romeo that Mercutio is dead. Romeo is outraged: "This day's black fate on more days doth depend;/ This but begins the woe others must end." Act 3, Scene 1, lines 118-19 Romeo and Tybalt then fight. Romeo kills Tybalt and then flees, as Benvolio informs him that the citizens of Verona are awake. If the Prince finds Romeo, he will be put to death. Soon, the Prince, Capulet, his wife, Montague, and his wife, arrive on the scene and question what has happened there. Benvolio informs them of all that took place. The Prince exiles Romeo, and if he is found, he will be put to death. All exit.
A Conversation for Great Castles

Castles in Poland

potocki Started conversation Jan 9, 2002

Due to its central location between east and west, and due to the numerous wars in the area, Poland features a number of very interesting and very different castles. They range from the simple fortifications of the early Middle Ages and the imposing brick structures of the knightly orders of the high Middle Ages, to the beautifully ornamented castles of rich barons and kings in the Renaissance and Baroque eras, and the heavy fortresses of the 19th century. Each of the following castles represents a different type with a different historical background:

1. The Wawel of Krakow.
Originally founded in the 11th century, the Wawel holds a prominent position on a rock above the city of Krakow, controlling both the city and the Vistula River. Thanks to that location and to the importance of Krakow, the Wawel was the seat of the Polish kings until the capital was moved to Warsaw in 1596. While in use the Wawel underwent numerous renovations (e.g. after the Tartars overran the city in the 13th century) and now shows a number of styles in parallel. It also hosts a beautiful cathedral whose crypt holds the sarcophagi of most of the Polish kings, among them August of Saxony.

2. Malbork Castle of the Teutonic Knights.
Malbork (Marienburg in German), situated in the estuary of the Vistula River, was built in the 13th century by the Teutonic Knights, who at that time controlled the Baltic coast and the area around Gdansk. The castle is possibly the largest brick structure in the world and, unlike the Wawel, has kept its original medieval design to this day. Its lack of decoration and its numerous walls and towers show immediately that Malbork was built to dominate, not to please.

3. The castles of Warsaw.
Once Warsaw became the Polish capital in the 17th century, the king himself as well as numerous courtiers and magnates created wealthy and beautiful estates in and around the city, mostly in Baroque style. Worth mentioning is Wilanow Castle, summer residence of the Polish kings and comparable (though much smaller) to Versailles, featuring many rooms decorated in the fashion of the time and a well laid-out park. Besides Wilanow, Warsaw offers the City Castle in the Old Town, which is not especially attractive from the outside but very interesting inside. A definite highlight are the castle-like buildings in the Baths Park (Park Lazienkowski), built purely for pleasure around, and even in, a set of small lakes close to the city centre.

4. Castle Krzysztopor.
Built in the early 17th century and located in central Poland, this castle was still built as a fortress (its name combines the words for "cross" and "battle axe") but was also meant to show the grandeur of its owner, the rich magnate Ossolinski. In keeping with the period's interest in astronomy, it featured 365 windows, 52 rooms and 12 ballrooms. However, the castle was intact for only a few years and began to decay after Swedish troops overran it in 1656. It is now a ruin, although still impressive to look at.

5. The fortresses of the 19th century.
Although not castles in the typical sense, these structures, comparable in their functionality to the big battleships or factories of their time, represent the end of the evolution of military strongholds. Their prominent features are triangular artillery buttresses, double and triple walls, large casemates for servicemen, and water moats. The forts were still in use in World War I and when the Red Army attacked Poland in 1920, but mobile warfare and air power later made such fortifications useless. The fortresses in Poland are not tourist attractions and are sometimes even difficult to find, as in Warsaw, where a ring of 12-15 forts around the city is now used for car garages, gardens and storage, or has been submerged under water. Warsaw still has one more or less intact fort over the Vistula River. Besides that there is the Boyen Fortress in Gizycko / Lützen, in Mazury.

Poland of course features many more castles worth mentioning.
Visit the Culture of Life Studies Program website!

During the month of May, when you buy any one of our products for yourself, we will donate any one of our products (of equal or lesser value) to the school, parish, or friend of your choice. This is a great way for you to build a culture of life right in your neighborhood. To take advantage of this special offer in our online store, email [email protected] with your receipt and send us the name and address of the school or person you would like us to donate to, plus the title of the unit study you would like to send. This special offer starts on May 1 and ends on May 31, so don't wait!

Life Is Precious is a resource for parents and their children and illustrates the basic facts of human development in the womb using scientifically accurate yet kid-friendly coloring pages and traditionally published picture books. In consultation with a former biologist, as well as veteran homeschool parents, we created this 4-week unit study to provide students with accurate information to enhance learning. Designed to complement any existing homeschool curriculum in grades K-2, Life Is Precious instructs children about the truth of human development while still respecting their innocence. It is perfect for the homeschool or co-op setting where parents are involved, and is a fun, easy way to learn about human development and the value of life. A version appropriate for classroom use is in production and will be available in early fall.
- 16 lessons focusing on four different picture books which teach the value of human life
- Four units, each illustrating a different aspect of precious life and human development
- Lessons to help your kids learn basic pro-life facts in a fun and easy way
- Activities and material to help students understand the beauty of human development and the intrinsic value and dignity of each human being
- Activities to help all types of learners: visual, kinesthetic, and auditory
- 15 illustrations and coloring pages of human life and fetal development commissioned by a professional artist
- Critical thinking activities to prepare students to recognize the value of each person regardless of age, size, or ability
- Fun and easy crafts using everyday materials
- Step-by-step instructions and photos for each craft
- Games, cooking, and active learning

Topics covered in units:
- The beauty of preborn life and fetal development: Angel in the Waters by Regina Doman
- Each of us is unique: On the Night You Were Born by Nancy Tillman
- Standing up for what you believe in: Horton Hears a Who by Dr. Seuss
- Defending the least of these: One by Kathryn Otoshi

- Comprehensive teacher guide
- 12-week fetal model to illustrate key concepts from the study
- DVD containing all of the printable appendices and coloring pages
- Bonus: Baby Steps DVD with live 4D ultrasound imagery of preborn babies in the womb

Your Life Is Precious unit study beautifully engages the student in drawing out the depths of these delightful books. What begins as an enjoyable read-aloud transforms into an insightful discussion through the medium of step-by-step crafts, which are supercute! The easy-to-follow format is a welcome relief to us busy homeschooling moms. What a fabulous way to instill the message of the value of each human life to children of all ages!

This material is professionally and uniquely put together to reach children in ways that are developmentally appropriate and fun at the same time.
Beyond using it for our children, we could not pass up the opportunity to purchase a copy to lobby this into our hybrid homeschool and promote this around the archdiocese of Atlanta. There is nothing else like this available. Thank you ALL! The Life is Precious Study Program perfectly complements what is already being taught about life in the home. The spiral binding, colorful picture, easy to follow unit studies, additional hands on/step-by-step craft projects, Baby Steps video, and a most precious little baby toy are welcomed treasures into any home. This product is for the entire family. May our children be equipped with the tools they need to truly transform our culture into a culture of LIFE! The Life is Precious curriculum from American Life League has been uniquely crafted with special attention to the learning and developmental needs of young children. Complete with a parent guide, DVD and 12-week fetal model, this is a comprehensive "out of the box" solution to helping your child truly know the dignity of every human life. Strongly recommended for classroom use, homeschool settings, or for any family who wants to learn together about the beauty of all life. -Lisa M. Hendey, Founder of CatholicMom.com and author of The Grace of Yes Only one copy needed per family.
<urn:uuid:1ea3f4f9-9b36-4398-8b84-d92f1b05bf9b>
CC-MAIN-2016-26
http://www.all.org/life-is-precious/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00043-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928798
986
2.703125
3
- Overview, goals, learning types, and algorithms - Data selection, preparation, and modeling - Model evaluation, validation, complexity, and improvement - Model performance and error analysis - Unsupervised learning, related fields, and machine learning in practice Welcome to the third chapter in a five-part series about machine learning. In this chapter, we'll continue our machine learning discussion, and focus on problems associated with overfitting data, as well as controlling model complexity, a model evaluation and errors introduction, model validation and tuning, and improving model performance. Overfitting is one of the greatest concerns in predictive analytics and machine learning. Overfitting refers to a situation where the model chosen to fit the training data fits too well, and essentially captures all of the noise, outliers, and so on. The consequence of this is that the model will fit the training data very well, but will not accurately predict cases not represented by the training data, and therefore will not generalize well to unseen data. This means that the model performance will be better with the training data than with the test data. A model is said to have high variance when it leans more towards overfitting, and conversely has high bias when it doesn't fit the data well enough. A high variance model will tend to be quite flexible and overly complex, while a high bias model will tend to be very opinionated and overly simplified. A good example of a high bias model is fitting a straight line to very nonlinear data. In both cases, the model will not make very accurate predictions on new data. The ideal situation is to find a model that is not overly biased, nor does it have a high variance. Finding this balance is one of the key skills of a data scientist. Overfitting can occur for many reasons. A common one is that the training data consists of many features relative to the number of observations or data points. 
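The train-versus-test behavior described above can be demonstrated with a small experiment. The following is a minimal NumPy sketch (the data-generating function, sample sizes, and polynomial degrees are all invented for illustration) that fits polynomials of increasing flexibility to noisy data and compares training and test error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from a smooth underlying function
def underlying(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 20)
y_train = underlying(x_train) + rng.normal(0, 0.2, 20)
x_test = rng.uniform(0, 1, 200)
y_test = underlying(x_test) + rng.normal(0, 0.2, 200)

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

results = {}
for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares polynomial fit
    results[degree] = (mse(y_train, np.polyval(coeffs, x_train)),
                       mse(y_test, np.polyval(coeffs, x_test)))
    print(f"degree {degree:2d}: train MSE {results[degree][0]:.3f}, "
          f"test MSE {results[degree][1]:.3f}")
```

The degree-1 model underfits (high bias: both errors are high), while the degree-15 model drives the training error down yet typically generalizes worse than the moderate degree-3 model, which is overfitting in action.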
In this case, the data is relatively wide as compared to long. To address this problem, reducing the number of features can help, or finding more data if possible. The downside to reducing features is that you lose potentially valuable information. Another option is to use a technique called regularization, which will be discussed later in this series. Controlling Model Complexity Model complexity can be characterized by many things, and is a bit subjective. In machine learning, model complexity often refers to the number of features or terms included in a given predictive model, as well as whether the chosen model is linear, nonlinear, and so on. It can also refer to the algorithmic learning complexity or computational complexity. Overly complex models are less easily interpreted, at greater risk of overfitting, and will likely be more computationally expensive. There are some really sophisticated and automated methods by which to control, and ultimately reduce model complexity, as well as help prevent overfitting. Some of them are able to help with feature and model selection as well. These methods include linear model and subset selection, shrinkage methods (including regularization), and dimensionality reduction. Regularization essentially keeps all features, but reduces (or penalizes) the effect of some features on the model's predicted values. The reduced effect comes from shrinking the magnitude, and therefore the effect, of some of the model's term's coefficients. The two most popular regularization methods are ridge regression and lasso. Both methods involve adding a tuning parameter (Greek lambda) to the model, which is designed to impose a penalty on each term's coefficient based on its size, or effect on the model. The larger the term's coefficient size, the larger the penalty, which basically means the more the tuning parameter forces the coefficient to be closer to zero. 
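Ridge regression's shrinkage effect can be seen directly from its closed-form solution, w = (XᵀX + λI)⁻¹Xᵀy. Below is a NumPy sketch (the synthetic data and λ values are invented for illustration) showing the coefficient magnitudes falling as the tuning parameter grows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression problem: 50 observations, 10 features
X = rng.normal(size=(50, 10))
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(0, 0.5, 50)

def ridge(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam * I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

norms = {}
for lam in (0.0, 1.0, 10.0, 100.0):
    w = ridge(X, y, lam)
    norms[lam] = float(np.linalg.norm(w))
    print(f"lambda = {lam:6.1f}  ||w|| = {norms[lam]:.3f}")
```

Note that λ = 0 recovers ordinary least squares, and as the penalty grows the coefficients are forced toward (but, unlike the lasso, not exactly to) zero.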
Choosing the value to use for the tuning parameter is critical and can be done using a technique such as cross-validation. The lasso technique works in a very similar way to ridge regression, but can also be used for feature selection as well. This is due to the fact that the penalty term for each predictor is calculated slightly differently, and can result in certain terms becoming zero since their coefficients can become zero. This essentially removes those terms from the model, and is therefore a form of automatic feature selection. Ridge regression or lasso techniques may work better for a given situation. Often the lasso works better for data where the response is best modeled as a function of a small number of the predictors, but this isn't guaranteed. Cross-validation is a great technique for evaluating one technique versus the other. Given a certain number of predictors (features), there is a calculable number of possible models that can be created with only a subset of the total predictors. An example is when you have 10 predictors, but want to find all possible models using only 2 of the 10 predictors. Doing this, and then selecting one of the models based on the smallest test error, is known as subset selection, or sometimes as best subset selection. Note that a very useful plot for subset selection is when plotting the residual sum of squares (discussed later) for each model against the number of predictors. When the number of predictors gets large enough, best subset selection becomes unable to deal with the huge number of possible model combinations for a given subset of predictors. In this case, another method known as stepwise selection can be used. There are two primary versions, forward and backward stepwise selection. In forward stepwise selection, predictors are added to the model one at a time starting at zero predictors, until all of the predictors are included. 
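Forward stepwise selection as just described can be sketched as a greedy loop (a NumPy sketch; the synthetic data, in which only predictors 0, 3, and 5 carry real signal, are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 observations, 8 predictors, only 3 with real signal
X = rng.normal(size=(100, 8))
true_w = np.zeros(8)
true_w[[0, 3, 5]] = [2.0, -1.5, 1.0]
y = X @ true_w + rng.normal(0, 0.5, 100)

def rss(features):
    """Residual sum of squares of a least-squares fit on the given feature subset."""
    Xs = X[:, features]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.sum((Xs @ w - y) ** 2))

# Forward stepwise selection: greedily add the predictor that most reduces RSS
selected, remaining = [], list(range(8))
while remaining:
    best = min(remaining, key=lambda j: rss(selected + [j]))
    selected.append(best)
    remaining.remove(best)
    print(f"step {len(selected)}: added predictor {best}, RSS = {rss(selected):.1f}")
```

The signal-bearing predictors are typically added first, after which the RSS improvements become marginal; in practice an estimated test error, not the training RSS, should decide where to stop.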
Backwards stepwise selection is the opposite, and involves starting with a model including all predictors, and then removing a single predictor at each step. The model performance is evaluated at each step in both cases. In both subset selection and stepwise selection, the test error is used to determine the best model. There are many ways to estimate test errors, which will be discussed later in this series. There is a concept that deals with highly dimensional data (i.e., large number of features) known as the curse of dimensionality. The curse of dimensionality refers to the fact that the computational speed and memory required increases exponentially as the number of data dimensions (features) increases. This can manifest itself as a problem where a machine learning algorithm does not scale well to higher dimensional data [11]. One way to deal with this issue is to choose a different algorithm that can scale better with the data. The other is a technique known as dimensionality reduction. Dimensionality reduction is a technique used to reduce the number of features included in the machine learning process. It can help reduce complexity, reduce computational cost, and increase machine learning algorithm computational speed. It can be thought of as a technique that transforms the original predictors to a new, smaller set of predictors, which are then used to fit a model. Principal component analysis (PCA) was discussed previously in the context of feature selection, but is also a widely-used dimensionality reduction technique as well. It helps reduce the number of features (i.e., dimensions) by finding, separating out, and sorting the features that explain the most variance in the data in descending order. Cross-validation is a great way to determine the number of principal components to include in the model.
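PCA as a dimensionality reduction step can be written in a few lines using the SVD. In this sketch (the latent structure and noise level are invented for illustration), 10-dimensional data driven by two latent factors is reduced to two principal components:

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 observations in 10 dimensions, variance dominated by 2 latent factors
latent = rng.normal(size=(200, 2)) * [5.0, 3.0]
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + rng.normal(0, 0.1, size=(200, 10))

# PCA via the SVD of the centered data matrix
X_centered = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # fraction of total variance per component
X_reduced = X_centered @ Vt[:2].T     # project onto the top 2 principal components

print("variance explained by first 2 components:", round(float(explained[:2].sum()), 4))
print("reduced shape:", X_reduced.shape)
```

Nearly all of the variance survives in two dimensions, so the remaining eight can be discarded with little information loss; cross-validation can pick the number of components when the cutoff is less obvious.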
An example of this would be a dataset where each observation is described by ten features, but only three of the features can describe the majority of the data's variance, and therefore are adequate enough for creating a model with, and generating accurate predictions. Note that people sometimes use PCA to prevent overfitting since fewer features implies that the model is less likely to overfit. While PCA may work in this context, it is not a good approach and is therefore not recommended. Regularization should be used to address overfitting concerns instead [8]. Model Evaluation and Performance Assuming you are working with high quality, unbiased, and representative data, then the next most important aspects of predictive analytics and machine learning is measuring model performance, possibly improving it if needed, and understanding potential errors that are often encountered. We will have an introductory discussion here about model performance, improvement, and errors, but will continue with much greater detail on these topics in the next chapter. Model performance is typically used to describe how well a model is able to make predictions on unseen data (e.g., test, but NOT training data), and there are multiple methods and metrics used to assess and gauge model performance. A key measure of model performance is to estimate the model's test error. The test error can be estimated either indirectly or directly. It can estimated and adjusted indirectly by making changes that affect the training error, since the training error is a measure of overfitting (bias and/or variance) to some extent. Recall that the more the model overfits the data (high variance), the less well the model will generalize to unseen data. Given that, the assumption is that reducing variance should improve the test error as well.
The test error can also be estimated directly by testing the model with the held out test data, and usually works best in conjunction with a resampling method such as cross-validation, which we'll discuss later. Estimating a model's test error not only helps determine a model's performance and accuracy, but is also a very powerful way to select a model too. Improving Model Performance and Ensemble Learning There are many ways to improve a model's performance. The quality and quantity of data used has a huge, if not the biggest impact on model performance, but sometimes these two can't easily be changed. Other major influencers on model performance include algorithm tuning, feature engineering, cross-validation, and ensemble methods. Algorithm tuning refers to the process of tweaking certain values that effectively initialize and control how a machine learning algorithm learns and generates predictive models. This tuning can be used to improve performance on a separate validation dataset, with final performance then measured on the test dataset. Since most algorithm tuning parameters are algorithm-specific and sometimes very complex, a detailed discussion is out of scope for this article, but note that the lambda parameter described for regularization is one such tuning parameter. Ensemble learning, as mentioned in an earlier post, deals with combining or averaging (regression) the results from multiple learning models in order to improve predictive performance. In some cases (classification), ensemble methods can be thought of as a voting process where the majority vote wins. Two of the most common ensemble methods are bagging (aka bootstrap aggregating) and boosting. Both are helpful with improving model performance and in reducing variance (overfitting) and bias (underfitting). Bagging is a technique by which the training data is sampled with replacement multiple times. Each time a new training data set is created and a model is fitted to the sample data.
The models are then combined to produce the overall model output, which can be used to measure model performance. Boosting is a technique designed to transform a set of so-called weak learners into a single strong learner. In plain English, think of a weak learner as a model that predicts only slightly better than random guessing, and a strong learner as a model that can predict to a certain degree of accuracy better than random guessing. While complicated, boosting basically works by iteratively creating weak models and adding them to the single strong learner. While this process happens, model accuracy is tested and then weightings are applied so that future learners focus on improving model performance for cases that were previously not well predicted. Another very popular ensemble method is known as random forests. Random forests are essentially the combination of decision trees and bagging. Kaggle is arguably the world's most prestigious data science competition platform, and features competitions that are created and sponsored by most of the notable Silicon Valley tech companies, as well as by other very well-known corporations. Ensemble methods such as random forests and boosting have enjoyed very high success rates in winning these competitions. Model Validation and Resampling Methods Model validation is a very important part of the machine learning process. Validation methods consist of creating models and testing them on a validation dataset. Resulting validation-set error provides an estimate of the test error and is typically assessed using mean squared error (MSE) in the case of a quantitative response, and misclassification rate in the case of a qualitative (discrete) response. Many validation techniques are categorized as resampling methods, which involve refitting models to different samples formed from a set of training data. Probably the most popular and noteworthy technique is called cross-validation.
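Before turning to cross-validation in detail, the bagging procedure described above can be sketched directly: draw bootstrap samples with replacement, fit a high-variance base learner to each, and average the predictions. (The polynomial base learner and data here are invented for illustration; real ensembles typically use decision trees.)

```python
import numpy as np

rng = np.random.default_rng(4)

x = rng.uniform(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 40)
x_grid = np.linspace(0.05, 0.95, 100)
truth = np.sin(2 * np.pi * x_grid)

def fit_predict(x_tr, y_tr, x_new, degree=5):
    """High-variance base learner: a degree-5 polynomial fit."""
    return np.polyval(np.polyfit(x_tr, y_tr, degree), x_new)

# Bagging: bootstrap sample -> fit -> average the models' predictions
preds = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))     # sample WITH replacement
    preds.append(fit_predict(x[idx], y[idx], x_grid))
bagged = np.mean(preds, axis=0)

per_model_mse = [float(np.mean((p - truth) ** 2)) for p in preds]
bagged_mse = float(np.mean((bagged - truth) ** 2))
print(f"average single-model MSE: {np.mean(per_model_mse):.3f}")
print(f"bagged ensemble MSE:      {bagged_mse:.3f}")
```

A useful property: because squared error is convex, Jensen's inequality guarantees the averaged predictor's MSE is never worse than the average MSE of the individual models.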
The key idea of cross-validation is that the model's accuracy on the training set is optimistic, and that a better estimate comes from the model's accuracy on the test set. The idea, then, is to estimate the test-set accuracy while still in the model training stage. The process involves repeatedly splitting the data into different training and test sets, building the model on the training set, evaluating it on the test set, and finally repeating and averaging the estimated errors. In addition to model validation and helping to prevent overfitting, cross-validation can be used for feature selection, model selection, model parameter tuning, and comparing different predictors.

A popular special case of cross-validation is known as k-fold cross-validation. This technique involves selecting a number k, which represents the number of partitions of equal size that the original data is divided into. Once divided, a single partition is designated as the validation dataset (i.e., for testing the model), and the remaining k-1 partitions are used as training data. Note that, typically, the larger the chosen k, the less bias but the more variance, and vice versa.

In the case of cross-validation, random sampling is done without replacement. Another technique, which involves random sampling with replacement, is known as the bootstrap. The bootstrap technique tends to underestimate the error more than cross-validation does.

Another special case is when k=n, i.e., when k equals the number of observations. In this case, the technique is known as leave-one-out cross-validation (LOOCV).

In this chapter, we have discussed many concepts and techniques associated with model evaluation, validation, complexity, and improvement. Chapter four of this series will provide a much deeper dive into concepts and metrics related to model performance evaluation and error analysis.
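Before closing, the k-fold procedure described above can be sketched in plain Python. This is a minimal sketch, not any library's API; the `fit` and `predict` callables are hypothetical placeholders for whatever learner you choose, and the error metric is MSE, as discussed for quantitative responses.

```python
import random
import statistics

def k_fold_cv(xs, ys, fit, predict, k=5, seed=0):
    """Estimate test error: each of k folds serves once as the validation set."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal partitions
    fold_errors = []
    for fold in folds:
        held_out = set(fold)
        # Train on the other k-1 partitions...
        train_x = [xs[i] for i in idx if i not in held_out]
        train_y = [ys[i] for i in idx if i not in held_out]
        model = fit(train_x, train_y)
        # ...and measure MSE on the held-out fold.
        mse = statistics.mean((predict(model, xs[i]) - ys[i]) ** 2 for i in fold)
        fold_errors.append(mse)
    return statistics.mean(fold_errors)  # average over the k folds

# Toy usage: a "model" that always predicts the mean of its training labels.
xs = list(range(10))
ys = [3.0] * 10
cv_error = k_fold_cv(xs, ys,
                     fit=lambda X, Y: statistics.mean(Y),
                     predict=lambda m, x: m, k=5)
# With constant labels the mean predictor is perfect, so cv_error is 0.0.
```

Setting k equal to the number of observations turns this same function into LOOCV, as described above.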
By Alex Castrounis on

- Wikipedia: Machine Learning
- Wikipedia: Supervised Learning
- Wikipedia: Unsupervised Learning
- Wikipedia: List of machine learning concepts
- Wikipedia: Feature Selection
- Wikipedia: Cross-validation
- Practical Machine Learning Online Course - Johns Hopkins University
- Machine Learning Online Course - Stanford University
- Statistical Learning Online Course - Stanford University
- Wikipedia: Regularization
- Wikipedia: Curse of dimensionality
- Wikipedia: Bagging, aka Bootstrap Aggregating
- Wikipedia: Boosting
Daffodils not Blooming?

There are numerous reasons why daffodils might not bloom! Here's a checklist of reasons your daffodils may not be blooming. See if anything fits your situation:

1. Bulbs have not been 'fed' in a couple of years. A broadcast of 5-10-10 granules at planting, when leaves emerge, and again at bloom is a reasonable feeding schedule.

2. Feeding has been with a high-nitrogen fertilizer. This encourages production of leaves, but seems to quell the plant's need for flowers.

3. Bulbs are planted in a shady area. Daffodils need at least a half-day of sun to produce flowers; if planted in partial sun, longer.

4. Bulbs are in competition for food with other plants. Planting under evergreen trees or with other fast-growing plants limits the food they can get. Result: weak plants and no flowers.

5. Bulbs are planted in an area with poor drainage. Daffodils love water but must have good drainage; they do not do well where the water puddles. There, they are weakened by "basal rot" fungus or other evils and die out. Plants infected with basal rot have green color loss on the leaves and malformed leaves, stems, and flowers, or all of these. Basal rot is incurable: dig and discard the bulbs.

6. Plant leaves were cut too soon or tied off the previous year. Daffodils replenish their bulb for about six weeks after they bloom, and the bulbs should be watered for about this long after blooming. The leaves should not be cut off or blocked from sun until they start to lose their green and turn yellow, which signifies the completion of the bulb's rebuilding.

7. Bulbs may be stressed from transplanting. Some varieties seem to skip a year of blooming if dug and replanted in a different environment. Some varieties bought from a grower in one climate may have a difficult period of adjustment to a vastly different climate. They may bloom the first year off the previous year's bulb, but then be unable to adequately build a flower for the following year.

8. Some naturalized varieties growing well in one region do not grow well in regions with a different climate. (The wild jonquils proliferating and blooming in the Southeastern USA do not flower if moved to the north.)

9. The bulbs may be virused. Many plant viruses attack daffodils. Over time, an infected plant loses its vigor, puts up smaller, weakened leaves and stems, stops blooming, and finally dies. The most common viruses are "yellow stripe" and "mosaic". Yellow stripe shows as fine streaks of yellow the length of the leaves; it appears as the leaves emerge, and the plant is weakened by the second year. Mosaic appears only as white blotches on the yellow flowers where the petals lose their color; plant vigor seems unaffected. Both these diseases are contagious to other daffodils and incurable. Dig and throw away the bulbs.

10. Growing conditions the previous spring may have been inhospitable, so the reformation of the bulb was affected. An early heat wave may have shut down bulb rebuilding before it was complete. The bulbs may have been grown in a smallish pot without adequate feeding or protection from heat and cold.

11. Bulbs may be diseased or stressed from shipping the summer before. Retail bulbs typically remain in closed crates for a lengthy period of time during shipping, and these humid conditions are near-perfect for the proliferation of fungus diseases such as "basal rot" (fusarium). Some bulbs are infected at the time you receive them. Never buy or plant a "soft" bulb. Cut any observed rotting spots on a solid bulb back to clean tissue and soak the bulb in a systemic fungicide such as Clearys 3336 before planting. Look at the NCDS bulb sources.

12. Bulbs may have been growing in the same spot for many years and need dividing. Daffodil bulbs normally divide every year or two. This can result in clumps of bulbs that are competing for food and space. Commonly, bulbs in compacted clumps cease blooming. Dig the bulbs when the foliage has yellowed. Separate them into individual bulbs and replant them about 6" apart and about 6" deep. You may replant immediately after lifting, or you may dry the bulbs in the shade, store them in mesh bags, and replant the bulbs in the fall. If you replant immediately, do not water them until the fall.

13. Bulbs may be out to get you! (The case when you give them away in frustration and they bloom wildly for the new recipients.)
This image of layered deposits in Tithonium Chasma was taken by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) at 1537 UTC (11:37 a.m. EDT) on August 31, 2007, near 5.0 degrees south latitude, 270.3 degrees east longitude. CRISM's image was taken in 544 colors covering 0.36-3.92 micrometers, and shows features as small as 20 meters (66 feet) across. The region covered is just over 8.5 kilometers (5.3 miles) wide at its narrowest point.

Tithonium Chasma lies at the western end of the Valles Marineris canyon system. It extends approximately east-west for roughly 810 kilometers (503 miles), varies in width from approximately 10 to 110 kilometers (6 to 68 miles), and cuts into the Martian surface to a maximum depth of roughly 6 kilometers (4 miles). Many of the canyon-forming processes found on Mars are readily illustrated in Tithonium Chasma, and these features offer a window into the geologic history of the planet. Landslides have enlarged the canyon's walls and formed debris deposits that ring the trough's interior. The chasma's floor is composed of layered deposits which may be volcanic or sedimentary in origin. One of CRISM's tasks is to determine the mineralogy of these deposits.

The top panel in the montage above shows the location of the CRISM image on a mosaic taken by the Mars Odyssey spacecraft's Thermal Emission Imaging System (THEMIS). The CRISM data covers an eroded terrace of these interior layered deposits, and is centered along the edge of a knob of eroded wall material just to the southwest. The lower two images are renderings of data draped over topography without vertical exaggeration; they provide a view of the knob's elevation relative to the surrounding terrain. The lower left image is in infrared false color, and shows light-colored material exposed on the flanks of the layered deposits.
The upper right image shows measures of the strengths of different mineral signatures in the red, green, and blue image planes, and reveals diversity in the mineral content of this light-colored material. Some areas have no signature in the data, indicating dust-like spectral properties, while other areas have signatures of monohydrated or polyhydrated sulfate. This signifies a variety of compositions within these layered deposits.

CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad.

NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.
The latest news about biotechnologies, biomechanics, synthetic biology, genomics, biomedical engineering...

Posted: Jul 26, 2013

Polymer ribbons for better healing

(Nanowerk News) A new kind of gel that promotes the proper organization of human cells was developed by Prof. Prasad Shastri of the Institute of Macromolecular Chemistry and the BIOSS Centre for Biological Signalling Studies Excellence Cluster at the University of Freiburg, together with BIOSS graduate students Aurelien Forget and Jon Christensen, in collaboration with Dr. Steffen Lüdeke of the Institute for Pharmaceutical Sciences. These hydrogels, made of agarose, a polymer of sugar molecules derived from sea algae, mimic many aspects of the environment of cells in the human body. They can serve as a scaffold for cells to organize into tissues.

3-D organization and branching of human endothelial cells into vascular trees in carboxylated agarose gels. (Image: Aurelien Forget, Prasad Shastri)

The cells' environment in the body is composed of collagen and polymers of sugars. It provides mechanical signals to the cells that are necessary for their survival and proper organization into a tissue, and hence essential for healing. A gel can mimic this scaffold. However, it has to precisely reproduce the physical properties of the molecular matrix outside the cell. Those properties, like the matrix's stiffness, vary in the body depending on the tissue.

The team of Prof. Shastri modified agarose gels by adding a carboxylic acid residue to the molecular structure of the polymer to optimally fit the cells' environment. Hydrogels form when polymer chains that can dissolve in water are crosslinked. In an agarose gel, the sugar chains organize into a spring-like structure. By adding a carboxylic acid to this backbone, the polymers instead form ribbon-like structures; this allows the stiffness of the gel to be tuned to adapt the scaffold to every part of the human body.
To demonstrate the versatility of the gel, the researchers manipulated endothelial cells, which make up vascular tissue, to organize into blood vessels outside the body. By combining the appropriate biological molecules found in a developing embryo, they identified a single condition that encourages endothelial cells to form large blood-vessel-like structures, several hundred micrometers in height. This discovery has implications for treating damage to heart and muscle tissue.

Prof. Shastri says, "It is really remarkable that the organization of the endothelial cells into these free-standing vascular lumens occurs within our gels without the need for support cells." It has long been thought that the formation of large vessel-like structures requires additional cells called mural support cells, which provide a platform for the endothelial cells to attach and organize.

"We were surprised to find that the endothelial cells underwent a specific transformation called apical-basal polarization," adds Prof. Shastri. It turns out that such polarization is necessary for the development of blood vessels and occurs naturally in a developing embryo. The ability to induce this polarization in cells in three-dimensional cultures in a synthetic polymer environment is a unique feature of the new gel.

Source: Albert-Ludwigs-Universität Freiburg
Sat May 20 Tip

Parameters vs arguments

In everyday usage, “parameter” and “argument” are used interchangeably to refer to the things that you use to define and call methods or functions. Often this interchangeability doesn’t cause ambiguity. It should be noted, though, that conventionally, they refer to different things. A “parameter” is the thing used to define a method or function, while an “argument” is the thing you use to call a method or function.

    def foo(param)
      ...
    end

    foo(arg) # => obj

Ultimately, it doesn’t really matter what you say. People will understand from the context.
UNESCO’s work in Cambodia in the field of culture is considerably wide and highly visible at both the national and international level. In July 2008, the UNESCO World Heritage Committee listed the Preah Vihear Temple as a World Heritage Site, the fourth Khmer heritage site to be so listed, after the Angkor Archaeological Park in December 1992.

The Angkor temples were seriously damaged during years of continuous war from the 1970s to the 1990s. Monuments and archaeological sites suffered from neglect, degradation and pillage. Following the Paris Peace Agreement in 1991 and the restoration of the constitutional monarchy after the general elections in 1993, the Royal Government of Cambodia acknowledged the important role of culture in shaping national identity, strengthening social cohesion and contributing to the economic development of Cambodia. The Royal Government's capacity for the protection, preservation and development of Cambodian cultural heritage has been gradually strengthened over the past decade, as the national authorities have built up their ability to safeguard and promote the country's national heritage.
Einstein's Big Idea

|Resources to accompany the PBS broadcast "Einstein's Big Idea," the theory of special relativity: program transcript; teacher's guide; inquiry and articles; and "Interactives, Audio and More," which includes video of ten top physicists each describing E = mc2 in a few minutes or less, and a quiz titled "The Power of Tiny Things" (e.g., how much energy does a paper clip pack?). See also filmmaker Gary Johnstone speaking about how creativity fuels both art and science, and watch three young physicists contemplate how a 100-year-old equation figures into their careers.|

|Levels:||Middle School (6-8), High School (9-12), College|
|Resource Types:||Audiovisuals, Lesson Plans and Activities, Problems/Puzzles, Articles, Web Interactive/Java|
|Math Topics:||Quantum Theory|

© 1994- The Math Forum at NCTM. All rights reserved.
A new study by the Brennan Center for Justice has found voters across the country are being purged through a process that is shrouded in secrecy, prone to error and vulnerable to manipulation. The Brennan Center is calling on states to develop and publish uniform, non-discriminatory rules for purges; provide public notice of pending purges; make purge lists publicly available; and develop rules for individuals to challenge the purge list. [includes rush transcript]

JUAN GONZALEZ: We turn now to a new study by the Brennan Center for Justice that has found voters across the country are being purged through a process that is shrouded in secrecy, prone to error and vulnerable to manipulation. Thousands of voters have already been erroneously removed from the polls this year in Mississippi, Georgia and Louisiana. The Brennan Center is calling on states to develop and publish uniform, non-discriminatory rules for purges; provide public notices of pending purges; make purge lists publicly available; and develop rules for individuals to challenge the purge list.

AMY GOODMAN: We're joined right now in the firehouse studio by Myrna Pérez. She is the author of the "Voter Purges" report, an attorney for the Democracy Program at the Brennan Center for Justice. I want to welcome you to Democracy Now! Start off by just defining — what is "voter purging"?

MYRNA PEREZ: Voter purging is a way of updating or changing a voter registration list, such that when a person goes to vote on Election Day, he or she finds himself or herself unable to cast a regular ballot or a ballot that will count.

AMY GOODMAN: How much does that happen?

MYRNA PEREZ: Purging happens all across the country probably every day. It is part of the process that states use to maintain and update their voter registration rolls, and it is very important for all of us that states and localities have good and accurate rolls.
The problem is, is this process happens in secret, without accountability, in a haphazard and slipshod manner. And very frequently, voters don't know that they've been erroneously purged until they actually show up and vote.

JUAN GONZALEZ: Well, usually, it normally works that if you haven't voted for a few years, they'll send you — the local board of election will send you a notice, right, to find out if you are still living at that address or you've died or something else has happened. But you're saying that the notices aren't going out to many of the voters?

MYRNA PEREZ: Well, that's actually — because it's a process that is controlled largely at the state level, it varies from state to state. There are, in fact, some states that try to check and update their registration rolls by looking at people who haven't voted for awhile. But the federal laws specifically say that someone cannot be removed solely on the basis of failure to vote. Now, not all states do the checking the way that you indicated. Sometimes they just don't do that process. The problem that happens is when the process that they try to update rolls isn't done in a way with sufficient voter protections, it's not done enough time in advance, and it's not done in a way that people can be held accountable. One of the things that struck me when doing this research is that a local election official in Mississippi purged 10,000 voters a week before the primary, and the reports are that she did it from her home computer.

AMY GOODMAN: So, has computerized voter rolls made it easier to purge?

MYRNA PEREZ: It certainly has. And I'm not of the belief that the computerized voter rolls are necessarily problematic. It's required by the Help America Vote Act. In many ways, it allows for easier election administration. But that puts the stakes much higher, because now you can go in and, with a push of a button, you know, remove a lot of people from the rolls.
JUAN GONZALEZ: Well, interestingly, in Puerto Rico, where I'm originally from, every election, the government actually advertises the names of all of the people that are about to be purged from the rolls, so that there's a public review process actually of people. And the newspapers publish lists of all the people about to be purged from the rolls, so you have an opportunity to challenge it. But that doesn't happen here, as far as I know, in most —

MYRNA PEREZ: We have — you know, we studied twelve states. We tried to do a representative sample of the states. And we found almost no notice of that kind. Some of the states — or at least set forth in state statute, that they have to do this. We found some states that would say you have to do this before a certain amount of time, but they don't tell you when it's going to happen. And, you know, voters have to take the extra step to actually go in and check, if that's, in fact, available. And that's not available in every state.

AMY GOODMAN: Myrna Pérez, talk about voter purging in Louisiana.

MYRNA PEREZ: Louisiana is a state that has befallen a lot of problems — you know, the hurricanes, etc. — and there was a concern that, with the number of people that were having to be temporarily relocated, that there might not be like an accurate representation of the rolls as to who was actually eligible. And what Louisiana did was it looked at, like, other states, and it saw who had been supposedly registered in other states. But there were a lot of things going on, in that you had people that wanted driver's licenses, so they, you know, registered to get a driver's license, but they didn't intend to disavow or never return to Louisiana, or you had people that were temporarily dislocated but planned to go back.
And one of the problems that happened in the way that Louisiana tried to update its registration rolls was that they sent people notices saying, you know, we have reason to believe that you registered in another state. Well, if they were wrong about that, if someone actually didn't register in another state, they didn't have the documentation that they could have provided to challenge that. And so, our concern and the concern of a number of other civil rights organizations was that the process wasn't designed in such a way to actually make sure that the people who were targeted were able to challenge their targeting, or that they simply got it.

JUAN GONZALEZ: You also mention Montana in your report. What happened in Montana?

MYRNA PEREZ: Montana is a state which very recently came into public light, because there were a certain number of persons, approximately 6,000 people, that were being challenged because a local party had attempted to look at who was on the registration rolls and compare that to the National Change of Address database, and if there was a discrepancy, there was thought that these people were going to be challenged. And we have a lot of concerns with a challenge on that basis. For example, a lot of people don't realize that under federal law, if you have moved and you haven't changed your congressional district and you haven't changed whatever your local jurisdiction is — so, in most cases it's counties; in some places, it's towns — you are allowed to vote. And sometimes state law allows you to vote at your new place's polling place. Sometimes it allows you to vote at your old polling place. But you're not deprived of your right to vote simply because you didn't formally update your notice.
AMY GOODMAN: As we conclude, I thought it was interesting, in your report, Republican officials, as you said, challenging 6,000 registered voters, among them a former Montana state rep., Kevin Furey, a first lieutenant in the Army Reserve, on the challenge list because he's currently in New Jersey planning to deploy to Iraq. His quote: "It's ironic, at the same time I'm about to return to Iraq to help build a democracy, that my own right to vote is being challenged at home for partisan purposes. These challenges are a blatant and offensive attempt to suppress the rights of voters."

MYRNA PEREZ: And I just wanted to say that last night, the newspaper reports are that these challenges were not going to go forward.

AMY GOODMAN: Well, I want to thank you very much, Myrna Pérez, for joining us. We will link to the Brennan Center report on voter purging (http://www.brennancenter.org/content/resource/voter_purges).
EQ sketch of Martha's quilt

Although the Penn's Treaty Quilt attributed to Martha Washington was made at least ten years before the War of 1812, its general medallion design, alternating pieced and unpieced borders around a piece of toile, would remain fashionable until the 1840s. This similar Virginia quilt is dated to the 1840s. Collection: Virginia Quilt Museum. Medallion from an online auction a few years ago also features flying geese with stars in the corners, the same border Martha used.

The five blocks that appear in Martha's pieced borders, basic patchwork of squares and triangles, are among the oldest designs we find in American quilts. These classics remain popular.

Measurements (all are finished measurements, so add seams)

See the measurements for the unpieced strips and center in the last post. Notice the unpieced strips have contrasting squares (cornerstones) in the corners. Below is information for the alternate pieced borders.

You will need 12 Square in a Square blocks finishing to 3". Pieced border = 3". Three pieced Square in a Square blocks in the center of each side. The rest may be strips or scrappy squares finishing to 3". The 4 borders measure 30" without the cornerstones. Border makes quilt 36".

You will need 4 cornerstone stars finishing to 6" and 32 Square in a Square blocks finishing to 6". Martha added extra pieced rectangles to adjust for this border's length. For these, piece 8 flying geese rectangles 3" x 6". (Martha seems to have chopped hers off to fit.) Border makes quilt 64 1/2".

This border is mostly unpieced strips finishing to 2-3/4". Each of four strips finishes to 64 1/2" without the cornerstones. But she has pieced some rectangles into the corners of her strips here. Maybe, piece each 64 1/2" strip with corner rectangles of 5" brown strips finishing out the ends of 54 1/2" strips. Border makes quilt 70".

You will need 28 pinwheel blocks finishing to 10" and 4 four-patch blocks finishing to 10" for cornerstones.
Border makes quilt 90". Click here to see a medallion believed to be from North Carolina, date estimated to be 1820-1840 from the International Quilt Study Center and Museum collection (#2004.048.0007). Here is another medallion from about the same time that may be American or English in the collection of the Winterthur Museum. It features a toile center like Martha's and is thought to be from the same period. Click here for a Virginia medallion made by Rebecca Ellen Davenport Blackwell from the collection of the Museum of the Daughters of the American Revolution.
As grilling season approaches, here's a question for you: What did your steak eat for dinner? If you believe that we are designed to eat meat, yet you want to be a conscious carnivore, it's a question that has heft.

Until the mid-20th century, the vast majority of American beef was pasture raised and fed a diet of native grasses. That's not the case today, when most beef served in restaurants and for sale in supermarkets comes from grain-fed cows warehoused in feedlots, or "Confined Animal Feeding Operations" (CAFOs).

According to "The Prairie Table Cookbook" (Sourcebooks, Inc., $29.95), during World War II, the U.S. government gave surplus corn to ranchers to feed to their cattle. The ranchers quickly discovered that grain-fed animals bulked up quicker, allowing them to go to market at 14 or 15 months instead of four or five years. That accelerated market cycle translated to profits, and the American cattle industry never looked back.

But there's one big problem. Cows can't digest corn. Cows, like other grazing animals, are ruminants, which means they have a rumen, or 45-gallon-sized stomach, that ferments grass, converting it into protein and fats. Ruminants are not physically equipped to digest grain. Switching a cow from grass to grain opens the floodgates to a host of serious maladies, including the presence of E. coli, which only a constant diet of antibiotics can begin to counter.

Author Michael Pollan, who chronicled CAFOs in an article titled "Power Steer," reported that conventionally raised cattle are frequently served cow blood and chicken droppings. Add in the use of growth hormones and the presence of animal byproducts in feed, and the stage is set for scenarios like the recent recall of 143 million pounds of contaminated beef, some of which was used in school lunch programs.

"Giving cows corn is like putting diesel in your gasoline powered car.
It's wrong, wrong, wrong," said Bill Kurtis, who along with food historian Michelle Martin authored "The Prairie Table Cookbook." Kurtis, a television network newsman for 30 years, bought a 10,000-acre ranch in his home state of Kansas when he retired and eventually started raising Aberdeen Angus cattle.

Kurtis didn't think he had any choice but to corn feed his cows, which he raised to nine months before selling them to feedlots. "But I started reading up on the whole business, and that's when I discovered the option of grass feeding," he says. In 2005, Kurtis founded Tallgrass Beef (www.tallgrassbeef.com) and committed to pasture raising his herds on a diet of native grasses. "All of the problems go away. I mean all of them."

Kurtis, one of the pioneers in the burgeoning grass-fed movement, believes that, while grass-fed beef will always be an alternative, it's one that is catching on "like wildfire." "Used to be I'd have to give a PowerPoint presentation because potential customers didn't know what I was talking about," said Kurtis. "Now, they do." He sells to celebrity chefs like Rick Bayless and Charlie Trotter, along with supermarkets and gourmet stores in the Chicago market.

As is usually the case with significant food trends, chefs at better restaurants are driving the return to pasture-raised beef. For Michael Haimowitz, executive chef at Arthur's Landing in Weehawken, featuring grass-fed beef and other types of heritage meats on his menu isn't just about giving diners a choice. He sees it as his responsibility. "As a chef, I need to think about the impact of what I'm buying and how it can affect the market," he said. "If I create an awareness of sustainable seafood, organic produce and pasture raised beef, then I'm fulfilling my responsibility as a chef." Although he pays more for grass-fed beef, there is also less waste and better yield pound for pound.
Haimowitz uses an Australian product that is produced by a consortium of family farms and sold through the Newark-based gourmet food purveyor, D'Artagnan. "There's also the long-term benefit to the environment that comes with reclaiming pasture land and cutting the use of fertilizers and pesticides. It's really the only sane choice." For Mark Faille, owner of Simply Grazin' farm in Skillman, not far from Princeton, raising grass-fed cattle was initially something he did for his family. After losing an infant daughter to what he suspected were complications related to food-borne chemicals, Faille, 44, decided he was going to raise and grow everything that went into his family's mouth. Although he didn't know the first thing about farming -- he'd previously owned a residential high tech heating business -- Faille bought a run-down farm and got busy. "Nobody could explain to me why I had to give the animals grain, so I determined on my own that they didn't need that to survive. I had no preconceived ideas or bad habits to undo," said Faille. He did everything himself, from raising chickens for eggs and meat to vegetables and harvesting his own cattle. Soon friends and neighbors and area restaurants wanted to buy from him. What started as a small family subsistence farm has grown into a business, one that starting in April will supply Whole Foods markets in New Jersey with Simply Grazin' grass fed beef. "We'll make a profit this year, for the first time," said Faille, who now raises cattle on 220 acres in Skillman and another 200 acres in Hunterdon. He and his fiancee also operate a small retail store that carries organic veal, beef, pork and chicken all raised in a chemically free environment, free range, pasture grown, with no growth hormones, antibiotics or steroids ever used. What does it cost to be a conscious carnivore? More than picking up $.99 a pound ground beef at the local supermarket. "Our ground beef is $8.50 a pound," said Faille. 
"But I can tell you that the beef you purchase comes from one animal, and I know that animal's name, date of birth, sex and where he was processed. You can't get that for $.99 a pound." Faille's beef is processed at Bringhurst Meats in Berlin, the only federally inspected, certified organic processor in New Jersey. One way to pare the price is to commit in advance to a share of an animal, an arrangement you can make directly with the farmer. As to the taste, there is a difference between corn fed and grass fed beef. "Yes, corn fed is tender and has more marbling," said Kurtis. "But it also can be bland. Grass fed beef is the way beef was meant to taste, nutty, slightly sweet and juicy." The flavor of grass fed beef has improved, as ranchers like Kurtis have committed to raising heritage breeds such as Black Angus, Red Angus and Hereford, all raised on grass for thousands of years. Because grass fed has less fat, it's best cooked at lower temperatures rare to medium rare, to preserve its natural juices. And because it's leaner, it's also better for you. A six-ounce grass-fed steak has about 100 fewer calories than its grain-fed counterpart. If you are like most Americans, who consume an average of 66.5 pounds of beef a year, switching to grass-fed lets you skip 17,733 calories annually. Grass fed beef is also rich in Omega-3s, the "good fats" commonly found in fish, along with another type of potentially good fat called conjugated linoleic acids, or CLAs. So bottom line, grass fed beef costs more. But it's better for you and your family, better for the earth and better for the animal. Why not give grass fed beef a try, and lose the side of guilt that is sizzling alongside the corn fed steak on the barby. If you're trying to be a savvy meat eater, it's one serious choice to make.
"What is panel roofing?" you ask. Panel roofing is made with large sheets of material. This material can be metal, fiberglass or even some types of plastic (polycarbonate is one of the more common plastic types and is often used in greenhouses). Panel roofs differ from shingle roofs in their appearance and how they are installed. The purpose of all roofs, whether made of roof panels or shingles, is to protect the structure below from the elements. Some roofs are better than others in the protection that they provide. Older styles of asphalt shingles had life-spans of 15 to 20 years. New higher-grade asphalt shingles may have life-spans of up to 40 years. New metal panel roofs, by comparison, have life-spans that start around 40 years and may reach up to 60 years or more.

Material Choices and Colors

The most common panel roof material by far is metal: steel, aluminum, copper or stainless steel. Aluminum and steel roofs need to be painted or have a coating applied (galvanized) to protect them from corrosion. Painted roof panels can be had in a rainbow of colors to complement the structure they are protecting, and they can add some great architectural style. Copper and stainless steel do not need anti-corrosion coatings due to their natural resistance, but these materials are very expensive.

Superior Fire Protection

Where I live in the Rocky Mountain West, wildfires are an ever-present concern. Steel roof panels by their very nature provide some of the best fire protection available. Aluminum roof panels will often need some special under-layment to obtain a "Class A" fire rating, but even without this added treatment, aluminum is much safer against wildfires than conventional asphalt roofing.

Lightweight and Green

Metal roofs also have a few other great things going for them. Compared to an asphalt roof, metal roof panels are typically about 50% lighter. They also may have a recycled component.
While it varies by manufacturer, the recycled component can be as high as 30% or more. When the time comes to retire a metal roof, the metal can be recycled again and again. The rates of recycling for asphalt roofs are sadly far below those for metal.

More Architectural Variety

Another great advantage of panel roofing is the multitude of styles that are available. Panel roofing can be found in many profiles, from a basic standing seam to a whole host of other profiles. Corrugated panel roofing can bring a modern industrial style and is common in some urban areas that have undergone renovations. Whether you are thinking about replacing an existing roof or are in the process of designing a new home, putting panel roofing over your head makes sense for many reasons. Metal roof panels may cost more than a basic asphalt shingle roof, but by the time you factor in the life-span and the environmental advantages, things balance out. Some manufacturers are even incorporating solar cells into their roof panels - but that is a whole different story.
When the history books of the future are written, Alan Turing will go down in the company of Newton and Darwin and Einstein. His visions changed how humanity conceives of computation, information and pattern -- and 100 years after his birth, and 58 years after his tragic death, Turing's legacy is alive and growing. In celebration of his achievements, the Royal Society, the world's oldest scientific fellowship -- Newton was once its president -- published two entire journal issues devoted to Turing's ongoing influence. On the following pages, Wired looks at some of the highlights.

Above: Turing at War

Though he hardly fit the image of a soldier, Alan Turing had the heart of one. With war on the horizon, Turing joined the British government's codebreaking office in 1938, and one year later turned the full force of his intellect on Enigma, the seemingly uncrackable German cryptography system. "No one else was doing anything about it and I could have it to myself," he said of his decision. In a tour-de-force of logic, information theory and sheer insight, Turing designed the machines that by summer 1940 allowed Allied forces to decipher German communications. Winston Churchill would later describe it as the single largest contribution to Allied victory. Without it, the war may have had a different ending. Photographed above is a statue of Turing in the wartime codebreaking headquarters of Bletchley Park.

Image: John Callas/Flickr

The Essential Computer

Often called the father of computer science, Turing's 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" articulated ideas that became technological bedrock: that any computable problem could be computed on a machine, with calculations controlled by means of encoded instructions; and that code, rather than machine, was the essence of a computer.
A straightforward example of this is videogame emulation: Ambience aside, it doesn't matter if Asteroids is played on an arcade cabinet or Atari 2600 or the latest monster gaming laptop. It's just a set of instructions. Of course, all those examples involve hardware self-evidently recognized as computers, but the concept extends to any programmable system. With the right interface, Asteroids could conceivably be played on a computer made from DNA, the design of which is now being pursued by some 60 laboratories around the world.

Image: String of DNA (above) seen under an electron microscope, running instructions encoded (representation below) in its constituent molecules. (Winfree et al./Nature)

Citations: "Computing by molecular self-assembly." Natasa Jonoska and Nadrian C. Seeman. Interface Focus, February 8, 2012. "A mechanical Turing machine: blueprint for a biomolecular computer." By Ehud Shapiro. Interface Focus, March 21, 2012.

Patterns of the World

In the decade after World War II, Turing turned his curiosity to nature's patterns, in particular shapes and patterns in animal bodies. His hypothesized explanation dovetailed conceptually with his computational ideas of code as universal: Turing proposed that biological patterns, whatever the animal or system, emerged from a simple type of chemical interaction. He called this a reaction-diffusion system: one chemical makes more of itself, another slows production of the first chemical, and some mechanism diffuses the chemicals across a concentration gradient. Out of this generic behavior, proposed Turing, the most fantastically complex patterns could arise. Turing himself didn't live to test his ideas, but the 1952 paper describing them, "The Chemical Basis of Morphogenesis," would become one of the most cited papers in scientific history and helped lay the conceptual groundwork for modern self-organization theory.
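Turing's mechanism can be made concrete with a small linear stability calculation: a homogeneous activator-inhibitor steady state that is stable without diffusion becomes unstable for a band of spatial wavenumbers once the inhibitor diffuses sufficiently faster than the activator. The sketch below illustrates that diffusion-driven instability; the reaction matrix and diffusion constants are illustrative assumptions, not values from Turing's paper.

```python
import math

def growth_rate(k, J, Du, Dv):
    """Largest real part of the eigenvalues of J - k^2 * diag(Du, Dv),
    i.e. the linear growth rate of a perturbation with wavenumber k."""
    a = J[0][0] - k * k * Du
    b = J[0][1]
    c = J[1][0]
    d = J[1][1] - k * k * Dv
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0  # complex pair: only the real part matters for growth

# Illustrative activator-inhibitor kinetics, linearised about the steady state:
# trace < 0 and det > 0, so the uniform state is stable without diffusion.
J = [[1.0, -1.0],
     [3.0, -2.0]]
Du, Dv = 1.0, 10.0  # the inhibitor diffuses 10x faster than the activator

print(growth_rate(0.0, J, Du, Dv))   # negative: uniform state is stable
print(growth_rate(0.63, J, Du, Dv))  # positive: pattern-forming instability
```

Perturbations at intermediate wavelengths grow while both very long and very short wavelengths are damped, which is exactly the wavelength-selecting behaviour that gives Turing patterns their characteristic scale.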
Below, a reaction-diffusion system involving two chemicals spreading through a tray of gelatin that is slightly deeper at one end. Changes in distance from the chemicals' source and the gel's depth produce marked changes in pattern.

Images: 1) Tambako the Jaguar/Flickr 2) Rudovics et al./Physica Scripta

Citation: "Vignettes from the field of mathematical biology: the application of mathematics to biology and medicine." By J. D. Murray. Interface Focus, 1 February 2012.

Turing Patterns for Computation

The configurations produced by reaction-diffusion systems are called Turing patterns, and in a neatly meta recursion of his thinking, they can be seen as systems of information storage suitable for use in computing. Instead of binary ones and zeros, a chemical computer might run code based on a chemical number system, with each Turing pattern -- like those seen above in a simple experimental setup -- representing a single unit of information.

Image: Szalai et al./Interface Focus

Citation: "Chemical morphogenesis: recent experimental advances in reaction-diffusion system design and control." By István Szalai, Daniel Cuiñas, Nándor Takács, Judit Horváth and Patrick De Kepper. Interface Focus, March 27, 2012.

Turing proposed that reaction-diffusion systems might explain many types of cellular organization. Scientists now search for evidence of Turing's patterns in our bodies. One place they appear to exist is in stem cells that become blood vessels. In the photograph above, blood vessel tissue structures are compared to reaction-diffusion simulation patterns. This line of research might eventually guide tissue engineering and other regenerative medicine techniques.

Image: Chen et al./Interface Focus

Citation: "Patterns of periodic holes created by increased cell motility." Ting-Hsuan Chen, Chunyan Guo, Xin Zhao, Yucheng Yao, Kristina I. Boström, Margaret N. Wong, Yin Tintut, Linda L. Demer, Chih-Ming Ho and Alan Garfinkel. Interface Focus, June 18, 2012.
Many natural patterns do not arise from Turing's reaction-diffusion systems. Some are produced by what's known as morphogen-gradient reactions, in which a pattern-governing substance diffuses outwards from a single point source. Patterns of veins on fruit fly wings, for example, are produced by morphogen-gradient reactions, a finding that originally suggested an absence of Turing patterns in insect wings. But fruit flies are very simple creatures, and more complex wings, such as those of the hemipteran insect Orosanga japonicus, appear to have Turing-patterned vein structures. Turn on a porch light on a summer night and Turing's wings will be drawn to you. In the image above, reaction-diffusion patterns are compared to O. japonicus wing veins. Below are two sets of O. japonicus wings.

Images: Yoshimoto et al./Interface Focus

Citation: "Wing vein patterns of the Hemiptera insect Orosanga japonicus differ among individuals." By Eiichi Yoshimoto and Shigeru Kondo. Interface Focus, June 18, 2012.

Hair and Skin

Decades passed before Turing's ideas could be rigorously explored in biology, but the molecular revolution allowed scientists to move beyond identifying patterns and into the underlying biochemistry. Mammalian hair patterns are Turing patterns, and in the image above the follicle arrangement of a mouse embryo is juxtaposed with a diagram of the genetic pathway guiding it.

Image: Painter et al./Interface Focus

Citation: "Towards an integrated experimental-theoretical approach for assessing the mechanistic basis of hair and feather morphogenesis." By K. J. Painter, G. S. Hunt, K. L. Wells, Johansson and D. J. Headon. Interface Focus, June 18, 2012.

A Turing Test for Free Will

One of Turing's best-known ideas is the Turing test, proposed in his 1950 paper "Computing Machinery and Intelligence" as a method of judging machine intelligence. If a computer could pass as human in conversation, wrote Turing, then for all practical purposes it could think.
What occurred inside its machine mind was far less important than the outcome. Turing was also fascinated by the nature of free will: whether it truly existed or was only an illusion disguising a deterministic universe in which events unfolded in unalterable, preordained fashion. In fact, Turing became fascinated with quantum mechanics in part because its particle indeterminacies and unpredictability suggested a basis in physics for free will. In "A Turing Test for Free Will," mechanical engineer Seth Lloyd of the Massachusetts Institute of Technology combines these currents of Turing's thoughts. The upshot: If you believe you have free will, then you do.

Image: Michael Lehet/Flickr

Citation: "A Turing Test for Free Will." By Seth Lloyd. Philosophical Transactions of the Royal Society A, July 28, 2012.

A Lesson in Tolerance

Turing's legacy lives on in the digital world, the biological world, and perhaps at scales we're just beginning to comprehend: As seen above, the Whirlpool galaxy looks uncannily similar to cellular swirls in a slime mold. It's hard not to wonder what Turing would have accomplished had he lived. Not long before his death, he wrote to a colleague, "I'm trying to invent a new Quantum Mechanics but it won't really work. How about coming here next week and making it work for me?" -- a line delivered in jest, but no doubt containing a grain of promise. That promise would never be realized. In mid-century Great Britain, homosexuality was a crime, and in 1952 Turing was tried and convicted in a high-profile trial. His security clearance was revoked. Given the choice between imprisonment and hormonal castration, he chose the chemicals. Publicly shamed and persecuted, this war hero and scientific titan took his own life, eating a poison-laced apple just two weeks before his 42nd birthday. On Turing's birthday, however, let the final note be one of celebration.
Below is a picture of Alan Turing drawn by his mother in 1923, a fitting tribute to all odd children who stare into flowers and see the universe.

Images: 1) European Space Agency 2) National Institutes of Health 3) Interface Focus/Sherborne School
On-line version ISSN 1996-7489. S. Afr. J. Sci. vol. 105 no. 3-4, Pretoria, Mar./Apr. 2009

P.L. Skelton*; D.P. Smits
Department of Mathematical Sciences, P.O. Box 392, UNISA 0003, South Africa

W Ursae Majoris (W UMa)-type variable stars are over-contact eclipsing binary stars. To understand how these systems form and evolve requires observations spanning many years, followed by detailed models of as many of them as possible. The All Sky Automated Survey (ASAS) has an extensive database of these stars. Using the ASAS V band photometric data, models of W UMa-type stars are being created to determine the parameters of these stars. This paper discusses the classification of eclipsing binary stars and the methods used to model them, as well as the results of the modelling of ASAS 120036−3915.6, an over-contact eclipsing binary star that appears to be changing its period.

Key words: binaries: eclipsing, binaries: close, stars: individual, ASAS 120036−3915.6

Most of the stars in the sky are binary or multiple star systems. In only a small percentage of binary systems are the two stars far enough apart to be able to resolve the individual stars. The binary nature of close systems can be inferred from the periodic Doppler shifts seen in the spectral lines of the stars as they orbit their common centre of mass, or, if the orbital plane is inclined at an angle to the observer so that one star passes in front of the other, by variations in the observed brightness of the stars. These stellar systems are referred to as eclipsing binaries. The vast majority of eclipsing binaries have been found by searching for stars whose brightness varies periodically. From Newton's Law of Universal Gravitation it can be shown that the square of the period P of a binary system is related to the separation a of the stars and the sum of their masses m1 and m2 through Kepler's third law, i.e. P² = 4π²a³/[G(m1 + m2)]. Measuring the period is relatively straightforward, and hence binary stars provide a convenient method of determining stellar masses.
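The mass determination rests on Kepler's third law in its Newtonian form, P² = 4π²a³/[G(m1 + m2)], with a the orbital separation. A quick numerical check, using the Earth-Sun system as an illustrative input (values are standard constants, not data from this paper):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def total_mass(period_s, separation_m):
    """Total mass m1 + m2 from Kepler's third law,
    P^2 = 4*pi^2*a^3 / (G*(m1 + m2))."""
    return 4.0 * math.pi ** 2 * separation_m ** 3 / (G * period_s ** 2)

# Sanity check with the Earth-Sun system: one year, one AU
year = 365.25 * 24 * 3600
print(total_mass(year, AU) / M_SUN)  # ~1 solar mass
```

For a spectroscopic binary the separation is itself inferred from the radial velocities and the period, so in practice period and velocity amplitudes together yield the masses.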
Because the mass of a star is the main characteristic that determines its properties, including how it changes with time, binary stars have contributed much to our understanding of stellar evolution. However, the origin, structure and evolution of close interacting binaries still hold many mysteries.1−5 Close binaries could evolve through angular momentum losses due to mass-loss in magnetised winds,6−10 or via thermal relaxation oscillations due to mass exchange between components,11−14 or a combination of both. It could be that during the course of their evolution, interacting binaries become semi-detached (see 'Binary star systems' for descriptions of the different types of systems). More observational data are needed to establish whether either one of the above mechanisms controls the evolution of close binaries, or whether some other as yet unidentified process is driving changes in the systems. The period of an eclipsing binary changes with time due either to a redistribution of the matter between the stars, or when angular momentum is lost or gained by the system.15 For conservative mass exchange, the period decreases if mass is transferred from the more massive to the less massive star, and increases if the opposite occurs. By tidal dissipation orbital angular momentum may be transferred to or drawn from the spin motion of a star, causing the period to either increase or decrease until a spin equilibrium is reached in which the tidal torque vanishes. A magnetised stellar wind can reduce the angular momentum of the system by magnetic braking, reducing the period. Period changes observed in binary systems do not have to be intrinsic phenomena. Light time effects (propagation delays) are produced when a binary system has another component that is in orbit about the common centre of mass of this system. Luminosity changes on parts of a binary's photosphere can also lead to shifts in the time of minimum or maximum which would then appear as a period change.
Clearly, analysing systems undergoing period changes will help us to gain a better understanding of the important physical processes occurring in close binary systems. The All Sky Automated Survey16,17 (ASAS) is a project that has discovered thousands of variable stars since 1996, and continues to monitor them on a regular basis. Pilecki et al.18 searched the ASAS data for eclipsing binaries with observed high period change rates. From a sample of 1 711 systems that fulfilled all their criteria of data quality, they present 31 interacting binaries whose periods either increased (10) or decreased (21) in a five-year interval of observations. In this article we report on another candidate, ASAS 120036−3915.6, that appears to be changing its period. Methods used to classify eclipsing binary systems are discussed under 'Binary star systems', followed by a description of W UMa-type stars. Details of the ASAS project are presented under 'The All Sky Automated Survey', followed by the methods used to model W UMa-type stars from the dataset, together with results of modelling ASAS 120036−3915.6.

Binary star systems

Initially eclipsing binary star systems were classified according to their light curves.19 Three standard types were identified that are designated EA, EB and EW after their prototypes Algol, β Lyrae and W Ursae Majoris (W UMa), respectively. Representative light curves for EA, EB and EW systems are shown in Fig. 1. The light curves of EA systems show a significant difference in the depths of their two minima. There are obvious start and end times for the eclipses, and outside the eclipses the light variations (magnitude changes) are almost negligible because the stars are well separated and do not interact significantly. Observationally it has been found that the orbital period for these stars is greater than one day.
EB light curves display primary and secondary minima that have noticeably different depths and the light curve shows continuous variation outside of the eclipses. The orbital period is in general greater than one day. In EW systems the stars are very close so there is a continuous variation outside of the eclipses. Because of this the stars will experience gravitational distortion and heating effects. The difference between the primary and secondary minima is no more than 0.1 to 0.2 magnitudes because the stars are either of similar temperature or they have a common envelope. The difference between maximum and minimum is typically of the order of 0.75 magnitudes. From classical mechanics it is well known that two gravitationally-bound masses m1 and m2 rotate about their common centre-of-mass C at velocities v1 and v2. In terms of the mass ratio q = m2/m1 with m2 < m1, it is straightforward to show that

v1/v2 = m2/m1 = q.

If the orbital plane is tilted by an angle i with respect to the plane of the sky, the maximum velocity measured will be vmax = v sin i, where v is the velocity. A convenient way of describing and classifying binary star systems is in terms of the Roche model. Consider a system of two point masses m1 and m2 with m1 > m2 in circular orbit about their common centre-of-mass. Choose a reference frame with its origin at m1 that is co-rotating with the system at an angular velocity ω. Let m2 be at the point (a, 0, 0), with the z-axis perpendicular to the orbital plane, as illustrated in Fig. 2. The gravitational potential Φ experienced by a third body with infinitesimal mass at any point P(x, y, z) will be the sum of the potential of the two point masses and the centrifugal potential due to the rotation of the system. That is

Φ(x, y, z) = −Gm1/r1 − Gm2/r2 − ½ω²[(x − μa)² + y²],

where r1 = (x² + y² + z²)^1/2 and r2 = [(x − a)² + y² + z²]^1/2 are the distances from m1 and m2, μ = m2/(m1 + m2) locates the centre-of-mass on the x-axis, and G is the gravitational constant. The shape of the potential in 3D space is defined by the separation between the two stars, a, and the mass ratio q = m2/m1.
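Working in dimensionless units with G = a = m1 = 1 (a choice made here for illustration), the effective gravity along the line joining the stars (y = z = 0, 0 < x < 1) vanishes at exactly one point, the inner Lagrangian point L1, which can be located numerically by bisection:

```python
def dphi_dx(x, q):
    """Derivative of the dimensionless Roche potential along the line of
    centres (0 < x < 1): attraction of m1, attraction of m2, and the
    centrifugal term about the centre of mass at x = q/(1+q)."""
    mu = q / (1.0 + q)
    return 1.0 / x ** 2 - q / (1.0 - x) ** 2 - (1.0 + q) * (x - mu)

def find_L1(q, lo=1e-3, hi=1.0 - 1e-3, steps=80):
    """Bisection for the single root of dphi/dx between the stars.
    The derivative is +inf near m1 and -inf near m2, so a sign change
    is guaranteed on (lo, hi)."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if dphi_dx(mid, q) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(find_L1(0.3))  # ~0.62: L1 lies closer to the less massive star
```

For equal masses (q = 1) the symmetry of the problem puts L1 exactly midway between the stars, which makes a convenient sanity check.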
For a binary system with a mass ratio q = 0.3 the equipotential surfaces in the rotating frame of reference are shown in Fig. 3. The equipotential surfaces of the two masses meet at several points. The inner Lagrangian point L1 is the point between the two masses where the potential of the masses exactly cancel, forming a figure-of-eight shape in Fig. 3. The 3D lobes that form this equipotential are called Roche lobes. The application of this model to binary stars is apparent when it is recognised that the surface of a star forms an equipotential surface. The normal of an equipotential surface gives the direction of the local effective gravity. For a single non-rotating star the equipotential surfaces are spherically symmetrically centred on the centre of mass. The shape of each star in a binary system with a circular orbit is defined by an equipotential surface as given by the Roche model. The Roche lobes define the upper limit for the volume of each star for which all its matter is under its own gravitational control. Because the gravitational and centrifugal influence of both stars cancel each other at L1, matter can flow from one star to the other through this point. If matter lies between the Roche lobes and the surface that goes through L2, this material forms a common envelope around the stars. At L2 matter can escape from the system. Using the idea of Roche lobes and the critical surfaces, eclipsing binaries can be separated into morphological classes. If the stars are far apart, the mass of each star is contained within equipotential surfaces that are essentially spheres around the centre of each star. As the ratio of their separation to their radii decreases, the shape of the star becomes distorted by the gravitational influence of its neighbour. Provided the photospheres of both stars lie within their Roche lobes, the binary is referred to as a detached system. 
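In practice the test "does the photosphere lie within the Roche lobe?" is applied using a volume-equivalent lobe radius. A widely used closed-form approximation is Eggleton's (1983) formula, accurate to about 1% for all mass ratios; the formula is standard in the binary-star literature, though not quoted in this paper:

```python
import math

def roche_lobe_radius(q):
    """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
    radius of the star of mass m2, in units of the orbital separation a,
    where q = m2/m1."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# A star of radius R is detached if R < roche_lobe_radius(q) * a
print(roche_lobe_radius(1.0))  # ~0.38 for equal masses
```

Comparing each star's radius to its lobe radius then reproduces the detached / semi-detached / contact classification described above.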
EA light curves are typical for detached systems although the prototype of the class, Algol, is a semi-detached system. In a semi-detached system, one component's photosphere lies within its Roche lobe while the other star's photosphere coincides with its Roche lobe. In these systems, mass transfer takes place between the two components through L1. The shape of the Roche lobe-filling star in semi-detached systems is non-spherical; therefore the surface area seen by an observer changes throughout the orbital cycle. The observed flux is proportional to the surface area, and hence the light curve varies continuously between eclipses, producing EB light curves. EB light curves can also be produced by detached, semi-detached and, in some cases, marginally over-contact systems. In contact binaries, the photospheres of both stars equal their Roche lobes, while in over-contact binaries the photospheres of both stars exceed their Roche lobes. Often these stars are referred to as common envelope binaries because they effectively share a common stellar atmosphere. This common envelope will tend to equalise the surface temperatures of the stars. EW stars, or W UMa-type variable stars, are over-contact eclipsing binary systems that have orbital periods between 0.2 day and 1 day. Each component is a main sequence star (i.e. it burns hydrogen in its core) with spectral type ranging from A to K. Both the spectral type and the colour of an EW star do not change during the orbital cycle. This implies that the common envelope is optically thick and has a nearly uniform temperature. Temperature differences of only a few hundred Kelvin are found between the two components. The mass ratio q lies between 0.08 and 0.8. Spectral features for these stars include rotationally broadened and blended absorption lines.15 There are also emission lines in the ultraviolet spectrum which is an indication that these stars are chromospherically active. 
Circular orbit binaries, such as EW stars, exhibit synchronous rotation which means that the spin period of each component equals the orbital period. Compared to the sun, which has a spin period of ~29 days, EW systems with periods of less than a day are very fast rotators. Magnetic fields in stars are believed to be produced by differential rotation of the photosphere and hence the faster a star rotates, the stronger the field it can generate. Because EW stars are rotating rapidly, they could be expected to be more magnetically active than stars with longer rotational periods. Some EW stars are suspected of being magnetically active because the primary and secondary maxima in the light curves have different magnitudes. Known as the O'Connell effect,15 this phenomenon is generally attributed to the presence of spots on the surfaces of the stars which is an indicator of strong magnetic fields. From observations made in the X-ray, visual, ultraviolet and radio regimes,20 over-contact binary stars were found to have magnetic activity levels lower than those measured for single, rapidly-rotating stars. This suggests that the common envelope suppresses dynamo action to some extent. EW stars are also known to exhibit complex period patterns where intervals of constant period are interrupted with intervals where the period increases and/or decreases. A statistical study by van't Veer21 found that positive and negative period jumps are randomly distributed. In some cases, the period of the stars can be found to be only increasing or decreasing and showing no intervals where the period is constant. Binnendijk22 divided EW stars into two subclasses which he called A-type and W-type. The classification depends on whether the larger or smaller component has the higher temperature. In the A-type systems the larger component has the higher temperature whereas in the W-type systems the smaller component has the higher temperature. 
Observationally it has been found23 that the A-type systems tend to have low mass ratios (q < 0.3) and spectral type from A to F. W-type systems usually have mass ratios q > 0.3 and spectral types of G or K. In most cases, the orbital periods of W-type systems are smaller than those of the A-types.24 The O'Connell effect is noted more in the light curves of the W-type systems, suggesting that they are more magnetically active than the A-type systems. Currently, there is no clear indication whether there is an evolutionary link between the two subclasses, or whether they form and evolve along separate paths.

The All Sky Automated Survey

The All Sky Automated Survey16 (ASAS) is a project that was set up in 1996 to detect and monitor the variability of stars between 8th and 12th magnitude in the V and I bands south of declination +28°. Observations are carried out at the Las Campanas Observatory in Chile using telescopes with an aperture of 7 cm and a focal length of 20 cm; one telescope is equipped with a standard V-band (5 500 Å) filter and the other with an I-band (9 000 Å) filter. Images of an 8° × 8° field of view are captured on 2K × 2K CCD cameras. About 60% of the sky is visible from the observatory. All stars are observed once per one to three nights, weather permitting. When observations are performed, several flat-field and dark exposures are taken followed by more than a hundred three-minute exposures. This creates a raw data stream of 1.52 GB per instrument per night. Simultaneous photometry is performed through different apertures ranging from 2–6 pixels in diameter. All the data are processed separately so that for faint objects the data from the smallest aperture are used and for bright objects data from the largest aperture are used. Each measurement is then graded according to the quality of the data. Every star observed is given an ASAS identification that is coded from the star's right ascension and declination (coordinates) e.g.
ASAS 120036-3915.6 has a right ascension of 12h00m36s and a declination of -39°15'36" (the minus sign indicates that the star lies in the southern celestial hemisphere). Cross-references have been made to stars listed in other catalogues. The project has already discovered over 50 000 variable stars. Of these, more than 5 000 have been classified as eclipsing contact binaries. Many of these stars had not previously been classified as variable stars. The ASAS data are publicly available via the internet. The data for each star have been flux-calibrated into standard Johnson V-band magnitudes, as well as being separated into different categories of variable stars. The eclipsing binaries are subdivided using a Fourier analysis method developed by Rucinski.25 Expressing the time-dependent V magnitudes V(t) as a Fourier series, it has been found that when the amplitude of the fourth harmonic, a4, is plotted against the amplitude of the second harmonic, a2, for each star, the eclipsing systems divide into three regions. As can be seen in Fig. 4, eclipsing contact (EC) systems (which include contact and over-contact systems), eclipsing semi-detached (ESD) systems and eclipsing detached (ED) systems can be distinguished from their positions in the a2–a4 plane, except in small regions of overlap where there is uncertainty. Stars falling in areas of overlap between the groups need follow-up observations and/or further data analysis to decide their classifications. ASAS also provides the orbital period P of the binary, the epoch of minimum brightness T0, the maximum magnitude Vmax, and the change in V-band magnitude ΔV.

Modelling of eclipsing binaries

To obtain a unique model of a W UMa system requires both photometric and spectroscopic data.
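Rucinski's classification rests on the low-order cosine amplitudes of the folded light curve. A minimal sketch of extracting them is given below; it assumes near-uniform phase coverage and a synthetic light curve with made-up amplitudes, whereas a real pipeline would use a least-squares fit to irregularly sampled data:

```python
import math

def fourier_cosine_amplitudes(phases, mags, kmax=4):
    """Estimate a_k in V(phi) ~ a0 + sum_k a_k cos(2*pi*k*phi) by direct
    projection (adequate for near-uniform phase coverage)."""
    n = len(phases)
    a0 = sum(mags) / n
    amps = []
    for k in range(1, kmax + 1):
        ak = (2.0 / n) * sum(m * math.cos(2.0 * math.pi * k * p)
                             for p, m in zip(phases, mags))
        amps.append(ak)
    return a0, amps

# Synthetic contact-binary-like curve: two similar minima per cycle, so
# the even harmonics dominate (the amplitudes here are arbitrary values)
phases = [i / 200 for i in range(200)]
mags = [10.45 + 0.30 * math.cos(2 * math.pi * 2 * p)
              + 0.05 * math.cos(2 * math.pi * 4 * p) for p in phases]
a0, (a1, a2, a3, a4) = fourier_cosine_amplitudes(phases, mags)
# The recovered pair (a2, a4) locates the star in the a2-a4 plane used
# to separate the EC, ESD and ED systems.
```

For this noiseless test curve the projection recovers the input amplitudes a2 = 0.30 and a4 = 0.05 essentially exactly, because the cosine terms are orthogonal on a uniform phase grid.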
The period of the system can be determined from timing of the minima in the light curve, and many of the system parameters can be determined from the shape of the folded light curve (phase-magnitude plot), but unless the mass ratio q is known, a unique solution cannot be obtained. In choosing systems from the ASAS database for further studies, we concentrated on systems with periods P ~ 0.3 d so that a complete cycle could be observed during a single night of observing. Both photometric and spectroscopic data can then be obtained. Radial velocity amplitudes for EW stars are in the range of 100–300 km s^-1. Radial velocities with accuracies typically better than 10 km s^-1 are needed to measure v sin i for at least one of the components to constrain the range of allowable solutions for a system. One instrument that is capable of measuring radial velocities of W UMa stars is the South African Astronomical Observatory's Grating High Resolution Echelle Spectrograph, known as GIRAFFE, which can be attached to the 1.9-m telescope at Sutherland. The sensitivity of this instrument limits one to stars brighter than 10th magnitude, so this was one constraint that was imposed in choosing suitable candidates from the sample. Because ASAS makes only one measurement on each star about every three days, a complete light curve (plot of magnitude versus time) over a full phase is not seen. However, because the variations from one cycle to the next are very small, by running the data through a period analysis routine, the fundamental period P of the system can be determined. By folding all the measurements on this period, a phase-magnitude plot can be made which can be used to model the systems. Phase values ø are fractional values of the period, with values between 0 and 1, calculated from

ø = frac[(HJD - T0) / P],

where frac denotes the fractional part, HJD is the Heliocentric Julian date of the observation, T0 is an epoch of primary minimum and P is the orbital period of the binary system.
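The phase calculation described above takes only a few lines. In the sketch below, the epoch T0 and the observation times are illustrative values, not taken from the ASAS database:

```python
import math

def phase_fold(hjds, t0, period):
    """Phase of each observation: the fractional part of (HJD - T0) / P."""
    return [((h - t0) / period) - math.floor((h - t0) / period)
            for h in hjds]

# Illustrative values only (not real ASAS data)
P = 0.292672                      # adopted orbital period in days
T0 = 2451900.0                    # assumed epoch of primary minimum
hjds = [2451900.0, 2451900.146336, 2452000.35]
folded = phase_fold(hjds, T0, P)  # the second observation is half a
                                  # period after T0, so phase ~0.5
```

Plotting magnitude against these phase values, rather than against HJD, superimposes all the observed cycles into a single phase-magnitude curve.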
The Julian date is defined to be the number of days that have elapsed since mean noon at Greenwich on 1 January 4713 BC. When measuring the period of variable stars to an accuracy of a second or less, it is important to remove any light propagation delay effects that might arise from the position of the Earth in its orbit about the Sun. The Heliocentric Julian date is the Julian date time system with its frame of reference centred on the Sun. The light curve data (HJD versus magnitude) provided by ASAS were converted into a phase-magnitude diagram by folding the data from the appropriate aperture, in a spreadsheet, on the period P as given in the ASAS database, using their epoch T0 for zero phase. Interesting features to look for in the folded data are flat-bottomed minima, indicating a totally eclipsing system where one star is completely covered by the other, and magnitude differences between the maxima (the O'Connell effect), suggesting the presence of spots and hence enhanced magnetic activity.

Light curve modelling

The light curve synthesis programme we are using to model binary star systems is a commercial package known as Binary Maker 3.0. The programme calculates light and radial velocity curves using system parameters that the user inputs. By computing the residual R formed from the sum of the squared differences between the observed (O) and calculated (C) light curves, R = Σi (Oi - Ci)^2, parameters can be adjusted to give a 'best fit' to the data by minimising the residual. Binary Maker 3.0 also creates a rendering of the system. The starting point for the modelling is the phase-magnitude data. A parameter that plays a vital role in modelling binary stars is the mass ratio q. When radial velocity measurements are available, the mass ratio for the system is simply the inverse of the measured peak radial velocity ratio of the two components (see Eq. 1).
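The residual minimisation can be illustrated with a toy grid search over a trial parameter. The "model" below is an arbitrary stand-in function of phase and mass ratio, not Binary Maker or any physical light-curve synthesis code; it only demonstrates the fitting loop:

```python
import math

def residual(observed, calculated):
    """R = sum_i (O_i - C_i)^2 between observed and calculated curves."""
    return sum((o - c) ** 2 for o, c in zip(observed, calculated))

def grid_search(observed, phases, model, q_grid):
    """Evaluate the model for each trial mass ratio q and keep the value
    that minimises the residual."""
    best_q, best_r = None, float("inf")
    for q in q_grid:
        r = residual(observed, [model(p, q) for p in phases])
        if r < best_r:
            best_q, best_r = q, r
    return best_q, best_r

# Hypothetical stand-in for a light-curve synthesis code: the dependence
# on q below is arbitrary, not a physical model.
def toy_model(phase, q):
    return 10.45 + 0.4 * q * (1.0 - math.cos(4.0 * math.pi * phase))

phases = [i / 100 for i in range(100)]
observed = [toy_model(p, 0.255) for p in phases]   # synthetic "data"
q_best, r_best = grid_search(observed, phases, toy_model,
                             [0.05 * k for k in range(1, 20)] + [0.255])
```

Because the synthetic data were generated with q = 0.255 and that value is in the grid, the search recovers it with zero residual; with real data the residual never vanishes and the minimum is shallower.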
If only photometric data are available, the mass ratio of the system is not known and a unique solution cannot be determined. The reason for this non-uniqueness is that similar values of a minimum residual can be obtained for a range of different parameters, which include q. It is possible to restrict the range of values for the mass ratio by using an iterative procedure in which certain parameters are adjusted for each given value of q.26 By plotting the residual values against the mass ratio values it is easy to identify which mass ratios give the lowest residuals. For systems with a total eclipse, additional constraints are placed on the geometry of the system, which restricts the range of possible values for q significantly.

Another important parameter to consider is the fillout factor for the system. For an over-contact system, the fillout parameter f is defined as

f = (Ωinner - Ω) / (Ωinner - Ωouter),

where Ω is the surface potential of the binary, Ωinner is the value of the inner critical Roche equipotential and Ωouter is the value of the outer critical Roche equipotential. If the surface potential is in contact with the inner critical surface then f = 0, whereas if the surface potential is in contact with the outer critical surface then f = 1. The value of the fillout parameter for the system is a measure of how much of the volume between the inner critical surface and the outer critical surface is filled by the binary's photosphere. For an over-contact system the value of f lies between 0 and 1. As a starting value for the fillout parameter, we used the average value f = 0.15 for over-contact systems.

The temperature parameters required by Binary Maker 3.0 refer to the mean effective temperature Teff of the stars. One method of estimating the temperature of a star is to measure the difference in magnitude of its luminosity through two different filters and then look up tabulated values of temperature versus colour index27 (such as B - V or V - I).
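One way to implement such a colour-index lookup is linear interpolation in a small calibration table. The (B - V, Teff) pairs below are rough illustrative numbers, not the published calibration, which should be taken from tables such as those in Allen's Astrophysical Quantities:

```python
def teff_from_colour(bv, table):
    """Linearly interpolate an effective temperature from a table of
    (B - V, Teff) pairs sorted by increasing B - V."""
    if bv <= table[0][0]:
        return table[0][1]
    if bv >= table[-1][0]:
        return table[-1][1]
    for (c0, t0), (c1, t1) in zip(table, table[1:]):
        if c0 <= bv <= c1:
            return t0 + (bv - c0) / (c1 - c0) * (t1 - t0)

# Rough main-sequence calibration points -- illustrative values only
CALIBRATION = [(0.0, 9800), (0.3, 7300), (0.6, 5900),
               (0.8, 5300), (1.2, 4500)]
teff = teff_from_colour(0.8, CALIBRATION)   # roughly 5 300 K, close to
                                            # the 5 200 K quoted in the text
```

With a finer published table the same interpolation reproduces the B - V = 0.8 to Teff ≈ 5 200 K conversion used later in the analysis.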
ASAS provides only V magnitudes at this stage, so values for B were obtained from the SIMBAD database.28 Unfortunately, the values listed were not obtained at a specific phase and hence could contain errors of a few tenths of a magnitude with respect to the V value. If Teff > 7 200 K, energy from the core is transported outwards predominantly by radiation and the stars are called radiative; if Teff < 7 200 K, the dominant energy transport mechanism is convection and the stars are called convective.

The next parameters to be entered are the gravity darkening, limb darkening and reflection coefficients. Gravity darkening arises because the local surface gravity on a non-spherical star varies across the surface of the star. The flux that emerges from the star is proportional to the local gravity and hence there is a pole-to-equator variation in the brightness of the star. For convective stars the gravity brightening coefficient is29 α = 0.32, while for radiative stars α = 1 is used. A star's surface brightness also decreases as one looks from the centre of the star to the limb. When looking at the centre of a star, an observer's line of sight passes through the deeper, hotter layers of the star; at the limb, the line of sight passes through the upper, cooler layers. This results in the surface brightness decreasing from the centre to the limb of the disc. The limb-darkening law used by Binary Maker 3.0 is

I(θ) = I(0)[1 - x(1 - cos θ)],

where I(0) is the intensity at the centre of the disc, x is the limb-darkening coefficient and θ has a value between 0° and 90°, with θ = 0° corresponding to the centre of the disc and θ = 90° to the limb. These values are wavelength and temperature dependent. We used the tabulated values of van Hamme.30 The reflection coefficient is a measure of how much incident radiation from one star is re-radiated by its companion.
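A standard linear limb-darkening law, I(θ) = I(0)[1 - x(1 - cos θ)], is straightforward to evaluate. The coefficient x = 0.6 used here is purely illustrative; real values are wavelength- and temperature-dependent and are taken from tables such as van Hamme's:

```python
import math

def limb_darkened_intensity(i0, x, theta_deg):
    """I(theta) = I(0) * (1 - x * (1 - cos(theta))); theta = 0 deg is the
    centre of the disc, theta = 90 deg is the limb."""
    theta = math.radians(theta_deg)
    return i0 * (1.0 - x * (1.0 - math.cos(theta)))

# x = 0.6 is an illustrative coefficient, not a tabulated value
print(limb_darkened_intensity(1.0, 0.6, 0.0))    # centre: full intensity
print(limb_darkened_intensity(1.0, 0.6, 90.0))   # limb: reduced to 1 - x
```

The intensity falls smoothly from I(0) at the disc centre to I(0)(1 - x) at the limb, which is why limb darkening reduces the flux contribution of the outer parts of each stellar disc during an eclipse.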
For radiative stars a reflection coefficient of 1.00 was used, while for convective stars a value of 0.50 was applied.31 The inclination angle i is a measure of the angle between the orbital plane of the binary and the observer's line of sight. An inclination of 0° means that the system is viewed looking down on the poles of the stars, and i = 90° means it is viewed edge-on. If the light curve exhibits a minimum with a flat bottom then one component is being totally eclipsed by the other. If the two components are of roughly the same size, the inclination angle must be close to 90° for a total eclipse to occur. If one star is much larger than its companion, total eclipses can occur for inclination angles down to about 75°. Once all the parameters are entered, a calculated light curve can be rendered. Parameters can then be adjusted until the lowest value of the residual is obtained.

Model using full dataset

The eclipsing contact system catalogued as 120036-3915.6 in the ASAS database is listed as having Vmax = 10.45 and a period P = 0.292670 d. Folding the data on this period resulted in a smeared-out curve. Experimenting with different periods, it was found that using P = 0.292672 d produced a plot with the least scatter. The phase-magnitude diagram is shown in Fig. 5. This period was adopted for further analysis and produced a phase-magnitude curve with a flat-bottomed secondary eclipse. Because the primary minimum occurs when the hotter, larger star is eclipsed, this indicates an A-type system, which is expected to have q < 0.3. Values of B and V obtained from SIMBAD give B - V = 0.8, which corresponds to a surface temperature Teff = 5 200 K, indicating a late G- or early K-type star, a later spectral type than is expected for an A-type system. The orbital period of P ~ 0.3 d is also more typical of W-type systems than A-type systems. To determine a mass ratio for the system, an iterative procedure was employed.
The temperature of the primary component was set to 5 200 K while the inclination angle, the fillout and the temperature of the secondary component were varied until the residual was minimised. The process was repeated over a range of mass ratios until the lowest residual was obtained. This occurred for q = 0.255, which was adopted as the mass ratio of the system. The final model for the system has T1 = 5 200 K, T2 = 5 315 K, f1 = f2 = 0.07 and i = 81.2°. Parameters of the component stars are listed in Table 1. The calculated light curve and the residuals for the system are shown in Fig. 6, along with a rendering of the system. From the residuals it is clear that the calculated curve is not an ideal fit to the data. This is probably due to smearing of the data caused by a change in the period of the system. A surprising result of the model is that T2 > T1, which indicates a W-type rather than an A-type system. The reason that the secondary minimum occurs when the hotter star is eclipsed could be that the effects of limb darkening and gravity darkening reduce the flux from the smaller star. Spectroscopic data have been obtained for this star (from GIRAFFE) and are currently being analysed.

Model with period change

The orbital periods determined by ASAS are an average over the whole dataset. Systems whose periods are changing with time cannot be distinguished in the ASAS processing procedure, but the folded data will produce phase-magnitude plots with scatter. 120036-3915.6 was not identified by Pilecki et al.18 as having a variable period, but with only 278 A- or B-quality data points it would have been eliminated from their sample at an early stage of their investigation (they required at least 300 acceptable-quality data points). To investigate whether 120036-3915.6 has undergone a period change, the data were split into two intervals. Using the ASAS period of 0.292670 d to fold the data, phase-magnitude diagrams for both intervals were created.
The curve for the first interval looked reasonable, whereas for the second interval the primary minimum was shifted away from phase 0 and the secondary minimum was smeared out. The period for the second interval was modified to 0.2926715 d, which reduced the scatter in the data. This implies that the period of the system changed during the observing period. These phase-magnitude diagrams were imported into Binary Maker 3.0 and the model solution obtained using all the data was then applied to the separate diagrams. The calculated light curves and the plots of the residuals for the first and second intervals are shown in Fig. 7a and Fig. 7b, respectively.

Using ASAS data for the eclipsing over-contact binary 120036-3915.6 we have modelled the light curve to determine the physical parameters of the system. It appears to be an A-type system with a mass ratio q = 0.255, although the spectral type, the period and the fact that T2 > T1 suggest that the system is a W-type. The mass ratio that we have determined can be verified by radial velocity measurements. The smeared-out phase-magnitude plots created from a single period suggest that the orbital period is changing. Further observations are required to confirm whether this is the case and, if so, the system should continue to be monitored and modelled. If data of sufficient quality are obtained, it might be possible to establish from the models which parameters are changing, and hence what mechanism is operating to change the period. Data from the SuperWASP32 (Wide Angle Search for Planets) database have been obtained for ASAS 120036-3915.6 and are currently being used to refine the model obtained from the ASAS data. SuperWASP provides an effective temperature for the system obtained from the 2MASS33 (Two Micron All Sky Survey) project. The 2MASS temperature is 5 192 K, which is close to the temperature obtained from SIMBAD, suggesting that there is little interstellar reddening towards the star.
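The period check used throughout this analysis (refolding the data on trial periods and keeping the period whose fold shows the least scatter) can be sketched as follows. The sampling cadence, epoch and magnitudes below are synthetic, illustrative numbers, not the real ASAS measurements:

```python
import math

def fold(hjds, t0, period):
    """Phase in [0, 1) for each observation time."""
    return [((h - t0) / period) % 1.0 for h in hjds]

def phase_scatter(hjds, mags, t0, period, nbins=20):
    """Mean within-bin variance of the folded light curve: a fold on the
    wrong period smears the curve and inflates this statistic."""
    phases = fold(hjds, t0, period)
    bins = [[] for _ in range(nbins)]
    for p, m in zip(phases, mags):
        bins[min(int(p * nbins), nbins - 1)].append(m)
    total, count = 0.0, 0
    for b in bins:
        if len(b) > 1:
            mean = sum(b) / len(b)
            total += sum((m - mean) ** 2 for m in b)
            count += len(b)
    return total / count if count else float("inf")

def best_period(hjds, mags, t0, trial_periods):
    """Return the trial period whose fold has the least scatter."""
    return min(trial_periods, key=lambda p: phase_scatter(hjds, mags, t0, p))

# Synthetic EW-like light curve sampled once every 0.31 d
TRUE_P, T0 = 0.292672, 2452000.0
hjds = [T0 + 0.31 * k for k in range(300)]
mags = [10.45 + 0.35 * abs(math.sin(math.pi * (h - T0) / TRUE_P))
        for h in hjds]
found = best_period(hjds, mags, T0, [0.290, 0.292672, 0.295])
```

Applying the same scatter statistic to the two halves of a dataset separately, as was done above for the two intervals, gives a simple quantitative check on whether the period has changed between them.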
Once the spectroscopic data have been analysed, the spectroscopic value for q might allow the model to be refined further.

1. Paczyński B. (1971). Evolutionary processes in close binary systems. Annu. Rev. Astron. Astrophys. 9, 183.
2. Robertson J.A. and Eggleton P.P. (1977). The evolution of W Ursae Majoris systems. Mon. Not. R. Astron. Soc. 179, 359–375.
3. Webbink R.F. (2003). Contact binaries in 3D stellar evolution. In Astronomical Society of the Pacific Conference Series 293, eds S. Turcotte, S.C. Keller and R.M. Cavallo, p. 76. Astronomical Society of the Pacific, San Francisco.
4. Yakut K. and Eggleton P.P. (2005). Evolution of close binary systems. Astrophys. J. 629(2), 1055–1074.
5. Eker Z., Demircan O., Bilir S. and Karatas Y. (2006). Dynamical evolution of active detached binaries on the log Jo–log M diagram and contact binary formation. Mon. Not. R. Astron. Soc. 373(4), 1483–1494.
6. Vilhu O. (1982). Detached to contact scenario for the origin of W UMa stars. Astron. Astrophys. 109(1), 17–22.
7. van't Veer F. and Maceroni C. (1989). The angular momentum loss for late-type stars. Astron. Astrophys. 220(1), 128–134.
8. Stępień K. (1995). Loss of angular momentum of cool close binaries and formation of contact systems. Mon. Not. R. Astron. Soc. 274, 1019–1028.
9. Ibanoglu C., Soydugan F., Soydugan E. and Dervisoglu A. (2006). Angular momentum evolution of Algol binaries. Mon. Not. R. Astron. Soc. 373(1), 435–448.
10. Pribulla T. and Rucinski S.M. (2006). Contact binaries with additional components. I: The extant data. Astron. J. 131(6), 2986–3007.
11. Lucy L.B. (1976). W Ursae Majoris systems with marginal contact. Astrophys. J. 205, 208–216.
12. Flannery B.P. (1976). A cyclic thermal instability in contact binary stars. Astrophys. J. 205, 217–225.
13. Wang J.M. (1999). Contact discontinuities in models of contact binaries undergoing thermal relaxation oscillation. Astron. J. 118(4), 1845–1849.
14. Qian S. (2003). Are overcontact binaries undergoing thermal relaxation oscillation with variable angular momentum loss? Mon. Not. R. Astron. Soc. 342(4), 1260–1270.
15. Hilditch R.W. (2001). An Introduction to Close Binary Stars. Cambridge University Press, Cambridge.
16. Paczyński B., Szczygieł D.M., Pilecki B. and Pojmański G. (2006). Eclipsing binaries in the All Sky Automated Survey catalogue. Mon. Not. R. Astron. Soc. 368(3), 1311–1318.
18. Pilecki B., Fabrycky D. and Poleski R. (2007). All-Sky Automated Survey eclipsing binaries with observed high period change rates. Mon. Not. R. Astron. Soc. 378, 757–767.
19. Kallrath J. and Milone E.F. (1999). Eclipsing Binary Stars: Modelling and Analysis, chap. 1, pp. 1–21. Springer-Verlag, New York.
20. Rucinski S.M. and Seaquist E.R. (1988). VLA observations of the contact binary VW Cep. Astron. J. 95(6), 1837–1840.
21. van't Veer F. (1991). The period variation of W Ursae Majoris binaries and their relation to magnetic activity. Astron. Astrophys. 250(1), 84–88.
22. Binnendijk L. (1970). The orbital elements of W Ursae Majoris systems. Vistas Astr. 12, 217–256.
23. Csizmadia S. and Klagyivik P. (2004). On the properties of contact binary stars. Astron. Astrophys. 426, 1001–1005.
24. Wadhwa S.S. (2005). Photometric analysis of southern contact binary stars, part 1: GZ Pup, AV Pup and II Aps. Astrophys. Space Sci. 300, 289–296.
25. Rucinski S.M. (1973). The W UMa-type systems as contact binaries. I: Two methods of geometrical elements determination. Degree of contact. Acta Astronom. 23(2), 79–120.
26. Wadhwa S.S. and Zealey W.J. (2004). UX Ret and CN Hyi: Hipparcos photometry analysis. Astrophys. Space Sci. 295, 463–472.
27. Cox A.N. (ed.) (2000). Allen's Astrophysical Quantities, 4th edn, p. 388. Springer-Verlag, Heidelberg.
29. Lucy L.B. (1967). Gravity-darkening for stars with convective envelopes. Z. Astrophys. 65, 89–92.
30. van Hamme W. (1993). New limb-darkening coefficients for modelling binary star light curves. Astron. J. 106, 2096–2117.
31. Rucinski S.M. (1969). The photometric proximity effects in close binary systems. II: The bolometric reflection effect for stars having deep convective envelopes. Acta Astronom. 19, 245–255.

Received 29 February 2008. Accepted 14 April 2009.
Television court dramas may draw a lot of young viewers, but they don't educate the public about the juvenile justice system. Author and lawyer John W. Biggers hopes his Kids Law books will help adults and teens understand the juvenile system. Included: Description of Kids Law content.

Years of working with teens, their families, and even sometimes their teachers in the U.S. juvenile justice system -- and realizing how confusing it can be -- prompted attorney John W. Biggers to write Kids Law: A Practical Guide to Juvenile Justice. The book outlines the juvenile justice system and how it handles different issues. A teacher's guide helps educators use the book in their classrooms. Biggers recently talked with Education World about the books and his passion for helping adults and kids better understand the country's juvenile justice system.

EW: What prompted you to write Kids Law: A Practical Guide to Juvenile Justice?

John W. Biggers: After a long career working in a variety of legal areas, I felt that one of the most critical needs today in the legal arena is to assist children, youth, and their families in confronting the fast-changing area of juvenile justice. Here is one field of the law where lasting, positive results can be achieved. Watching kids grow in terms of maturity and responsibility is a truly rewarding aspect of practicing law. Then, I saw that most kids who I assisted in the juvenile court, as well as the adults who live and work with them -- parents, teachers, counselors, juvenile court staff, and many other caring adults -- need a ready resource that informs them on why and how the law really impacts their lives.

EW: What was your goal in writing the book?

Biggers: I was motivated to write a book that is practical and explains how the juvenile justice system can be a friend and not a fearsome enemy.
This means how the law affects juveniles because of their own actions, and how it affects them because of the actions (or inaction) of adults. My primary goal was to provide an easily understood, practical guide that can be read by kids or the adults in their lives, not merely to "get out of trouble with the law," but more importantly to give helpful suggestions on how to make choices in their relationships and responsibilities at home, in school, and in the larger community. In other words, Kids Law is a concept that means kids and adults will know first-hand what the law says about their rights and their responsibilities, all within a proper appreciation of the responsibilities that we owe each other and ourselves -- the three R's of Kids Law. Kids Law does not involve only crime and delinquency situations, but also custody issues related to marriage and divorce, adoption, abuse and neglect of children, employment, education, property issues, and a wide variety of everyday events that happen to kids and their families.

EW: What is your involvement with the juvenile justice system?

Biggers: For 15 years, I have worked with juveniles and their families involved with the criminal justice system, and also on issues including domestic or family law, employment and school issues, and emancipation of youth -- kids on their own. As an outgrowth of that, I developed a curriculum for middle and high schools, so classes could study the practical aspects of the law, usually within their social studies courses. The book Kids Law was an outgrowth of these classes, and I have promoted the use of Kids Law in middle and high schools, now accompanied by a teacher's manual written by an experienced educator who has worked in the field of law-related education.

EW: What are some common misconceptions adults and youths have about the juvenile justice system?
Biggers: Many, if not most, kids and adults feel that the juvenile justice system is an adversary -- a place where people simply want to hand out punishment for the "bad deeds of bad kids." I have found that the majority of professionals in the system -- judges, prosecutors, probation and corrections workers, and many related support staff members -- are extremely dedicated to seeing that kids who do run up against the law have the kind of help that is needed to prevent continued problems with the law, with the ultimate goal of leading the juveniles into responsible adulthood. I have seen and heard parents use the system as a threat: "You misbehaved, so now you are going to get what you deserve." Others feel that the juvenile justice system is a cop-out because they think we need to be tougher on kids who don't follow the rules and quit coddling them. "Do adult crime and you do adult time," the argument goes. Also, the system is often seen as a dumping ground for misfits; a way to get kids out of someone's hair.

EW: Why do you think so few adults and young people understand the justice system, even with the popularity of court dramas on TV?

Biggers: Most people, of all ages, see law as they do medicine -- a part of life that is out of reach of the ordinary person. A subject that can be used in exciting -- and over-romanticized -- dramas that are more amazing than accurate in their portrayal of the system. In other words, the media -- and to a large degree, the legal profession -- have always made the law, lawyers, judges, and courtrooms a part of life that is larger than life; work and subject matter reserved for specialists, usually persons thought to be more brilliant than the rest of society. As a response, how about a prime-time television series that explains the legal issues confronting kids and families?
I feel that the law should be just as easy to understand in everyday life as are the rules of the road for driving or proper courtesies in our daily activities. I remind kids and others I talk with about Kids Law that the most available "place" in the community, which belongs to everyone, is the courtroom. Highly trained lawyers are there, of course, but they are not the only ones who should know and appreciate what is happening. In fact, ordinary folks serving as jurors often decide the most important issues in the system. Lawyers and judges do not make the decision on whether someone gets the death penalty; the jury of our peers, the average citizen, makes that final judgment.

EW: What could be done to better educate the public about the justice system?

Biggers: Of course, I feel that there should be many programs, such as courses, study groups, workshops, and a variety of formal and informal settings, where the law -- primarily the juvenile justice system -- is taught and discussed in very understandable terms. Not just as some academic exercise, but as a give-and-take interaction involving educators, law and other professionals, such as counselors, psychologists, and behavioral health providers, with the goal of bringing the public to a practical and positive understanding of the law. There can be all sorts of settings: in school, youth and family groups in the community, parent-teacher organizations, Boy and Girl Scouts, church groups, and Big Brothers/Big Sisters. Provisions must be made for well-organized training for class and group leaders, and involvement of community organizations in planning and implementing programs.
The legal and associated professions, such as education and behavioral health, should work directly with universities and other socially concerned groups so that relevant and exciting media efforts are undertaken to inform and motivate the public to accept responsibility for dealing with the meaning and impact of the justice system in American society.

This e-interview with John W. Biggers is part of the Education World Wire Side Chat series.

Article by Ellen R. Delisio
Copyright © 2005 Education World
Hard freeze warnings have delivered a powerful lesson on agriculture to students at R.M. Miano Elementary school. “During this past freeze warning, students were able to relate what they saw in the news and how the citrus farmers of California felt when they saw the upcoming weather forecast,” said Sergio de Alba, teacher at the school.

Last week temperatures dipped into the low 20s, triggering the hard freeze warning. The record lows lasted for several nights, and as temperatures dropped into the upper teens in some areas, students were busy in the family farm garden picking citrus. Nearly 400 pounds of fruit, some 17 different varieties, were harvested. “This is a $2 billion dollar industry, and the freeze watch affects all of us in one way or another,” de Alba said. “To avoid damage to our crop, we chose to harvest our fruits early.” Though the sugar content on some of the sweet varieties was not as high as students would have liked, they said the fruit was still very good, and 132 sixth-graders got to take some home.

Built in 2009, the family farm garden is a 6,400-square-foot orchard with 26 varieties of citrus on the east end of R.M. Miano. De Alba said the sixth-graders maintain the citrus trees, ensuring they are watered properly, the orchard weeded and the fruit harvested. “This garden is geared towards lessons that focus on what it takes to run a farm along with the importance of nutrition,” he said. “We also are keeping a record of our harvest to see what the value is compared to local supermarkets and to see how our harvests compare to years past.” De Alba said youngsters are unaware of how much the local and state economy depends on agriculture. “Our agriculture program focuses on teaching our students where our food comes from, how important farmers and ag are to our local community and how ag is vital to the success of our nation,” he said. Said Marijayne Lua, 11, “I learned that in California, we grow over $2 billion worth of citrus.
That’s a lot of money!” De Alba also said students are learning about healthy snacks - fruit versus candy, for example - and what it takes to run a profitable farm. “I learned that when the temperature gets this cold, citrus fruit can freeze and the fruit can spoil. I would be very worried if my farm had this problem,” said Hilbert Carrillo, 11. De Alba showed students harvesting tricks, such as how to bend the fruit one way and snap it back the other way. “After that, the fruit comes right off,” said Jasmine Sanchez, 11. Braulio Palomar said it’s exciting to grow different types of fruit. “I really like being a kid farmer,” he said. Said de Alba, “We have been fortunate to receive the support of many local individuals, businesses, farms and organizations that allow us to continue to fund the ag-based lessons that are so important to the continued success of our program.”

In other weather-related news

While farmers worked around the clock to keep their crops warm, warming centers popped up around the Valley. The Miller & Lux Building opened as a warming shelter. Paul Cardoza, Parks and Recreation operations manager, said anytime temperatures dip below 32 degrees, the city opens the building for those who need a warm place to sleep. The warming center was open for about a week, and Cardoza said it averaged around nine to 10 people per night. He also said around 15 percent to 20 percent more people used the shelter this year compared to last. Although not everyone stayed the entire night, blankets were donated for people to take with them. “A group of volunteers brought food in every night,” he said, “and we appreciate that.”

Reporter Marina Gaytan can be reached at (209) 826-3831 ext. 6562 or [email protected]
Inventor Ray Kurzweil hopes to develop ways for humans to live forever, and while he’s at it, bring back his dead father. Behind him is the support of a tech giant. This month, Kurzweil, a futurist, stepped into the role of Director of Engineering at Google, focusing on machine learning and language processing. “There is a lot of suffering in the world,” Kurzweil once said, according to Bloomberg. “Some of it can be overcome if we have the right solutions.” Since his father’s death in 1970, Kurzweil has stored his keepsakes in hopes the data will one day be fed into a computer capable of creating a virtual version of him, Bloomberg reported. Interestingly, one of his books lays out how humans might “transcend biology.” According to TechCrunch, his controversial theories are rooted in the idea of technological singularity, a time when humans and machines sync up to the point of nearly limitless advancement. That idea, which interests Google co-founders Larry Page and Sergey Brin, could happen as soon as 2030, Kurzweil says. “We are a human machine civilization and we create these tools to make ourselves smarter,” Kurzweil told Scientific American. In his latest book, “How To Create A Mind: The Secret of Human Thought Revealed,” he writes about wanting to engineer a computerized replica of the human brain. If we understand the brain well enough, he says, we would be better equipped to fix its problems, like mental and neurological illnesses. He imagines a search engine capable of accessing a database of your thoughts, stored in the Cloud. It would anticipate what people are seeking before they even know. Much of this may sound nearly impossible, but Kurzweil has been spot-on about technological forecasts in the past. 
“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic,” he said in a statement announcing his position at Google. “Fast forward a decade – Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones.” Digital Trends places Kurzweil among the most celebrated and recognized innovators of the last four decades. In 1976, several of his innovations converged into the first device that could read printed text out loud for the blind. He was 27 years old at the time. Now, the next generation of inventors will learn from him. Google recently allotted more than $250,000 toward his graduate school, Singularity University, according to Bloomberg. After 10 weeks of a curriculum focusing on biotech, robots, and artificial intelligence, students — forgoing a traditional degree — create their own startups. “I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality,” Kurzweil said in the statement. Full Story Via Weird News on HuffingtonPost.com
<urn:uuid:aef851be-984e-4854-8b96-6c70991d1cde>
CC-MAIN-2016-26
http://djchaos.com/2012/12/29/watch-googles-new-director-of-engineering-is-planning-to-change-the-future-of-humanity/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00156-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95232
676
2.546875
3
The menisci - the medial meniscus and lateral meniscus - are crescent-shaped bands of thick, rubbery cartilage attached to the shinbone (tibia). They act as shock absorbers and stabilize the knee. The medial meniscus is on the inner side of the knee joint. The lateral meniscus is on the outside of the knee. Meniscus tears can vary widely in size and severity. A meniscus can be split in half, ripped around its circumference in the shape of a C or left hanging by a thread to the knee joint. A barely noticeable tear may resurface years later, triggered by something as simple as tripping over a sidewalk curb. A meniscus tear can occur when the knee is suddenly twisted while the foot is planted on the ground. A tear can also develop slowly as the meniscus loses resiliency. In this case, a portion may break off, leaving frayed edges. Symptoms of a Torn Meniscus If you have a torn meniscus you may: - Be unable to extend your leg comfortably and may feel better when your knee is bent (flexed) - Develop pain gradually along the meniscus and joint line when you put stress on your knees (usually during a repeated activity). This most often happens when the tear develops over a period of time. - Have swelling, stiffness or tightness in your knee In sports, a meniscus tear usually happens suddenly. Severe pain and swelling may occur up to 24 hours afterward. Walking can become difficult. Additional pain may be felt when flexing or twisting the knee. A loose piece of cartilage can get stuck in the joint, causing the knee to temporarily lock, preventing you from fully extending your leg. Diagnosing a Torn Meniscus Typically, your doctor will ask you how the injury occurred, how your knee has been feeling since the injury and whether you have had other knee injuries. 
You may be asked about your physical and athletic goals to help your doctor decide on the best treatment for you. Your doctor will hold your heel while you lie on your back and, with your leg bent, straighten your leg with his or her other hand on the outside of your knee as he or she rotates your foot inward. There may be some pain. It is important to describe your symptoms accurately. The amount of pain and first appearance of swelling can give important clues about where the injury is and how bad it is. Tell your doctor of any recurrent swelling or of your knee repeatedly giving way. A magnetic resonance imaging (MRI) scan is often used to diagnose meniscal injuries. The meniscus shows up as black on the MRI. Any tears appear as white lines. An MRI is 70 to 90% accurate in identifying whether the meniscus has been torn and how badly. However, meniscus tears do not always appear on MRIs. Meniscus tears, indicated by MRI, are classified in three grades. Grades 1 and 2 are not considered serious. They may not even be apparent with an arthroscopic examination. Grade 3 is a true meniscus tear and an arthroscope is close to 100% accurate in diagnosing this tear. Treating a Torn Meniscus If your MRI indicates a Grade 1 or 2 tear, but your symptoms and physical exam are inconsistent with a tear, surgery may not be needed. Grade 3 meniscus tears usually require surgery, which may include: - Arthroscopic repair. An arthroscope is inserted into the knee to see the tear. One or two other small incisions are made for inserting instruments. Many tears are repaired with dart-like devices that are inserted and placed across the tear to hold it together. The body usually absorbs these over time. Arthroscopic meniscus repairs typically take about 40 minutes. Usually you will be able to leave the hospital the same day. - Arthroscopic partial meniscectomy. The goal of this surgery is to remove a small piece of the torn meniscus in order to get the knee functioning normally. 
- Arthroscopic total meniscectomy. Occasionally, a large tear of the outer meniscus can best be treated by arthroscopic total meniscectomy, a procedure in which the entire meniscus is removed.
<urn:uuid:a658b597-ff71-4842-829e-92db04c816fa>
CC-MAIN-2016-26
http://www.cedars-sinai.edu/Patients/Programs-and-Services/Orthopaedic-Center/Clinical-Programs/Sports-Medicine/Repairing-Torn-Meniscus.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.921794
927
3.34375
3
World Archives Project: England, Newspaper Index Cards (Andrews) About this project Compiled in England during a period from the 1880s to 1965 by a number of Chancery Agents, this is an index of thousands of small cards containing information on individuals, from next of kin advertisements, will notices, unclaimed estates, and missing persons listings taken from all national and several overseas newspapers and many other sources including The London Gazette, notices under the Colonial Probate Act of 1892 and deaths abroad, as well as other official sources including obituaries. Many of the sources are now extinct. Mostly, the original newspaper cutting has been pasted on the card and this is often annotated either with references from recorded wills or with information gathered from Civil Registration and other reliable sources. The exact dates covered by each source are, as yet, uncertain, but most dates of death are usually in the 20th century referring back to individuals born in the 18th and 19th centuries. The latest date of death found has been 1970. This collection doesn't have a consistent amount of data, so we aren't going to capture it all, just the things we find most frequently. Just fill in the fields you can find information for on the image and don't feel bad about skipping them if there is no information on the image. These are generally not difficult to read, but they require some thought as to which fields to fill in and which to leave blank, so this project has an Average difficulty.
<urn:uuid:6bc70231-3212-4a5a-90e8-9ccfce6c98fb>
CC-MAIN-2016-26
http://www.ancestry.com/wiki/index.php?title=World_Archives_Project:_England,_Newspaper_Index_Cards_(Andrews)
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00031-ip-10-164-35-72.ec2.internal.warc.gz
en
0.973989
304
2.71875
3
Measuring brain activity for emotional markers that may indicate risk for developing alcoholism - New research uses an electroencephalogram (EEG) to look for a connection between brain activity thought to reflect trait-like differences in emotionality and alcoholism. - Findings show an asymmetry of activity in the left and right frontal areas of the brain. - The pattern of asymmetry found is similar to that found in individuals with depression. Although prior research has looked at brain activity and alcoholism, much of it has focused on cortical activity as a marker for impulsivity among alcoholics. A new study examines measures of brain activity in the frontal regions of the brain, thought to reflect individual differences in emotionality, an important aspect of personality. The discovery of an imbalance of activity in the right and left frontal areas may indicate a dysregulation in brain systems that govern emotion and motivation. Results are published in the December issue of Alcoholism: Clinical & Experimental Research. "It is important to see how measures of personality and emotionality relate to alcohol dependence," said Elizabeth P. Hayden, assistant professor in clinical psychology at the University of Western Ontario and corresponding author for the study. "However, most studies of this question use self-report measures of personality and emotional experience. In this paper, we looked at measures of brain activity thought to reflect individual differences in emotional behavior to see whether these were different in a group with alcoholism and other problems [when] compared to control participants." "An electroencephalogram or EEG is a test that measures electrical activity in the brain," added Emily Grekin, assistant professor of psychology at Wayne State University in Detroit. "Sometimes different brain areas show different patterns of electrical activity, a condition known as EEG asymmetry, which may be a marker of depression. 
Specifically, individuals with a history of depression have demonstrated lower levels of electrical activity in the front left compared to the front right region of the brain." Yet very little, added Grekin, is known about EEG asymmetry and alcohol dependence. "Dr. Hayden's study is the first to directly address this issue." Researchers compared resting brain activity in the anterior and posterior cortical regions of 193 individuals who had alcoholism with 108 individuals who did not have a history of psychopathology, including alcoholism. Within the alcoholism group alone, they also examined if a lifetime history of major depressive disorder (MDD) or antisocial personality disorder (ASPD) had effects on regional asymmetry. "Major depression and antisocial behavior are both problems that commonly co-occur with alcoholism," explained Hayden, "but alcoholics who have these problems may differ [from individuals who do not] in terms of key characteristics related to personality and emotion. Thus, looking at these groups and how they differ on variables of interest may reveal more consistent, clear-cut patterns than looking at alcohol-dependent participants as a whole." Study results indicate an imbalance in the right and left frontal cortex regions of the brain. "We found that alcoholics had lower brain activity in left frontal areas relative to right frontal areas, as measured by EEG, when compared to nonalcoholics," said Hayden. "This is interesting because left frontal activity may reflect brain systems involved in acquiring rewards and the positive moods we feel when we obtain a desirable object or goal. Conversely, right frontal activity may be involved in inhibiting behavior in the face of negative consequences and the anxiety we feel in those circumstances." This imbalance, she speculated, may be genetically based – at least partially. 
Echoing Grekin's earlier remarks, Hayden noted that the pattern of asymmetry she found was similar to that found in individuals with depression. "Although this finding dovetails with research that indicates shared genetic influences on these disorders, it is important to note that the difference we found between controls and alcoholic subjects was pretty small," she said. Nonetheless, said Grekin, "these results are compelling and suggest that EEG asymmetry may be an index of a general vulnerability to psychopathology. Interestingly, participants with both alcohol dependence and depression were found to exhibit less EEG asymmetry than individuals with alcohol dependence alone. These findings are puzzling and warrant further study." "Our findings probably have the most relevance for understanding vulnerability markers for alcoholism," said Hayden. "We know from research with preschoolers and infants that individual differences in frontal asymmetry may be meaningfully linked to behavior and personality even early in life. It seems likely that these brain measures are trait-like and exist prior to the development of problems. An important next step in this line of research would be to see whether the same is true of frontal brain activity in alcoholism, possibly by looking at the children of alcoholics. If these patterns of brain activity emerge early in development, we may be able to use these measures – in conjunction with other information – to understand who is vulnerable to developing alcoholism." Alcoholism: Clinical & Experimental Research (ACER) is the official journal of the Research Society on Alcoholism and the International Society for Biomedical Research on Alcoholism. Co-authors of the ACER paper, "Patterns of Regional Brain Activity in Alcohol-Dependent Subjects," were: Ryan E. Wiegand of the Medical University of South Carolina; Eric T. Meyer, Sean J. O'Connor and John I. Nurnberger, Jr. of the Indiana University School of Medicine; Lance O. 
Bauer of the University of Connecticut School of Medicine; and David B. Chorlian, Bernice Porjesz and Henri Begleiter of the State University of New York Health Science Center at Broolyn. The study was funded by the National Institutes of Health, the National Institute on Alcohol Abuse and Alcoholism, and the National Institute on Drug Abuse. Last reviewed: By John M. Grohol, Psy.D. on 30 Apr 2016 Published on PsychCentral.com. All rights reserved.
<urn:uuid:787cfc80-92de-4194-bf9d-687661551cab>
CC-MAIN-2016-26
http://psychcentral.com/news/archives/2006-11/ace-mba112006.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942633
1,195
2.984375
3
by Staff Writers Madison, Wis. (UPI) Oct 14, 2013 Diamonds, big enough to grace a necklace or bracelet, may be raining down in the atmospheres of Saturn and Jupiter, U.S. scientists say. Carbon in its dazzling crystal form may be abundant on the solar system's two largest planets, researchers told the annual meeting of the Division for Planetary Sciences of the American Astronomical Society in Denver last week. The diamond creation starts when lightning storms high in the atmosphere turn methane into soot -- carbon -- which as it falls into layers of the atmosphere with increasing pressure hardens into chunks of graphite and then diamond. The biggest of the diamonds would be more than a quarter of an inch in diameter, "big enough to put on a ring, although of course they would be uncut," Kevin Baines of the University of Wisconsin-Madison and NASA's Jet Propulsion Laboratory said. Baines and colleague Mona Delitsky from California Specialty Engineering analyzed the latest temperature and pressure predictions for the planets' interiors, as well as new data on how carbon behaves in different conditions. Stable crystals of diamond would "hail down over a huge region" of Saturn in particular, they concluded. Other researchers said the findings, based on what is known of the atmospheres of Jupiter and Saturn, make sense. "The idea that there is a depth range within the atmospheres of Jupiter and [even more so] Saturn within which carbon would be stable as diamond does seem sensible," planetary scientist Raymond Jeanloz of the University of California, Berkeley, told the BBC.
<urn:uuid:6bef946e-294c-49cf-81ff-7f0a4a1199f6>
CC-MAIN-2016-26
http://www.spacedaily.com/reports/Diamonds_may_be_raining_down_on_Jupiter_Saturn_999.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00133-ip-10-164-35-72.ec2.internal.warc.gz
en
0.895798
474
2.703125
3
Swimmers - along with scuba divers and body surfers - have been using fins for decades, but only recently have coaches and swimmers alike begun to appreciate what these excellent training tools can do to help people of all ability levels improve their swimming. Increase Fitness and Cardiovascular Conditioning Although swimming is considered one of the best aerobic or cardiovascular-conditioning exercises, many people forget to use their legs, where the largest muscles are located. Since the greatest cardiovascular benefits come from including the highest percentage of the body's muscles - that's why cross-country skiing and rowing are considered two of the most demanding sports - it makes sense that swimmers who activate the large muscle mass of the legs by kicking will benefit from a more demanding workout that burns more calories and increases fitness levels. Add fins to the equation and the increased load they add to the legs means that as long as exertion levels remain high, the cardiovascular system gets an even more intense workout with even greater fitness benefits. Unlike all other fins, even those with open toe drains, only Force Fin swimming fins are designed to concentrate the force of the blade further back on the foot, using the muscles of the whole leg, not just the lower leg. Increase Ankle Flexibility Have you ever noticed that runners, cyclists or triathletes new to swimming who start a serious swim program have a hard time just kicking and going anywhere? In fact, they sometimes go backward! One reason is that their ankles are so inflexible that when they kick, their feet act like hooks, catching the water and pulling the frustrated swimmer in the wrong direction. Good swimmers, on the other hand, can hyperextend (plantar flex) their ankles, pointing their toes so that the top of the foot forms a straight line with the shin. 
Because of the extra load from the increased surface area that fins provide, swimming or kicking with fins forces ankle extension during the power phase (pushing down when swimming freestyle) of the kick. Repeated fin use eventually stretches the ankles, increasing their flexibility for moving in all directions and helping the kick become more propulsive and efficient. Ankle flexibility is one key to efficient kicking. To measure your flexibility, sit on the floor with legs extended and place a stiff piece of paper against the side of the foot. With heels touching the floor, point your toes as far forward as you can while having someone trace this side view. Measure the distance from the base of your big toe to the ground or floor; your goal should be from one to four inches. Using a regular program of kicking with FORCE FIN swimming fins, re-measure and chart your progress. Develop Leg Strength Kicking with fins is like lifting weights: the added resistance of the water on the blade of the fin increases the workload on your leg muscles. Your body adapts by increasing the strength and endurance of the muscles involved. Stronger muscles move more water, making you swim faster, all other things being equal. A word about specificity: It's important to realize that muscular strength for swimming needs to be "specific." Good runners, cyclists, roller bladers, etc. can have very strong leg muscles, but the muscles have developed for running, cycling or roller blading, not for swimming. Fins develop leg strength specifically for swimming, and in a way that few other activities can. 
Improve Body Position and Technique Fins add extra propulsion to the stroke, which increases a swimmer's speed through the water. Good swimmers tend to plane on top of the water while poor swimmers tend to drag their legs and swim in a more vertical position, slowing them down. One of the goals of swimming faster with fins is to swim faster when the fins are taken off! By transferring the feeling of swimming faster and higher with fins to swimming without them, a swimmer makes use of a phenomenon known as neuromuscular patterning. The muscles and the nerves can actually remember the feeling of swimming faster and will try to duplicate the pattern the next time out. The more times the pattern is repeated (swimming faster and higher in the water with fins), the easier it is to duplicate it. The end result: the swimmer's technique and neuromuscular coordination improve. WHY USE FORCE FIN SWIMMING FINS? We've been designing and making improvements to fins for over 25 years. With hundreds of thousands of satisfied customers and many major product design awards under our belts, we can honestly say that we are the experts in fin design. That's all we do, and we think our innovative fins are without equal in terms of comfort, efficiency, durability, performance, versatility and sheer value for your money. We offer a variety of fins, and each has been developed in response to your needs, comments and suggestions. Studying marine animal locomotion and human biomechanics, we have researched and designed our fins to take advantage of the laws of nature and physics. Not to mention common sense. Force Fin swimming fins utilize the body's strength and put it where it's needed. The human body is built to have more strength when kicking down (during freestyle) than when kicking up. The downward or power phase of the kick emphasizes the powerful quadriceps muscle group at the front and side of the upper leg. 
The upward recovery phase uses the weaker hamstrings at the back of the thigh. While still developing both sets of muscles, our blade design assists -- or helps take the load off -- the upward recovery phase that uses the weaker muscles, maximizing energy efficiency. Open Foot Pocket Design Unlike all other fins, even those with open toe boxes, only FORCE FIN swimming fins are designed to concentrate the force of the blade further back on the foot, using the muscles of the whole leg, not just the lower leg. Try this simple test to see the difference between how our fins and other brands work. While seated, cross your ankle over your knee, grab your toes and pull them in the direction of the sole of your foot. Do you feel the stretch and tension running from the top of your foot up and along the shin? Those are the primary muscles that other fins work because the fin blade extends from your toes. Now, grab the foot in both hands - one holding the heel, the other holding the top of the foot - pull again toward the sole of the foot and resist with the leg. The tension has moved up to the upper leg, hasn't it? Because of their unique design based on human biomechanics, Force Fins and Slim Fins work the larger muscles of the upper leg in addition to the lower leg. These are the muscle groups you need to develop for a strong kick and a better workout. Patented Blade Design Our fins do not have flat blades like most other brands. Instead the blade curves up or away from the bottom of your foot to provide more resistance on the power or downward (during freestyle) phase of the kick. The blade then snaps back to assist on the recovery. This two-stroke cycle does several things for a swimmer: The snap of the blade helps increase kicking tempo, keeping correct arms-to-legs coordination intact. There's no worry about the kick slowing down unnaturally as with all other fins. 
The rebound of the blade during the recovery phase helps bring the legs higher in the water (during freestyle), raising the lower body to the surface in the desirable high-in-the-water position, where you encounter less drag and can swim faster and more efficiently. By kicking against a load that provides for a separate power and recovery cycle as opposed to the traditional power-power cycle of other fins, oxygen depletion is reduced, the legs and body work more aerobically and less anaerobically, and swimmers can maintain their workout efforts for longer periods of time. Independent research has shown less lactic acid buildup (lactic acid is the proof of anaerobic activity) and more oxygen absorption with Force Fins. Flat fins bring on symptoms of fatigue and cramping more quickly. The curves and flexibility of our blade design help keep the legs properly oriented for more efficient power, even as they fatigue. As you kick down, the blade pushes against the water, engaging muscles of the whole leg. The water is pushed behind and through the split in the fin to provide initial forward thrust. At the end of the downstroke, the fin recoils, setting up to rebound to its original position during the recovery or return kick stroke. Assisted by the rebound, the fin throws and directs water behind, faster than you can kick it with any other fin. The fin's tips fold inward to aid the upward recovery and to prepare your leg for the next downward power stroke. At the same time, it resets and recovers your leg into position for its next forward kick stroke. Inspired by Nature Have you ever seen a fast-swimming fish with a blunt or squared-off tail? Neither had we. That's why we patterned our fins after the split-V shape of fish tails that more efficiently channel the water. The other brands of swimming fins still use the cheaper-to-make, straight-across design. Other, flat fins want to go through the water along the path of least resistance, which is sideways! 
Don't believe it? Hold a flat, heavy object at the surface of the water, let go and watch what happens. It turns on its edge and heads for the bottom. Or, if it's light like a sheet of paper, it will zig-zag or "dish" its way down. Neither action is going to help your kicking. With flat fins - even small or cutoff flat fins - a swimmer is constantly fighting all this twisting and torquing, and any effort spent this way is wasted. We also noticed that other fins worked basically like boards tied to your feet; they were stiff and inflexible. Again, we took our cue from nature and physics, and we came up with very flexible fins using innovative materials like polyurethane instead of rubber. The curves and flexibility of our blade design help keep the legs properly oriented for more efficient power, even as they fatigue. FORCE FINS or SLIM FINS? Force Fins are our original design, and being somewhat larger than Slim Fins, they provide a more concentrated leg workout (more resistance). They are a more versatile fin, in that they can be used for other sports, such as snorkeling, SCUBA and float tube fishing. From a swimming perspective, they are a very specific training tool and are excellent for butterfly, "new-wave" breaststroke, and the backstroke dolphin kick. Slim Fins are narrower than the Force Fin and are specifically designed for pool swimming - especially freestyle - applications. With Slim Fins, the blades don't touch, flip turns are no problem, and high stroke turnover or cadence is easily maintained. Slim Fins can be used either while swimming or in kicking-only drills. They receive the highest recommendations from swimming coaches and professionals, and are the best choice for most swimmers. Other swim fin choices: Multi Force is a short-bladed swim training fin, with a foot pocket that adjusts to fit a wide variety of feet, with an added twist. You can add Force Wings to vary the output of your fin, to work harder or faster. 
You can even fine-tune the Wings to target specific muscles or stroke correction needs. They are great for reconditioning, lap swimming and threshold training. HOW ARE THEY MADE? Manufactured in the USA and backed by over 30 years of research and development, Force Fin swimming fins are made of ultraviolet-, abrasion- and chemical-resistant polyurethane and heat-treated for 16 hours at temperatures where other fins melt. This process allows the molecules to cross-link, giving the blade its snappy, high-performance characteristics and extreme durability. Unlike other brands, Force Fin swimming fins will not melt on hot pool decks or mark the sides of pools, and should last season after season, even when used daily. We could go on forever describing the unique benefits of using Force Fin swimming fins to improve your swimming. But there's only one way to really find out why our fins are the best swimming fins in the world: try a pair and you will be happy!
How to play Short History of the World Game

Use your mouse to complete each game in super-quick time.

Short History of the World game controls: fire = left mouse; jump = n/a; movement = mouse.

A thrill ride through the history of time: 36 exciting mini games, each one covering a different period in history, from the Ice Age to the present day.
There are so many things that we buy so that we can use less energy. From CFL light bulbs to hybrid cars, we spend a lot of money in our attempts to be kinder to the earth. What if we could cut back energy consumption in the United States by two percent each year without spending any money? What if you, as a consumer, could actually save some money while helping to decrease energy usage? Wouldn't you think that if that type of energy savings was possible, we'd be doing it already?

Well, according to the American Chemical Society, that type of energy savings is possible, but we're literally throwing away our opportunity to save the energy equivalent of 350 million barrels of oil each year. This waste comes in the form of food that gets produced but never eaten. The United States Department of Agriculture estimates that 27 percent of food in the country ends up being thrown away instead of getting consumed. The scientists involved with the study concluded that "the waste might represent a largely unrecognized opportunity to conserve energy and help control global warming."

You can help put a dent in the amount of food wasted by being more careful with the food in your own kitchen. Eat your leftovers before they go bad by finding creative ways to use up things like burger buns or spaghetti sauce. Every once in a while, dig in your pantry to see what's in the back that needs to get used. You can look in the grocery store for marked-down items like fresh bread and meat with looming sell-by dates. Often stores mark those items down for quick sale. You can buy them inexpensively and store them in your freezer until you're ready to use them. You'll keep them from ending up in the store's dumpster and you can save some cash.

If everyone adopted these easy, money-saving ideas, it would be a good start in chipping away at that two percent of energy we waste each year just from throwing away perfectly good food.
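The 2 percent figure is easy to sanity-check. Using two standard reference values that are not in the article itself (roughly 5.8 million Btu per barrel of crude oil, and roughly 100 quadrillion Btu of total annual U.S. energy consumption around the time of the study), the 350 million barrels quoted above does work out to about 2 percent:

```python
# Rough sanity check of the article's numbers. The conversion factor and
# total-consumption figure are standard reference values, not from the article.
BTU_PER_BARREL = 5.8e6   # ~5.8 million Btu per barrel of crude oil
US_ANNUAL_BTU = 1.0e17   # ~100 quadrillion Btu consumed per year in the U.S.

wasted_barrels = 350e6   # energy embodied in wasted food, per the study
wasted_btu = wasted_barrels * BTU_PER_BARREL
share = 100 * wasted_btu / US_ANNUAL_BTU

print(f"Wasted food energy is about {share:.1f}% of U.S. consumption")
```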
Development Cooperation Handbook/Cooperation and Communication/Meta-communication

The concept of meta-communication

The fundamental concept of Pragmatics of Human Communication is the concept of "meta-communication". "Meta" means after, beyond, besides. Like metaphysics, which means that which comes after physics. In mathematical models, the idea of a "meta-level" is very important. For instance, the rules of the game are not a part of what the teams play. Setting the rules is a "meta" level with respect to playing the game. Rules are required for the game, but they are not what we are playing. "Meta-communication" is when we communicate about communication. So when I am talking to you about my ideas, I am communicating with you; but when I tell you that I like, or dislike, our communication, then I am meta-communicating. Sometimes meta-communication is open; many more times, it is not explicit. If I ask you "Have you understood?", I am explicitly meta-communicating. But in fact meta-communication is not just something that happens sometimes—like when we say "have you understood what I'm saying?". That is an official meta-communication (or if I tell you that we are now going to speak in Italian, it is officially a meta-communication, because we are setting a code). But the Pragmatic School says that this meta-communication always happens. Every time I am communicating, I am at the same time meta-communicating. And finally, meta-communication is more important than communication itself; it gives meaning to what I am communicating. Let us take the example of communication which takes place during courting. Now clearly meta-communication, in this case, is much more important than what we are talking about. I am courting you.
When I ask you "what are you doing this evening?", my real purpose is not to know what you're doing this evening, but to know whether you would be interested in seeing me this evening. And you immediately understand that mine is an invitation, even if you pretend not to understand. The invitation is clearly the meta-communication, while the knowledge of how you planned the evening is what the communication is officially dealing with. (And so, in this case, the invitation is the real "content" of the communication, while the question of the evening schedule is just the "form".) On the one level I am telling you something; on the other, I am asking "how do you perceive the fact that I'm telling you this?" What I want to know is your reaction. It is not yet an open invitation, but it is nevertheless a clear invitation. If you want to be cool, you just tell me what you are actually doing, pretending that you haven't understood that what I said was an invitation. It would mean that you are not interested in the invitation. If, instead, you show you have understood my invitation and you ask "Why?", you are openly meta-communicating: this would mean that we are focusing on the true issue; we are getting more intimate. But we are meta-communicating in both cases. If you say "This evening, I'm studying", your meta-communication will be "thank you for your invitation, but now I am not interested", and from your tone, I will understand if it is a "maybe next time" or a "no chance". If you say "Why?", you are really asking "why are you interested in what I'm doing?" Now the ball will be back in my court. I have to explain the reasons for my interest in you. I can become more intimate or less intimate, or pretend that I just wanted to know. Or I can say "I would like to meet you this evening". Now the ball is again in your court. You have to tell me if you are interested in my interest in you. Now, clearly in this process, the meta-communication is the essence of our communication.
And what is being said is absolutely marginal.

Is body language a meta-communication?

The question raised by one of the trainees was whether body language can be considered a kind of meta-communication. In many cases—yes. But this does not mean that body language communicates only about communication. We can do direct communication with the body and use a different medium to express what we mean by "body". I can make normal gestures with the body, like offering food, and say "I do these gestures because I love you". In this case, words are meta-communicating with respect to how the other person should interpret my body language. And such a pattern will be used in many instances while we use an audio-visual medium. It could happen that you use the audio as a meta-communication on the image, or vice versa. All codes can be used to communicate and all codes can be used to meta-communicate. It is free. We do it all the time, and we do not need to study a text on communication to be able to do it. We need to study books on communication only when we want to hide the reasons for what we say and when we want the other to follow what we are saying. Then, we need to be able to manipulate the meta-communication codes when we do not want the other to know our true aims.

Meta-communication is based on reciprocity and on a hierarchy of levels

This process of meta-communication is based on reciprocity and on a hierarchy of levels. I look at you and you look at me. This is level 1: direct communication. When I try to understand "How do you look at me?", we are on a different step. It is level 2. And if I try to understand how you look at the way I look at you, then we are on an even higher plane: level 3. This is a meta-meta-communication. I prefer to take examples from the sphere of the erotic mode of relationships, since there is no abstraction in this and each one of us can easily recollect the mood. So we talked about level 3: let us take some examples of it.
I am courting her. How do I perceive her reaction to the interest that I’m showing in her? This is a higher level. It is the real level on which we are playing. The rest of the conversation for us is just an occasion, an excuse. The third level is where we are confronting each other. The other levels were simply the levels of "information". Down there we exchange data. But the third level is the level where we decide what to do with our game. Here, we make the choice. From what happens at level 3, we will decide to go on with the relationship or to stop. Because what really counts is how I think that she thinks of the way I’m thinking about her; and vice-versa.
By Robert A. Krieger The ground and surface waters of the Ladder Creek area were sampled at the 44 points shown on Figure 12, and the results of analyses of these samples are given in Table 6. The wells that were sampled are listed by well location number and under geologic source. The results in equivalents per million are given in Table 7. Figure 12--Water-sampling sites and generalized geologic map. Table 6--Chemical analyses at water, in parts per million |Hardness as CaCO3||Percent sodium| |Sanborn and Meade formations| |Ladder Creek at Wichita-Greeley County line||Wichita||9/21/51||55||8.7||539||52||17||43||14||205||70||14||10||b321||198||7||32| |Lake McBride on Ladder Creek||Scott||66||8.6||681||63||22||57||16||242||104||23||5||b409||247||22||33| |Ladder Creek above confluence with Chalk Creek||Logan||9/19/51||74||8.5||705||60||23||61||11||230||130||24||3.1||b425||246||39||35| |Chalk Creek above mouth||Logan||9/19/51||76||8.2||1,620||156||51||139||0||101||713||55||0.5||b1160||600||517||34| |Twin Butte Creek above mouth||Logan||9/19/51||77||8||2,850||440||89||185||0||88||1,650||53||1.2||b2,460||1,460||1,390||22| |Ladder Creek above mouth||Logan||9/19/51||71||8.3||876||80||28||74||7||233||222||27||2.6||b556||313||110||34| |Smoky Hill River below Hinshaw Spring||Logan||9/19/51||75||8.1||1,050||81||28||111||0||207||338||24||1.2||b685||316||146||43| |Smoky Hill River Elkader, Kansas||Logan||9/19/51||70||8.2||966||89||30||75||0||158||325||27||1.6||b626||345||215||32| |1. Alluvium or Sanborn and Meade formations or all three. 2. Sanborn and Meade formations or Ogallala formation or all three. 3. Niobrara formation or alluvium or both. a. Sum of determined constituents. b. Sum of determined constituents. Table 7--Chemical analyses of water, in equivalents per million. 
|Source and Location||Date of collection|
|Sanborn and Meade formations|
|Ladder Creek at Greeley-Wichita County line||9-21-1951||2.59||1.37||1.88||.47||3.36||1.46||.39||.16|
|Lake McBride on Ladder Creek||3.14||1.80||2.46||.53||3.97||2.17||.65||.08|
|Ladder Creek above confluence with Chalk Creek.||9-19-1951||2.99||1.93||2.66||.37||3.77||2.71||.68||.05|
|Chalk Creek above mouth||9-19-1951||7.78||4.22||6.06||.00||1.66||14.84||1.55||.01|
|Twin Butte Creek above mouth||9-19-1951||21.96||7.28||8.06||.00||1.44||34.35||1.49||.02|
|Ladder Creek above mouth||9-19-1951||3.99||2.27||3.21||.23||3.82||4.62||.76||.04|
|Smoky Hill River below Hinshaw Spring||9-19-1951||4.04||2.28||4.81||.00||3.39||7.04||.68||.02|
|Smoky Hill River at Elkader, Kansas||9-19-1951||4.44||2.46||3.25||.00||2.59||6.77||.76||.03|

Chemical Constituents in Relation to Use

The most important basic ion in the ground waters of the area is calcium. Calcium and magnesium are the cause of most hardness of water. These two ions combine with soap to form an insoluble curd or scum, and therefore the use of hard waters results in excessive soap consumption. Calcium in these waters ranged from 33 to 573 ppm and magnesium from 10 to 100 ppm. The bicarbonate ion, which is the principal anion in the ground water, ranged from 170 to 453 ppm. Hardness equivalent to the bicarbonate is carbonate ("temporary") hardness; the rest is noncarbonate ("permanent") hardness. Noncarbonate hardness results from the solution of compounds other than the bicarbonates of calcium and magnesium. Sulfate, chloride, and nitrate ranged from 17 to 1,760 ppm, 3.5 to 185 ppm, and 1.6 to 240 ppm, respectively. A high sulfate content may indicate that the water has dissolved gypsiferous materials. Excessive nitrate content in water may be an indication of pollution. Whether polluted or not, water high in nitrate is undesirable for domestic supplies because of its toxic effect on some infants (Comly, 1945).
According to Comly, waters containing nitrate amounting to more than 45 ppm should not be used in infant feeding. Only two samples analyzed (17-33-1bd and 16-32-32dc) had a greater concentration of nitrate. Fluoride in concentrations of about 1 ppm in drinking water used by children during the calcification of the teeth prevents or lessens the incidence of tooth decay; concentrations greater than 1.5 ppm may cause mottling of the enamel. Fluoride in waters of the Ladder Creek area ranged from 0.6 to 2.8 ppm. Boron is essential for normal plant growth, but it is beneficial only within narrow limits. The element is very toxic to many plants, and quantities in excess of the optimum will cause serious damage. A concentration of 1.0 ppm of boron in the soil waters may cause injury to many plants. Boron concentrations in the ground waters that were analyzed ranged from 0.05 to 0.25 ppm.

Ionic Relations in the Water

The analyses in equivalents per million, expressions of the chemical combining weights of the ions, are given in Table 7 and are shown graphically in Figures 13 and 14.

Figure 13--Principal mineral constituents in ground water.

Figure 14--Principal mineral constituents in surface water.

Many natural waters that contain small or average amounts of dissolved solids are bicarbonate waters, but waters containing more than average amounts of dissolved materials contain much greater proportions of sulfate or chloride, and a correspondingly greater amount of calcium or sodium. Gypsiferous waters are high in calcium and sulfate content. An example of gypsiferous water is that found in well 15-32-19bd (Fig. 13). The relation between dissolved solids and sulfate is shown graphically in Figure 15. Sulfate makes up a larger part of the dissolved solids as the amount of dissolved solids increases.
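The quantities reported in Tables 6 and 7 are related by simple conversions: epm is ppm divided by the ion's equivalent weight (formula weight divided by ionic charge), hardness is the calcium and magnesium content expressed as CaCO3, and percent sodium is sodium's share of the cations in equivalents. A sketch of these conversions (the equivalent weights are standard values; the function names and the sample analysis are ours, not taken from the tables):

```python
# Equivalent weights (formula weight / ionic charge), standard values.
EQ_WEIGHT = {
    "Ca": 20.04, "Mg": 12.16, "Na": 23.00, "K": 39.10,
    "HCO3": 61.02, "SO4": 48.03, "Cl": 35.46, "NO3": 62.00,
}

def to_epm(ppm):
    """Convert an analysis in parts per million to equivalents per million."""
    return {ion: round(val / EQ_WEIGHT[ion], 2) for ion, val in ppm.items()}

def hardness_as_caco3(ca_ppm, mg_ppm):
    """Total hardness in ppm as CaCO3 (equivalent weight of CaCO3 = 50.04)."""
    return 50.04 * (ca_ppm / 20.04 + mg_ppm / 12.16)

def percent_sodium(epm):
    """Sodium as a percentage of the cations, computed in equivalents."""
    cations = epm["Ca"] + epm["Mg"] + epm["Na"] + epm.get("K", 0.0)
    return 100.0 * epm["Na"] / cations

# Illustrative analysis in ppm; a hypothetical water, not a row from Table 6.
sample = {"Ca": 52, "Mg": 17, "Na": 43, "HCO3": 205, "SO4": 70, "Cl": 14}
epm = to_epm(sample)                  # e.g. Ca -> 2.59, HCO3 -> 3.36
hardness = hardness_as_caco3(52, 17)  # ~200 ppm as CaCO3: a very hard water
```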
Although the data available for water from wells in the Niobrara formation are few, those available show that the relation of sulfate to dissolved solids is very similar to that in water from the alluvium. The proportion of sulfate to dissolved solids in the more concentrated waters from wells in the Ogallala formation is somewhat greater than in water from the Niobrara formation and alluvium.

Figure 15--Relation of sulfate to dissolved solids in ground water.

In Figure 16, the relation between chloride and dissolved solids in the ground waters is shown. The relation of chloride to dissolved solids of waters from the alluvium and the Ogallala formation is very similar.

Figure 16--Relation of chloride to dissolved solids in ground water.

Hardness of the waters from the Niobrara and Ogallala formations and the alluvium was very similar in relation to dissolved solids, as shown in Figure 17.

Figure 17--Relation of hardness as CaCO3 to dissolved solids in ground water.

Chemical Quality in Relation to Movement of Ground Water

The ground water moves east by south, generally in the direction of a line drawn from the northwest corner of 15-42-6 to the southeast corner of 16-31-36 (Fig. 12). The location of each well along the base line, as indicated by a perpendicular drawn from the well to the base line, was plotted as abscissa, and the concentrations in parts per million of sulfate, chloride, and hardness were plotted as ordinates in Figures 18, 19, and 20, respectively. The concentration of sulfate increases gradually from west to east (Fig. 18). In the Ogallala formation, the sulfate shows a relatively abrupt but slight increase between 20 and 30 miles from the western edge of the state, and at 40 miles from the west border the sulfate concentration becomes approximately constant. In the alluvium, the steady increase of the sulfate content conforms with the downstream increase in Ladder Creek.
The data for the Niobrara formation and the Sanborn and Meade formations were not sufficiently widespread to indicate any trends.

Figure 18--Sulfate content of ground water along direction of movement.

Chloride in water from the Ogallala formation showed trends similar to that for sulfate (Fig. 19). After the increase at 20 to 30 miles, the chloride concentration showed a slight tendency to decrease at 50 miles from the western edge of the state. This may be due either to an exchange of chloride ion for some other ion or, more likely, to a dilution by less concentrated waters. Chloride increased slightly in the alluvium with distance from the western boundary; chloride data for the Niobrara formation and Sanborn and Meade formations were too meager to show any significant trends.

Figure 19--Chloride content of ground water along direction of movement.

Hardness in water from the Ogallala formation and the alluvium showed a slight but steady increase with distance from west to east (Fig. 20).

Figure 20--Hardness (as CaCO3) of ground water along direction of movement.

Chemical Quality in Relation to Geologic Source

Samples of water were obtained from the Niobrara formation, the Ogallala formation, the Sanborn and Meade formations, undifferentiated, and the alluvium along streams in the area. In addition, several water samples were obtained from the streams that drain the area. Only a few wells obtain water from the Niobrara formation. Consequently, only four water samples from this source were obtained, and it is uncertain whether well 15-32-19bd yields water from the Niobrara formation or alluvium or both. The data available show that the composition of the water from the Niobrara formation differs considerably from place to place. Well 14-34-26aa yielded water containing 1,440 ppm sulfate, but samples from two other wells contained only 177 and 89 ppm of sulfate.
Insofar as conclusions can be drawn on the basis of three samples, the water from the Niobrara formation seems to contain more dissolved material than water from the Ogallala formation. The principal anions are bicarbonate and sulfate; calcium is the principal cation. The Ogallala formation is very calcareous; consequently, the action of dissolved carbonic acid produces a water whose dissolved material is relatively rich in calcium, magnesium, and bicarbonate. The total amount of dissolved material in water from the Ogallala formation, however, is generally less than that in water from other sources in the area. Calcium and bicarbonate are the principal ions in the water from the formation, ranging in concentration from 33 to 84 ppm and 170 to 248 ppm, respectively. The amount and proportions of dissolved material were nearly uniform, as shown in Figure 13. The Ogallala formation yields water of the best quality and most nearly uniform chemical composition of all the ground-water sources in the Ladder Creek area.

Sanborn and Meade Formations Undifferentiated

Two wells and one spring from which samples were obtained are believed to draw water from the Pleistocene deposits (Sanborn and Meade formations), although one of the wells, 18-32-7ac, may yield water from the Ogallala formation also. Water from this well contained 226 ppm of bicarbonate and 39 ppm of sulfate; water from the spring (13-37-23bb) contained 255 ppm of bicarbonate and 212 ppm of sulfate. The waters are hard.

The concentration of dissolved material in the six samples of water from the alluvium ranged from 324 to 3,190 ppm. Water in the alluvium along Ladder Creek was of better quality than that obtained from the alluvium in the valleys of Chalk and Twin Butte Creeks, as the samples from the latter two were gypsiferous in addition to containing considerable calcium bicarbonate. All the water is very hard.
Waters from the alluvium contained 0.7 to 2.8 ppm of fluoride, the maximum being somewhat more than the maximum limit set by the U. S. Public Health Service for acceptable public water supplies. The streams in the Ladder Creek area were sampled at or near base-flow stages. The chemical characteristics of the surface waters would then be similar to water from wells in the alluvium. Chalk and Twin Butte Creeks were high in sulfate and low in bicarbonate. The water in Smoky Hill River was somewhat gypsiferous and was of poorer quality than the water in Ladder Creek.

Chemical Quality in Relation to Use

The Ladder Creek area provides water for irrigation, stock, and domestic use. Water from the Ogallala formation is the best for irrigation. It is low in boron, and values for percent sodium range from 10 to 39. The specific conductance of the samples analyzed ranged from 350 to 2,720 micromhos, but values for all except one sample were below 757 micromhos. Waters from the alluvium generally are less suitable for irrigation because of high salinity. Most waters from the Niobrara formation and from the Sanborn and Meade formations can be used for irrigation but are less suitable than water from the Ogallala formation. No high-boron waters were found. All the waters are suitable for livestock except those containing excessive sulfate, which may be unpalatable. The U. S. Public Health Service (1946), in setting up standards of quality for drinking water used on interstate carriers and for public supplies in general, stated that the following chemical substances should not exceed the stated concentrations:

|Iron and manganese together||0.3|

Total solids should not exceed 500 ppm, but 1,000 ppm is acceptable if water of better quality is not available. Most of the waters meet the U. S. Public Health standards with regard to all the ions except sulfate and fluoride.
The largest number fail to meet the standards because of high fluoride concentrations; of 22 samples analyzed for fluoride, 11 contained more than 1.5 ppm.

Kansas Geological Survey, Geology
Placed on web Jan. 30, 2013; originally published December 1957.
The URL for this page is http://www.kgs.ku.edu/Publications/Bulletins/126/07_chem.html
THE BASAL VERTEBRATES

A FAMILY TREE

The vertebrates represented below are those with jaws, or the Gnathostomes. Features that distinguish the jawed vertebrates include the vertebral column -- a reiterated series of bones that surround the spinal cord -- and a skull that surrounds the brain. These are formed of either bone or cartilage. Other important features are paired pectoral and pelvic fins. In the living sharks and rays (Chondrichthyes), a cartilaginous skeleton helps to lighten the animal's body. In the bony fish (Osteichthyes), the cartilage in most groups is replaced with bone, and buoyancy is increased by lungs or a swim bladder. The bony fish are divided into two groups based on the structure of the fin. In the Sarcopterygians, or fleshy-finned fish, the fin has a fleshy base with articulated bones supporting the fin. In the Actinopterygians, or ray-finned fish, the fins are supported by a fan of spiky fin rays. Among the ray-finned fish, the largest and most diversified group is the teleost fish. For groups that could be seen at the James R. Record Aquarium at the Fort Worth Zoo, a link has been provided to representatives of each group. The James R. Record Aquarium was closed in 2002. Representatives of most of these vertebrate groups could be seen in the James R. Record Aquarium; paddlefish and bowfins were not on exhibit. Coelacanths survive poorly in captivity.

(Tree redrawn from the cladograms of The Phylogeny Wing of the University of California Museum of Paleontology)
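The branching described above can be sketched as a nested structure. The grouping below follows the text on this page only; it is an illustration of the described tree, not a complete modern phylogeny:

```python
# Jawed vertebrates (Gnathostomes) as described in the text above.
FAMILY_TREE = {
    "Gnathostomes (jawed vertebrates)": {
        "Chondrichthyes (cartilaginous fish: sharks and rays)": {},
        "Osteichthyes (bony fish)": {
            "Sarcopterygii (fleshy-finned fish, e.g. coelacanths)": {},
            "Actinopterygii (ray-finned fish)": {
                "Teleostei (teleost fish, the largest group)": {},
            },
        },
    },
}

def leaf_groups(tree):
    """Return the groups with no further subdivisions listed."""
    leaves = []
    for name, subtree in tree.items():
        leaves.extend(leaf_groups(subtree) if subtree else [name])
    return leaves
```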
The Escalante River and Removing Invasive Russian Olive Trees

The Conservation Lands Foundation wants to help inspire people to become advocates for the National Conservation Lands. And for many people, that first step is often doing something—something physical where you're outside, on the land and often happily getting dirty. The many organizations that are part of the Escalante River Watershed Partnership know exactly what this kind of advocacy is about. Formed in 2009, this diverse partnership of private organizations and government agencies is removing invasive species, Russian olive in particular, from 70 miles of this river. Most of the Escalante River runs through the Grand Staircase-Escalante National Monument, the Glen Canyon Recreation Area, and some private lands. People walk the river corridor with clippers, saws and sometimes chainsaws to cut the stuff down. They apply herbicide to the stumps to keep it from growing back. Sometimes they have to walk half a day or longer to reach their work site, and they often stand in the river to cut. They make big piles of slash for later disposal. They work long, hot days to grapple—and beat—these brambly, scratchy, thirsty trees that are choking the river, crowding out native plants and shading the water, which raises its temperature and harms aquatic species. On this revered paddling river, Russian olive trees grow so thick in some places that river runners can barely pass through. For some, removing Russian olive from the Escalante is a paying job. Bureau of Land Management employees, national park employees, youth conservation corps, and private contractors are all at it. And so are a big number of volunteers—advocates—who sign up through Wilderness Volunteers, The Nature Conservancy, or other non-profits that organize volunteers to work on public lands. Together, they are making progress.
There is also a great deal of research, scientific study, additional planning and monitoring taking place through the Escalante River Watershed Partnership—in conjunction with the physical work of removing these non-natives. Many groups in the Conservation Lands Foundation’s Friends Grassroots Network are waging battles against Russian olive and numerous other nonnatives that are negatively impacting habitat on the National Conservation Lands. For example, take a look at the Amargosa Conservancy’s work to improve this desert ecosystem for various species of rare pupfish. Friends of Nevada Wilderness have several opportunities this summer and fall for volunteers to do a range of projects, everything from monitor high-elevation springs, remove old fencing that harms wildlife, to improving trails in riparian areas. Volunteers from the newly formed Lower Dolores Boating Advocates, based in Dolores, Colorado, are participating in an urban river clean-up to help shed light on the connection and impact river towns have on downstream wild lands. The effort to remove Russian olive on the Escalante River, with Friends Grassroots Network group Grand Staircase Escalante Partners playing a lead administrative role, is among the most ambitious and far-reaching. You can see some excellent “before and after” photographs of removing Russian olive from the Escalante River here. And below is a short video of the river, national monument scenery and some of the work we’re describing.
There are many good rock songs out there. However, to write your own song, playing the guitar, drums, or bass, or singing is not enough. In fact, you can write a great rock song without even playing an instrument. Writing a rock song can be hard, but once you know what you are doing you'll find it gets easier. You just have to be inspired.

1. There are many different types of rock songs, and it would be extremely hard to cover all of them on one page. However, most rock songs break down to 4 parts: the hook, the melody, the bridge, and the chorus (though there are many other ways to structure a rock song, so don't worry about it). But before you begin to record the song, before you begin to develop the song, before you begin writing the song, before you even have an idea, you must have inspiration. You can't force inspiration; it comes to you at the right moment. Maybe you have just broken up with your girlfriend, or you have just been kicked out of your school. Don't waste that moment: write down the first thing that comes into your brain. Now you've got the idea. The idea is the very first part of everything. So, how do you get the idea? First, start thinking about the message and the mood the song will carry. This doesn't have to be very detailed or anything, just a basic idea.

2. Get a pen or pencil and a piece of paper to record your ideas. Start thinking about the hook. The hook is a sort of sub-melody, a riff, something you can easily hum, or the part of the song that gets stuck in your head. The hook generally sets the mood for everything else. Keep the hook simple, as it is not the main melody of the song. After thinking about how the hook should sound, start to actually write it. Rock music is generally based on the major/minor/power chords of two sets of chords: I, bIII, IV, V, bVII (in the key of E the barre chords begin on frets 0, 3, 5, 7 and 10) or I, bIII, IV, bVI and bVII (these are 0, 3, 5, 8 and 10).
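The two chord sets above can be transposed to any key by reading the degree names as semitone offsets from the root (I = 0, bIII = 3, IV = 5, V = 7, bVI = 8, bVII = 10). A small sketch (the names are ours; the note list starts at E so the indices line up with the fret numbers just given):

```python
# Chromatic scale starting at E, so index == fret on the low E string.
NOTES = ["E", "F", "F#", "G", "G#", "A", "A#", "B", "C", "C#", "D", "D#"]

CLASSIC_ROCK = {"I": 0, "bIII": 3, "IV": 5, "V": 7, "bVII": 10}
MODERN_ROCK = {"I": 0, "bIII": 3, "IV": 5, "bVI": 8, "bVII": 10}

def chord_roots(key, degree_offsets):
    """Root note for each degree of the set, in the given key."""
    start = NOTES.index(key)
    return {deg: NOTES[(start + off) % 12] for deg, off in degree_offsets.items()}

print(chord_roots("E", CLASSIC_ROCK))
# In E the classic set is E, G, A, B, D -- the frets 0, 3, 5, 7 and 10 above.
```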
The first set will give you a more "classic rock" sound, whereas the second set sounds more modern. Mess around with the chords from one of the sets above until you get something that works.

3. Develop the main melody. The main melody usually appears in the verses of the song. It is usually played in an intro and then sung as the vocals throughout the song. However, it's too early for the lyrics yet, so just think up a general melody. One thing you can do is record the hook and just jam to it until you find a clear melody that sounds good and goes well with the hook.

4. Often the melody starts on the same note as the "ground note", or root, of the key chord; for example, in D major it would be a D note and in E minor an E note. Then the melody might lift up to the third or fifth above, or drift to the third or fifth below the note where it began. (In the key of D major, to F# or A, and then down to B or G.) Usually a verse ends up returning to the same note it began with, or goes one octave below.

5. The two main components of a song are the verse and the chorus. Each time the verse is played the words differ, whereas the words for the chorus stay the same throughout. The verse pretty much follows the hook and the main melody, but the chorus should be catchier and more memorable than the verse.

6. Think of a chord progression that is a little different from the hook and (possibly) in a different key. Then write the melody. Another thing to keep in mind is the transition from verse to chorus. A good idea would be to add a bridge. A bridge is like a verse, but the music is different and it is only played once.

7. Develop a background. Add a bassline (base it roughly around the chords used by the guitar), drums, and any other instruments you think are necessary. They should go along with the theme and the hook. Also, add any backing vocals. It is a good idea to work through this step in parallel with the previous ones.

Now, it's time for the lyrics.
The lyrics should relate to the mood and the message of your song. They are the part that speaks to people. Lyrics in the verse should be more like telling a story; lyrics in the chorus, on the other hand, outline the main theme(s) of the song. Some rock lyrics make a statement, some make a suggestion, some tell a story, and some are just gibberish. The lyrics that make a great song are the lyrics a crowd can sing along to.

8. You should now be able to write your solo. The solo is usually played on a guitar, but there are no limitations. A solo should be like the verse, the chorus, or both, except that instead of the lyrics, the guitar (or another solo instrument) plays the melody. Note, though, that the solo instrument should not play the main, secondary, or any other melody used previously in the song. A good solo-writing technique is simply improvising. If you are a beginning soloist, a great starting point for a rock solo is the pentatonic scale.

9. Put it all together. Most rock songs are organized in this manner: intro, verse 1, chorus, verse 2, chorus, chorus (repeated), solo. Some songs have another part after the solo: the outro. There might also be another verse somewhere inside the song. But as a beginner, you should stick to the general structure and then, as you gain more experience, develop your own. Get a piece of paper and write all of this down, inserting the parts of the song in the appropriate places.

10. Organize a band. You might want to put up ads somewhere to find some members. Start practicing the song; while practicing, you may make some final changes to the hook, the melody, and any other part of the song. Then you are ready to record.

11. Get some good recording equipment and record the song. (Pawn shops or music stores usually have all you need.)

12. Show your song to your friends and family. You want opinions other than your own. Then make any necessary changes according to the feedback you get.
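The standard arrangement described in step 9 is easy to jot down as a simple set list. A tiny illustrative sketch (mine, not the article's) that prints the sections in order:

```python
# Typical beginner-friendly rock arrangement from step 9,
# with the optional outro some songs add after the solo.
STRUCTURE = [
    "Intro",
    "Verse 1",
    "Chorus",
    "Verse 2",
    "Chorus",
    "Chorus (repeated)",
    "Solo",
    "Outro",  # optional
]

for i, section in enumerate(STRUCTURE, start=1):
    print(f"{i}. {section}")
```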
13. You can now burn the song to a CD and release it to the public, or write more songs and produce an album.

Sample Rock Song

- Make sure you enjoy the music you are writing; if you don't, there will be less inspiration and you will hate playing it.
- Experiment. As you gain more experience, you can make changes to the general structure and even make up your own.
- Be real. Rock is about passion, emotion, and power. Write about whatever you want. The more real it is, the better the song.
- If you think of something catchy, write it down or you will probably forget it.
- Always be yourself. Write about your emotions or personal experiences, or the experiences of people close to you. People want to hear about things that are personal and true. If something is fake, they can tell almost immediately. Any experience that affects you and/or changes your life, for better or worse, is something that others can relate to and enjoy.
- Relax and get in a comfortable spot to think.
- The lyrics don't have to rhyme.
- As you write, try to keep to the same theme. Don't be too complex with your lyrics.
- Remember to copyright your songs before you sell them.
- When writing instrumental parts, try having the guitar and the bass play in different rhythms. This makes the song sound more complex, like you put some thought into it.
- Try to get some inspiration from other bands or other songs, but do not copy or parody their songs.
- If you have to, walk around for a while and think of a couple of things you may want to write about.
- If you've come up with a bit of a song but have to leave, record it on your phone so you don't forget it.
- It's easier to compose a rock song when you know how to play an instrument.
- It's easier to write the lyrics when you have the guitar part done.
- Don't make it too repetitive, or people will get bored with the song.
- As a beginner, keep your songs under 3 minutes, as they get a little boring after that.
However, as you gain more experience, you can write much longer songs that are not boring at all.

- Make sure the song is okay with the band before you set it in stone. Your band will get angry if you start to become too independent.
- Obviously, don't copy other existing songs or melodies, because you will get in trouble eventually and sound very fake. Try to be original. (You can borrow and alter bits and bobs, like guitar effects, vocal styles, a few chords here and there, and words you admire, but not whole chunks and riffs.) It sounds terrible to completely rip someone else off, unless you're in a cover or tribute band.
- Don't argue with the crew/band. Let everyone have a turn to voice their opinions.
- Don't try too hard. If the song doesn't come to you, go to the pub, have a beer, or just call a friend, and it will come to you more easily.
- Don't make it hurtful.

Things You'll Need
- Pen/pencil or computer
- A reason
- A message
- Passion for your music

Categories: Songs and Song Writing

In other languages: Español: componer una canción de rock, Português: Escrever uma Música de Rock, Italiano: Scrivere una Canzone Rock, Русский: написать рок песню, Deutsch: Schreibe einen Rocksong, Français: écrire une chanson de rock

Thanks to all authors for creating a page that has been read 282,639 times.
<urn:uuid:6e6c79f1-9ceb-44bc-b8e0-e06c188ad00b>
CC-MAIN-2016-26
http://www.wikihow.com/Write-a-Rock-Song
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00177-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954333
2,197
2.640625
3
In this lesson, students will learn that enslaved people resisted their captivity constantly. Because they were living under the domination of their masters, slaves knew that direct, overt resistance (such as talking back, hitting their master, or running away) could result in being whipped, sold away from their families and friends, or even killed.

In this lesson, students learn firsthand about the childhoods of Jacobs and Keckly by reading excerpts from their autobiographies. They practice reading these two primary sources both for factual information and for making inferences.

Slave narratives are a unique American literary genre in which former slaves tell about their lives in slavery and how they acquired their freedom. Henry "Box" Brown escaped from slavery by having himself shipped in a crate (hence the nickname "Box") from Richmond, Virginia, to Philadelphia, Pennsylvania, in 1849.
<urn:uuid:a1179c8b-996a-40d0-b0e9-fb6bfe8d4fc3>
CC-MAIN-2016-26
http://edsitement.neh.gov/taxonomy/term/77?page=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975638
179
4.125
4
Challenging the biotech industry's claims that genetically modified foods are safe for human health and the environment, a new report presents evidence to the contrary and stirs up controversy, as its authors are genetic engineers themselves. Entitled "GMO Myths and Truths," the report's authors include two genetic engineers: Dr. Michael Antoniou, who uses genetic engineering for medical applications at King's College London School of Medicine in the U.K., and Dr. John Fagan, the former genetic engineer who returned more than $600,000 in grant money to the National Institutes of Health in 1994 over his concerns about the safety of genetic engineering.

To support the report's claim that GMO technology is not safe, the researchers point to the harmful effects of genetically modified foods on laboratory animals in feeding trials, as well as damage to the environment from cultivation practices. The authors state that GMOs present greater risks of toxicity, allergic reactions, and decreased nutritional value than crops from conventional farming practices, and that the lack of adequate regulation in the industry makes human safety nearly impossible to address.

Some of the biggest claims by the biotech industry are that genetically modified seeds can increase yield potential and are being used to combat poverty and hunger around the globe, but the authors say that is simply not the case. Yields are consistently falling below expectations, leading already struggling farmers further into debt to the multinational corporations that hold patent rights on the seeds and demand payment regardless of how well the crops perform. Farmers are also being forced to use more pesticides to deal with the resistant weeds and insects that have developed in recent years, mainly in response to Monsanto's glyphosate-based Roundup pesticide. One of the early promises of genetically modified crops was a decreased use of pesticides over time.
And the continual, excessive use of pesticides is causing irreparable damage to water, soil, and biodiversity.

On her blog, author and NYU nutrition professor Marion Nestle says the report "provides plenty of justification for the need to label GM foods." Along with Drs. Antoniou and Fagan, the report was co-authored by Claire Robinson, research director at Earth Open Source, who says, "We all need to inform ourselves about what is going on and ensure that we – not biotechnology companies – keep control of our food system and crop seeds."

Keep in touch with Jill on Twitter @jillettinger

Image: Idaho National Laboratory
<urn:uuid:d668a81c-d330-4e2b-a69a-11584507a452>
CC-MAIN-2016-26
http://www.organicauthority.com/blog/organic/genetic-engineers-slam-gmo-industry-claims-in-new-report/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00133-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951721
501
2.890625
3
Diagnostic Tests for Nausea

Diagnostic Test list for Nausea: The list of diagnostic tests mentioned in various sources as used in the diagnosis of Nausea

- Physical examination
  - Check hydration status by checking skin elasticity, moisture within the mouth, pulse rate, and blood pressure (lying and standing)
  - Check temperature
  - Look for evidence of anemia, e.g. pale skin and conjunctiva, which may suggest chronic gastritis, peptic ulcer, stomach cancer, or renal failure
  - Look for evidence of jaundice, e.g. yellow skin and sclera of the eyes, which may suggest gallstones that pass into the common bile duct, cancer of the pancreas, or acute viral hepatitis
  - Observe for ataxia (lack of coordination of voluntary movement), which requires ruling out a brain hemorrhage
  - Abdominal examination for tenderness, e.g. tenderness in the epigastric area (midline below the ribs) may suggest peptic ulceration; tenderness over the right upper abdomen may suggest gallbladder disease
  - Abdominal examination for a mass, which may suggest pregnancy, stomach cancer, pyloric or intestinal obstruction, pancreatic cancer, acute cholecystitis (inflammation of the gallbladder, usually due to obstruction by a gallstone), Crohn's disease, abscess around the kidneys, or diverticulitis
  - Examination of the central nervous system for signs of elevated intracranial pressure, acute meningitis, and abnormalities of the inner ear
- Blood tests
  - Full blood count and ESR
  - Iron studies
  - Electrolytes and renal function (especially if there is persistent vomiting, as renal failure can present with dyspepsia, nausea, and vomiting)
  - Calcium (elevated calcium may cause nausea, vomiting, and abdominal pain)
  - Fasting blood sugar
  - Liver function tests
  - Pregnancy test (should be routine in women of childbearing age)
  - Digoxin level, if digoxin toxicity is suspected
  - Helicobacter pylori serology (tests for the bacterium now accepted to cause duodenal ulcers, but does not distinguish past from present infection)
  - Gastrin levels, if there are multiple peptic ulcers, to help diagnose Zollinger-Ellison syndrome
  - Tumor markers, including carcinoembryonic antigen (for bowel cancer)
- Urine tests
- Stool tests
  - Stool sample for microscopy and culture
  - Stool occult blood test (may be positive with peptic ulcer or stomach cancer)
- Radiological investigations
  - Chest X-ray
  - Abdominal X-ray
  - Abdominal ultrasound scan: can give information about the liver, bile ducts, pancreas, and kidneys; this is the best test for gallstones
  - CT scan of the abdomen, if ultrasound suggests pathology but is undefined
  - HIDA scan: can exclude an obstructed bile duct
  - Barium swallow and barium meal: can diagnose small bowel motility disorders and obstruction of the upper gastrointestinal tract
  - Scintigraphic gastric-emptying studies: may help to diagnose delayed gastric emptying, e.g. in diabetes
  - ERCP, if jaundiced, which provides information about the bile ducts
  - Intravenous pyelogram (IVP): may suggest a perinephric abscess (abscess around the kidneys)
  - CT scan of the brain, to rule out brain tumor or brain hemorrhage
  - Angiography: useful to diagnose mesenteric artery ischemia
- Urea breath test: the test of choice for following the response to treatment of Helicobacter pylori
- Upper gastrointestinal endoscopy (with biopsy of any mass found, and for Helicobacter pylori): may be required to diagnose peptic ulcers or stomach cancer
- Endoscopic retrograde cholangiopancreatography (ERCP), if jaundiced
- Arterial blood gases (ABGs)
- Electrocardiograph (ECG)

Home Diagnostic Testing

These home medical tests may be relevant to Nausea causes:
- Food Allergies & Intolerances: Home Testing
- Digestive-Related Home Testing

Conditions listing medical symptoms: Nausea: The following list of conditions has 'Nausea' or similar listed as a symptom in our database. This computer-generated list may be inaccurate or incomplete. Always seek prompt professional medical advice about the cause of any symptom.
Select from the following alphabetical view of conditions which include a symptom of Nausea, or choose View All.

Conditions listing medical complications: Nausea: The following list of medical conditions has 'Nausea' or similar listed as a medical complication in our database.
<urn:uuid:eaef624d-86ea-4d33-a4ba-2f3812632b2c>
CC-MAIN-2016-26
http://www.rightdiagnosis.com/symptoms/nausea/tests.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00146-ip-10-164-35-72.ec2.internal.warc.gz
en
0.844298
1,056
2.84375
3
This clip is just a few minutes of a multi-hour course. Master this subject with our full length step-by-step lessons!

Lesson Summary: In this lesson, you'll learn about the concepts of angular speed and angular acceleration in physics. Essentially, these quantities are directly related to linear speed and linear acceleration. The only difference is that the angular forms apply to rotating or turning objects, while linear speed relates to motion in a straight line. Here we introduce these concepts and show how easy they are to understand once the student knows where they come from.
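For reference, the standard relations connecting the angular and linear quantities (the textbook definitions the lesson builds on, not a transcript of the clip) are:

```latex
\omega = \frac{\Delta\theta}{\Delta t}, \qquad
\alpha = \frac{\Delta\omega}{\Delta t}, \qquad
v = r\,\omega, \qquad
a_t = r\,\alpha
```

Here $\theta$ is the angle swept out (in radians), $\omega$ the angular speed, $\alpha$ the angular acceleration, $r$ the radius of the circular path, $v$ the linear (tangential) speed, and $a_t$ the tangential acceleration. The last two formulas make the "directly related" claim precise: the angular and linear quantities differ only by the factor of $r$.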
<urn:uuid:4a488b9e-097e-4824-9d56-bf0becb7faf2>
CC-MAIN-2016-26
http://www.mathtutordvd.com/public/Angular_Speed_and_Angular_Acceleration_in_Physics.cfm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402699.36/warc/CC-MAIN-20160624155002-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930855
118
4.25
4
Session 111 - Pulsars. Display session, Saturday, January 10

Although the locations of over 600 pulsars are known today, their birthplaces remain somewhat of an enigma. It has been conjectured by Narayan and Ostriker (1990) that there are two different types of pulsars in the Galaxy, one born near the Galactic plane and a second born high above the plane with large velocities. The existence of two distinct subpopulations would account for the observations by Harrison, Lyne, and Anderson (1993) of four pulsars with velocities toward the Galactic plane. Thirteen of the 31 pulsars in our group have been observed at two epochs at 20 cm at the VLA in Socorro, New Mexico. These pulsar data are unique in that they were taken at the VLA using a "matched filter": the pulsar was detected using the Princeton Mark III Timing Machine, and the correlator was then gated with the correct phase so that it only gathered data when the pulsar was "on." By gating our data, we have been able to increase the signal-to-noise ratio of weak pulsars by a factor of six. This increased SNR enables us to accurately determine positions for pulsars as weak as 2 mJy in the ungated data. By doubling the number of pulsars high above the Galactic plane with measured proper motions, we will be able to improve our understanding of the birth and evolution of the pulsar population.
<urn:uuid:b7b8d27a-5679-418a-a143-8a719a3face8>
CC-MAIN-2016-26
https://aas.org/archives/BAAS/v29n5/aas191/abs/S111017.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00024-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953854
315
2.984375
3
50 Ways to Improve Your Improvisational Skills. Guitar Educational. Improvisation and Play Along. Instructional book and examples CD. With introductory text, instructional text, musical examples, and standard guitar notation. 88 pages. Published by Hal Leonard (HL.695583).

Item Number: HL.695583
ISBN 0634021648. 9x12 inches.

This book/CD pack explores all the main components necessary for crafting well-balanced rhythmic and melodic phrases. It also explains how these phrases are put together to form cohesive solos. Many styles are covered (rock, blues, jazz, fusion, country, Latin, funk, and more), and all of the concepts are backed up with musical examples. The 50 ideas are divided into five main sections:

- The Basics: covers fundamental but all too often forgotten techniques, such as slurs and vibrato, that can breathe new life into your phrases
- Melodic Concepts: explores various aspects of melodic phrasing, such as motifs, chromaticism, and sequences
- Harmonic Embellishments: discusses the melodic potential of harmonic intervals (dyads), chords, and chord partials
- Rhythmic Concepts: explores various aspects of rhythmic phrasing, such as accents, free-time phrasing, and metric modulation, and how they pertain to melodic soloing
- Solo Structure: all of the topics discussed in the book come together to help form the big picture

The companion CD contains 89 demos for listening, and most tracks feature full-band backing. Also available for keyboard, tenor saxophone, and trumpet.
<urn:uuid:136179aa-c359-403d-b7ea-332026edfff8>
CC-MAIN-2016-26
http://www.sheetmusicplus.com/look_inside/4507112
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.868036
325
2.625
3
We have been immersed in another round of what some like to call "public lands theater," the seemingly endless war over who is best suited to manage, or perhaps even own, the federal land estate of the United States. Last year the Arizona legislature tried to demand almost all the federal lands within its boundaries, even the Grand Canyon. The legislature submitted the demand to a vote of Arizonans, and lost 66% to 33%. Perhaps the notion of "Grand Canyon State Park" put some people off. It put me off, but then, I was a ranger at Lees Ferry once.

Utah began heading down the same path, thought better of it, and instead called first for a study of the various issues and problems regarding the management of federal lands. Idaho and New Mexico have also considered similar moves and strategies; certainly an unbiased study makes sense. What I would like to do here is outline some starting points for conversations that might move things in a positive direction.

1. These lands are public lands managed by the national government. Congress can decide that the states ought to manage them, by transferring such lands. This is the only way to do this. States cannot "demand" them, declare federal management unconstitutional, or "require" the national government to hand them over. The supremacy clause of the US Constitution, and court understandings that federal lands are federal property, make that clear.

2. The National Environmental Policy Act (NEPA) and forest planning regulations have created frustrating decision gridlock. Most observers on all sides of land management debates, and Forest Service professionals, think there is a lot of room for reform.

3. The proposition that "the states can do a better job" makes no sense. They could not, if they had to follow all the laws and procedures that federal land agencies must follow. Because state land is managed under a different legal mandate, we are comparing apples to oranges.
A proper way to test this assertion would be to hold all management mandates the same.

4. Not every acre under federal management needs to be so. The system was not created through some sort of rational land designation process. Conversely, certain newer concerns, such as the protection of biodiversity, confront the fact that federally managed protected areas do not always align with where the species are. Boundaries should be open to alteration.

5. Collaboration is slow and painful, and it is often, but not always, working. More timber is coming off some national forests in Idaho due to collaborative agreement. The people involved in the Idaho collaboratives have learned a great deal and have insights on how to make effective recommendations and where the landmines lie. I have watched them work and applaud their efforts. They are practicing democracy. It is true elsewhere too.

6. Many easterners, including the media, often don't know the difference between national parks and national forests, and they haven't a clue about the Bureau of Land Management. Nonetheless they feel that these are public lands, partly theirs. They might be receptive to the concerns of westerners with extensive federal land in their backyards, and to the lost revenues that such land often means, but we have yet to discover a common starting point for these conversations.

Essays in the Range blog are not written by High Country News. The authors are solely responsible for the content. John Freemuth is a professor of political science and public policy at Boise State University. Photo of Grand Canyon National Park courtesy Flickr user krandolph.
<urn:uuid:b4bc3bb8-8f3a-45e8-9a4a-db61f4c77364>
CC-MAIN-2016-26
http://www.hcn.org/blogs/range/grand-canyon-state-park/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00167-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966048
704
2.84375
3
14, 19, 29, ...

Series teasers are where you try to complete the sequence of a series of letters, numbers, or objects.

What are the next two numbers in this series?

14, 19, 29, 40, 44, 52, 59, 73, __, __

Hint: It has to do with the digits of the numbers.

Answer: The next two numbers are 83 and 94. The pattern is: take the two digits of each number, like the 1 and 4 in "14," and add them together (to get 5 in this case). Then add that digit sum back to the number itself: 14 plus 5 equals 19, which is the next number.
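The rule in the answer can be checked with a short script (my own illustration, not from the puzzle page), which regenerates the series along with its next two terms:

```python
def next_term(n):
    """Add a number's digit sum to itself: 14 -> 14 + (1 + 4) = 19."""
    return n + sum(int(d) for d in str(n))

def series(start, count):
    """Generate `count` terms of the series beginning at `start`."""
    terms = [start]
    while len(terms) < count:
        terms.append(next_term(terms[-1]))
    return terms

print(series(14, 10))
# [14, 19, 29, 40, 44, 52, 59, 73, 83, 94]
```

The last two values printed match the answer: 83 and 94.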
<urn:uuid:36bccd44-593d-4c90-bcea-072967e94551>
CC-MAIN-2016-26
http://www.braingle.com/brainteasers/8442/14-19-29-.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00193-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92182
187
2.6875
3
Spring is here, and with it often comes what people perceive to be allergies. Runny noses, itchy eyes, and low-grade fever are just some of the symptoms people find themselves dealing with. But hope is not lost! You do NOT have to suffer. Before a resolution can be developed, though, it's important to get to the root cause of the problem, rather than just fighting the symptoms.

Allergies are actually sensitivities to a stimulant or trigger: food, chemicals, and/or the environment. Organ congestion, parasites, yeast overgrowth, emotional triggers, and a toxic environment commonly cause allergies. The symptoms of so-called allergies are nothing more than a reflex, the body's reaction to a burdened state of toxicity and overload. Therefore, what people think of as allergies are really the body's way of showing its inability to cope with its state of toxicity and organ burden.

If you are like most people who suffer from allergies, you may have turned to prescription medications or over-the-counter remedies, such as antihistamines, decongestants, nasal sprays, eye drops, and inhalers, to manage your allergies. Some of you try to avoid the triggers of allergies and may have resigned yourselves to living in a confined environment full of air purifiers and other filtered-air systems. Many of you may also have tried allergy shots.

My solution is to see which organs are in need of cleansing and strengthening (tonifying). This is done with QRA (Quantum Reflex Analysis), a simple, painless, and effective way to examine the layers of what's going on with your allergies. Also, though not commonly known, allergies can often be related to emotional traumas. For example, a person may be eating broccoli and then see their pet run over. Years later, they may become acutely allergic to broccoli. There are ways to safely and effectively reintroduce that food to the body. QRA can also begin to identify your allergens and recommend what nutritional support might be needed.
I hope this post helps you realize that you don't have to suffer with allergies or rely on ineffective over-the-counter medications. Have you suffered from allergies? What methods have you found that work for you? Have you ever tried an all-natural approach? I welcome comments and questions.

Pain is actually a physical symptom telling the body something is wrong. It often manifests itself in the body as a response to poorly treated past injuries; overexertion; common, everyday work-related activities; and inactivity. Stress, emotional life, or even spiritual issues can also result in physical pain. Typically, traditional medicine treats the symptom of pain, but not its root cause. Responding to pain by taking over-the-counter or prescription medications treats the symptom but does not address the cause of the pain.

Take back pain, for example. Western society faces a huge epidemic of low back pain due to continuous sitting, inactivity, and poor diet. Uninterrupted sitting and inactivity can create a neuromuscular response in the body that shortens muscle length, resulting in decreased range of motion and, often, pain. Muscle shortening and decreased range of motion then create a biomechanical dysfunction within the joints, causing anatomical discrepancies in leg length and pelvis alignment. These anatomical discrepancies lead to spinal misalignment, which in turn causes pain in the limbs.

In an effort to survive, the body seeks to protect itself by correcting or compensating for any imbalances. However, with back pain, the body does not know how to go to the source of the pain, so its first response is to protect the injured area. The body's second response is to begin to repair the injured area nutritionally. To repair the injured area, the body responds to pain by tightening, reducing the range of motion, creating inflammation to protect and immobilize the painful area, and producing endorphins and other chemicals that serve as a mask.
The body then compensates in its posture and biomechanics to move away from the source of the pain. Since the body's survival mechanism is to keep the eyes level and the body coordinated within its surroundings, it creates distortions (twisting of the torso, shortening of the leg, tilting of the neck, lowering or moving the shoulder forward, rotation of the pelvis) which lead to more spinal and neurological stress.

As a therapist, it is crucial to understand biomechanics and our relationship to gravity. When a person's posture is compromised, their relationship with gravity is far from the path of least resistance. My job as a practitioner is to release the shortened muscles that are distorting a person's posture, whatever the circumstance, and to show the patient various movements and stretches to help improve their posture. They can then move with the least pressure and distortion on their joints, which releases the pressure on their nerves. This, in turn, releases their pain in most cases and improves their relationship with gravity. Clients have reported time and time again how much more energy and ease they feel after making those changes.

The declining health of most Americans can be readily observed simply by noticing the number of commercials on TV that talk about cancer research, high blood pressure and cholesterol, depression, and on and on. We have heard the alternative health doctors and professionals talk about the adverse side effects of vaccines and of foods made up of genetically modified organisms (GMOs). We've all heard the arguments about organic versus nonorganic food, the raw diet, the vegan diet, and every other diet. And we have heard about the ill effects cell phones and Wi-Fi have upon us, and of course the ill effects of mind over body. It seems that every which way we move these days we are being accosted with messages about some health issue or another.
We can't live in our homes because of the off-gassing, electrical wiring, cell phones, and the food in our refrigerators. Then we step into our cars to go to work and again we are accosted by EMFs and emissions. Next, we step into our places of employment, which are filled with cell phones, Wi-Fi, and microwave ovens, which we're also told can harm us. Then at the end of the day we drive home and repeat it all over again. But amid the many messages we receive about how to try to be healthy and happy, we hear nothing about going to the root causes of our personal health issues. Sorry to sound so defeatist, but I needed to set the foundation for what I'm about to share.

I have been involved in the alternative health field since the late '70s, when practicing chiropractic adjustments was illegal in Massachusetts, and people were told that acute or chronic pain due to soft tissue injury was all in their heads. Fortunately, we have come a long way. For years I have worked with people on their various physical injuries and sundry health issues ranging from A to Z. Many people would get well and many would not. It was not until I started working with the Biofield (the subtle aspect of our body) and the compromising effects of physical trauma, referred to as an interference field (IF), that things really started to make sense.

When we have an injury, be it a concussion, stitches, a broken bone, or something as small as a vaccination, it is considered an insult to the Biofield. Yes, it is important to repair the broken bone and address the concussion, but it is equally important to attend to the body's post-trauma state as well. In my field I have found that when the Biofield has received an insult that is then healed, most of us go on about our business forgetting, and in most cases not knowing about, and therefore not addressing, our subtle body. Our subtle body is the part that communicates and supports the true quality of our health.
If we do not address the issues of trauma to the body, then brain/body communication starts to break down, nutrition is not as effective, healing takes longer, and other issues begin to surface for no apparent reason.
<urn:uuid:28474315-7392-4c90-ac3d-a1147c9e06f9>
CC-MAIN-2016-26
http://peterhowehealer.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963565
1,699
2.53125
3
Amblyopia ("lazy eye")

Affecting one in 25 to 50 people, amblyopia is a condition in which the visual function of an eye is underdeveloped. Usually the vision of the other eye is normal, although at times amblyopia can affect both eyes. Amblyopia is most likely to be successfully corrected if detected and treated during infancy or early childhood. This disorder, like others that affect visual development, calls for early and regular visual examinations. Young children are not always aware of having one good eye and one impaired eye, and parents have no way of recognizing the problem unless the underdeveloped eye is obviously abnormal.

Any factor that prevents clear vision during infancy or childhood promotes amblyopia. The chief causes are:
- Strabismus (misaligned eyes)
- Unequal focus (an asymmetrical refractive error)
- Cloudiness in normally clear eye tissues, such as corneal opacities and cataracts
- Obstruction of the visual axis by droopiness of the eyelid

Because young children are often difficult to examine, pediatric ophthalmologists use a variety of methods to measure visual function and determine the existence of amblyopia and its cause. Once amblyopia is detected, the brain must be encouraged to process visual information from the affected eye. This is frequently accomplished by applying a patch over the child's good eye. Eye drops are also sometimes used to treat amblyopia.

If left untreated, visual acuity in an amblyopic eye may be permanently reduced, and a lifetime of poor and uncorrectable vision could result. This can become an even more significant and disabling problem if the remaining healthy eye ever becomes diseased or injured. Unfortunately, once a child has reached roughly nine years of age, treatment rarely is successful.
<urn:uuid:c99f3351-6b83-4abc-9a4b-5b680cd77e94>
CC-MAIN-2016-26
http://bascompalmer.org/specialties/pediatric-ophthalmology/amblyopia-lazy-eye
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00050-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940949
376
3.734375
4
VIB researcher Mohamed Lamkanfi, affiliated with Ghent University, discovered that mice that do not produce the receptor protein NLRP6 are better protected against bacterial infections and can clear bacteria from the body more easily. Therapeutic drugs that neutralize NLRP6 could be a possible treatment option, in addition to the use of antibiotics, for fighting bacterial infections. His research was published in the leading scientific journal Nature. Mohamed Lamkanfi (VIB, Ghent University): "Our lab investigates the role of innate immunity, which is of crucial importance in protecting the body against bacteria and other pathogens. We started looking for genetic mutations that lead to an increased sensitivity to infections. Our research showed that mice with NLRP6 are less resistant to bacteria and have greater difficulty removing the bacteria from the body. NLRP6 plays a disastrous role in the entire process."
A new therapeutic track?
The discovery by Mohamed Lamkanfi is not insignificant. The first-line treatment for bacterial infections is, and will remain, antibiotics, of course. However, due to the intensive use of antibiotics in the fight against infections, bacterial resistance to this group of medicines is also growing. Sometimes this makes it much more difficult for doctors to treat a patient effectively. Mohamed Lamkanfi: "Our search for products that help the immune system in the fight against bacterial infections is very important. In spite of the availability of antibiotics, bacterial infections continue to pose a serious threat to public health all over the world. Now that we have exposed the role of the receptor protein NLRP6 in the immune response, we can start thinking about its clinical application for the treatment of bacterial infections. A vaccine seems, in my opinion, less suitable, but a medicine that neutralizes the receptor protein NLRP6 is a possibility. Not in the immediate future, of course, but it certainly deserves further research."
Contact: Sooike Stoops
VIB (the Flanders Institute for Biotechnology)
<urn:uuid:73e816b2-5333-4720-8e84-d4b13da20081>
CC-MAIN-2016-26
http://www.bio-medicine.org/biology-news-1/Toward-an-alternative-for-antibiotics-to-fight-bacterial-infections-3F-25668-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00146-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931259
407
3.0625
3
MOSIX: A Cluster Load-Balancing Solution for Linux

MOSIX schedules newly started programs on the node with the lowest current load. However, if the machine with the lowest load announced itself to all the nodes in the cluster, then every node would migrate newly started jobs to it, and soon enough that node would be overloaded. MOSIX does not operate in this manner. Instead, every node sends its current load status to a random list of nodes. This prevents a single node from being seen by all other nodes as the least busy and prevents any node from being overloaded. How does MOSIX decide which node is the least busy among all the cluster nodes? This is a good question; the answer, however, is a simple one. MOSIX comes with its own monitoring algorithms that detect the speed of each node, its used and free memory, as well as the IPC and I/O rates of each process. This information is used to make near-optimal decisions on where to place processes. The algorithms are very interesting because they try to reconcile different resources (bandwidth, memory and CPU cycles) based on economic principles and competitive analysis. Using this strategy, MOSIX converts the total usage of several heterogeneous resources, such as memory and CPU, into a single homogeneous cost. Jobs are then assigned to the machine where they have the lowest cost. This strategy provides a unified algorithm framework for the allocation of computation, communication, memory and I/O resources. It also allows the development of near-optimal on-line algorithms for allocating and sharing these resources. MOSIX uses its own filesystem, MFS, to make all the directories and regular files throughout a MOSIX cluster available from all nodes as if they were within a single filesystem. One of the advantages of MFS is that it provides cache consistency for files viewed from different nodes by maintaining one cache at the server disk node.
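The randomized dissemination strategy described above can be sketched in a few lines. This is an illustrative toy model, not MOSIX's actual kernel algorithm: the node names, the `fanout` parameter and the load numbers are all invented for the example.

```python
import random

# Toy model of MOSIX-style load dissemination (illustrative only):
# each node announces its load to a small RANDOM subset of peers,
# so no single node is seen by everyone as the least busy at once.

class Node:
    def __init__(self, name, load):
        self.name = name
        self.load = load            # this node's own current load
        self.known = {name: load}   # partial view of cluster loads

    def gossip(self, peers, fanout=2):
        # Announce current load to a random subset of peers, not to all.
        for peer in random.sample(peers, min(fanout, len(peers))):
            peer.known[self.name] = self.load

    def place_job(self):
        # A new job goes to the least-loaded node in this node's partial view.
        return min(self.known, key=self.known.get)

nodes = [Node("n%d" % i, load=i) for i in range(5)]
for n in nodes:
    n.gossip([p for p in nodes if p is not n])

# Each node decides from its own partial view, so different nodes may
# choose different targets for newly started jobs.
targets = {n.name: n.place_job() for n in nodes}
```

Because every node's `known` map is only a partial view, the placement decisions stay decentralized, which is the point of the random-list announcement scheme the article describes.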
MFS meets the direct file system access (DFSA) standards, which extend the capability of a migrated process to perform some I/O operations locally, in the current node. This provision reduces the need for I/O-bound processes to communicate with their home node, thus allowing such processes (as well as mixed I/O and CPU processes) to migrate more freely among the cluster's nodes, for load balancing and parallel file and I/O operations. This also allows parallel file access by proper distribution of files, where each process migrates to the node that has its files. By meeting the DFSA provision, allowing the execution of system calls locally in the process' current node, MFS reduces the extra overhead of executing I/O-oriented system calls of a migrated process.

In order to test MOSIX, we set up the following environment: 1) a cabinet that consists of 13 Pentium-class CPU cards running at 233MHz with 256MB of RAM each; and 2) a Pentium-based server machine, PC1, running at 233MHz with 256MB of RAM. This machine was used as an NFS and DHCP/TFTP server for the 13 diskless CPUs. When we start the CPUs, they boot from the LAN and broadcast a DHCP request to all addresses on the network. PC1, the DHCP server, will be listening and will reply with a DHCP offer, sending the CPUs the information needed to configure network settings such as the IP addresses (one for each interface, eth0 and eth1), gateway, netmask, domain name, the IP address of the boot server (PC1) and the name of the boot file. The CPUs will then download and boot the file specified in the DHCP configuration, which is a kernel image located under the /tftpboot directory on PC1. Next, the CPUs will download a ramdisk and start three web servers (Apache, Jigsaw and Tomcat) and two streaming servers (Real System Server and IceCast Internet Radio). For this setup, we used the Linux kernel 2.2.14-5.0 that came with Red Hat 6.2.
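The DHCP/TFTP boot sequence described above corresponds to a server-side configuration along the following lines. This is a hypothetical ISC dhcpd fragment, not the authors' actual file: the subnet, addresses and filename are placeholders invented for illustration.

```
# /etc/dhcpd.conf on PC1 -- illustrative placeholders only
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.11 10.0.0.23;           # addresses for the 13 diskless CPUs
    option routers 10.0.0.1;             # gateway handed to each CPU card
    option subnet-mask 255.255.255.0;    # netmask
    option domain-name "cluster.local";  # domain name (placeholder)
    next-server 10.0.0.1;                # boot (TFTP) server, i.e. PC1
    filename "/tftpboot/bzImage";        # kernel image under /tftpboot
}
```

Each diskless CPU card's broadcast request matches this subnet declaration, and the offer carries exactly the fields the article lists: address, gateway, netmask, domain name, boot server and boot file.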
At the time we conducted this activity, MOSIX was not available for Red Hat; thus, we had to port MOSIX to work with the Red Hat kernel. Our plan was to prepare a MOSIX cluster that consists of the server, PC1, and the 13 diskless CPUs. For this reason, we needed to have a MOSIX-enabled kernel on the server, and we wanted to have the same MOSIX-enabled kernel image under the TFTP server directory to be downloaded and started by the CPUs at boot time. After porting MOSIX to Red Hat, we started the modified MOSIX installation script “mosix.install”, which applied the patches to the 2.2.14-5.0 kernel tree on PC1. Once we finished configuring the kernel and enabling the MOSIX features (using make xconfig), we compiled it to get a kernel image:

cd /usr/src/linux
make clean ; make dep
make bzImage ; make modules ; make modules_install

Next, we copied the new kernel image from /usr/src/linux/arch/i386/boot to /boot and we updated the System.map file:

cp /usr/src/linux/arch/i386/boot/bzImage /boot
cp /usr/src/linux/arch/i386/boot/System.map /boot/System.map.mosix
ln /boot/System.map.mosix /boot/System.map

One of the configuration files that was modified was lilo.conf. We added a new entry for the MOSIX kernel to make the server boot as a MOSIX node by default. The updated lilo.conf on PC1 looked like Listing 1. Having done that, we needed to complete the configuration steps. In /etc/profile, we added one line to specify the number of nodes in the MOSIX cluster:

# Add to /etc/profile
NODES=1-14

We created /etc/mosix.map, which allows the local MOSIX node to see all other MOSIX nodes. The mosix.map looked as follows:

# Starting node    IP         Number of nodes
1                  x.x.x.x    13
14                 y.y.y.y    1

We created the /mfs directory to be used as a mount point for the MOSIX filesystem. We added mosix.o to /lib/modules/2.2.14-5.0/misc/ so it can be loaded at boot time by the MOSIX startup file. Then we applied the same modifications to the ramdisk that will be downloaded by the diskless CPUs at boot time.
Once we completed these steps, we rebooted PC1, and when it was up and running, we rebooted the diskless CPUs. After reboot, the diskless CPUs received their IP addresses, booted with the MOSIX-enabled kernel, and downloaded the ramdisk using the TFTP protocol. Et voilà! All 14 nodes mounted /mfs as the MOSIX filesystem directory. Figure 2 shows a snapshot of /mfs on CPU10.
<urn:uuid:d5bf4522-c6ef-41a5-b7eb-e733ff9a4ed3>
CC-MAIN-2016-26
http://www.linuxjournal.com/article/4546?page=0,1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00201-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907046
1,866
2.5625
3
Photographs of the Warsaw Ghetto
by Sybil Milton

Ulrich Keller, ed. The Warsaw Ghetto in Photographs: 206 Views Made in 1941. New York: Dover Publications, 1984. 160 pages.

Two images have come to symbolize the complex series of events now known as the Holocaust: the first shows a child with his hands raised in surrender in the Warsaw ghetto; the second shows a British bulldozer burying skeletons in mass graves at Bergen-Belsen. Both images were part of the official administrative record of the perpetrators and the liberators; the first, of German provenance, was the twelfth of 53 photos appended to the official report about the liquidation of the Warsaw ghetto in the spring of 1943; the second was produced by British camera crews after the liberation of Bergen-Belsen.1 These two photos also reflect the nature of the surviving photographic record: the largest volume of material consists of the official and private snapshots of German professional and amateur cameramen; a somewhat smaller body of material was made by the liberators; and the smallest quantity was made by the Jewish victims and the resistance. Although more than two million photos exist in the public archives of more than 20 nations, the quality, scope, and content of the images reproduced in scholarly and popular literature have been very repetitive during the past 40 years and largely derived from Gerhard Schoenberner's classic volume The Yellow Star, first published in 1960.2 Ulrich Keller's study and reproductions of the unofficial photographs made by several members of propaganda company 689 in the winter of 1941 in the Warsaw ghetto is a welcome addition to the growing literature linking the concerns of Holocaust research with those of serious analysts of photographic communication.3 The first photographic publications after 1945 appeared under Allied military auspices.
In the spring of 1945, the Office of War Information, in conjunction with the Supreme Commander of the Allied Military Forces, published a slender illustrated brochure entitled Concentration Camp, containing 38 images from Bergen-Belsen, Buchenwald, Gardelegen, Nordhausen, and Ohrdruf.4 The gruesome scenes from the liberated camps were to serve as an instantaneous emotional indictment of the Nazi regime, and it was believed that the shock value of the published photographs would prove salutary for the denazification and the re-education of the civilian populace. Also, more than 250 Nazi photos were published in the official proceedings of the International Military Tribunal at Nuremberg.5 These images had been an integral part of the administrative record of the Final Solution and were printed as appendices to the German reports about the murder of Galician Jews, the destruction of the Warsaw Ghetto, and the facilities at Auschwitz and Janovska.6 The photos were accompanied by minimal captions, usually in the language of propagandists, that were found on the original documents. Details providing precise dates, locations, and personalities were usually absent.7 Although these materials were submitted as prosecution evidence to the International Military Tribunal, the Nuremberg staff neither interrogated nor planned to question Nazi military and propaganda photographers about their official and candid snapshots. The only photographer to testify at Nuremberg was the Spanish concentration camp prisoner François Boix, who identified several photos from Mauthausen and Gusen and described the work of the photo section of the Mauthausen identification department between 1941 and 1945.8 In the immediate postwar period, survivors also published photographic accounts and documentation that they had collected and preserved during the war.9 Many of the memorial albums included photographs of Jewish life in Eastern Europe before and during the Nazi occupation.
One such album published by the Documentation Center of the Central Union of Jewish Religious Communities in Slovakia contained most of the images rediscovered and republished in 1981 under the title Auschwitz Album.10 The work of Jewish D.P. organizations and historical commissions provided the transition to the major institutional publications about the Holocaust. During the 1950s, Yad Vashem Martyrs' and Heroes' Memorial Authority, the Jewish Historical Institute in Poland, the Institute for Contemporary History in Munich, and YIVO Institute for Scientific Research in New York published serials and monographs about the Holocaust.11 Few of these basic historical books contained photographs. This can be attributed only in part to the economics of publishing. During the 1950s, Allied teams in Washington and London studied the more than 1,600 tons of captured German documents transferred into their custody for analysis and microfilming. The massive amount of these records was daunting, and the problems of photographic documentation were secondary. Moreover, most historians were not formally trained in the evaluation and interpretation of visual sources and thus were uncomfortable with the nuances of the photographic record. They sorted and cataloged more than one million photographs in very large bulk categories, such as "Atrocities-Eastern Front." Unlabelled images and gaps in photographic sequences were common and considered irreparable. The photographs had obviously suffered damage from destruction by perpetrators seeking to hide their complicity in Nazi crimes, from Allied bombing, vandalism, and postwar souvenir looting. Moreover, the overabundance of textual sources inhibited research and publication from the accompanying visual sources.12 A number of coincidental factors led to the resurgence of interest in these historical photographs during the 1960s and 1970s.
The rise of a new generation unfamiliar with the history of the Holocaust contributed to the demand for images that would provide explanations, immediacy, and authenticity. Hollywood had already discovered the Holocaust and presented a bowdlerized version of reality in dramatic films such as The Diary of Anne Frank (1959), Exodus (1960), and Judgment at Nuremberg (1961).13 Coincidentally the television coverage of the Eichmann trial in Jerusalem was widely viewed in the United States and the need to understand the perpetrators as well as the victims was reflected in academic literature such as Raul Hilberg's Destruction of the European Jews (1961), Isaiah Trunk's Judenrat (1972), and Reuben Ainsztein's Jewish Resistance in Nazi-Occupied Europe (1975).14 Similar developments in cinema and academic research were also taking place in Western Europe and Israel.15 The trend toward a starkly realistic understanding of the concentration camps continued during the 1970s. The post-Vietnam and post-Watergate era was generally also nostalgic and witnessed a boom in ethnicity and genealogy that in turn reinforced interest in old historic photographs. In a parallel development, the literature about concerned photography and photojournalism rapidly achieved sophistication as American audiences began to understand the potential subjective and political manipulation of visual symbols.16 The result of these many simultaneous developments was a more sophisticated reappraisal of visual material from the Holocaust.
Initially, art of the Holocaust received more systematic scholarly attention, although new approaches to understanding Holocaust photography were apparent in books about the Warsaw and Lodz ghettos and their photographers.17 It was within this framework that Lucy Dawidowicz criticized BBC television for its 1968 documentary The Warsaw Ghetto, assembled from still photos and film footage made by Nazi propaganda teams in 1941 and 1942.18 While Ulrich Keller uses these same images for his book The Warsaw Ghetto, he handles the problem of the reliability and legitimacy of his sources in an intelligent 25-page introductory essay, where he carefully differentiates the ideological biases implicit in the propagandist photographers' choice of subject.19 He reflects on the nuances of propaganda photography, but draws sharp contrasts between "outside photography" (i.e., the illustrations in his volume), however sympathetic, and "inside photography" such as the clandestine Mendel Grossman snapshots made in the Lodz ghetto. He also explains that in official Nazi photography there is enough sophistication to account for sympathetic and neutral photos, which adds a valuable corrective to Dawidowicz's parochial outlook. Keller selected his images from the record group Bild 101 located in the Bildarchiv [Photo Archive] of the West German Federal Archives in Koblenz.20 Since many of these images were only identified with minimal captions on the back surfaces of the photos and most daily log books and data sheets of propaganda units had been lost, he provides additional information where street signs, posters, and other data on the images themselves allow him to augment the obvious.
The 206 images are divided into eight categories: 1) Warsaw ghetto administration (including medical, postal, and food coupon facilities as well as the Jewish ghetto police); 2) labor inside the ghetto and forced labor outside the ghetto; 3) the amusements of the ghetto elite; 4) worship; 5) street scenes (including market scenes, charity food programs, and street vendors); 6) portraits of ghetto inhabitants (including beggars and children); 7) burials and death (including victims of hunger and typhus); and 8) one small section of 18 images from the Lodz ghetto, showing the physical layout, street scenes, and ghetto labor. The contrived sense of oppressive normality is most apparent in the smallest subsection on the "amusements of the ghetto elite."21 Unfortunately Keller did not realize that many images taken by PK 689 in Warsaw were published in the Berliner Illustrirte Zeitung in July 1941.22 A correlation of published and unpublished photographs would have revealed how easily any image could be propagandized through appropriately slanted captions and texts. Despite this failure to check the contemporary illustrated Nazi press, Keller's volume is an important study and perhaps an indispensable supplement to volumes without illustrations, such as Yisrael Gutman's The Jews of Warsaw, 1939-43,23 and it is an important addition to any Holocaust library collection. Supplementing Keller's presentation of the official photography of the Warsaw and Lodz ghettos are candid images recently published by Joe Heydecker, an anti-Nazi photographer, who was stationed with the same propaganda unit 689 in Warsaw, where he worked in the photo laboratory.24 The coincidence that Keller's and Heydecker's images come from the same unit at the same date makes these volumes complementary. Heydecker recorded the ghetto in candid photos made in January and February 1941, snapped from his barracks roof and inside the ghetto. 
Heydecker explained why he documented the Warsaw ghetto: "I photographed for myself and at my own risk, fortunately without the knowledge of any official body. I felt simultaneously torn by shame, hate, and powerlessness."25 Heydecker's compassionate imagery is similar in quality and perspective to the work of Mendel Grossman. Heydecker's unique photographs include the ghetto lending library, bookshops, and portraits of street children in the ghetto. Even the fragmentary photographic record of the Warsaw Ghetto is widely scattered in the archives of eight nations on three continents.26 The problem is magnified when one considers the whole range of topics commonly understood to comprise the Holocaust. The potential travel costs to correlate the available resources would be prohibitive. In the attempt to create cost-effective technical solutions, Yad Vashem published a microfiche of 15,000 photographs from their archive. Despite the best of intentions, this is a particularly unfortunate example of photographic publication marred by utterly inadequate labels.27 Other compendia published in book form, such as La Déportation, Deutsche Chronik, and The Hitler File offer hundreds of images of Holocaust photography, and place the Warsaw ghetto in a broader geographical and substantive context.28 Some concentration camp memorial authorities have also produced excellent photographic source books about their particular sites. The best examples are the respective volumes about Dachau and Auschwitz.29 Both contain a few token images from Warsaw. If every memorial museum issued similar volumes, it would certainly make a substantial dent in our ignorance of the visual record of the Holocaust. It is clear that the unfinished agenda for a history of Holocaust photography is larger than what has been accomplished since World War II, and it is to be hoped that more volumes like Ulrich Keller's book on the Warsaw ghetto will appear in the near future. 1.
The Stroop Report: The Jewish Quarter of Warsaw Is No More, trans. and annotated by Sybil Milton (New York, 1979), photo 12 in unpaginated photo section; and "Out of the archives: the horror film that Hitchcock couldn't bear to watch," The Sunday Times (London), 19 Feb. 1984, p. 5. 2. Gerhard Schoenberner, Der gelbe Stern (Hamburg, 1960); and idem, The Yellow Star (rev. exp. ed.; London, 1969). 3. Sybil Milton, "The Camera as Weapon: Documentary Photography and the Holocaust," SWC Annual 1 (1984): 45-68; Harry James Cargas, "Holocaust Photography," Centerpoint: A Journal of Interdisciplinary Studies 4, no. 1 (Fall 1980): 141-50; Zosa Szajkowski, An Illustrated Sourcebook of the Holocaust, 3 vols. (New York, 1977); and Diethart Kerbs, Walter Uka, and Brigitte Walz-Richter, Die Gleichschaltung der Bilder: Zur Geschichte der Pressefotografie 1930-1936 (Berlin, 1983). 4. U.S. Office of War Information (on order of the Supreme Commander of the Allied Military Forces), KZ: Bildbericht aus fünf Konzentrationslagern (n.p., 1945); facsimile edition published by the Witness to the Holocaust Project, Emory University, Atlanta, 1983. 5. Trial of the Major War Criminals before the International Military Tribunal [Blue Series], 42 vols. (Nuremberg, 1947-49) [hereafter cited as TMWC]. 6. TMWC, 37: 391-431 (Doc. L 018); 26: 628-94 (Doc. PS 1061); and 30: 357-472 (Doc. PS 2430). 7. The captions in The Stroop Report are typical. Photo 12 is captioned: "Pulled from the bunkers by force." 8. TMWC, 6: 263-78. Boix was born in 1920 in Barcelona and died in France during the 1960s. He was a Spanish news photographer who fled after the Civil War to France, where he was subsequently captured as a prisoner of war in 1940. Biographical data provided by Dr. Helmut Fiereder, Deputy Director of the Mauthausen Museum Archive, Vienna, August 1984. 9.
Centralna Żydowska Komisja Historyczna, ed., Zagłada Żydostwa polskiego: Album Zdjęć (Lodz, 1945); Jerzy Andrzejewski, Warszawa 1939-45 (Warsaw, 1946); and International Information Office for the Former Concentration Camp Dachau, comp., Dachau Album (Dachau, 1946). 10. F. Steiner, ed., The Tragedy of Slovak Jewry (Prague, 1949); see Auschwitz Album, ed. Lili Meier and Peter Hellmann (New York, 1981). Duplicate photographs are located today in the State Jewish Museum in Prague and in Yad Vashem. 11. Henry Friedlander, "Publications on the Holocaust," in The German Church Struggle and the Holocaust, ed. Franklin H. Littell and Hubert G. Locke (Detroit, 1974), pp. 69-94. 12. See Robert Wolfe, ed., Captured German Documents and Related Records (Athens, OH, 1974); John Mendelsohn, "The Holocaust: Records in the National Archives on the Nazi Persecution of the Jews," Prologue 16, no. 1 (Spring 1984): 22-39; Mayfield S. Bray and William T. Murphy, Audiovisual Records in the National Archives relating to World War II (Washington, DC, 1974); and Josef Henke, "Das Schicksal deutscher zeitgeschichtlicher Quellen in Kriegs- und Nachkriegszeit: Beschlagnahme, Rückführung, Verbleib," Vierteljahrshefte für Zeitgeschichte 30 (1982): 557-620. See also Thomas Trumpp, "Zur Geschichte, Struktur, und Nutzung der photographischen Überlieferungen des Bundesarchivs: Bildarchiv, Bildsammlung, oder Bildagentur," Der Archivar 36 (1983): 365-79. 13. Annette Insdorf, Indelible Shadows: Film and the Holocaust (New York, 1983), pp. 7-10. 14. Raul Hilberg, The Destruction of the European Jews (Chicago, 1961); Isaiah Trunk, Judenrat: The Jewish Councils in Eastern Europe under Nazi Occupation (New York, 1972); and Reuben Ainsztein, Jewish Resistance in Nazi-Occupied Europe (New York, 1974). 15.
Representative dramatic films include: Jacob The Liar (German Democratic Republic, 1978); The Garden of the Finzi-Continis (Italy, 1970); The Last Metro (France, 1980); and The Tin Drum (Federal Republic of Germany, 1979). Documentary films include Marcel Ophuls' The Sorrow and The Pity (1970) and The Memory of Justice (1976). Representative examples of the new literature are: Uwe Dietrich Adam, Judenpolitik im Dritten Reich (Düsseldorf, 1972) and H.G. Adler, Der verwaltete Mensch: Studien zur Deportation der Juden aus Deutschland (Tübingen, 1974). 16. See Gisèle Freund, Photography and Society (Boston, 1980); Susan Sontag, On Photography (New York, 1978); and Allan Sekula, "On the Invention of Photographic Meaning," in Thinking Photography, ed. Victor Burgin (London, 1983), pp. 84-109. 17. For art, see Janet Blatter and Sybil Milton, Art of the Holocaust (New York, 1981); Janina Jaworska, Nie wszystek umrę (Warsaw, 1975); and Hachiro Sakonishi and Age Shuppan, eds., Ecce Homo (Tokyo, 1980), in Japanese. New literature on photography of the Warsaw Ghetto includes G. Deschner, Menschen im Getto (Gütersloh, 1969) and on the Lodz Ghetto, Mendel Grossman, With a Camera in the Ghetto (Tel Aviv, 1970). 18. Lucy Dawidowicz, "Visualizing the Warsaw Ghetto: Nazi Images of Jews Refiltered by the BBC," Shoah (New York) 1, no. 1 (1978): pp. 5-6. Dawidowicz is incorrect in her assertion that the Nazis "made no photographs of schools which the Jews operated for their children ... [or] of Jewish cultural activities, of the secret lending libraries." In Lodz, the Jewish ghetto administration prepared four albums in 1942, containing images of schools, hospitals, and recreation areas. Bibliographical entries for these publications are located in Guide to Jewish History under Nazi Impact, eds. Jacob Robinson and Philip Friedman, YIVO and Yad Vashem Joint Documentary Projects, Bibliographical Series No. 1 (New York, 1960), p. 320, entries 3577a and 3577b.
Dawidowicz could obviously not have consulted the candid photographs published after her article appeared showing the Warsaw ghetto lending libraries and book carts; see Joe J. Heydecker, Where is Thy Brother Abel? Documentary Photos of the Warsaw Ghetto (São Paulo, Brazil, 1981). 19. Keller, The Warsaw Ghetto, pp. xvii-xxi. 20. Ibid. Keller's photographic selection is from record group Bild 101/111, which contains ca. 775 images, including 720 images taken in the winter of 1941 by Albert Cusian, Erhard Josef Knobloch, and Wiesemann as part of their official propaganda duties for PK unit 689 then stationed in Warsaw, and ca. 55 images by Zermin in the Lodz ghetto during the spring of 1941. The images selected by Keller show atypical and relatively unprejudiced photography by Nazi cameramen, in contrast to the Adolf vom Bomhard record group, Bild 121, which contains more biased images by the photographic section of the Order Police in Warsaw, Cracow, Kattowitz, and Kielce. Information based on my visit to the Bundesarchiv/Bildarchiv Koblenz, Sept. 1984. 21. Keller, The Warsaw Ghetto, pp. 39-47. 22. "Juden unter sich (Bericht über das Warschauer Getto)," in Berliner Illustrierte Zeitung, no. 30 (24 July 1941), pp. 790-93, contains 12 photos by Knobloch, 5 by Cusian, and 1 by Helmut Koch. 23. Yisrael Gutman, The Jews of Warsaw, 1939-1943: Ghetto, Underground, Revolt, trans. Ina Friedman (Bloomington, 1982). 24. Joe J. Heydecker, Das Warschauer Getto (Munich, 1983). 25. Ibid., pp. 22-23. See Sybil Milton, in SWC Annual 1: 49-51, 55-57. 26. The American repositories include the National Archives, the Library of Congress, YIVO, and the Leo Baeck Institute. Sources in West Germany are at the Bundesarchiv/Bildarchiv, Bildarchiv Preussischer Kulturbesitz, and Ullstein Bilderdienst. In East Germany, a partial list of repositories would include the ADN official news agency, the Museum for German History, and the German Central Archives in Potsdam.
In Poland, one would visit the Historical Museum of the City of Warsaw, the Jewish Historical Institute, the Pawiak Prison Museum, and the archives of the Auschwitz and Treblinka memorials. In Austria, photographs of the Warsaw Ghetto are found in the Documentation Archives of the Austrian Resistance; in France, the Centre de Documentation Juive Contemporaine and the agency FNDIRP hold relevant images; and in England, the Imperial War Museum, the Public Records Office, the Wiener Library, and the Sikorski Institute could be consulted. In Israel materials are located in Yad Vashem, Moreshet, and the Kibbutz Lochamei HaGhettaot. There are also private news agencies with relevant images in the United States and the Soviet Union. 27. Yad Vashem, Archives of the Destruction: A Photographic Record of the Holocaust (Jerusalem, 1981) contains 15,000 images on microfiche; however, more than 60 percent of the entries are either unidentified, misidentified, or obscurely listed without date and photographer's name or nationality. The fiche collection also incorporates many images from the Bundesarchiv and other repositories without informing the reader that the originals are not in the Yad Vashem collections. 28. Fédération Nationale des Déportés et Internés Résistants et Patriotes, La Déportation (Paris, 1978); Heinz Bergschicker, ed., Deutsche Chronik: Alltag im Faschismus, 1933-1945 (Berlin, 1983); and Günther Bernd Ginzel, ed., Jüdischer Alltag in Deutschland, 1933-1945 (Düsseldorf, 1984). The latter, although attempted from a Jewish point of view, has no discernible organizing principle and the sequence of illustrations and text is somewhat haphazard. For comparable English-language pictorial source books, see Szajkowski, An Illustrated Sourcebook of the Holocaust and Frederic V. Grunfeld, The Hitler File: A Social History of Germany and the Nazis, 1918-1945 (New York, 1979). 29.
Kazimierz Smolen, ed., KL Auschwitz: Documentary Photographs (Warsaw, 1980) is a multilingual volume with all captions and texts in French, German, English, Russian, and Polish. See Barbara Distel and Ruth Jakusch, ed., Concentration Camp Dachau, 1933-1945 (Brussels and Munich, n.d.). Simon Wiesenthal Center-Museum of Tolerance Library & Archives For more information contact us at (310) 772-7605 or [email protected]. We are located at 1399 S. Roxbury Drive, Los Angeles, CA 90035, 3rd Floor
In the late afternoon light along the Peruvian coast, local workmen gather as archaeologists Miłosz Giersz and Roberto Pimentel Nita open a row of small sealed chambers near the entrance of an ancient tomb. Concealed for more than a thousand years under a layer of heavy adobe brick, the mini-chambers hold large ceramic jars, some bearing painted lizards, others displaying grinning human faces. As Giersz pries loose the brick from the final compartment, he grimaces. “It smells awful down here,” he splutters. He peers warily into a large undecorated pot. It’s full of decayed puparia, traces of flies once drawn to the pot’s contents. The archaeologist backs away and stands up, slapping a cloud of 1,200-year-old dust from his pants. In three years of digging at this site, called El Castillo de Huarmey, Giersz has encountered an unexpected ecosystem of death—from traces of insects that once fed on human flesh, to snakes that coiled and died in the bottoms of ceramic pots, to Africanized killer bees that swarmed out of subterranean chambers and attacked workers. Plenty of people had warned Giersz that excavating in the rubble of El Castillo would be difficult, and almost certainly a waste of time and money. For at least a century looters had tunneled into the slopes of the massive hill, searching for tombs containing ancient skeletons decked out in gold and wrapped in some of the finest woven tapestries ever made. The serpent-shaped hill, located a four-hour drive north of Lima, looked like a cross between the surface of the moon and a landfill site—pitted with holes, littered with ancient human bones, and strewn with modern garbage and rags. The looters liked to toss away their clothing before they returned home for fear of bringing sickness from the dead to their families. But Giersz, an affable 36-year-old maverick who teaches Andean archaeology at the University of Warsaw, was determined to dig there anyway. 
Something important had happened at El Castillo 1,200 years ago, Giersz was sure of that. Bits of textiles and broken pottery from Peru’s little-known Wari civilization, whose heartland lay far to the south, dotted the slopes. So Giersz and a small research team began imaging what lay underground with a magnetometer and taking aerial photos with a camera on a kite. The results revealed something that generations of grave robbers had missed: the faint outlines of buried walls running along a rocky southern spur. Giersz and a Polish-Peruvian team applied for permission to begin digging. The faint outline turned out to be a massive maze of towers and high walls spread over the entire southern end of El Castillo. Once painted crimson red, the sprawling complex seemed to be a Wari temple dedicated to ancestor worship. As the team dug down beneath a layer of heavy trapezoidal bricks in the fall of 2012, they discovered something few Andean archaeologists ever expected to find: an unlooted royal tomb. Inside were interred four Wari queens or princesses, at least 54 other highborn individuals, and more than a thousand elite Wari goods, from huge gold ear ornaments to silver bowls and copper-alloy axes, all of the finest workmanship. “This is one of the most important discoveries in recent years,” says Cecilia Pardo Grau, the curator of pre-Columbian art at the Art Museum of Lima. While Giersz and his team continue to excavate and explore the site, analysis of the finds is shedding new light on the Wari and their wealthy ruling class. Emerging from obscurity in Peru’s Ayacucho Valley by the seventh century a.d., the Wari rose to glory long before the Inca, in a time of repeated drought and environmental crisis. They became master engineers, constructing aqueducts and complex canal systems to irrigate their terraced fields. Near the modern city of Ayacucho they founded a sprawling capital, known today as Huari. 
At its zenith Huari boasted a population of as many as 40,000 people—a city larger than Paris at the time, which had no more than 20,000 inhabitants. From this stronghold the Wari lords extended their domain hundreds of miles along the Andes and into the coastal deserts, forging what many archaeologists call the first empire in Andean South America. Researchers have long puzzled over exactly how the Wari built and governed this vast, unruly realm, whether through conquest or persuasion or some combination of both. Unlike most imperial powers, the Wari had no system of writing and left no recorded narrative history. But the rich finds at El Castillo, a journey of some 500 miles from the Wari capital, are filling in many blanks. The foreign invaders probably first appeared on this stretch of coast around the end of the eighth century. The region lay along what was then the southern frontier of the wealthy Moche lords, and it seems to have lacked strong local leaders. Just how the invaders launched their offensive is unclear, but an important ceremonial drinking cup discovered in El Castillo’s imperial tomb depicts poleax-wielding Wari warriors battling coastal defenders brandishing spear throwers. When the fog of war lifted, the Wari were in firm control. The new lord constructed a palace at the foot of El Castillo, and over time he and his successors began transforming the steep hill above into a towering temple devoted to ancestor worship. Cloaked in nearly a thousand years of rubble and wind-borne sediment, El Castillo today looks like a huge stepped pyramid, a monument built from the bottom up. But from the beginning Giersz suspected that there was more to El Castillo than met the eye. To tease out the building plan, he invited a team of architecture experts to examine the newly exposed staircases and walls. 
Their studies revealed something that Giersz had suspected—that Wari engineers began construction along the very top of El Castillo, a natural rock formation, and eventually worked their way downward. They adapted this method from elsewhere, says Krzysztof Makowski, an archaeologist at the Pontifical Catholic University of Peru in Lima and the El Castillo project’s scientific adviser. “In the mountains the Wari made agricultural terraces, and they started at the top.” As they moved downward, they cut into the slopes to make a tier of platforms. Along the summit of El Castillo the builders first carved out a subterranean chamber that became the imperial tomb. When it was ready for sealing, laborers poured in more than 30 tons of gravel and capped the entire chamber with a layer of heavy adobe bricks. Then they raised a mausoleum tower above, with crimson walls that could be seen for miles around. The Wari elite left rich offerings in small chambers inside, from the finely woven textiles that ancient Andean peoples valued more highly than gold; to knotted cords known as khipus, used for keeping track of imperial goods; to the body parts of the Andean condor, a bird closely associated with the Wari aristocracy. (Indeed, one title of the Wari emperor may well have been Mallku, an Andean word meaning “condor.”) At the center of the tower was a room containing a throne. In later times looters reported to a German archaeologist that they found mummies arrayed in wall niches there. “We are pretty sure this room was used for the veneration of the ancestors,” says Giersz. It may even have been used for venerating the emperor’s mummy, yet to be discovered by the team. To rub shoulders in death with members of the royal dynasty, nobles staked out places on the summit for mausoleums of their own. 
When they exhausted all the available space there, they engineered more, building stepped terraces all the way down the slopes of El Castillo and filling them with funerary towers and graves. So important was El Castillo to the Wari nobles, says Giersz, that they “used every possible local worker.” Dried mortar in many of the newly exposed walls bears human handprints, some left by children as young as 11 or 12 years old. When the construction ended, likely sometime between a.d. 900 and 1000, an immense crimson necropolis loomed over the valley. Though inhabited by the dead, El Castillo conveyed a powerful political message to the living: The Wari invaders were now the rightful rulers. “If you want to take possession of the land,” says Makowski, “you have to show that your ancestors are inscribed on the landscape. That’s part of Andean logic.” In a small walled chamber along the western slopes of the necropolis, Wiesław Więckowski hunches over a mummified human arm, brushing sand away from its gaunt fingers. For the better part of an hour now the University of Warsaw bioarchaeologist has been clearing this part of the chamber, collecting debris from a Wari funerary bundle and looking for the rest of the body. It’s slow, delicate work. As he edges his trowel into the corner of the room, he exposes part of a human femur lodged in a jagged hole in the wall. Więckowski frowns in disappointment. Looters, he explains, probably tried to haul the mummy out from an adjacent room and literally pulled it to pieces. “All we can say is that the mummy was a male person and quite old.” A specialist in the study of human remains, Więckowski has begun analyzing the skeletons of all the individuals found in and near the imperial tomb. Preservation of human soft tissue in the sealed chamber was poor, Więckowski says, but his studies are starting to fill in key details of the lives and deaths of the highborn women and their guardians. 
Almost all of those buried inside the chamber were women and girls who had likely died over a period of months, most probably of natural causes. The Wari treated them in death with great respect. Attendants dressed them in richly woven tunics and shawls, painted their faces with a sacred red pigment, and adorned them with precious jewelry, from gold ear flares to delicate crystal-beaded necklaces. Then mourners arranged their bodies in the flexed position favored by the Wari and wrapped each in a large cloth to form a funerary bundle. Their social rank, says Więckowski, mattered as much in death as it did in life. Attendants placed the highest ranked women—perhaps queens or princesses—in three private side chambers in the tomb. The most important, a female of about 60, lay surrounded by rare luxuries, from multiple pairs of ear ornaments to a bronze ceremonial ax and a silver goblet. The archaeologists marveled at her wealth and conspicuous consumption. “This lady, what was she doing?” muses Makowski. “She was weaving with golden instruments, like a true queen.” Beyond, in a large common area, attendants arranged the lesser noblewomen along the walls. Beside each, with few exceptions, they laid a container roughly the size and shape of a shoe box. Made of cut canes, it stored all the weaving tools needed to create high-quality cloth. Wari women were consummate weavers, producing tapestry-like cloth with yarn counts higher than those of the famous Flemish and Dutch weavers of the 16th century. The noblewomen buried at El Castillo were clearly dedicated to this art, creating textiles of the finest quality for the Wari elite. When the chamber was ready for sealing, attendants brought the last offerings up the slopes of El Castillo: human sacrifices. There were six individuals in all, three children—including what might be a nine-year-old girl—and three young adults. It’s possible, says Więckowski, the victims were the offspring of the conquered nobility. 
“If you are the ruler and want people to prove their loyalty to the lineage, you take their children,” he says. When the killings were done, attendants threw the corpses into the tomb. Then they closed the chamber, placing the wrapped corpses of a young adult male in his prime and of an older woman at the entrance as guards. Each body had lost a left foot, perhaps ensuring that they couldn’t desert their posts. Więckowski is awaiting the results of DNA analyses and isotopic tests to learn more about the females in the tomb and where they might have come from. But for Giersz the evidence is all beginning to add up to a detailed picture of the Wari invasion of the north coast. “The fact that they built an important temple here, on a prominent piece of land along the former borders of the Moche, strongly suggests that the Wari conquered the region and intended to stay.” In a quiet back room at the Art Museum of Lima, El Castillo’s archaeologists beam as they examine some of the newly cleaned finds. For weeks now conservators have been stripping away the thick, black patina that coated many of the metal artifacts, revealing glimmering designs. Cushioned in tissue paper are three gold ear ornaments, each roughly the size of a doorknob and bearing the image of a winged deity or mythical being. Team member Patrycja Prządka-Giersz, a University of Warsaw archaeologist who is married to Giersz, looks them over in delight. These adornments, she says, “are all different, and we can only see them after conservation.” Peering inside a large cardboard box on the table, Giersz finds one of the team’s prize discoveries: a ceramic pilgrim’s flask. Richly painted and decorated, the flask depicts a sumptuously dressed Wari lord voyaging by balsa raft across coastal waters teeming with whales and other sea creatures. 
Found among the cherished grave goods of a dead queen at El Castillo, the 1,200-year-old flask seems to portray an event—partly mythical, partly real—in the history of the north coast, the arrival of an important Wari lord, possibly even the Wari emperor himself. “And so we are starting to make a story of the Wari emperor who takes to the sea in a raft,” says Makowski with a smile, “an emperor who dies on the Huarmey coast accompanied by his wives.” For now it is only a story, an educated archaeological guess. But Giersz, the maverick who saw the buried outlines of walls where others saw only looters’ rubble, still thinks that the tomb of a great Wari lord may lie somewhere in the maze of walls and subterranean chambers. And if the looters haven’t beaten him to the punch, he intends to find it.
This page lists all the various symbols in the Adinkra symbols category. Adinkra are visual symbols, originally created by the Ashanti of Ghana and the Gyaman of Côte d'Ivoire in West Africa, that represent concepts or aphorisms. Adinkra are used extensively in fabrics, pottery, logos, and advertising. They are incorporated into walls and other architectural features. Fabric adinkra are often made by woodcut sign writing as well as screen printing. Adinkra symbols appear on some traditional Akan gold weights. The symbols are also carved on stools for domestic and ritual use. Tourism has led to new departures in the use of the symbols in such items as T-shirts and jewelry. The symbols have a decorative function but also represent objects that encapsulate evocative messages conveying traditional wisdom, aspects of life, or the environment. There are many different symbols with distinct meanings, often linked with proverbs. In the words of Anthony Appiah, they were one of the means in a pre-literate society for "supporting the transmission of a complex and nuanced body of practice and belief".
One of the most common computer audio problems is that you are unable to hear any sound. There can be many possible reasons for experiencing no-sound problems on Windows Vista. Let us explore these and learn how to resolve them.
- The first thing you need to ensure is that your computer has a sound card installed. To do this, click the Start button, and then in the Start Search box, type Device. Next, select Device Manager from the Programs list. In the Device Manager window, check the Sound, video and game controllers category to see if an entry for a sound card is there. Many times, sound cards are displayed in the Other devices category, so you must check this also to see if your sound card is listed there. At times, you may find an entry for your sound card, but it may be marked with a yellow triangle containing a black exclamation mark. This symbol tells you that there is a problem with your sound card. To find out what is wrong, right-click your sound card, and then select Properties. Here, check the Device Status box on the General tab for information about problems with your sound card. If there is a problem with the device driver, open the Driver tab. On this tab, you may perform various tasks, such as updating the driver, uninstalling a faulty driver, or rolling back a recent driver update, to resolve driver-related issues.
- Many computers, especially laptops and notebooks, come with built-in speakers. However, at times you may want to use external speakers for better sound quality. Very simple mistakes may prevent the speakers from working. So, the following are a few simple things you need to do to ensure that your speakers are working fine and the sound output through them is good:
- Ensure that your speakers are connected to the power source and are switched on. A simple task, but one that many users often forget.
- Ensure that the speakers are plugged into the correct slots on your computer. For instance, a speaker with a USB cable goes into a USB port, while speakers with a 1/8-inch cable plug into the round jack that is usually present at the back of your computer. Many computers also provide a sound-card jack in the front to make the PC more user-friendly.
- Also, check that all connectors are properly seated in their sockets. If they are not inserted properly, you may hear noise in the sound output.
- You should see a volume icon in the bottom-right corner of your taskbar. Right-click this icon and ensure that you have not set the volume to mute. Also, if you are working on a laptop or notebook, there might be a sound switch on the body of the system; check this switch to ensure that it is turned on.
- Sound to your speakers is cut off automatically if headphones are connected to the line-out jack of your sound card. So, check the line-out jack to ensure no headphones are connected to it.
Your computer may also behave erratically if there is some problem with the Windows registry or the system is infected with a troublesome virus or spyware. To rule out these problems, use a reliable registry tool, such as RegServe, to perform a thorough registry scan and cleanup. Next, use antimalware tools, such as STOPzilla Antivirus and Spyware Cease, to scan your PC and weed out all infections.
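The checks above follow a deliberate order: hardware first (sound card, driver), then cabling and power, then software settings. As a rough illustration only — the condition names and messages below are invented for this sketch and are not part of any Windows API — the checklist can be expressed as an ordered series of tests that reports the first one that fails:

```python
def diagnose_no_sound(state):
    """Return the first likely cause of a no-sound problem, following the
    article's order: sound card, driver, power, cabling, mute, headphones.

    `state` maps observation names to booleans; any check not observed
    defaults to its "good" value, so partial observations still work.
    """
    checks = [
        # (this check failed, reported cause)
        (not state.get("sound_card_listed", True),
         "no sound card listed in Device Manager"),
        (state.get("driver_warning", False),
         "driver problem - update, uninstall, or roll back the driver"),
        (not state.get("speakers_powered", True),
         "speakers not plugged in or not switched on"),
        (not state.get("cables_seated", True),
         "connectors loose or in the wrong jacks - reseat them"),
        (state.get("muted", False),
         "volume is muted - unmute via the taskbar volume icon"),
        (state.get("headphones_in_line_out", False),
         "headphones in the line-out jack are cutting off the speakers"),
    ]
    for failed, cause in checks:
        if failed:
            return cause
    return "hardware checks passed - scan the registry and for malware next"
```

Each entry mirrors one bullet in the text; because the list is ordered, a driver warning is reported before a mute setting, just as the article checks hardware before software.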
Settlement of Monterey County following the Hispanic Period was at first concentrated around the residences, inns, and commercial establishments of the earlier Hispanic settlers. As ranchos were subdivided and settlers applied for preemption to public lands, clustered farms appeared in the canyon mouths that opened on to the Salinas Valley. A source of water was of primary concern to the settlers who located around the "Upside-Down" Salinas River, and those who settled away from the easily dug wells and pools of the bottomlands were completely dependent on springs. The broad valleys containing former Mission San Antonio lands and the public domain that surrounded the lands were settled quickly in the 1860s and 1870s. The towns of Jolon and later Lockwood became south county centers for commerce and social activities. With the extension of the Southern Pacific Railroad down the Salinas Valley to Soledad in 1874, townsites grew south from Salinas, promoted by their founders with advertising in eastern and European newspapers and notices. A county directory compiled within a year of the extension of rail service to Soledad offered the following description of the landscape and its people during that transitional period. Chualar was founded about nine miles south of Salinas on the ranch of David Jacks, a controversial figure who was heavily involved in the transfer of rancho holdings to Americans during the 1860s, and who was certainly the wealthiest individual in the county by 1880 through his shrewd and exploitive real estate ventures. In 1875, it was noted that Chualar City boasted 51 persons, a hotel, stores, restaurants, a shoeshop, a blacksmith shop, and a freight depot, all of which had been nonexistent a year before. Gonzales was originally a stop on the Southern Pacific Railroad. The Southern Pacific Railroad laid tracks through the area in 1872, and later a depot was erected to allow trains to stop for freight and passengers.
The original town, consisting of 50 blocks, was planned in 1874 by Mariano and Alfredo Gonzales on the land granted to their father, Teodoro Gonzales, in 1836. Twenty years later, in 1894, the earliest recorded population of Gonzales was 500 residents. Soledad marked the end of the Southern Pacific line, and at this point passengers transferred to the Coast Line Stage Company. The stage road left the marshy Salinas Valley to follow the Arroyo Seco, with the first horse changing station at Last Chance, fifteen miles from Soledad. Three miles further along the road was the Gulch House inn, operated by Mr. Thompson. Four miles beyond Thompson's was the store and hotel of A.E. Walker. San Antonio, or Lowe's Station as it was also known, was four miles beyond Walker's, where passengers could get supper and sleep. Ten miles further the stage reached the village of Jolon, which was by that time a substantial settlement dominated by the two-story adobe Dutton Hotel. The stage continued to Pleito Station, where it was noted that 43 persons had recently arrived from Kentucky to take up farming land. Pleito now lies beneath the waters of San Antonio Lake. Harris Valley with its fine grazing land was six miles west of Pleito, and beyond that was Sapaque Valley, where three families worked farmland and grazing lands of 1,000 acres. The directory went on to describe other features and townsites. Quicksilver and gold mines were described ten miles northeast of Jolon, and note was made of the new community of Rootville six miles northeast of Soledad. Here Samuel Brannan, who had brought news of the 1848 American River gold discoveries to San Francisco, and H. Higgins had invested in a gold mine and brought in 32 settlers. Three mining companies were in operation at Rootville, the Robert Emmet, Comet, and Bambridge. 
The location of Murietta Stronghold was described five miles north of Rootville in a narrow, boulder filled canyon, while well known Indian caves 18 miles northwest of Jolon were also believed to be a Murietta and Vasquez gang rendezvous point. The physical description of the caves seems to fit Wagon Cave, a Santa Lucia range historic landmark used as a rest point on the trail from the Monterey County coast to King City (CA-MNT-307). Wagon Cave also contains evidence of prehistoric occupation. In the hills east of the Salinas Valley the directory author noted that Mormons had settled Long Valley and had built up productive farms, and that Peachtree Valley was headquarters for eleven farmers and four stockraisers, with one shepherd listed among them. The 1878 directory listings noted substantially increased growth in the rural areas. Merchants and businesses in the town of Chualar were operated predominately by Danes, and the town boasted three hotels. In addition to the thirteen in-town businesses, 36 farms operated by individuals, families, and partnerships were listed for the Chualar post office. The settlement of Imusdale in the Cholame hills was center for 34 stockraising and agricultural operations, while thirteen ranches were listed for Long Valley and forty for Peachtree Valley. The commercial district of Jolon boasted two grocers, a butcher, a blacksmith, a harness maker, and a constable as well as the general merchandise, post office, and Wells Fargo station operated by George Dutton in his adobe hotel. The outlying communities were again described in a promotional county history published in 1881. San Antonio continued as a stage stop at the eastern end of Jolon Valley, while Jolon was described as the southernmost settlement in the county. Harris Valley and Sapaque Valley were described as fine grazing and grain acreage with few settlers. 
The Ray, Harris, and Liddle families were early settlers of Harris and Sapaque Valleys, according to oral histories compiled by descendant Rachel Gillett. These settlers are tabulated in the 1880 census, with Ray and Harris cultivating barley, wheat, and corn and raising swine and poultry on their farms, while Liddle was involved in sheepraising as well as the same types of grain cultivation. Elliott and Moore described Peachtree Valley settlements in 1881, noting that the village of Peachtree contained two saloons, a hotel, store, post office, and blacksmith shop. Peach Tree Ranch was by that time a Miller and Lux operation, consisting of 1,500 acres in grain in the fourteen mile length of the ranch, a ranch headquarters complex that was a small village in itself, and tenants farming on shares in the lower end of the valley. Local historian Valance Heinsen has chronicled the growth of Jolon, noting that it had its beginnings as a home remodeled to an inn as early as 1850, then further remodeled to the two-story Dutton Hotel in 1876. A Chinese population attracted to mining ventures in the area operated a laundry in Jolon in the 1850s. The village experienced a growth spurt with Dutton's remodeling of the inn, and a dance hall and community church were added between 1876 and 1879. A community hall, school, granary, and several new houses were constructed by 1888. Several large horse barns and a smithy were added in the early 1890s, along with a detached post office and a telephone office. Several farmers moved into town in the 1890s, further expanding the population and offsetting losses brought about by the closing of the Los Burros mines. Former San Antonio Mission grazing and agricultural lands in the San Antonio Valley had been quickly appropriated as ranchos in the 1830s and 1840s, and were in turn greatly desired by American and European investors with the passage of the 1851 Land Act. 
The old road connecting the missions linked several fine old adobe ranch headquarters through the valley, and as travelers increased on the Stage Road, so did interest in the rancho lands, which by that time were surrounded by homesteads and being infringed upon by squatters. San Francisco and London land agents purchased the vast spreads from the financially beaten Hispanic owners, locking up much of the land in widespread stockranging from the early 1870s. One of the ugliest chapters in south county history took place during this period, when Faxon Dean Atherton, a San Francisco area financier and land investor, purchased Rancho Milpitas immediately upon its title clearance in the San Francisco court. He then sent notice to evict the squatters on the land, most of whom were settlers on improved lands awaiting preemption, and who included George Dutton and others who had believed they owned property in the town of Jolon. Efforts at an appeal and lobbying in Washington by the settlers failed, and in 1877 Atherton's son was sent with the sheriff to remove the occupants and repossess their homes. The wealthier among them repurchased their properties, but many moved on. Five of the former mission ranchos were eventually consolidated and in 1922 sold to Hearst's Piedmont Land and Cattle Company. The Lockwood area was settled almost entirely by former neighbors and relatives from the Island Fohr, located off the west coast of German Schleswig-Holstein in the North Frisian Islands. The first arrivals purchased 160 acre plots from earlier homesteaders, and brought or sent for relatives to help expand the acreage and work the farms. Several early families in the Lockwood area are now in their third generation of farming the family holdings, some of which have been considerably expanded to several thousand acres. 
The coastal regions of southern Monterey County were isolated from settled regions to the north (Big Sur) and south (Cambria) because of the precipitous terrain, and were more closely tied to commercial and social affairs of the San Antonio Valley-Jolon-Lockwood area than to other coastal communities. A mail road, actually a horse trail, led from Jolon through present-day Fort Hunter Liggett lands to the Santa Lucia divide, where several trails led down to the coast or to the mining camps in the mountains. Settlers from the Lucia area and south to Pacific Valley followed trails over the mountains that rendezvoused at Wagon Cave (CA-MNT-307) on the San Antonio River, where horseback travelers switched to wagons stored there for the purpose of hauling provisions from King City and Jolon. Soledad remained the southern terminus of the Southern Pacific Railroad until 1885, when construction was begun to carry the line south to San Miguel. Throughout the 1870s homesteaders continued to locate along the watered canyons and high valleys of the coast ranges. The Paraiso Springs area, once a retreat for Soledad Mission, became a part of the public domain and was settled in several contiguous 160-acre tracts. The settlers who came to Paraiso found that in spite of the $40 paid to a locator they had taken up railroad lands, and that their only source of drinking water was privately owned. Public records show that most of those filing claims to Paraiso area lands did so in the 1873-1877 period. The settlers compensated for the water problem by purchasing and hauling water from the owner, and by digging cisterns to catch runoff. One homesteader located on top of a ridge overlooking the Paraiso and Mission districts hand dug a well 4 feet on a side to 400 feet deep, shoring it with hand-sawn lumber and carrying a candle to warn of poisonous gasses. His efforts were in vain, although the dry well still remains in the southeastern quadrant of Township 18 South, Range 5 East.
Most of the settlers found the conditions were more than they could bear, and their purchases were consolidated by later settlers. One of these, ancestor to a present-day Paraiso resident who provided an oral history to the Monterey County Parks Department, purchased fourteen older homesteads to combine with his own in 1882. The Paris Valley area west of San Lucas was settled as early as the 1860s by French and Basque, although the largest number of French, Basque, and Swiss immigrants arrived in the late 1880s and early 1890s to establish farms, stockranches, and dairies. The Parkfield area in the Cholame Hills was settled in 1854 by the Imus brothers, and drew settlers by way of Slack Canyon and Peachtree Valley through the 1860s to 1880s. A sawmill, brick kiln, and hotel had been constructed by 1887, when the Parkfield Land Company of San Francisco was intensively promoting the healthful aspects of living in the remote country. Descendants of the early settlers note that the area was served by a circuit-riding minister out of San Miguel from the 1880s population boom until 1917, when an Episcopal church was built with donated land, funds, and labor and used as a community church. Nearly all the collected history of the outlying canyons such as Bitterwater, Cholame, Hames Valley, and other remote regions of southern Monterey County is contained in untranscribed oral histories taken by members of the San Antonio Valley Historical Association. Descendants of early settlers such as Brodie Reiwerts of Hames Valley provide a rare glimpse of the concerns of rural life in the isolated valleys, where the yearly cycle included conformance to Danish values of work and play, harvests that required a man's 15-hour work day of a 13-year-old boy, picnics and social gatherings with other Hames and Sapaque Valley families, and close commercial ties with the service centers of San Luis Obispo County at Paso Robles and San Luis Obispo rather than those of the Salinas Valley.
The south county regions are very poorly documented and, although they would provide rich material for a study of the settlement process in marginal rural landscapes, have not been of interest to those doing settlement studies. The Cachagua area was a submarginal area similar to Paraiso Springs in terms of water and soils. Homesteaders were drawn to the region early in the 1870s, but few stayed to build up holdings. During the recession of the 1890s, people from the Salinas Valley again migrated into the Cachagua to take up small holdings and carry out subsistence farming. Jamesburg was established as a stage stop on the rough stage road to Tassajara Hot Springs in 1885, during a period when the springs were heavily promoted.

Colony settlement schemes were very much a part of the settlement history of Monterey County, receiving their push from the development of irrigation canals in the late 1890s. In 1897 German promoters Lang and Dorn offered ten-acre parcels in St. Joseph's Colony southeast of Salinas in conjunction with Claus Spreckels' newly constructed sugar beet refinery. The colony contained a post office, store, school, and church in addition to a number of dwellings, and offered a German community to its residents in addition to Spreckels as a guaranteed buyer of their beet crops. The colony was heavily promoted in the German language in eastern cities and western centers. The inexperience of the farmers, the limited acreage, and fluctuating beet prices, along with dishonest promotional practices, killed the Colony within eight years. In 1898, Claus Spreckels supported the formation of the Salvation Army agricultural commune of Fort Romie Colony on acreage situated close to Soledad Mission. The Salvation Army subdivided the property into ten-acre parcels and recruited impoverished unemployed city-dwellers in an idealistic attempt to "return the landless man to the manless land."
Settlers were bound by contract to repay the Army over a ten-year period for housing, seed, and supplies provided. The attempt unfortunately coincided with a severe drought, and required intervention by the Salvation Army in the construction of an irrigation system. By 1903 there were 70 colonists working under contract to Spreckels. The small size of the parcels prevented any real success, and the parcels were eventually sold to Spreckels or consolidated by Swiss dairy farmers and others moving into the area. Rancho Arroyo Seco was the setting for a third colony, that of the California Home Seekers Association. Clark Colony was sold in twenty-acre parcels, irrigation canals were drawn from the Salinas River, and hedgerows of eucalyptus were planted for windbreaks on the windswept Salinas plain. The irrigation experiment was a success, and the Colony is now the town of Greenfield.

- Breschini, G.S., T. Haversat, and R.P. Hampson, A Cultural Resources Overview of the Coast and Coast-Valley Study Areas [California] (Coyote Press, Salinas, CA, 1983).
Brain and Mind

The Fundamental Question of Philosophy: How did the Universe start to exist?

A Method of Logic / A Case Study to elaborate and clarify the issue via an example: The Relation Between Brain and Mind!

Matter is what you can sense with your 5 senses directly. Energy is what you can sense with your 5 senses directly or indirectly (tools). Spirit is what you cannot sense with your 5 senses.

Materialism = The belief that Matters & Energies have created the Spirits.
Idealism = The belief that Spirits have made the Matters & Energies.
Agnosticism = The philosophy of Doubt. You do not know which has made which!

The Answers to The Question of Creation:

The Conservation Law of Matter & Energy in Physics: Matters & Energies have never been created and will never be destroyed, yet they always change from one form to another. The total amount of matter & energy in the universe stays the same.

Theory of Evolution: All elements in the universe have evolved through millions of years of Evolution to their present form.

Theory of Relativity: All elements are in relation to one another, everything is connected, and nothing is absolute.

The Materialist answer: The universe has taken its present form through millions of years of interaction between inner and outer matter and energies, and of matter's transformation from one form to another, which has resulted in the existence of all animate and inanimate objects and the Evolution of all life forms over a timetable of millions of years. The universe has never been created and will never be destroyed; therefore, there was never a creator. The limited mind of the average human is always in need of belief in a creator and a creation.

The Idealist answer: God is a spirit. God has always been here, is here now, and will always be here in the future. He created everything in the universe.

The Agnostic answer: We have no answer for the question of creation; we do not know what has happened!
According to Philosophical Terminology:

Brain: Is made of matter; you can sense it with your 5 senses directly.
Brain Waves: Are energy; you cannot sense them with your 5 senses directly, but you can sense them with scientific tools & measure them indirectly with machines, so you can still sense them with your 5 senses.
Mind: Is made of spirit, because you cannot sense it with your 5 senses at all.

That was the philosophical definition of all 3 above. Now analyze this:

Mind cannot exist without brain: There is no way that one can think without having a brain in the skull.

Brain can exist without the mind: Imagine the situation that you are in deep sleep, without thinking or dreaming or having a nightmare. In this situation, your brain is in your skull yet you are not thinking at all; therefore, the mind does not exist. You are in a steady state of sleep with no thoughts. Do not mix mind with brain-waves; brain-waves are there & can be measured.

So Brain can be without the mind, but Mind cannot be without the brain. Matters can "be" without the spirits, but spirits "cannot be" without the matters. Furthermore: I can be without God, but God cannot be without me = God did not create me = I created God.

This was a basic simple example analysis in logic & philosophy about Mind, Brain, Human being & God. This was a basic lesson in Philosophy for beginners, very simple to understand for all. This is called reasoning & logic for beginners, and it was in simple language so all could understand.

Now picture this: After death, when the body transfers to energy, it has nothing to do with unexplained elements or spirits. Actually when we die, eventually the body mass transfers to other simpler masses & much energy is given off, true that is, but the spirit element, which is Mind, totally dies. Remember: Mind cannot exist without the Brain! So there is nothing supernatural about it.
Now if you want to call the inner & outer Energies in the body & in the ecology, which control the whole universe, "Allah", "Lord", "Yahveh", "God" or whatever else, well, you can call it anything you want; you can even call it Jane, or Joe, or Habib. But the fact is that they are not Spirits, they are Energies; therefore, technically they cannot be called Allah or ....... (a spiritual form).

The human being is simply too selfish to accept that there is no life or reincarnation in the "Afterlife"! You will simply rot, deteriorate, turn to maggots, eventually to soil and then grass. Next thing you know, the cow will eat you & release you as feces. So your afterlife will be in feces form! There will be no rivers of milk & honey, no 70 naked houris (female angels) and 2 pretty boys, and for sure no heavenly beautiful afterlife or possibly Hellish suffering! You were not created in God's image! You came from primates, your brothers & sisters were monkeys, and you will turn to Cow Feces in your Afterlife. On the Judgment Day or as the result of Reincarnation, you will come back as Cow Feces, that's all. The sooner you face it, the sooner you will wake up from your colorful bubbly balloon life & will become a "Logical Being". Enjoy your life, cause this will be the only life as a conscious human being as yourself that you will ever get!

The nightmare of confusion, duality & Agnosticism starts when one, for instance, is a believer in Dialectical Materialism & Evolutionism, yet due to cultural or traditional background he also falls into the abyss of Philosophical Idealism or Creationism (Spiritualism)! Have you ever seen high-level scientists or doctors engaged in scientific research dealing with Evolution at work, yet as soon as they go to their private lives you might find them at the Church? Then you can understand this contradiction! A good example would be the element of the "Evolutionist Christian", a contradiction in contradiction indeed!
One cannot prove any of the religious rhetoric scientifically and via logic. Either one must believe in logic and proofs, or one must believe in religion, holy books, and the prophets who wrote them, by pure faith.

Dialectical Materialism, Materialism, and Evolution are one entity. Philosophical Idealism, Spiritualism, and Creation are another entity. You cannot mix and match the two above as you please or as you find it convenient! State your position and make a stand: Logic or Faith or Indecision? Science or Religion or Doubt? But please do not dodge through philosophical terminologies like a con artist!

a) Believe in Science and Logic, then you are an Evolutionist.
b) Believe in Religion and the Supernatural, then you are a Creationist.
c) Believe in Doubts and Indecisions, then you are an Agnostic.

Yet you cannot create an abomination of the above philosophies as you see fit & as it gets convenient! Mind is a terrible thing to be wasted on the storybooks made for adults, called the holy books.

So, what will it be?

More power to all,
Watcher in the woods
The 2009 Nobel Prize in physics has been awarded to Charles K. Kao, Willard S. Boyle and George E. Smith for breakthroughs involving the transmission of light in fiber optics and the invention of the CCD sensor. The Royal Swedish Academy of Sciences said all three have American citizenship; Kao also holds British citizenship, while Boyle is a Canadian dual national.

Kao initiated the search for and the development of the low-loss optical fiber used in optical fiber communication systems. He showed that fused silica (SiO2) had the purity required for optical communication, which led to a search to produce glass fibers with low losses. Four years after Kao's seminal 1966 paper, a research team from the Corning Glass Works used a chemical vapor deposition process to make glass fibers of fused silica with the low losses that Kao had envisioned.

Boyle and Smith invented the charge-coupled device used in many digital cameras and in advanced medical and scientific instrumentation. They conceived the device at Bell Laboratories in 1969.

The award, announced Tuesday (Oct. 6), includes a cash award of $1.4 million. The prize ceremonies will be held in Stockholm on Dec. 10.
Buzz buzz - the spelling bee learning game helps children learn about various words, sounds and patterns. Help your child discover how much fun learning can be with this innovative and enjoyable kids' activity.

Number of players:

Either show a picture or say a word and let the children determine the letter that makes the beginning or ending sound. If it is one child's turn and they do not know, they may call on a friend to help. For older children, choose words based on your whereabouts for them to spell in full.
They say there's no such thing as a free lunch, but that's about to change with the advent of food-sharing apps. At least once per week — if not once per day — you probably have some leftovers, and you're in good company. Many people have bigger eyes than their stomachs, and feel guilty about throwing away perfectly delicious meals. Composting helps assuage that guilt, but wouldn't you rather give what you don't need to someone who is in need of a nutritious meal? In an era of app startups, it's not surprising that one leading app creator based out of Seattle, Washington — appropriately called LeftoverSwap — has heeded the call. You simply input what's up for grabs and the app instantly notifies others who have signed up as recipients. With most apps, givers and receivers live close to each other and agree on a safe place to meet.

An up-and-coming trend

"Some people are shocked and find the concept absolutely disgusting while others love it, and some wonder if it's real," says Dan Newman, co-founder of LeftoverSwap, which now has more than 10,000 users around the world. "There's a large part of the population that want to do their best to share the resources we have," Newman says. This "food sharing economy" blends frugality and sustainability, much like the home-sharing nature of Airbnb. And like Airbnb, people are still getting comfortable with this idea of sharing so openly. It's an honorable undertaking considering developed countries waste around 40 percent of food. In the U.K., households throw out 20 percent of all food purchased. Fortunately, the idea is catching on in places like Germany. "Food waste has become a very hot topic here, and at the same time the sharing economy has boomed," Barbara Merhart, a coordinator for the German nonprofit Foodsharing.de, told the Guardian. "Personally, I don't buy groceries anymore. Why would you spend money on it?"
A social phenomenon

While there are certainly users who are in high need of food and wouldn't eat without food sharing, there's also an increasing number of professionals who agree with Merhart and see no point in wasting money on groceries when there are so many leftovers. What started as a forum for sharing leftovers straight from the fridge has gotten the attention of shops, bakers and even high-end restaurants. Farmers are getting involved by donating extra goods to charities that are in need of food, via apps like Cropmobster. Food sharing isn't just about charity; it's shifting into a social, community endeavor — and it's making a tangible difference. It's estimated that 33 tons of food has been shared and saved so far. Clearly, "waste not, want not" is a mantra enjoying a much needed revival thanks to technology.
Stephen Johnson Field Facts

The American jurist Stephen Johnson Field (1816-1899) was an associate justice of the U.S. Supreme Court and a powerful partisan of unimpeded business expansion.

Stephen Field was born on Nov. 4, 1816, in Haddam, Conn., the son of a Congregationalist minister. He spent 2 years in Europe and the Middle East before entering Williams College, from which he graduated in 1837. He read law in the firm of his brother David in New York City, then moved to California in 1849. The contradiction in Field's life between outrageous personal boldness and determination for law and order was a reflection of the frontier. At Yubaville (later Marysville), Calif., as justice of the peace, Field was noted for his arbitrary but firm enforcement of the law. Despite an undignified controversy with another judge, he was elected in 1857 to the state's supreme court.

Field was a Unionist in the Civil War, and in 1863 President Abraham Lincoln appointed him to the U.S. Supreme Court. Notable among his early court writings were dissents in the Slaughter-House Cases (1873) and Munn v. Illinois (1877). In the latter Field presaged his philosophy of protecting business from the competition of governmentally created monopolies and from governmental regulation. This legal philosophy, referred to as "substantive due process," expressed the idea of putting limits on government in order to preserve liberty, along with the notion that government interference in the jungle of economic competition was unnatural. Substantive due process came into full force in Field's circuit court opinion, later upheld by the Supreme Court, San Mateo v. Southern Pacific R.R. Co. (1882). Here, a corporation was defined as a "person" and was thus protected by the 14th Amendment from any deprivation of its rights by government intervention "without due process of law."
This clause was a firm barrier against regulation, and with its protection, business enjoyed a legal immunity that lasted until the 1930s. Field was a powerful voice in the Democratic party. He greatly resented being bypassed for the chief justiceship in 1888 by President Grover Cleveland. A gregarious man, he was not above sharing the hospitality of men whose corporations were engaged in litigation before the Supreme Court. After serving on the Supreme Court longer than any other justice in its history, Field resigned in 1897. He died on April 9, 1899, in Washington, D.C. Further Reading on Stephen Johnson Field Field's Personal Reminiscences of Early Days in California (1880; rev. ed. 1893) is illuminating. Carl Brent Swisher, Stephen J. Field: Craftsman of the Law (1930), is the standard biography. Also of value is the essay on Field in Robert G. McCloskey, American Conservatism in the Age of Enterprise: A Study of William Graham Sumner, Stephen J. Field, and Andrew Carnegie (1951). Additional Biography Sources The Fields and the law: essays, San Francisco: United States District Court for the Northern District of California Historical Society; New York: Federal Bar Council, 1986.
Today's crazy busy lifestyle can damage our central nervous system, especially if we are not taking proactive steps to counteract the perpetual negative effects on our body. An overtaxed nervous system typically expends most of its resources just defending the body against attack, whether it is in the form of anxiety, panic, or stress; they all deplete your energy reserves and have the potential to harm your endocrine system. There is a simple way to counteract this damage: eat foods that minimize the amount of time and energy your nervous system spends in catabolic mode, foods that nourish, heal, and regenerate your body. Here are seven powerful foods that can help calm your nervous system naturally for maximum health:

1) Whey

Whey is naturally rich in L-tryptophan, which assists in the production of serotonin, which in turn regulates endocrine, digestive, nervous system, and blood health, and a wide range of healing amino acids and nutrients, which are excellent for calming your nervous system. Whey is also rich in L-glutamine, a non-essential amino acid that is the precursor to gamma-aminobutyric acid (GABA), a substance that helps regulate the nervous system and promote calmness.

2) Sweet potatoes, yams

This starchy vegetable has bulk to keep you satisfied and an impressive nutrient roster; with high levels of vitamins A, C, and B, sweet potatoes are a nutritionally dense food that can help calm your nerves, eliminate stress, and lower your blood pressure. Yams contain an array of nutrient compounds that feed the glandular system and promote respiratory, urinary, and nervous system health.

3) Bananas

This handy, easy-open packaged fruit is packed with high doses of potassium, magnesium, vitamin B6, and other nutrients that help boost production of digestion-enhancing mucous, as well as promoting feelings of happiness and calm in the body. They assist production of serotonin and melatonin, which are crucial regulators of mood and sleeping patterns.
4) Green tea

Green tea contains the amino acid L-theanine, which enhances mood by stimulating production of alpha waves in the brain; it naturally reduces stress and promotes relaxation.

5) Dark chocolate

Dark chocolate and cacao are packed with antioxidants, essential minerals, and protective plant-nutrient flavanols; they also contain L-tryptophan and magnesium, a mineral widely recognized for its ability to calm the nervous system. Dark chocolate also contains the neurotransmitter anandamide, which has the ability to alter dopamine levels in the brain, causing a sense of peace and relaxation.

6) Brazil nuts

Said to be nature's richest source of selenium, these nuts are incomparable when it comes to relaxing the nervous system.

7) Spinach

Spinach is a great way to obtain fat-soluble vitamins that build protective layers to shield your nerves from damage, including vitamin K, helping your brain and nervous system function as they should.

Source: 101 Best Super Foods, Betsy A. Hornick, MS, RD, 2011. Edited 7/1/14 by Stephanie Dawson.
Posted: Jan 16, 2013

Light from the darkness

(Nanowerk News) An evocative new image from ESO shows a dark cloud where new stars are forming, along with a cluster of brilliant stars that have already emerged from their dusty stellar nursery. The new picture was taken with the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile and is the best image ever taken in visible light of this little-known object.

On the left of this new image there is a dark column resembling a cloud of smoke. To the right shines a small group of brilliant stars. At first glance these two features could not be more different, but they are in fact closely linked. The cloud contains huge amounts of cool cosmic dust and is a nursery where new stars are being born. It is likely that the Sun formed in a similar star formation region more than four billion years ago. This cloud is known as Lupus 3 and it lies about 600 light-years from Earth in the constellation of Scorpius (The Scorpion). The section shown here is about five light-years across.

As the denser parts of such clouds contract under the effects of gravity they heat up and start to shine. At first this radiation is blocked by the dusty clouds and can only be seen by telescopes observing at longer wavelengths than visible light, such as the infrared. But as the stars get hotter and brighter their intense radiation and stellar winds gradually clear the clouds around them until they emerge in all their glory.

The bright stars to the right of the centre of this new picture form a perfect example of a small group of such hot young stars. Some of their brilliant blue light is being scattered off the remaining dust around them. The two brightest stars are bright enough to be seen easily with a small telescope or binoculars.
They are young stars that have not yet started to shine by nuclear fusion in their cores and are still surrounded by glowing gas. They are probably less than one million years old. Although they are less obvious at first glance than the bright blue stars, surveys have found many other very young stellar objects in this region, which is one of the closest such stellar nurseries to the Sun.

Star formation regions can be huge, such as the Tarantula Nebula (eso0650) where hundreds of massive stars are being formed. However, most of the stars in our and other galaxies are thought to have formed in much more modest regions like the one shown here, where only two bright stars are visible and no very heavy stars are formed. For this reason, the Lupus 3 region is both fascinating for astronomers and a beautiful illustration of the early stages of the life of stars.

Notes: These stars are known as Herbig Ae/Be stars, after the astronomer who first identified them. The A and B refer to the spectral types of the stars, somewhat hotter than the Sun, and the "e" indicates that emission lines are present in their spectra, due to the glow from the gas around them. They shine by converting gravitational potential energy into heat as they contract.
- Earthenware, blackened in firing
- 18.1 x 28.9 cm
- 18th-19th century
- Origin: Burma, Laos, or Northern Thailand
- Gift of Osborne and Gratia Hauge, and Victor and Takako Hauge

Black alms bowl with wide mouth and round bottom. A crack on the mouth. Clay: buff earthenware, polished, blackened by reduction firing.

1. (Candy Chan, Research Assistant, May 14, 2003) Small quantities of black alms bowls were produced in Wor Khok in Shan States, Burma. Black water bottles, Buddhist altar vases, and water jars with lids were also made there (Tsuda 2001).

Tsuda Takenori. 2001. "Myanmaa, Shyan-shū no tōji (2) (Ceramics in Shan States, Burma (2))." Tōsetsu 578: 24–32.

2. (Louise Cort, 21 August 2003) In 1922 W. A. Graham wrote: "Concerning the alms-bowl of the mendicant Buddhist monk, called 'Bhatr' (from the Sanskrit 'Patra', a plate or cup). My youth was spent in a part of Burma where alms-bowls are made of hard pottery, turned a shining black by a coat of sessamum oil applied before firing in the kiln.... I have, however, seen several old pottery alms bowls at Ayuthia.... I am told that in the Chiang Mai neighborhood also, where fine black pottery is made, the 'Bhatr' is usually of iron but that many old earthenware bowls are preserved in Wat [monasteries], and that once upon a time they were all of this material" (Rooney ed. 1986, 21). In 1990 I saw earthenware alms bowls being made at a Chinese-run ceramics factory in Ratchburi.

Graham, W. A. 1922. "Pottery in Siam." The Journal of the Siam Society, 16(1): 1–27. Reprinted in Rooney, Dawn F. ed. 1986. Pp. 11–37 in Thai Pottery and Ceramics: collected articles from the Journal of the Siam Society, 1922–1980. Bangkok: The Siam Society.

3. (Louise Cort, 17 September 2003) According to Bundit 1999, the food bowl is one of eight personal items permitted to a Theravada Buddhist monk.
Food bowls found in Thailand dating from the Dvaravati (7th–11th centuries) and Sukhothai (13th–15th centuries) periods are made of clay; iron bowls begin to be used in the Ayutthaya period (1350–1767) and became dominant during the Bangkok period (1782 to present). Aluminum bowls are common in the present day.

Research by Charlotte Reith, Alexandria, Virginia (1998 manuscript), led her to visit the village of Letthit, near Mandalay, a name recorded in earlier articles and gazetteers. She discovered that the production of alms bowls took place in the adjacent village, Thaa Pait Tan, "Alms Bowl Place." The potter throws the vessel as a deep dish form, using a wheel, then finishes the form with a paddle and anvil followed by a ring-shaped metal tool. The bowl is polished using four small, smooth stones held between the fingers of one hand. The alms bowls are fired in an updraft kiln, with reduction induced by inserting green wood and sealing the kiln at the close of the firing to produce smoke to blacken the bowls. (Some bowls are finished with a coat of lacquer.)

Bundit Leuchaichan. 1999. "The Food Bowl and Its History". Pp. 524–537 in Thailand: Culture and Society. Bangkok: Princess Maha Chakri Sirindhorn Anthropology Center.

Reith, Charlotte. 1998. Alms Bowl Place—Tha Pait Tan (Unpublished manuscript).