Transition of Secondary Students with Emotional or Behavioral Disorders

This newly revised and expanded edition focuses on successful practices, models, programs, and recommendations for working with adolescents who have emotional or behavioral disorders (EBD). Dr. Cheney is joined by 31 nationally recognized contributing authors to provide answers to the hard questions of how to improve the educational, vocational, and community outcomes of youth with EBD.

"Findings from the work of this book's authors, as well as many others, confirm the importance of self-determined transition goals and activities in normative settings in the community as critical features for success. We have evolved from a culture that once suggested that the best placement for young adults with EBD was in sheltered workshops with adults who had chronic mental health problems. Our best practices now suggest that early involvement in school-based vocational programs that are linked with competitive employment in the community leads to greater success and learning by youth with EBD. Further, the more these placements take into consideration the interests and skills of the individual student, the more likely he or she is to succeed in employment.

Across the educational programs discussed in this book, it is clear that flexible educational programs, which may be located on or off campus, are needed to provide youth with alternative modes of earning school credits. Requiring youth with EBD to take existing classes in a rigid high-school curriculum is a formula for disaster. It is highly recommended that youth be able to choose from an array of classes that are school, community, or work based. The IEP should include statements about classes that are to be taken and how they meet the educational and curriculum needs of the student.

For students who have been adjudicated in the juvenile justice system, a concentrated effort must be made to engage school, vocational rehabilitation, mental health, and other agencies providing social services for these youth when the youth reenter the community. These youth will need a highly involved transition specialist to work with them, as found in the project in Oregon. Furthermore, they will need more than just a placement with a family or living facility in the community. These youth will need extensive and ongoing supervision and support in their work and education in the community. Social support as well as educational or vocational support will be necessary to fulfill the employment and life goals of these youth.

Finally, a multicultural and familial perspective is required to improve upon our outcomes with youth. We must truly be able to reach these youth as individuals and use respectful methods that are culturally responsive. Too often, the institutional demands of school and work override the individual needs of ethnically diverse learners. A sound transition plan will include culturally competent practices and consider the role that family members might take in the plan. Family networks and support have been found to be important in the cases presented throughout this book. By following these suggestions, it should be possible for educators and social service providers to improve transition services for youth with EBD.
Although barriers and challenges will continue to surface for these youth, their service providers, and their families, the lessons learned to date are that unconditional care and a zero-reject model are the bottom line for service provision. This bottom line always emphasizes what we have known for years — caring, individualized services provided by authoritative adults can lead to successes for youth with EBD in employment, education, and community living." —Douglas Cheney, Editor

- An Overview of Transition Issues, Approaches, and Recommendations for Youth with Emotional or Behavioral Disorders

Section I: The Cultural, Familial, and Personal Context of Transition Services for Students with EBD
- Transitioning Culturally and Linguistically Diverse Learners with Emotional or Behavioral Disorders —Festus E. Obiakor and Lynn K. Wilder
- Self-Determination and Transition-Age Youth with Emotional or Behavioral Disorders: Promising Practices —Erik W. Carter
- Building Transition Partnerships with Families of Youth with Emotional or Behavioral Disorders —Amy M. Pleet and Donna L. Wandry
- Four Strategies to Create Positive Transition Outcomes for Students with Emotional or Behavioral Disorders —Francie R. Murry and Michael Todd Allen

Section II: Assessment and Planning Services for Students with EBD at the Secondary Level
- Age-Appropriate Transition Assessments: A Strategic Intervention to Help Youth with Emotional or Behavioral Disorders to Complete High School —Larry J. Kortering, Patricia M. Braziel, and Patricia L. Sitlington
- Development of Individualized Education Programs for Students with Emotional or Behavioral Disorders: Coordination with Transition Plans —James G. Shriner, Anthony J. Plotner, and Chad A. Rose
- Positive Behavior Support and Transition Outcomes for Students in Secondary Settings —Cinda Johnson and Hank Bohanon

Section III: Settings and Services for Students with EBD in their Transition to Adulthood
- Preparing for Postsecondary Life: An Alternative Program Model —Thomas G. Valore, Claudia Lann Valore, Dennis A. Koenig, James Cirigliano, Patricia Cirigliano, and Steven Cirigliano
- The RENEW Model of Futures Planning, Resource Development, and School-to-Career Experiences for Youth with Emotional or Behavioral Disorders —JoAnne M. Malloy, Jonathan Drake, Kathleen Abate, and Gail M. Cormier
- Transition to Independence Process (TIP) Model: Understanding Youth Perspectives and Practices for Improving Their Community-Life Outcomes —Hewitt B. "Rusty" Clark, Sarah A. Taylor, and Nicole Deschênes

Section IV: Transition Approaches for Students with EBD in Juvenile Justice
- Project STAY OUT: A Facility-to-Community Transition Intervention Targeting Incarcerated Adolescent Offenders —Deanne Unruh, Miriam Waintrup, and Tim Canter
- Practices in Transition for Youth in the Juvenile Justice System —Heather Griller Clark and Sarup R. Mathur
- Hard Questions and Final Thoughts Regarding the School-to-Community Transition of Adolescents with Emotional or Behavioral Disorders

Transition requirements under IDEIA 2004 include four main points:
- An assessment that identifies one or more postsecondary goals.
- Listing of postsecondary goals in the areas of education and training, employment, and, when appropriate, independent living.
- Annual goals to assist students in meeting their postsecondary goals.
- Specification of transition services including instructional activities and community experiences designed to help the student in his or her transition from school to anticipated postschool environments and to help achieve identified postsecondary goals. "It is imperative that special educators and transition service providers have the knowledge and skills to address these legislative requirements. Fortunately, an extensive knowledge base has emerged over the past 10 years from research and demonstration projects in EBD. Because the law and research findings occurred concurrently, it became apparent that a book focusing on successful practices, models, programs, and directions for students with EBD and practitioners working with them was needed. The contents of this book have resulted from work of the authors over the past decade. These approaches have achieved some very positive results and hold promise for improving educational and transition services for youth with EBD." Kathleen Abate is the Executive Director of the Granite State Federation of Families for Children’s Mental Health. She has over 20 years experience in advocacy, training, and program development, based in the principles of self-determination for people with disabilities. Kathleen is also the parent of a young man with emotional challenges who is now a successful leader in his community. Michael Todd Allen, Ph.D., is an associate professor in the School of Psychological Sciences and the College of Education and Behavioral Sciences at the University of Northern Colorado. His research interests include the neural substrates of learning and memory, applying findings from psychology and neuroscience to the classroom, and metacognitive tutoring of at-risk students. Hank Bohanon, Ph.D., is an associate professor in the School of Education at Loyola University Chicago. He conducts research regarding the implementation of positive behavior support in high schools. He also leads projects that conduct evaluation for statewide initiatives, including response to intervention and social and emotional learning. Patricia M. Braziel is the project coordinator for dissemination and outreach services for the National Secondary Transition Technical Assistance Center (NSTTAC). Her research areas include special education identification practices, school completion, and transition services for students with disabilities. Michael Bullis is the Sommerville-Knight Professor and Dean of the College of Education at the University of Oregon. For more than 20 years, he has conducted research on the school-to-community transition of adolescents with emotional disorders and directed model demonstration projects that provide direct transition services to this population. Tim Canter is a transition specialist located at the Serbu Youth Campus, Lane County, Oregon, Juvenile Justice Center. He is also employed by the Springfield, Oregon, School District. Erik W. Carter, Ph.D., is an associate professor of special education in the Department of Rehabilitation Psychology and Special Education at the University of Wisconsin–Madison. His research and teaching address secondary transition services, self-determination, peer relationships, and access to the general curriculum. Heather Griller Clark, Ph.D., is a principal research specialist at Arizona State University. Her research focuses on issues of transition, gender, and professional development for youth with emotional and behavior disorders in the juvenile justice system. Hewitt B. 
“Rusty” Clark, Ph.D., is Director of the National Network on Youth Transition for Behavioral Health and a professor at the Florida Mental Health Institute, College of Behavioral and Community Sciences, University of South Florida. Dr. Clark has innovated and researched numerous programs, has published widely in the areas of individualized interventions for children and youth with emotional/behavioral difficulties, and has developed the Transition to Independence Process (TIP) system, an evidence-supported model. Gail M. Cormier is the Executive Director of North Carolina Families United. She has worked for over 25 years in New Hampshire and North Carolina with at-risk youth who are struggling with mental health issues, helping them get back into their communities, stay in school, and be successful contributing adults and family members. Jonathon Drake, MSW, has been the RENEW Training Coordinator at the Institute on Disability at the University of New Hampshire since 2008 and has worked with over 50 youth using the RENEW model. Mr. Drake has also provided training and technical assistance to high school professionals and mental health clinicians to implement RENEW in various settings. Nicole Deschenes, RN, M.Ed., is Codirector of the National Network on Youth Transition (NNYT), an organization dedicated to improving practice, systems, and outcomes for youth and young adults with emotional and behavioral difficulties. Author of various publications and reports, she is also on the faculty of the Department of Child and Family Studies at the Louis de La Parte Florida Mental Health Institute in Tampa. Her current efforts focus on developing effective transition models for youth. Cinda Johnson, Ed.D., is an assistant professor in the College of Education and Director of the Special Education Program at Seattle University. Her research areas include secondary transition services and the post-school outcomes of youth in special education, with particular emphasis on youth with emotional and behavioral disorders. She is the principal investigator for the Center for Change in Transition Services for Washington State. Dennis A. Koenig is Chief Clinical Officer for Positive Education Program (PEP). He manages referrals and enrollment and supervises all agency clinical services for children and families. He provides mental health programming, service consultation, and crisis intervention and has been instrumental in developing and launching the agency's school-based program for transitional youth ages 16 to 22. Larry J. Kortering, Ph.D., is a professor of special education at Appalachian State University and a co-principal investigator for the National Secondary Transition Technical Assistance Center (NSTTAC). His research areas include school completion, assessment, and transition services for students with disabilities. JoAnne M. Malloy, MSW, is a developer of the RENEW model and has directed six state and federally funded employment and dropout prevention projects, with a focus on intensive services for youth with emotional or behavioral disorders. Ms. Malloy has authored numerous articles and book chapters on employment and secondary transition for youth with emotional disorders and adults with mental illnesses. Sarup R. Mathur, Ph.D., is an associate professor in the College of Teacher Education and Leadership at Arizona State University. Her research areas include social skills, behavioral issues of children and youth, and professional development. Francie R.
Murry, Ph.D., is a professor in the School of Special Education at the University of Northern Colorado. Her research areas include positive support and academic and behavioral program development for youth with emotional and behavioral disorders and those at risk for identification of the disability. Festus E. Obiakor, Ph.D., is a professor in the Department of Exceptional Education at the University of Wisconsin–Milwaukee. His research interests include multicultural psychology and special education, self-concept development, school reform, and international/comparative education. In addition, in his works he is interested in how we can reduce misidentification, misassessment, miscategorization, misplacement, and misinstruction of culturally and linguistically diverse learners in general and special education. Amy M. Pleet, Ed.D., Secondary Inclusion Consultant at the University of Delaware Center for Secondary Teacher Education, provides professional development to Delaware school districts on topics related to program improvement, instructional strategies, and parent engagement so students with disabilities are better prepared to transition into adulthood. Anthony J. Plotner, Ph.D., is a research fellow in the Department of Special Education at the University of Illinois at Urbana-Champaign. His areas of research interest include transition planning/services and postsecondary outcomes for persons with disabilities. Chad A. Rose, MA., is a doctoral candidate in the Department of Special Education at the University of Illinois. His areas of research interest include bullying and victimization among students with disabilities, with a focus on students with emotional or behavioral disorders. James G. Shriner, Ph.D., is an associate professor in the Department of Special Education at the University of Illinois at Urbana-Champaign. His areas of research interest include issues related to policy implementation and standards-based instruction/assessment/accommodation for students with disabilities, including those with emotional or behavioral disorders. Patricia L. Sitlington, Ph.D., (1947–2009) was a professor of special education at the University of Northern Iowa. She wrote extensively in the area of transition services, assessment, and post-school outcomes for students with disabilities and also served as director or co-director of a number of federally and state-funded research projects. Sarah A. Taylor, MSW., Ph.D., is the CalSWEC-II Mental Health Coordinator and a lecturer in the Department of Social Work at California State University, East Bay. Dr. Taylor earned her MSW in 2002 and PhD in 2007, both from the University of California, Berkeley. Her research interests include transition-age youth, community mental health, disability, and LGBTQ issues. Deanne Unruh, Ph.D., is a senior research associate in the Secondary Special Education and Transition Research Unit at the University of Oregon. Her areas of research interest include secondary transition services targeting youth in the juvenile justice system and adolescents with emotional and/or behavioral difficulties. Claudia Lann Valore is Chief Program Officer for Positive Education Program (PEP). She oversees programming for the agency’s early childhood, day treatment, and autism centers, which annually serve nearly 1,000 youth with emotional, behavioral, and/or significant developmental disabilities. Thomas G. Valore, Ph.D., is Staff Development Director for Positive Education Program (PEP). 
He oversees the creation and implementation of consultation and training curricula for internal and external audiences of special education and mental health professionals. An educator and psychologist, his training and experience qualify him as an expert in serving troubled and troubling youth. Miriam Waintrup, M.Ed., is a senior research assistant in the Secondary Special Education and Transition Programs Research Unit at the University of Oregon. Her work has focused on coordinating research and model demonstration projects on transition for high-risk youth with disabilities. Donna L. Wandry, Ph.D., Associate Professor and Chair of the Department of Special Education at West Chester University of Pennsylvania, has professional priorities in teacher preparation, family empowerment in transition, school legal issues, and school/agency transition systems change. Lynn K. Wilder, Ed.D., is Associate Professor and Program Leader for Special Education and Early Childhood Education in the College of Education at Florida Gulf Coast University. Her research and publications include reliability of assessment for diverse students with emotional/behavioral disorders, positive behavior support for parents of children with challenging behavior, developing culturally responsive faculty, and working with students with low socioeconomic status. - Anger and Conflict Management - Assessment and Response to Intervention - Behavior Management - Bullying Prevention - Girls and Boys Programs - Grief Counseling - Life Skills and Character Development - Mental Health Issues - Motivation and School Success - Other Professional Resources - Parenting Solutions - Personal and Social Development - Social and Emotional Learning - Social Skills - Special Education - Stress Management - College and University Professors - General Education Teachers K-12 - Mental Health Professionals - Parents and Parent Coordinators - School Administrators K-12 - School Counselors K-12 - School Psychologist K-12 - Social Workers - Special Education Professionals
<urn:uuid:00f68e99-3ff6-4706-a0f2-5a7c4e362902>
CC-MAIN-2016-26
https://www.researchpress.com/books/851/transition-secondary-students-emotional-or-behavioral-disorders
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00135-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940771
3,702
2.59375
3
JERUSALEM — Israeli archaeologists say they have unearthed a 1,500-year-old lantern decorated with crosses and a wine press that shed light on life in the Byzantine period. The Israel Antiquities Authority this week announced the discovery of the rare items, which were found in the ruins of a Byzantine settlement near the city of Ashkelon. Archeologist Saar Ganor said the Christian lantern is significant because of the rarity of such items. It was carved in a way that when lit, glowing crosses were projected on walls of a room. He said the wine press is of note because of its large size. The wine made in such a press was often exported to countries in the Mediterranean as well as Europe and North Africa. Copyright 2016, Deseret News Publishing Company
<urn:uuid:07cd0c96-9e6d-4df7-befa-99f85199d02d>
CC-MAIN-2016-26
http://www.deseretnews.com/article/print/765626267/Archaeologists-unearth-ancient-lantern-wine-press.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979593
161
2.875
3
Biol Res 40: 347-355, 2007 (print ISSN 0716-9760), Santiago.

Volatile Organic Compounds Produced by Human Skin Cells

CRISTIAN A. ACEVEDO(1), ELIZABETH Y. SÁNCHEZ(1), JUAN G. REYES(2) and MANUEL E. YOUNG(1)
(1) Universidad Técnica Federico Santa María, Biotechnology Center, Av. España 1680, Valparaíso, Chile.

Skin produces volatile organic compounds (VOCs) released to the environment with emission patterns characteristic of climatic conditions. It could be thought that these compounds are intermediaries in cell metabolism, since many intermediaries of metabolic pathways have a volatile potential. In this work, using gas chromatography, we answered the question of whether VOC profiles of primary cultures of human dermal fibroblasts were affected by the type of culture conditions. VOCs were determined for different types of culture, finding significant differences between skin cells grown in classical monolayer culture (2D) compared with 3D matrix-immobilized cultures. This indicates that VOC profiles could provide information on the physiological state of skin cells or skin.

Key terms: GC/MS, Skin cells, SPME, Volatile organic compounds.

Skin is the largest human organ, responsible for multiple functions, from being a physical protection barrier to being a site of sense perception or vitamin synthesis. The release of volatile organic compounds (VOCs) through the skin, generating the characteristic odors of the human body, is part of our daily experience. These VOCs include a large number of volatiles that can be listed as carboxylic acids, aldehydes, alcohols or ketones (Bernier et al., 2000). These volatile compounds are organized into emission patterns that vary with climatic conditions, among other parameters (Zhang et al., 2005). It can be speculated that these compounds are intermediaries in cell metabolism since many intermediaries of metabolic pathways have similar structure and volatile potential. However, these compounds can also be part of a sophisticated system of intercellular signals, similar to the pheromones in mammalian communication (Mombaerts, 1999), or the emission of terpenoids in plants (Paré and Tumlinson, 1999). It is important to emphasize that the VOC emission pattern released by whole skin is different in winter compared to spring (Zhang et al., 2005). These differences can be an adjustment response of metabolism to environmental changes. In addition, it has been reported that alterations of the metabolic balance produced by pathologies (like cancer) can also cause modifications in the human VOC profile obtained from different sources (blood or breath) (Deng et al., 2004).

In current studies with skin cells cultured in vitro, scientific effort has been centered on the cell and the soluble compounds secreted into the culture medium or the production of extracellular matrix components. Nevertheless, in these cell cultures, a volatile phase always coexists that has not yet been systematically investigated. In cell cultures, very few compounds have been extensively studied in the gas phase, mainly the presence of O2, CO2 and NO gases (Tokuda et al., 2000; Chakrabarti and Chakrabarti, 2001; Shekhter et al., 2005), all of them being part of well-known metabolic pathways. VOCs in human samples have been analyzed since 1971 (Pauling et al., 1971) using gas chromatography. A recent approach used solid phase micro-extraction (SPME) for the sampling of VOCs (Grote and Pawliszyn, 1997).
This analytical technique has been applied for VOCs in cell culture (Poli et al., 2004) and also to analyze alcohol, pesticides and hexanes in human fluids (Namiesnik et al., 2000). In this work, we explore the VOC profiles of primary cultures of human dermal fibroblasts cultured in flasks in traditional monolayers, or encapsulated in alginate beads that better resemble the dermal matrix, finding significant differences that open interesting possibilities for monitoring cell cultures based on VOC profiles, especially when associated with novel chromatographic techniques.

MATERIALS AND METHODS

The technique to analyze human blood samples by means of GC/MS-SPME/HS (gas chromatography/mass spectrometry - solid phase microextraction/headspace) proposed by Deng et al. (2004), with some modifications, was used to measure the VOCs. Four ml of culture medium were deposited in HS vials with 15 ml of headspace. A Carboxen-PDMS fiber (Supelco) was exposed in the vial headspace for 30 minutes at 60°C. The fiber was then injected in a GC/MS HP 6890 (with an HP MD5973 quadrupole mass spectrometer) in split-less mode (2 minutes). Separation was performed in an HP-5MS (Agilent) column. The fiber was exposed for desorption at the port of the chromatograph for 5 minutes at 250°C. The identification of individual VOCs was performed on the basis of the standard mass spectrum NIST-02 library, considering a fit value less than 85% as not fully identified (Zhang et al., 2005). The fit value indicates the degree to which the target spectrum matches the standard spectrum in the NIST-02 library (100% meaning a perfect fit). Siloxanes were discarded from the analysis since they are generally considered in the literature as main background interferences, apparently stemming from the capillary column stationary phase (Zhang et al., 2005). In addition, the compounds extracted from samples of fresh PBS (phosphate buffered saline) were discarded in the analysis, being considered contaminants from the laboratory environment.

Skin cell culture

Human dermal fibroblasts were obtained under informed consent from the foreskin of healthy donors and processed using standard protocols including mechanical and enzymatic (trypsin and collagenase) separation (Freshney, 2000; Karmiol, 2002). The cultures obtained were tested for the presence of mycoplasma using a commercial PCR assay (Biological Industries, Israel). Only mycoplasma-negative cultures were used. For VOC measurement, 10^5 cells per flask were cultured in 25 cm² Falcon flasks with closed caps, in DMEM/F12 (1:1) with FBS (10%) and HEPES (25 mM), at 37°C. Cultured medium was extracted at 120 hrs (no evidence of acidification or damaged cells was found). Following the trypsinization and counting procedure, where a hemacytometer was used, a known number of cells were suspended in an appropriate volume of alginate solution (0.5%) to achieve a concentration of 10^5 cells per gram of alginate solution, and microencapsulated by dropping the cell suspension onto a sterile solution of CaCl2 (50 mM) (Kierstan and Bucke, 1977; Abruzzo et al., 2001; Yang and Wright, 2002). Microcapsules were collected, carefully washed with DMEM/F12, and routinely examined for physical appearance under a microscope (10X). One gram of capsules with a cell density of 10^5 cells per gram was added per flask for VOC measurement, under the same culture conditions as the monolayer culture, in DMEM/F12 (1:1) with FBS (10%) and HEPES (25 mM), at 37°C.
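Before turning to the statistics, the NIST-02 "fit value" used above for identification can be illustrated with a toy calculation. The sketch below is only an analogy, not the NIST matching algorithm: it scores an unknown spectrum against hypothetical library entries with a normalized dot product, and the spectra, compound names, and the 85% threshold shown here are illustrative assumptions.

```python
import numpy as np

def fit_value(observed, reference):
    """Normalized dot-product (cosine) similarity between two mass spectra,
    expressed as a percentage. Each spectrum is a dict of {m/z: intensity}."""
    mzs = sorted(set(observed) | set(reference))
    a = np.array([float(observed.get(mz, 0.0)) for mz in mzs])
    b = np.array([float(reference.get(mz, 0.0)) for mz in mzs])
    return 100.0 * float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical spectra: one unknown chromatographic peak and two made-up
# "library" entries (the m/z values and intensities are illustrative only).
unknown = {51: 12, 78: 40, 104: 100}
library = {
    "candidate A": {51: 14, 78: 38, 104: 100},
    "candidate B": {51: 20, 77: 90, 106: 100},
}

for name, ref in library.items():
    score = fit_value(unknown, ref)
    status = "identified" if score >= 85.0 else "not fully identified"
    print(f"{name}: fit {score:.1f}% -> {status}")
```

A close fragment pattern scores near 100% and passes the threshold, while a spectrum with different dominant ions falls below it, which is the intuition behind treating low-fit peaks as "not fully identified."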
Multivariate analysis (Principal component analysis and Contribution analysis) was performed with SIMCA-P software (UMETRICS) (Eriksson et al., 2006). Basic statistics analysis (t-test) was performed using Microsoft Excel. Fibroblasts appeared with a rounded morphology when cultured within the capsules as compared to the relatively extended morphology of the cells cultured on monolayers (Figure 1). These cell shape differences were most likely associated to a change in cell function related to attachment conditions, as suggested by the VOC profiles shown below. Thus, the VOC profile obtained using the chromatographic GC/MS-SPME/HS technique described in the methods section, allowed separation of 13 compounds with retention times between 5 and 12 minutes (Table 1 and Figure 2). Six of these compounds were fully identified. Four of these compounds (styrene, benzaldehyde, ethylhexanol and acetophenone) are reported in the literature as present in whole human skin and also in human blood samples. There was also indication of the presence of other seven compounds present in small amounts, three of them representing less than 1% of the chromatographic area. These "trace" compounds were not reliably identified by the NIST-02 library, giving fits for putative compounds such as alcohols or ketones (Deng et al., 2004; Zhang et al., 2005). Table 2 shows the chromatographic areas of the identified compounds under different culture conditions. It can be appreciated that clear differences exist in pattern profiles between fresh medium and culture medium incubated with and without skin cells. Similar differences exist when skin cells were cultured in monolayer (Figure 2, D) compared with microencapsulated cells (Figure 2, F). To give statistical meaning to these qualitative observations, Principal Component Analysis (PCA, multivariate statistic) (Brereton, 2003) on the data of the chromatograms presented in Figure 2 was performed, taking abundance (magnitude of the signal shown in the chromatogram) for all retention times as the variables (with time windows of 0.3 s between retention times), which means the whole chromatographic spectrum was taken as variables and the different culture conditions and medium as observations. The result is presented in Figure 3, showing the monolayer (2D) and microencapsulated (3D) VOCs data profiles (Figure 3, B and C) located in separate clusters, therefore fully indicating different pattern behaviors. Two independent cell cultures grown on monolayer cluster close together showing similar PC A profiles, a similar pattern occurs for the two independent microencapsulated cells cultured in alginate matrix. The VOCs profile of fresh medium was different after 120 hours of incubation without cells (Figure 3, A and D). It is likely then, that the VOCs found in long term incubations (37°C) are products released by the culture materials with time. Even though belonging to the same cluster in the Principal Component analysis, the location for each culture is not the same due to slight differences in the compounds profile. To account for these differences, a Contribution Analysis was performed. Contribution Analysis, commonly used to illustrate the factors that contributed the most to significant differences for different experimental observations, was based on the two first principal components, with the chromatographic area as the variable for each identified VOC (multivariate statistics) (Eriksson et al., 2006). 
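As a rough sketch of this kind of multivariate workflow (the authors used SIMCA-P; the scikit-learn calls and the toy abundance matrix below are stand-ins chosen for illustration, not the original analysis), PCA can be applied to a matrix whose rows are the observations (fresh medium, incubated control, monolayer and encapsulated cultures) and whose columns are chromatographic variables:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy abundance matrix: rows = observations, columns = chromatographic areas
# for identified VOCs. All numbers are invented for illustration.
voc_names = ["styrene", "benzaldehyde", "ethylhexanol", "acetophenone"]
X = np.array([
    [0.0, 0.0, 1.2, 0.0],   # fresh medium
    [3.1, 2.0, 1.5, 0.9],   # medium incubated without cells
    [1.0, 0.1, 4.0, 0.1],   # monolayer culture 1
    [1.2, 0.2, 4.3, 0.1],   # monolayer culture 2
    [3.0, 1.8, 2.2, 0.8],   # encapsulated culture 1
    [2.8, 1.9, 2.4, 0.7],   # encapsulated culture 2
])

Xs = StandardScaler().fit_transform(X)   # center and scale each variable
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)               # observation coordinates; conditions should cluster
print("PC scores:\n", np.round(scores, 2))
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))

# Contribution-style summary: which VOCs load most heavily on the first two PCs.
loadings = np.abs(pca.components_).sum(axis=0)
for name, c in sorted(zip(voc_names, loadings), key=lambda t: -t[1]):
    print(f"{name}: combined |loading| on PC1+PC2 = {c:.2f}")
```

In such a score plot, replicate cultures of the same type land close together while the different conditions separate, and the loading summary plays the role of a contribution plot by flagging the variables that drive that separation.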
The contribution analysis indicated that ethylhexanol and benzaldehyde are the compounds that contributed most to the variability of the cell cultures under different conditions (monolayer and encapsulated). In Figure 4, the height of the bar (in absolute value) represents the discriminator level of the VOC as a possible marker to separate two culture situations.

The cell shape differed remarkably between cells grown as a monolayer on plastic and cells encapsulated in the alginate matrix. In our case, rounded-shape cells predominated inside the capsules, similarly to skin cells in contact with a fibrin matrix (Weiss et al., 1998) or fibroblasts encapsulated in alginate (Paek et al., 2006). This matrix-induced change in morphology also suggests a possible change in cell function. From a physical perspective, the encapsulated cells constitute a better model of a tissue due to their 3D structure similar to the skin, in contrast to the 2D system typical of monolayer cultures (Yang et al., 2001). In vivo, the cells are immersed in the extracellular matrix, where nutrients, oxygen and metabolic products must diffuse from the circulatory system. Similarly, in the encapsulated culture the access of the cells to the culture medium is diffusion limited (Muschler et al., 2004).

The statistical analysis of the results indicates that ethylhexanol and benzaldehyde are the VOCs that contributed most in the comparative study when cells were cultivated under different attachment conditions, i.e., as a two-dimensional superficial monolayer or inside a three-dimensional encapsulated matrix that resembles an actual tissue (Yang et al., 2001). These compounds have also been identified in human skin, and ethylhexanol has been selected as a marker that contributes to discriminating the VOCs released by whole human skin under different environmental conditions (Zhang et al., 2005). Our data indicate that these chemicals are the main compounds that account for the differences between preparations, suggesting that they reflect the characteristic metabolism of the cells from each individual.

Cyclohexanol and 1,3-di-tert-butylbenzene, present in fresh culture medium, were not detected in incubated culture medium, regardless of the presence of cells. This fact suggests that these VOCs disappear during incubation, most likely by a non-biological mechanism such as evaporation at 37°C (vapor pressure of cyclohexanol 83 mmHg; Perry and Green, 1997). Cyclohexanol, being an organic solvent commonly used in the petrochemical and organic chemical industries, could be present in the package processing environment of culture reagents and flask factories. Cyclohexanol was not present in the laboratory environment, because this compound was not present in the environmental blank. 1,3-di-tert-butylbenzene was reported as a chemical appearing as a decomposition product from the radiolysis processes related to polymer container sterilization, therefore it was not unexpected to find it in the sterilized plastic flasks used in cell cultures (Welle et al., 2002). Styrene was not present in fresh medium, but appeared in media incubated in culture flasks with cells. This finding suggests that styrene can be a degradation product derived from the polystyrene flask. Styrene diffusion into foods has been reported in some food packaging processes (Choi et al., 2005) and has also been reported to be present in human blood (Deng et al., 2004). In principle, polystyrene resists acids, alkalis and alcohol, but it can be attacked by some organic solvents.
The possible biodegradation of polystyrene by mammalian cells has not been described in the literature. The amount of styrene in monolayer cell culture was significantly lower than its control (p<0.05, t-test). However, when cells were cultured microencapsulated, the difference in styrene content in the medium compared to the control culture was not significant (p>0.05, t-test). These findings suggest that fibroblasts in monolayers could be metabolizing styrene degraded from the culture flask, as other mammalian cells do (Hynes et al., 1999; Carlson, 2000). However, in the case of microencapsulated cultures, the presence of the alginate matrix could create a physical barrier affecting the diffusion of styrene inside the microcapsule, or styrene metabolism by encapsulated cells may be lower than that of cells grown on monolayer. A protective effect of microcapsules has been described by other authors as well, showing that nutrients and oxygen diffuse into the microcapsule but substances of higher molecular weight would only slowly diffuse into it (Orive et al., 2002). A pathway for styrene degradation has been described in some prokaryotes, but not in humans. Nevertheless, the oxidoreductase that catalyzes styrene to styrene cis-glycol (EC 1.13.11) in the styrene degradation pathway was reported present in some metabolic pathways in humans (e.g., tryptophan metabolism), and the gene (ALOXE3) that codes for this protein is expressed in skin cells (Jobard et al., 2002).

Benzaldehyde and acetophenone, which were not present in fresh culture medium, were detected when the medium was incubated in culture flasks, suggesting that they could be released from the culture flasks. However, these compounds were not present in monolayer cell cultures, indicating that they were probably metabolized by the cells. In the case of encapsulated cells, the protective diffusion mechanism already mentioned could also be protecting these compounds from cell degradation. Zhang et al. (2005) detected these two compounds in human skin, suggesting that they could be part of the normal skin metabolism. In addition, acetophenone and styrene are intermediaries in the metabolic pathway of ethylbenzene degradation in humans (KEGG, 2007).

In summary, we found that for different skin cell culture conditions VOCs are released into the culture medium generating profiles with significant statistical differences, indicating that those VOCs and their profiles could provide useful information on the physiological state of skin or skin cells. The VOCs found in culture media can have different sources: VOCs released from the culture materials, VOCs released from the culture materials and metabolized by the cells, or VOCs produced by the cell metabolism. As a whole, their pattern appears to reflect the metabolic and functional state of cells in culture and could be used for their characterization.

The authors wish to thank CONICYT for the FONDEF Grant (# DO2I1009) and the doctoral fellowship for Cristian A. Acevedo (# D-21050588).

REFERENCES

ABRUZZO T, CLOFT H, SHENGELAIA G, WALDROP S, KALLMES D, DION J, CONSTANTINIDIS I, SAMBANIS A (2001) In vitro effects of transcatheter injection on structure, cell viability, and cell metabolism in fibroblast-impregnated alginate microspheres. Radiology 220: 428-435
BERNIER U, KLINE D, BARNARD D, SCHRECK E, YOST R (2000) Analysis of Human Skin Emanations by Gas Chromatography/Mass Spectrometry. 2.
Identification of Volatile Compounds That Are Candidate Attractants for the Yellow Fever Mosquito (Aedes aegypti). Anal Chem 72: 747-756
BRERETON R (2003) Chemometrics: data analysis for the laboratory and chemical plant. Chichester UK: John Wiley & Sons Ltd. pp: 183-269
CARLSON G (2000) Metabolism of styrene oxide to styrene glycol in enriched mouse clara-cell preparations. J Toxicol Environ Health A 61: 709-17
CHAKRABARTI R, CHAKRABARTI R (2001) Novel role of extracellular carbon dioxide in lymphocyte proliferation in culture. J Cell Biochem 83: 200-203
CHOI J, JITSUNARI F, ASAKAWA F, SUN LEE D (2005) Migration of styrene monomer, dimers and trimers from polystyrene to food simulants. Food Addit Contam 22: 693-969
DENG C, ZHANG X, LI N (2004) Investigation of volatile biomarkers in lung cancer blood using solid-phase microextraction and capillary gas chromatography-mass spectrometry. J Chromatography B 808: 269-277
ERIKSSON L, JOHANSSON E, KETTANEH-WOLD N, TRYGG J, WIKSTROM C, WOLD S (2006) Multi and megavariate data analysis, part I, basic principles and applications. Third edition. Umea: Umetrics Academy. pp: 171-194
FRESHNEY R (2000) Culture of animal cells: a manual of basic technique. Fourth edition. New York: John Wiley & Sons, Inc., Publication, pp 149-175
GROTE C, PAWLISZYN J (1997) Solid-phase microextraction for the analysis of human breath. Anal Chem 69: 587-596
HYNES D, DENICOLA D, CARLSON G (1999) Metabolism of styrene by mouse and rat isolated lung cells. Toxicol Sci 51: 195-201
JOBARD F, LEFEVRE C, KARADUMAN A, BLANCHET-BARDON C, EMRE S, WEISSENBACH J, OZGUC M, LATHROP M, PRUDHOMME J, FISCHER J (2002) Lipoxygenase-3 (ALOXE3) and 12(R)-lipoxygenase (ALOX12B) are mutated in non-bullous congenital ichthyosiform erythroderma (NCIE) linked to chromosome 17p13.1. Hum Mol Genet 11: 107-13
KARMIOL S (2002) Cell isolation and selection. In: ATALA A, LANZA R (eds) Methods of tissue engineering. San Diego: Academic Press, pp 19-35
KIERSTAN M, BUCKE C (1977) The immobilization of microbial cells, subcellular organelles, and enzymes in calcium alginate gels. Biotechnology and Bioengineering 19: 387-397
MOMBAERTS P (1999) Seven-transmembrane proteins as odorant and chemosensory receptors. Science 286: 707-711
MUSCHLER G, NAKAMOTO C, GRIFFITH L (2004) Engineering principles of clinical cell-based tissue engineering. The Journal of Bone and Joint Surgery 86-A: 1541-1558
NAMIESNIK J, ZYGMUNT B, JASTRZEBSKA A (2000) Application of solid-phase microextraction for determination of organic vapors in gaseous matrices. J Chromatography A 885: 405-418
ORIVE G, HERNÁNDEZ R, GASCÓN A, IGARTA M, PEDRAZ J (2002) Encapsulated cell technology: from research to market. Trends in Biotechnology 20: 382-387
PAEK H, CAMPANER A, KIM J, GOLDEN L, AARON R, CIOMBOR D, MORGAN J, LYSAGHT M (2006) Microencapsulated cells genetically modified to overexpress human transforming growth factor-beta1: viability and functionality in allogeneic and xenogeneic implant models. Tissue Eng 12: 1733-1739
PARÉ P, TUMLINSON J (1999) Plant volatiles as a defence against insect herbivores. Plant Physiol 121: 325-331
PAULING L, ROBINSON A, TERANISHI R, CARY P (1971) Quantitative Analysis of Urine Vapor and Breath by Gas-Liquid Partition Chromatography.
PNAS 68: 2374-2376
PERRY R, GREEN D (1997) Perry's chemical engineers' handbook. 7th edition. New York: McGraw-Hill Book Co
POLI D, VETTORI M, MANINI P, ANDREOLI R, ALINOVI R, CECCATELLI S, MUTTI A (2004) A Novel Approach Based on Solid Phase Microextraction Gas Chromatography and Mass Spectrometry to the in Cells Cultures: Styrene Oxide. Chem Res Toxicol 17: 104-109
SHEKHTER A, SEREZHENKOV V, RUDENKO T, PEKSHEV A, VANIN A (2005) Beneficial effect of gaseous nitric oxide on the healing of skin wounds. Nitric Oxide 12: 210-219
TOKUDA Y, CRANE S, YAMAGUCHI Y, ZHOU L, FALANGA V (2000) The levels and kinetics of oxygen tension detectable at the surface of human dermal fibroblast cultures. J Cell Physiol 182: 414-420
WEISS E, YAMAGUCHI Y, FALABELLA A, CRANE S, TOKUDA Y, FALANGA V (1998) Un-cross-linked fibrin substrates inhibit keratinocyte spreading and replication: correction with fibronectin and factor XIII cross-linking. J Cell Physiol 174: 58-65
WELLE F, MAUER A, FRANZ R (2002) Migration and sensory changes of packaging materials caused by ionizing radiation. Radiation Physics and Chemistry 63: 841-844
YANG H, WRIGHT J (2002) Microencapsulated methods: alginate (Ca2+ induced gelation). In: ATALA A, LANZA R (eds) Methods of tissue engineering. San Diego: Academic Press, pp 787-801
YANG S, LEONG KF, DU Z, CHUA CK (2001) The design of scaffolds for use in Tissue Engineering. Part I. Traditional Factors. Tissue Eng 7: 679-689
ZHANG Z, CAI J, RUAN G, LI G (2005) The study of fingerprint characteristics of the emanations from human arm skin using the original sampling system by SPME-GC/MS. J Chromatography B 822: 244-252

Corresponding author: Manuel E. Young. Address: Universidad Técnica Federico Santa María, Centro de Biotecnología. Av. España 1680, Valparaíso, Chile. Casilla 110-V. Telephone: 56-32-2654730, Fax: 56-32-2654783, e-mail: [email protected]

Received: April 5, 2007. In revised form: July 10, 2007. Accepted: November 13, 2007
<urn:uuid:c5aa3793-da8b-44af-851e-c62c74582435>
CC-MAIN-2016-26
http://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0716-97602007000400009&lng=en&nrm=iso&tlng=en
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00108-ip-10-164-35-72.ec2.internal.warc.gz
en
0.90178
5,618
2.609375
3
When Kansas State University nutrition professor Mark Haub began his experiment, he weighed 201 pounds and had a body mass index of 28.8 (overweight). Ten weeks later, he had lost 27 pounds, lowered his bad cholesterol and raised his good cholesterol. Plus, he lowered his blood pressure. How did he do it? I can give you the answer in just one word: Twinkies.

From Twinkies to Toned

The diet plan (if we can call it that!) that Haub followed is often called the "twinkie diet," and for good reason. For ten straight weeks, Haub has eaten mostly junk food, like soda, snack cakes, Doritos, and especially Twinkies. Professor Haub volunteered himself as a guinea pig in what began as a simple test of portion control. Instead of his normal diet of 2,600 calories, he limited himself to only 1,800 calories to see if he would lose weight. What started as an experiment in portion control of calories (regardless of where they came from) actually resulted in a loss of weight… and a considerable amount, at that.

But Is He Healthier?

Pure numbers point to yes… in addition to losing 27 pounds, Haub's LDL (the bad cholesterol) dropped by 20 percent and his good cholesterol rose by the same amount. Haub does not recommend we all go out and load our cars with boxes of Twinkies… at least not yet – and he has yet to draw conclusions about it: "I wish I could say it's healthy… I'm not confident enough in doing that… One side says it is irresponsible. It is unhealthy, but the data doesn't say that."

Count Your Calories

Weight loss experts constantly flip-flop on the importance of counting calories… but Haub's results show that at the end of the day… calories count! Before the experiment, Haub claimed to be eating well, but just consuming too much. By eating a controlled diet of only 1,800 calories, Haub lost weight, even though what he ate is usually blamed for adding weight, not seen as an aid to losing it. Also, his lifestyle before the experiment shows us that eating too much, even healthy food, will not help you to lose weight. So, obviously, lowering your caloric intake is the real catalyst to losing weight… The bottom line: if you want to lose weight, you've got to consume less than you burn. That is it.

Drawbacks To Haub's Method

Plenty of critics have already started talking about this study, and some do have valid points… One said Haub did not receive the full range of proper nutrients that come from a well-rounded diet (note: Haub did take a multivitamin daily, drank a protein shake, and ate one serving of vegetables per day). Another very valid comment was that the "twinkie diet" may take a toll on your health if you stay on it long-term – there could potentially be risks to his heart health, and his risk for diabetes may rise considerably. He may also be changing his blood-glucose level by eating too much processed sugar. Haub plans to stop his "twinkie diet" the day before Thanksgiving, so we will give you another update as soon as we can!
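For what it's worth, the "consume less than you burn" arithmetic can be checked against Haub's own numbers. The quick sketch below uses his reported 2,600 and 1,800 calorie figures plus the commonly quoted (and admittedly rough) rule of thumb of about 3,500 calories per pound of body fat; treat it as a ballpark, not a prediction.

```python
# Back-of-the-envelope check of the "twinkie diet" numbers (all figures assumed).
maintenance_kcal = 2600        # Haub's reported pre-diet daily intake
diet_kcal = 1800               # his experimental daily intake
days = 10 * 7                  # ten weeks
kcal_per_pound = 3500          # rough rule of thumb for one pound of body fat

daily_deficit = maintenance_kcal - diet_kcal        # 800 kcal/day
total_deficit = daily_deficit * days                # 56,000 kcal
expected_loss = total_deficit / kcal_per_pound      # roughly 16 pounds

print(f"Deficit: {daily_deficit} kcal/day over {days} days = {total_deficit} kcal")
print(f"Expected loss from the deficit alone: about {expected_loss:.0f} lb (he reported 27 lb)")
```

The gap between the roughly 16 pounds this predicts and the 27 pounds he reported suggests his actual daily burn was higher than 2,600 calories, on top of water-weight and other changes the rule of thumb ignores.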
<urn:uuid:ea974f93-4a94-4586-9ea9-83f3d285d400>
CC-MAIN-2016-26
http://exploringthemind.com/weight-loss/lose-weight-with%E2%80%A6junk-food
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00160-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970877
726
2.59375
3
1. Basic Ideas  A tutorial-like introductory chapter.
2. Using Array Syntax  Using array syntax.
3. Graphics  How to plot things.
4. Embedding Compiled Routines Inside Yorick  Embedding compiled code in a custom Yorick.
Concept Index

-- The Detailed Node Listing ---

1.1 Simple Statements
1.2 Flow Control Statements  Defining functions, conditionals, and loops.
1.3 The Interpreted Environment  How to use Yorick's interpreted environment.

1.1.1 Defining a variable
1.1.2 Invoking a procedure
1.1.3 Printing an expression

Flow Control Statements

1.2.1 Defining a function
1.2.2 Defining Procedures
1.2.3 Conditional Execution  Conditionally executing statements.
1.2.4 Loops  Repeatedly executing statements.
1.2.5 Variable scope  Local and external variables.

1.2.3.1 General if and else constructs
1.2.3.2 Combining conditions with && and ||
1.2.4.1 The while and do while statements
1.2.4.2 The for statement
1.2.4.3 Using break, continue, and goto  How to break, continue, and goto from a loop body.
1.2.5.1 extern statements
1.2.5.2 local statements

The Interpreted Environment

1.3.1 Starting, stopping, and interrupting Yorick
1.3.2 Include files  How to read Yorick statements from a file.
1.3.3 The help function  Using the help command.
1.3.4 The info function  Getting information about a variable.
1.3.5 Prompts  What Yorick prompts mean.
1.3.6 Shell commands, removing and renaming files  Issuing shell commands from within Yorick.
1.3.7 Error Messages  What to do when Yorick detects an error.

1.3.2.1 A sample include file
1.3.2.2 Comments
1.3.2.3 DOCUMENT comments  The help command recognizes special comments.
1.3.2.4 Where Yorick looks for include files  Directories Yorick searches for include files.
1.3.2.5 The `custom.i' file and `i-start/' directory  How to execute Yorick statements at startup.
1.3.7.1 Runtime errors
1.3.7.2 How to respond to a runtime error

Using Array Syntax

2.1 Creating Arrays  How to originate arrays.
2.2 Interpolating  Interpolation functions.
2.3 Indexing  How to reference array elements.
2.4 Sorting  How to sort an array.
2.5 Transposing  How to change the order of array dimensions.
2.6 Broadcasting and conformability  Making arrays conformable.
2.7 Dimension Lists

3.1 Primitive plotting functions  The basic drawing functions.
3.2 Plot limits and relatives  Setting plot limits, log scaling, etc.
3.3 Managing a display list  The display list model.
3.4 Getting hardcopy  How to get it.
3.5 Graphics style  How to change it.
3.6 Queries, edits, and legends  Seeing legends and making minor changes.
3.7 Defaults for keywords  Setting (non-default) defaults.
3.8 Writing new plotting functions  Combining the plotting primitives.
3.9 Animation  Spielberg look out.
3.10 3D graphics interfaces  An experimental interface.

Primitive plotting functions

3.1.1 plg  Plot graph.
3.1.2 pldj  Plot disjoint lines.
3.1.3 plm  Plot quadrilateral mesh.
3.1.4 plc and plfc  Plot contours.
3.1.5 plf  Plot filled quadrilateral mesh.
3.1.6 pli  Plot image.
3.1.7 plfp  Plot filled polygons.
3.1.8 plv  Plot vectors.
3.1.9 plt  Plot text.

Plot limits and relatives

3.2.1 limits  Set plot limits.
3.2.2 logxy  Set log axis scaling.
3.2.3 gridxy  Set grid lines.
3.2.4 palette  Set color palette.
3.2.5 Color model  More about color.

3.2.1.1 Zooming with the mouse  How to zoom by mouse clicks.
3.2.1.2 Saving plot limits  Save and restore plot limits.
3.2.1.3 Forcing square limits  Assure that circles are not ellipses.
Managing a display list

3.3.1 fma and redraw  Frame advance (begin next picture).
3.3.2 Multiple graphics windows  How to get them.

3.4.1 Color hardcopy  Dumping palettes into hardcopy files.
3.4.2 Binary CGM caveats  Caveats about binary CGM format.
3.4.3 Encapsulated PostScript  Encapsulated PostScript output.

3.5.1 Style keyword  Accessing predefined graphics styles.
3.5.2 `style.i' functions  Bypassing predefined graphics styles.
3.5.3 Coordinate systems  Multiple coordinate systems.
3.5.4 Ticks and labels  How to change them.

Queries, edits, and legends

3.6.1 Legends  Setting plot legends.
3.6.2 plq and pledit  The plot query and edit functions.

3D graphics interfaces

3.10.1 Coordinate mapping  Changing your viewpoint.
3.10.2 Lighting  The 3D lighting model.
3.10.3 gnomon  Gnomon indicates axis orientation.
3.10.4 plwf interface  The plot wire frame interface.
3.10.5 slice3 interface  The slice and isosurface interface.
<urn:uuid:65ce7351-2244-499e-a0e9-158640150fa4>
CC-MAIN-2016-26
http://yorick.sourceforge.net/manual/yorick.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00190-ip-10-164-35-72.ec2.internal.warc.gz
en
0.688867
1,216
3.25
3
More to Explore Mounted Police in the Sunshine Territory By Chuck Hornung “A rather unusual institution within New Mexico is the mounted police, who numbered 11 [men] in 1907, whose work was almost entirely in the cattle country, and who had authority to patrol the entire Territory and to make arrests or to preserve order wherever their presence was needed, unhampered by the restrictions limiting the jurisdiction of the local police.” The Encyclopedia Britannica, The New Mexico Mounted Police was born in a forge of frontier civil crisis and hammered to life upon the anvil of necessity. New Mexico, the Sunshine Territory of 121,510 square miles of mountains, desert, high plains and farm and range country, at the turn of the 20th century was the last stronghold of the legendary Wild West. Daily men would buckle on their pistol belts as part of getting dressed because the nearest lawman was often many miles away. The law was carried on a man’s hip. The United States had fought a war of Manifest Destiny with the Spanish Empire in 1898. Many of America’s young men had sought adventure in the short lived conflict that Secretary of State John M. Hay had called a “splendid little war” and the Paris Peace Treaty had made America a world colonial power. Many of these young rough riding western volunteers returned to the southwest range country hoping to begin a new life. Some of these men chose illegal means to gain a living. Granville A. Richardson, an attorney serving as a Democrat lawmaker from Chaves County, saw the need for a ranger force to address New Mexico’s outlaw situation. As a neophyte councilman in the Thirty-Third Territorial Legislative Assembly in 1899, Richardson introduced Council Bill 54 to create a Territorial Mounted Police Company with Roswell, his home town, as their field headquarters. The New Mexico ranger force was designed to operate much like their Canadian namesake. The company was to be composed of a captain, a lieutenant, a sergeant and ten privates and cost the taxpayers $9,120 per year for the ranger’s salaries and expenses. The Council’s Territorial Affairs Committee debated the need for rangers to help local lawmen combat the growing menace to peace, safety and economic development. On March 14th the study committee presented their recommendations, but on a split vote the taxpayer’s pocketbook was the winner. The Mounted Police idea was tabled because a new tax would be needed to pay for the police. A new tax was not a popular idea in a territory reeling from a weak economy. The lawmakers reasoned that the job of catching criminals was the chief responsibility of the county sheriff, so they increased the governor’s ability to offer rewards for the capture of select law breakers. The lawmakers’ gamble failed; peaceful conditions did not improve with the dawn of the new century and economic growth of the Sunshine Territory continued to suffer. New Mexico’s path to statehood was seriously endangered by these conditions. In the spring of 1901, the legislative bodies in the State of Texas and the Arizona Territory had each authorized a ranger force to deal with roving outlaw bands in their frontier regions. Within four years of their formation, these two rangers groups had arrested, killed off, or driven the troublesome outlaw gangs from their jurisdiction. This aggressive action on New Mexico’s eastern and western borders had caused the territory to become the central outlaw haven in the southwest. 
This unwelcome lawless distinction was a strange twist of fate because the Arizona lawmakers had prudently adopted the territorial police plan first proposed and rejected in New Mexico just two years earlier. New Mexico's Legislative Assembly hadn't even discussed a ranger force at their 1901 session. Burton Mossman, Arizona's first ranger captain, was the son of a Las Cruces rancher. Following his visit home for Christmas in 1901, Captain Mossman lamented to the Tucson Citizen, "We only wish that the territorial government of New Mexico would organize a Ranger troop to cooperate with this work. With their aid we could soon clear both territories of the fugitives from justice who have sought refuge here and continue on their depredations." A year after Mossman expressed his wish, the Las Vegas Daily Optic reported that livestock raisers in the territory were "agitating" for a ranger force to patrol the territory's vast remote areas. In spite of these pleas for help, as in 1899 and 1901, lawmakers in 1903 also refused to create a force of territorial rangers. In the years since the territorial police concept was first proposed by Granville Richardson, cattle and sheep growers had organized a corps of semi-official detectives or range-riding bounty hunters. These manhunters freely roamed across New Mexico's southern tier of counties, empowered by the deputy sheriff commissions issued in each county they served. The local stock associations paid the men a fee per case they handled, and the governor would from time to time offer a reward for some special criminal, but this reward or bounty system was only partly successful as the Sunshine Territory continued to witness a steady increase in all levels of criminal activity. A renewed crime wave hit New Mexico in 1904. Rustler gangs boldly raided in daylight, killing some ranchers who tried to apprehend them. A postmaster was murdered during a robbery of the Golden Post Office, a Wells Fargo Express office was robbed in Magdalena, trains were stopped and robbed near Tularosa and Logan, and fence cutters roamed over Taos County at will. The police force at Roswell was "treed" by a drunken mob, a lawman was killed in Silver City, a bank was robbed at Hillsboro, Indian parties hunted out of season, misdemeanor crime increased in all sections, and both the territorial solicitor-general and the territorial superintendent of public education were assassinated. Local and county peace officers were unable to curb the violence. Miguel A. Otero, Jr. had become territorial governor in 1897 and had privately supported a ranger force since the issue was first introduced in 1899. On Monday, January 16, 1905, he publicly requested lawmakers to create a territorial police force in his biennial legislative message to a joint session of the 36th Legislative Assembly. The governor said, "I have been urged by stockmen to recommend the passage of a Ranger Law, whose duty it shall be to patrol the ranges, to prevent the theft of stock and to aid in the apprehension of criminals. The suggestion seems to me a good one." William H. Greer, a freshman Republican councilman, introduced Council Bill 26 as a revised version of the Richardson Mounted Police Bill of 1899. Bipartisan support, in all sections of the territory, had grown for a central police force that would, without favor, justly enforce the law in all localities.
The Greer Mounted Police Bill quickly moved through committee and passed both houses of the assembly with limited debate, was signed by Governor Otero on February 15th, and was published as chapter nine of the Session Laws of 1905. The Mounted Police Act authorized a single company of rangers composed of a captain, a lieutenant, a sergeant and eight privates. The governor was to be the commander-in-chief of the territorial police, but the captain would serve as the day-to-day leader in the field.

Each of the rangers was furnished a Winchester 95 rifle, an Army-size Colt .45, a gray military-style uniform, a silver shield, a commission of office and all the ammunition he could use. A Mounted Policeman had to supply his own horse and saddle, along with a quality pack horse and camp equipment. Each man paid for his own personal expenses, including the care and feeding of his horses. The territory would replace a policeman's horse if it was killed or injured while on patrol duty, but if a ranger was hurt he had to pay his own medical expenses. Over 200 men made application for appointment to the $60-per-month range rider job. When all the job's pluses and minuses were added up, most of the new lawmen felt the positives won. Albuquerque's Morning Journal caught the spirit of the new ranger force when it reported that the "members of the territorial mounted police are as tickled over their jobs as a boy with an all day sucker."

The governor's two-man ranger selection committee chose Socorro County stockraiser John F. Fullerton to be the first captain of the Mounted Police at $2,000 per year. He had no law enforcement background, but he was popular among the territory's stock growers and a strong supporter of the governor's fiscal policies. The Mounted Police company's second-in-command was Cipriano Baca. He had served as a Socorro County deputy sheriff, Grant County's chief deputy sheriff and Luna County's first sheriff, and was presently a deputy serving under United States Marshal Creighton Foraker. Baca earned $1,500 per year as the company's lieutenant. The sergeant was paid $900 per year, and the committee's choice for this post also had Socorro County connections. He was former chief deputy sheriff Robert W. "Stuttering Bob" Lewis.

The ranger law required that Captain Fullerton select "as his base the most unprotected and exposed settlement of the territory," so the captain chose Socorro. The Mounted Police Department was the only territorial agency not housed in the capitol at Santa Fe in 1905. The three officers and a ranger private were stationed in the city while the other rangers were assigned to three- or four-man squads to patrol troubled areas across the territory. Their office was on the back of a horse. This first company of Mounted Police was commonly called Fullerton's Rangers. They were New Mexico's premier police agency, with authority to enforce all levels of the law in all sections of the territory and the future state.

Sam Ballard holds the dubious distinction of being the first man arrested by a new territorial policeman. He was taken into custody by Ranger Will Dudley, who was on a single-man scout, and charged with "larceny of stock" after the ranger found him in the rugged mountains of Lincoln County rebranding some stolen cattle.
Shortly after the mustering of the Mounted Police, the Otero County Advertiser commented on the new lawmen, "The mounted police force is made up of experienced men, all dead shots, who can be relied on to capture or kill." A few months later, Ranger Fate Avant became the first Mounted Policeman to kill a criminal in the line of duty. During the morning of Thursday, August 24, 1905, Avant single-handedly arrested a gang of cattle thieves near Capitan. The arrest was effected following an extended gunfight with the three longropers. Later that same evening, in a shootout with a career criminal who was attempting to burglarize a Capitan store, Avant's shotgun proved to be deadly. A Lincoln County coroner's jury found the ranger's lethal action was justified, and he faced no criminal charges for killing Robert Rusher. Attorney General George Prichard issued an opinion that the rangers could use whatever force was needed to make an arrest. This judgment made it clear that no Mounted Policeman would ever have a reason to back down in the face of danger. Other men made the same mistake that Rusher had made, and each met the same ending.

Two Mounted Policemen gave their lives for the cause of justice while helping to tame New Mexico's outlaws. Special Mounted Policeman John A. McClure was murdered by a father-and-sons train robber gang near Abo in 1911. McClure's fellow rangers hunted down his murderers. John B. Rusk faced a different type of death. The ranger died in a Colorado hospital the day after New Mexico had gained statehood. He was attempting to return a wanted man to New Mexico during a winter storm when he developed a cold. The resulting pneumonia killed him.

Sergeant Bob Lewis epitomized the Mounted Police's dogged determination to get their man. The sergeant spent the first months of 1906 trailing a murderer across the snow-covered western mountains of the territory, then south across the border into Old Mexico. Lewis finally arrested the fugitive from justice and returned him to the Sunshine Territory, but during this 71-day scout Lewis' young daughter died in a tragic accident. Duty is a law enforcement tradition; it even comes before family. Lt. Cipriano Baca became the example of faithfulness to duty and to the Mounted Police mission to protect and serve. He spent more days on scout duty than any other member of Fullerton's Rangers. The first company of Mounted Police made 72 arrests before their one-year enlistments expired.

The professional relationship between New Mexico's county sheriffs and the Mounted Police was a stormy one from its inception. Sheriffs were elected officials and often supported local values and the judgment of the voters over unpopular laws governing gambling and alcohol sales. The Mounted Police, without local favoritism, were able to track and capture suspects that local officers could not or would not attempt to apprehend. Many of these county officers felt the rangers' forthright activities cut into their pocketbooks. The police earned an annual salary regardless of their job performance, while county sheriffs earned compensation in the form of fees for each warrant and subpoena they or their deputies served. Mileage and prisoner fees were another source of income. Because sheriffs were elected officials, voters often remembered when the police had done the job they had elected their sheriff to perform. On one occasion a Mounted Police investigation ended with the local grand jury bringing criminal charges against a Torrance County sheriff.
He was convicted and removed from office.

Captain Fullerton was not asked to continue his command by the new territorial governor, Herbert J. Hagerman, in April 1906. The rest of Fullerton's force was reappointed for another year. President Roosevelt had named Hagerman governor and ordered him to clean up the territory and make it ready for statehood. Hagerman named veteran lawman Fred Fornoff as the new ranger captain and gave him Roosevelt's directive. Fornoff was a former Albuquerque city marshal, Secret Service agent and deputy US marshal, and had served with President Roosevelt's Rough Riders in 1898. Among the new ranger captain's first acts was to relocate the territorial police headquarters from Socorro to Santa Fe, in violation of the Mounted Police Act, to two rooms on the first floor of the capitol building. The Mounted Police office remained in that location until December 1913; the headquarters move was made legal as part of the Mounted Police Reorganization Act of 1909. From 1914 to 1918, the police had no central headquarters. The reorganized state Mounted Police office was established in Silver City in the spring of 1918, but was relocated to Las Vegas in January 1919 and remained in that city until the force was disbanded in January 1921.

Captain Fornoff made two other major changes. He ordered the rangers to hang up their gray uniforms and to ride the back country dressed in range rider gear, and the Mounted Police no longer made extended group scouting trips. Fornoff's Boys, as the Mounted Police were commonly called during these years, rode their own trail and used their new five-pointed silver star as their badge of authority.

In 1909 territorial lawmakers, citing "economic reasons" and strong support from county officers, reduced the size of the Mounted Police from three officers and eight rangers down to two officers and four rangers. This new six-man force was on occasion augmented by one or two "additional members" who served short-term appointments as salaried rangers. The 1909 law also provided that the governor could appoint non-salaried Special Mounted Police with the same police authority as the regular force. Many of these men were railroad police detectives or railyard guards. Captain Fornoff's men made almost 900 recorded arrests during the closing years of the territorial era. This number did not include arrests made by the Special Mounted Police.

In 1910, two Mounted Police investigated a series of stage robberies near the gold mining camp of Mogollon. The local deputy sheriff was less than effective in his enforcement of territorial law and was the cover man for the businessmen who wished to have a "wide open" town. The deputy's actions were supported by the local justice of the peace. The territorial police quickly arrested the robbery suspects, then began to close down the town's many illegal saloon operations and gambling dens and to enforce the regulation of the camp's two red-light districts. The deputy sheriff stationed at Mogollon was paid by subscriptions from the local saloonmen and gamblers, but his authority came from the county sheriff, who fired him. The justice of the peace was removed from office by the county commissioners. The Mogollon "gang" quickly sought revenge. The former deputy sheriff and justice of the peace joined a plot to kill or discredit the territorial officers. The trap was to be sprung in the discharged deputy's own saloon. However, none of the conspirators had counted upon how quick and deadly accurate Mounted Policeman John A.
Beal was with a pistol. The former deputy was fatally shot as he attempted to kill the ranger. Beal and his partner Bob Putman were now forced to confront a mob of angry saloonmen and gamblers for a few days until Mounted Police Sergeant John W. Collier could arrive in Mogollon to back up the two rangers. Beal and Putman were later tried and acquitted of all charges during their murder trial. The law and order legend of the New Mexico Mounted Police had reached its zenith.

On Saturday, January 6, 1912, President William H. Taft signed a special bill granting statehood to New Mexico. The Sunshine Territory, after 62 years, was no more, and with that action the Territorial Mounted Police became the new state's first police force with statewide authority. By the time the second session of the new State Legislature met in the spring of 1913, the Mounted Police had once again earned their reputation for fearlessness in enforcing the letter of the law. This time the arrested suspects were state senators charged with accepting bribes for their votes. A bill to abolish the state mounted police was quickly introduced in the Senate, and a lengthy pro-and-con debate on the merits of the state police followed. Governor William C. McDonald, a Democrat, quickly made it clear that he would veto any measure to abolish the popular police force. The Republican lawmakers' answer was to bypass the chief executive, and public sentiment, by excluding any funding for the Mounted Police from the state's annual Appropriation Bill. No money, no police.

Captain Fred Fornoff sued the state auditor to make him pay the Mounted Police under a provision of the original Mounted Police enabling act of 1905. The state district court at Santa Fe upheld Fornoff's complaint, but the State Supreme Court reversed the lower court, and the Mounted Police ran out of money on December 1, 1913. The police were not abolished, because the law that had established their authority still existed; the rangers just ceased to be a body in the field and became a phantom force. During the mid-teen years of 1914-1917, Governor McDonald made use of his office contingency fund to support limited actions by the Mounted Police. Fred Lambert was the "Phantom Ranger Force," and he seemed to be everywhere there was trouble.

National security became a major concern when the United States entered the First World War. Particular concern centered on security along the Mexican border after the New Mexico National Guard was federalized, because the Mexican government was sympathetic to Imperial Germany in the global conflict of 1914-1918. New Mexico's governor reactivated the Mounted Police with special funding from the State War Council in the spring of 1918. He selected Herbert J. McGrath, a veteran Grant County sheriff and one of Fullerton's Rangers, to lead the new ranger force. During their eight months of operation, the 16-man force made 452 felony arrests for crimes ranging from murder to bootlegging, prostitution, burglary, violation of the stock laws and the new automobile code. McGrath's men also recovered over $12,000 worth of stolen property. These men had set a new standard for the effectiveness of a state police agency.

In 1919, a new governor appointed Apolonio A. Sena, a former ranger and deputy US marshal, to head a reorganized and fully funded company of Mounted Police. Sena commanded a gray-uniformed ranger corps of five sergeants and 16 policemen divided into five service districts. The men used automobiles as well as horses to patrol their areas.
In an unpopular action, the governor ordered the whole Mounted Police force to Gallup during the final months of 1919 to maintain civil order during a coal miners' strike. The rangers' image among the working-class populace was damaged by this ill-advised action. In 1920, the Mounted Police continued to deal with the old frontier-era crimes as well as some "modern day" crimes like auto theft, speeding and "running a car without a license." The Mounted Police arrest record for 1919 was 114 persons. The state police documented 103 cases in 1920, but no arrest data is now available for the few days the token ranger force operated in January 1921.

Former Governor Hagerman chaired a special legislative finance committee in 1920 that recommended that the Mounted Police be abolished and that a three-man state marshal team be established in their place. This move would still keep a state police function in the field, but would save the state $40,000 per year. The newly elected governor had won election by supporting a tax-cut plan designed to balance the state budget and jump-start the state's post-war economy. Captain Sena resigned in December and was replaced by former San Miguel County Sheriff Lorenzo Delgado. Captain Delgado and Governor M. C. Mechem agreed that Delgado would be the caretaker head of the state police until the new marshal system was established by the legislature. He would then become the new chief marshal. The governor signed the bill that abolished the Mounted Police on February 15, 1921, sixteen years to the day after New Mexico's rangers had been born. Lawmakers had done with a vote what outlaw bullets had been unable to accomplish. In a further twist of fate, the state marshal plan was never debated or established by the New Mexico State Legislature.

The obligation of the historian is incomplete if the account ends in the past and fails to provide direction for the future. The citizens of the Land of Enchantment can look to the future with pride and respect for the guardians of the law in their state. The legacy of public service created by the New Mexico Territorial Mounted Police lives on in the courage and devotion of the men and women who wear the black and gray uniform of the present-day New Mexico State Police.

This article is based upon the author's forty years of research on the New Mexico Mounted Police. For more information about the rangers, consult the author's series of ranger books listed below.

The Thin Gray Line: The New Mexico Mounted Police (Fort Worth, TX: Western Heritage Press, 1971).
Fullerton's Rangers: A History of the New Mexico Territorial Mounted Police (Jefferson, NC: McFarland & Company, 2005).
New Mexico's Rangers: The Mounted Police (Charleston, SC: Arcadia Publishing, 2010).
Cipriano Baca, Frontier Lawman of New Mexico (Jefferson, NC: McFarland & Company, 2012).
Language development in children is amazing, and it’s a development that many parents really look forward to. The secret to helping your child learn language is very simple: talk together lots and listen lots. Language development in children: what you need to know Although the first year is really important for language development in children, major learning continues throughout a child’s early years. And learning language is a lifelong process. In their first 12 months, babies develop many of the foundations that underpin speech and language development. For the first three years or so, children understand a lot more than they can say. Language development supports your child’s ability to communicate, and express and understand feelings. It also supports thinking and problem-solving, and developing and maintaining relationships. Learning to understand, use and enjoy language is the critical first step in literacy, and the basis for learning to read and write. How to encourage your child’s language development The best way to encourage your child’s speech and language development is to talk together frequently and naturally. Talking with your baby Talk to your baby and treat her as a talker, beginning in her first year. Assume she’s talking back to you when she makes sounds and babbles, even when she’s just paying attention to you. When you finish talking, give her a turn and wait for her to respond – she will! When your baby starts babbling, babble back with similar sounds. You’ll probably find that he babbles back to you. This keeps the talking going and is great fun! Responding to your baby As your baby grows up and starts to use gestures and words, respond to her attempts to communicate. For example, if your child shakes her head, treat that behaviour as if she’s saying ‘No’. If she points to a toy, respond as if your child is saying, ‘Can I have that?’ or ‘I like that’. When you tune in and respond to your child, it encourages him to communicate. You’ll be amazed at how much he has to say, even before his words develop. Talk about what’s happening. Talk to your baby even if she doesn’t understand – she soon will. Talk about things that make sense to her, but at the same time remember to use lots of different words. As your baby becomes a toddler, keep talking to him – tell him the things that you’re doing, and talk about the things that he’s doing. From the time your child starts telling stories, encourage her to talk about things in the past and in the future. At the end of the day, talk about plans for the next day – for example, making the weekly shopping list together or deciding what to take on a visit to grandma. Similarly, when you come home from a shared outing, talk about it. Introducing new words It’s important for children to be continually exposed to lots of different words in lots of different contexts. This helps them learn the meaning and function of words in their world. Reading with your baby Read and share books with your baby and keep using more complex books as he grows. Talk about the pictures. Use a variety of books and link what’s in the book to what’s happening in your child’s life. Books with interesting pictures are a great focus for talking. Read aloud with your child and point to words as you say them. This shows your child the link between written and spoken words, and that words are distinct parts of language. These are important concepts for developing literacy. Your local library is a great source of new books. 
Following your child’s lead If your child starts a conversation through talking, gesture or behaviour, respond to it, making sure you stick to the topic your child started. You can also repeat and build on what your child says. For example, if she says, ‘Apple,’ you can say, ‘You want an apple. You want a red apple. I want a red apple too. Let’s have a red apple together’. Language development: the first six years Here are just a few of the important things your child might achieve in language development between three months and six years. In this period, your baby will most likely coo and laugh, play with sounds and begin to communicate with gestures. Babbling is an important developmental stage during the first year and, for many children, words are starting to form by around 12 months. Babbling is often followed by the ‘jargon phase’ where your child will produce unintelligible strings of sounds, often with a conversation-like tone. This makes his babbling sound meaningful. First words also begin by around 12 months. Babbling, jargon and new words might appear together as your child’s first words continue to emerge. Find out more about language development from 3-12 months. During this time, first words usually appear (these one-word utterances are rich with meaning). In the following months, babies continue to add more words to their vocabulary. Babies can understand more than they say, though, and will be able to follow simple instructions. In fact your baby can understand you when you say ‘No’ – although she won’t always obey! If your baby isn’t babbling and isn’t using gestures by 12 months, talk to your GP, child and family health nurse or other health professional. 18 months to 2 years In his second year, your toddler’s vocabulary has grown and he’ll start to put two words together into short ‘sentences’. He’ll understand much of what’s said to him, and you’ll be able to understand what he says to you (most of the time!). Language development varies hugely, but if your baby doesn’t have some words by around 18 months, talk to your GP, child and family health nurse or other health professional. Find out more about language development from 1-2 years. Your child will be able to speak in longer, more complex sentences, and use a greater variety of speech sounds more accurately when she speaks. She might play and talk at the same time. Strangers will probably be able to understand most of what she says by the time she’s three. Find out more about language development from 2-3 years. Now your child is a preschooler, you can expect longer, more abstract and complex conversations. He’ll probably also want to talk about a wide range of topics, and his vocabulary will continue to grow. He might well show that he understands the basic rules of grammar, as he experiments with more complex sentences. And you can look forward to some entertaining stories too. Find out more about language development from 3-4 years and language development from 4-5 years. During the early school years, your child will learn more words and start to understand how the sounds within language work together. She’ll also become a better storyteller, as she learns to put words together in a variety of ways and build different types of sentences. Find out more about language development from 5-6 years. Children grow and develop at different rates, and no child exactly fits a description of a particular age. 
In each area of development things happen in a fairly predictable order, but there's also a wide variation in what's 'normal'. If you have any concerns, ask your child and family health nurse or see a speech pathologist.

Speech and language: what's the difference?
Speech means producing the sounds that form words. It's a physical activity that is controlled by the brain. Speech requires coordinated, precise movement from the tongue, lips, jaw, palate, lungs and voice box. Making these precise movements takes a lot of practice, and that's what children do in the first 12 months. Children learn to correctly make speech sounds as they develop, with some sounds taking more time than others.

Language is the words that your child understands and uses, as well as how he uses them. Language includes spoken and written language. The parts that make up language include vocabulary, grammar and discourse: Vocabulary is the store of words a person has – like a dictionary held in long-term memory. Grammar, or syntax, is a set of rules about the order in which words should be used in sentences. These rules are learned through the experience of language. Discourse is a language skill that we use to structure sentences into conversations, tell stories, poems and jokes, and for writing recipes or letters. It's amazing to think that very young children begin to master vocabulary, grammar and syntax – such a complex collection of concepts.
Tuesday, September 6, 2011

Chinese tomb figures show many similarities to those of ancient Egypt. Not only did the deceased surround themselves with worldly luxuries such as ceramics and clothing, their tombs also included figures of servants who were meant to, as in Egypt, magically entertain the deceased as the living servants had in life. This was, of course, a step up from Shang Dynasty (ca. 1523–1028 bce) burials, which included pets, servants, horses, and guards killed to accompany the deceased (the wealthy, obviously). This was also a practice in some ancient Mesopotamian cultures (2000s bce). From the Zhou Dynasty (1027–256 bce) through the Song Dynasty (960–1279 ce), human sacrifices were replaced by ceramic figures of familiar human attendants. If you compare the wide range of figures across cultures, they present charming images of domestic life, from everyday work (an Egyptian grinding corn) to works such as this Chinese example of a household servant playing music.

During the Six Dynasties period (220–589 ce), the period in which this figure was made, China was more or less in turmoil, with numerous rival kingdoms in both the north and south. However, obviously, life (and death) went on. Figures from this period were made predominantly of black earthenware with the face and hands painted. During the Tang Dynasty (618–907 ce), many tombs of the wealthy were painted with scenes of pleasures in the everyday world, such as dancing and viewing a garden.

Activity: Using clay, make a sculpture of a person playing, working or resting. Look at various pictures of people doing a variety of things to select a pose. (Explorations in Art Grade 1)

Correlations to Davis programs: Explorations in Art Grade 1: 9-10 studio, Explorations in Art Grade 2: 5.29–30 studio, Explorations in Art Grade 3: 1.3-4 studio, Explorations in Art Grade 4: 2.7-8 studio, Discovering Art History: 4.3, The Visual Experience: 13.4
A new set of tests, a new way to look at student achievement

This spring, Waterloo students took the new Iowa Assessments test for the first time. As you may be aware, in 2011-2012 the new Iowa Assessments test replaced the Iowa Tests of Basic Skills (ITBS) and Iowa Tests of Educational Development (ITED). The Iowa Assessments offer new and different data, and because of this, you may need information to assist you in interpreting your student's scores. The Iowa Assessments will provide valuable information about the yearly academic growth of students. It will also provide strong indicators of college readiness. Student academic growth is monitored based on something called a "standard score".

Interpreting Iowa Assessment results
The results from the Iowa Assessments will look different from what most parents are familiar with from the ITBS and ITED, and there's a reason for that: they are different! The Iowa Assessments are different tests, with new and rewritten questions. To compare results on last year's ITBS and this year's Iowa Assessments would not provide an accurate comparison. To put it simply, it's comparing apples to oranges.

National Standard Score (NSS)
The National Standard Score, or NSS, describes performance on a continuum from kindergarten through high school. The continuum is based on scores from testing thousands of students and determining where students at certain grade levels fall within a range. The achievement continuum connected with the Iowa Assessments is divided into three categories: Non-Proficient, Proficient, and Advanced. Using these scores allows teachers, parents and students to track not only proficiency at test time, but year-to-year growth.

National Percentile Ranking (NPR)
The Iowa Assessments also include a National Percentile Ranking (NPR). This compares a student's score with those of others in the nation in the same grade who took the test at the same time of year. The NPR is based on a scale of 1 to 99, so if a third-grade student receives a 75, that means the student did as well as or better than 75 percent of other third graders in the nation taking the test at the same time. In past years the NPR has been the more important score on the ITBS or ITED. With the switch to the new Iowa Assessments, the NSS will be the more important indicator of student achievement, as it makes it easier to track one student's growth from year to year rather than comparing that student to other students.

For more about the Iowa Assessments, please visit the Iowa Testing Programs at the University of Iowa's College of Education at http://itp.education.uiowa.edu/ia/default.aspx
In the summer of 1945, a group of African-American paratroopers for the U.S. Army became smokejumpers assigned to a special Forest Service mission known as "Operation Firefly." Known as the Triple Nickles, they made up the 555th Parachute Infantry Battalion, a unit of black soldiers who set out to make a jump for change. Two of these valiant, pioneering men recently passed away, or "took their last jump" as the Triple Nickles Association likes to say. Lt. Col. Roger S. Walden, 91, took his last jump on Sept. 17. Walden will be interred at Arlington National Cemetery at a later date. Second Lt. Walter Morris, 92, took his last jump on Oct. 13 and was memorialized on Oct. 19 in Palm Coast, Fla.
September 26, 2013 Why Have Americans' Income Expectations Declined So Sharply? Data from the Thomson Reuters/University of Michigan Surveys of Consumers (Michigan survey) suggests that Americans' income expectations declined sharply in the 2008-09 recession and remain depressed. The reasons for this marked increase in pessimism are important because theory suggests that income expectations are a fundamental determinant of consumer spending and may help us understand the slow economic recovery. In this article, I examine two related questions about income expectations: - Was the decline concentrated among households with certain characteristics, or was it much more widespread in the population? - Did the decline reflect the more adverse experiences of certain households, or was it reflective of a more general malaise? I used the monthly individual-level data from January 2007 to April 2012 in the Michigan survey to answer these questions.1 Large, Persistent Drop in Income Expectations The first chart plots the survey responses to the following question on nominal income expectations: "During the next 12 months, do you expect your (family) income to be higher or lower than during the past year?" As shown by the black line--from the start of the series in the late 1970s until 2007--on average, 60 percent of respondents expected their family income to be higher over the next year. The percent expecting higher income typically dipped in earlier recessions (the gray bars), but the decline in the last recession was much larger and has been much more persistent. The percent expecting lower income (the red line) reached an unprecedented high level in the last recession and has remained elevated. The responses to a separate survey question about real income expectations (not shown) display a similar pattern. Pessimism Seen across a Wide Range of Households The recent drop in income expectations does not appear to be limited to a select group of households. The summary measure of income expectations used here is the diffusion index, or the percentage of respondents expecting higher income minus the percentage expecting lower income plus 100. The left panel in the second chart shows that income expectations fell sharply in the recession for all levels of education. The decline was somewhat larger for individuals with less education (the red and black lines), but by the first half of 2012 the income expectations of all education groups were comparably depressed relative to their pre-recession levels. The right panel shows the differences in income expectations by age. The data indicate that income expectations declined in all age groups. Interestingly, the income expectations of households headed by individuals over the age of 60 (the red line) also fell sharply, suggesting that the weak labor market, at least directly, may not be the only reason for the increased pessimism. This decline in expectations across all education and age groups, as well as similar patterns (not shown) in income expectations by gender, marital status, and position in the income distribution, point to a widespread decline in income expectations over the past several years. Income Expectations Vary by Financial Experiences or Circumstances There is a strong relationship in this survey between individuals' recent financial experiences and their income expectations. Income expectations are substantially higher for households who consider themselves to be better off financially than for those who are worse off financially. 
While this may seem obvious, if a household viewed their recent financial setback to be temporary, they might be even more likely to expect higher income growth than a household whose finances had improved recently. Within each group of financial experiences--the "better-offs" and "worse-offs"--there is only a modest decline in average income expectations from 2007 to 2012. However, as is well known, household finances generally deteriorated considerably during the recession. The percent of households reporting being worse off financially than the previous year rose sharply and the percent better off financially fell considerably. Altogether, this suggests that at least some of the steep decline in income expectations might simply be a reflection of more households than usual experiencing adverse shocks to their own finances. To further investigate this explanation, I use the variation across households to identify the relationship between income expectations and selected household characteristics and experiences. The household characteristics included are age, education, and gender. To proxy for households' experiences, I use the unemployment rate in a respondent's state of residence, the change in respondent personal finances (just presented above), and the change in respondent home value. For this regression analysis, I pooled all household responses from January 2007 to April 2012. I also remove the average differences in income expectations from quarter to quarter and across states of residence. Household Circumstances Only Partly Explain Lower Expectations The last chart uses the estimated relationships to decompose the change in income expectations by households' experiences, households' characteristics, and an unexplained residual. From 2007 to 2009, income expectations fell about 30 index points (the left bar), and almost two-thirds of the decline was accounted for by households' adverse experiences (the blue portion of the bar). Household characteristics played only a small role (the red portion), and one-third of the decline in income expectations was unexplained by the variables associated with households (the green portion). The recovery in income expectations from 2009 to the first half of 2012 (the right bar) has mainly reflected modest improvements in economic experiences, whereas the unexplained improvement has been small. Notably, the unexplained drag on expectations in the recession has not been unwound. This pattern could imply a permanent downshift in income expectations. Alternatively, it might signal the potential for a bounce back in expectations. In summary, the pessimism of households about their future income is deep and broad based. The large and persistent decline in income expectations in the aggregate data is evident among several different types of households. This analysis also shows that adverse experiences of households can account for half of the net decline in income expectations since 2007. Given the only modest improvement in household finances and the labor market seen in the aggregate, it is perhaps not surprising that income expectations remain downbeat. Moreover, the large, unexplained shock to income expectations might suggest a permanent change in households' views--a phenomenon that would continue to weigh against a recovery in consumer spending. Or the unwinding of this excess pessimism might provide an extra boost to spending as the economy picks up further. 
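For readers who want to see the mechanics, the short sketch below shows how a diffusion index of the kind defined earlier can be computed from survey tallies. The figures are hypothetical and the code is purely illustrative; it is not the code or data behind this note, which relied on the individual-level Michigan survey microdata and on regressions that also removed quarter and state-of-residence differences.

```python
# Illustrative only: hypothetical tallies, not Michigan survey data.

def diffusion_index(pct_higher, pct_lower):
    """Percent expecting higher income minus percent expecting lower, plus 100."""
    return pct_higher - pct_lower + 100

# Hypothetical counts of responses to the income-expectations question.
waves = {
    2007: {"higher": 580, "same": 270, "lower": 150},
    2009: {"higher": 430, "same": 270, "lower": 300},
}

for year, counts in sorted(waves.items()):
    total = sum(counts.values())
    pct_higher = 100.0 * counts["higher"] / total
    pct_lower = 100.0 * counts["lower"] / total
    print(year, round(diffusion_index(pct_higher, pct_lower), 1))

# With these made-up numbers the index falls from 143.0 to 113.0, a 30-point
# drop comparable in size to (but not taken from) the decline described above.
```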
This article highlights some recent work on income expectations--an area of ongoing study. 1. The Michigan survey is a nationally representative monthly survey based on about 500 telephone interviews. Individuals are selected for the survey using random digit dial sampling (including a cell phone sample) and are interviewed twice, six months apart. See www.sca.isr.umich.edu for more information on the survey. Return to text Disclaimer: FEDS Notes are articles in which Board economists offer their own views and present analysis on a range of topics in economics and finance. These articles are shorter and less technically oriented than FEDS Working Papers.
Library Information for International Students Welcome to George Fox University libraries! George Fox University has two libraries: Find books and articles A library catalog contains information on all of the materials owned or subscribed to by the library. Use the catalog to find books. The name of our catalog is Primo. Start searching Primo by typing in a keyword. When you find a book that you want, write down the "Call Number" so that you can find it in the book stacks (the aisles where the books are shelved). All call numbers are from the Library of Congress Classification system and start with a letter or letters, for example, D or DA. The stacks are arranged alphabetically so that you can locate the book with the call number. The books are grouped together by subject so that when you find a book for your topic there could be other books on the same topic nearby. At libraries in the United States, it is customary to go to the library stacks and to get the book for yourself. If you cannot find a book, please ask a librarian to help you. You will need to find scholarly articles to use to write your papers (please see our information on plagiarism). Databases contain articles, or information about articles. Begin by going to the Subject Guides webpage. Select your subject and you will find a list of the databases relevant to that subject. Start searching the databases for articles by entering keywords in the search box. There are many ways to search a database. If you do not find what you are looking for ask a librarian for help. The library Website contains many other tools and resources. A good place to go for more helpful information is the How do I...? section of the library's Website. When you write your papers you will use ideas or information from articles or books that have been written by other people. If you show in your paper that the ideas or information belong to someone else then it is acceptable to use their written work. It is wrong if you use someone else's article or book without showing in your paper that the ideas or information belong to them. This is called plagiarism. Plagiarism is stealing someone else's written work. Students who plagiarize may be penalized or expelled from school. Please read George Fox University's policy on academic honesty. This is very important! You can avoid plagiarism by putting a citation in your paper. A citation is information about the author and the book or article that they wrote. Get Help at the library Librarians are information professionals with graduate degrees who are available to help you. Here are some of the things that librarians can help you with: - Finding information and resources - Learning how to use the catalog - Learning how to use the databases - Learning how to do academic research - Learning how to use the library Many people feel uncomfortable asking a librarian for help. We want you to ask for help when you need it! We enjoy helping our students! We are always available when the library is open and you can contact us at the reference desk, by phone, by email, and also by CHAT. Email, phone, and chat links are available at "Ask us" on the library homepage. 
Dictionaries for the English language learner - NTC's American English learner's dictionary : the essential vocabulary of American language and culture - Longman handy learner's dictionary of American English - Prentice Hall encyclopedic dictionary of English usage Global Reference Center Britannica's Global Reference Center is the home of comprehensive online content in a variety of global languages, including Spanish, French, Chinese, Japanese and Korean. Resources include: - Gran Enciclopedia Planeta - Enciclopedia Juvenil - Encyclopædia Universalis - Britannica Online Japan - Britannica Online Korea - Britannica Pocket Encyclopedia (Chinese) Links to helpful English language books and e-books - Americanisms -- dictionaries - English language -- Textbooks for foreign speakers - English language -- Spoken English -- United States - English language -- United States - English language -- Conversation and phrase books - English language – pronunciation Library materials by language The library offers materials in languages other than English. You can use the catalog (Primo) to find books, electronic books, and videos in Chinese, Japanese, Spanish, Korean, French, and German. Links to library materials by language: Database language features Many of the databases offer interfaces (the layout of graphics and controls that allow you to use the database) in languages other than English. The WorldCat catalog also offers an interface in multiple languages. Learn more about the database and catalog language options that are available to you. Databases with reading level options - MAS-Ultra School Edition Hundreds of periodicals, reference books, biographies, and primary source documents. In the “Search options” you can select a “Lexile Reading Level”. The "850-1050" level is the easiest. - Middle Search Plus Good place to start researching current events. Contains full-text for nearly 140 titles. In the “Search options” you can select a “Lexile Reading Level”. The level "750-950" is the easiest. - Student Resource Center Gold Covers the major subject areas and provides a premium selection of reference, thousands of full-text periodicals and newspapers, primary documents, creative works, and multimedia, including hours of video and audio clips and podcasts. Under the search box select the content level that you want: basic, intermediate, or advanced. - Links of interest to students & teachers of English as a Second Language - Resources for English as a Second Language - The Tower of English: The ESL Guide to the Internet - Dave’s ESL Café---for students - ESL Student Resources from Ohio University’s Department of Linguistics - Online Picture Dictionary---German, French, Italian, Spanish, English - About.com: English as a 2nd Language
Have summer fun without salmonella

Don't invite Sam and Ella (you know, Salmonella?!) to your summer events! Are you aware that illnesses from Salmonella spike in the summer? And that the overwhelming majority of hospitalizations from salmonellosis occur in children under 5 years of age? Children under 4 are 4.5 times more likely to get bacterial infections from food compared to adults. Be ready this summer to take action and reduce the risk of infection in your family!

Four quick tips to remember:
1. Don't rinse chicken. We used to think that this was a good practice, but it only spreads germs around the kitchen and isn't a food safety step.
2. Cook chicken to 165 degrees F – always use a food thermometer to check.
3. Wash your hands before all meal preparation – be among the 35 percent of food preparers who do! Sad but true: in studies, 65 percent do not wash their hands before preparing food.
4. Always use soap for effective hand washing.

Salmonella is common and can be found in many types of foods such as undercooked eggs, poultry and meat, raw produce and unpasteurized milk or other dairy products. Symptoms include abdominal cramps and tenderness, fever and diarrhea.

Whether your picnic spot is a roadside table, a city park or your backyard, eating outside creates special memories for kids. Picnic picks include easy-to-eat foods and water or natural fruit juices, which are better thirst quenchers than soda pop. Fill re-sealable plastic bags with ice cubes to keep food cold. Use the cubes to keep drinks cool; then water the grass with the leftovers to lighten the load for the trip home. A big blanket or sheet is great for covering a picnic table or sitting on the grass. Trash bags are a good idea so you can "pack it in and pack it out." You might need insect repellant and sunscreen.

Patricia Steiner is a nutrition and health specialist with Iowa State University Extension and Outreach in Mediapolis and serves Des Moines, Henry, Jefferson, Lee, Louisa and Van Buren counties.
ROBERT SIEGEL, host: The comparisons were inevitable. JFK, the 1960 Democratic nominee, he was a Catholic, facing public opposition from some prominent Protestant churchmen. The only other Catholic to win the nomination, Al Smith, had run and lost back in 1928, never publicly speaking about his faith. Kennedy, in 1960, spoke to Protestant ministers in Houston, Texas, about his ideas about religion and politics. (Soundbite of archived speech) President JOHN F. KENNEDY: I believe in an America where the separation of church and state is absolute; where no Catholic prelate would tell the president - should he be Catholic - how to act, and no Protestant minister would tell his parishioners for whom to vote; where no church or church school is granted any public funds or political preference, and where no man is denied public office merely because his religion differs from the president who might appoint him, or the people who might elect him. SIEGEL: Kennedy also reminded his audience that day of his record in Congress. He'd not only opposed what he called unconstitutional aid to parochial schools, he had also opposed sending an ambassador to the Vatican. David Campbell is a political scientist at Notre Dame, who's written about religion and the American politics. Welcome to the program. Professor DAVID CAMPBELL (Political Scientist, Notre Dame University): Thank you. SIEGEL: To hear Mitt Romney today and just that part of JFK's speech in 1960, you can't help but sense that political discourse in America about religion was a lot different then than it is today. Prof. CAMPBELL: Certainly. I mean, Kennedy stood before his audience and essentially said to a group of Protestants, you can vote for me, a Catholic, because you can be assured that my Catholicism will not necessarily inform what I will do as president. Today, when Mitt Romney spoke, he was essentially saying exactly the opposite. Vote for me, my faith will inform what I would do as president. But then, of course, he had to reassure his audience that his faith has common ground with other faiths that may not be the same. SIEGEL: Given that we didn't hear presidential candidates saying that sort of thing back in 1960, what happened? What was the change? When did it happen in American life? Prof. CAMPBELL: Well, there is - some scholars, they debate about this. But I think for the most part, most people would agree that the candidacy of Jimmy Carter in 1976 was critical. Carter was really the first presidential candidate to publicly identify himself as an evangelical or a born again Christian and therefore kind of put that group on the political radar screen as, you know, being a politically salient portion of the electorate. One of the great ironies, of course, of American politics is that Carter, a Democrat, kind of started the ball rolling, but it was the Republicans who ended up with that group more or less in their camp, and that, of course, we see today with the presidency of George W. Bush. SIEGEL: There was a moment in the speech today that Mitt Romney gave, when he reiterated his lack of interest in talking about many of his beliefs. Certainly, he doesn't want to talk about the doctrines of his church. But he did say people often ask me this, and I should answer. Here's what he said. (Soundbite of archived speech) Mr. ROMNEY: What do I believe about Jesus Christ? I believe that Jesus Christ is the son of God and the savior of mankind. SIEGEL: Rather unusual theological statement for a candidate for a presidential nomination. 
Prof. CAMPBELL: Well, unusual only in the sense that perhaps Romney has said more about his belief in Jesus than we've heard in the past. But we certainly, within the last few election cycles, have heard candidates use overtly religious language. You may recall a famous incident in the primary season in 2000, when George W. Bush was running for the Republican nomination. When he was asked to name his favorite political philosopher, he named Jesus Christ. And I would say that what we have here from Romney is kind of an extension of that sort of talk that you have found in the past. SIEGEL: Have you heard anyone recently voiced the interpretation of the separation of church and state that John F. Kennedy was expressing in that speech in Houston in 1960? Prof. CAMPBELL: Well, that's a good question. I don't think there is anyone, at least of the top-tier candidates running today, who have articulated quite that way. I will say that when Bill Bradley was running in 2000 against Al Gore in the Democratic primary, I do remember Bradley making statements that were very similar to what Kennedy was saying in 1960. But of course, he didn't win the nomination. SIEGEL: I think, back in the days of JFK, a lot of Americans just assumed that the role of religion in our life was something that was steadily declining over the years. That hasn't been the case for the past couple of decades. Where do you think it's headed in the future, over the next couple of decades? Mr. CAMPBELL: Well, that's a very good question. And I'm involved with some research right now that suggests that there actually is increasing secularization; that is more and more people who are turning away from religion or telling pollsters that they have no religious affiliation. It's not a large group, but it is a growing group. And if you look forward, it is likely that we will see, you know, increasing numbers of Americans who claim no religion at all. And that may very well have an effect on how much God-talk we hear from political candidates in the future. But for the time being, I think we're going to hear still a lot of talk about religion whenever candidates run. SIEGEL: Professor Campbell, thank you very much for talking with us today. Mr. CAMPBELL: Thank you. SIEGEL: That's political scientist David Campbell of the University of Notre Dame.
What's the Big Idea?

The Internet has a terrible habit of misquoting Einstein on energy and creativity until he sounds like he's the author of The Secret, not the theory of relativity. Here's something he actually did say. Describing the effect of music on his inner life, he told a friend: "When I examine myself and my methods of thought, I come close to the conclusion that the gift of imagination has meant more to me than any talent for absorbing absolute knowledge." At times, he explained, "I feel certain I am right while not knowing the reason."

Today, what Einstein believed intuitively – that insight was essential to scientific discovery and to the arts – can be observed methodically in the lab. Thanks to the invention of fMRI, neuroscientists are capable of peering into a living, thinking brain in a way that their predecessors never dreamed of, with the potential to test long-standing ideas about how we arrive at novel solutions.

Eric Kandel is a pioneer in the field who worked alongside Harry Grundfest in the very first NYC-based laboratory devoted to the study of the brain. In 2000, he was awarded a Nobel Prize in physiology/medicine for showing that memory is encoded in the neural circuits of the brain. Kandel believes that we're on the verge of reaching an understanding of the nature of creativity that is more than anecdotal. Watch our live interview with Eric Kandel, which originally aired 3/22/2012:

"There are a group of people who have studied aspects of creativity," he says. "And they found that when people do it in a sort of creative way, the ah-ha phenomenon, there is a particular area in the right side of the brain that lights up. And they show this not only with imaging, but also with electrophysiological recording."

The ah-ha phenomenon or Eureka effect is the well-documented flash of insight that occurs when, after much thought, a solution seems to suddenly just come to you. A recent paper by G. Jones of the Centre for Psychological Research in Health and Cognition describes the two major theories of cognitive insight:

Insight in problem solving occurs when the problem solver fails to see how to solve a problem and then--"aha!"--there is a sudden realization how to solve it... The representational change theory (e.g., G. Knoblich, S. Ohlsson, & G. E. Rainey, 2001) proposes that insight occurs through relaxing self-imposed constraints on a problem and by decomposing chunked items in the problem. The progress monitoring theory (e.g., J. N. MacGregor, T. C. Ormerod, & E. P. Chronicle, 2001) proposes that insight is only sought once it becomes apparent that the distance to the goal is unachievable in the moves remaining.

What's the Significance?

In the genre of study Kandel is referring to, researchers use fMRI to measure neural activity in participants solving a visuospatial creativity problem involving divergent thinking. The result? Creativity is not something that can be pinpointed to one specific region of the brain, but creative tasks do seem to engage the right side of the brain in particular.

The study of right-brain creativity stems all the way back to the work of John Hughlings Jackson, a nineteenth century physician whose work at the National Hospital for the Paralyzed and Epileptic in London laid the groundwork for the modern practice of bedside neurology. Influenced by Darwin's theories, Hughlings Jackson believed that the self was a function of the brain which had evolved along with the human species.
(He was, at least by his own estimation, the first person to use the word "self" in medical literature.) He discounted the idea "that there exists a centre of the nervous system that acts as a metaphysical interpreter, standing outside of sensory and motor function," and aimed instead to locate the mind in the physical body, and to understand its mechanics. Based on his clinical observations, Hughlings Jackson theorized that the left hemisphere is especially involved in language, logical processes, calculation, mathematics, and rational thinking, while the right hemisphere is involved in musicality and synthesis, an aspect of creativity. Evidently, Hughlings Jackson was right, says Kandel. (Both men fall squarely on the side of reductionism, an epistemological worldview which claims that methods and properties in one domain of science can be explained by another). "The sing-song in my language comes from the right hemisphere, the grammar and the articulation comes from my left hemisphere." Hughlings Jackson also believed that in typical brains, the two hemispheres inhibit one another, but brain damage occasionally changes that relationship. "So," Kandel explains, "if you have lesions of the left hemisphere that remove the inhibitory constraint on the right hemisphere, [it] frees up certain processes." Surprisingly, Hughlings Jackson found that children that developed an aphasia or language difficulty late in life sometimes also developed a musicality that they didn't have before. More recently, researchers analyzing frontotemporal dementia have found that when the dementia is expressed solely on the left side, patients begin to show creativity that they've never had before. People who develop this type of dementia, which is related to Alzheimer's, have been known to take up painting for the first time in their lives, or to start experimenting with the use of new colors and forms, if they have been painters. "This is quite unusual," says Kandel. "It's conceivable that as we get deeper and deeper insights into the mind, artists will get ideas about how combinations of stimuli affect, for example, emotional states that will allow them to depict those emotional states better." He elaborates, "It's amazing we know anything about creativity, but this is certainly – we are heading into an era in which one can really get very, very good insights into... the kinds of situations that lead to increased creativity, you know, is group think productive? Does it lead to great – greater creativity or does it inhibit individual creativity? Lots of these questions are being explored, both from a social psychological and from a biological point of view."
<urn:uuid:6dbe3b1e-e22b-4a6c-b21f-06ff4d7c018c>
CC-MAIN-2016-26
http://bigthink.com/think-tank/eureka-the-neuroscience-of-creativity-insight
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953445
1,340
2.671875
3
The Benefits of Drama In Education By: Diana Morrone Many people, when they hear the word "Drama," automatically think of performance. Well, this website has been created to break people of this mentality. There are many positive benefits that drama can have in the realm of a child's development. Implementing drama within the classroom is a great option for educators. Not only can drama be used and adapted across the curriculum, but it can also serve as a catalyst for building individual skills that students can later use in everyday life situations. Drama in the classroom is valuable because it makes learning active, engages students, and makes learning purposeful. Drama can be used across the curriculum and adapted to suit any subject, from acting out skits to exploring different characters, alternative endings, and scenarios. Drama promotes critical thinking, so that students can formulate and express their own opinions. In the following pages you will be informed of the benefits of drama for a student's development and will also be provided with some links you can visit for further information.
<urn:uuid:1b87b729-4947-4c11-839b-0ce64b781bdb>
CC-MAIN-2016-26
http://www.angelfire.com/art3/dramainedu/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00171-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957227
209
3.53125
4
London (Nov. 13) The text of Mr. Bevin’s address on the British Government’s Palestine policy reads as follows: His Majesty’s Government have been giving serious and continuous attention to the whole problem of the Jewish community that has arisen as a result of Nazi persecution in Germany, and the conditions arising therefrom. It is unfortunately true that until conditions in Europe become stable the future of a large number of persons of many races, who have suffered under this persecution, cannot finally be determined. The plight of the victims of Nazi persecution, among whom were a large number of Jews, is unprecedented in the history of the world. His Majesty’s Government are taking every step open to them to try and improve the lot of these unfortunate people. The Jewish problem is a great human one. We cannot accept the view that the Jews should be driven out of Europe and should not be permitted to live again in these countries without discrimination and contribute their ability and talent towards rebuilding the prosperity of Europe. PALESTINE DOES NOT PROVIDE SOLUTION OF WHOLE JEWISH PROBLEM Even after we have done all we can in this respect it does not provide a solution of the whole problem. There have recently been demands made upon us for large scale immigration into Palestine. Palestine, while it may be able to make a contribution, does not by itself provide sufficient opportunity for grappling with the whole problem. His Majesty’s Government are anxious to explore every possibility which will result in giving the Jews a proper opportunity for revival. The problem of Palestine is itself a very difficult one. The mandate for Palestine required the mandatory to facilitate Jewish immigration and to encourage close settlement by Jews on the land, while ensuring that the rights and position of other sections of the population are not prejudiced thereby. His Majesty’s Government have thus a dual obligation, to the Jews on the one side and to the Arabs on the other. The lack of any clear definition of this dual obligation has been the main cause of the trouble which has been experienced in Palestine during the past twenty-five years. His Majesty’s Government have made every effort to devise some arrangement which would enable Arabs and Jews to live together in peace and to cooperate for the welfare of the country, but all such efforts have been unavailing. Any arrangement acceptable to one party has been rejected as unacceptable to the other. The whole history of Palestine since the mandate was granted has been one of continual friction between the two races, culminating at intervals in serious disturbances. SAYS IMPOSSIBLE TO FIND COMMON GROUND BETWEEN JEWS AND ARABS The fact has to be faced that since the introduction of the mandate it has been impossible to find common ground between the Arabs and the Jews. The differences in religion and in language, in cultural and social life, in ways of thought and conduct, are difficult to reconcile. On the other hand, both communities lay claim to Palestine, one on the ground of a millennium of occupation and the other on the ground of historic association coupled with the undertaking given in the first World War to establish a Jewish home. The task that has to be accomplished now is to find means to reconcile these divergencies. The repercussions of the conflict have spread far beyond the small land in which it has arisen. The Zionist cause has strong supporters in the United States, in Great Britain, in the dominions and elsewhere.
Civilization has been appalled by the sufferings which have been inflicted in recent years on the persecuted Jews of Europe. On the other side of the picture, the cause of the Palestinian Arabs has been espoused by the whole Arab world and more lately has become a matter of keen interest to their ninety million co-religionists in India. In Palestine itself, there is always serious risk of disturbances on the part of one community or the other, and such disturbances are bound to find their reflection in a much wider field. Considerations not only of equity and of humanity, but also of international amity and world peace are thus involved in any search for a solution. STRESSES ANGLO-AMERICAN INTEREST IN SOLUTION OF PALESTINE ISSUE In dealing with Palestine all parties have entered into commitments. There are the commitments imposed by the mandate itself, and, in addition, the various statements of the last twenty-five years. Furthermore, the United States Government themselves have undertaken that no decision should be taken in respect to what, in their opinion, affects the basic situation in Palestine without full consultation with both Arabs and Jews. Having regard to the whole situation and the fact that it has caused wide interest which affects both Arabs and Jews, His Majesty’s Government decided to invite the Government of the United States to cooperate with them in setting up a joint Anglo-American committee of enquiry, under a rotating chairmanship, to examine the question of European Jewry and to make a further review of the Palestine problem in the light of that examination. I am glad to be able to inform the House that the Government of the United States have accepted this invitation. (At this point Bevin outlined the “terms of reference” of the committee as listed in President Truman’s announcement) The procedure of the committee will be determined by the committee themselves and it will be open to them, if they think fit, to deal simultaneously, through the medium of sub-committees, with their various terms of reference. COMMITTEE WILL ALSO CONSIDER POSSIBILITIES FOR SETTLEMENT IN EUROPE The committee will be invited to deal with the matters referred to in their terms of reference with the utmost expedition. Complying with the second and fourth paragraphs of their terms of reference, the committee will presumably take such steps as they consider necessary in order to inform themselves of the character and magnitude of the problem created by the war. They will also give consideration to the problem of settlement in Europe and to possible countries of disposal. In the light of their investigations, they will make recommendations to the two Governments for dealing with the problem in the interim until such time as a permanent solution can be submitted to the appropriate organ of the United Nations. The recommendations of a committee of enquiry such as will now be set up will also be of immense help in arriving at a solution of the Palestine problem. The committee will, in accordance with the first and third paragraphs of their terms of reference, make an examination on the spot of the political, economic and social conditions which are at present held to restrict immigration into Palestine and, after consulting representative Arabs and Jews, submit proposals for dealing with these problems. It will be necessary for His Majesty’s Government both to take action with a view to securing some satisfactory interim arrangement and also to devise a policy for permanent application thereafter.
This inquiry will facilitate the finding of a solution which will in turn facilitate the arrangements for placing Palestine under trusteeship. So far as Palestine is concerned, it will be clear that His Majesty’s Government will act as follows:
1. They will consult the Arabs with a view to an arrangement which will ensure that, pending the receipt of the ad interim recommendations which the committee of enquiry will make in the matter, there is no interruption of Jewish immigration at the present monthly rate.
2. After considering the ad interim recommendations of the committee of enquiry, they will explore, with the parties concerned, the possibility of devising other temporary arrangements for dealing with the Palestine problem until a permanent solution of it can be reached.
3. They will prepare a permanent solution for submission to the United Nations and if possible an agreed one.
VIOLENT DEPARTURE FROM PRESENT POLICY WOULD CAUSE REACTIONS IN MIDDLE EAST The House will realise that we have inherited, in Palestine, a most difficult legacy and our task is greatly complicated by undertakings, given at various times to various parties, which we feel ourselves bound to honor. Any violent departure without adequate consultation would not only afford ground for a charge of breach of faith against His Majesty’s Government, but would probably cause serious reactions throughout the Middle East, and would arouse widespread anxiety in India. His Majesty’s Government are satisfied that the course which they propose to pursue in the immediate future is not only that which is in accordance with their obligations, but is also that which, in the long view, is in the best interests of both parties. It will in no way prejudice either the action to be taken on the recommendations of the committee of enquiry or the terms of the trusteeship agreement, which will supersede the existing mandate, and will therefore control ultimate policy in regard to Palestine. His Majesty’s Government, in making this new approach, wish to make it clear that the Palestine problem is not one which can be settled by force and that any attempt to do so by any party will be resolutely dealt with. It must be settled by discussion and conciliation and there can be no question of allowing an issue to be forced by violent conflict. We have confidence that if this problem is approached in the right spirit by Arabs and Jews, not only will a solution be found to the Palestine question, just to both parties, but a great contribution will be made to stability and peace in the Middle East. Finally, the initiative taken by His Majesty’s Government and the agreement of the United States Government to cooperate in dealing with the whole problem created by Nazi aggression, is a significant sign of their determination to deal with the problem in a constructive way and a humanitarian spirit. But I must emphasize that the problem is not one which can be dealt with only in relation to Palestine. It will need a united effort by the powers to relieve the miseries of these suffering peoples.
<urn:uuid:30a6f81b-1e90-4b57-b679-1d809bcd3662>
CC-MAIN-2016-26
http://www.jta.org/1945/11/14/archive/exit-of-foreign-minister-bevins-statement-on-palestine-in-house-of-commons
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957817
2,025
2.546875
3
This latest volume in the acclaimed Flora of Australia series covers the subfamilies Arundinoideae, Danthonioideae, Aristidoideae, Micrairoideae and Chloridoideae. It describes a mixture of tropical and temperate grasses and includes a number of economically and environmentally important groups, such as: - Triodia – iconic spinifex grasses of Australia’s arid areas that are an important major habitat for a variety of species - Wallaby grasses – attractive grasses with distinctive purple and green heads that are a major structural component of endangered south-eastern grasslands - Aristida (kerosene grasses and three-awns) – a large tribe of grasses whose characteristic three long bristles are problematic for the agricultural industry as they can contaminate fleece - Mitchell grasses – of great economic importance for the pastoral industry in Queensland - Couch grass – one of the lawn grasses we take for granted - Parramatta grasses – well-known weeds on the eastern seaboard - Arundo and Phragmites – the reeds along our waterways The volume includes native and naturalised species, treating five subfamilies, 55 genera and over 450 species. Many of the species treated are endemic to Australia. It features over 90 pages of illustrations as well as the traditional tightly written authoritative descriptions, identification keys, bibliographic information, and notes on ecology and distribution. An essential reference for plant taxonomists, ecologists and grassland researchers.
<urn:uuid:604aead8-46d2-43d1-9810-81d10ab378b9>
CC-MAIN-2016-26
http://www.publish.csiro.au/nid/22/pid/4789.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00150-ip-10-164-35-72.ec2.internal.warc.gz
en
0.899618
326
3.328125
3
Butterfly bush is a common name that refers to several plants, though the most familiar are plants belonging to the Buddleia genus. These Chinese natives are brightly colored perennials that are rich sources of nectar, attracting butterflies and hummingbirds. Which type you choose depends somewhat on your climate. Understanding Butterfly Bush Of the two main butterfly bushes, one grows in cold climates, while the other grows in subtropical climates. Both belong to the Loganiaceae (Buddlejaceae) family and to the genus Buddleia. Butterfly bush (Buddleia davidii), the cold-tolerant variety, grows in U.S. Department of Agriculture plant hardiness zones 5 and above. Wooly butterfly bush (Buddleia marrubiifolia) grows in USDA zones 8 through 10b. Wooly butterfly bush is a better choice in hot climates because it tolerates heat well. Butterfly bush is a deciduous to semi-evergreen perennial that is hardy enough to survive winter in USDA zone 5. It flowers all summer and into the fall, usually from June through October, provided the spent flowers are deadheaded regularly. Flowering peaks in July and August. This fast-growing shrub may reach its mature size within one or two seasons after planting. At maturity, the shrubs may reach 6 to 10 feet tall and 4 to 10 feet wide. The flower clusters, called panicles, are 5 to 12 inches long. Flower colors include blue, yellow, white, pink, purple and lavender, and they are pleasingly fragrant. Butterfly bush is a staple for wildlife habitats and when you want to attract pollinators. The flowers provide a rich source of nectar for bees, butterflies, lady beetles and hummingbirds. Wooly Butterfly Bush Wooly butterfly bush grows in USDA zones 8 and above. Its clusters of marble-sized orange flowers bloom from spring all the way through fall. The leaves are an ashy gray color and often remain on the plant well into winter, although it is considered a deciduous shrub. This smaller species only grows to 5 feet tall and wide. It also attracts pollinators, especially butterflies and hummingbirds. It is the better choice for hot climates. Care and Culture Both types of butterfly bush grow best in full sun. These plants need soil with excellent drainage to prevent them from developing root rot. These fast-growing bushes do best in soil with a neutral pH -- between 6.0 and 7.0. They also benefit from a 2- to 3-inch layer of organic mulch. Once the plants are established, they benefit from fertilizing at moderately frequent intervals. To ensure these shrubs will flower prolifically, deadhead the spent flowers. Another way to promote more flowering is to cut the plants back to about 1 foot from the ground, but only before new growth emerges. Fall or winter pruning may put the plants at risk for serious cold damage.
<urn:uuid:a54ec261-361b-41d0-aa11-7c502d10d4fb>
CC-MAIN-2016-26
http://homeguides.sfgate.com/hardiness-zones-butterfly-bush-68317.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00134-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909824
750
3.453125
3
Headaches From Hell Anyone who has ever had a migraine will say they do not just happen in the head. The headache is usually the worst and most painful part of a migraine, but there’s more. Most migraineurs (people who suffer from migraines) will talk about photosensitivity (sensitivity to light), phonosensitivity (sensitivity to sound), scent sensitivity, gastric pain, cramping, and vomiting. Sometimes the abdominal symptoms show up without the other typical migraine symptoms. When they do, a patient is said to be experiencing an abdominal migraine. An abdominal migraine is pain, usually varying from mild to medium, in the abdomen. The pain is either along the midline or unspecified and is frequently accompanied by abdominal tenderness, cramp-like spasms, bloating, vomiting, and loss of appetite. Since abdominal pain can be caused by a wide variety of conditions, other causes need to be ruled out before a diagnosis can be made. In a classic abdominal migraine, no gastric cause for the pain can be identified. Migraineurs need to let their doctors know about their migraines when they experience unspecified abdominal pain so that the doctor knows abdominal migraine may be a possibility. Abdominal migraines are most common in children. Children who experience abdominal migraines frequently grow up to be migraineurs. While abdominal migraine is not unheard of in adults, it is rare. Like most other types of migraine, it is also more common in females than in males. While the exact cause of abdominal migraines is unknown, it is highly likely to be related to serotonin deficiency. Serotonin deficiency has been linked in several studies to migraines, and 90% of the body’s serotonin is produced in the gastric system. Serotonin deficiency causes cascading waves of nerve reaction in the brain when triggering a migraine and a similar process may be in effect in the abdomen.
<urn:uuid:5945e1ae-014a-4d6a-b364-37d111af5238>
CC-MAIN-2016-26
http://www.free-ebooks.net/ebook/Headaches-From-Hell/html/2
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00083-ip-10-164-35-72.ec2.internal.warc.gz
en
0.905393
432
2.765625
3
How many North American traders know that one of the most common stock-charting techniques did not originate in the West, but was in fact developed and used in Japan over 300 years ago? Before the United States was even a nation, the Japanese were using candlesticks to predict future price movements in rice trading. Candlesticks were not first introduced to North America until 1989, when Steve Nison first wrote an article about them in the magazine Technical Analysis of Stocks & Commodities. Candlesticks allowed the trader to see at a glance where the market opened and closed, along with the high and low of the period. But candlesticks were similar to most other techniques (with the exception of point & figure charting): time and price were plotted on the X- and Y-axes, and whether it was one minute or one year, all price action occurring over that time was squeezed into the one frame. Price could then be plotted arithmetically or logarithmically, but either way, time and price were locked together in a set relationship. But market action was not under the same constraints. In a slow market, there would be very little price movement, while fast moving markets could witness rapid changes in price. Was an arbitrary representation of price per unit of time the best way to forecast future price movement? Markets as Energetic Systems Inventor John Chen didn't think so. After searching for a better way to show price action, Chen decided that markets behaved more like energetic systems than systems confined to the two dimensions of time and price. Current methods were useful for looking at market action in hindsight, but he believed they did little to anticipate future movement. If markets were in effect energetic systems, possessing varying levels of energy, would it not be easier to determine where prices were going with a higher probability of success? Chen sees markets working as thermodynamic systems. Alternating between periods of equilibrium and chaos, prices seek to find a new balance point after each trend. Think of this behavior as the act of going up or down flights of stairs that are separated by landings. When there is an increase in buying, prices move out of equilibrium and trend higher until a new equilibrium point is reached (the next landing). The whole process is not time driven, but rather price dependent. And the 'inner force' is investor behavior, which drives price action in a cause-effect relationship. According to Chen, price is the only event that really matters. Once a trader understands the process of how price interacts and changes, he or she can exploit it more easily. How It Works Chen's program, called J-Chart, plots price as a five-part Chinese 'Jeng' or JE character (). One part of the character plots each time a transaction occurs at a specific price, allowing the user to determine the level of equilibrium at any given time. Depending on user preference, any time period may be set and periods may be combined. Opening prices are plotted in yellow and closing prices for the period are plotted in cyan. As price plots in a given frame, a triangle begins to form. If it is top heavy, as is the case in Figure 1 below, the part of the plot with indentations (or caves) will generally be filled in subsequent sessions, unless the market is trending strongly in the opposite direction. |Figure 1 - Price plots in a J-Chart showing triangular formation from high (point of origin) to low (image point) with open (yellow) and close (cyan) and balance point (solid red line). 
Chart provided by J-Chart.com| The point of origin is either the high or low where price plots occur. The image point in Figure 1 contains no price plots, making this formation top heavy and out of balance. In a situation where there is equilibrium, the high and low would be equidistant from the balance point (center), where the greatest number of price plots occur and JE plots would symmetrically fill the triangle outlined by the gray lines. J-Chart treats markets as energetic systems, thereby giving us a new way of looking at them. It is designed to help the trader decide when markets are in equilibrium and when they are not. The closer the price action comes to filling a perfect isosceles triangle in a given period (turned on its end), the more it is in equilibrium. If markets were efficient, they would also be logical. But as any trader knows, markets are neither totally efficient nor totally logical. The reason is simple: markets are prone to the herd mentality. Herds rarely move efficiently, and they are certainly not driven by rational logic. They are more likely to vacillate between periods of greed (when prices are driven up in the rush as people buy not wanting to miss out) and periods of fear (when people realize they got carried away). Figure 2 - J-Chart model of price movement. In reality the price plot triangles are out of balance and must be balanced in subsequent price action. The green line shows how forecasts work. The lowest horizontal green line is the target plotted from taking the high marginal point (high of the period) and connecting the next period balance point. Figure 3 - Real price action showing double balance points and price vacuum or 'cave' in the middle. The natural tendency is to fill this void in subsequent price action unless there is an overwhelming move either higher or lower. However, sooner or later, this price cave will have to be filled. Chart provided by J-Chart.com. Even when driven by strong investor sentiment, markets must obey certain laws of energy. As Isaac Newton put it, "For every action there is an equal and opposite reaction." Price moving upward too quickly must come back down and fill the areas it missed at some point. These areas show up on the J-Chart display as voids or caves. If price moves too far in either direction, the equilibrium is broken and a new one must be formed. Price - Where to Next? Now the trick is to determine what is more likely to occur when setting price forecasts. Will the target that was set using the forecasting tool with an image or marginal point and subsequent balance point be hit next? Or, will caves left empty from past sessions be filled? By changing settings on the program, the user can look at price action in a number of ways. It is possible to view up to 45 days of price action at once, but the user also has the option of changing the scaling, or combining price action over a number of periods to get a clearer picture of what is going on. Looking at the market one day at a time gives a different picture than combining 30 or 40 days together. To a certain extent, the number of periods combined will depend on the trader's preferred trade duration. Short-term or day traders will look at past action one day at a time, and then look at 15- or 30-minute intervals on the trading day. Swing traders will prefer to set the current day interval to 60 or 120 minutes to look for ideal entries and exits, but they'll also combine two to five days together to get a longer-term view.
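The J-Chart program itself is proprietary, but the core bookkeeping the article describes — counting how many transactions print at each price level and treating the most-traded level as the balance point — can be sketched in a few lines. The snippet below is only an illustration: the function name, the tick size, and the sample prices are assumptions, not part of Chen's software, and a count-per-level profile of this kind is also one way to approximate the "implied volume" idea mentioned later for markets where true volume data is unavailable.

```python
from collections import Counter

def balance_point(trade_prices, tick=0.25):
    """Count how many transactions print at each price level and return the
    level with the most activity (the 'balance point' described above).
    Names, tick size, and data are illustrative assumptions only."""
    levels = Counter(round(price / tick) * tick for price in trade_prices)
    profile = dict(sorted(levels.items()))      # price level -> transaction count
    balance = max(profile, key=profile.get)     # most-traded level
    return profile, balance

# A session whose activity clusters near 101.50:
trades = [100.0, 100.5, 101.0, 101.5, 101.5, 101.5, 102.0, 101.5, 101.0, 102.5]
profile, balance = balance_point(trades)
print(profile)   # counts per level, usable as implied volume where volume is missing
print(balance)   # 101.5
```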
|Figure 4 - In attempting to fill the cave (red ellipse) the previous balance point was broken, necessitating the establishment of a new equilibrium. Chart provided by J-Chart.com.| Ultimately, the trader's experience and skill in reading the program provide him or her with the ability to decide. Stop losses are set using major balance points from prior days, past highs or lows, and at the horizontal blue lines plotted by the program showing significant support/resistance. The trader can also use the previous day together with overnight price activity before markets open on trading day. He or she is looking for more target points and confirmation that the trend is still positive. Once the trading day begins, with the interval set to anywhere from 15 to 60 minutes (depending on his or her trading horizon), the trader watches the action unfold. As the market moves, he or she sets new targets and stop losses. J-Chart versus Market Profile In some ways, J-Chart is similar to Market Profile, developed by J. Peter Steidlmayer in conjunction with the Chicago Board of Trade in the 1980s. Unlike the traditional bar or candlestick charts, both J-Chart and Market Profile provide the trader with a three-dimensional view of market action. But the programs are different in a number of ways as well. |Figure 5 - Market Profile display of soybeans showing the way the 30-minute chart prints. Letters plotted most often in the middle of the price range (point of control) and least often at the highs (upper tail) and low (lower tail). The letter in the alphabet corresponds to the time of day in which the trade occurred. An 'A' plots if the trade occurred between 0800 and 0830, a 'B' from 0830 to 0900 etc. Trading decisions are based on time price opportunities (TPO). At each 30-minute display, the trader must decide the likelihood of price in a particular direction. The point of control corresponds to the balance point in J-Chart. Chart provided by cisco-futures.com.| Market Profile plots a letter where transactions take place during a 30-minute period so that an 'A' plots for transactions between 8:00 and 8:30 a.m., a 'B' for transactions between 8:30 and 9:00 a.m. and so on, plotting a bell-curve-like distribution for daily price action. The display allows the trader to see which prices had the most and the least activity. The value area is where 70% of price activity has occurred (see Figure 5). The premise of Market Profile is that if prices move away from the value area there is a strong likelihood they will move back to this area as volume dries up. In other words, prices have a tendency to revert to the point of control, or the point where most price action takes place. Market Profile plots at 30-minute intervals. J-Chart's ability to plot in a multitude of time frames and its different interpretation of price action make it different from Market Profile. J-Chart allows the user to look at up to 45 days of price action in one chunk by plotting 45 days with 45 combine, or in daily chunks by plotting 45 days and one combine. On the trading day, the trader can set the interval to anywhere from one minute to a full trading day of 405 minutes (for futures). From a theoretical standpoint, the biggest distinction between the two programs is that Market Profile is based on a bell-curve distribution of price and has a tendency to revert to the point of control, while J-Chart is based on an energetic distribution of price and the idea that each action will be met with an equal and opposite reaction.
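For comparison, the value-area idea described above — find the point of control, then grow a band around it until roughly 70% of the session's activity is covered — can be approximated from a table of TPO counts per price level. This is a simplified sketch under assumed data; the actual Market Profile methodology expands the area in pairs of TPO rows, but the logic below captures the same intent.

```python
def value_area(tpo_counts, coverage=0.70):
    """Given {price_level: TPO count}, return (point_of_control, low, high)
    where the band around the point of control covers ~70% of all TPOs.
    Simplified illustration only -- not the exact CBOT procedure."""
    total = sum(tpo_counts.values())
    prices = sorted(tpo_counts)
    poc = max(prices, key=lambda p: tpo_counts[p])   # point of control
    i = j = prices.index(poc)
    covered = tpo_counts[poc]
    while covered / total < coverage:
        below = tpo_counts[prices[i - 1]] if i > 0 else -1
        above = tpo_counts[prices[j + 1]] if j < len(prices) - 1 else -1
        if above >= below:
            j += 1
            covered += tpo_counts[prices[j]]
        else:
            i -= 1
            covered += tpo_counts[prices[i]]
    return poc, prices[i], prices[j]

tpos = {98: 2, 99: 5, 100: 9, 101: 7, 102: 4, 103: 1}
print(value_area(tpos))   # (100, 99, 101) -- 21 of 28 TPOs, about 75%
```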
J-Chart allows the trader to anticipate future price action and make forecasts based on past activity and the distribution of prices over various time frames. Price forecasting is also easier with J-Chart, thanks to its forecasting tool. And the J-Chart user does not necessarily assume prices will revert back to the daily mean either. If there are gaps in the price action or if the longer-term price equilibrium is out of balance, the trader can expect a re-balancing in the near future. Another big advantage that J-Chart has over Market Profile is that traders who have mastered J-Chart have the ability to anticipate price reversals thanks to the principle of resonance. Resonance occurs when forecasts in various time frames give very close or identical values, indicating that a move is getting ready to take a rest or reverse. Resonance points often provide excellent points for the J-Chart trader to take low risk reversal positions and make big profits. Both programs are very useful in trading forex markets, where volume data is either difficult or impossible to get in most cases. Traders can determine implied volume by the number of times transactions have occurred at various price levels - something that would not be possible using standard candlestick or bar charts. The End of the Beginning New indicators and systems are produced on a regular basis by developers and traders looking to capitalize on a better mousetrap. Some will gain popularity and be added to the trader's stable of useful tools, but most will never attain the critical mass necessary to become commercially viable. Unfortunately, the outcome has less to do with the merit of the system and more to do with the marketing acumen of the system's developer or creator. Revolutionary systems often have the most difficult time gaining public acceptance, simply because the theory or application behind them is unfamiliar. If the developer does not have the staying power to promote it until it gains acceptance, the system will either be hoarded by a small group of traders who will take the time to learn how it works and use it profitably, or it will end up collecting dust in the warehouse of trading ideas that never made it. As with any worthwhile trading tool, it is up to each trader to add it to their bag of tricks to augment what they are already doing. If markets do in fact act as thermodynamic systems, traders who use J-Chart will have a distinct advantage over fellow players who use more traditional charting methods such as bars and candlesticks.
<urn:uuid:ef69dc55-26a2-4a67-9efa-8e9d5aaee9a3>
CC-MAIN-2016-26
http://www.investopedia.com/articles/technical/04/060204.asp
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397567.28/warc/CC-MAIN-20160624154957-00188-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946747
2,746
3.140625
3
|Why is Venus hotter than Mercury if Mercury is closer to the sun?|
This is a great question! The answer to it lies in the fact that Venus has a very dense atmosphere made up of carbon dioxide, nitrogen, and sulfuric acid, while Mercury has a very thin atmosphere with various gases, but very little carbon dioxide. So what's so important about carbon dioxide? Well, sunlight will pass through Venus' clouds (which contain mostly carbon dioxide) and warm the surface of the planet. Usually, the surface of a planet is warmed during the day and cools off at night by releasing infrared radiation (heat) back into space. But the carbon dioxide in Venus' clouds absorbs energy from infrared radiation very well and "traps" the heat on the planet, making it very warm. This has sometimes been called a "runaway greenhouse effect." We don't see this happen on Mercury because its atmosphere is not thick and does not have much carbon dioxide in it. I hope this helps!
Venus is hotter than Mercury because it has a much thicker atmosphere. The atmosphere, the gaseous layer surrounding a planet, is like a blanket. Think of two people sitting next to a campfire: one is much closer to the fire, while the other is farther away. The one that is closer doesn't have a blanket (Mercury), while the other, farther away, has a sleeping bag (Venus). Both persons are getting heat from the fire, but the person with the sleeping bag keeps all the heat he or she gets. Mercury is closer, but because it has a very thin atmosphere or no atmosphere at all, the heat goes out into space. Venus, on the other hand, with its much thicker atmosphere, holds all the heat it gets. The heat the atmosphere traps is called the greenhouse effect. If Venus did not have an atmosphere, the surface would be -128 degrees Fahrenheit, much colder than 333 degrees Fahrenheit, the average temperature of Mercury.
Venus is hotter due to the greenhouse effect: Venus has an atmosphere about ninety times thicker than that of Earth, and made almost entirely of carbon dioxide, which is one of the gases that causes the greenhouse effect on Earth. The greenhouse effect on Venus is so great that it raises the surface temperature on Venus to, as you say, hotter than that of Mercury, despite being farther from the sun.
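For readers who want to see the "no blanket" case as numbers, a rough radiative-balance estimate can be run. This is only a back-of-the-envelope sketch: the solar constant and albedo values are assumed round figures, rotation and internal heat are ignored, and the result will not exactly match the -128 degrees Fahrenheit quoted above (that figure depends on the albedo chosen). It does show the key point, though: stripped of its greenhouse atmosphere, Venus would come out far colder than Mercury.

```python
# Rough "no atmosphere, no greenhouse" equilibrium temperatures.
# Assumed values: solar constant 1361 W/m^2 at 1 AU; Mercury at 0.39 AU
# with albedo 0.12; Venus at 0.72 AU with albedo 0.75 (its bright clouds
# reflect most incoming sunlight).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_k(distance_au, albedo):
    flux = 1361.0 / distance_au**2                    # sunlight reaching the planet
    return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

for name, d, a in [("Mercury", 0.39, 0.12), ("Venus", 0.72, 0.75)]:
    t = equilibrium_temp_k(d, a)
    print(name, round(t), "K =", round(t * 9 / 5 - 459.67), "F")
# Airless Venus comes out well below freezing, while the real greenhouse-heated
# surface sits at roughly 860-900 F.
```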
<urn:uuid:86aa856e-6ece-41d2-a344-29526292387e>
CC-MAIN-2016-26
http://scienceline.ucsb.edu/getkey.php?key=3824
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.91622
568
3.890625
4
The study of the Humanities promotes the development of many important skills and capacities which are applicable to any subject matter and which are useful throughout our public and private lives. Although the Humanities are not the only subject areas which promote them, these skills and capacities are a central concern and a primary focus in all Humanities disciplines. They include the following: - Critical thinking (reasoning, organizing ideas, making distinctions, recognizing important similarities, grasping what is essential); - Decision making (maturity and refinement of judgment, ability to give good reasons); - Communication (clear, cogent expression of ideas and beliefs, ability to say and write what one means); - Self-understanding (ability to locate oneself culturally, ethnically, religiously, politically); - Valuation (ability to deal rationally with questions of value, to set priorities and balance competing ideals); - Integrative understanding (ability to synthesize learning, make relevant connections among a diversity of subjects); - Cross-cultural awareness (capacity for mutual understanding, tolerance, and the rational resolution of conflicts); - Aesthetic sensibility (capacity for the appreciation of fine art, music, literature, and the beauty of the natural world); - Civic responsibility (the ideals of truth, justice, and respect for persons are implicit in the study of the humanities).
<urn:uuid:ec970ff2-e604-4c62-a16d-40c5e8ef0129>
CC-MAIN-2016-26
http://www.plymouth.edu/department/humanities/humanist-statement/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937727
272
3.71875
4
Posted by Cat and fair on September 28, 2003 1.Is "a cat with nine lives" a phrase? How to use it? He has spent most of his life in danger as the chief target of Israel. But, just like a cat with nine lives, Arafat escaped every time. 2.Could you explain "fair" in "fair share"? No one goes through life without their fair share of problems.
<urn:uuid:ee438221-10e5-4057-ae3a-fe0f5657cf0e>
CC-MAIN-2016-26
http://www.phrases.org.uk/bulletin_board/24/messages/762.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968526
89
2.734375
3
The 'Vashon Hum' can be explained
The story in The Beachcomber about the “Vashon Hum” is both a first-class example of cutting-edge journalism and a reminder that we exist at the discretion of the natural world. Being a scientist, I gave consideration to this subject for some time, literally minutes, and discovered more theories regarding the source of the hum. It is possible to separate causes into two categories: natural and man-made. I include as natural those that originate with humans but are not intentional. Sources of noise like rap music and Sarah Palin clearly fall into the man-made category (subcategory irritating whines). Natural causes also include those sounds made by extraterrestrials, but I do not list them here because it has been weeks since I’ve seen one. And unlike many on the Island, I do not find the presence of large subterranean worms credible, despite the discovery of huge burrows on the west side. There are plenty of other potential natural causes; some that I wish to address include: • Cosmic radiation: We all know that our Sun emits massive amounts of radiation, which in space would fry us like corn dogs. “The atmosphere is our friend,” but as the atmosphere warms because of carbon dioxide, it also thins, and more radiation reaches the Earth’s surface, causing changes to the soil and rocks. This could include aural effects, particularly just before critical thinning is reached and we cook in a natural microwave oven. Try not to think about this. • Already mentioned were low-frequency waves used in marine communication, but there are also low-frequency sounds that emanate from seismic zones. All those rocks grinding against each other make a low rumble, and here we are sitting on top of a convergent margin. Duh. • I’m quoted as saying it’s not geologic, but the reporter caught me without my coffee. Another geologic source could be because we are pumping ground water at rates never before seen (everywhere on the planet) and the reservoirs are slowly collapsing. The hum could be the rumble of shrinking aquifers. If you worry about this, drink beer instead. I’m having one as I write this just in case. • James Thurber highlighted the danger of electricity leaking from unused outlets in his story, “The Car We Had To Push.” With the advent of compact fluorescents and increased environmental awareness, more and more outlets and light sockets are going wanting, leading to a 60-cycle hum that is common around buildings. My suggestion is that you wear rubber-soled shoes. • A legacy of ASARCO … arsenic can combine in audiotropic reactions with silica and other common rock-forming compounds. What you are hearing is the slow march of arsenic down toward our water supply. • Lastly, consider the aging demographics of our Island. More people are suffering from buildups of ear wax, leading to low-frequency ringing in the ears. It’s a form of tinnitus. Look it up. None of these effects is very dangerous, except maybe the cosmic radiation that could destroy all life in the solar system, and the buildup of ear wax that has already taken the lives of two Islanders (I’ve been told). One guy had his ear blown completely off. But I don’t worry about that; instead, I’m more concerned about that tapping noise coming from the woods near our house, usually around midnight. Has anyone else heard that? — Greg Wessel is a geologist and curator of Two Wall Gallery. 
This piece was written with help from Margaret Wessel.
<urn:uuid:268029b4-092a-48e5-b946-55c2329af625>
CC-MAIN-2016-26
http://www.vashonbeachcomber.com/opinion/91633119.html?period=W&mpStartDate=02-02-2013
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.95778
798
2.53125
3
In this instructable I will be showing you how to make a Hovering hockey puck! Although the process of making this toy requires a lot of testing (finding a light enough casing, finding a strong enough fan, etc.), it is very simple, cheap and easy to make :] This toy is great for kids of all ages. The puck is durable enough to play real indoor hockey with, yet lightweight enough to be used as an air hockey puck.
Step 1: Materials
First, gather all of the given materials:
-9v Duracell battery (this instructable was a little bit rushed, and will be made using rechargeable batteries in the future)
-A propeller and motor from an old RC plane/helicopter
-Smoke detector (can use other circular objects to case the puck, but I'll explain that later)
-Plasti Dip (can be found at most hardware stores)
-Large plastic spacer
First open the 9V battery's metal casing as shown, using pliers. It is easiest to open the battery from the seam on the side. If you want to see a video demonstration, check this out... JUST DO NOT CUT ANY OF THE LEADS ATTACHED TO THE BATTERY
-Cut off the plastic wrapping and put it to the side for later
<urn:uuid:8bed331f-7b4c-485c-8bc1-d9f941da6a4f>
CC-MAIN-2016-26
http://www.instructables.com/id/Life-Size-Air-Hockey-The-Hover-Puck/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.879606
275
2.875
3
The mass percentage of a component in a given solution is the mass of the component per 100 g of the solution. For example, if WA is the mass of the component A and WB is the mass of the component B in a solution, then
Mass percentage of A = [WA / (WA + WB)] x 100
Example: A 10% solution of sodium chloride in water (by mass) means that 10 g of sodium chloride are present in 100 g of the solution.
This unit is used in case of a liquid dissolved in another liquid. The volume percentage is defined as the volume of the solute per 100 parts by volume of solution. For example, if VA is the volume of component A present in Vsol volume of the solution, then
Volume percentage of A = (VA / Vsol) x 100
For example, a 10% solution of ethanol, C2H5OH, in water (by volume) means that 10 cm3 of ethanol is present in 100 cm3 of the solution.
Strength of a solution is defined as the amount of the solute in grams present in one litre of the solution. It is expressed as g L-1.
Molarity of a solution is defined as the number of moles of solute dissolved per litre of solution. Mathematically,
Molarity = number of moles of solute / volume of solution in litres
For example, if 'a' is the weight of the solute (in g) present in V cc of the solution, Molarity is expressed by the symbol M. It can also be expressed as
M = (a / molar mass) x (1000 / V)
Normality of a solution is defined as the number of gram equivalents of a solute dissolved per litre of the given solution. Mathematically it is
Normality = number of gram equivalents of solute / volume of solution in litres
For example, if 'a' is the weight of the solute (in g) present in V cc of the solution, then
N = (a / equivalent mass) x (1000 / V)
Normality is expressed by the symbol N. It can also be expressed as
N = strength of the solution (in g per litre) / equivalent mass
Relationship between molarity and normality: The molarity and normality of a solution are related to each other as follows:
Normality = Molarity x (molar mass / equivalent mass)
Molality of a solution is defined as the number of moles of solute dissolved in 1000 g of a solvent. Mathematically, it is expressed as
Molality = (number of moles of solute / mass of solvent in g) x 1000
Molality is expressed by the symbol m. Molality does not change with temperature.
In case of ionic compounds like KCl, CaCO3 etc., Formality is used in place of molarity. It is the number of gram formula masses of solute dissolved per litre of the solution. It is denoted by the symbol F. Mathematically it is given as
Formality = number of gram formula masses of solute / volume of solution in litres
Mole fraction: It is the ratio of the number of moles of one component (solute or solvent) to the total number of moles of all the components (solute and solvent) present in the solution. It is denoted by the symbol X. Let us suppose that a solution contains two components A and B and that nA moles of A and nB moles of B are present in the solution. Then,
xA = nA / (nA + nB) ... (i)
xB = nB / (nA + nB) ... (ii)
Adding eq (i) and (ii) we get xA + xB = 1
Parts per million (ppm): When a solute is present in very small amounts, its concentration is expressed in parts per million. It is defined as the amount of the solute present in one million parts of the solution.
It may be noted that concentration units like molality and mole fraction are preferred as they involve the weights of the solute and solvent, which are independent of temperature. But units like molarity, normality etc. involve the volume of the solution, and hence change with temperature.
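As a quick check on the definitions above, the short script below works through a single illustrative case: 4.0 g of NaOH (molar mass 40 g/mol, n-factor 1) dissolved in 96.0 g of water, with an assumed solution volume of 98 mL. The numbers and names are only a sketch to show how the formulas relate, not part of the original text.

```python
# Worked example of the concentration units defined above (illustrative values).
mass_solute_g  = 4.0      # NaOH
mass_solvent_g = 96.0     # water
molar_mass     = 40.0     # g/mol for NaOH
solution_vol_l = 0.098    # assumed volume of the final solution, litres
moles = mass_solute_g / molar_mass

mass_percent = 100 * mass_solute_g / (mass_solute_g + mass_solvent_g)   # 4.0 %
strength     = mass_solute_g / solution_vol_l                           # g per litre
molarity     = moles / solution_vol_l                                   # mol per litre
normality    = molarity * 1                  # n-factor of NaOH is 1 (one OH-)
molality     = moles / (mass_solvent_g / 1000)                          # mol per kg solvent
x_solute     = moles / (moles + mass_solvent_g / 18.0)                  # mole fraction (water: 18 g/mol)
ppm          = 1e6 * mass_solute_g / (mass_solute_g + mass_solvent_g)

print(round(mass_percent, 2), round(strength, 1), round(molarity, 2),
      round(molality, 2), round(x_solute, 4), round(ppm))
# -> 4.0 %, ~40.8 g/L, ~1.02 M, ~1.04 m, x ~ 0.0184, 40000 ppm
```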
<urn:uuid:8147c13b-e320-49ef-ad66-fbed4c618bed>
CC-MAIN-2016-26
http://chem-guide.blogspot.com/2010/04/different-ways-of-expressing.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00020-ip-10-164-35-72.ec2.internal.warc.gz
en
0.942106
737
4.0625
4
The Civil War It has been stated that we will never fully understand what the lives of people were like in the past. Recently a journal appeared written by a thoughtful young girl who lived in Crystal Lake, Illinois. It is a treasure of people's emotions and experiences wrapped in the confines of an insightful journal. Her entries give a better understanding of what life was like in the McHenry County area at the time of the Civil War. In Martha Josephine Buck's diary, A Girl of the Civil War, we see how difficult the lives of the ordinary people were, and what position people took towards the war. We are able to perceive indirectly through the eyes of a young woman what the Civil War was actually like and how it affected Crystal Lake. Buck's diary was found several years ago at the Crystal Lake Library in typewritten form. Some unknown person had transcribed it from the handwritten original. The typescript, now on deposit at Crystal Lake South High School, is an unusual document. Buck was born on November 27, 1842. At the age of two she moved to Crystal Lake, Illinois, to live on a farm. At the age of nineteen she started to keep a journal of her life. She wrote in her journal regularly for almost a year until she married "Mr. Harwood," as she addressed him. During the war, people in northern Illinois took a stand against the Confederacy and slavery. The majority of Crystal Lake citizens, according to Buck's journal, opposed the Southern states and joined the Union along with fellow Illinoisian, Abraham Lincoln. Buck wrote of hearing reports about the "debates between Senator Stephen A. Douglas . . . and Abraham Lincoln concerning the extension of slavery. All of Illinois was strongly opposed to that." Many in Crystal Lake, including Buck, wished to have "peace, peace on Earth and good will toward men;" nevertheless, war had been declared when "Fort Sumter had been fired upon." It was a "war between brother and brother," Buck wrote. Because of the war, the Union Army called on people to help the cause and "all the young men . . . were responding." However, the period of the war for the people who had not gone to fight was also very hard. Families had been ripped apart because many enlisted in the army, including two of Buck's own brothers. "Lige, my big brother, was among the first to go." George too, " had long since gone away into the army," she wrote. Furthermore, due to the shortage of trained medical professionals who had also gone to help in the war effort, Buck's father developed an illness and was so weakened by it that he "could do none of the work at all." Because doctors were few and far away, many people in Crystal Lake traveled to the neighboring town of "Woodstock on Horseback for the Doctor." Essentially when people got sick it meant there was a high probability of death. For example, Buck's oldest sister died at "only twenty years of age" because "she had caught a cold." As a result, a number of women and children in Crystal Lake had to take on the tough jobs that the men on the farm had before they went off to battle or got sick. "Someone had to turn to and help—and help hard." Unfortunately, Buck was the only one of five girls in her family, that "had ever done field work on the farm.'' Consequently, she was chosen to be the one to "help hard." It was a joint war effort of men, women, and children in Crystal Lake. Regarding the soldiers, the Crystal Lake residents had made very sure they were doing well. 
They showed their support by organizing the Soldiers Aid Society with the church. Whenever there was a lecture, concert, or performance, "the proceeds were to go to ... the Soldiers Aid Society," which aided the soldiers tremendously and helped bond the community of Crystal Lake. The Christian Church in Crystal Lake was very important and possessed a great deal of power over the people of the time. It organized almost all the social events, including a Bible class for the community. "We are going to have a Bible class every Tuesday ... I went to church yesterday . . . Lawrence, Mary, Mollie were to do the singing," wrote Buck. In addition, the church at times interfered with the people's lives and was such an intrinsic part of Crystal Lake, that they would even deny marriages to people. "It seems . . . the church does not approve us, Mr. Harwood's choice, and we hear that they talk of interfering in the matter." Martha Buck Harwood's revealing journal shows how a small, rural town in northern Illinois was affected by the Civil War. She provides another outlook that is not normally covered in many textbooks on history and the Civil-War period. In her journal we read what life was like from a contemporary's viewpoint. These images are of great importance to historians, researchers, and ordinary people who are intrigued by this period of the nation's history.— [From Martha Josephine Buck Harwood, A Girl of the Civil War, unpublished.]
<urn:uuid:4a004e46-9932-49b9-8dd7-297e1e03e74f>
CC-MAIN-2016-26
http://www.lib.niu.edu/1996/ihy960240.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00104-ip-10-164-35-72.ec2.internal.warc.gz
en
0.987503
1,046
3.609375
4
Japan's 'Crow' Model Airplane, Flying Since 1891 Airplanes have a long history in Japan, even predating the 1903 first manned flight by the Wright brothers. Over a decade earlier, Japanese aeronautical pioneer Chuhachi Ninomiya first launched an unmanned airplane - models of which are still available for sale today! Ninomiya, born in 1866, was known as "Kite-flying Chuhachi" in Yawatahama where he grew up. By the late 1880s he was ready to move beyond kites to actual flying machines. Inspired by watching the flight of a crow, Ninomiya designed a small airplane powered by a rubber band that first flew in 1891. The 23.6 inch wide "Crow" was of a monoplane design with a tricycle landing gear and a 4-bladed pusher propeller, quite surprising for the times! A modern interpretation of Ninomiya's 1891 "Crow" has been faithfully reproduced in balsa wood and styrofoam with a 125mm (5-inch) reverse pitch plastic propeller. The plane's wingspan is 16 inches and it comes packaged in an attractive box with instructions in both English and Japanese. You might wonder why Ninomiya didn't move on to bigger and better things - flying things that is - once the Crow proved to be a success. Well, it wasn't for lack of trying. Having entered the Japanese Army at the outbreak of war with China in 1894, Ninomiya saw the need for airborne reconnaissance and begged his commanding officers to let him build an airplane big enough to carry a man (and presumably, powered by something other than rubber bands). The reply was crushing: "You're crazy. If America and Europe don't have such a machine, how can we Japanese build one?" News of the Wright's success in 1903 caused Ninomiya to give up his experiments and he eventually died in 1936. His place in aviation history is assured, however, and his ideas were proven in 1991, a century after the Crow first flew, when a larger replica of one of his designs successfully took to the air at a Vancouver, Canada air show. Want to fly your own Crow? Kits are available from Brooklyn 5 & 10 and ModernTots, and can be ordered online from each company's website.
<urn:uuid:641b1275-3815-421f-8378-2331231208ac>
CC-MAIN-2016-26
http://inventorspot.com/articles/crow_rubberband_model_plane_flyi_10243
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00102-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972393
484
3.15625
3
E. Cobham Brewer 1810–1897. Dictionary of Phrase and Fable. 1898. Garland (g hard). A chaplet should be composed of four roses and a garland should be formed of laurel or oak leaves, interspersed with acorns. —J. E. Cussans: Handbook of Heraldry, chap. vii. p. 105. Garland. A collection of ballads in True Lovers Garland, etc. Nuptial garlands are as old as the hills. The ancient Jews used them, according to Selden (Uxor Heb., iii. 655); the Greek and Roman brides did the same (Vaughan, Golden Grove); so did the Anglo-Saxons and Gauls. Thre ornamentys pryncipaly to a wyfe: A rynge on hir fynger, a broch on hir brest, and a garlond on hir hede. The rynge betokenethe true love; the broch clennesse in herte and chastitye; the garlond gladness and the dignity of the sacrement of wedlock. —Leland: Dives and Pauper (1493).
<urn:uuid:1dedca7f-3357-43c0-a302-e93e912e9da9>
CC-MAIN-2016-26
http://bartleby.com/81/7016.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00083-ip-10-164-35-72.ec2.internal.warc.gz
en
0.762742
259
3.078125
3
Plastic Bags Kill Whales, Birds, Fish & Turtles We use shopping bags for a few hours, but they can take lifetimes to decompose. Cloth versions make it easy to BYOB wherever you go - whether it's clothes shopping, grocery shopping, or a drugstore impulse buy. - Oil and tree savings. In the United States, 12 million barrels of oil and 14 million trees go to producing plastic and paper bags each year. - Discounts. Stores like Safeway and Whole Foods offer a five-penny discount if you bring your own. - Being a role model. Other shoppers will watch and learn. - Safety for sea creatures. Plastic bags are the fifth most commonly found item in coastal cleanups. - If you must use a plastic bag, reuse it as long as you can, then tie it into knots before you toss it to keep it from ballooning up into the air and ending up as litter. Animals which ingest plastic bags actually die and decompose much quicker than the plastic itself. The plastic is then released back into the environment more or less intact, ready for the next unsuspecting organism to ingest it. There are approximately 46,000 pieces of plastic floating in each square mile of our oceans. Plastic kills up to 1 million sea birds, 100,000 sea mammals and countless fish each year. Turtles, dolphins and killer whales mistake plastic bags for jellyfish and die of intestinal blockage. Pictured above, the tallest man in the world, Bao Xishun of China, reaches into the abdomen of a dolphin that had consumed plastic. Let’s do something positive to reduce the hideous number of plastic bags being used - 1 million are consumed per minute globally - of which hundreds of thousands end up in the oceans. I
<urn:uuid:496d6245-af59-464a-9bc7-93136184cb9c>
CC-MAIN-2016-26
http://wilddolphin.org/totes.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92428
373
2.75
3
The Ses Salines Formentera Natural Park covers a wide area, including the Es Trucadors peninsula and its surrounding sea. It extends to the wetlands that run around Estany Pudent and Estany des Peix, and it is also the term used to describe the old salt pans. The salt works themselves closed down in 1984, but they remain part of the conservation area. Even though the salt pans are no longer used commercially, the process of salt crystallization still continues, and you can see froth washing up against the walls of the salt pans; on windy days it blows across the roads. The salt pans work by letting in water from Estany Pudent, which has a higher concentration of salt than the sea, and from there it crystallizes, with sluice gates opened to help water evaporate. The mill wheel that was used to pump the salt pans was built in the UK town of Accrington. Unfortunately, it was ordered in meters but built in feet and inches, so the wheel house had to be rebuilt once the wheel arrived. The wheel is on the right as you go into Es Pujols from La Savina. The area attracts wading birds that like living in mud, so-called limicolous birds. Although the wetlands are teeming with wildlife, the adventurous ornithologist can move off land into Es Freus to look for storm petrel, the Balearic shearwater, fisher eagles, cormorants, yellow-legged gull, and Audouin's gull. Below the surface are the sea grass plants known as posidonia. The channel between Formentera and Ibiza is home to the longest living organism ever discovered: a posidonia plant eight miles long. 178 plant species have been identified in Ses Salines, including salicornia and halophilic vegetation.
<urn:uuid:59b11308-0208-4877-9fa7-90f99a20ab52>
CC-MAIN-2016-26
http://www.formenteraguide.com/ses-salines-formentera/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953968
384
2.90625
3
Members of Opus Dei date the group’s foundation to October 2, 1928, when Josemaría Escrivá, then a young Spanish priest making a retreat at a Vincentian monastery in Madrid, experienced a vision, revealing to him “whole and entire” God’s wish for what would later become Opus Dei. Obviously the vision was not “entire” in the sense that it answered every question, since it required subsequent inspirations to demonstrate to Escriva that there should be a women’s branch to Opus Dei (that came in 1930) and that Opus Dei should also include a body of priests, the Priestly Society of the Holy Cross (1943). Yet in some sense, Escriva insisted, the blueprint for Opus Dei was contained in that original experience on the Feast of the Guardian Angels in 1928. Here’s how he once described it: “On October 2, 1928, the feast of the Holy Guardian Angels — by now nearly forty years have gone by — the Lord willed that Opus Dei might come to be, a mobilization of Christians disposed to sacrifice themselves with joy for others, to render divine all the ways of man on earth, sanctifying every upright work, every honest labor, every earthly occupation.” Escriva and the members of Opus Dei are thus convinced that their organization is rooted in God’s will. As Escriva himself once put it, “I was not the founder of Opus Dei. Opus Dei was founded in spite of me.” Originally Escriva did not even give this new reality a name; “Opus Dei,” which is Latin for “work of God,” came from an offhand comment from Escriva’s confessor, who once asked him, “How’s that Work of God going?” This is why members usually refer to Opus Dei as “the Work.” The core idea revealed to Escriva in that 1928 vision, and unfolded in subsequent stages of Opus Dei’s development, was the sanctification of ordinary life by laypeople living the gospel and Church teaching in their fullness. This is why one of the leading symbols for Opus Dei is a simple cross within a circle — the symbolism betokens the sanctification of the world from within. The idea is that holiness, “being a saint,” is not just the province of a few spiritual athletes, but is the universal destiny of every Christian. Holiness is not exclusively, or even principally, for priests and nuns. Further, holiness is not something to be achieved in the first place through prayer and spiritual discipline, but rather through the mundane details of everyday work. Holiness thus doesn’t require a change in external circumstances, but a change in attitude, seeing everything anew in the light of one’s supernatural destiny. In that sense, admirers of Escriva, who included Pope John Paul II, believe the Spanish saint anticipated the “universal call to holiness” that would be announced by the Second Vatican Council. The late cardinal of Florence and right-hand man of Pope Paul VI, Giovanni Benelli — who crossed swords with Escriva over the years — nevertheless once said that what Saint Ignatius of Loyola, the founder of the Jesuits, was to the sixteenth-century Council of Trent, Escriva was to the Second Vatican Council. That is, he was the saint who translated the council into the life of the Church. In a December 2004 interview, the number-two official of Opus Dei, Monsignor Fernando Ocariz, a Spanish theologian who has served since 1986 as a consultor to the Congregation for the Doctrine of the Faith, the Vatican’s doctrinal agency, explained that Escriva’s understanding of the “universal call to holiness” had two dimensions, subjective and objective.
The subjective is the invitation to individual persons to sanctification, meaning that all people, regardless of their station in life, are called to become saints. The objective is the realization that all of creation, and every situation in human experience, is a means to this end. “All human realities, all the circumstances of human life, all the professions, every family and social situation, are means of sanctification,” Ocariz said. “It’s not just that everyone is supposed to be a saint despite the fact of not being priests or monks, but precisely that all the realities of life are places that can lead one to the Lord.” – Excerpt from Opus Dei : An Objective Look Behind the Myths and Reality of the Most Controversial Force in the Catholic Church by John L. Allen, Jr. Published by Doubleday Religion, a division of Random House, Inc. This post was last updated: Jan. 4, 2006
<urn:uuid:0dc3eeb6-bda1-4cb4-b0dc-499de07b0b06>
CC-MAIN-2016-26
http://www.apologeticsindex.org/30-opus-dei-foundation-and-purpose
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00072-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959437
1,049
2.921875
3
How do machines understand what place you’re talking about when you say the name of a city, a street or a neighborhood? With geocoding technology, that’s how. Every location-based service available uses a geocoder to translate the name of a place into a location on a map. But there isn’t a really good, big, stable, public domain geocoder available on the market. Steve Coast, the man who led the creation of Open Street Map, has launched a new project to create what he believes is just what the world of location-based services needs in order to grow to meet its potential. It’s called OpenGeocoder and it’s not like other systems that translate and normalize data. Google Maps says you can only use its geocoder to display data on maps but sometimes developers want to use geo data for other purposes, like content filtering. Yahoo has great geocoding technology but no one trusts it will be around for long. Open Street Map (OSM) is under a particular Creative Commons license and “exists for the ideological minority,” says Coast himself in a Tweet this week. And so Coast, who now works at Microsoft, has decided to solve the problem himself. This has been tried before (see, for example, GeoCommons), but the OpenGeocoder approach is different. It is, as one geo hacker put it, “either madness or genius.” The way OpenGeocoder works is that users can search for any place they like, by any name they like. If the site knows where that place is, it will be shown on a big Bing map. If it doesn’t, then the user is encouraged to draw that place on the map themselves and save it to the global database being built by OpenGeocoder. Above: The river of my childhood, which I just added to the map. Every single different way a place can be described must be drawn on the map or added as a synonym before OpenGeocoder will understand what that string of letters and numbers means with reference to place. Anyone can redraw a place on the map, too. Then developers of location-based services can hit a JSON API or download a dump of all the place names and locations for use in understanding place searches in their own apps. It appears that just under 1,000 places have been added so far. It will take a serious barn-raising to build out a map of the world this way. It wouldn’t be the first time something a little like this has been tried, though. “If only it was that simple :(” said map-loving investor Steven Feldman on Twitter. “Maybe it is?” The approach is focused largely on simplicity. Coast said in his blog post announcing the project: “OpenGeocoder starts with a blank database. Any geocodes that fail are saved so that anybody can fix them. Dumps of the data are available. “There is much to add. Behind the scenes any data changes are wikified but not all of that functionality is exposed. It lacks the ability to point out which strings are not geocodable (things like “a”) and much more. But it’s a decent start at what a modern, crowd-sourced, geocoder might look like.” Testing the site, I grew frustrated quickly. I searched for the neighborhood I live in: Cully in Portland, Oregon. There was no entry for it, so I added one. But there are no street names on the map so I got lost. I had to open a Google Map in the next tab and switch back and forth between them in order to find my neighborhood on the OpenGeocoder map. Then, the neighborhood isn’t a perfect rectangle, so drawing the bounding box felt frustratingly inexact. I did it anyway, saved, then tried recalling my search.
I found that Cully,Portland,Oregon (without spaces) was undefined, even though I’d just defined Cully, Portland, Oregon with spaces. I pulled up the defined area, then searched for the undefined string, then hit the save button, and the bounding box snapped back to the default size, requiring me to redraw it again, on a map with no street names. Later, I learned how to find the synonym adding tool to solve that problem. In other words, the user experience is a challenge. That’s the case with Wikipedia too, and OpenGeocoder just launched, but I expect it will need some meaningful UX tweaks before it can get a lot of traction. I hope it does. That’s just my experience so far, though. Not everyone feels that way. GIS geek Paul Wither calls it “addictive.” There are certainly high hopes for the project, too. “I’m obsessed with the need for an open-source geocoder, and this is a fascinating take on the problem,” says data hacker Pete Warden about OpenGeocoder. “By doing a simple string match, rather than trying to decompose and normalize the words, a lot of the complexity is removed. This is either madness or genius, but I’m hoping the latter. The tradeoff will be completely worthwhile if it makes it more likely that people will contribute.” Coast will certainly be able to gather the attention of the geo community for the project. As we wrote when he joined the Bing team 18 months ago: Coast is a giant figure in the mapping world. In 2009, readers of leading geo publication Directions Magazine voted him the 2nd most influential person in the geospatial world, ahead of the Google Maps leadership and behind only Jack Dangermond, the dynamic founder of 41-year old $2 billion GIS company ESRI. Coast will turn 30 years old next month. The more I play with OpenGeocoder, the more it grows on me. I hope Coast and others are able to put in the time it will take to make it as great as it could be.
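For developers curious what "hitting the JSON API" might look like in practice, here is a minimal sketch in Python. It is purely illustrative: the article only says OpenGeocoder exposes a JSON API and downloadable dumps, so the endpoint path, query parameter, and response fields used below are assumptions, not the project's documented interface.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint and response shape; the real API may differ.
BASE_URL = "https://opengeocoder.example/api/geocode"

def lookup(place_name):
    """Return an assumed (west, south, east, north) bounding box for a place name, or None."""
    url = BASE_URL + "?" + urllib.parse.urlencode({"q": place_name})
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)  # assumed shape: {"found": bool, "bbox": [w, s, e, n]}
    if not data.get("found"):
        # Failed lookups are stored server-side so someone can draw the place later.
        return None
    return tuple(data["bbox"])

bbox = lookup("Cully, Portland, Oregon")
print("undefined - go draw it on the map" if bbox is None else f"bounding box: {bbox}")
```

Because lookups are simple string matches against the crowd-built database, "Cully,Portland,Oregon" and "Cully, Portland, Oregon" remain different keys until someone registers one as a synonym of the other, which is exactly the behavior described below.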
<urn:uuid:cd79e943-42cd-4ff8-aa7b-78b753b62cfd>
CC-MAIN-2016-26
http://jetlib.com/news/tag/pete-warden/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94545
1,291
2.578125
3
1-3 days old: Kittens found at this age will likely have an umbilical cord and sometimes a placenta.
3-7 days old: Kittens at this age will probably still have their umbilical cord, and their eyes will not be open. Kittens separated from their mother at this age have a very low chance of survival - even with the best human intervention. Finding mom gives them their best chance!
7-10 days old: By now most kittens will have lost their umbilical cords. They generally will not have their eyes open at this age.
10-14 days old: As a rule of thumb, eyes open between 10 and 14 days; however, there are exceptions where kittens open their eyes earlier. They do not really get around on their own but may start to move by wiggling. They may start to put weight on their legs.
2-3 weeks old: Their wiggling will start to look more like walking, and they will slowly be able to get around. They are not yet running or playing. They do not have teeth until the 3-week mark, when their front teeth start to come in. By 3-4 weeks they are usually getting around quite well but are still quite clumsy. Even at this age, they will be very easy to catch unless they dive into a small space. They can be introduced to wet food, but many won't be interested. At 3 weeks they get their front teeth, and by 4 weeks they will have their back molars.
4-6 weeks old: By this age kittens are playing and washing themselves. They bounce around and attack imaginary prey. By now they should be eating wet food. This is when they usually reach 1 lb.
<urn:uuid:dbc5b51b-d1ab-42a3-bb03-a7f50935c070>
CC-MAIN-2016-26
http://www.eastcountyanimalrescue.org/2011/09/kittens-you-found.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402479.21/warc/CC-MAIN-20160624155002-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.983785
351
2.796875
3
From Math Images: Stereographic Projection of a Sphere
Stereographic projection is a method of mapping an object into a lower-dimensional space. This page's main image shows a sphere being mapped into a plane. In this context, mapping means matching points on the sphere with points on the plane using a specific rule. The rule used in the diagram to the left is as follows: draw a line from the 'north pole' of the sphere and let it pass through a point on the sphere, point A. The point that the line hits on the plane, point B, is the point that A is mapped to. The main image uses a similar procedure, except the plane is drawn under the sphere instead of cutting through it. The coloring helps give an idea of where regions of the sphere end up on the plane. Note that the projection is still from the top of the sphere, with the coloring not centered around the top creating an interesting formation of ellipses on the plane.
The following applet demonstrates how a sphere is projected onto a plane. A sphere with coaxial bands of color is stereographically projected onto a plane in the background. You can rotate the sphere with the mouse, changing the orientation of the colors on the sphere, which changes the projection on the plane. The sphere and projection point remain fixed; only the colors are shifted.
A More Mathematical Explanation
[Figure: cutaway view of some points on the sphere, tracked by the projection onto the plane.]
An example of a mapping from a sphere onto a plane, shown graphically to the left, is X = x/(1 − z), Y = y/(1 − z), where X, Y are coordinates on the plane and x, y, z are coordinates on the unit sphere, with the projection taken from the north pole (0, 0, 1) onto the plane z = 0. Since coordinates on the sphere are mapped uniquely to coordinates on the plane, this function is invertible. The explicit inverse is x = 2X/(X² + Y² + 1), y = 2Y/(X² + Y² + 1), z = (X² + Y² − 1)/(X² + Y² + 1).
An important mathematical application of stereographic projections is the Riemann Sphere.
About the Creator of this Image
Thomas F. Banchoff is a geometer and a professor at Brown University since 1967.
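As a quick check of the projection formulas above, here is a minimal sketch in Python assuming the same convention (unit sphere, projection from the north pole onto the equatorial plane). It is illustrative only and not part of the original Math Images page.

```python
import math

def project(x, y, z):
    """Map a point of the unit sphere (other than the north pole) to the plane z = 0."""
    return x / (1.0 - z), y / (1.0 - z)

def unproject(X, Y):
    """Inverse map: send a point of the plane back to the unit sphere."""
    s = X * X + Y * Y
    return 2.0 * X / (s + 1.0), 2.0 * Y / (s + 1.0), (s - 1.0) / (s + 1.0)

# Round-trip a sample point on the sphere and confirm the two maps are inverses.
p = (0.0, math.sin(1.0), math.cos(1.0))          # lies on the unit sphere
q = unproject(*project(*p))
assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(p, q))
print(project(*p))
```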
<urn:uuid:df448fbb-8033-4a11-9875-c9bebf1d0c15>
CC-MAIN-2016-26
http://mathforum.org/mathimages/index.php?title=Stereographic_Projection&oldid=19449
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399117.38/warc/CC-MAIN-20160624154959-00178-ip-10-164-35-72.ec2.internal.warc.gz
en
0.88203
467
4.5
4
Wednesday, September 18, 2013/lk In our wealthy nation, it’s astounding that nearly 22 percent of children under 18 live in poverty and more than 16 million children live in households that struggle to put food on the table. Share Our Strength, www.nokidhungry.org, gives those and other sobering statistics: • In our country, 9.8 million children get free and reduced-price breakfast, but another 10.6 million eligible kids go without. • Nearly half of all people who use food stamps are children. The numbers in Okanogan and Ferry counties are similarly abysmal. Many school districts have free and reduced-price meal populations of 50 percent or more. In some, every child qualifies. Even the Methow Valley district, which some consider affluent in comparison to others, has a 49 percent rate. September is “No Kid Hungry” month, and many restaurants — including some local ones — are seeking monetary donations to combat hunger. That’s one way to help. Another is to donate generously and often to local food banks and food drives.
<urn:uuid:a5e06ca5-fd2c-4b40-b1a8-d069ef7c160c>
CC-MAIN-2016-26
http://t.omakchronicle.com/news/2013/sep/18/let-none-go-hungry/?templates=tablet
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396029.85/warc/CC-MAIN-20160624154956-00061-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938358
234
2.65625
3
Analysis of Relationship between Resistances in Delta Circuit to Determine Total Resistance
Electric circuits play a very important role in the progress of the electronics field. From simple to complex forms, circuits actively support the development of new innovations in engineering technology. To build a good understanding of a circuit, it is necessary to have a proper grasp of basic concepts and circuit analysis. Starting from Ohm's and Kirchhoff's laws, other circuit laws and formulas are derived through analysis and theorems to give better calculations and solutions to problems. This paper analyzes the relationship between resistances in a delta circuit and applies basic electric circuit theory to calculate the total resistance. The analysis is conducted by comparison and calculation to derive the expected formula.
Keywords: Δ – Υ circuit, resistances, total resistance
International Transaction of Electrical and Computer Engineers System, 2014, 2 (4). Received July 07, 2014; Revised July 17, 2014; Accepted July 29, 2014. Copyright © 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
- Rompis, Lianly. "Analysis of Relationship between Resistances in Delta Circuit to Determine Total Resistance." International Transaction of Electrical and Computer Engineers System 2.4 (2014): 120-123.
- Rompis, L. (2014). Analysis of Relationship between Resistances in Delta Circuit to Determine Total Resistance. International Transaction of Electrical and Computer Engineers System, 2(4), 120-123.
- Rompis, Lianly. "Analysis of Relationship between Resistances in Delta Circuit to Determine Total Resistance." International Transaction of Electrical and Computer Engineers System 2, no. 4 (2014): 120-123.
1. Introduction
To analyze an electric circuit problem, especially a complex circuit, the first thing we have to learn is the basic formula called Ohm's law. This is the most important thing to know, because it helps us solve electric circuit problems. From this law we derive other important formulas to support circuit analysis: Kirchhoff's Voltage Law (KVL), Kirchhoff's Current Law (KCL), the voltage divider, and the current divider. With these various derived laws, circuit solutions become easier through developing circuit methods and theorems.
2. Discussion
2.1. Basic Theory
There are two common resistance circuits: series and parallel, as shown in figure 1. Using the right formulas for series and parallel connections, we can calculate the total resistance of either a single or a combined form of electric circuit [1, 4, 5, 6, 7]. It often happens that the resistances are connected neither in series nor in parallel, but in the form of a delta (Δ) or wye (Υ). In this case, we should use a specific formula to get the right result [1, 2, 3].
3. Aims of Study
This paper conducts a simple analysis that applies basic electric circuit theory to figure out the relationship between resistances in a delta circuit and to solve problems related to total resistance. The derived formula gives another quick and effective solution for circuit analysis involving resistance and the Δ – Υ concept. Hopefully this description of the theory will build critical thinking and encourage the search for good solutions in the electronics world.
4. Research Method
For this kind of analysis, I compare several forms of resistance circuits that have a delta model, examine the relationships, and apply the exact method and formula to derive further formula equations for calculating total resistance.
5. Analysis and Results
Consider the electric circuit problem in figure 3. It is a specific form of circuit that we usually meet when dealing with resistance circuit problems. It has a Δ model on both the upper side and the bottom side of the circuit. To solve the circuit, we have to change the Δ form into the Υ form as shown in figure 4, and then do the parallel resistance calculation. Because all the resistance values are the same, we can state that every resistance is equal to R (equation 4). Using the Δ – Υ formula, we get the result for each resistance in the Y model (each leg equal to R/3) as follows: The total resistance can now be calculated using the combination of the series and parallel formulas, and we get the result in equation 5 as follows: Suppose we change the value of two resistances to (1+R); again we get the result for each resistance in the Y model and calculate the total resistance to derive the result in equation 6 as follows: Several simulations have been conducted to verify the previous calculation results. They show the expected values from the current analysis (equation 6). Continuing to change the two resistances to (2+R) and (3+R), we get a similar pattern of results, as described in the following equations: The result for the total resistance follows the same rules and pattern. These are unique relationships and can be applied to this special form of circuit problem.
From the specific relationship of resistances modeled in two delta connections, a standard and simple formula could be derived to solve the related problems for easier analysis and calculation. The values of the resistances in this specific electric circuit form special patterns and are determined by the ratio between the two and three resistance values in the circuit. In the future, a more advanced analysis can be done to figure out more detail about this special form of circuit and to derive other formulas with different ratios of resistances.
References
[1] Malvino, A.P., translated by: Alb. Joko Santoso. (2003) Prinsip-prinsip Elektronika Jilid 1, Jakarta: Penerbit Salemba Teknika.
[2] Nahvi, M., dan Joseph A. Edminister. (2004) Schaum's Easy Outlines: Rangkaian Listrik, Jakarta: Penerbit Erlangga.
[3] Soegito, Ken Endar Supardjo, dan Sutriyono. (1991) Prima EBTA Fisika SMA, Edisi ke-1, Semarang: PT Intan Pariwara.
[4] William H. Hayt, Jr. and Jack E. Kemmerly (Pantur Silaban). (1999) Rangkaian Listrik Jilid 1. Jakarta: Penerbit Erlangga.
[5] William H. Hayt, Jr. and Jack E. Kemmerly (Pantur Silaban). (1999) Rangkaian Listrik Jilid 2. Jakarta: Penerbit Erlangga.
[6] Edminister, Joseph A. (Sahat Pakpahan). (1988) Teori dan Soal-Soal Rangkaian Listrik. Jakarta: Penerbit Erlangga.
Online Articles
[7] Tony R. Kuphaldt. (2004) Lessons in Electric Circuits: Volume II-AC, fifth edition. [Online] Available: http://www.scribd.com/doc/62569767/Lessons-in-Electric-Circuits-2-AC-Tony-R-Kuphaldt. [Accessed: November 2013].
[8] Tony R. Kuphaldt. (2004) Lessons in Electric Circuits: Volume III-Semiconductors, fifth edition. [Online] Available: http://www3.eng.cam.ac.uk/DesignOffice/mdp/electric_web/Semi/. [Accessed: November 2013].
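Since the paper's figures and numbered equations are not reproduced in this extract, the following minimal sketch only illustrates the two standard building blocks the analysis relies on: the Δ – Υ (delta-to-wye) conversion and series/parallel combination. It does not reproduce the exact figure 3 topology, and the variable names are mine rather than the paper's.

```python
def delta_to_wye(r_ab, r_bc, r_ca):
    """Convert three delta-connected resistances into the equivalent wye (star) legs."""
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total   # leg attached to node A
    r_b = r_ab * r_bc / total   # leg attached to node B
    r_c = r_bc * r_ca / total   # leg attached to node C
    return r_a, r_b, r_c

def series(*resistances):
    """Total resistance of resistors connected in series."""
    return sum(resistances)

def parallel(*resistances):
    """Total resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Symmetric case from the paper: when every delta resistance equals R,
# each wye leg comes out to R/3.
R = 30.0
print(delta_to_wye(R, R, R))                 # -> (10.0, 10.0, 10.0)

# Combining converted legs with the series/parallel rules.
print(series(10.0, parallel(20.0, 20.0)))    # -> 20.0
```

Once a delta is replaced by its wye equivalent, the remaining network usually reduces by repeated series/parallel steps, which is the procedure the paper applies to the figure 3 circuit.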
<urn:uuid:fd26de5d-f97a-4742-8183-8d7cfc0ac60b>
CC-MAIN-2016-26
http://pubs.sciepub.com/iteces/2/4/2/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.845929
1,588
3.078125
3
Jan. 25, 2013 — Simon Fraser University earth scientist Diana Allen, a co-author on a new paper about climate change’s impacts on the world’s ground water, says climate change may be exacerbating many countries’ experience of water stress. “Increasing food requirements to feed our current world’s growing population and prolonged droughts in many regions of the world are already increasing dependence on groundwater for agriculture,” says Allen. “Climate-change-related stresses on fresh surface water, such as glacier-fed rivers, will likely exacerbate that situation. “Add to that our mismanagement and inadequate monitoring of groundwater usage and we may see significant groundwater depletion and contamination that will seriously compromise much of the world’s agriculturally-grown food supply.” In Ground Water and Climate Change, Allen and several other international scientists explain how several human-driven factors, if not rectified, will combine with climate change to significantly reduce usable groundwater availability for agriculture globally. The paper was published in late 2012 in the journal Nature Climate Change. The authors note that inadequate groundwater supply records and mathematical models for predicting climate change and associated sea-level rise make it impossible to forecast groundwater’s long-range fate globally. “Over-pumping of groundwater for irrigation is mining dry the world’s ancient Pleistocene-age, ice-sheet-fed aquifers and, ironically, at the same time increasing sea-level rise, which we haven’t factored into current estimations of the rise,” says Allen. “Groundwater pumping reduces the amount of stored water deep underground and redirects it to the more active hydrologic system at the land surface. There, it evaporates into the atmosphere, and ultimately falls as precipitation into the ocean.” Current research estimates oceans will rise by about a metre globally by the end of the century due to climate change. But that estimation doesn’t factor in another half-a-centimetre-a-year rise, says this study, expected due to groundwater recycling back into the ocean globally. Increasing climate-change-induced storm surges will also flood coastal areas, threatening the quality of groundwater supplies and compromising their usability. This is the second study that Allen and her colleagues have produced to assist the Intergovernmental Panel on Climate Change (IPCC) in assessing the impact of climate change on the world’s groundwater supply. The IPCC, established by the United Nations Environmental Programme and the World Meteorological Organization in 1988, periodically reviews the latest research on climate change and assesses its potential environmental and socio-economic impacts. This study is one of several guiding the IPCC’s formulation of upcoming reports, the first being about the physical science behind climate change, due Sept. 2013. - Richard G. Taylor, Bridget Scanlon, Petra Döll, Matt Rodell, Rens van Beek, Yoshihide Wada, Laurent Longuevergne, Marc Leblanc, James S. Famiglietti, Mike Edmunds, Leonard Konikow, Timothy R. Green, Jianyao Chen, Makoto Taniguchi, Marc F. P. Bierkens, Alan MacDonald, Ying Fan, Reed M. Maxwell, Yossi Yechieli, Jason J. Gurdak, Diana M. Allen, Mohammad Shamsudduha, Kevin Hiscock, Pat J.-F. Yeh, Ian Holman, Holger Treidel. Ground water and climate change. 
Nature Climate Change, 2012; DOI: 10.1038/nclimate1744
<urn:uuid:a33fad03-f4ea-436c-8927-2ae94dc6d359>
CC-MAIN-2016-26
https://limitlesslife.wordpress.com/2013/01/29/groundwater-depletion-linked-to-climate-change/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00163-ip-10-164-35-72.ec2.internal.warc.gz
en
0.875865
813
3.359375
3
Browse Results For: A History of Anti-Black Violence A symbolic embodiment of racial violence and hatred, "The Beast" openly prowled the nation between the Civil War and the civil rights movement. The reasons it appeared varied, with psychological, political, and economic dynamics all playing a part, but the outcome was always brutal--if not deadly. From the bombing of Harriette and Harry T. Moore's home on Christmas Day to Willie James Howard's murder, from the Rosewood massacre to the Newberry Six lynchings, Marvin Dunn offers an encyclopedic catalogue of The Beast's rampages in Florida. Instead of simply taking snapshots of incidents, Dunn provides context for a century's worth of racial violence by examining communities over time. Crucial insights from interviews with descendants of both perpetrators and victims shape this study of Florida’s grim racial history. Rather than pointing fingers and placing blame, The Beast in Florida allows voices and facts to speak for themselves, facilitating a conversation on the ways in which racial violence changed both black and white lives forever. With this comprehensive and balanced look at racially motivated events, Dunn reveals the Sunshine State's too-often forgotten--or intentionally hidden--past. The result is a panorama of compelling human stories: its emergent dialogue challenges conceptions of what created and maintained The Beast. Merchant Culture in the South, 1820-1865 Becoming Bourgeois is the first study to focus on what historians have come to call the “middling sort,” the group falling between the mass of yeoman farmers and the planter class that dominated the political economy of the antebellum South. Historian Frank J. Byrne investigates the experiences of urban merchants, village storekeepers, small-scale manufacturers, and their families, as well as the contributions made by this merchant class to the South’s economy, culture, and politics in the decades before, and the years of, the Civil War. These merchant families embraced the South but were not of the South. At a time when Southerners rarely traveled far from their homes, merchants annually ventured forth on buying junkets to northern cities. Whereas the majority of Southerners enjoyed only limited formal instruction, merchant families often achieved a level of education rivaled only by the upper class—planters. The southern merchant community also promoted the kind of aggressive business practices that New South proponents would claim as their own in the Reconstruction era and beyond. Along with discussion of these modern approaches to liberal capitalism, Byrne also reveals the peculiar strains of conservative thought that permeated the culture of southern merchants. While maintaining close commercial ties to the North, southern merchants embraced the religious and racial mores of the South. Though they did not rely directly upon slavery for their success, antebellum merchants functioned well within the slave-labor system. When the Civil War erupted, southern merchants simultaneously joined Confederate ranks and prepared to capitalize on the war’s business opportunities, regardless of the outcome of the conflict. Throughout Becoming Bourgeois, Byrne highlights the tension between these competing elements of southern merchant culture. 
By exploring the values and pursuits of this emerging class, Byrne not only offers new insight into southern history but also deepens our understanding of the mutable ties between regional identity and the marketplace in nineteenth-century America. Making an Ethnic Identity in the Appalachian South Appalachian legend describes a mysterious, multiethnic population of exotic, dark-skinned rogues called Melungeons who rejected the outside world and lived in the remote, rugged mountains in the farthest corner of northeast Tennessee. The allegedly unknown origins of these Melungeons are part of what drove this legend and generated myriad exotic origin theories. Though nobody self-identified as Melungeon before the 1960s, by the 1990s “Melungeonness” had become a full-fledged cultural phenomenon, resulting in a zealous online community and annual meetings where self-identified Melungeons gathered to discuss shared genealogy and history. Although today Melungeons are commonly identified as the descendants of underclass whites, freed African Americans, and Native Americans, this ethnic identity is still largely a social construction based on local tradition, myth, and media. In Becoming Melungeon, Melissa Schrift examines the ways in which the Melungeon ethnic identity has been socially constructed over time by various regional and national media, plays, and other forms of popular culture. Schrift explores how the social construction of this legend evolved into a fervent movement of a self-identified ethnicity in the 1990s. This illuminating and insightful work examines these shifting social constructions of race, ethnicity, and identity both in the local context of the Melungeons and more broadly in an attempt to understand the formation of ethnic groups and identity in the modern world. Essays in Honor of Charles Joyner Edited by southern historians Orville Vernon Burton and Eldred E. Prince, Jr., Becoming Southern Writers pays tribute to South Carolinian Charles Joyner’s fifty year career as a southern historian, folklorist, and social activist. Exceptional writers of fact, fiction, and poetry, the contributors to the volume are among Joyner’s many friends, admirers, and colleagues as well as those to whom Joyner has served as a mentor. The contributors describe how they came to write about the South and how they came to write about it in the way they do while reflecting on the humanistic tradition of scholarship as lived experience. The contributors constitute a Who’s Who of southern writers—from award-winning literary artists to historians. Freed from constraints of their disciplines by Joyner’s example, they enthusiastically describe family reunions, involvement in the civil rights movement, research projects, and mentors. While not all contributors are native to the South or the United States and a few write about the South only occasionally, all the essayists root their work in southern history, and all have made distinguished contributions to southern writing. Diverse in theme and style, these writings represent each author’s personal reflections on experiences living in and writing about the South while touching on topics that surfaced in Joyner’s own works, such as race, family, culture, and place. 
Whether based on personal or historical events, each one speaks to Joyner’s theme that “all history is local history, somewhere.” An Illustrated History The motto of Berea College is “God has made of one blood all peoples of the earth,” a phrase underlying Berea’s 150-year commitment to egalitarian education. The first interracial and coeducational undergraduate institution in the South, Berea College is well known for its mission to provide students the opportunity to work in exchange for a tuition-free quality education. The founders believed that participation in manual labor blurred distinctions of class; combined with study and leisure, it helped develop independent, industrious, and innovative graduates committed to serving their communities. These values still hold today as Berea continues its legendary commitment to equality, diversity, and cultural preservation and, at the same time, expands its mission to include twenty-first-century concerns, such as ecological sustainability. In Berea College: An Illustrated History, Shannon H. Wilson unfolds the saga of one of Kentucky’s most distinguished institutions of higher education, centering his narrative on the eight presidents who have served Berea. The college’s founder, John G. Fee, was a staunch abolitionist and believer in Christian egalitarianism who sought to build a college that “would be to Kentucky what Oberlin was to Ohio, antislavery, anti-caste, anti-rum, anti-sin.” Indeed, the connection to Oberlin is evident in the college’s abolitionist roots and commitment to training African American teachers, preachers, and industrial leaders. Black and white students lived, worked, and studied together in interracial dorms and classrooms; the extent of Berea’s reformist commitment is most evident in an 1872 policy allowing interracial dating and intermarriage among its student body. Although the ratio of black to white students was nearly equal in the college’s first twenty years, this early commitment to the education of African Americans was shattered in 1904, when the Day Law prohibited the races from attending school together. Berea fought the law until it lost in the U.S. Supreme Court in 1908 but later returned to its commitment to interracial education in 1950, when it became the first undergraduate college in Kentucky to admit African Americans. Berea’s third president, William Goodell Frost, shifted attention toward “Appalachian America” during the interim, and this mission to reach out to Appalachians continues today. Wilson also chronicles the creation of Berea’s many unique programs designed to serve men and women in Kentucky and beyond. A university extension program carried Berea’s educational opportunities into mountain communities. Later, the New Opportunity School for Women was set up to help adult women return to the job market by offering them career workshops, job experience on campus, and educational and cultural enrichment opportunities. More recently, the college developed the Black Mountain Youth Leadership Program, designed to reduce the isolation of African Americans in Appalachia and encourage cultural literacy, academic achievement, and community service. Berea College explores the culture and history of one of America’s most unique institutions of higher learning. Complemented by more than 180 historic photographs, Wilson’s narrative documents Berea’s majestic and inspiring story. 
A Black Doctor Remembers Life, Medicine, and Civil Rights in an Alabama Town Beside the Troubled Waters is a memoir by an African American physician in Alabama whose story in many ways typifies the lives and careers of black doctors in the south during the segregationist era while also illustrating the diversity of the black experience in the medical profession. Based on interviews conducted with Hereford over ten years, the account includes his childhood and youth as the son of a black sharecropper and Primitive Baptist minister in Madison County, Alabama, during the Depression; his education at Huntsville’s all-black Councill School and medical training at Meharry Medical College in Nashville; his medical practice in Huntsville’s black community beginning in 1956; his efforts to overcome the racism he met in the white medical community; his participation in the civil rights movement in Huntsville; and his later problems with the Medicaid program and state medical authorities, which eventually led to the loss of his license. Hereford’s memoir stands out because of its medical and civil rights themes, and also because of its compelling account of the professional ruin Hereford encountered after 37 years of practice, as the end of segregation and the federal role in medical care placed black doctors in competition with white ones for the first time. The Black Freedom Struggle in Escambia County, Florida, 1960-1980 In 1975, Florida's Escambia County and the city of Pensacola experienced a pernicious chain of events. A sheriff's deputy killed a young black man at point-blank range. Months of protests against police brutality followed, culminating in the arrest and conviction of the Reverend H. K. Matthews, the leading civil rights organizer in the county. Viewing the events of Escambia County within the context of the broader civil rights movement, J. Michael Butler demonstrates that while activism of the previous decade destroyed most visible and dramatic signs of racial segregation, institutionalized forms of cultural racism still persisted. In Florida, white leaders insisted that because blacks obtained legislative victories in the 1960s, African Americans could no longer claim that racism existed, even while public schools displayed Confederate imagery and allegations of police brutality against black citizens multiplied. Offering a new perspective on the literature of the black freedom struggle, Beyond Integration reveals how with each legal step taken toward racial equality, notions of black inferiority became more entrenched, reminding us just how deeply racism remained--and still remains--in our society. The Big Sandy River and its two main tributaries, the Tug and Levisa forks, drain nearly two million mountainous acres in the easternmost part of Kentucky. For generations, the only practical means of transportation and contact with the outside world was the river, and, as The Big Sandy demonstrates, steamboats did much to shape the culture of the region. Carol Crowe-Carraco offers an intriguing and readable account of this region's history from the days of the venturesome Long Hunters of the eighteenth century, through the bitter struggles of the Civil War and its aftermath, up to the 1970s, with their uncertain promise of a new prosperity. The Big Sandy pictures these changes vividly while showing how the turbulent past of the valley lives on in the region's present. A History of the Episcopal Church in Alabama
<urn:uuid:27773241-e9a2-4f5d-b1f2-7d5ac1b1c9bc>
CC-MAIN-2016-26
http://muse.jhu.edu/browse/history/us_history/local_and_regional/south?items_per_page=10&browse_view_type=default&m=61
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00127-ip-10-164-35-72.ec2.internal.warc.gz
en
0.944217
2,612
3.296875
3
Everything you need to understand or teach Black Wind by Clive Cussler. Black Wind opens in a Japanese submarine I-403 in World War 2. The submarine captain reluctantly takes on a doctor and a secret payload. Their plan, he learns, is to attack America with something called "Makaze," the Evil Wind. Once near American shores, though, a naval vessel destroys them. More than sixty years later, the story opens on the Aleutian Islands where several scientists and meteorologists begin to experience bizarre health symptoms... Black Wind Lesson Plans contain 128 pages of teaching material, including:
<urn:uuid:a73c5c5a-8675-46e4-af32-18b12a042854>
CC-MAIN-2016-26
http://www.bookrags.com/Black_Wind/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.18/warc/CC-MAIN-20160624154955-00141-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931125
123
2.828125
3
As part of the Energy Efficiency and Renewable Energy Initiative under the Green Priority, the Reno City Council allocated stimulus funds for the installation of wind turbines on City properties as part of a wind energy demonstration program. Ever since Americans moved west, Northern Nevada's landscape has been dotted with windmills. The early settlers used windmills on farms and ranches to extract valuable water from the ground and redistribute it where it was needed most. As any resident can tell you, our region is well known for its powerful and consistent wind patterns. In fact, in his book Roughing It, Mark Twain once wrote, 'a Washoe wind is by no means a trifling matter', and that is exactly what the wind energy demonstration program aims to prove. While the theoretical potential for wind energy resources is plentiful, there has not been enough data to make accurate real-world predictions of our region's true wind energy potential. In an effort to fill this gap in data, the City installed wind turbines to demonstrate to citizens how the new generation of urban wind turbines compares to the old-style wind turbines. Two to three turbines are installed near each other with an anemometer to measure wind speeds and show energy output versus manufacturer-listed output. Anemometers are located near the turbines to collect wind speed, wind direction and electrical output data for each site and turbine in order to provide citizens with real-world data. This information is displayed on the Green Energy Dashboard. The data will further be used to create a 3-D map of our region showing micro-climate possibilities for what citizens could expect for wind output at their residence or business. The sites that have been selected are on top of City Hall, on top of the Downtown Parking Gallery, Mira Loma Park and the Stead Wastewater Treatment plant. All system installations have been generously supported by the NV Energy RenewableGenerations rebate program made possible by the Nevada State Legislature and the Public Utilities Commission of Nevada. Citizens and businesses can get more information on these systems by following the previous links. You can track all system production of the City's systems at the Green Energy Dashboard. Watch a video of the Proven turbine being installed at Stead.
<urn:uuid:43b12e1c-6174-4f42-bf3c-c0d3d33aaa07>
CC-MAIN-2016-26
http://www.reno.gov/residents/sustainability/energy-efficiency-renewable-energy-initiative/wind-energy-demonstration-program
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00052-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93783
448
2.921875
3
An article in the London Daily Telegraph suggesting that President Obama might back a major program of bulldozing parts of cities in the Rust Belt has put so-called "shrinking cities" back in the spotlight. Many cities around the country, especially in the Rust Belt, have experienced major population loss in their urban cores, which has sometimes spilled into their entire metro area. They have thousands of abandoned homes, decayed infrastructure, environmental challenges, and no growth to justify a belief that many districts will ever be repopulated. Cities in the Rust Belt grew in an era when large-scale manufacturing required large amounts of labor. Today, productivity improvements mean that the United States can set new industrial production records with a fraction of the workforce of yesteryear. With much of its traditional labor force no longer in as much demand in the modern economy, many Rust Belt cities lack an economic raison d'etre. Some may transform themselves for the modern economy, but many will be forced to accept the reality of a significantly diminished stature in the 21st century. In this world, size can prove a liability. One of the biggest problems in turning around Detroit is the sheer size of the region. The metro area has a population of 4.5 million – not including nearby Ann Arbor or Windsor, Canada. Is there really any need in the modern day for a city the size of Detroit in Southeastern Michigan? It seems doubtful. As I've argued before, transforming that city's economy would be much easier if the region were smaller. One challenge is that a decline in population, which is already occurring naturally, doesn't shrink the area of urbanization or the accompanying infrastructure that needs to be maintained. Indeed, although it is losing population and can't support the infrastructure it has, Detroit still wants to build more, such as a new regional rail transit system. And legacy debts such as pension liabilities don't get smaller just because people leave. As with leverage, scale economics works in declining places as well as in growing ones. The people who operate new transit systems or police who secure expanded areas must be paid. Roads, sewers, and water lines need to be maintained. In many places that are losing people, jobs, and tax base, such fixed costs could prove ruinous over the long run. Under such conditions, Rust Belt cities require both outside help and a program of managed shrinkage. The first challenge will be getting these cities, especially larger ones like Detroit, to admit that they need to do it on a regional basis. Medium-sized cities like Flint and Youngstown have been more willing to face up to challenges. In contrast, places like Detroit, Cleveland, and Buffalo still see themselves as important national cities. Pride is blocking the effort to undertake a major managed shrinkage program. Instead of adjusting to reality, these cities continue to pour hundreds of millions into projects that vainly attempt to restart growth. What would a federally assisted managed shrinkage program look like? No one can say for sure since this is a new field in America. Clearly, study of what has happened in Europe, particularly in Germany, where managed shrinkage has long been on the agenda, is warranted. But these ideas can't just be transplanted via lift and drop. We need to create a distinctly American program informed by the best practices of elsewhere. That program should include the following elements: - Education. 
Raising educational attainment not only makes people more employable in the new economy, it makes them more mobile. - Relocation Assistance. Many people in the Rust Belt might want to move but be unable to do so because they are upside down on a mortgage or can't sell their house. As more people leave, that will put downward pressure on the housing market. Hence, some government relocation assistance to help buy out people who want to move might be helpful. - Shrinking the Urban Footprint. The quantity of urbanized land needs to be reduced so that the excess housing and infrastructure can be retired and the cost of servicing it eliminated. This means painfully identifying areas which will not receive reinvestment, and encouraging and assisting the people and businesses that remain to relocate. This will be difficult as these neighborhoods are still the locales for people’s homes and they have a strong emotional sense of ownership. Sensitivity is clearly called for. We need to increase localized density in areas targeted for redevelopment and convert other areas to non-urbanized uses such as nature preserves or agriculture. This will be a long process. - Financial Restructuring. Older cities are often hobbled by mountains of debt, underfunded pensions, overstaffed payrolls, and too many municipal fixed assets. The government needs to be right-sized. Federal assistance may be needed to take over pensions and to give cities some tools to restructure unsustainable debt loads outside of bankruptcy. - Development Restrictions. In return for federal assistance, there ought to be a real insistence that these cities sign up to the shrinkage programs. This might include enforceable restrictions on their ability to adopt policies that are oriented towards servicing growth such as restrictions on the ability to use federal funding for net new infrastructure. For example, if Detroit wants to build a federally funded rail system, it should retire an equivalent amount of other infrastructure elsewhere to offset it. Participation would be voluntary, but the federal government should make it clear that it will not finance futile attempts by these cities to try to recapture the glory of their pasts. This is of course only a conceptual outline of a program. Significant thought, analysis, and research would be needed to develop a program. Given our lack of experience in the field, experiments should be encouraged, flexibility granted within broad parameters, and real world feedback continuously incorporated back into the program. Clearly, we will not get everything right the first time around. We need to have the courage to learn from our mistakes and not forge headlong into failure simply because it would look like a political retreat. This won't be pleasant or easy. It is not a path anyone wants to take. But given the condition of much of the Rust Belt, the only viable options appear to be painful ones. As local blogger Tom Jones recently said, “Too often, dealing with urban problems in Memphis is like the stages of grief. Just this once, maybe we can move past denial, anger, bargaining and depression, and unabashedly move to acceptance and develop the kinds of bold plans that can truly make a difference in the trajectory of our city.” Aaron M. Renn is an independent writer on urban affairs based in the Midwest. His writings appear at The Urbanophile.
<urn:uuid:1140f7de-3957-4906-af02-65099e845263>
CC-MAIN-2016-26
http://www.newgeography.com/content/00883-shrinking-rust-belt
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958799
1,345
2.734375
3
To say Yosemite National Park is an eyeful is an understatement. Everywhere you look it seems there's something to fix your gaze on -- Half Dome, Glacier Point, El Capitan, Tenaya Lake. But how can we preserve those vistas for future generations? How can we ensure that they're as marvelous (if not more so) 50 or 100 years down the road as they are today? Those are questions the folks at Yosemite are hoping to be able to answer in the months ahead. Beginning February 12 the park will embark on a 30-day public scoping period to gather thoughts on what should be considered as they move forward with developing a Scenic Vista Management Plan Environmental Assessment. Written comments should be postmarked no later than March 13, 2009. Historians will tell you that Yosemite was originally set aside for preservation due to its outstanding scenery. Back in 1851, when Dr. Lafayette Bunnell first set his eyes on the Yosemite Valley, this is how he described the incredible setting of rock, water, and trees: "...the clouds...partially dimmed the higher cliffs and mountains. This obscurity of vision ... increased the awe with which I beheld it, and as I looked, a peculiar exalted sensation seemed to fill my whole being." Millions of modern-day explorers have experienced this same view. Today, we call it Tunnel View. It’s just one of many iconic views and vistas for which Yosemite is famous. With that accepted, the purpose of the Scenic Vista Management Plan is to:
* Protect Yosemite’s historic viewsheds and the natural processes that created them.
* Preserve the historic and cultural contexts in which the viewpoints were created.
* Restore visitor-use opportunities associated with lost vistas.
* Where historic viewpoints cannot be rehabilitated, identify potential new views or vistas.
* Restore or maintain vistas by restoring natural species composition, structure, and function to systems or by using traditional Native American management practices.
A public open house is scheduled for February 25 from 1 p.m. to 4 p.m. in the Valley Visitor Center Auditorium in Yosemite Valley. Park admission fees will be waived for those attending the open house. You can either submit your thoughts at that meeting, fax them to 209-379-1294, email them from this page, or, after February 12, use the National Park Service's Planning, Environment, and Public Comment commenting system.
<urn:uuid:cd5639db-e758-40b4-b410-1c79c88e4243>
CC-MAIN-2016-26
http://www.nationalparkstraveler.com/2009/02/how-can-yosemite-national-parks-magnificent-vistas-be-preserved
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00023-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946586
506
2.890625
3
Up, Up and Away: High Flying with Science
More than 300 students from schools in Northamptonshire, Worcestershire and the Midlands will learn about the fun side of science when they attend the University of Birmingham’s Science Festival on Monday 30 March. From building remote-powered vehicles using Lego to getting to grips with blood pressure monitors, the 15- and 16-year-olds will visit the University's Edgbaston campus to discover more about student life and learning. They will also attend sessions in the areas of Mathematics, Chemistry, Physics, Engineering, Geology, Geography and Biological Sciences.
Mohammed Ansar, the University’s Outreach Officer and organiser of the event, said: ‘Raising the profile of science and the aspirations of young people to think about Higher Education is an important mission for us. I’m sure the visit will break down barriers that some young people have when thinking about the benefits of post-16 education and especially studying science-based subjects at degree level. This is a great opportunity for young people to discover what goes on inside universities and how science subjects contribute to the economy.’
Leading biologists Dr Jeremy Pritchard and Dr Susannah Thorpe from the School of Biological Sciences will deliver the keynote lecture entitled ‘Am I An Ape?’, linking the Festival to the International Year of Darwin 2009. The day will start with an interactive lecture for all students delivered by Dr Ed Tarte from the School of Electronic and Electrical Engineering on the topic of ‘Superconductors are Super Cool’ and will finish with a balloon release, with a prize given to the student whose balloon travels the furthest.
Notes to Editor:
Balloon release: 2.45 - 3pm Chancellor’s Court, University of Birmingham, Edgbaston
Event contact: Mohammed Ansar, Outreach Officer: Tel: 0121 414 7169 / 07974180154 (on the day)
For further media information, contact Kate Chapple, Press Officer, University of Birmingham, tel 0121 414 2772 or 07789 921164.
<urn:uuid:149269cd-8115-4ace-8f00-f1cf797fa7cc>
CC-MAIN-2016-26
http://www.birmingham.ac.uk/news/latest/2009/03/24Mar-Flying.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396945.81/warc/CC-MAIN-20160624154956-00033-ip-10-164-35-72.ec2.internal.warc.gz
en
0.899739
435
2.515625
3
Some substances are secreted from the plasma into the lumen by the cells of the nephron. An example of such a substance is ammonia (NH3). As in reabsorption, there are transporters on the cells that can move these specific substances into the lumen. Now let's put all of these processes -- filtration, reabsorption and secretion -- together to understand how the kidneys maintain a constant composition of the blood. Let's say that you decide to eat several bags of salty (NaCl) potato chips at one sitting. The Na will be absorbed into your blood by your intestines, increasing the concentration of Na in your blood. The increased Na in the blood will be filtered into the nephron. While the Na transporters will attempt to reabsorb all of the filtered Na, it's likely that the amount will exceed their ability. Therefore, excess Na will remain in the lumen; water will also remain, due to osmosis. The excess Na will be excreted into the urine and eliminated from the body. So whether a substance remains in the blood depends on the amount filtered into the nephron and the amount reabsorbed or secreted by various transporters. Let's look at another example: Why do you have to keep taking repeated doses of any given medicine? Well, once you take the medicine, it gets absorbed by the intestine into the blood. The medicine in the blood acts on its target cell and also gets filtered into the nephron. Most medicines don't have transporters in the nephron to reabsorb them from the filtrate. In fact, some transporters actively secrete medicines into the nephron. Therefore, the medicine gets eliminated in the urine and you must take another dosage later. We've seen how the kidney can regulate ions and small molecules and eliminate unwanted substances. In the next section, we'll see how the kidney maintains water balance.
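The sodium and medicine examples above boil down to a single mass-balance relation: the amount of a substance that leaves in the urine is the amount filtered, minus the amount reabsorbed, plus the amount secreted. The short Python sketch below illustrates that relation; the function name and all quantities are illustrative assumptions, not physiological data from the article.

```python
# Illustrative mass balance for how the nephron handles a solute:
# excreted = filtered - reabsorbed + secreted

def urinary_excretion(filtered, reabsorbed, secreted):
    """Amount leaving in the urine, in arbitrary units."""
    return filtered - reabsorbed + secreted

# Sodium after a salty snack: filtration outpaces what the transporters
# can reabsorb, so the excess ends up in the urine.
print(urinary_excretion(filtered=100.0, reabsorbed=85.0, secreted=0.0))  # 15.0

# A typical medicine: little reabsorption and some active secretion,
# so it is cleared quickly and repeat doses are needed.
print(urinary_excretion(filtered=10.0, reabsorbed=0.0, secreted=4.0))    # 14.0
```

Whether a substance stays in the blood is then simply a question of how those three terms compare.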
<urn:uuid:512f6624-354f-42d4-aa19-68e7ddafe8d8>
CC-MAIN-2016-26
http://health.howstuffworks.com/human-body/systems/kidney-urinary/kidney4.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00090-ip-10-164-35-72.ec2.internal.warc.gz
en
0.939916
402
3.765625
4
This book offers an introduction to the structures and varieties of Spanish, covering all the major levels of linguistic analysis; considerable attention is paid to Judeo-Spanish and creoles. No previous knowledge of linguistics is assumed and a glossary of technical terms, in conjunction with exercises and activities, helps to reinforce key points. The book is written specifically with English-speaking learners of Spanish in mind, and readers will find a good deal of practical help in developing skills such as pronunciation and the appropriate use of register.
<urn:uuid:95b3ee2d-8374-454a-8c9c-223aaf87f91d>
CC-MAIN-2016-26
http://linguistlist.org/pubs/books/get-book.cfm?BookID=10153
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00131-ip-10-164-35-72.ec2.internal.warc.gz
en
0.9134
113
3.296875
3
Aquarius to Study the Power of Sea Salt
June 7, 2011: A new observatory is about to leave Earth to map a powerful compound of global importance: Common everyday sea salt. Researchers suspect that the salinity of Earth's oceans has far-reaching effects on climate, much as the salt levels within our bodies influence our own delicate internal balance. An international team of scientists from NASA and the Space Agency of Argentina, or CONAE, will investigate this possibility with the aid of a satellite named "Aquarius/SAC-D," scheduled to launch on June 9th. "Based on decades of historical data gathered from ocean areas by ships and buoys, we know the salinity has changed over the last 40 years," says Aquarius principal investigator Gary Lagerloef. "This tells us there's something fundamental going on in the water cycle." Salinity is increasing in some ocean regions, like the subtropical Atlantic, which means more fresh water is being lost through evaporation at the sea surface. But no one knows why this is happening; nor can anyone pinpoint why other areas are experiencing more rainfall and lower salinity. To solve these mysteries, scientists need a comprehensive look at global salinity. Within a few months, Aquarius will collect as many sea surface salinity measurements as the entire 125-year historical record from ships and buoys. "Salinity, along with temperature, governs the density of seawater," says Lagerloef. "The saltier the water, the denser it is, and density drives the currents that determine how the ocean moves heat around the planet. For example, the Gulf Stream carries heat to higher latitudes and moderates the climate. When these currents are diverted by density variations, weather patterns such as rainfall and temperature change." Scientists have gathered an ensemble of measurements over the ocean--e.g., wind speed and direction, sea surface heights and temperatures, and rainfall. But these data do not provide a complete picture. "We've been missing a key element – salinity," says Lagerloef. "A better understanding of ocean salinity will give us a clearer picture of how the sea is tied to the water cycle and help us improve the accuracy of models predicting future climate." Aquarius is one of the most sensitive microwave radiometers ever built, and the first NASA sensor to track ocean salinity from space. "It can detect as little as 0.2 parts salt to 1,000 parts water -- about the same as a dash of salt in a gallon of water. A human couldn't taste such a low concentration of salt, yet Aquarius manages to detect it while orbiting 408 miles above the Earth." The Aquarius radiometer gets some help from other instruments onboard the satellite. One of them helps sort out the distortions of the choppy sea. CONAE's Sandra Torrusio, principal investigator for the Argentine and other international instruments onboard, explains: "One of our Argentine instruments is another microwave radiometer in a different frequency band that will measure sea surface winds, rainfall, sea ice, and any other 'noise' that could distort the Aquarius salinity measurement. We'll subtract all of that out and retrieve the target signal." Torrusio is excited about the mission. "I've met so many new people, not only from Argentina, but from the US and NASA! It's been a great experience to work with them and exchange ideas. We may come from different places, but we all talk the same language. And it isn't English – it's science."
Working together, these international "people of science" will tell us more about the ocean's role in our planet's balance – and in our own – no matter where we live. For whatever we lose (like a you or a me), It's always our self we find in the sea.
Aquarius/SAC-D: This multipurpose observatory continues the long-standing partnership between NASA and the Argentine Comisión Nacional de Actividades Espaciales, or CONAE. NASA provides launch vehicles and science instruments, while CONAE contributes the spacecraft, mission operations, and science instruments for their national space program. The NASA Aquarius instrument to measure ocean salinity is the prime instrument on the Aquarius/SAC-D mission. JPL will manage the NASA Aquarius implementation through its commissioning phase and archive mission data. Goddard will manage Aquarius instrument operations and process science data. NASA's Launch Services Program at the agency's Kennedy Space Center in Florida is managing the launch. CONAE is providing the SAC-D spacecraft, an optical camera, a thermal camera in collaboration with Canada, a microwave radiometer, sensors from various Argentine institutions, and the mission operations center there. France and Italy are contributing instruments. SAC stands for Satélite de Aplicaciones Científicas; this is the fourth ("D") in the series of science application satellites Argentina has built in collaboration with NASA.
A radiometer is essentially a sensitive radio receiver, which, in this case, detects natural microwave emissions given off by the ocean's surface. The Aquarius radiometer scans the sea surface to measure the emitted power in a certain frequency band (1400-1430 MHz) that is proportional to the water's salt content. There is an average of 35 parts per thousand of salt in the ocean (the ratio varies from 32 to 37 in open-ocean areas). That is, the ocean is about 3.5 percent salt, and 1 kilogram of seawater contains about 35 grams of salt. Since salinity levels in the open ocean vary by only about five parts per thousand, the instrument must be very sensitive.
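A few lines of arithmetic make the quoted figures concrete. This is only a rough sketch: the conversion factors (a US gallon of water is about 3.785 litres, and a litre of water has a mass of roughly 1 kilogram) are general assumptions, not numbers taken from the article.

```python
# Rough arithmetic behind the salinity figures quoted above.

mean_salinity_ppt = 35.0   # average open-ocean salinity, parts per thousand
sensitivity_ppt = 0.2      # smallest change Aquarius is said to detect

# 35 ppt means about 35 g of salt in 1 kg (roughly 1 litre) of seawater.
grams_per_kg = mean_salinity_ppt
print(f"{grams_per_kg:.0f} g of salt per kg of seawater")

# 0.2 ppt spread over a gallon of water (~3.785 kg) is well under a gram --
# about the "dash of salt" the article describes.
gallon_mass_kg = 3.785
dash_grams = sensitivity_ppt / 1000.0 * gallon_mass_kg * 1000.0
print(f"0.2 ppt in one gallon is about {dash_grams:.2f} g of salt")  # ~0.76 g
```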
<urn:uuid:552f0965-ac49-4aec-8713-a092152a7f95>
CC-MAIN-2016-26
http://science.nasa.gov/science-news/science-at-nasa/2011/07jun_aquarius/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00001-ip-10-164-35-72.ec2.internal.warc.gz
en
0.909355
1,188
3.59375
4
Credit NASA with another milestone in space exploration: On February 12, 2001, a spacecraft landed on the surface of an asteroid for the first time in history. After a year spent orbiting the asteroid 433 Eros, the Near Earth Asteroid Rendezvous (NEAR) spacecraft made a controlled descent to the surface. But what exactly is an asteroid? And what was the NEAR Shoemaker mission about? In 1772, a mathematician named Johann Titius and an astronomer named Johann Bode discovered a mathematical sequence in the distances of the planets from the sun -- this sequence predicted the possibility of a planet orbiting between Mars and Jupiter at 2.8 AU (2.6×10^8 mi / 4.2×10^8 km) from the sun. So astronomers began to search for this possible planet, and in 1801, an Italian astronomer named Giuseppe Piazzi found a faint body at that distance that he named Ceres. However, Ceres was fainter than Mars or Jupiter, so Piazzi concluded that it was much smaller. Other small bodies were later found in this same vicinity. These objects were named asteroids (meaning star-like) or minor planets. Asteroids are small, rocky bodies that orbit the sun in between the orbits of Mars and Jupiter, which is anywhere from 2.1 AU (1.95×10^8 mi / 3.15×10^8 km) to 3.2 AU (3.0×10^8 mi / 4.8×10^8 km) from the sun. There are more than 20,000 known asteroids. They are irregularly shaped and vary in size from a radius of 1 km (0.62 mi) to several hundred kilometers (Ceres is the largest, with a radius of 284 miles / 457 km). By measuring fluctuations in their brightness, we know that many asteroids rotate in periods of three to 30 days.
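The "mathematical sequence" attributed to Titius and Bode is usually written as a = 0.4 + 0.3 × 2^n astronomical units. The sketch below uses that commonly quoted form (the formula and planet assignments are standard textbook values, not taken from this article) to show how n = 3 lands on the 2.8 AU gap where Piazzi found Ceres.

```python
# Titius-Bode relation: a_n = 0.4 + 0.3 * 2**n (in astronomical units).
# Mercury is conventionally assigned 0.4 AU (the n -> -infinity case) and is omitted here.

def titius_bode(n):
    return 0.4 + 0.3 * 2 ** n

slots = ["Venus", "Earth", "Mars", "(gap: Ceres / asteroid belt)", "Jupiter", "Saturn"]
for n, body in enumerate(slots):
    print(f"n = {n}: {titius_bode(n):5.1f} AU  {body}")
# n = 3 gives 2.8 AU, the distance at which Ceres was found in 1801.
```

The relation breaks down for Neptune and beyond, which is one reason it is now treated as a curiosity rather than a physical law.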
<urn:uuid:2c98d9c5-a8cf-4075-b384-e2fce7a8f477>
CC-MAIN-2016-26
http://science.howstuffworks.com/dictionary/astronomy-terms/asteroid.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00043-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956057
380
4.3125
4
Q: Most people use “like” instead of “as” these days. In a blog posting a few years ago, you presented the case for using “like” as a conjunction, but then recommended against doing it. Have you changed your mind since then? A: The use of “like” as a conjunction introducing a clause (“If you knew Susie like I know Susie”) is extremely common in both written and spoken English. But the prohibition against it is familiar to anyone old enough to have learned grammar in public school—that is, roughly anyone over 50. Is the usage still considered a crime? That depends on whom you ask. As we said in our 2007 blog post, opinions were then shifting and edicts against “like” were softening. Four years later, that’s still the case. When re-examining a familiar old edict, it’s always worth asking why the edict was laid down in the first place. The truth is that writers have been using “like” as a conjunction since the 14th century. Chaucer did it. Shakespeare, too. So did Keats, Emily Brontë, Thackeray, George Eliot, Dickens, Kipling, Shaw, and so on. Merriam-Webster’s Dictionary of English Usage says that objections to “like” as a conjunction were apparently “a 19th-century reaction to increased conjunctive use at that time.” Furthermore, Merriam-Webster’s says, “the objectors were chiefly commentators on usage rather than grammarians or lexicographers.” But after World War I, all three groups—usage commentators, grammarians, and lexicographers—were in agreement: “It was incorrect to use like for as or as if,” says M-W; “like was a preposition, not a conjunction.” By 1959, the authors of The Elements of Style went so far as to call the usage “illiterate.” And now? After an extensive examination of the history of the usage, Merriam-Webster’s concludes that “Strunk & White’s relegation of conjunctive like to misuse by the illiterate is wrong.” R. W. Burchfield, writing in Fowler’s Modern English Usage (revised 3rd ed.), agrees. After doing his own extensive examination of the usage, Burchfield concludes that “like as a conjunction is struggling towards acceptable standard or neutral ground” and that “the long-standing resistance to this omnipresent little word is beginning to crumble.” After reviewing the subject for our book Origins of the Specious, we came down on the side of Burchfield and Merriam-Webster’s, with this caveat: “But let’s face facts— or, rather, myths. Anyone who uses ‘like’ as a conjunction, especially in formal writing, risks being accused of illiteracy.” So until further notice, be aware that conservative usage guides (and grammar sticklers) still condemn the use of “like” as a conjunction. If you’re inclined to use it this way, consider your audience. Check out our books about the English language
<urn:uuid:381b8271-a847-4eb0-95ce-fb11b7ea4e0c>
CC-MAIN-2016-26
http://www.grammarphobia.com/blog/2011/02/like-2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962077
719
2.984375
3
Description: This product provides direct access to a widely scattered collection of original medieval manuscripts that describe travel - real and imaginary - in the Middle Ages. These sources tell us much about the attitudes and preconceptions of people across Europe in the medieval period, shedding light on issues of race, economics, trade, militarism, politics, literature and science. The project combines: *Multiple manuscript sources detailing the journeys of famous travelers from Prester John and Marco Polo to Sir John Mandeville and John Capgrave *Translations and supporting materials (all of which are fully searchable) *Maps showing the routes of the travelers *Introductory essays by leading scholars
<urn:uuid:74b9455f-4727-48a2-8a3a-c3f5be9390d4>
CC-MAIN-2016-26
http://eresources.lib.unc.edu/eid/description.php?resourceID=221884&passthrough=no
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00088-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892776
136
3
3
The Sharpbill (Oxyruncus cristatus, pronounced /ˈʃɑrpˌbɪl/) is a small, drab passerine of New World tropical forests. Its range extends from the mountainous areas of tropical South America to southern Central America (Panama and Costa Rica). It inhabits the canopy of wet forest and feeds on fruit and some invertebrates. It has an orange erectile crest, black-spotted yellowish underparts and scaling on the head and neck. As its name implies, it has a sharp bill.
Conservation status: the Sharpbill is classified as Least Concern. It does not qualify for a more at-risk category; widespread and abundant taxa are included in this category.
Diet: Sharpbills eat mainly fruit, insects, and insect eggs. They get their name from their pointed bill, which allows them to hunt for food using what is called "pry and gape" behavior. When a sharpbill is feeding, it often hangs upside down on a branch and uses its pointed bill to pry into fruit, tightly rolled leaves, or moss growing on the tree.
The sharpbill is a 16-centimetre- (6.5-inch-) long bird with short legs, longish wings and tail, and a sharply pointed bill. It is plain greenish brown above and pale yellowish to white below, with dark spots and bars; the midcrown has a low crest of red feathers. The head is marked by a red eye and the straight, gray bill that gives the genus its name; this instrument tapers from a broad base to an unusually pointed tip, and short rictal bristles encircle its conical base. A median crest ranges between races from bright crimson to orange and is raised only when the bird is excited. Adult female plumage is more muted, and the crest is less conspicuous. The elongated nostrils are covered with a flap, as in the tapaculos. Its sharp bill probes the bark of trees. (Heliobletus contaminatus, of the ovenbird family Furnariidae, is also called sharpbill, or sharp-billed tree hunter.)
The exact affinities of the sharpbill have been in dispute since the genus Oxyruncus was first described in 1820. Since the late nineteenth century most authors have given the sharpbill family status, despite its widely scattered distribution. Sharpbills are obviously related to the tyrannid passerines, particularly the tyrant flycatchers, cotingas, and manakins. However, their exact relations with these groups remain unclear. The enigmatic Sharpbill is a canopy-dwelling, fruit-eating bird that is the sole member of its monotypic family; taxonomists have long debated its closest relatives, and it is generally placed close to the cotingas. This curious bird is probably best placed in its own family, Oxyruncidae, given that genetic data have yet to provide a consistent answer as to its best placement; in the past it has been variously considered a member of the Cotingidae or the Tyrannidae, or a member of the recently constituted Tityridae (along with tityras, becards and a handful of other species of somewhat enigmatic affinities). Its complex green and grey plumage provides excellent camouflage in the dappled canopy light, and its bizarre call has been likened to a falling bomb without the explosion at the end.
Similar species (Cotingidae): Bare-necked Fruitcrow, Capuchinbird, Crimson Fruitcrow, Dusky Purpletuft, Guianan Cock-of-the-rock, Guianan Red-Cotinga, Pompadour Cotinga, Purple-breasted Cotinga, Purple-throated Fruitcrow, Screaming Piha, Spangled Cotinga, White Bellbird.
Video records: a Sharpbill on the nest, incubating, Tapanti National Park, Cartago Province, Costa Rica (Kathy Rohe, 3 June 2008); a distant bird on a treetop, Santa Teresa, Espirito Santo, Brazil, ssp. cristatus (Josep del Hoyo, 25 January 2005).
<urn:uuid:5a6dfcdb-5e7e-49cd-a22f-c845903624f4>
CC-MAIN-2016-26
http://thewebsiteofeverything.com/animals/birds/Passeriformes/Cotingidae/Oxyruncus-cristatus
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.928409
1,298
3.3125
3
(Greek diabolos; Latin diabolus). It may be said of this name, as St. Gregory says of the word angel, "nomen est officii, non naturæ"--the designation of an office, not of a nature. For the Greek word (from diaballein, "to traduce") means a slanderer, or accuser, and in this sense it is applied to him of whom it is written "the accuser [ho kategoros] of our brethren is cast forth, who accused them before our God day and night" (Apocalypse 12:10). It thus answers to the Hebrew name Satan which signifies an adversary, or an accuser. Mention is made of the Devil in many passages of the Old and New Testaments, but there is no full account given in any one place, and the Scripture teaching on this topic can only be ascertained by combining a number of scattered notices from Genesis to Apocalypse, and reading them in the light of patristic and theological tradition. The authoritative teaching of the Church on this topic is set forth in the decrees of the Fourth Lateran Council (cap. i, "Firmiter credimus"), wherein, after saying that God in the beginning had created together two creatures, the spiritual and the corporeal, that is to say the angelic and the earthly, and lastly man, who was made of both spirit and body, the council continues: "Diabolus enim et alii dæmones a Deo quidem naturâ creati sunt boni, sed ipsi per se facti sunt mali." ("the Devil and the other demons were created by God good in their nature but they by themselves have made themselves evil.") Here it is clearly taught that the Devil and the other demons are spiritual or angelic creatures created by God in a state of innocence, and that they became evil by their own act. It is added that man sinned by the suggestion of the Devil, and that in the next world the wicked shall suffer perpetual punishment with the Devil. The doctrine which may thus be set forth in a few words has furnished a fruitful theme for theological speculation for the Fathers and Schoolmen, as well as later theologians, some of whom, Suarez for example, have treated it very fully. On the other hand it has also been the subject of many heretical or erroneous opinions, some of which owe their origin to pre-Christian systems of demonology. In later years Rationalist writers have rejected the doctrine altogether, and seek to show that it has been borrowed by Judaism and Christianity from external systems of religion wherein it was a natural development of primitive Animism. As may be gathered from the language of the Lateran definition, the Devil and the other demons are but a part of the angelic creation, and their natural powers do not differ from those of the angels who remained faithful. Like the other angels, they are pure spiritual beings without any body, and in their original state they are endowed with supernatural grace and placed in a condition of probation. It was only by their fall that they became devils. This was before the sin of our first parents, since this sin itself is ascribed to the instigation of the Devil: "By the envy of the Devil, death came into the world" (Wisdom 2:24). Yet it is remarkable that for an account of the fall of the angels we must turn to the last book of the Bible.
For as such we may regard the vision in the Apocalypse, albeit the picture of the past is blended with prophecies of what shall be in the future: And there was a great battle in heaven, Michael and his angels fought with the dragon, and the dragon fought and his angels: and they prevailed not, neither was their place found any more in heaven. And that great dragon was cast out, that old serpent, who is called the devil and Satan, who seduceth the whole world; and he was cast unto the earth, and his angels were thrown down with him. (Apocalypse 12:7-9) To this may be added the words of St. Jude: "And the angels who kept not their principality, but forsook their own habitation, he hath reserved under darkness in everlasting chains, unto the judgment of the great day" (Jude 1:6; cf. 2 Peter 2:4). How art thou fallen from heaven, O Lucifer, who didst rise in the morning? how art thou fallen to the earth, that didst wound the nations? And thou saidst in thy heart: I will ascend into heaven, I will exalt my throne above the stars of God, I will sit in the mountain of the covenant, in the sides of the north. I will ascend above the height of the clouds, I will be like the most High. But yet thou shalt be brought down to hell, into the depth of the pit. (Isaiah 14:12-15) This parable of the prophet is expressly directed against the King of Babylon, but both the early Fathers and later Catholic commentators agree in understanding it as applying with deeper significance to the fall of the rebel angel. And the older commentators generally consider that this interpretation is confirmed by the words of Our Lord to his disciples: "I saw Satan like lightning falling from heaven" (Luke 10:18). For these words were regarded as a rebuke to the disciples, who were thus warned of the danger of pride by being reminded of the fall of Lucifer. But modern commentators take this text in a different sense, and refer it not to the original fall of Satan, but his overthrow by the faith of the disciples, who cast out devils in the name of their Master. And this new interpretation, as Schanz observes, is more in keeping with the context. The parallel prophetic passage is Ezekiel's lamentation upon the king of Tyre: You were the seal of resemblance, full of wisdom, and perfect in beauty. You were in the pleasures of the paradise of God; every precious stone was thy covering; the sardius, the topaz, and the jasper, the chrysolite, and the onyx, and the beryl, the sapphire, and the carbuncle, and the emerald; gold the work of your beauty: and your pipes were prepared in the day that you were created. You a cherub stretched out, and protecting, and I set you in the holy mountain of God, you have walked in the midst of the stones of fire. You were perfect in your wave from the day of creation, until iniquity was found in you. (Ezekiel 28:12-15) There is much in the context that can only be understood literally of an earthly king concerning whom the words are professedly spoken, but it is clear that in any case the king is likened to an angel in Paradise who is ruined by his own iniquity. Even for those who in no way doubt or dispute it, the doctrine set forth in these texts and patristic interpretations may well suggest a multitude of questions, and theologians have not been loath to ask and answer them. And in the first place what was the nature of the sin of the rebel angels? 
In any case this was a point presenting considerable difficulty, especially for theologians, who had formed a high estimate of the powers and possibilities of angelic knowledge, a subject which had a peculiar attraction for many of the great masters of scholastic speculation. For if sin be, as it surely is, the height of folly, the choice of darkness for light, of evil for good, it would seem that it can only be accounted for by some ignorance, or inadvertence, or weakness, or the influence of some overmastering passion. But most of these explanations seem to be precluded by the powers and perfections of the angelic nature. The weakness of the flesh, which accounts for such a mass of human wickedness, was altogether absent from the angels. There could be no place for carnal sin without the corpus delicti. And even some sins that are purely spiritual or intellectual seem to present an almost insuperable difficulty in the case of the angels. This may certainly be said of the sin which by many of the best authorities is regarded as being actually the great offense of Lucifer, to wit, the desire of independence of God and equality with God. It is true that this seems to be asserted in the passage of Isaiah (14:13). And it is naturally suggested by the idea of rebellion against an earthly sovereign, wherein the chief of the rebels very commonly covets the kingly throne. At the same time the high rank which Lucifer is generally supposed to have held in the hierarchy of angels might seem to make this offense more likely in his case, for, as history shows, it is the subject who stands nearest the throne who is most open to temptations of ambition. But this analogy is not a little misleading. For the exaltation of the subject may bring his power so near that of his sovereign that he may well be able to assert his independence or to usurp the throne; and even where this is not actually the case he may at any rate contemplate the possibility of a successful rebellion. Moreover, the powers and dignities of an earthly prince may be compatible with much ignorance and folly. But it is obviously otherwise in the case of the angels. For, whatever gifts and powers may be conferred on the highest of the heavenly princes, he will still be removed by an infinite distance from the plenitude of God's power and majesty, so that a successful rebellion against that power or any equality with that majesty would be an absolute impossibility. And what is more, the highest of the angels, by reason of their greater intellectual illumination, must have the clearest knowledge of this utter impossibility of attaining to equality with God. This difficulty is clearly put by the Disciple in St. Anselm's dialogue "De Casu Diaboli" (cap. iv); for the saint felt that the angelic intellect, at any rate, must see the force of the "ontological argument" (see ONTOLOGY). "If", he asks, "God cannot be thought of except as sole, and as of such an essence that nothing can be thought of like to Him [then] how could the Devil have wished for what could not be thought of? — He surely was not so dull of understanding as to be ignorant of the inconceivability of any other entity like to God" (Si Deus cogitari non potest, nisi ita solus, ut nihil illi simile cogitari possit, quomodo diabolus potuit velle quod non potuit cogitari? Non enim ita obtusæ mentis erat, ut nihil aliud simile Deo cogitari posse nesciret). The Devil, that is to say, was not so obtuse as not to know that it was impossible to conceive of anything like (i.e. 
equal) to God. And what he could not think he could not will. St. Anselm's answer is that there need be no question of absolute equality; yet to will anything against the Divine will is to seek to have that independence which belongs to God alone, and in this respect to be equal to God. In the same sense St. Thomas (I:63:3) answers the question, whether the Devil desired to be "as God". If by this we mean equality with God, then the Devil could not desire it, since he knew this to be impossible, and he was not blinded by passion or evil habit so as to choose that which is impossible, as may happen with men. And even if it were possible for a creature to become God, an angel could not desire this, since, by becoming equal with God he would cease to be an angel, and no creature can desire its own destruction or an essential change in its being. These arguments are combated by Scotus (In II lib. Sent., dist. vi, Q. i.), who distinguishes between efficacious volition and the volition of complaisance, and maintains that by the latter act an angel could desire that which is impossible. In the same way he urges that, though a creature cannot directly will its own destruction, it can do this consequenter, i.e. it can will something from which this would follow. Although St. Thomas regards the desire of equality with God as something impossible, he teaches nevertheless (loc. cit.) that Satan sinned by desiring to be "as God", according to the passage in the prophet (Isaiah 14), and he understands this to mean likeness, not equality. But here again there is need of a distinction. For men and angels have a certain likeness to God in their natural perfections, which are but a reflection of his surpassing beauty, and yet a further likeness is given them by supernatural grace and glory. Was it either of these likenesses that the devil desired? And if it be so, how could it be a sin? For was not this the end for which men and angels were created? Certainly, as Thomas teaches, not every desire of likeness with God would be sinful, since all may rightly desire that manner of likeness which is appointed them by the will of their Creator. There is sin only where the desire is inordinate, as in seeking something contrary to the Divine will, or in seeking the appointed likeness in a wrong way. The sin of Satan in this matter may have consisted in desiring to attain supernatural beatitude by his natural powers or, what may seem yet stranger, in seeking his beatitude in the natural perfections and reflecting the supernatural. In either case, as St. Thomas considers, this first sin of Satan was the sin of pride. Scotus, however (loc. cit., Q. ii), teaches that this sin was not pride properly so called, but should rather be described as a species of spiritual lust. Although nothing definite can be known as to the precise nature of the probation of the angels and the manner in which many of them fell, many theologians have conjectured, with some show of probability, that the mystery of the Divine Incarnation was revealed to them, that they saw that a nature lower than their own was to be hypostatically united to the Person of God the Son, and that all the hierarchy of heaven must bow in adoration before the majesty of the Incarnate Word; and this, it is supposed, was the occasion of the pride of Lucifer (cf. Suarez, De Angelis, lib. VII, xiii). 
As might be expected, the advocates of this view seek support in certain passages of Scripture, notably in the words of the Psalmist as they are cited in the Epistle to the Hebrews: "And again, when he bringeth in the first-begotten into the world, he saith: And let all the angels of God adore Him" (Hebrews 1:6; Psalm 96:7). And if the twelfth chapter of the Apocalypse may be taken to refer, at least in a secondary sense, to the original fall of the angels, it may seem somewhat significant that it opens with the vision of the Woman and her Child. But this interpretation is by no means certain, for the text in Hebrews 1, may be referred to the second coming of Christ, and much the same may be said of the passage in the Apocalypse. It would seem that this account of the trial of the angels is more in accordance with what is known as the Scotist doctrine on the motives of the Incarnation than with the Thomist view, that the Incarnation was occasioned by the sin of our first parents. For since the sin itself was committed at the instigation of Satan, it presupposes the fall of the angels. How, then, could Satan's probation consist in the fore-knowledge of that which would, ex hypothesi, only come to pass in the event of his fall? In the same way it would seem that the aforesaid theory is incompatible with another opinion held by some old theologians, to wit, that men were created to fill up the gaps in the ranks of the angels. For this again supposes that if no angels had sinned no men would have been made, and in consequence there would have been no union of the Divine Person with a nature lower than the angels. As might be expected from the attention they had bestowed on the question of the intellectual powers of the angels, the medieval theologians had much to say on the time of their probation. The angelic mind was conceived of as acting instantaneously, not, like the mind of man, passing by discursive reasoning from premises to conclusions. It was pure intelligence as distinguished from reason. Hence it would seem that there was no need of any extended trial. And in fact we find St. Thomas and Scotus discussing the question whether the whole course might not have been accomplished in the first instant in which the angels were created. The Angelic Doctor argues that the Fall could not have taken place in the first instant. And it certainly seems that if the creature came into being in the very act of sinning the sin itself might be said to come from the Creator. But this argument, together with many others, is answered with his accustomed acuteness by Scotus, who maintains the abstract possibility of sin in the first instant. But whether possible or not, it is agreed that this is not what actually happened. For the authority of the passages in Isaiah and Ezekiel, which were generally accepted as referring to the fall of Lucifer, might well suffice to show that for at least one instant he had existed in a state of innocence and brightness. To modern readers the notion that the sin was committed in the second instant of creation may seem scarcely less incredible than the possibility of a fall in the very first. But this may be partly due to the fact that we are really thinking of human modes of knowledge, and fail to take into account the Scholastic conception of angelic cognition. For a being who was capable of seeing many things at once, a single instant might be equivalent to the longer period needed by slowly-moving mortals. 
This dispute, as to the time taken by the probation and fall of Satan, has a purely speculative interest. But the corresponding question as to the rapidity of the sentence and punishment is in some ways a more important matter. There can indeed be no doubt that Satan and his rebel angels were very speedily punished for their rebellion. This would seem to be sufficiently indicated in some of the texts which are understood to refer to the fall of the angels. It might be inferred, moreover, from the swiftness with which punishment followed on the offense in the case of our first parents, although man's mind moves more slowly than that of the angels, and he had more excuse in his own weakness and in the power of his tempter. It was partly for this reason, indeed, that man found mercy, whereas there was no redemption for the angels. For, as St. Peter says, "God spared not the angels that sinned" (2 Peter 2:4). This, it may be observed, is asserted universally, indicating that all who fell suffered punishment. For these and other reasons theologians very commonly teach that the doom and punishment followed in the next instant after the offense, and many go so far as to say there was no possibility of repentance. But here it will be well to bear in mind the distinction drawn between revealed doctrine, which comes with authority, and theological speculation, which to a great extent rests on reasoning. No one who is really familiar with the medieval masters, with their wide differences, their independence, their bold speculation, is likely to confuse the two together. But in these days there is some danger that we may lose sight of the distinction. It is true that, when it fulfils certain definite conditions, the agreement of theologians may serve as a sure testimony to revealed doctrine, and some of their thoughts and even their very words have been adopted by the Church in her definitions of dogma. But at the same time these masters of theological thought freely put forward many more or less plausible opinions, which come to us with reasoning rather than authority, and must needs stand or fall with the arguments by which they are supported. In this way we may find that many of them may agree in holding that the angels who sinned had no possibility of repentance. But it may be that it is a matter of argument, that each one holds it for a reason of his own and denies the validity of the arguments adduced by others. Some argue that from the nature of the angelic mind and will there was an intrinsic impossibility of repentance. But it may be observed that in any case the basis of this argument is not revealed teaching, but philosophical speculation. And it is scarcely surprising to find that its sufficiency is denied by equally orthodox doctors who hold that if the fallen angels could not repent this was either because the doom was instantaneous, and left no space for repentance, or because the needful grace was denied them. Others, again, possibly with better reason, are neither satisfied that sufficient grace and room for repentance were in fact refused, nor can they see any good ground for thinking this likely, or for regarding it as in harmony with all that we know of the Divine mercy and goodness. In the absence of any certain decision on this subject, we may be allowed to hold, with Suarez, that, however brief it may have been, there was enough delay to leave an opportunity for repentance, and that the necessary grace was not wholly withheld. 
If none actually repented, this may be explained in some measure by saying that their strength of will and fixity of purpose made repentance exceedingly difficult, though not impossible; that the time, though sufficient, was short; and that grace was not given in such abundance as to overcome these difficulties. The language of the prophets (Isaiah 14; Ezekiel 28) would seem to show that Lucifer held a very high rank in the heavenly hierarchy. And, accordingly, we find many theologians maintaining that before his fall he was the foremost of all the angels. Suarez is disposed to admit that he was the highest negatively, i.e. that no one was higher, though many may have been his equals. But here again we are in the region of pious opinions, for some divines maintain that, far from being first of all, he did not belong to one of the highest choirs--Seraphim, Cherubim, and Thrones--but to one of the lower orders of angels. In any case it appears that he holds a certain sovereignty over those who followed him in his rebellion. For we read of "the Devil and his angels" (Matthew 25:41), "the dragon and his angels" (Apocalypse 12:7), "Beelzebub, the prince of devils"--which, whatever be the interpretation of the name, clearly refers to Satan, as appears from the context: "And if Satan also be divided against himself, how shall his kingdom stand? Because you say that through Beelzebub I cast out devils" (Luke 11:15, 18), and "the prince of the Powers of this air" (Ephesians 2:2). At first sight it may seem strange that there should be any order or subordination amongst those rebellious spirits, and that those who rose against their Maker should obey one of their own fellows who had led them to destruction. And the analogy of similar movements among men might suggest that the rebellion would be likely to issue in anarchy and division. But it must be remembered that the fall of the angels did not impair their natural powers, that Lucifer still retained the gifts that enabled him to influence his brethren before their fall, and that their superior intelligence would show them that they could achieve more success and do more harm to others by unity and organization than by independence and division. Besides exercising this authority over those who were called "his angels", Satan has extended his empire over the minds of evil men. Thus, in the passage just cited from St. Paul, we read, "And you, when you were dead in your offenses and sins, wherein in times past you walked according to the course of this world, according to the prince of the power of this air, of the spirit that now worketh on the children of unbelief" (Ephesians 2:1-2). In the same way Christ in the Gospel calls him "the prince of this world". For when His enemies are coming to take Him, He looks beyond the instruments of evil to the master who moves them, and says: "I will not now speak many things to you, for the prince of this world cometh, and in me he hath not anything" (John 14:30). There is no need to discuss the view of some theologians who surmise that Lucifer was one of the angels who ruled and administered the heavenly bodies, and that this planet was committed to his care. For in any case the sovereignty with which these texts are primarily concerned is but the rude right of conquest and the power of evil influence. His sway began by his victory over our first parents, who, yielding to his suggestions, were brought under his bondage. All sinners who do his will become in so far his servants. For, as St. 
Gregory says, he is the head of all the wicked--"Surely the Devil is the head of all the wicked; and of this head all the wicked are members" (Certe iniquorum omnium caput diabolus est; et hujus capitis membra sunt omnes iniqui.--Hom. 16, in Evangel.). This headship over the wicked, as St. Thomas is careful to explain, differs widely from Christ's headship over the Church, inasmuch as Satan is only head by outward government and not also, as Christ is, by inward, life-giving influence (Summa III:8:7). With the growing wickedness of the world and the spreading of paganism and false religions and magic rites, the rule of Satan was extended and strengthened till his power was broken by the victory of Christ, who for this reason said, on the eve of His Passion: "Now is the judgment of the world: now shall the prince of this world be cast out" (John 12:31). By the victory of the Cross Christ delivered men from the bondage of Satan and at the same time paid the debt due to Divine justice by shedding His blood in atonement for our sins. In their endeavours to explain this great mystery, some old theologians, misled by the metaphor of a ransom for captives made in war, came to the strange conclusion that the price of Redemption was paid to Satan. But this error was effectively refuted by St. Anselm, who showed that Satan had no rights over his captives and that the great price wherewith we were bought was paid to God alone (cf. ATONEMENT). What has been said so far may suffice to show the part played by the Devil in human history, whether in regard to the individual soul or the whole race of Adam. It is indicated, indeed, in his name of Satan, the adversary, the opposer, the accuser, as well as by his headship of the wicked ranged under his banner in continual warfare with the kingdom of Christ. The two cities whose struggle is described by St. Augustine are already indicated in the words of the Apostle, "In this the children of God are manifest and the children of the devil: for the devil sinneth from the beginning. For this purpose the Son of God appeared, that He might destroy the works of the devil" (1 John 3:8). Whether or not the foreknowledge of the Incarnation was the occasion of his own fall, his subsequent course has certainly shown him the relentless enemy of mankind and the determined opponent of the Divine economy of redemption. And since he lured our first parents to their fall he has ceased not to tempt their children in order to involve them in his own ruin. There is no reason, indeed, for thinking that all sins and all temptations must needs come directly from the Devil or one of his ministers of evil. For it is certain that if, after the first fall of Adam, or at the time of the coming of Christ, Satan and his angels had been bound so fast that they might tempt no more, the world would still have been filled with evils. For men would have had enough of temptation in the weakness and waywardness of their hearts. But in that case the evil would clearly have been far less than it is now, for the activity of Satan does much more than merely add a further source of temptation to the weakness of the world and the flesh; it means a combination and an intelligent direction of all the elements of evil. The whole Church and each one of her children are beset by dangers, the fire of persecution, the enervation of ease, the dangers of wealth and of poverty, heresies and errors of opposite characters, rationalism and superstition, fanaticism and indifference. 
It would be bad enough if all these forces were acting apart and without any definite purpose, but the perils of the situation are incalculably increased when all may be organized and directed by vigilant and hostile intelligences. It is this that makes the Apostle, though he well knew the perils of the world and the weakness of the flesh, lay special stress on the greater dangers that come from the assaults of those mighty spirits of evil in whom he recognized our real and most formidable foes--"Put you on the armour of God, that you may be able to stand against the deceits of the devil. For our wrestling is not against flesh and blood; but against principalities and powers, against the rulers of the world of this darkness, against the spirits of wickedness in the high places . . . Stand therefore, having your loins girt about with truth, having on the breastplate of justice, and your feet shod with the preparation of the gospel of peace; in all things taking the shield of faith, wherewith you may be able to extinguish all the fiery darts of the most wicked one" (Ephesians 6:11, 16).
<urn:uuid:12c8fb57-3cf6-418a-88ec-a3c13e14e413>
CC-MAIN-2016-26
http://newadvent.org/cathen/04764a.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397749.89/warc/CC-MAIN-20160624154957-00125-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974009
6,470
2.765625
3
Seoul: South Korea has cleared plans to launch the country's first space rocket, the Korea Space Launch Vehicle-1 (KSLV-1), in July. The country's National Space Committee has approved the launch, tentatively scheduled for 30 July at the Naro Space Centre in Goheung, about 475 km south of Seoul, a government statement said. According to the state-run Korea Aerospace Research Institute, construction work on the rocket is complete and work will now begin to mate the first-stage main thruster to the second-stage space vehicle.
(Image credit: Yonhap News)
Russia built the first-stage main thruster and also helped design the launch pad. South Korea has built the rocket's second stage as well as the satellite it will carry into orbit. The launch vehicle weighs 140 tonnes, stands 33 metres tall and has a diameter of three metres. The launch of the KSLV-1 has been postponed twice. The first launch was postponed from late 2008 to late June this year after an earthquake in China's Sichuan province created problems in securing key parts. It was pushed back further to late July in order to give engineers more time for tests. The launch of the KSLV-1 will take place against the backdrop of North Korea's nuclear tests and its attempt to launch a satellite, which the United States and its allies claim was actually a disguised long-range missile test.
Recent Advances in Science (4/1/11) • AF News, by Pearl Gray

WASHINGTON –– At a press conference held this morning at the Sleep Research Center of the National Institutes of Health in Bethesda, Maryland, it was revealed that for over a decade neurologists have been working on a project that allows humans to live without the need to sleep.

While working on a program investigating the underlying causes of attention-deficit hyperactivity disorder (ADHD), Doctor Roland Jerrome noticed correlations in cognitive function between REM sleep patterns and sleep deprivation. He was then able to secure a grant for further research and formed the "Sleep Research Center" under the auspices of the National Institutes of Health.

A breakthrough came in 2001 when he implanted a microchip transducer behind the ear of a rhesus monkey. Probes from the implant were routed into the ocular nerve center in the cerebral cortex and occipital lobe of the brain, and the device was programmed to send artificial REM sleep signals. Astonishingly, the subject did not seem to require sleep and in fact was quite active during normal sleep periods.

In late 2003 an improved device, dubbed a "stemulator", was implanted successfully in a human volunteer; another followed in 2005, two more in 2006, two in 2009, and four in 2010. That makes a total of ten subjects who are currently "wearing" the device, and Dr. Jerrome said that applications will be taken for twenty more volunteers in 2011. None have slept at all since their implants were installed.

The stemulator implants are programmable wirelessly from a central transmitter, which sends a pre-programmed "Sleep Dose" individually to each participant when it detects prefrontal cortex and parietal lobe activation. Currently the only drawback is that the subject has to remain within range of the center's transmitter for at least twenty minutes in any 24-hour period. However, Dr. Jerrome is working on a physical receptacle, not unlike the USB connector on your computer, that can be implanted behind the ear to allow an individual to plug in a dose of eight hours of sleep whenever the need arises. If the individual desires, he or she can skip this step and would then go into a normal sleep cycle. It is presumed that most people would initially opt to live their lives with normal sleep cycles and only program in sleep when needed (a cross-country motor trip, for instance).

The advantages to emergency responders, long-haul truckers, airline pilots, medical workers, military personnel, etc. are obvious. Applications may also include the ability to produce a suspended-animation state that would allow astronauts to travel to deep space regions. Future generations will most likely adjust work and production schedules for most activities across the board. Dr. Jerrome also stated that it is not unforeseeable that in the future, DNA or RNA splicing may make it possible to achieve the same results without the need for a physical device. Dr. Jerrome predicts that, barring any unforeseen setbacks, the procedure could become available to the general public by mid-2015.
4.1 BACKGROUND DOCUMENT
4.2 SUMMARY DOCUMENT

THE APPROPRIATENESS, SIGNIFICANCE AND APPLICATION OF BIOTECHNOLOGY OPTIONS IN THE ANIMAL AGRICULTURE OF DEVELOPING COUNTRIES

4.1.1 The context: trends in animal agriculture in developing countries

Human population growth, increasing urbanization and rising incomes are fuelling a massive increase in demand for food of animal origin (milk, meat, eggs) in developing countries. Globally, livestock production is growing faster than any other sector and by 2020 the livestock sector is predicted to become the most important agricultural sector in terms of added value. In view of its substantial dynamics, this process has been referred to as the livestock revolution. Important features of this process are: (1) a rapid and massive increase in consumption of livestock products in developing countries with, e.g. per caput meat consumption in the developing world expected to double between 1993 and 2020; (2) a shift of livestock production from temperate and dry areas to warmer and more humid environments; (3) a change in livestock keeping from a family-support activity to market-oriented, increasingly integrated production; (4) increasing pressure on grazing resources; (5) more large-scale, industrial production units located close to urban centres; (6) decreasing importance of ruminant vis-à-vis monogastric livestock species; and (7) a rapid rise in the use of cereal-based feeds.

Most food of animal origin consumed in developing countries is currently supplied by small-scale, often mixed crop-livestock family farms or by pastoral livestock keepers. The ongoing major expansion of the demand for livestock products for food is expected to have significant technological and structural impacts on the livestock sector. The productivity of animal agriculture in developing countries will need to be substantially increased in order to satisfy increasing consumer demand, to more efficiently utilize scarce resources and to generate income for a growing agricultural population.

Agricultural biotechnology has long been a source of innovation in production and processing, profoundly impacting the sector. Rapid advances in molecular biology and further developments in reproductive biology provide powerful new tools for further innovation. Increasingly, advanced molecular biotechnology research and development activities are conducted by large corporations and are designed to meet the requirements of developed country markets rather than the conditions of small-scale farmers in tropical regions of the world. Whilst the developing countries accommodate an increasing majority of the world's people, farmers and animals, there is a risk that biotechnology research and development may by-pass their requirements.

This e-mail conference is intended to discuss biotechnologies that are either currently applied or are likely to come on stream for use in animal agriculture. The main theme of the conference is how relevant and appropriate these technologies are to meet the necessary enhancement of animal production and health in developing countries, and which factors determine their adoption or lack thereof. The question of why exactly this potential is so under-utilized in developing countries also needs to be addressed.
To what extent is the technology transfer, in adaptation and adoption, affected by, e.g.:

4.1.2 Biotechnologies for consideration

Reproductive biotechnologies

The main objective of biotechnologies in reproduction is to increase reproductive efficiency and rates of animal genetic improvement, thereby contributing to an increased output from the livestock sector. They also offer potential for greatly extending the multiplication and transport of genetic material and for conserving unique genetic resources in reasonably available forms for possible future use.

a) Artificial insemination (AI)

AI has already had a major impact on cattle, sheep, goat, pig, turkey and chicken improvement programmes of developed countries by accelerating breeding progress, primarily through increased intensity of selection of males and through diffusion of breeding progress, initially with fresh and later with frozen semen, offering rapid worldwide transport of male genetic material. Globally, more than 100 million AIs in cattle, 40 million in pigs, 3.3 million in sheep and 0.5 million in goats are performed annually. Only in very few developing countries is AI practised to a level that substantially impacts livestock production. What are the reasons that such a powerful technology has not been more widely adopted in developing countries? What is required to make the technology the same success as in developed countries?

b) Embryo transfer (ET)

ET in the mammalian species, enhanced by multiple ovulation and embryo transfer (MOET), allows acceleration of genetic progress through increased selection intensity of females, and freezing of embryos enables low-cost transport of genetic material across continents, as well as conservation of diploid genomes. MOET may also be used to produce crossbred replacement females whilst only maintaining a small number of the straightbreds. In 1998, worldwide 440 000 ETs were recorded in cattle, 17 000 in sheep, 1 200 in goats and 2 500 in horses. About 80 percent of the bulls used in AI in the developed world are derived from ET. Despite the potential benefits of ET, its application is largely limited to developed countries. What are the required technical and/or policy elements that will enable developing countries to make use of these technologies on a greater scale? ET is also one of the basic technologies for the application of more advanced reproductive biotechnologies such as ovum pick-up (OPU) and in vitro maturation and fertilization (IVM/IVF), sexing of embryos, cloning and transgenics.

c) OPU and IVM/IVF

OPU in mammals allows the repeated pick-up of immature ova directly from the ovary without any major impact on the donor female and the use of these ova in IVM/IVF programmes. Making much greater use of genetically valuable females at a very early age may substantially increase genetic progress. What potential uses of these technologies are feasible in developing countries? What are the required technical and/or policy elements that will enable developing countries to make practical use of these technologies?

Technologies for rapid and reliable sexing of embryos allow the generation of only the desired sex at specific points in a genetic improvement programme, markedly reducing the number of animals required and enabling increased genetic progress. Sexing of semen using flow-cytometric sorting has decisively progressed in recent years but still with limited sorting rates, even for IVF.
Sexed semen could markedly increase genetic improvement rates and have major implications for end-product commercial production. What is the scope for the use of these technologies in developing countries? IVM/IVF are a source of the large numbers of low-cost embryos required for biotechnologies such as cloning and transgenesis.

Three different types of clones are distinguished, as a result of: (1) limited splitting of an embryo (clones are genetically identical); (2) introducing an embryonic cell into an enucleated zona (clones may differ in their cytoplasmic inheritance); (3) introducing the nucleus of a somatic cell (milk, blood, dermal cells), after having reversed the DNA quiescence, into an enucleated zona (clones may differ in their cytoplasmic inheritance, and substantial knowledge of the phenotype of the parent providing the somatic cell probably already exists). Cloning will be used to multiply transgenic founder animals. Cloning technologies offer potential as research tools and in areas of very high potential return. The sampling of somatic tissue may assist collection and transfer of breed samples from remote areas for conservation purposes.

Molecular biotechnologies

Various molecular biotechnology applications are available in animal production and health, involving both on-farm production and off-farm product processing applications. In this e-mail conference on-farm use is considered; only technologies based on DNA procedures are suggested for consideration.

a) DNA technologies and animal health

Animal diseases are a major and increasingly important factor reducing livestock productivity in developing countries. Use of DNA biotechnology in animal health may contribute significantly to improved animal disease control, thereby stimulating both food production and livestock trade.

i) Diagnostics and epidemiology

Advanced biotechnology-based diagnostic tests make it possible to identify the disease-causing agent(s) and to monitor the impact of disease control programmes, to a degree of diagnostic precision (sub-species, strain, bio-type level) not previously possible. For example, DNA analysis has shown bovine viral diarrhoea virus (BVDV) to be composed of two genotypes, BVDV1 and BVDV2. Only the latter was found to produce haemorrhagic and acute fatal disease, and diagnostic tests to distinguish between the two are under development. Enzyme-immunoassay tests, which have the advantage of being relatively easily automated, have been developed for a wide range of parasites and microbes. The relevance and accessibility of these diagnostic tests to the livestock industry in developing countries are suggested for debate.

Molecular epidemiology is a fast-growing discipline that enables characterization of pathogen isolates (virus, bacteria, parasites) by nucleotide sequencing for the tracing of their origin. This is particularly important for epidemic diseases, where the possibility of pinpointing the source of infection can significantly contribute to improved disease control. Furthermore, the development of genetic probes, which allow the detection of pathogen DNA/RNA (rather than antibodies) in livestock, and the advances in accurate, pen-side diagnostic kits, considerably enhance animal health programmes. The conference should establish the status and potential uses of these technologies in developing countries.
ii) Vaccine development

Although vaccines developed using traditional approaches have had a major impact on the control of foot-and-mouth disease, rinderpest and other epidemic and endemic viral, mycoplasmal and bacterial diseases affecting livestock, recombinant vaccines offer various advantages over conventional vaccines. These are safety (no risk of reversion to virulent form, reduced potential for contamination with other pathogens, etc.) and specificity, better stability and importantly, such vaccines, coupled with the appropriate diagnostic test, allow the distinction between vaccinated and naturally infected animals. The latter characteristic is important in disease control programmes as it enables continued vaccination even when the shift from the control to the eradication stage is contemplated. Recombinant DNA technology also provides new opportunities for the development of vaccines against parasites (e.g. ticks, helminths, etc.) where conventional approaches have failed. What is the status and potential for the use of these technologies in developing countries?

b) DNA technologies in animal nutrition and growth

i) Nutritional physiology

Applications are being developed to improve the performance of animals through better nutrition. Enzymes can improve the nutrient availability from feedstuffs, lower feed costs and reduce output of waste into the environment. Prebiotics and probiotics or immune supplements can inhibit pathogenic gut micro-organisms or make the animal more resistant to them. Administration of recombinant somatotropin results in accelerated growth and leaner carcasses in meat animals and increased milk production in dairy cows. Immunomodulation can be used for enhancing the activity of endogenous anabolic hormones. In poultry nutrition, possibilities include the use of feed enzymes, probiotics, single cell protein and antibiotic feed additives. The production of tailor-made plant products for use as feeds and free from antinutritional factors through recombinant DNA technology is also a possibility. Plant biotechnology may produce forages with improved nutritional value or incorporate vaccines or antibodies into feeds that may protect the animals against diseases.

ii) Rumen biology

Rumen biotechnology has the potential to improve the nutritive value of ruminant feedstuffs that are fibrous, low in nitrogen and of limited nutritional value for other animal species. Biotechnology can alter the amount and availability of carbohydrate and protein in plants as well as the rate and extent of fermentation and metabolism of these nutrients in the rumen. The potential applications of biotechnology to rumen micro-organisms are many but technical difficulties limit its progress. Current limitations include: isolation and taxonomic identification of strains for inoculation and DNA recombination; isolation and characterization of candidate enzymes; level of production, localization and efficiency of secretion of the recombinant enzyme; stability of the introduced gene; fitness, survival and functional contribution of introduced new strains. Methods for improving rumen digestion in ruminants include the use of probiotics, supplementation with chelated minerals and the transfer of rumen micro-organisms from other species.

c) DNA technologies in animal genetics and breeding

Most animal characteristics of interest to food and agriculture are determined by the combined interaction of many genes with the environment.
The genetic improvement of locally adapted breeds will be important to realizing sustainable production systems. The DNA technologies provide a major opportunity to advance sustainable animal production systems of higher productivity, through their application in:

i) Characterizing genetic variation

The use of microsatellites in genetic distancing of breeds is gaining momentum. While most breeds are located in the developing world, this work is confined to developed countries. How is it possible to more effectively involve the developing country breeds? Are the current protocols adequate or what further standardization is required?

ii) Increasing the speed of genetic improvement of locally adapted breeds

There are many links in the chain to realizing rapid genetic progress in the desired goals, with the objective being to rapidly transmit from selected breeding parents to offspring those alleles which contribute to enhanced expression of the traits of interest. In developing countries, generation intervals are generally longer for all animal species of interest than in developed countries. How can DNA technologies be used to reliably realize intense and accurate selection and short generation intervals and to enable genetic improvement of these many locally adapted breeds to contribute to the required livestock development? There is rapid progress in the preparation of sufficiently dense microsatellite linkage maps to assist in the search for genetic traits of economic importance. Can these linkage maps be used to develop strategies of MAS and marker-assisted introgression to meet developing country breeding goals? How should this be approached? Given the limited financial resources, how might work for the developing country breeding programmes strategically utilize the rapidly accumulating functional genomic information of humans, mice and Drosophila?

Transgenic animals have one or more copies of one or various foreign gene(s) incorporated in their genome or, alternatively, selected genes have been knocked out. The fact that it is possible to introduce or to delete genes offers considerable opportunities in the areas of increasing productivity, product quality and perhaps even adaptive fitness. In initial experiments, genes responsible for growth have been inserted. The technology is currently very costly and inefficient, and applications in the near future seem to be limited to the production of transgenic animals as bio-reactors. What is the potential significance of these advanced technologies for developing countries and what are the technical, societal, political and ethical determinants of their application?

iii) Conserving genetic diversity

Global surveys indicate that some 30 percent of all remaining livestock breeds are at risk of loss, with little conservation effort currently invested. The majority of domestic animal breeds are in developing countries. Whilst animals cannot be re-formed from DNA alone, the conservation of genomic DNA may be useful. Under what circumstances should DNA genomic material be conserved and how should this be done by developing countries? What other information should be retained and what policy issues need to be taken into account?

4.2 SUMMARY DOCUMENT

In the Background Document to the conference the biotechnology options were classified into two main groups: reproductive and molecular.
Application of biotechnologies in three different animal sectors was also considered: a) health (disease diagnosis, epidemiology and vaccine development); b) nutrition and growth (nutritional physiology and rumen biology); and c) genetics and breeding (genetic improvement and characterization/conservation of genetic diversity). A total of 42 messages were posted during the conference, of which more than half were from developing countries. In contrast to the crop, forestry or fishery sector conferences (Chapters 2, 3 and 5, respectively), where a single biotechnology (genetic modification) dominated discussions, participants in this conference dealt with a wide range of biotechnologies and transgenic animals were not a major topic of discussion. Regarding the different animal sectors referred to previously, all three were covered at different stages throughout the conference, although there was greatest discussion concerning the use of biotechnologies for the third sector, genetics and breeding, and least on the second sector, nutrition and growth. The majority of messages came from participants with extensive experience of development projects and animal agriculture in developing countries. A large number of different topics were covered, ranging from those that were biotechnology-specific, such as participants' experiences or comments regarding individual biotechnologies in their country, to those that dealt with broader issues, such as the impacts of biotechnology on livestock biodiversity in developing countries.

In summarizing the discussions, participants' comments are grouped into a number of main topics within two sections. The first section attempts to summarize what participants said about the appropriateness, significance and application of specific biotechnologies. The second section is not biotechnology-specific and deals with their comments on a range of broader issues. Sections 4.2.1 and 4.2.2 of this document thus attempt to summarize the main elements of the discussions. Specific references to messages posted, giving the participant's surname and the date posted (day/month of the year 2000), are included. The messages can be viewed at www.fao.org/biotech/logs/c3logs.htm. Section 4.2.3 gives the name and country of the people that sent referenced messages.

4.2.1 Discussions related to the appropriateness, significance and application of individual biotechnologies in developing countries

The Background Document indicated that AI has already had a major impact on genetic improvement programmes in developed countries and questioned why it had not been more widely adopted in developing countries. Most comments received (which came mainly from participants in developing countries) dealt with the factors explaining the relatively moderate uptake and whether natural service is preferable to AI. Steane (20/6) argued that low conception rates and dependence on donor funding, which eventually is exhausted (a point also highlighted by Tibary, 4/7), were two major factors behind its low use in developing countries. Steane, in a later message (30/6), elaborated on the first factor, suggesting that low conception rates were due to a) poor heat (oestrus) detection; b) poor communication and infrastructure; and c) the fact that inseminators do not carry out sufficient numbers of inseminations to achieve high success rates.
Chandrasiri (24/7), on this subject, stressed the need for farmer education and suggested that significant improvement could be achieved if farmers were educated on proper heat detection and timing of AI. Traoré (6/7) concluded that, for developing countries, at the present status, it is out of the question to consider AI as an alternative reproductive method to natural service (as is often the case in developed countries today). He maintained that there were still many problems with AI, due to a) relatively high costs, where components such as liquid nitrogen continued to increase in price; b) poor heat detection, often making heat synchronization necessary; and c) its use when unlinked to good health care and animal husbandry. This last point was also emphasized by Ramsey (17/8). Na-Chiangmai (4/8) supported the conclusion of Traoré (6/7), saying that AI at the small farmer level is not practical, especially for swamp buffalo and that natural mating probably gives better results under village conditions. He noted that correct timing of AI can be difficult for small farmers when the buffaloes are kept far from the village, due to problems with heat detection and the short ovulation period. Chandrasiri (24/7) said that although AI could be considered as an alternative to natural service, it was not popular among small-scale dairy farmers in Sri Lanka, a country where 85 percent of cows are naturally bred. Wiwie (11/7) maintained however, that in her country, Indonesia, AI was indeed an alternative to natural service for cattle because heat detection was easy, as farmers had only few cattle and these were kept in pens, and because bulls were both expensive to maintain and to transport within the country, which consists of many islands. Tibary (7/8) argued that although natural service gave good fertility results, the cost and the accident/health risks involved in keeping live males meant that AI should be recommended. He maintained that efficient programmes involving ovulation synchronization and AI, without requiring heat detection, could be developed. ET is a more advanced reproductive biotechnology and is less widely used than AI in both developed and developing countries. Its potential impact and current status in developing countries were considered in the conference. The potential merits of ET for dissemination of crossbred genetic material, for conservation of endangered local breeds and for genetic improvement in developing countries were mentioned by Traoré (6/7). He also, however, argued that the technology had, since the beginning, been too focused on dissemination of purebred genetic material for commercial production. Steane (20/6) felt that its use in the developing world would be more effective for dissemination of appropriate genetic material (such as crossbred dairy females) than for genetic improvement. However, he highlighted (30/6) that the current conception rates were low, for the same reasons as he gave earlier for AI and that they would need to be improved. Tibary (7/8) suggested that if the parties involved are convinced that technologies such as ET and AI are useful, then technical problems can be solved if there is adequate funding of local research. As an example, he cited the large progress made in ET and AI in camels in the Middle East. Ramsey (17/8) emphasized that both ET and AI can be very useful, provided that other basic inputs (good husbandry, nutrition and management) are in place. 
Wiwie (5/7) reported her experiences with a dairy cattle ET project in Indonesia and suggested that such projects could be successful if begun slowly with local pilot projects and then expanded on a step-by-step basis. Chandrasiri (24/7) reported that in Sri Lanka, ET was still only at the experimental stage and that it would take a few more years for it to be established commercially.

IVM/IVF and sexing

There was little discussion about these techniques. Chandrasiri (24/7), however, raised the issue of using IVM/IVF in countries like Sri Lanka, where slaughter of female cattle and buffaloes is prohibited and slaughterhouse ovaries are thus unavailable. He suggested that collaborations with countries allowing their slaughter would solve the problem. Steane (20/6) and Chandrasiri (24/7) both mentioned that in some circumstances it would be advantageous to have sexed genetic material available for dissemination purposes.

Blair (29/6 and 30/6) suggested that adult cloning could be beneficial in centralized breeding schemes for efficiently disseminating the genetic gains achieved to other levels of the animal population. Cronjé (29/6) proposed that the government could stimulate farmer support (including financial) for centralized breeding schemes by offering free cloning of genetically superior animals and sale of clones back to the farmers at subsidized rates. Gibson (21/7), on the other hand, recommended that one should stick closely to foreseeable realities. He said there was no evidence that the use of cloning for livestock dissemination can be economically viable in developed countries and that we should exercise extreme caution in predicting future applications of cloning technologies.

Genetic modification

Compared to other conferences of the Forum, discussion of this biotechnology was less emotive and extensive. Muir (10/7) felt that transgenic technology offered tremendous potential for developed and developing countries and said that he strongly supported it. He emphasized, however, that potential negative impacts, as well as the true costs of the technology, should be evaluated. Steane (20/6) was concerned that, due to financial restraints, all the tests required to evaluate the potential adverse effects of GM animals might not be carried out. Martens (3/7) argued that before introducing GM animals, their performance should be tested under local feeding and management conditions. Gibson (21/7) said that it was appropriate that there should be a debate on testing GM livestock but that, in his opinion, appropriate testing is not a substantive issue or limitation. He suggested that genetic modification had as much potential for animals as for crops and that production of GM livestock was already economically feasible (although not cheap) due to advances in transgenic technologies. He was, however, concerned that resources would not be directed towards producing GM animals of benefit to developing countries, such as those with improved disease or parasite resistance.

Use of molecular markers for MAS

There were some differences of opinion concerning the potential benefits of MAS for developing countries. Steane (20/6) pointed out that some research results suggest that MAS could reduce the overall total genetic progress. Muir (10/7) also urged caution and referred to some of his computer modelling results, which showed that, in certain conditions, MAS had very little positive impact on genetic improvement.
He thus questioned whether it would be appropriate for developing countries to use the large financial resources that MAS requires for this purpose. Jeggo (20/7), on the other hand, was more optimistic, arguing that the use of microsatellite marker information to analyse production traits may offer ways to maximize use of the favourable genetic characters of indigenous livestock and to accelerate their genetic improvement. He suggested that support should be given so that developing countries could be provided with this technology.

Comparisons of different biotechnologies

In addition to discussions on individual biotechnologies, some participants also tried to compare and contrast them. Gibson (21/7), in the context of their application to livestock agriculture in the developing world, tried to place them in four classes according to the levels of infrastructure they require. In order of increasing complexity, there were:

Some participants compared the two principal reproductive biotechnologies - AI and ET. Steane (20/6 and 30/6) maintained that timing practicalities favoured the use of ET over AI at the local level, as the latter requires efficient heat detection followed by quick insemination of the female, whereas with ET there is less urgency. The ET technology is nevertheless more specialized and Wiwie (11/7) noted that, unlike AI, ET was only carried out by a few experts in her country, Indonesia. Traoré (6/7) maintained that, except in some high-producing zones, AI was more competitive than ET, as farmers were then dealing with crossbred genetic material that was more adapted than the purebred genetic material that tended to be transferred by ET. He thus concluded that, contrary to AI, ET will still belong for a long time to the field of research.

4.2.2 Discussions on broader issues

Biotechnology and the dynamics of livestock production in developing countries

Wiwie (28/6) and Ali (29/6) provided a reminder of the current situation for many farmers in developing countries. In Indonesia, farmers usually have one to three cattle and a few head of sheep and goats, and the animals are kept as financial security for the future (Wiwie, 28/6). Ali (29/6) noted that due to poverty, consumption of livestock products is viewed as more of a luxury than a necessity for many people in developing countries. The people's lack of purchasing power means then that farmers keep livestock as a social insurance rather than for profit (Ali, 29/6). Woodford (4/7) argued that it is inevitable that agriculture in the less developed countries will undergo enormous change in relation to socio-economics and farming systems, where biotechnology was likely to play an important role, and that the same transition from rural-based to urban-based societies, which happened gradually over the last 400 years in developed countries, was occurring now in developing countries, but at a much faster rate. Ali (29/6) noted that in many countries, good prices are only available in urban areas where economic growth in other sectors provides a spill-over effect to the livestock sector and that only progressive farmers close to urban areas, where the products can be sold at reasonable prices, may use biotechnologies. Traoré (6/7) supported this by saying that AI could be justified in some breeding systems with crossbreeding of local with exotic breeds, where there was a socio-economic environment to justify the crossbreeding operation, such as in peri-urban milk production systems.
He said that this had been the experience in Mali. Regarding industrialization of animal production in peri-urban areas, Steane (20/6) urged that more attention should be paid to its impact on the environment and suggested that biotechnology might be used to address this problem.

Why biotechnology is used relatively little in developing countries

Several messages addressed this important question. Many explanations were provided and the factors were often related.

a) Lack of infrastructure

Sedrati (14/8) recognized the large potential that new biotechnologies in animal agriculture have for breeders and consumers, but maintained that these technologies need an environment that we don't have in developing countries, in terms of educational and basic infrastructural (water, roads, sanitation, etc.) standards. His conclusion was that the role of developed countries should be to raise the levels of social development in developing countries so that it would then be possible for them to develop and use biotechnologies. Gibson (21/7), in a similar vein, wrote that the main difficulty in applying new technologies in developing compared to developed countries was that the vast majority of new technologies build upon and depend upon a highly developed physical, social and educational infrastructure, which makes transplantation to other settings very difficult. To integrate the need for large infrastructural requirements with the wishes of developing countries for locally-based solutions, he argued that there was an even greater need now for large international centres to carry out biotechnology research and development. Hanotte (11/8) supported this and referred to the successful example of the collaboration shown between individual African countries in a project to genetically characterize indigenous cattle, where the molecular data from each country was analysed in a single international research centre. The importance of cooperation between research centres in both developing and developed countries was also emphasized by Traoré (16/8).

b) Low levels of information/knowledge about science and agricultural biotechnology

The challenges in this area are considerable since, as pointed out by Sedrati (14/8), the levels of illiteracy can be quite high in rural areas of developing countries while only few farmers have technical training. Worku (29/6) nevertheless emphasized the importance of reducing the information and knowledge gap that exists between developing and developed countries regarding agricultural biotechnology (he called this the "biotech divide"). He proposed that several approaches need to be taken to bridge the divide, including enhancement of science education (and integrating applications/principles of biotechnology into the curriculum) at the school and college level, while also targeting extension workers, opinion leaders, small farmers and consumers.

c) Low capacity of developing countries to use biotechnology

Jeggo (20/7) pointed out that there is an increasing gap between the ability of developing and developed countries to utilize biotechnology and that it was critical to bridge this north-south technology gap. Sedrati (14/8) pointed out that the level of investment in scientific and technical research in developing countries was very low and that, even when people in developing countries are trained in high-level technologies, they tend to take jobs in developed countries because of the higher salaries and better working conditions.
Regarding capacity-building in developing countries, Traoré (6/7) was convinced that researchers in developing countries had a lot to gain from cooperating with research institutes in developed countries to get access to useful biotechnologies and adapt them to the needs of developing countries. Jeggo (20/7) suggested that some technologies offered significant advantages to developing countries that did not hold for developed countries, but that they would not be realized unless support for the introduction and use of these technologies was provided.

d) Insufficient economic incentives for farmers to use biotechnology

As pointed out by Worku (29/6), poor profit margins in farming are one of the factors contributing to low rates of adoption of biotechnologies in developing countries. As the general population is poor and cannot typically afford to buy meat, milk or eggs, farmers do not tend to keep livestock for profit and so have no incentive to use biotechnologies (Ali, 29/6). The exception is when farmers produce close to urban areas, where they can expect good prices and their investments in the use of biotechnologies may be rewarded (Ali, 29/6).

e) Reliance on external funding for biotechnology projects

The dependence of many biotechnology projects on external funding was also considered to be a factor behind the low uptake of biotechnologies, as often the projects collapsed once the funding finished. In discussing AI and ET, Tibary (4/7) pointed out that in his experience, the use of these technologies is usually erratic and depends on funds provided by development projects, and as soon as these funds are gone the activity ceases. This was also the reaction of Steane (20/6) regarding AI, saying that it was often free and poorly structured, with the result that when donor funding ended there were insufficient financial resources to continue. Wiwie (5/7) agreed that this was a problem, but suggested that if the projects were carried out slowly on a step-by-step basis rather than as one-off, big projects, they might be successful. By beginning with a small pilot project, as she had done in Indonesia with ET, there was firstly a good probability of getting successful results and, secondly, seeing these good results, farmers were then more likely to support (and pay for) expansion of the project. Steane (30/6) emphasized that proper study and planning of the use of biotechnologies was first needed and that, unless planning was done and the extension services properly informed, no sustainable projects would be achieved. Gibson (21/7) expressed similar sentiments, writing that through experience we have learned that development that is based locally and driven locally will have the greatest chance of being sustainable.

Relationship between biotechnology and other components of animal agriculture

Several participants emphasized the fact that biotechnology, and genetic improvement in particular, cannot be considered in isolation from the other components of animal agriculture. Tibary (4/7) bemoaned the fact that in many cases the use of biotechnology has been looked at as a magic solution to the growing demand for animal products. He argued that, since genetic improvement can only be expressed if other aspects of livestock management are improved, any implementation of reproductive biotechnology (his major area of interest) should be part of a larger programme to improve health and forage production.
Donkin (21/8) echoed these sentiments, saying that although the temptation is to view new technologies as being able to provide a quick-fix solution, this was seldom true as the problems were usually more complex than they initially appear. He also argued that no genetic improvement should be introduced without making provision for other improvements in aspects such as nutrition, disease control, or simply in the organization and control of breeding. Ramsey (17/8) expressed similar views, emphasizing that biotechnology needs to be used responsibly and that important issues, such as general animal husbandry, should not be overlooked. Referring specifically to AI, he noted that very often the fact that stressed and underfed animals do not respond well to synchronization and AI is simply overlooked. Traoré (6/7) was of the same opinion, saying the application of AI as a lucrative activity remains questionable if it is not linked to some other activities, such as health care and advice on animal husbandry practice. Given that new biotechnologies are often very expensive and require sophisticated back-up services, facilities and technical staff, Donkin (21/8) suggested it was appropriate to ask whether the resources could be used more effectively for developing countries. Muir (10/7) made a similar point, writing that high tech does not necessarily equate with good tech; good tech is that which is cost effective and appropriate for the situation. Referring specifically to MAS, he argued that the economic resources might be better utilized in raising the management skills of farmers or in improving the extension services.

Biotechnology and vaccine development or disease diagnosis

According to Steane (30/6), the potential of biotechnology is probably greater than in most other areas of animal production when directed towards new vaccines or the use of disease resistance genes. Halos (13/7) noted that one of the major problems facing the livestock production services was the availability of effective vaccines far from major urban areas. As those currently available need refrigeration, she argued that DNA vaccines may help to solve this problem. Jeggo (20/7) was slightly more cautious, saying that although biotechnology offered solutions for animal vaccines, there is a long way to go. He argued that DNA vaccines, recombinant vaccines and genetically modified marker vaccines are obvious paths to follow, but that there were problems due to a) the intense debate on GMOs currently taking place in Europe; and b) the limited research funds available for work on developing country diseases. Regarding diagnosis of animal diseases, Jeggo (20/7) argued that diagnostic systems based on the polymerase chain reaction had an advantage due to their specificity and sensitivity and that technical developments were making them more attractive. He noted, however, that their use in developing countries was still limited due to problems of assay control and contamination.

Biotechnology and nutrition

Cronjé (5/7) suggested that blood metabolite concentrations could be useful measures of nutrient status for free-ranging animals in developing areas. Makkar (17/7) provided some detailed comments on the potential role of biotechnology in animal nutrition. He argued that the manipulation of plants is likely to improve the utilization of feed resources by livestock with less investment of effort and money compared to the manipulation of rumen microbes.
To illustrate how genetic manipulation of plants might improve feed quality, he gave seven examples where it held great promise, such as increasing sulphur amino acids in leguminous forage or increasing the digestibility of existing nutrients, especially fibre, for tropical forage. He questioned, however, whether reduction or elimination of plant secondary metabolites (anti-nutritional factors) by plant breeding and molecular technologies might be advisable in developing countries, as the plants are faced with various environmental challenges and the metabolites have a protective role - a viewpoint that was supported by Dundon (18/7). Makkar (17/7) suggested that problems caused by the metabolites could be mitigated in some cases by transferring rumen micro-organisms from resistant to susceptible animals.

Traits for genetic improvement in developing countries

A range of biotechnologies can be used to genetically improve livestock in developing countries. There was some discussion in the conference about which traits should be targeted for genetic improvement. Steane (20/6) questioned whether it was sensible in dairy cattle breeding to follow the developed world and to increase body size and maintenance requirements and to reduce fertility, as had happened with the Holstein-Friesian population. Cronjé (20/6) maintained that selection for single traits, as practised in developed countries, increased the animals' adaptation to higher levels of nutrition and that it was important to genetically select the animals so that they could reproduce and carry out other essential functions when nutrient supply was low. The importance and potential of using biotechnology to genetically improve disease resistance was emphasized by Steane (30/6), Worku (1/7) and by Gibson (21/7), who said, regarding genetic modifications of livestock of potential benefit to the developing world, that he would focus on efforts to modify resistance to disease and parasites.

Genotype by environment (G x E) interactions

The topic of G x E interactions, where the genetic superiority/ranking of animals is dependent on the environment they are in, was discussed in two different contexts: i) the import of genetic material selected in developed countries to developing countries; and ii) genetic improvement programmes in developing countries.

a) Import of exotic breeds

Both Woodford (4/7) and Ramsey (17/8) noted that experts from developed countries often advocated use of foreign breeds for developing countries, a strategy that was often unsuccessful as the animals were not genetically adapted to the new environment. Cronjé (20/6) emphasized the animal nutrition aspect of this problem, arguing that caution should be expressed about using genetic material in developing countries that has been selected under high nutritional levels in developed countries. Cronjé (5/7), however, also insisted that, given the increasing demand for food for the expanding human population, the existence of G x E interactions should not be used to delay the application of biotechnology until all genotypes had been tested in all environments.

b) Genetic improvement programmes in developing countries

To overcome the difficulties associated with on-farm recording and testing in developing countries, Blair (29/6) suggested that genetic improvement programmes should be based in centralized breeding stations, from which the superior genetic material could then be disseminated.
Cronjé (29/6), however, argued that this approach was associated with problems because in such stations i) the management/nutrition levels were typically far superior to normal farm conditions; and ii) genetic selection was usually based on a single trait recorded in the station environment. Because of G x E interactions, he concluded that this could result in animals being selected that were genetically superior in the station but inferior in the farmer's environment. He suggested a compromise, where farmers would cooperate in a group breeding scheme, each contributing their own animals to be recorded under normal nutritional/management conditions in a centralized farm or grazing area. The concept was supported by Muir (1/7), who insisted that when G x E interactions are strong then the way to deal with the problem is to select the animals in the normal environment of production. Blair (3/7) suggested that the solution was to change the ranking process in the centralized station, which would require either assessing new traits on the station animals, recording their relatives under commercial conditions outside the station or modifying the station environment to reflect commercial conditions (as suggested by Muir, 1/7).

Impacts of biotechnology on livestock biodiversity in developing countries

There was much discussion throughout the conference about the potential impacts (negative and positive) that biotechnology has (or may have) on animal genetic resources in developing countries. The theme is important as much of the potentially important livestock biodiversity is found in developing rather than developed countries (Steane, 20/6; Hanotte, 11/8), and it was argued that it could be a potential goldmine for developing countries if properly studied and evaluated (Hanotte, 11/8).

a) Negative impacts of biotechnology on livestock biodiversity

Discussions about the negative impacts were, to a large degree, a consequence of the many experiences that developing countries have already had of the use of reproductive biotechnologies (especially AI) to introduce foreign or exotic genetic material from developed countries, either for crossing with the local breeds or as purebreds. The primary negative impacts mentioned were that the existing adapted genetic material might be diluted or lost (Donkin, 21/8), seen for example in the Philippines (Halos, 13/7), and that the imported genetic material might not be adapted to the new environment and would require improvements in nutrition/housing, etc., since if we change the genetics then the chances are that we must also change the environment (Woodford, 4/7). Ramsey (17/8) expressed similar sentiments, saying that using AI, adapted indigenous animals have been crossed with breeds that are often totally unsuited to the environments in question - and we are left with a legacy of animals that require additional inputs to perform - and an eroded indigenous gene pool. Cronjé (20/6) also emphasized that once genes are introduced into an indigenous gene pool, it is hard to remove them if they are later discovered to be inappropriate. Traoré (16/8) suggested that a problem for breed conservation is that foreign breeds often have a strong appeal to farmers because they, and their crosses, are believed to be of high performance. Note that crossbreeding, per se, using AI, was not seen as being a negative factor.
Steane (20/6) lamented the fact that very few developing countries offered AI of local breeds to allow their sires to be used in crossbreeding systems, but said that this was changing slowly. Ramsey (17/8) argued that in certain conditions (where there was a need for a specific product, such as milk, and where the management inputs were sufficiently high), there was a niche for the development of a composite breed using locally adapted animals as the dam line. The sire line could be non-local but should be chosen carefully, keeping the developing country environment in mind. He provided two examples of the development of composite breeds in South Africa.

b) Positive impacts of biotechnology on livestock biodiversity

Many participants emphasized the potential positive contribution that biotechnology could make to the conservation and characterization of livestock biodiversity (e.g. Jeggo, 20/7; Ramsey, 17/8). Ramsey (17/8) maintained that the preservation of endangered breeds was a vitally important niche for biotechnology. Here, he argued that reproductive biotechnologies, such as AI and ET (also promoted in this context by Traoré, 6/7), and DNA technologies, to verify parentage and breed purity, could be very useful. The importance of using molecular markers for studying livestock biodiversity was underlined by Hanotte (11/8). He noted that they allow us to identify the ancestral origins and to investigate the history of domestication of modern livestock species. Muir (21/8) argued that, having identified the ancestral wild populations from which the modern breeds evolved, biotechnology could play an important role in identifying alleles of production traits present in ancestral populations but absent in modern breeds. Hanotte (11/8) stressed the importance of international cooperation when using molecular markers to genetically characterize local breeds and gave an example of successful collaboration involving an African cattle project. This point was strongly supported by Tiesnamurti (16/8) and Li (17/8), who, together with Steane (25/8), gave some advice on how such international projects could be successfully operated. Li (17/8) also argued that, apart from molecular markers, basic data on production characters, population size and breed histories were also important for genetic characterization. Traoré (16/8) maintained that although characterization was an important step, it was not enough to ensure conservation of the local genetic resources, as this depended on a true appreciation of their characteristics. Ramsey (17/8) suggested that, wherever possible, conservation should start with on-farm initiatives.

The role of animal scientists in the biotechnology debate

Harper (18/7) urged scientists to be more active in public discussions about biotechnology and in providing information to groups looking to learn about biotechnology. He predicted that this information-provider role would grow for scientists in the coming decades. He also observed that it was important for scientists to communicate the role that the different biotechnologies are already playing in the production system, although without over-emphasizing the importance of transgenic solutions, as this may lead to loss of public support. Donkin (21/8) noted that scientists tend to be enthusiastic about technological advances and keen to find ways to apply them.
He cautioned, however, that this enthusiasm needs to be directed appropriately and that in development projects, the people to be helped should also be involved. These elements of caution were also expressed by Steane (25/8) who suggested that many scientists in developing countries seemed to emphasize obtaining the technology rather than looking at the possible adaptations, which could be infrastructural, needed to make them serve local needs. For him, this emphasized the need for increased dialogue between the various interested parties - planners, scientists, extensionists and above all, farmers.

4.2.3 Name and country of participants with referenced messages

Ali, Kassim Omar. Norway
Blair, Hugh. New Zealand
Chandrasiri, A.D.N. Sri Lanka
Cronjé, Pierre. South Africa
Donkin, Ned. South Africa
Dundon, Stanislaus. United States
Gibson, John. Kenya
Halos, Saturnina. The Philippines
Hanotte, Olivier. Kenya
Harper, Gregory. Australia
Jeggo, Martyn. Austria
Li, Kui. China
Makkar, Harinder. Austria
Martens, Mary-Howell. United States
Muir, Bill. United States
Na-Chiangmai, Ancharlie. Thailand
Ramsey, Keith. South Africa
Sedrati, M'Hammed. Morocco
Steane, David. Thailand
Tibary, Ahmed. United States
Tiesnamurti, Bess. Indonesia
Traoré, Adama. Mali
Wiwie, Caroline. Indonesia
Woodford, Keith. Australia
Worku, Mulumebet. United States
Experts take to the field to see if iron mine can pass muster

Town of Anderson — Sara Viernum scooped up a small frog as it jumped across a muddy Iron County logging road in a hilltop forest that could one day become the site of a massive open pit mine. "They're everywhere," she said, smiling as she held a spring peeper in the palm of her hand. The tiny frog survives in winter in a near-frozen state until it emerges in spring.

Viernum, a specialist in amphibians and reptiles, is part of a small army of experts employed by the mine's developers, Florida-based Gogebic Taconite. The group is conducting wide-ranging field studies this spring in a section of Iron and Ashland counties where the mine would be built. They're looking at the flora and the fauna, sizing up streams and wetlands and investigating rock deposits. The information will be part of a state environmental impact statement that will give regulators and the public a sense of what exists in these remote woods — and what could be lost if the first iron ore mine in Wisconsin in more than 30 years is built.

Since the project was unveiled in November 2010, much of the debate has centered on conjecture and finger-pointing. But the fieldwork will supply research to help the state Department of Natural Resources decide whether the project complies with state environmental laws. Other agencies also will be involved in decision-making, including the U.S. Army Corps of Engineers. "What they're doing is collecting environmental data — what's there now," said Larry Lynch, a hydrogeologist with the DNR who is overseeing the project for the agency. "Our job will be to verify it," he said. DNR staff are visiting the site almost every week and conducting their own research, including water sampling.

Former DNR Secretary George Meyer is executive director of the Wisconsin Wildlife Federation. He estimates that Gogebic will have to spend $10 million to $20 million on an acceptable mining application — figures the company did not dispute. But in the early going, Gogebic has shown a lack of diligence, according to Meyer. He noted that when the company needed approval to remove large rock samples, the DNR ordered Gogebic to provide more details. Meyer said the company has so far "not shown a lot of interest" in satisfying state regulatory requirements.

Experts believe a final regulatory decision is still several years away, especially if environmental work can't be completed in 2014. The likelihood of lawsuits also will delay the process. What's certain, however, is that Gogebic's project has whipped up the biggest environmental fight in Wisconsin in decades. First, there was a bitter legislative fight that eased iron-mining regulations. Also, opponents have already raised legal objections, even though the company hasn't yet filed for a mining permit.

Protesters freely roamed the project site last summer, prompting at least one angry clash that resulted in the arrest of a woman on charges of criminal damage to property. Her case has not yet gone to trial. Vandalism has been a constant, the company says, and Gogebic representatives pointed to a bridge over a stream where someone last year tried to remove the bolts. A camp near the mine site devoted to American Indian culture and a hub of mining opposition was ordered by the Iron County Board to leave county-owned land. The camp recently moved across the road to private land. One couple spent the entire winter at the old site.
Temperatures dipped to 38 degrees below zero and snow was piled "waist deep to midgut," mining opponent Larry Ackley said last week as he sipped coffee and plucked quills from a porcupine that had been hit by a car. He's using the quills to adorn American Indian clothing.

Gogebic says it also has experienced vandalism this year, even though lawmakers in the GOP-controlled Legislature approved rules restricting access to the site. Keyholes on locks on gates to the property have been glued shut, according to company spokesman Bob Seitz. Someone also placed locks on Gogebic's locks. The company hired a security firm last summer that outfitted crews with assault weapons, but Seitz declined to detail the latest protection measures. "We match up the security to what's needed," he said.

Gogebic is owned by billionaire Chris Cline, a coal mine operator. The company wants to construct two pits, up to 1,000 feet deep, that would run for 4 miles. A third area would be engineered to pile waste rock hundreds of feet high. A fourth site, a factory on Highway 77, would break up 300-pound chunks of rock and process the iron ore into taconite pellets.

Economic geologist Ralph Marsden studied the area for the U.S. Bureau of Mines. In 1978, he described the Penokee deposit as "the largest in Wisconsin and one of the most important undeveloped iron ore reserves in the United States," according to a DNR report released in December 2013.

Iron County Board Chairman Joe Pinardi said the mine would provide an economic boost to the region. In March, Hurley lost its only supermarket. "Anyone who wants groceries has to go to Michigan," said Pinardi, who also is mayor of Hurley, a community of 1,500. Tim Myers, Gogebic's mining engineer, said officials with the company believe the project will serve as a replacement for aging mines in Minnesota and the Upper Peninsula.

On a sunny day last week, Myers drove a four-wheel-drive vehicle over several miles of muddy, potholed roads, guiding it past fast-running streams and the last remnants of snow to the top of the Penokees. Near the summit is where Gogebic would begin digging to remove bands of iron ore that run 850 feet deep. He stopped at a pile of rock that had been blasted in the early 1960s by U.S. Steel, a company that also had investigated the site. Myers ran his finger over black swirls of magnetite — a source of iron ore that looked much like core samples the company's geologists are studying in town. "That's what we'll be grinding for," Myers said.

Concerns with deposits

There are two big concerns with the rock deposits. The first is that the waste rock could contain sulfide minerals. The presence of the minerals could produce sulfuric acid and pollute water — a process known as acid mine drainage. The second is that the rock also could hold a fibrous material, asbestiform grunerite, whose airborne fibers are a health hazard. Opponents say there is evidence of both, but Gogebic says its preliminary research from its core samples hasn't turned up such problems. The company has drilled 22 holes as deep as 1,400 feet. "Our opponents can just pick up a rock and say something," Seitz said. "They jump to conclusions. What's going up here is based on science."

The DNR's Lynch said the agency has examined core samples supplied by the company and has seen evidence of sulfide minerals, but the agency isn't done with its analysis and hasn't yet drawn any conclusions on the presence of asbestiform grunerite. The rock samples are just one of the factors being scrutinized.
Last week, contractors for Gogebic with global positioning systems inventoried streams, so that they will be able to map every waterway on the property. Others were conducting bird migration studies and venturing into the woods at night to listen for owls. Viernum and her partner Bill Poole were searching for places to conduct wildlife surveys when they found spring peepers and red-backed salamanders. They may find protected species, such as wood turtles, a state endangered species believed to be in the area. Any development must try to avoid contact with them. "If they are there, a lot of what I do is try to figure out how to protect them and still allow whatever project there is to proceed," said Poole, a wildlife ecologist who like Viernum works for Stantec Consulting Services Inc. In a few weeks, crews will start mapping the locations of wetlands. The work on both wetlands and streams will be critical because, by law, Gogebic will be able to fill in some waters. How those streams and wetlands are mapped could have a big influence on future DNR decision-making, Meyer said. "That's the whole ball game," he said. But Seitz said the company is barred from touching trout streams and many other waters. Gogebic will have to supply regulators with reports that will have to pass close scrutiny. "We have a lot of eyes on us," he said.
Air is being pumped into a spherical balloon so that its volume increases at a rate of 80cm^3/s . How fast is the surface area of the balloon increasing when its radius is 11cm ? ... i found the derivative of Surface Area to be 8*pi*r*(dv/dt)=(ds/dt) i keep getting 7040*pi as my answer.. but its not correct. please help mee!
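For reference, here is a sketch of the standard related-rates setup (not a reply from the original thread). The 7040*pi answer comes from substituting dV/dt where dr/dt belongs; the chain rule has to pass through dr/dt:

V = \tfrac{4}{3}\pi r^{3} \;\Rightarrow\; \frac{dV}{dt} = 4\pi r^{2}\,\frac{dr}{dt} \;\Rightarrow\; \frac{dr}{dt} = \frac{1}{4\pi r^{2}}\,\frac{dV}{dt}

S = 4\pi r^{2} \;\Rightarrow\; \frac{dS}{dt} = 8\pi r\,\frac{dr}{dt} = \frac{2}{r}\,\frac{dV}{dt}

\left.\frac{dS}{dt}\right|_{r=11} = \frac{2\times 80}{11} = \frac{160}{11} \approx 14.5\ \text{cm}^{2}/\text{s}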
Minding the Achievement Gap One Classroom at a Time
by Jane E. Pollock, Sharon M. Ford, and Margaret M. Black (ASCD, 2012)

It took the authors a page to explain the title of this book to me. As I read their explanation for the title, it made me think about a joke that isn't understood: if you have to explain it, it might not be a very good joke. I think the title is an unfortunate choice because it doesn't make a connection to previous books in this unofficial series, going back to Marzano's Classroom Instruction that Works (2001). I would have picked up this book faster if the title were "Implementing Classroom Instruction that Works with ESL and EC Learners." But regardless of the title, it's a book well worth reading.

The gap between research and practice

The premise of this book is spot-on: although we as teachers adopt new techniques, we tend not to deliberately change our practice to reflect new research-based, high-yield strategies. I loved Classroom Instruction that Works. My personal copy is a dog-eared, highlighted mess, with sticky notes poking out from multiple pages. Yet I haven't implemented all nine research-based strategies with fidelity on a regular basis in my own classroom. As a refresher, the nine strategies are: identify similarities and differences, summarize and take notes, recognize effort and provide recognition, provide homework and practice, use nonlinguistic representations, use cooperative learning, set objectives and provide feedback, generate and test hypotheses, and use questions, cues and advance organizers.

As an ESL teacher, five years after Classroom Instruction that Works was published, I eagerly purchased Classroom Instruction that Works with English Language Learners. Great book. But, even referring numerous times to both books, I still didn't have a reliable strategy for incorporating the nine high-yield research-based strategies on a regular basis in my classroom. But now I do.

GANAG Lesson Planning

The GANAG Lesson Planning Template, which incorporates the nine strategies, was first brought to educators' attention in Jane Pollock's 2007 book, Improving Student Learning One Teacher at a Time. I missed reading that book. Fortunately, that lesson plan format is not only repeated in this current book for classroom teachers, but the format is redesigned (GANAG Plus) to better meet the needs of specialists who work with English Language Learners (ELLs) and Exceptional Education (EC) students.

An update of Madeline C. Hunter's Mastery Teaching schema published in the 1970s, the GANAG schema guides teachers to intentionally incorporate the nine strategies into daily classroom learning activities. — from the Introduction

At this point, you might wonder about the continuing relevance of nine strategies that are based on research prior to 2001. The authors note that while the strategies still hold up in practice, they have "updated the discussion here to incorporate more recently published research on the power of teacher clarity and teacher-student relationships." (p. 56) The GANAG Lesson Planning Template was created not only as a way for teachers to apply research-based, high-yield strategies within our classrooms, but as learning strategies that can be taught to our students. GANAG is an acronym for its five steps.
G = Share the goal/standard(s) and objective(s)
A = Access prior student knowledge
N = Acquire and process NEW INFORMATION
A = Apply knowledge in a new situation and create original ideas
G = Goal Review or summarize

(You'll see some further explanation at this school district webpage.)

Alignment with SIOP

As an ESL teacher, I especially liked seeing the GANAG schema aligned with Sheltered Instruction Observation Protocol (SIOP). I've been playing around with various SIOP lesson plan formats for several years. This year, I'll use GANAG to not only incorporate the nine research-based practices but to also blend in features of SIOP. The language goal can easily be placed next to the content goal. (I do wish the various templates and samples shown in the book were available online.)

So what other changes will I make this year based on this book? Instead of journals in my classroom, this year I'll be using high-yield interactive notebooks (IN). While not a new concept, the notebooks described in this book (IN9s) are set up in a specific format to maximize students' involvement in their learning and to deliberately use the nine research-based strategies. The authors clearly describe setting up classroom IN9s as well as recommend and provide samples of various grade level IN9s. They also recommend elementary teachers consider using "foldables" to place in their notebooks. For anyone not familiar with Dinah Zike's work with foldables, visit her site at http://www.dinah.com.

As a teacher who co-teaches with content area teachers, I found the authors' discussions and examples of various co-teaching models to be useful not only for EC specialists, but for ESL teachers and all co-teaching specialists and their content area partners.

The achievement gap

Early in the book the authors review the history of the achievement gap in the United States and conclude that closing the achievement gap is not a recent effort but an ongoing challenge. They correctly acknowledge that teachers are the most important factor in student success, and educators' voices are heard regularly in this book between chapters. The book concludes that teachers need to deliberately use research-based teaching practices to improve student performance. In this book, they have provided us with a lesson plan format for intentional and successful teaching and learning, a way to increase student engagement with interactive notebooks (IN9s), and chapters that address academically at-risk students. One classroom at a time is the only way as a nation we can really address the reality of our ongoing achievement gap. Reading this book is a good first step for both "minding" and closing the achievement gap in your own classroom.

Julie Dermody, NBCT, is currently an ESL teacher in the Chapel Hill-Carrboro (NC) City Schools. She has also served as an elementary, middle and high school teacher, a reading specialist and a teacher of gifted students. Her article about ESL students, "Going for the Growth," appeared in the September 2012 Educational Leadership (online edition).
Receiving anesthesia for surgery may have an effect on young brains that puts kids at a long-term cognitive disadvantage, researchers found. General anesthesia before age 3 years was linked to deficits at age 10 in understanding and using language, as well as poorer reasoning skills compared with unexposed children, according to Caleb Ing, MD, of Columbia University in New York City, and colleagues. Even a single exposure early in life raised the risk of disability in receptive language 2.41-fold and in cognition 1.73-fold, the group reported in the September issue of Pediatrics. Most of the general anesthesia exposures were for minor surgical procedures rather than for chronic disease. Anesthesia should only be used to sedate kids when necessary and using the lowest possible doses, an FDA panel recommended last year. Although the panel recognized the growing body of evidence for a long-term neurocognitive effect of anesthesia in children whose brains are still developing, it said there wasn't enough evidence to make a stronger recommendation to parents. Panelists did suggest that putting off procedures that could be delayed until children are a little older, trying swaddling and sugar water instead for minor procedures, and other alternative strategies should be considered. The period of peak synapse formation through age 3 years in children appears to be a "window of vulnerability," according to animal studies. Those studies have pointed to neurodegenerative changes from apoptosis across types of anesthesia, from drugs like nitrous oxide and ketamine to the benzodiazepines, propofol, and volatile anesthetics. The researchers examined the Western Australian Pregnancy Cohort (Raine) Study, originally designed to evaluate the long-term effects of prenatal ultrasound. Among the 2,868 children born from 1989 to 1992 in the birth cohort, 321 received anesthesia by age 3 for diagnostic testing or surgical procedures, most minor. Placement of ear tubes topped the list at 25%. A battery of neurocognitive tests at age 10 showed significantly poorer scores in tests of receptive (P=0.006), expressive (P=0.004), and total language (P=0.003) on the Clinical Evaluation of Language Fundamentals test for anesthesia-exposed children. Anesthesia exposed children also showed poorer cognition, with lower scores on Raven's Colored Progressive Matrices test of abstract reasoning (P=0.002). These differences had a clinical impact, as the prevalence of disability in language and reasoning were more common in the children exposed by age 3. The adjusted risk ratio risk for any exposure versus none was: - 1.87-fold for receptive language (95% CI 1.20 to 2.93), - 1.72-fold for expressive language (95% CI 1.12 to 2.64) - 2.11-fold for total language (95% CI 1.42 to 3.14) - 1.69-fold for abstract reasoning (95% CI 1.13 to 2.53) The directly-administered tests likely were more specific and sensitive than the diagnostic codes, academic performance, standardized testing, school and medical records, and parent and teacher surveys used in prior studies, the researchers noted. The study didn't identify a dose-response difference between single and multiple exposures, though one might be found with a larger cohort, the researchers acknowledged. While a prior observational study had linked early anesthesia exposure to attention deficit hyperactivity disorder, the birth cohort showed no difference in behavior or motor function. 
The behavioral analysis was based on parent report, though, and may not have been sensitive enough, the group noted. The most prevalent volatile anesthetic during the study period was halothane, which is no longer on the market. But its neurotoxic effects in animal studies have been similar to other volatile anesthetics. Other limitations were demographic differences, with more boys, Caucasians, and higher-income households in the exposed group. Also, the study excluded mothers who did not speak English, which may render the results less relevant to children at a lower socioeconomic status. The study is funded by grants from the Raine Medical Research Foundation, the National Health and Medical Research Council of Australia, the Telethon Institute for Child Health Research, the University of Western Australia (UWA), the UWA Faculty of Medicine, Dentistry and Health Sciences, the Women and Infants Research Foundation, and Curtin University. The researchers reported no conflicts of interest. - Reviewed by Robert Jasmer, MD Associate Clinical Professor of Medicine, University of California, San Francisco and Dorothy Caputo, MA, BSN, RN, Nurse Planner
Our bodies need to stay hydrated at all times, but this is especially important in summer, when the sun is hot and temperatures rise day by day. The effects of dehydration can be quite problematic; symptoms include dry mouth, fatigue and nausea. But you can easily prevent them with these useful foods, as they help to prevent dehydration and give the body quick energy. So enjoy these foods and stay cool and refreshed this summer:

Watermelon: Watermelon is the first cooling fruit that comes to mind when we think of summer's heat. As the name suggests, this fruit is loaded with water, and it also has a good concentration of electrolytes, which balances the loss of fluids from the body. It is very low in calories, so people trying to lose weight can enjoy it too.

Coconut Water: Coconut water is another option that will help keep you hydrated and refreshed. It is free of added sugars and preservatives, with a good balance of electrolytes. Coconut water contains lauric acid, which the body converts to monolaurin; this helps fight infections and is very good for liver and digestive health.

Cucumber: Cucumbers also contain a very good amount of water. Besides this, cucumbers have caffeic acid and vitamin C, which help to soothe skin irritations. This is why a slice of cucumber feels so satisfying on a hot, sunny day.

Green Salads: Green salads are a mix of lettuce, spinach, cucumber and legumes with some dressing. They can be a refreshing delight, as greens also contain a large amount of water and are very low in calories. They also keep you feeling full for longer.

Buttermilk: Buttermilk, or lassi, can also be a very beneficial drink in hot weather. Salted lassi prevents dehydration and also helps in the digestion of food.

Zucchini: Zucchini is another very beneficial vegetable for avoiding dehydration, as it is approximately 95% water. It also has fiber, potassium, folate and vitamins A and C, which help to combat the bad effects of heat.

Pineapple: Pineapples are juicy fruits that help to prevent dehydration. They help the body to remove toxins, and they contain a compound called bromelain, which helps to reduce inflammation.

Grilled vegetables: Have grilled seasonal vegetables after coming back from a hot, sunny day. They will soothe you and give you refreshing energy. Vegetables are also rich in antioxidants, so they help to remove toxins from the body and prevent inflammation.

Spring Onions: Add spring onions to your salad, as they also help to keep you cool.

Water: Don't forget plain water, which is the most important tool for avoiding dehydration. Drink about 10-12 glasses of water every day, and if you are going out, carry a water bottle with you.

• Avoid too much spicy and fried food.
• Avoid cigarette smoking and alcohol, as they may also lead to dehydration.
• Avoid excess coffee and tea.
- The NPR blog post says wheel-well stowaways are likely to freeze to death or die from lack of oxygen (a condition called hypoxia). But the flight from California to Hawaii took place entirely in a warm, temperate climate zone. So, why would freezing temperatures or lack of oxygen be a concern in this case? Would it make a difference if the wheel-well stowaway was taking a flight from Greenland to Alaska? Read our very short encyclopedic entry on “altitude” for some help. - Both the freezing temperature and lack of oxygen have to do with air pressure, which decreases as a plane’s altitude increases. In higher altitudes, there are simply fewer gas molecules (including oxygen) bumping into each other. This creates a much colder atmosphere, and one where your lungs have to work really hard to get enough oxygen to breathe. - Yes, it would make a huge difference if wheel-well stowaways took flights close to the poles. (Take a look at this list of wheel-well stowaways. All the flights are in temperate or even tropical latitudes.) This is because air pressure decreases near the poles. At high altitudes, air pressure decreases even further. Even though the plane may be flying at the same altitude near the pole as near the Equator, the decreased air pressure near the poles means it would be even colder, with even thinner air (less oxygen) than a flight nearer the tropics. A wheel-well stowaway going from Greenland to Alaska wouldn’t stand a chance. - Besides ill-advised wheel-well stowaways, who else might face the dangers of altitude sickness? - Mountaineers, mostly. According to our encyclopedic entry, “[i]t can take days and even weeks for a body to adjust to high altitude and low air pressure.” Even after adjusting to high altitudes (usually defined as about 2,400 meters (8,000 feet)), mountaineers bring warm climbing gear and canisters of oxygen to compensate for the freezing temperatures and thin air of the so-called “death zone.” - Why aren’t people inside airplanes threatened with altitude sickness? - The plane’s cabin is pressurized. This means that oxygen-rich air is pumped and circulated through the plane’s cabin. - NPR has done a great job covering this story. Their latest update says the wheel-well stowaway was “just a runaway kid with a bad idea.” The FBI (Federal Bureau of Investigation) usually does not get involved in runaway-kid cases. Why are they investigating this one? - Two reasons. First, the kid crossed state lines—and half of the Pacific Ocean. The FBI, a federal agency, is often called in when a case exceeds the jurisdiction of a single state (here, California or Hawaii). - Second, this kid was able to “hop a fence” and jump into a plane at a major international airport without being detected. (The rest of us have to buy a ticket, take off our shoes, stand in lines, present two forms of ID, have our luggage examined . . . ) This is a major, major security breach, and the TSA (Transportation Security Administration) at Mineta San Jose International Airport has a lot of explaining to do.
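To put rough numbers on the pressure-and-temperature point made in the answers above, here is a short sketch using the textbook International Standard Atmosphere lapse-rate model (the cruise altitude and physical constants below are standard illustrative assumptions, not figures from the incident):

# Rough standard-atmosphere estimate of conditions in an unpressurized
# wheel well at a typical cruise altitude; real flights and weather vary.
def troposphere_conditions(altitude_m):
    # ISA lapse-rate model, valid up to roughly 11 km
    T0, P0 = 288.15, 101325.0           # sea-level standard temperature (K) and pressure (Pa)
    L = 0.0065                          # temperature lapse rate, K per metre
    g, M, R = 9.80665, 0.0289644, 8.3144598
    T = T0 - L * altitude_m             # temperature falls linearly with height
    P = P0 * (T / T0) ** (g * M / (R * L))
    return T, P

cruise_m = 10700                        # about 35,000 ft
T, P = troposphere_conditions(cruise_m)
print(f"{cruise_m} m: {T - 273.15:.0f} C, {P / 101325:.0%} of sea-level pressure")
# Prints roughly -55 C and about a quarter of sea-level pressure, which is
# why hypothermia and hypoxia are the expected outcomes for a stowaway.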
If a teacher asks for your submission, she might want you to obey her every command like a drone or, on the other hand, she may want just you to turn something in for her approval. The noun submission is the act of giving in to a stronger power. If someone winds up in jail, the guards there will demand the prisoner's submission. Alternatively, this word can refer to something that you submit to someone else. If you write an article and send it into a magazine to see if they will publish it, your article would be called a submission. Good luck! n the act of submitting; usually surrendering power to another the act of obeying; dutiful or submissive behavior with respect to another person abject submission; the emotional equivalent of prostrating your body the act of obeying meanly (especially obeying in a humble manner or for unworthy reasons) - Type of: action taken by a group of people n something (manuscripts or architectural plans and models or estimates or works of art of all genres etc.) submitted for the judgment of others (as in a competition) the entering of a legal document into the public record
The argument for "no bottleneck" leading up to modern humans in Africa was recently made by Sjödin et al. Weaver examines critically three of the arguments of the "recent Out of Africa" model: - the shallow coalescence of mtDNA Eve - the emergence of anatomically modern traits in Africa c. 200ky ago - the idea that these modern traits are quite different from the ones that preceded them -the idea that a new species was born at the time. With respect to mtDNA, Weaver makes the argument that the fact that mtDNA Eve coalesces to about 200ky ago (actually 177ky according to the latest estimate) does not mean that there was some bottleneck at the time associated with the rise of modern humans; the same coalescence age could occur under quite different scenaria: perhaps there were bottlenecks all the time, and nothing special happened at that time; or, there were no bottlenecks at all: if the effective population size was constant, then mtDNA would still coalesce at some time, depending on what that effective size was. This of course raises the question: if we know what the human effective population size is, could we estimate when mtDNA ought to coalesce. First, coalescence theory does not provide a hard time for the coalescence, but rather an expected value; coalescence at 177ky is compatible with lots of different effective sizes, and different effective sizes are compatible with a coalescence at 177ky. (More importantly, and contrary to popular belief and recent commentary: we have absolutely no idea what the human ancestral effective population size is. Figures like 10,000 people are sometimes quoted around, but we must remember where they come from: there is a triangle of doom between the human-chimp divergence date, the effective population size, and the mutation rate, and you need to know two of these to infer the other. Actually, we are beginning to get a hold of the mutation rate -thanks to the ability to sequence full genomes- but we have absolutely no clue when human-chimp divergence actually happened, at least not within a few million years.) There is a different blow that can be directed to the idea of using mtDNA to infer a "rise of modern humans": if we look at Denisova and Neandertal hominins, they are autosomally about equidistant to us, but Denisova carries an mtDNA lineage that is about half a million years more ancient. If that doesn't stop us from repeating the "recent mtDNA Eve = recent African origin" meme, I don't know what is. Getting back to the Weaver article, the author argues that the appearance of cranial modernity is expected if we only make an assumption about the narrow-sense heritability of human traits; that is, working backwards from the present, and taking into account drift and mutation, we expect that "modern traits" will start appearing in the anthropological record at the time of the supposed "rise of modern humans". This is simply a consequence of the fact that anthropologists label traits as modern or archaic with respect to extant human variation; so, there is nothing special about the fact that such "modern" traits appear on the record, since they are expected to do so by the mere fact that the people who lived 100-200ky ago are ever-more related to us. The final aspect of the Weaver article has to do with the supposed punctuation in the appearance of modern humans in Africa. He makes a good point here, that the African record is so fragmentary that we hardly know what people were like before the supposed rise of modern humans. 
It's tough to argue about the emergence of a new species when you have no good comparative base. I would also add that even after the supposed emergence, "modern" and "ancient" traits co-exist, with no clear overall pattern discernible in the data. If modern humans suddenly arose in Africa and replaced pre-existing African hominins, the evidence for this sudden emergence and replacement is lacking. Overall, I would say that Weaver makes a good argument against the idea of us being something special in the grand scheme of things. Perhaps we're not mutant world conquerors after all, but rather the latest phase in a long and drawn-out evolution of Homo. It's a less dramatic and more mellow theory about our origins, but one that may very well be true. Journal of Human Evolution Volume 63, Issue 1, July 2012, Pages 121–126 Did a discrete event 200,000–100,000 years ago produce modern humans? Timothy D. Weaver Scenarios for modern human origins are often predicated on the assumption that modern humans arose 200,000–100,000 years ago in Africa. This assumption implies that something ‘special’ happened at this point in time in Africa, such as the speciation that produced Homo sapiens, a severe bottleneck in human population size, or a combination of the two. The common thread is that after the divergence of the modern human and Neandertal evolutionary lineages ∼400,000 years ago, there was another discrete event near in time to the Middle–Late Pleistocene boundary that produced modern humans. Alternatively, modern human origins could have been a lengthy process that lasted from the divergence of the modern human and Neandertal evolutionary lineages to the expansion of modern humans out of Africa, and nothing out of the ordinary happened 200,000–100,000 years ago in Africa. Three pieces of biological (fossil morphology and DNA sequences) evidence are typically cited in support of discrete event models. First, living human mitochondrial DNA haplotypes coalesce ∼200,000 years ago. Second, fossil specimens that are usually classified as ‘anatomically modern’ seem to appear shortly afterward in the African fossil record. Third, it is argued that these anatomically modern fossils are morphologically quite different from the fossils that preceded them. Here I use theory from population and quantitative genetics to show that lengthy process models are also consistent with current biological evidence. That this class of models is a viable option has implications for how modern human origins is conceptualized.
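As a rough, hedged illustration of the coalescence point made earlier in the post (the effective sizes and generation times below are arbitrary round numbers for illustration, not estimates from Weaver or anyone else): under a constant-size neutral coalescent, the expected mtDNA coalescence time for a large sample is about 2 x (effective number of females) generations, so a ~177 ky figure on its own pins down very little.

# Expected mtDNA TMRCA for a large sample under the standard neutral
# coalescent with constant population size: ~2 * Nef generations, where
# Nef is the effective number of females (mtDNA is haploid and maternal).
def expected_mtdna_tmrca_years(nef_females, generation_years):
    return 2 * nef_females * generation_years

for nef in (2500, 3500, 5000, 10000):
    for gen_years in (20, 25, 29):
        tmrca_ky = expected_mtdna_tmrca_years(nef, gen_years) / 1000
        print(f"Nef = {nef:>6} females, generation = {gen_years} yr "
              f"-> expected coalescence ~ {tmrca_ky:.0f} ky")
# A constant Nef of ~3,500 females with a 25-year generation already gives
# ~175 ky with no bottleneck at all, and the variance around this expectation
# is large, so many other demographic histories are equally compatible.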
The Manchester Ship Canal

By the latter half of the 19th century, Manchester had become a major industrial city. It was a fast growing city; the population of the Manchester region had risen from an estimated 322,000 in 1801 to over 1 million by 1850, and would rise to over 2 million people by 1901. Not only was the Lancashire cotton industry (in which many of these people worked) expanding, but the city had developed a leading technology in the engineering and manufacture of machinery for textile production. Its growing population also needed feeding and servicing. Yet, because it was a landlocked city, all goods had to be transported by road or rail to Liverpool docks in order to be exported abroad, and incoming goods were delivered by the same route.

Daniel Adamson and Lord Egerton of Tatton

Liverpool tolls and harbour dues were prohibitive and significantly reduced profitability. Mancunian businessmen had long objected to Liverpool's commercial monopolies, and to the stranglehold which that city's port authorities held on Manchester trade. Oldham merchants were quoted as saying that it was cheaper to send their goods the 100 miles by road to the port of Hull on the east coast than to transport them the 35 miles to Liverpool and have to pay exorbitant harbour dues and levies.

Manchester Ship Canal Aerial Photo Courtesy of www.webbaviation.co.uk © 2008

Although some goods were still transported by narrowboats on the Bridgewater Canal, the railways had largely taken over this function by the 1850s. In the 1890s, however, Manchester was to come up with a radical new proposal to connect it directly to the sea by a new man-made canal - the Manchester Ship Canal. After the depression of the late 1870s and mid-1880s, its construction would be seen as a sign of the city's long overdue economic revival. The depression had been a result of Union blockades on cotton supplies from the southern states during the American Civil War, and the resultant cotton-starvation experienced by the cotton mills of the Manchester region.

The first moves to make the idea a reality were made when Daniel Adamson, a leading local industrialist, called a meeting to form the Manchester Ship Canal Company on 1st January 1882 at his home at "The Towers" in Didsbury. As a result, a committee was formed to obtain parliamentary permission for the project. It was to take three attempts over the next few years to secure the passage of the Manchester Ship Canal Bill through parliament, and this was followed by a great celebration in the city, with a huge procession to Belle Vue and an ox-roasting at Eccles. A great deal of civic pride rested on the success of the project.

The company needed to raise £5 million before work could begin, and this was raised by floating a share issue. Construction began in November 1887, when the first turf was ceremonially cut at Eastham by the new chairman, Lord Egerton of Tatton. Earlier that year, Adamson had resigned as chairman, and was to die shortly afterwards. The project contractor was Thomas Walker, an experienced and celebrated civil engineer who had already been involved in the building of the Severn Tunnel for the Great Western Railway Company. He estimated it would take 4½ years to complete at a cost of £5½ million. His estimates were to be far from realistic, however, and the canal would eventually cost over £15 million by the time of its opening in 1894. Navvies' wages alone accounted for a substantial share of this cost.
Navvies' wages alone accounted for Walker's death before its completion also caused a severe loss of confidence in the company and the withdrawal of many financial backers, so that the Manchester City Council had to step in with another �5million, and take over 51% of the Ship Canal Company shares. The construction of the canal was fraught with many other problems - particularly with the boggy ground and the bad weather, which halted work on numerous occasions through flooding. But construction methods were to be state of the art, with new machines and devices employed alongside the army of "navvies" (an abbreviation of "navigators" - the men and boys who dug the canal). Equipment included over 100 steam excavators, 7 earth dredgers, 6,300 railway wagons, 173 locomotives, 124 steam cranes and a workforce of 16,000 men and boys. Several major engineering feats were accomplished to deal with the several railway lines which crossed the canal - many bridges had to be reconstructed or raised to allow headroom for large ships to pass beneath. At Salford, the Barton Swing Aqueduct was built to allow the Bridgewater Canal to pass over it, as was the Swing Road Bridge at Salford Quays. Barton Road Bridge and Trafford Road Bridge were closest to Manchester, and were originally swung by means of hydraulic power. In recent times three new bridges have been built across the Ship Canal : Barton High Level which carries the M60 Motorway, the Thelwall Viaduct, carrying the M6 and the Widnes-Runcorn Link Bridge. East of Warrington, the canal joins the River Irwell, and the two become one waterway from there to the Mersey estuary. Dock facilities needed to be constructed at various points along the canal, and some of these are still operational, though the ones nearer to Manchester have long since ceased to be used. The lower reaches of the canal are still quite busy today, particularly around the huge Queen Elizabeth II Dock at Eastham, which handles ships delivering at its large oil tanker terminal. From the outset, it had been decided to dig the canal deep enough to allow passage of large ocean liners, on the same principle as the Suez Canal, and that its depth could be increased when necessary by dredging. It was said that up unto the Second World War there were only six ships in the world too big to use the Ship Canal. Six locks were installed to raise ships some 60 feet 6 inches over its 35.5 miles - at Port Sunlight-Eastham, Latchford, Irlam, Barton and Mode Wheel at Salford. Port Sunlight Lock connected the Ship Canal to the tidal channel of the River Mersey, and acted as a control stop lock, so that vessels moored above the lock could remain afloat even when the tide was out. It was also the home of the Lever Brothers factory where soap and detergent products were manufactured. The factory exported some 1600 tons of Sunlight Soap a week through the Ship Canal. Before the construction of the Ship Canal, Eastham had been a popular day trip venue for the people of Liverpool, and it was known for its beautiful gardens - the canal in some ways made it more accessible, particularly after the construction of the pier in 1874 and the running of regular services from Liverpool. In other ways, the canal sounded the death knell of Eastham as a tourist resort, as it became the focus of large commercial seagoing traffic, and its character of "the Richmond of the Mersey" was lost. 
Besides export goods, Manchester had become a major centre for the distribution of imported food and raw materials - hence its Corn Exchange and its Coal Exchange. While the Ship Canal had been primarily intended as a means of reviving the ailing cotton trade, it actually promoted Manchester engineering, and became a major attraction to food and raw material importers. Most of Britain's grain and corn imports came via the Manchester Ship Canal. By 1914 the Canal had secured 5% of all UK imports, and over 4% of domestic exports. The city had also built many large warehouses to store these goods in transit, and a great deal of employment and commerce had been created in the storage trade. The 20th century has seen the Manchester Ship Canal fare well and worse. One major factor in its success was Trafford Park Industrial Estate. This large park through which the canal passes directly, is so strategically placed on the south-western approaches to the Cities of Salford and Manchester, that it has seen many companies locating, or relocating their industries in Trafford, due in no small part to the canal, its direct accessibility to the sea, and thereafter to the whole world. Apart from the predictable textile companies, Trafford Park saw the arrival of food production, vehicle manufacture, electronics and brewing companies. The British Westinghouse Electric Company bought up a huge tract of the park to establish the largest engineering works in the UK; the Co-operative Wholesale Society (the CWS) located its distribution warehouses in the estate; a Ford Motor Car factory was situated there from 1910 and for many years before relocating to Dagenham; Kelloggs (of Corn Flakes fame) still have a major processing plant in Trafford; Hovis Bread and Brook Bond Tea is still produced there. Engineering works included the manufacture of the Manchester Bomber, and later over 1000 Lancaster Bombers in World War Two, as well as the Rolls Royce Merlin engines which powered fighter planes like the Spitfire. The Ship Canal and the Manchester Docks had become vital components in the success of Manchester commerce and industry. When the canal reaches Manchester (or more properly Salford) it enters a web of quays and jetties. The old Salford-Manchester Docks disappeared in the early 1970s, in the wake of improved road, air and rail freight systems, and over the past few decades, as Manchester has ceased to be the strong centre of manufacturing that it used to be, the canal has fallen largely into disuse. The docks were redeveloped as Salford Quays, a large, prestigious inner city regenerative project of quality waterside housing, enterprise zone, entertainment and recreational complexes, and light industry. Ironically, it had been the Ship Canal which had made possible the boom in exports of Manchester-made textile machinery, and it was this in itself which was to be responsible for its own decline. The importation of cheaper foreign textiles in the 1950s and 60s, often made on machines which originated in Manchester, was to render local production uneconomic, and as the mills began to shut down around Lancashire, the need for the canal declined with it. In the 1960s, the gradual opening of more fast through-route motorways made road transportation easier and cost effective, and the development of the World Air Freight Terminal at Manchester Airport was to be a tough competitor. The last nail in the coffin of the Manchester Ship Canal was the introduction of containerised freight transportation. 
New container systems were introduced in British coastal ports and docks, and Manchester lost out in this modernisation - it did not have the space to store large numbers of containerised goods at the waterside, which the system demanded - the Ship Canal had simply outlived its usefulness. It remains today as a tribute to Victorian Manchester's engineering ingenuity and entrepreneurial spirit, and the farsightedness which inspired its native industrialists. Sources: See Bibliography - Books about Manchester
Why should your child go to preschool?

"A preschool is a fun place to be."

There are many benefits for children and families to be gained from preschool. A preschool experience:
- empowers children to think, explore, question, wonder and learn how to learn;
- supports children's intellectual and language development and communication skills;
- offers children consistent, experienced, qualified educators to support their learning and development;
- improves children's ability to think, problem solve and reason as they enter school, enabling them to learn more in the early grades;
- develops social and emotional maturity and the ability to relate well to peers and adults, solve conflicts, play co-operatively and be assertive;
- provides a solid platform for life-long learning and education;
- guides children to gain independence, self esteem and self confidence, empowering their learning;
- helps children have a greater understanding of the world around them;
- encourages children to be independent learners and to take an active role in their learning;
- supports children to play co-operatively together and learn from their experiences;
- offers children and families the opportunity to belong, to be part of a community, and connect with others.

Preschools are also:
- fun places to be, with varied, exciting experiences planned to meet the needs and interests of children;
- a context to build social connections between families and the community;
- a source of professional information that supports parenting;
- places that give lots of opportunities for parent education;
- a gradual transition from home to more formal educational environments;
- a warm, caring, friendly environment, led by qualified, experienced and dedicated staff;
- places that nurture creativity, thinking and social skills, enabling children to develop a love of learning;
- at the centre of the community;
- in almost all cases, not for profit, with all income from government funding, fees and fundraising re-invested for the benefit of children.
Anterior Uveitis can be a serious problem when it comes to Bulldogs. There are several factors that have been linked to the cause of this disease. These are:
• Infections that have been caused by pathogenic microorganisms. These include bacteria, viruses and fungi. A dog that spends a lot of time outdoors is at a higher risk of being exposed to these pathogens.
• Immune-mediated conditions that are breed specific
• Eye trauma or injury
• When protein escapes from the eye lens into the eye fluid. This is often linked to cataracts.
• Older dogs that have tumors or cancers

But Anterior Uveitis might only be a symptom – it could be alerting you to a serious underlying condition that your Bulldog is suffering from. The symptoms of Anterior Uveitis include pain, tearing, redness, and squinting in bright light. The Bulldog's pupil may look small or uneven in shape, the iris may be unevenly colored, and there could be a cloudy appearance in the front part of the eye.

To eliminate other causes and to come up with the best treatment for your Bulldog, several steps and tests should be run. You'll need to give your vet a complete medical history so he or she can conduct a comprehensive physical examination. Your vet will use an ophthalmoscope to look at different portions of the eye. As well, tonometry may be performed, which can assess the pressure in the eye. Other tools that can be used to diagnose this disease include blood tests, ultrasound, x-rays, and examination of fluid samples.

If your Bulldog is diagnosed with Anterior Uveitis, the treatment involves both symptomatic and specific therapy. In some cases, surgical intervention may be required. In the case of symptomatic treatment, simple solutions such as topical medications (eye drops) or ophthalmic ointments are all that are needed. To alleviate pain and inflammation, your vet may prescribe oral medications. To target the exact cause of Anterior Uveitis, treatments such as antibiotics, antifungals, or other medications will help to reduce the immune-mediated inflammation. In some serious cases, surgical intervention may be required. This is usually needed when a tumor or secondary complications are present and medications have not worked. Surgical removal of the eye may be necessary if the problem is serious enough.
Meningitis Vaccine
Meningococcal Disease

What is meningococcal disease?
Meningococcal disease is a severe bacterial infection of the bloodstream or meninges (a thin lining covering the brain and spinal cord) caused by the meningococcus germ.

Who gets meningococcal disease?
Anyone can get meningococcal disease, but it is more common in infants and children. For some adolescents, such as first-year college students living in dormitories, there is an increased risk of meningococcal disease. Every year in the United States approximately 2,500 people are infected and 300 die from the disease. Other persons at increased risk include household contacts of a person known to have had this disease, immunocompromised people, and people traveling to parts of the world where meningococcal meningitis is prevalent.

How is the meningococcus germ spread?
The meningococcus germ is spread by direct close contact with nose or throat discharges of an infected person.

What are the symptoms?
High fever, headache, vomiting, stiff neck and a rash are symptoms of meningococcal disease. The symptoms may appear two to ten days after exposure, but usually within five days. Among people who develop meningococcal disease, 10 to 15 percent die, in spite of treatment with antibiotics. Of those who live, permanent brain damage, hearing loss, kidney failure, loss of arms or legs, or chronic nervous system problems can occur.

What is the treatment for meningococcal disease?
Antibiotics, such as penicillin G or ceftriaxone, can be used to treat people with meningococcal disease.

Should people who have been in contact with a diagnosed case of meningococcal meningitis be treated?
Only people who have been in close contact (household members, intimate contacts, health care personnel performing mouth-to-mouth resuscitation, daycare center playmates, etc.) need to be considered for preventive treatment. Such people are usually advised to obtain a prescription for a special antibiotic (rifampin, ciprofloxacin or ceftriaxone) from their physician. Casual contact, as might occur in a regular classroom, office or factory setting, is not usually significant enough to cause concern.

Is there a vaccine to prevent meningococcal meningitis?
There are three vaccines available for the prevention of meningitis. The preferred vaccine for people ages 2-55 years is meningococcal conjugate vaccine (MCV4). This vaccine is licensed as Menactra (sanofi pasteur) and Menveo (Novartis). Meningococcal polysaccharide vaccine (MPSV4; Menomune [sanofi pasteur]) should be used for adults ages 56 and older. The vaccines are 85 to 100 percent effective in preventing the four kinds of meningococcus germ (types A, C, Y, W-135). These four types cause about 70 percent of the disease in the United States. Because the vaccines do not include type B, which accounts for about one-third of cases in adolescents, they do not prevent all cases of meningococcal disease.

Is the vaccine safe? Are there adverse side effects to the vaccine?
The three vaccines available to prevent meningococcal meningitis are safe and effective. However, the vaccines may cause mild and infrequent side effects, such as redness and pain at the injection site lasting up to two days.

Who should get the meningococcal vaccine?
The vaccine is routinely recommended for all adolescents ages 11-12 years, all unvaccinated adolescents 13-18 years, and persons 19-21 years who are enrolling in college.
The vaccine is also recommended for people ages 2 years and older who have had their spleen removed or have other chronic illnesses, as well as some laboratory workers and travelers to endemic areas of the world.

Who needs a booster dose of meningococcal vaccine?
CDC recommends that children age 11 or 12 years be routinely vaccinated with Menactra or Menveo and receive a booster dose at age 16 years. Adolescents who receive the first dose at age 13-15 years should receive a one-time booster dose, preferably at ages 16-18 years. Teens who receive their first dose of meningococcal conjugate vaccine at or after age 16 years do not need a booster dose, as long as they have no risk factors. All people who remain at highest risk for meningococcal infection should receive additional booster doses. If the person is age 56 years or older, they should receive Menomune.

How do I get more information about meningococcal disease and vaccination?
NYSDOH, Revised: July 2011

Davis Health Center
Campus Center 004
34 Cornell Drive
Canton, New York 13617
Mon - Thurs: 8 am - 4:30 pm
Friday: 8 am - 4:00 pm
Published in 1852, Uncle Tom's Cabin tells the story of a Christian slave, Uncle Tom, who is sold by a Kentucky family burdened by debt. Finally, sold again, he dies under the lash of the henchman of a cruel overseer, Simon Legree, who wants Uncle Tom to accept him instead of God as his master. Stowe, a member of a family of prominent abolitionists and ministers, also recounts the flight of a family of runaways on the Underground Railroad. Stowe's book quickly sold 300,000 copies and shocked many Northerners into a hatred for the slave system. Though few Northerners were converted to the cause of immediate abolition, Stowe's novel influenced more and more Northerners to adopt a position against the expansion of slavery, an increasingly contentious sectional issue during the 1850s. When introduced to Stowe during the Civil War, President Lincoln is said to have called her the "little lady who made this big war." The novel also affected the American language: "Uncle Tom" became an epithet for passive blacks and "Simon Legree" became a synonym for cruelty.

As you read, consider why Stowe's work would have so effectively galvanized Northern public opinion against the slave system. How does she depict slavery? How does she depict slave masters? How does she depict slaves? In what ways does she portray slavery as in conflict with values prevalent during the antebellum period?

A slave warehouse! Perhaps some of my readers conjure up horrible visions of such a place. They fancy some foul, obscure den... But no, innocent friend; in these days men have learned the art of sinning expertly and genteelly, so as not to shock the eyes and senses of respectable society. Human property is high in the market; and is, therefore, well fed, well cleaned, tended, and looked after, that it may come to sale sleek, and strong, and shining. A slave warehouse in New Orleans is a house externally not unlike many others, kept with neatness; and where every day you may see arranged, under a sort of shed along the outside, rows of men and women, who stand there as a sign of the property sold within. Then you shall be courteously entreated to call and examine, and shall find an abundance of husbands, wives, brothers, sisters, fathers, mothers, and young children, to be "sold separately or in lots, to suit the convenience of the purchaser;" and that soul immortal, once bought with blood and anguish by the Son of God, when the earth shook, and the rocks were rent, and the graves were opened, can be sold, leased, mortgaged, exchanged for groceries or dry goods, to suit the phases of trade, or the fancy of the purchaser.

It was a day or two after the conversation between Marie and Miss Ophelia, that Tom, Adolph, and about half a dozen others of the St. Clare estate, were turned over to the loving kindness of Mr. Skeggs, the keeper of a depot on ----- street, to await the auction the next day. Tom had with him quite a sizable trunk full of clothing, as had most others of them. They were ushered, for the night, into a long room, where many other men, of all ages, sizes, and shades of complexion, were assembled, and from which roars of laughter and unthinking merriment were proceeding.

"Aha! that's right. Go it, boys, go it!" said Mr. Skeggs, the keeper. "My people are always so merry! Sambo, I see!" he said, speaking approvingly to a burly Negro who was performing tricks of low buffoonery, which occasioned the shouts which Tom had heard.
As might be imagined, Tom was in no humor to join these proceedings; and, therefore, setting his trunk as far as possible from the noisy group, he sat down on it, and leaned his face against the wall. The dealers in the human article ... observers, are constantly enforced upon them, both by the hope of thereby getting a good master, and the fear of all that the driver may bring upon them, if they prove unsalable...

While this scene was going on in the men's sleeping room, the reader may be curious to take a peep at the corresponding apartment allotted to the women. Stretched out in various attitudes over the floor, he may see numberless sleeping forms of every shade of complexion, from the purest ebony to white, and of all years... Here, a young girl, whose mother was sold out yesterday, and who tonight cried herself to sleep when nobody was looking at her. Here, a worn old Negress, whose thin arms and callous fingers tell of hard toil, waiting to be sold to-morrow, as a cast-off article, for what can be got for her; and some forty or fifty others, with heads variously enveloped in blankets or articles of clothing, lie stretched around them. But, in a corner, sitting apart from the rest, are two females of a more interesting appearance than common. One of these is a respectably dressed mulatto woman between forty and fifty, with soft eyes and a gentle and pleasing physiognomy. She has on her head a high-raised turban, made of a gay red Madras handkerchief, of the first quality, and her dress is neatly fitted, and of good material, showing that she has been provided for with a careful hand. By her side, and nestling closely to her, is a young girl of fifteen, her daughter. She is a quadroon, as may be seen from her fairer complexion, though her likeness to her mother is quite discernible. She has the same soft, dark eye, with longer lashes, and her curling hair is of a luxuriant brown. She is dressed with great neatness, and her white, delicate hands betray very little acquaintance with servile toil. These two are to be sold tomorrow, in the same lot with the St. Clare servants; and the gentleman to whom they belong, and to whom the money for their sale is to be transmitted, is a member of a Christian church in New York, who will receive the money, and go thereafter to the sacrament of his Lord and theirs, and think no more of it.

These two, whom we shall call Susan and Emmeline, had been the personal attendants of an amiable and pious lady of New Orleans, by whom they had been carefully and piously instructed and trained. They had been taught to read and write, diligently instructed in the truths of religion, and their lot had always been as happy an one as in their condition it was possible to be. But the only son of their protectress had the management of her property; and, by carelessness and extravagance, involved it to a large amount, and at last failed... Susan and Emmeline were sent to the depot to await a general auction on the following morning; and as they glimmer faintly upon us in the moonlight which steals through the grated window, we may listen to their conversation. Both are weeping, but each quietly, that the other may not hear.

"Mother, just lay your head on my lap, and see if you can't sleep a little," says the girl, trying to appear calm.

"I haven't any heart to sleep, Em; I can't; it's the last night we may be together!"

"Oh, mother, don't say so! Perhaps we shall get sold together, who knows?"
"If it was anybodyís else case, I should say so, too, Em," said the woman; "But Iím so ëfeared of losiní you that Ii donít see anything but the danger." "Why, mother, the man said we were both likely, and would sell well." Susan remembered the manís looks and words. With a deadly sickness at her heart, she remembered how he had looked at Emmelineís hands, and lifted up her curly hair, and pronounced her a first-rate article. Susan had been trained as a Christian, brought up in the daily reading of the Bible, and had the same horror of her childís being sold to a life of shame that any other Christian mother might have; but she had no hope no protection. "Mother, I think we might do first-rate, if you could get a place as a cook, and I as chambermaid or seamstress, in some family. I dare say we shall. Letís both look as bright and lively as we can, and tell all we can do, and perhaps we shall," said Emmeline. "I want you to brush your hair all back straight, to-morrow," said Susan. "What for, mother? I donít look near so well that way." "Yes, but youíll sell better so." "I donít see why!" said the child. "Respectable families would be more apt to buy you, if they say you looked plain and decent, as if you wasnít trying to look handsome. I know their ways betterín you do," said Susan. "Well, mother, then I will." "And Emmeline, if we shouldnít ever see each other again, after tomorrow if Iím sold way up on a plantation somewhere, and you somewhere else, and you somewhere else always remember how youíve been brought up. and all Missis has told you; take your Bible with you, and your hymnbook; and if youíre faithful to the Lord, heíll be faithful to you." So speaks the poor soul, in sore discouragement; for she knows that tomorrow any man, however vile and brutal, however godless and merciless, if he only has money to pay for her, may become owner of her daughter, body and soul; and then, how is the child to be faithful? She thinks of all this, as she holds her daughter in her arms, and wishes that she were not handsome and attractive. It seems almost an aggravation to her to remember how purely and piously, how much above the ordinary lot, she has been brought up. But she has no resort but to pray, and many such prayers to God have gone up from those same trim, neatly arranged, respectable slave-prisons prayers which God has not forgotten, as a coming day shall show; for it is written: "Whoso causeth one of these little ones to offend, it were better for him that a mill-stone were hanged about his neck, and that he were drowned in the depths of the sea." The soft, earnest, quiet moonbeam looks in fixedly, marking the bars of the grated windows on the prostrate, sleeping forms. The mother and daughter are singing together a wild and melancholy dirge, common as a funeral hymn among the slaves: "Oh, where is weeping Mary? Oh, where is weeping Mary? ëRived in the goodly land. She is dead and gone to heaven; She is dead and gone to heaven; ëRived in the goodly land." These words, sung by voices of a peculiar and melancholy sweetness, in an air which seemed like the sighing of earthly despair after heavenly hope, floated through the dark prison rooms with a pathetic cadence, as verse after verse was breathed out... Sing on, poor souls! The night is short, and the morning will part you forever! But now it is morning, and everybody is astir; and the worthy Mr. Skeggs is busy and bright, for a lot of goods is to be fitted out for auction. 
There is a brisk lookout on the toilet; injunctions passed around to every one to put on their best face and be spry; and now all are arranged in a circle for a last review, before they are marched up to the Bourse. Mr. Skeggs, with his palmetto on and his cigar in his mouth, walks around to put farewell touches on his wares.

"How's this?" he said, stepping in front of Susan and Emmeline. "Where's your curls, gal?"

The girl looked timidly at her mother, who, with the smooth adroitness common among her class, answers:

"I was telling her, last night, to put up her hair smooth and neat, and not havin' it flying about in curls; looks more respectable so."

"Bother!" said the man, peremptorily, turning to the girl: "You go right along, and curl yourself real smart!" He added, giving a crack to a rattan he held in his hand, "And be back in quick time, too!"

"You go and help her," he added, to the mother. "Them curls may make a hundred dollars difference in the sale of her."

Beneath a splendid dome were men of all nations, moving to and fro, over the marble pavement... And here we may see the St. Clare servants: Tom, Adolph, and others; and there, too, Susan and Emmeline, awaiting their turn with anxious and dejected faces. Various spectators, intending to purchase, or not intending, as the case might be, gathered around the group, handling, examining, and commenting on their various points and faces with the same freedom that a set of jockeys discuss the merits of a horse.

"Hulloa, Alf! what brings you here?" said a young exquisite, slapping the shoulder of a sprucely dressed young man, who was examining Adolph through an eye-glass.

"Well, I was wanting a valet, and I heard that St. Clare's lot was going. I thought I'd just look at his..."

"Catch me ever buying any of St. Clare's people! Spoilt niggers, every one. Impudent as the devil!" said the other.

"Never fear that!" said the first; "if I get 'em, I'll soon have their airs out of them; they'll soon find out that they've another master to deal with than Monsieur St. Clare. 'Pon my word, I'll buy that fellow. I like the shape of him..."

Tom had been standing wistfully examining the multitude of faces thronging around him, for one whom he would wish to call master... A little before the sale commenced, a short, broad, muscular man, in a checked shirt considerably open at the bosom, and pantaloons much the worse for dirt and wear, elbowed his way through the crowd, like one who is going actively into a business; and, coming up to the group, began to examine them systematically. From the moment Tom saw him approaching, he felt an immediate and revolting horror at him, that increased as he came near. He was evidently, though short, of gigantic strength. His round, bullet-head, large, light grey eyes, with their shaggy, sandy eyebrows, and stiff, wiry, sunburned hair, were rather unprepossessing items, it is to be confessed; his large, coarse mouth was distended with tobacco, the juice of which, from time to time, he ejected from him with great decision and explosive force; his hands were immensely large, hairy, sunburned, freckled, and very dirty, and garnished with long nails, in a very foul condition.

This man proceeded to a very free personal examination of the lot. He seized Tom by the jaw, and pulled open his mouth to inspect his teeth; made him strip up his sleeve, to show his muscle; turned him round, made him jump and spring, to show his paces.

"Where was you raised?" he added, briefly, to these investigations.
"In Kintuck, Masír," said Tom, looking about, as if in deliverance. "What have you done?" "Had care of Masírís farm," said Tom. "Likely story!" said the other, shortly, as he passed on. He paused for a moment before Adolph; then spitting a discharge of tobacco juice on his well-blacked boots, and giving a contemptuous umph, he walked on. Again he stopped before Susan and Emmeline. He put out his heavy, dirty hand, and drew the girl towards him; passed it over her neck, and bust, felt her arms, looked at her teeth, then pushed her back against her mother, whose patient face shoed the suffering she had been going through at every motion of the hideous stranger. The girl was frightened, and she began to cry. "Stop that, you minx!" said the salesman; "no whimpering here the sale is going to begin." And accordingly the sale began... Tom stepped upon the block, gave a few anxious looks around... and almost in a moment came the final thump of the hammer, and the clear ring on the last syllable of the word, "dollars," as the auctioneer announced his price, and Tom was made over. He had a master. He was pushed from the block; the short, bullet-headed man, seizing him roughly by the shoulder, pushed him to one side, saying, in a harsh voice, "Stand there, you!" Tom hardly realized anything; but still the bidding went on rattling, clattering, now French, now English. Down goes the hammer again Susan is sold! She goes down from the block, stops, looks wistfully back her daughter stretches her hands towards her. She looks with agony I the face of the man who has bought her a respectable, middle-aged man, of benevolent countenance. "Oh, Masír, please do buy my daughter!" "Iíd like to, but Iím afraid I canít afford it!" said the gentleman, looking, with painful interest, as the young girl mounted the block, and looked around her with a frightened and timid glance. The blood flushes painfully in her otherwise colorless cheek, her eye has a feverish fire, and her mother groans to see that she looks more beautiful than ever before... The hammer falls; [our bullet-headed acquaintance] has got the girl, body and soul, unless God save her. Her master is Mr. Legree, who owns a cotton plantation on the Red River. She is pushed along into the same lit with Tom and two other men and goes off, weeping as she goes. The benevolent gentleman is sorry; but then, the thing happens every day! One sees girls and mothers crying, at these sales, always! It canít be helped, etc., and he walks off, with his acquisition, in another direction... On the lower part of a small, mean boat, on the Red River, Tom sat chains on his wrists, chains on his feet, and a weight heavier than chains lay on his heart. All had faded from his sky... all had passed by him, as the trees and banks were now passing, to return no more. Kentucky home, with wife and children, indulgent owners; St. Clare home, with all its refinements and splendors... the proud, gay, handsome, seemingly careless, yet ever-kind St. Clare; hours of ease and indulgent leisure all gone! And in place thereof, what remains? 
It is one of the bitterest apportionments of a lot of slavery, that the Negro, sympathetic and assimilative, after acquiring, in a refined family, the tastes and feelings which form the atmosphere of such a place, is not the less liable to become the bond-slave of the coarsest and most brutal, just as a chair or table, which once decorated the superb saloon, comes, at last, battered and defaced, to the bar-room of some filthy tavern, or some low haunt of vulgar debauchery. The great difference is, that the table and the chair cannot feel, and the man can; for even a legal enactment that he shall be "taken, reputed, adjudged in law, to be a chattel personal," cannot blot out his soul, with its own private little world of memories, hopes, loves, fears, and desires.

Mr. Simon Legree, Tom's master, had purchased slaves at one place and another, in New Orleans, to the number of eight, and driven them, handcuffed, in couples of two and two, down to the good steamer Pirate, which lay at the levee, ready for a trip up the Red River. Having got them fairly on board, and the boat being off, he came round, with that air of efficiency which ever characterized him, to take a review of them. Stopping opposite to Tom, who had been attired for sale in his best broadcloth suit, with well-starched linen and shining boots, he expressed himself as follows:

Tom stood up.

"Take off that stock (neckcloth or collar)!" and, as Tom, encumbered by his fetters, proceeded to do it, he assisted him, by pulling it, with no gentle hand, from his neck, and putting it in his pocket.

Legree now turned to Tom's trunk, which, previous to this, he had been ransacking, and taking from it a pair of old pantaloons and a dilapidated coat, which Tom had been wont to put on about his stable work, he said, liberating Tom's hands from the handcuffs, and pointing to a recess in among the boxes, "You go there, and put these on."

Tom obeyed, and in a few moments returned.

"Take off your boots," said Mr. Legree.

Tom did so.

"There," said the former, throwing him a pair of coarse, stout shoes, such as were common among the slaves, "put these on."

In Tom's hurried exchange, he had not forgotten to transfer his cherished Bible to his pocket. It was well he did so, for Mr. Legree, having refitted Tom's handcuffs, proceeded deliberately to investigate the contents of his pockets. He drew out a silk handkerchief, and put it into his own pocket. Several little trifles, which Tom had treasured... he looked upon with a contemptuous grunt, and tossed them over his shoulder into the river.

Tom's Methodist hymn-book, which, in his hurry, he had forgotten, he now held up and turned over.

"Humph! Pious, to be sure. So, what's yer name? You belong to the church, eh?"

"Yes, Mas'r," Tom said firmly.

"Well, I'll soon have that out of you. I have none o' yer bawling, praying, singing niggers on my place; so remember. Now, mind yourself," he said, with a stamp and a fierce glance of his gray eye, directed at Tom. "I'm your church now! You understand you've got to be as I say."

Something within the silent black man answered, No! and, as if repeated by an invisible voice, came the words of an old prophetic scroll... "Fear not! For I have redeemed thee, I have called thee by my name. Thou art mine!"

But Simon Legree heard no voice. That voice is one he shall never hear. He only glared for a moment at the downcast face of Tom, and walked off.
He took Tom's trunk, which contained a very neat and abundant wardrobe, to the forecastle, where it was soon surrounded by various hands of the boat. With much laughing, at the expense of niggers who tried to be gentlemen, the articles very readily were sold to one and to another, and the empty trunk finally put up at auction. It was a good joke, to see how Tom looked after his things, as they were going this way and that; and then the auction of the trunk, that was funnier than all, and occasioned abundant witticisms.

This little affair being over, Simon sauntered up again to his property.

"Now, Tom, I've relieved you of any extra baggage, you see. Take mighty good care of them clothes. It'll be long enough 'fore you get more. I go in for making niggers careful; one suit has to do for one year, on my place."

Simon next walked up to the place where Emmeline was sitting, chained to another woman.

"Well, my dear," he said, chucking her under the chin, "keep up your spirits..."

"Now," said he, doubling his great, heavy fist into something resembling a blacksmith's hammer, "d'ye see this fist? Heft it!" he said, bringing it down on Tom's hand. "Look at these yer bones! Well, I tell ye this yer fist has got as hard as iron knocking down niggers..." said he, bringing his fist down so near to the face of Tom that he winked and drew back. "I don't keep none o' yer cussed overseers; I does my own overseeing; and I tell you things is seen to. You's every one on ye got to toe the mark, I tell ye; quick, straight, the moment I speak. Ye won't find no soft spot in me, nowhere. So, now, mind yerselves; for I don't show no mercy!... That's the way I begin with my niggers," he said to a gentlemanly man, who had stood by him during his speech. "It's my system to begin strong; just let 'em know what to expect!"

"Indeed," said the stranger, looking upon him with the curiosity of a naturalist studying some out-of-the-way specimen.

"Yes, indeed. I'm none o' yer gentlemen planters, with lily fingers, to slop around and be cheated by some old cuss of an overseer! Just feel of my knuckles, now; look at my fist. Tell ye, sir, the flesh on 't has come jest like a stone, practicing on niggers; feel on it."

The stranger applied his fingers to the implement in question, and simply said, "It is hard enough; and I suppose," he added, "practice has made your heart just like it..."

The stranger turned away, and seated himself beside a gentleman, who had been listening to the conversation with repressed uneasiness.

"You must not take that fellow to be any specimen of southern planters," said he.

"I should hope not," said the young gentleman, with emphasis.

"He is a mean, low, brutal fellow!" said the other.

"And yet your laws allow him to hold any number of human beings subject to his absolute will without even a shadow of protection; and, low as he is, you cannot say that there are not many such."

"Well," said the other, "there are also many considerate and humane men among planters."

"Granted," said the young man; "but, in my opinion, it is you considerate, humane men that are responsible for all the brutality and outrage wrought by these wretches; because, if it were not for your sanction and influence, the whole system could not keep foothold for an hour. If there were no planters except such as that one," said he, pointing with his finger to Legree, who stood with his back to them, "the whole thing would go down like a mill-stone. It is your respectability and humanity that licenses and protects his brutality."
For all we know the key is "dog". You didn't say what the key is, or what the encryption algorithm is; the only information given is the cleartext, "hello", and the ciphertext, "GH6SDgsd2". If the cleartext is changed, the resulting ciphertext will change. If the key is changed, the resulting ciphertext will also change and, again, there's no telling what the resulting ciphertext will be without knowing 1) the key, and 2) the algorithm.

Encryption software works by first converting the cleartext to a series of numbers (in a computer, text is always stored as a series of numbers). Then, the software performs one or more mathematical operations on these numbers; the operations performed depend on the encryption algorithm used by the encryption software. Common encryption algorithms include AES, Blowfish, etc. The key is a separate series of characters, or numbers, that is also used in these mathematical operations. If the same cleartext, the same key, and the same algorithm and encryption software are used, the resulting ciphertext will be the same.

Ciphertext is sent as a secret message to someone. If the receiver of the message knows the ciphertext, the key, and the encryption algorithm and software, the receiver will be able to decode the message. If someone in possession of the ciphertext doesn't know the key, or what encryption algorithm is being used, they won't be able to decode the message, or at the very least it will be very difficult for them to do so. The difficulty involved in attempting to decode a given piece of ciphertext, without having the key, depends primarily on the length of the key. If the key is long enough, it might take many years for someone to decode the message.
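To make that concrete, here is a small, hedged sketch in Java of the point above: the same cleartext, key, and algorithm always produce the same ciphertext, and only someone holding the key can reverse the math. The passphrase "dog" is just the placeholder from the question, AES in ECB mode is chosen only because it is deterministic and easy to demonstrate (not because it is a good choice for real traffic), and none of this reflects whatever software actually produced "GH6SDgsd2".

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class CipherDemo {
    public static void main(String[] args) throws Exception {
        // Derive a 128-bit AES key from a passphrase; "dog" is just a stand-in.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest("dog".getBytes(StandardCharsets.UTF_8));
        SecretKeySpec key = new SecretKeySpec(digest, 0, 16, "AES");

        // Same algorithm + same key + same cleartext => same ciphertext.
        // (ECB is used here only because it is deterministic and easy to show.)
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = enc.doFinal("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println("ciphertext: " + Base64.getEncoder().encodeToString(ct));

        // Decryption reverses the math, but only with the same key and algorithm.
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        System.out.println("decoded:    " + new String(dec.doFinal(ct), StandardCharsets.UTF_8));
    }
}
```

Run twice, this prints the same Base64 ciphertext both times; change either the passphrase or the input string and the output changes completely, which is exactly the behavior described above.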
Meet the Teachers

Margaret Holtschlag and Cindy Lafkas have integrated telecommunications into their classrooms for three years. Margaret teaches fourth grade at Murphy Elementary in Haslett, Michigan. She first learned about technology from her husband, who is a United States Geological Survey hydrologist. Her interest in technology stemmed from her desire to explore new and different ways to approach teaching in her classroom. She and her husband got an Excellence in Education grant from the Department of the Interior to do a multi-media project on wetlands. Cindy is teaching fifth grade at Cornell Elementary in Okemos, Michigan. She says she wouldn't have become involved with computers if it hadn't been for word processing, since she loves to write. Cindy has been at Cornell for 11 years. Cindy and Margaret participated in four telecommunications field trips through TCI and Turner Adventure Learning in the past three years. In past years they went to the Rift Valley in Kenya and the battlefield at Gettysburg. This year, they went to Ellis Island. (To learn more about Margaret and Cindy and their other field trips, take a look at their Teacher Case in the Table of Contents on the left.)

In this unit, students will visit Ellis Island sites to learn about the patterns and history of immigration in this country. These explorations will serve as a personal link for students to the historical importance of immigration in United States history. Students' research will include using Internet resources as well as library resources. Students will visit sites set up by other children that are related to the concepts of immigration.

Materials and Resources

In developing our lessons and activities, we made some assumptions about the hardware and software that would be available in the classroom for teachers who visit the LETSNet website. We assume that teachers using our Internet-based lessons or activities have a computer (PC or Macintosh) with the necessary hardware components (mouse, keyboard, and monitor) as well as software (operating system, TCP/IP software, networking software, e-mail, and a World Wide Web client program, preferably Netscape, but Mosaic or Lynx). In the section below, we specify any "special" requirements for a lesson or activity (in addition to those described above) and the level of Internet access required to do the activity.

We have drawn on the historical thinking standards outlined by the National Center for History in the Schools as well as evolving standards for K-12 language arts from the National Council of Teachers of English (NCTE). We feel that these standards provide excellent guidelines for teachers on how to focus social sciences work in the classroom.

One Computer vs. Many

The plans for this unit are tailored to fit teaching situations where students have access to several computers with an Internet connection. To accommodate classrooms that do not have access to a computer lab with full Internet connections, students can work in research groups to explore Internet sites and conduct their research. If you have only one computer with Internet access, you may choose to do one of the following:
Every week we bring you interesting facts from the annals of Geek History. This week saw the beginning of Wikipedia, the release of Apple's IIe computer, and Thomas Edison bringing light to an entire town.

Wikipedia Turns 10

Wikipedia originally began life as a side project to go with the digital encyclopedia Nupedia. There were too many articles for the editorial staff of Nupedia to handle at one time, so they started collaborating with a wiki. Eventually it became clear that the collaborative editing of the Nupedia holding area, the wiki, was the future of knowledge sharing. Nupedia is long gone and made little impression on the public, but Wikipedia is now one of the most popular web sites on the internet and sports 17 million articles in 262 languages.

Edison Lights Roselle, New Jersey 1883

In 1883 Thomas Edison threw the switch on a system of overhead wires that would bring light to the community of Roselle, New Jersey. A steam-powered generator powered local businesses, the local Presbyterian church (the first in the world to be lit by electricity), around 40 houses, and 150 street lights. We take electric street lights completely for granted in the 21st century, but at the time significant portions of the United States and Europe were still using gas lamps. Edison's proof-of-concept display in Roselle inspired other communities to switch to safer electric light systems.

Apple Introduces the Apple IIe 1983

The Apple IIe was one of the most successful personal computers of the 1980s and the longest-running product in Apple's lineup (the Apple IIe line ran, largely unchanged, for 11 years). The Apple IIe rocked a 1.023 MHz processor (yes, you read that correctly), 64k of RAM, and a video resolution so low it's outright confusing to modern consumers (a paltry 280x192 pixels). The Apple IIe was highly backwards compatible with the prior two Apple II models and was widely adopted by schools; anyone who went to school in the 1980s where Apple IIe computers were present is all too knowledgeable about how easy it is to die of dysentery while trying to get to Oregon.
Porting Linux to the DEC Alpha: Infrastructure Porting an operating system is not trivial. Operating systems are large, complex, asynchronous software systems whose behavior is not always deterministic. In addition, there are numerous development tools, such as compilers, debuggers, and libraries, that programmers generally take for granted but which are not present at the start of the porting project. The porting team must implement these tools and other pieces of infrastructure before the porting work itself can begin. This article is the first of three describing one such porting effort by a small team of programmers at Digital Equipment Corporation. Our goal was to port the Linux operating system to the Digital Alpha family of microprocessors. These articles concentrate on the initial proof-of-concept port that we did. Although much of our early work has been superseded by Linus Torvalds' own portability work for 1.2, our tale vividly illustrates the type and scale of the tasks involved in an operating system port. The article by Jon Hall on page 29 describes many of the business-case justifications for our involvement in the Linux porting effort. I will describe the actual events that led to my starting work on the Linux port. First, some background: I work for the Alpha Migration Tools Group, which is an engineering development group within Digital Semiconductor. We were initially chartered near the beginning of the Alpha project to develop automated methods for migrating Digital customers' legacy applications to Alpha-based systems. Our first product was VEST, which translated VAX/VMS binary executables into binaries that could be executed on OpenVMS Alpha. This was soon followed by MX, which translated MIPS Ultrix executables into executables that run on Alpha systems under Digital Unix. Since then, our charter has expanded into other areas of “enabling technology” (technology which enables users to move to Alpha). In addition to producing translators and emulators, we have supplied technology to third-party vendors, and we have participated in the development of compilers and assemblers for Alpha. Our involvement in Linux began at the end of 1993, when we realized that there was no entry-level operating system for Alpha-based systems. While OpenVMS, Digital Unix, and Windows NT were all solid, powerful operating systems in their own right, they were too resource-hungry to run on bare-bones system configurations. In many cases, the smallest usable configuration of a particular system costs at least several thousand US dollars more than the smallest possible configuration. We decided that to compete on the low end with PC-clone systems, we needed to make the lowest-priced system configurations usable. After investigating various alternatives, we decided that Linux had the best combination of price (free), performance (excellent), and support (thousands of eager and competent hackers worldwide, with third-party commercial support starting to appear as well). When putting together the proposal to do the port, I set forth the following goals for the Linux/Alpha project: Price: Linux/Alpha would continue to be free software. All code developed by Digital for Linux/Alpha would be distributed free of charge according to the GNU General Public License. In addition, all tools used to build Linux/Alpha would also be free. Resource stinginess: Linux/Alpha would be able to run on base configurations of PC-class Alpha systems. 
My goal was to be able to run in text mode in 8MB of memory and with X-Windows in 16MB. In addition, a completely functional Linux/Alpha system should be able to fit, with room to spare, on a 340MB hard disk. Performance: Linux/Alpha's performance should be comparable to Digital Unix. Compatibility: Linux/Alpha should be source-code-compatible with existing Linux applications. Schedule: We wanted to be able to show a working port as quickly as possible. The above criteria drove several of the design decisions we made regarding Linux/Alpha. To meet the schedule criterion, we decided to “freeze” our initial code base at the Linux 1.0 level and work from there, not incorporating later changes unless we needed a bug fix. This would minimize perturbations to the code stream (a necessity when you're reaching in and changing virtually the whole universe), and would eliminate the schedule drain of constantly catching up to the latest release. We reasoned that once we got a working kernel, we could then make use of what we had learned to catch up to the most current version. The scheduling criterion also drove our decision to make our initial port a 32-bit (as opposed to a 64-bit) implementation. The major difference between the two involves the C programming model used. Intel Linux uses a “32-bit” model where ints, longs, and pointers are all 32 bits. Digital Unix uses a “64-bit” model where ints are still 32 bits while longs and pointers are 64 bits. At Digital, we have encountered a lot of C code that treats ints, longs, and pointers interchangeably. Code like this might fortuitously work in a 32-bit programming model, but it may produce incorrect results in a 64-bit model. We decided to do a 32-bit initial port so as to minimize the number of such problems. We felt that limiting longs and pointers to 32 bits would not unduly hamper any existing code and by the time new applications appeared which would require larger datatypes, a 64-bit Linux implementation would be available. We also decided, in the interests of expediency, to use the existing PALcode support for Digital Unix rather than write our own. The Digital Unix PALcode was reasonably well-suited to other Unix implementations, it was readily available, and it had already been extremely well-tested. Using the Digital Unix PALcode in turn required that we use the “SRM” console firmware. The SRM firmware contained device drivers that could be used by Linux via callback functions. While these console callback drivers were extremely slow and had to be run with all interrupts turned off, they did allow us to concentrate on other areas of the Linux port and defer the work on device drivers. Some design decisions were driven by differences in execution environment between Intel and Alpha. On Intel, the kernel virtual memory space is mapped one to one with system physical memory space. Because of the potential collision with user virtual memory, Intel Linux uses segment registers to keep the address spaces separate. In kernel mode, the CS, DS, and SS segments point to kernel virtual memory space, while the FS segment points to user virtual memory space. This is why there are routines in the kernel such as put_fs_byte(), put_fs_word(), put_fs_long(), etc; this is how data is transferred between kernel space and user space on Intel Linux implementations. Since Alpha does not have segmentation, we needed to use some other mechanism to ensure that user and kernel address spaces did not collide. 
One way would be to have only one address space mapped at a time. This requires a translation buffer (sometimes called a translation lookaside buffer, or TLB), a special cache on the CPU used to considerably speed up virtual memory address lookups. But this makes data transfer between user and kernel space cumbersome. It can also exact a performance penalty; on systems that do not implement address space identifiers, using the same virtual address range for kernel space and user space requires that the entire translation buffer be invalidated for that range for every transition between user and kernel space. This could conceivably cause multiple translation buffer misses across every system call, timer tick, or device interrupt.

The other way to avoid address space collisions between user and kernel is to partition the address space, assigning specified address ranges to specified purposes. This is the approach taken for the 32-bit Linux/Alpha port. It is simple, it does not require wholesale translation buffer invalidation for every entry to kernel mode, and it makes data transfer between user and kernel an utterly trivial copy.

Designing the address space layout required attention to certain other constraints. First, no address could be greater than 0x7fffffff, because of Alpha's treatment of 32-bit quantities in 64-bit registers. When one issues an LDL (Load Long) instruction, the 32-bit quantity that is loaded is sign-extended into the 64-bit register. Therefore, loading the address 0x81234560 into R0 would result in R0 containing 0xffffffff81234560. Attempting to dereference this pointer would result in a memory fault. There are techniques for double-mapping such problematic addresses, but we decided that we did not need the additional complications for a proof-of-concept port. Therefore, we simply limited virtual addresses to 31 bits.

The other consideration was that we needed an area which was mapped one for one with system physical memory. We did not want to simply use the low 256MB (for instance) because we wanted to be able to place user programs in low addresses, so we chose an area of high memory for this purpose and made the physical address equal the virtual address minus a constant. This is referred to below as the "mini-KSEG".

Once all the constraints were considered, we ended up with a system virtual memory layout as follows:

0x00000000--0x3fffffff  User
0x40000000--0x5fffffff  Unused
0x60000000--0x6fffffff  Kernel VM
0x70000000--0x7bffffff  mini-KSEG (1:1 with physical memory)
0x7c000000--0x7fffffff  Kernel code, data, stack

Finally, I had to decide how heavily I would modify the code base to accomplish the port. I felt that I did not have the latitude to make wholesale changes and rearrangements of the code the way Linus did for the 1.1.x to 1.2.x transition. To do so would cause my code to diverge further and further from the mainstream code base, which would adversely affect its acceptance among the Linux community. I decided to keep the original Intel code 100% intact, so one could conceivably still build an Intel kernel from my code base. The Alpha code would be either additions to or replacements for the Intel code base. Areas that needed to be changed would be set off via conditional compilation. Sometimes this required me to swallow my pride and devise a less clean Alpha-specific version of an algorithm to correspond to a less clean Intel-specific version when I really would rather have implemented a clean, generalized algorithm that could accommodate both.
Fortunately, Linus implemented clean, generalized algorithms for all of us when he did his portability work for Linux 1.1.x and Linux 1.2.x.
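As a small aside, the two addressing details described earlier - LDL's sign extension and the mini-KSEG's linear mapping - can be sketched in ordinary Java, because widening an int to a long sign-extends exactly the way LDL does. The snippet below is only an illustration: the assumption that the mini-KSEG maps physical address 0 at virtual 0x70000000 is inferred from the layout table, not taken from the actual kernel sources, and the helper names are hypothetical.

```java
public class AlphaLayoutDemo {
    // Assumed from the layout table: mini-KSEG starts at this virtual address
    // and maps 1:1 onto physical memory starting at 0 (virtual = physical + BASE).
    static final long MINI_KSEG_BASE = 0x70000000L;
    static final long MINI_KSEG_END  = 0x7bffffffL;

    // Widening an int to a long sign-extends, just as LDL does on Alpha:
    // 0x81234560 becomes 0xffffffff81234560, which is not a usable pointer.
    static long loadLongSemantics(int value32) {
        return value32;   // implicit sign extension on widening
    }

    // Hypothetical helper: convert a mini-KSEG virtual address to physical.
    static long virtToPhys(long virt) {
        if (virt < MINI_KSEG_BASE || virt > MINI_KSEG_END)
            throw new IllegalArgumentException("not a mini-KSEG address");
        return virt - MINI_KSEG_BASE;
    }

    public static void main(String[] args) {
        System.out.printf("LDL of 0x81234560 -> 0x%016x%n",
                loadLongSemantics(0x81234560));
        System.out.printf("virt 0x700fe000  -> phys 0x%08x%n",
                virtToPhys(0x700fe000L));
    }
}
```

The first line of output shows why any virtual address above 0x7fffffff is troublesome in a 32-bit programming model on Alpha, and the second shows how cheap the mini-KSEG translation is once the layout is fixed: a single subtraction, with no table walk at all.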
Your Java class files run on a JVM - a "Java Virtual Machine" - which is a program that reads the instructions in the (binary) class file and performs them. Although the class files are "architecture independent" - in other words, the same class file will work on an Intel and a PowerPC chip, and on Linux, Unix and Windows - the JVM is NOT portable in this way and you must have the right one for your system.

The JVM is the running "engine" of Java, but just as most of us don't just buy an engine, we buy a complete car ... so it is with Java. Most of us need a JRE - a "Java Runtime Environment" - which is a JVM plus all the standard classes which are needed to support our own piece of code. Even in a simple read / calculate / report program, you'll need to load in other standard classes to do jobs like handle command line data, convert a String of characters into a number, and produce output to the current window. All of those standard methods are in the JRE.

Most users of Java programs will simply be running classes through a JVM with the support of a JRE, but the developers of Java programs will also need a way to turn the programs they write - in the "English-like" Java language, stored in text files which can be edited easily - into those binary class files, which run very much quicker than a language that runs straight from the text (a "truly interpretive" language) but are impractical to edit. And those developers need not only the JVM and the supporting JRE, but also a JDK - that's a "Java Development Kit" which amongst other things includes the Java compiler program (javac) to convert English-like source into quickly-run binary.

1. "JSP"s - which are Java Server Pages, run on a web server - include Java source code within the web page. This means that if you're running JSPs on your web server, even if you're not developing any code / pages yourself, you will need the full JDK loaded ... onto not only your test and development servers, but also your production server!

2. You'll very often compile your Java code on a different computer to the one it's finally going to run on - your programmer's computer is really the one that you should be creating / maintaining programs on, and that should NOT be your live server. However, you must have a complete set of all the classes that your Java code calls up when it runs on your development machine - not only is that sensible because it lets you test your code (ALWAYS test even the smallest of changes - that advice comes from experience!) but also Java insists, as it won't let you compile something if the classes that the compiled code relies on aren't available for it to check your code 'against'.

JINI, JMX, JXTA, JAXB, JAXP, JAXR, SAAJ, JAX-RPC, JNI, JAXM ... are all extra technologies that you may wish to use in addition to your 'core Java'. Most of the ones I have listed are additional bundles of class files (known as "packages") which provide extra functionality for Java without the developer having to write it himself. They're accessed via method calls through their API (Application Programmer Interfaces) and to make good use of them, you need to (a) load them, (b) read the documentation / look at the samples and (c) understand something about what they actually do.

The AWT, Swing, and many other elements are also extra technologies, but these particular examples are classes / packages that are bundled with the JRE distribution, so there's no need (in these cases) to download and install them separately.
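As a minimal sketch of the pipeline described above (the file name and sample values are my own choices, not taken from the original article): the source below is the sort of simple "read / calculate / report" program mentioned earlier. The JDK's compiler turns the text file into a binary class file with "javac ReadCalcReport.java"; running "java ReadCalcReport 3 4.5" then has the JVM execute the bytecode, while the JRE supplies the standard classes it leans on - String, Double and System.out.

```java
// ReadCalcReport.java - a tiny read / calculate / report program.
// javac (from the JDK) compiles this text into ReadCalcReport.class;
// the JVM runs the class file; the JRE provides String, Double, System.out.
public class ReadCalcReport {
    public static void main(String[] args) {
        // Command line data arrives as Strings; Double.parseDouble (a standard
        // class method shipped in the JRE) converts the text into numbers.
        double width  = Double.parseDouble(args[0]);
        double height = Double.parseDouble(args[1]);

        // Calculate ...
        double area = width * height;

        // ... and report to the current window, again via a JRE standard class.
        System.out.println("Area: " + area);
    }
}
```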
In the Garden: Northern California Coastal & Inland Valleys The beautiful gardens at Filoli are open to visitors starting in February. The temptation to plant my summer garden is overwhelming right now. The days are getting longer, the air is sweet with the scent of spring-flowering plants, and nurseries are bursting with enticing displays. However, I know from experience that I'll have much better results if I wait a few more weeks until the soil has warmed. So now is the perfect time to work the garden soil to get it in perfect shape before the final push of planting. The Importance of Soil Soil is the foundation of your garden. The more time you spend preparing your soil prior to planting, the easier and more rewarding your gardening will be. This is especially true for arid climates that receive no rainfall in the summer months and have only a thin layer of topsoil, which is where most nutrients are for plants. Topsoil is formed over many thousands of years by decomposing plant matter, but in arid climates, there just isn't much organic matter growing to fall to the ground and decompose. Soil types differ throughout Northern California: sandy and fast draining along the coast, and clay-like and slow draining in the valleys. You need to determine exactly what type of soil you have before you can begin to improve it. Assess your garden area - even if you have sandy, fast-draining soil, for example, low-lying areas may stay wet longer than the rest of the garden. If you try to grow arid plants in boggy soil, you'll be throwing your money away and killing perfectly good plants. On the other hand, high spots drain more quickly than level ground, so choose plants that grow well in dry areas if you're planning on a rise. Testing Your Soil Here is a simple soil test to determine what kind of soil you have in your garden. Wet a small handful of dirt and roll it around in your hand. If it feels silky and slick, and stays together in a cigar shape, your soil is mostly clay. Clay is rich in nutrients but so dense that water, air, and nutrients have a hard time getting to plants. Plus, plant roots don't thrive because they can't push through thick, clay soil. The solution is to add river sand or organic matter to loosen the clay soil and allow air, water, and nutrients to flow more freely. If the soil feels gritty and won't hold together when rolled in your hand, it's mostly sand. Sandy soil drains well but lacks nutrients. To improve the moisture- and nutrient-holding capacity of sandy soil, add organic compost. If the soil crumbles easily, but you can still roll it into a cigar, it's called loam, which is the ideal mixture of clay, sand, and silt. Adding Organic Matter The key to amending almost any type of soil is organic matter. Organic means any material that was once alive or is a by-product of something that was alive such as manure. It increases drainage in heavy clay soils, improves nutrient-holding capacity in sandy soils, and improves the texture of any soil, making it more crumbly and workable (friable). Organic matter comes in many forms, such as straw, leaves, manure, and grass clippings. If you have a source of organic material nearby - such as a horse farm or a dairy - you are in fat city! Manure is a rich source of organic matter. It not only supplies some fertilizer to soil, but it also improves the texture. It's best used in a composted rather than fresh form. 
I like to lay the manure over the soil in the late fall so that by spring the material has decomposed and is ready to turn under. If you don't have manure available, you can buy it in bags at garden centers and nursery supply stores. Composted manure is inexpensive and usually has been sterilized so it's easy to handle with very little odor.

Take a Soil Test

After you've amended the soil, take a soil test to see what nutrients, if any, you need to add. Professional soil tests are accurate and may actually be cheaper than the home soil-testing kits you find at the nursery. Agricultural extension offices provide this valuable service at a minimum cost. They will give you instructions on how to take a soil sample from your garden. Based on the results of the soil test and on what you're growing, you may need to add other nutrients or minerals to your garden.
from the November 14, 2010 Newsletter issued from Hacienda Chichen Resort beside Chichén Itzá Ruins, central Yucatán, MÉXICO For the last month or so the top branches of many tall weeds and bushes in and around the Hacienda have borne horizontally deployed orb webs two feet or so across (60 cm). The spider hangs upside down beneath the webs' spiraling threads. Two shots of one of these spiders, the image on the left displayed vertically instead of its original horizontal position, are shown above. We have lots of fascinating spider species here that I never mention because I can't identify them and therefore can't find out what's interesting about them. However, this species is so ubiquitous that I hoped it might occur in southern Florida, in which case someone at a spider forum might recognize it. Having enjoyed such success with the German ant forum, I searched for an active spider forum and came up with "Arachnoboards," sponsored by the British Tarantula Society, at http://www.arachnoboards.com. Two days after posting my picture I was astonished to find that several people -- one in Rome, Italy, two in Florida, and one each in Maryland, Louisiana and Michigan, had left comments. The spider is the Venusta Orchard Weaver, LEUCAUGE VENUSTA, a long-jawed orbweaver distributed from southern Canada to Panama, along the eastern US coast, extending into the central US. Forum user "davisfam" in central Florida, who calls young spiders "spidiies," wrote: Leucauge venusta is extremely abundant during the rainy season in coffee plantations in Chiapas State and other areas of Mexico. The web of this species is made in semi-open sites generally between weeds or between adjacent bushes. Young L. venusta spidiies build webs close to the ground, but as sexual development proceeds, the specimen increases the height at which the web is built. The sexual maturity of females induces migration to places where prey is more abundant. It's quite possible that immature spidiies are abundant right now along with the adult specimens. No worries, these beautiful spidiies are harmless and extremely docile... just pretty to look at and take pictures of! Another user in Florida added that "The neon yellow, orange or red spots on the rear of the abdomen are variable in size among individuals and sometimes absent." And "This species is parasitized by a wasp larva which attaches itself externally at the junction of the cephalothorax and abdomen." So, again, how about that for the power of the Internet? Already just by noon on the Wednesday I visited Arachnoboards, 599 visits had been registered at the forum. That's a lot of people interested in arachnids on a Wednesday morning!
The Hills Are Alive With the Sound of Robots
Nov 13, 2013 5:00 AM PT

Robotic instruments that could be programmed to play music, respond to human musicians, and even improvise were a source of fascination for Steven Kemper during his graduate student days at the University of Virginia, where he studied music composition and computer technology. To bring to life his machine music vision, Kemper and colleagues Scott Barton and Troy Rogers founded Expressive Machines Musical Instruments, and began designing the Poly-tangent Automatic (multi)Monochord, also known as "PAM."

This stringed instrument's pitches are controlled by tangents -- the equivalent of fingers -- each of which is driven by a solenoid. Messages are sent from a computer via USB to an Arduino microcontroller, which switches the solenoids on and off. PAM also can receive data from musical and gestural input devices -- such as a MIDI keyboard, joystick or mouse -- or from environmental sensors, allowing it to improvise its own music based on the programmer's parameters and instructions.

It seems to be a lot of work just to create music -- something human musicians can do just fine on their own. For Kemper and other robotic music researchers, however, it's not a matter of robotic instruments replacing the human variety. Rather, it's about finding new ways to make music -- and perhaps new forms of music, as well.

"These instruments are not superior to human performers," Kemper, now an assistant professor of music technology at Rutgers University, told TechNewsWorld. "They just provide some different possibilities."

Since PAM, EMMI has created a variety of instruments, and each one has its own set of possibilities and strengths. All the instruments can be programmed to play in multiple genres and situations, and musicians have begun to incorporate them into performances and recordings.

"These instruments can improvise based on structures we determine or by listening to what performers are playing," said Kemper. "We work with the free improv aesthetic and [our instruments] don't fit into a particular musical genre. It's improvising based on any decisions the performers make."

Creating New Worlds of Music

Using robotic instruments and instrumentation, musician Chico MacMurtrie and his Amorphic Robot Works crew have worked to envision entirely new models for performance and musicianship, such as the Robotic Opera, which MacMurtrie created in the early 90s with computer scientists Rick Sayer and Phillip Robertson, and composer Bruce Darby.

"Rick and Phillip were working in a multitasking language known as Formula Forth," MacMurtrie told TechNewsWorld. "At this time, it meant a lot of hard coding. Bruce created a lot of the tunings for the musical machines and wrote the compositions for them. Phillip adapted the machines to Bruce's compositions, and human musicians played the more complicated elements live on the musical machines."

After the opera concluded, MacMurtrie became fascinated with the possibility of machines playing all by themselves, with minimal input from human musicians or programmers.

"We really started to concentrate on how to teach the machines to strike the drums and strum the strings," explained MacMurtrie.
"Some of the time the machines used a closed loop system to get more sophisticated things to happen with precision. However, the majority of the machines ran with simple on/off control, and the workload [involved] creating layer upon layer of sequences which overlapped and added to the complex nature of what the sound would become."

In recent years, MacMurtrie's work with robots has evolved and broadened to create an even richer artistic landscape. The Amorphic Landscape, for instance, is a large-scale robotic installation and performance.

"Central to the 20-meter-long Amorphic Landscape is an organic environment engineered to provide both a physical and narrative structure for hundreds of individual robots," said MacMurtrie. "This environment, however, is more than a passive context for its robotic inhabitants. The landscape is, itself, a robotic form capable of movement and transformation."

Another recent project, The Robotic Church, is composed of 35 computer-controlled 12- to 15-foot pneumatic sculptures forming a "Society of Machines" that explores the origin of communication through rhythm.

"While responding to computer language, they are anthropopathic in nature and channel air to activate their inner biology," explained MacMurtrie. "The evolutionary path towards machines with more kinetic abilities has led to the creation of a Society of Machines with their own language and expression."

The Future Is Now

Robotic instruments -- and what designers, performers and audiences expect from them -- are evolving and changing. No longer seen as competitors with human musicians, they are instead seen as an integral part of all kinds of music-making.

"There are many artists, musicians, and scientists and roboticists working in musical machines these days, and a wide range of things are being created," said MacMurtrie. "I personally don't feel they will ever be better than humans, because of the emotional aspect, because of the creative act. The piece will never sound exactly the same when played by a human. Machines will be able to create compositions we have never heard before, but ultimately the programmer will be there in the process."

What it comes down to, perhaps, is the process of creating and being creative with whatever instruments or technologies are on hand. Making music, after all, has always been an interaction of humans with the technologies they create to facilitate their art.

"It's a new way to make music," Eric Singer, artistic director with the League of Electronic Musical Urban Robots, told TechNewsWorld. "When a new technology comes along, it pushes the possibilities of what you can do," he observed. "Robots are adding to that. They're visually interesting, though the sound is the most important thing. They can be interactive. They can go places that traditional instruments might not be able to go."
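To make the PAM signal chain described above a little more concrete -- a computer pushing messages over USB to an Arduino, which switches individual solenoids on and off -- here is a minimal sketch of what the computer side of such a link could look like. It is only an illustration: it assumes the third-party pyserial package, an invented one-byte command format, and a placeholder serial port name, none of which come from EMMI's actual designs.

```python
import time

import serial  # third-party pyserial package, assumed to be installed

def tangent_command(tangent, engage):
    """Pack a hypothetical one-byte command: high bit = press/release, low bits = tangent number."""
    return bytes([(0x80 if engage else 0x00) | (tangent & 0x7F)])

def play_pattern(port="/dev/ttyACM0"):
    """Send a short press/release pattern to a microcontroller driving the solenoids."""
    with serial.Serial(port, 115200, timeout=1) as link:
        for tangent in (3, 5, 7):  # arbitrary tangents chosen for the demo
            link.write(tangent_command(tangent, True))   # engage the solenoid
            time.sleep(0.2)
            link.write(tangent_command(tangent, False))  # release it
            time.sleep(0.1)

if __name__ == "__main__":
    play_pattern()
```

A real instrument would also need firmware on the microcontroller that interprets these bytes and drives the solenoid outputs, and a richer protocol (or plain MIDI) once timing and velocity matter.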
Symptoms of Eye Disease

Remember, your eyes can become damaged without your knowing it, as the damage can occur in areas that do not affect vision and you often feel no pain. Only careful eye examinations at regular intervals will detect the damage. If you have type 1 diabetes, it's a good idea to have your eyes examined by an eye doctor expert in diabetes care at least once a year. If you have type 2 diabetes, your eyes should be examined when your diabetes is diagnosed and at least once a year afterward.

Under certain circumstances, you may need immediate attention. Call your eye doctor's office if you experience any of the following symptoms:
- sudden loss of vision.
- severe eye pain.
- the sensation that a curtain is coming down over your eyes.
- black or red floating spots in your vision.
- distortion or waviness of straight lines.

Your eye doctor will want to see you right away.

Page last updated: June 28, 2016
For all practical purposes, the narrative of Percy Bysshe Shelley's importance to theatrical history is the tale of The Cenci. However, Prometheus Unbound was the first of his substantial literary undertakings to be cast in dramatic form and is thematically related to The Cenci. Prometheus Unbound considers on the ideal level what The Cenci examines on the level of gritty reality, the relationship between good and evil, between benevolent innocence and that which would corrupt it.

Shelley's Prometheus is the traditional fire-giver redefined, as his preface tells us. The primary change that Shelley makes in his subject is a reworking of the events leading to Prometheus's release. In the lost Aeschylean play from which Shelley borrowed his title, there occurred a "reconciliation of Jupiter with his victim" at "the price of the disclosure of the danger threatened to his empire by the consummation of his marriage with Thetis." In Shelley's version, Prometheus earns his freedom more nobly, by overcoming himself, by forswearing hatred and the desire for revenge, embracing love, and achieving, through extraordinary fortitude, a merciful selflessness. In a sense, Prometheus combines a Christ-like forbearance with the traits the Romantics often admired in Satan. Shelley says Prometheus is like Satan in that "In addition to courage, and majesty, and firm and patient opposition to omnipotent force, he is susceptible of being described as exempt from the taints of ambition, envy, revenge, and a desire for personal aggrandisement." By contrast with Satan, Shelley described Prometheus as "the type of the highest perfection of moral and intellectual nature, impelled by the purest and the truest motives to the best and noblest ends."

This perfection is absent as the play begins, but when, in act 1, Prometheus relents in his hatred and says, "I wish no living thing to suffer pain," his ultimate triumph and Jupiter's defeat are inevitable. Evil can succeed only if it is allowed access to one's innermost being, only if one allows it to re-create oneself in its own vile image. With "Gentleness, Virtue, Wisdom, and Endurance," a person can win out, though the success of goodness requires a great deal of him as is shown in the play's final lines:

To suffer woe which Hope thinks infinite;
To forgive wrongs darker than death or night;
To defy Power, which seems omnipotent;
To love, and bear; to hope till Hope creates
From its own wreck the thing it contemplates;
Neither to change, nor falter, nor repent;
This, like thy glory, Titan, is to be
Good, great and joyous, beautiful and free;
This is alone Life, Joy, Empire, and Victory.

In The Cenci, Beatrice exhibits the necessary defiance of evil, but she lacks the fortitude to resist hatred. She confuses physical violation, which any person with sufficient opportunity can inflict on any other, with spiritual violation, which requires willful complicity. By hating, she comes partially to resemble the thing she hates. The object of Beatrice's hatred is her father, Count Francesco Cenci, the embodiment of everything the Romantics distrusted in those possessed of power. In characterizing the count, Shelley had a rich gallery of gothic and melodramatic villains on which to draw, and among them all, few can match the count for wickedness. The count is a plunderer, a murderer, and an incestuous rapist. He takes delight in destroying the lives of those around him, and he especially enjoys inflicting spiritual torture.
He will only "rarely kill the body," because it "preserves, like a strong prison, the soul within my power,/ Wherein I feed it with the breath of fear/ For hourly pain." Like many a villain of the period, the count commits his vilest crimes against the holy ties of sentiment. His egomania destroys his capacity for fellow-feeling, and out of the horror of his isolating selfhood, he performs deeds of unnatural viciousness against those who most deserve his love. He abuses Lucretia, his wife, and Bernardo, his innocent young son. He prays for the deaths of two other sons, Rocco and Cristofano, and invites guests to a banquet of thanksgiving when their deaths occur. He refuses to repay the loan of his daughter-in-law's dowry, which he had borrowed from the desperately poor Giacomo, his fourth son. After taking Giacomo's job away and giving it to another man, he alienates this son from his wife and children by claiming that Giacomo used the lost dowry for licentious carousing.

He reserves his greatest cruelty, however, for Beatrice. Beatrice possesses the courage to denounce him and to seek redress for the injustices inflicted on herself and her family. She goes so far as to petition the pope for aid in her struggle. In order to break her rebellious spirit, to crush her will to resist him, the count rapes his daughter and threatens to do so again.

The count's unnatural cruelty inspires unnatural hatred in Lucretia, Giacomo, and Beatrice. As Giacomo tells us, "He has cast Nature off, which was his shield,/ And Nature casts him off, who is her shame;/ And I spurn both." The son, "reversing Nature's law," wishes to take the life of the man who "gave life to me." He wishes to kill the man who denied him "happy years" and "memories/ Of tranquil childhood," who deprived him of "home-sheltered love." Beatrice and Lucretia share this wish to destroy the perverter of love. When Count Cenci proves impervious to their pleas that he relent, and when every external authority refuses to intervene, the family members take action against this most unnatural of men.

Because Beatrice is strongest and most sinned against, she becomes the prime mover of her father's murder. Giacomo refers to her victimization as "a higher reason for the act/ Than mine" and speaks of her, in lines ironically recalling the biblical injunction against vengeance, as "a holier judge than me,/ A more unblamed avenger." In becoming the avenger, though, Beatrice must steel herself against those qualities of innocence and compassion that have rendered her superior to her persecutor. She thinks, in fact, that exactly those qualities that militate against the murder can be twisted around to give the strength needed to commit it. She advises Giacomo to

. . . Let piety to God,
Brotherly love, justice and clemency,
And all things that make tender hardest hearts
Make thine hard, brother.

When assassins have been recruited to do the deed, Giacomo utters a momentary hope that the assassins may fail. When the first attempt does fail, Lucretia takes the opportunity to urge Francesco to confess his sins so that, if a second attempt succeeds, at least she will have done nothing to condemn his soul to eternal torment. Beatrice, by contrast, is as relentless in pursuing revenge as her father had been in pursuing evil pleasures. At a key moment, when even the assassins quail at taking the life of "an old and sleeping man," Beatrice takes up the knife and shames them into performing the murder by threatening to do it herself.
Beatrice is like her father, too, in claiming that God is on her side. Francesco had seen the hand of God in the deaths of his disobedient sons; the ultimate Father had upheld parental authority by killing the rebellious Rocco and Cristofano. Similarly, as Beatrice plots her father's death, she feels confident of having God's approval for her actions; as his instrument, she is permitted, even obligated, to wreak vengeance on this most terrible of sinners. Neither character is right. Both are appealing to the silent symbol of all external authority to justify the unjustifiable, to second the internal voice that has turned them toward evil.

The dangers of religion are further embodied in the machinations of Orsino and the unconscionable actions of the pope and his representatives. Orsino is God's priest, but his priestly garb merely wraps his lustfulness in the hypocritical guise of sanctity. In order to eliminate Count Cenci, the greatest obstacle to his possession of Beatrice, Orsino urges the conspirators on at every turn. When the conspiracy is discovered, he is the only participant in the count's murder to slink safely away.

The pope's role in the play's events is even more reprehensible. For years, he has allowed the count's depredations at the price of an occasional rich bribe. He refuses to intervene to end the count's crimes because it is in his self-interest to allow them to continue. When he finally does take action, apparently because he can now achieve more by eliminating the count than by keeping him alive, the pope is too late; the count is already dead. He then turns on those who have accomplished, outside the law, what he would have done with the full authority of the papal office. As the earthly representative of ultimate power, he orders the deaths of those who have become a threat to all power; the conspirators are to be executed. The irony of this situation is that the force behind the papal authority is the same false notion that lured his victims to act as they did, the assumption that everything, even the shedding of human blood, is allowed to those who have God on their side.

In a world as corrupt as the one in which Beatrice Cenci finds herself, her fall is all the more terrible because it is so easy to sympathize with. The most perceptive comments concerning the nature of Beatrice as a tragic heroine and the appropriateness of her life as a tragic subject are Shelley's own:

Undoubtedly, no person can be truly dishonoured by the act of another; and the fit return to make to the most enormous injuries is kindness and forbearance, and a resolution to convert the injurer from his dark passions by peace and love. Revenge, retaliation, atonement, are pernicious mistakes. If Beatrice had thought in this manner, she would have been wiser and better; but she would never have been a tragic character. . . . It is in the restless and anatomizing casuistry with which men seek the justification of Beatrice, yet feel that she has done what needs justification; it is in the superstitious horror with which they contemplate alike her wrongs and their revenge, that the dramatic character of what she did and suffered, consists.

In her capacity to endure evil and to forgive the evildoer, Beatrice is no Prometheus, but in her very understandable human frailty, she is a far superior subject for dramatic representation.
Inspired at least in part by the squealing of pigs near Shelley's rooms in the vicinity of Pisa, Italy, Oedipus Tyrannus is a raucous burlesque of events surrounding George IV's attempt to divorce his estranged wife, Caroline. Its virulent mockery of commoners, cabinet ministers, and members of the royal family alike brought about its quick suppression.

The last of Shelley's dramatic works to be published during his lifetime, Hellas, was written in support of the Greek revolutionaries under the leadership of Prince Alexander Mavrocordato, to whom the play is dedicated. Like Prometheus Unbound, Hellas has affinities to Aeschylean tragedy. Aeschylus's Prometheus desmōtēs (date unknown; Prometheus Bound, 1777) provided much of the inspiration for the earlier work, while his Persai (472 b.c.e.; The Persians, 1777) gave impetus to the writing of Hellas. The play's most familiar lines, from the concluding choral song, are an eloquent cry of hope for the regeneration of the world:

The world's great age begins anew,
The golden years return,
The earth doth like a snake renew
Her winter weeds outworn:
Heaven smiles, and faiths and empires gleam,
Like wrecks of a dissolving dream.
REVERSE SLOPE DEFENSE

"In battle, casualties vary directly with the time you are exposed to effective fire. . . A pint of sweat will save a gallon of blood."

A reverse slope defense is a positioning technique characterized by the location of defensive forces on a slope of a hill, ridge, or mountain that descends away from the enemy. It is one of several time tested techniques that may be used as part or all of a unit defense. The reverse slope defense protects the infantryman from enemy long-range direct and indirect fires.

WW II: JAPANESE DEFENSES ON THE PHILIPPINES -- THE 6TH ID

Reverse slope defenses were more often used by enemy forces in WW II and Korea than by U.S. units. During operations on Luzon in the Philippines (July 1945), the 6th Infantry Division discovered how the Japanese used such defenses to good effect. From 3-8 July the soldiers of the 6th ID tried to take a well-fortified position along what was called Lane's Ridge. The position was only assailable from the front since each flank was protected by dense thickets on one side and a deep gorge on the other. These frontal attacks got nowhere until on 7 July, after extensive air strikes (including napalm) and artillery barrages, the 6th managed to take the forward slope of the ridge. However, trying to continue the advance, the soldiers discovered that the Japanese had also fortified the reverse slope with 55 emplacements including 13 pillboxes. Once again artillery was called in, including shells using the new VT fuses. Under the cover of a smoke screen and with quad .50 caliber machine guns blazing, the men of the 6th Division stormed those positions on 8 July, but at heavy cost. The reverse slope defenses, unexpected and virtually impervious to observed artillery fire, proved to be more costly to take than forward slope positions.

A reverse slope defense is especially useful when the light infantryman finds himself on terrain which is exposed to enemy long range fire systems.

90TH ID: C CO ON THE FORWARD SLOPE

A clear example of the dangers of placing your men in an exposed position was the poor placement of C Company of the 1st Battalion/357th Infantry of the 90th Division in WW II. Then MAJ William DePuy, the battalion commander, recounts this story: "When we got up between the Prum and Kill Rivers [Germany, 1945], we encountered a very high open ridge. One of my company commanders put his 'C' Company out in the snow on a bare forward slope. They dug in and everyplace they dug they made dark doughnuts in the snow. On the other side of the river there was another ridge. On top of that ridge were some German assault guns, and they waited until the company commander had all of his troopers scattered around in their foxholes on the forward slope, and then, they just started firing with their two assault guns. It was murder. Finally, after they killed and wounded maybe 20 men in that company, the rest of them just got up and bolted out of there and went over to the reverse slope, which is where they belonged in the first place. So, being on a forward slope when the enemy has direct fire weapons, high velocity direct fire weapons, is suicide."

KOREA: "FRONT LIGHT, REAR HEAVY"

The Chinese Army in Korea greatly feared and respected the volume and accuracy of U.S. artillery fire and air strikes. In a captured Chinese report on their "lesson learned" after battles with the U.S., they said, "the enemy [U.S.] has stronger artillery than we." As a result, the Chinese decided to put their forces, "front light, rear heavy."
That is, they put a few reconnaissance troops on the forward slope of the hill while putting most of their troops on the reverse slope. They were placed "in well-protected holes dispersed over the crest of the hill from where one can easily push forward." These units achieved the maximum protection from the deadly effects of direct and indirect fire weapons at a minimum loss in defensive power. When facing an enemy rich in artillery and air power, the reverse slope defense proved a good bet for the Chinese.

The reverse slope defense brings the battle into the range of infantry weapons.

Argentine defensive positions in the Falklands were normally located on the forward slopes. This permitted the British forces to observe and accurately locate the Argentine positions. They then would direct accurate artillery fire and antitank guided missiles into those exposed positions. The Argentines were driven out of their holes by this concentrated fire. The British were quick to capitalize on the vacated positions. When 2nd Battalion, the Parachute Regiment (2 Para) took Wireless Ridge, they occupied the vacated Argentine positions, now on a reverse slope from the enemy. From those positions the British were protected from Argentine artillery fire. The Argentines were not able to place effective artillery fire over the crest of the mountain.

Positions on the reverse slope are hidden from enemy observation and can hide your strength and locations. Reverse slope defenses would also have been better than a forward slope defense at Darwin Hill and Boca House, where the British again used MILAN ATGMs to destroy Argentine forward positions one by one. British sources indicate that had the Argentines adopted reverse slope defensive positions with observation posts (OPs) on the forward slope, they would have denied the British detailed knowledge of their defensive positions.

FM 7-20, The Infantry Battalion, Dec 84. Chapter 5 gives the advantages, limitations, and organization of reverse slope defense.
FM 7-22, Light Infantry Battalion, Mar 87. Chapter 4, Section V, gives the concept, integration, organization and a scenario for a reverse slope defense.
FM 7-71, Light Infantry Company, Aug 87.
The Combat Studies Institute Research Survey #6, A Historical Perspective on Light Infantry, Sep 87, provides an excellent discussion and graphics of the reverse slope defense used by the CHICOM in Korea.

The reverse slope defense can (depending on METT-T) provide a critical "edge" to the light fighter on the firepower intensive battlefield.

[Figure: A Sample Company Reverse Slope Position]
Texas Department of Transportation: An Inventory of Transportation Commissioner Robert Nichols Correspondence and Speeches at the Texas State Archives, 1997-2005

The Texas Department of Transportation (TxDOT), in cooperation with local and regional officials, is responsible for planning, designing, building, operating and maintaining the state's transportation system. This involves planning, designing, and acquiring right-of-way for state highways and other modes of transportation; researching issues to save lives and solve problems; constructing bridges and improving airports; and maintaining roadways, bridges, airports, the Gulf Intracoastal Waterway, and ferry systems. Other functions carried out by TxDOT include public transportation, vehicle titles and registration, vehicle dealer registration, motor carrier registration, traffic safety, traffic information, and auto theft prevention.

The Texas Highway Department was created in 1917 (House Bill 2, 35th Texas Legislature, Regular Session) to stimulate the building and improvement of roads throughout the state. The Federal Aid Road Act of July 11, 1916 (39 Stat. 355; 16 U.S.C. 503; 23 U.S.C. 15, 48), signed into law by President Woodrow Wilson, initiated federal aid for highways with the requirement that each state receiving aid have a state highway department that controlled the building of roads. The Department was to administer federal funds to counties for state highway construction and maintenance and to provide for state motor vehicle registration, fees from which were to generate the state's required matching funds. The department began operation on June 4, 1917. After gathering information at public hearings over that summer, the commission proposed an 8,865-mile state highway network. Further influence from the national level came with the Federal Highway Act of 1921, which required state highway departments to control the design, construction and maintenance of roads rather than follow Texas' practice of allowing counties to undertake the work themselves with oversight from department engineers.

In 1969 the Legislature created the Texas Mass Transportation Commission (House Bill 738, 61st Legislature, Regular Session) to develop public mass transportation in Texas. This agency was merged with the Highway Department in 1975, creating the State Department of Highways and Public Transportation (Senate Bill 761, 64th Legislature, Regular Session). An executive order of May 1976 transferred the Governor's Office of Traffic Safety to the Department. The Texas Department of Transportation was created in 1991 (House Bill 9, 72nd Legislature, 1st Called Session), merging the Texas State Department of Highways and Public Transportation, the Texas Department of Aviation (created as the Texas Aeronautics Commission in 1945, name changed to Texas Board of Aviation in 1989); and the Texas Motor Vehicle Commission (created in 1971). In 1997 the Texas Turnpike Authority merged with the Texas Department of Transportation (Senate Bill 370, 75th Legislature, Regular Session).

The Texas Department of Transportation's governing body is the Texas Transportation Commission, originally composed of three members, increased to five in 2003 (Senate Bill 409, 78th Legislature, Regular Session). Commissioners are representatives of the general public appointed by the governor with advice and consent of the senate for overlapping six-year terms. Since 2003, one of the members must represent rural Texas.
The positions are part-time salaried positions, and the chair (appointed by the governor) was originally called the commissioner of transportation; since 2003, each member is referred to as a commissioner. (Sources include: Guide to Texas State Agencies, 11th edition (2001); An Informal History of the Texas Department of Transportation, by Hilton Hagan (2000) (previously available on the TxDOT website, the link has since been removed); and divisional information, found on the agency's website (http://www.dot.state.tx.us/about_us/) accessed March 2009.)

Robert L. Nichols was born in 1944 and was raised in Jacksonville, Texas. He received a degree in Industrial Engineering from Lamar University in 1968. He was a successful small businessman and served for a time as the Mayor of Jacksonville, streamlining government and cutting property taxes during his tenure. Governor George Bush appointed Nichols to the Texas Transportation Commission in 1997. He was reappointed in 2003 by Governor Rick Perry and served until June 30, 2005, when he resigned to run for the Texas Senate. In his resignation letter, Nichols listed accomplishments of the Commission during his tenure: tripling the number of roadway construction projects built each year without raising taxes, establishing regional mobility authorities, accelerating the reconstruction of deteriorating bridges, establishing and accelerating the construction of corridors throughout the state on the Texas Trunk System, passing rail legislation, signing working agreements with major railroads for the relocation and preservation of rail corridors, and beginning the implementation of the Trans Texas Corridor. Robert Nichols began serving as state senator for District Three in 2007. He also serves on several local boards, including Lon Morris College, the East Texas Medical Center, and the Nan Travis Hospital Foundation. (Sources: Texas Senate website article on Senator Nichols (http://www.senate.state.tx.us/75r/senate/members/dist3/dist3.htm), accessed March 2009; Senator Nichols personal website (http://nicholsforsenate.com/about/), accessed March 2009; and the correspondence files in his records.)

The Texas Department of Transportation (TxDOT), in cooperation with local and regional officials, is responsible for planning, designing, building, operating and maintaining the state's transportation system. The Texas Transportation Commission is TxDOT's governing body. These files consist of administrative correspondence and speeches of Transportation Commissioner Robert L. Nichols, dating 1997-2005. The files cover a wide range of TxDOT issues, including changes to highway routes, road and bridge construction, funding, allocation of vehicle registration fees, proposed legislation, motor carrier rules, construction of corridors through the Texas Trunk System, toll roads, fuel theft, traffic congestion, rail corridors, and the creation of regional mobility authorities. The bulk of the records is incoming and outgoing administrative correspondence between Commissioner Nichols and legislators, congressmen, local and state officials, civic and community groups, TxDOT officials and staff, etc. The remaining files are speeches given by Commissioner Nichols to civic and community groups, professional associations, and state agencies; also presentations at the annual Texas Department of Transportation conference; and testimony given before the Texas Senate and House of Representatives or legislative subcommittees.
To prepare this inventory, the described materials were cursorily reviewed to delineate series, to confirm the accuracy of contents lists, to provide an estimate of dates covered, and to determine record types.

Restrictions on Access

Because of the possibility that portions of these records fall under Public Information Act exceptions including, but not limited to, email addresses (552.137), an archivist must review these records before they can be accessed for research. The records may be requested for research under the provisions of the Public Information Act (V.T.C.A., Government Code, Chapter 552). The researcher may request an interview with an archivist or submit a request by mail, fax, or email including enough description and detail about the information requested to enable the archivist to accurately identify and locate the information requested. If our review reveals information that may be excepted by the Public Information Act, we are obligated to seek an open records decision from the Attorney General on whether the records can be released. The Public Information Act allows the Archives ten working days after receiving a request to make this determination. The Attorney General has 45 working days to render a decision. Alternately, the Archives can inform you of the nature of the potentially excepted information and if you agree, that information can be redacted or removed and you can access the remainder of the records. This restriction applies only to the Speeches series.

Materials do not circulate, but may be used in the State Archives search room. Materials will be retrieved from and returned to storage areas by staff members.

Restrictions on Use

Most records created by Texas state agencies are not copyrighted and may be freely used in any way. State records also include materials received by, not created by, state agencies. Copyright remains with the creator. The researcher is responsible for complying with U.S. Copyright Law (Title 17 U.S.C.).

(Identify the item and cite the series), Texas Department of Transportation Commissioner Robert Nichols correspondence and speeches. Archives and Information Services Division, Texas State Library and Archives Commission.

Accession number: 2009/053

These records were transferred to the Archives and Information Services Division of the Texas State Library and Archives Commission by the Texas Department of Transportation on November 20, 2008.

Processed by Laura K. Saegert, March 2009

Detailed Description of the Records
What you're looking at above is the Actroid-SIT, a human-like robot from Japanese firm Kokoro. It can make eye contact and even gestures in the direction of a person trying to speak to her, enabling it to competently handle crowds of people. Nara Institute of Science and Technology researchers studied how individuals and groups interacted with the robot to develop new behavior. They focused on "interruptibility" and "motion parameterization" to improve human-robot interaction.

5. Quantum Teleportation

The Furusawa group at the University of Tokyo has succeeded for the first time in demonstrating complete quantum teleportation of photonic quantum bits by a hybrid technique. The demonstration of quantum teleportation of photonic quantum bits by the Furusawa group shows that transport efficiency can be over 100 times higher than before. Also, because no measurement is needed after transport, this result constitutes a major advance toward quantum information processing technology.

4. Driverless Cars

The Google driverless car is a project by Google that involves developing technology for autonomous cars. The software powering Google's cars is called Google Chauffeur. The project is currently being led by Google engineer Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Google's robotic cars have about $150,000 in equipment including a $70,000 LIDAR (laser radar) system. The range finder mounted on the top is a Velodyne 64-beam laser. This laser allows the vehicle to generate a detailed 3D map of its environment. The car then takes these generated maps and combines them with high-resolution maps of the world, producing different types of data models that allow it to drive itself.

3. Black Hornet Nano

The Black Hornet Nano is a military micro unmanned aerial vehicle (UAV) developed by Prox Dynamics AS of Norway, and in use by the British Army. The unit measures around 10 x 2.5 cm (4 x 1 in) and provides troops on the ground with local situational awareness. They are small enough to fit in one hand and weigh just over half an ounce (16 g, including batteries). The UAV is equipped with a camera which gives the operator full-motion video and still images. They were developed as part of a £20 million contract for 160 units with Marlborough Communications Ltd.

2. Stem Cell Burger

On August 5, 2013, the world's first lab-grown burger was cooked and eaten at a news conference in London. Scientists from the Netherlands, led by professor Mark Post, took stem cells from a cow and grew them into strips of muscle that they combined to make a burger. The burger was cooked by chef Richard McGeown of Couch's Great House Restaurant, Polperro, Cornwall, and tasted by critics Hanni Ruetzler, a food researcher from the Future Food Studio, and Josh Schonwald.

1. HULC Exoskeleton

Human Universal Load Carrier, or HULC, is an un-tethered, hydraulic-powered anthropomorphic exoskeleton developed by Professor H. Kazerooni and his team at Ekso Bionics. It is intended to help soldiers in combat carry a load of up to 200 pounds at a top speed of 10 miles per hour for extended periods of time. After being under development at Berkeley Robotics and Human Engineering Laboratory since 2000, the system was announced publicly at the AUSA Winter Symposium on February 26, 2009 when an exclusive licensing agreement was reached with Lockheed Martin.
World Environment Day Melbourne 2012
From Greenlivingpedia, a wiki on green living, building and energy

The Wilderness Society held a special event on 5 June 2012, World Environment Day. It was big – 1 hectare in fact.
- When: 1pm, Tuesday 5 June 2012, Melbourne CBD
- Where: Corner of Bourke and Elizabeth Streets (bottom of Bourke St Mall)

All were invited to come and join in the action to celebrate the beautiful gems that are our wild places, and shed light on some of the risks they face. Hundreds of people joined hands around one of Melbourne CBD's 1 hectare city blocks on World Environment Day as a representation of the threat to our forests, oceans and wild places from logging, unsustainable fishing and mining.

What's in a city block?
- Every day, 10 city block sized areas of Victoria's forests are logged, mostly to make cheap products like Reflex paper
- An area the size of 3,500 city blocks of the Kimberley's stunning coastline is currently at risk of being converted into the largest oil and gas processing hub in the southern hemisphere
- And in a once in a lifetime opportunity, the equivalent to 108,000,000 city blocks of our marine territory is up for protection in 2012!

Hundreds of Victorians joined hands on the United Nations' 12th World Environment Day to show support for Australia's world renowned natural environment.
Under the proposed changes - part of a wide-ranging overhaul of the Nutrition Facts label - serving sizes in many cases will go up, so a serving size would not, for example, be half a bagel, or half a can of energy drink. While the changes are "appropriate", says the Behavioral Science and Regulation Group (a collection of students and fellows at Harvard's Law, Kennedy, and Business Schools), they are also risky given that more than half of consumers perceive the term 'serving size' to be a recommended serving size, not the reference amount customarily consumed (RACC).

So where serving sizes are increased to reflect typical consumption habits, this "could lead [some consumers] to eat more than they otherwise would… because these consumers believe that the FDA has implicitly endorsed the serving size as healthy", warns the group.

So what is the alternative, given that current (unrealistically small) serving sizes are arguably just as misleading (who really eats half a muffin washed down with half a can of Monster Energy?)? One option worth considering is ditching the word 'serving' and replacing it with something else, suggests the group: "We suggest that the word 'serving' and the phrase 'serving size' be changed to avoid an implied endorsement. Changing 'serving' to a word that does not suggest the context of a meal, like 'unit' or 'quantity,' may mitigate the endorsement effect."

Alternatively, it says, the FDA could consider removing the lines that mention 'serving' and adding, next to the words 'Amount per ___,' the fraction of the container that the RACC represents (e.g. 'Amount per ⅔ cup (⅛ of container)').

American Diabetes Association: We urge FDA to conduct consumer education

While most other commentators are happy with the word 'serving' given that it is more consumer-friendly than most alternatives, several also express concerns about possible unintended consequences of the proposed changes. The American Diabetes Association, for example, says: "Ensuring the Nutrition Facts label information reflects actual consumer eating habits will help individuals fully understand the nutritional content of the food they are consuming… [but] we urge FDA to conduct consumer education to ensure these changes to the RACCs are not misunderstood by consumers as recommendations to consume larger portions."

Weight Watchers - which supports the move - also notes that it could "create the impression that the larger portion size is the proper portion size."
Plastic chemical triggers allergic asthma

A plastic chemical commonly used in baby bottles and the lining of food and beverage cans may be at least partially responsible for allergic asthma, a new study suggests. The animal model study, presented Sunday in New Orleans at the American Academy of Allergy, Asthma & Immunology annual meeting, showed that mice born to mothers who were exposed to bisphenol A, or BPA, suffered allergic asthma.

BPA has been known to cause myriad health problems. Even the Food and Drug Administration, which initially rejected the link between BPA exposure and health conditions, has recently admitted that BPA can potentially be harmful.

Dr. Erick Forno of the pediatrics department at the University of Miami Miller School of Medicine and colleagues had earlier found that baby mice born to mothers who had been exposed to BPA were at higher risk of allergic asthma. In the current study, Dr. Forno and colleagues intended to determine what levels of exposure would affect the animals. The researchers prepared drinking water with 0.1, 1, or 10 micrograms per mL of BPA and gave it to female mice before, during and after pregnancy. Baby mice were given ovalbumin immediately after birth to induce asthma. They found that mice born to mothers exposed to 10 micrograms per mL of BPA developed airway problems, while mice born to mothers exposed to low or no BPA did not develop the problem.

Allergic asthma is the most common type of asthma, affecting some 90 percent of children with asthma and 50 percent of adults with asthma. Individuals with allergic asthma have airways that are hypersensitive to allergens, and when they are exposed to allergens they may suffer coughing, wheezing, shortness of breath, rapid breathing and tightening of the chest, symptoms that are commonly experienced by patients with non-allergic or allergic asthma. The allergens of concern include pollen from trees, grasses and weeds, mold spores, animal dander and saliva, dust mite feces and cockroach feces, among others.

Studies by the Environmental Working Group suggested that BPA contamination in canned food is likely to put consumers at risk of ingesting doses of the chemical that are very close to levels now known to harm laboratory animals. EWG scientist Sonya Lunder said parents should choose BPA-free baby bottles or water bottles and formulas labeled BPA-free to avoid BPA exposure. Babies are more sensitive than adults to the harm from the chemical and they also ingest more than adults. Lunder also suggested pregnant women should minimize exposure to canned foods, polycarbonate food containers, and BPA-containing medical devices.

Early studies have found associations between BPA and higher risk of abnormal behavior, harm to the male reproductive system, the immune system and the brain, heart disease, breast cancer, metabolic syndrome, heart attack and diabetes, among other conditions.

By David Liu
A DNS (Domain Name Service) server translates easy-to-remember names into IP addresses, or does the reverse. The administrative work is done on the server side; for the client side, you just configure the machine to use the DNS server. Before we start, I assume that you are already connected to the Internet. For a text editor, you can use any program that you are familiar with; in this sample, I use vim. The installation is as easy as below:

Step 1. Install bind9

Open a Linux terminal (Applications > Accessories > Terminal) and type:

sudo apt-get install bind9
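Once bind9 is installed and the client machine is pointed at it, it is worth a quick check that name resolution actually works from the client. The Python sketch below is one way to do that; it uses only the standard library and the operating system's configured resolver, and the host name and IP address in it are placeholders rather than values from this tutorial.

```python
import socket

def forward_lookup(name):
    """Ask the configured DNS server to turn a name into an IP address."""
    return socket.gethostbyname(name)

def reverse_lookup(address):
    """Ask the configured DNS server to turn an IP address back into a name."""
    hostname, _aliases, _addresses = socket.gethostbyaddr(address)
    return hostname

if __name__ == "__main__":
    # Replace these placeholders with a name and address your own DNS server manages.
    print(forward_lookup("example.com"))
    print(reverse_lookup("93.184.216.34"))
```

If both calls return the values you expect, the server and the client configuration are working.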
- Remainder/Divisibility Puzzles, a selection of answers from the Dr. Math archives.
- Remainders of 1, 2, 3, 4 - Find the smallest whole number that when divided by 5, 7, 9, and 11 gives remainders of 1, 2, 3, and 4 respectively.
- Remainder Problem - What number less than 500 produces remainder 4 when divided by 5, remainder 7 when divided by 9, and remainder 9 when divided by 11?
- Multiply Two Numbers (No Zeros) to make 5 Billion - What two numbers, neither of them containing zeros, can be multiplied together to make 5,000,000,000?
- One Billion as Product of Two Numbers with No Zeros - Write 1,000,000,000 as the product of two numbers, neither of which contains any zeros.
- Sums Divisible by 11 - Why is the sum of a number with an even number of digits and that same number written in reverse always divisible by 11?
- Largest 7-Digit Number - Work out the largest 7-digit number you can applying two rules: every digit in the number must be able to be divided into the number, and no digit can be repeated.
- Extraordinary Social Security Number - The number's nine digits contain all the digits from 1 to 9. When read from left to right the first two digits form a number divisible by two, the first three digits form a number divisible by three...
- Divisibility Word Problem - Arrange the digits 0 to 9 such that the number formed by the first digit is divisible by 1, the number formed by the first two digits is divisible by 2, that formed by the first three digits divisible by 3, and so forth; thus the number formed by the first 9 digits will be divisible by 9 and that formed by all 10 digits divisible by 10.
- Find the Smallest Number - A Remainder Problem - Find the smallest number, M, such that: M/10 leaves a remainder of 9; M/9 leaves a remainder of 8; M/8 leaves 7; M/7 leaves 6; M/6 leaves 5; M/5 leaves 4; M/4 leaves 3; M/3 leaves 2; and M/2 leaves 1. (A brute-force check of this last puzzle is sketched below.)
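The last puzzle above rewards a closer look: a number M that leaves a remainder of n - 1 when divided by every n from 2 through 10 is exactly one less than a common multiple of 2 through 10, so the smallest such M is lcm(2, ..., 10) - 1. The short Python sketch below, added here as an illustration rather than taken from the Dr. Math archive, checks that reasoning both by brute force and by the shortcut.

```python
from functools import reduce
from math import gcd

def smallest_remainder_number(limit=10):
    """Brute force: the smallest M with M % n == n - 1 for every n from 2 to limit."""
    m = 1
    while not all(m % n == n - 1 for n in range(2, limit + 1)):
        m += 1
    return m

# Shortcut: M + 1 must be divisible by every n from 2 to 10, so M = lcm(2..10) - 1.
lcm_2_to_10 = reduce(lambda a, b: a * b // gcd(a, b), range(2, 11))

print(smallest_remainder_number())  # 2519
print(lcm_2_to_10 - 1)              # also 2519, confirming the shortcut
```

The same brute-force pattern, with the moduli and target remainders swapped in, will also settle the first two puzzles in the list.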
Cancer is a complex disease driven by numerous factors: environment, lifestyle, and genetic make-up. In some cases, genetic variation shared within families is related to risk for disease. For over three decades, NCI scientists have studied families in which multiple individuals have developed specific cancers. This research, now aided by new genomic technologies, has led to the discovery of genes and environmental exposures that affect cancer risk in families and the general population. To see a list of family studies that are actively recruiting participants, visit the Active Clinical Studies page.

Pleuropulmonary Blastoma Cancer Predisposition Syndrome (actively recruiting)
Pleuropulmonary blastoma (PPB) is a rare tumor of the lung. Research has shown that PPB may be part of an inherited cancer predisposition syndrome caused by changes in a gene known as DICER1. The PPB Cancer Study is an observational study of children with PPB and their families.

Familial Melanoma Studies (actively recruiting)
DCEG researchers are searching for melanoma susceptibility genes in melanoma-prone families.

Li-Fraumeni Syndrome Study (actively recruiting)
Li-Fraumeni Syndrome (LFS) is a rare, inherited disorder which leads to a higher risk of certain cancers. NCI has evaluated families with LFS since the syndrome was first recognized in 1969. DCEG is now expanding this research through a clinical study and participation in a multi-institutional collaboration.

Familial Blood and Lymph Node Cancers Study (actively recruiting)
DCEG investigators have been studying the causes of familial blood and lymph node cancers for over 30 years. Ongoing advances in genetics, anticipated applications of advanced technologies, and data from families have allowed research efforts to expand.

Inherited Bone Marrow Failure Syndromes (actively recruiting)
The inherited bone marrow failure syndromes (IBMFS) are a group of rare genetic blood disorders. DCEG investigators are leading a clinical study to better understand how cancers develop in persons with IBMFS.

Waldenström's Macroglobulinemia Study (actively recruiting)
Waldenström's macroglobulinemia (WM) is a rare type of tumor that belongs to a group of disorders called lymphoproliferative diseases. DCEG researchers are leading a study to determine what causes WM to sometimes develop in two or more family members.

Familial Testicular Cancer Study (actively recruiting)
DCEG investigators are conducting the Familial Testicular Cancer Study to research the genetic causes of testicular cancer.

Familial Chronic Lymphocytic Leukemia Study (actively recruiting)
DCEG investigators are studying families with multiple cases of chronic lymphocytic leukemia (CLL), the most common leukemia in adults in the Western Hemisphere.

Multidisciplinary Etiologic Studies of Hereditary Breast/Ovarian Cancer
DCEG researchers have been studying the Hereditary Breast/Ovarian Cancer (HBOC) syndrome since the 1960s. This study is now closed to patient enrollment, and researchers are analyzing previously collected data.

Chordoma Study (actively recruiting)
Chordoma is a rare bone cancer that develops at the base of the skull, in a vertebra, or at the end of the spine. DCEG investigators are studying families with multiple relatives with chordoma.
National Ovarian Cancer Prevention and Early Detection Study
The National Ovarian Cancer and Early Detection Study (GOG-0199) is a prospective study of women who are at increased risk of ovarian cancer, either because they or a close relative have a mutation in the BRCA1 or BRCA2 genes, or because they have a strong family history of breast and/or ovarian cancer. The clinical phase of this study is complete, and efforts are focused on data analysis.
French missionary, born at Orléans, France, 10 January, 1607; martyred at Ossernenon, in the present State of New York, 18 October, 1646. He was the first Catholic priest who ever came to Manhattan Island (New York). He entered the Society of Jesus in 1624 and, after having been professor of literature at Rouen, was sent as a missionary to Canada in 1636. He came out with Montmagny, the immediate successor of Champlain. From Quebec he went to the regions around the Great Lakes where the illustrious Father de Brébeuf and others were labouring. There he spent six years in constant danger. Though a daring missionary, his character was of the most practical nature, his purpose always being to fix his people in permanent habitations. He was with Garnier among the Petuns, and he and Raymbault penetrated as far as Sault Ste Marie, and "were the first missionaries", says Bancroft (VII, 790, London, 1853), "to preach the gospel a thousand miles in the interior, five years before John Eliot addressed the Indians six miles from Boston Harbour". There is little doubt that they were not only the first apostles but also the first white men to reach this outlet of Lake Superior. No documentary proof is adduced by the best-known historians that Nicholet, the discoverer of Lake Michigan, ever visited the Sault. Jogues proposed not only to convert the Indians of Lake Superior, but the Sioux who lived at the head waters of the Mississippi.

His plan was thwarted by his capture near Three Rivers returning from Quebec. He was taken prisoner on 3 August, 1642, and after being cruelly tortured was carried to the Indian village of Ossernenon, now Auriesville, on the Mohawk, about forty miles above the present city of Albany. There he remained for thirteen months in slavery, suffering apparently beyond the power of natural endurance. The Dutch Calvinists at Fort Orange (Albany) made constant efforts to free him, and at last, when he was about to be burnt to death, induced him to take refuge in a sailing vessel which carried him to New Amsterdam (New York). His description of the colony as it was at that time has since been incorporated in the Documentary History of the State. From New York he was sent, in mid-winter, across the ocean on a lugger of only fifty tons burden and after a voyage of two months, landed Christmas morning, 1643, on the coast of Brittany, in a state of absolute destitution. Thence he found his way to the nearest college of the Society. He was received with great honour at the court of the Queen Regent, the mother of Louis XIV, and was allowed by Pope Urban VIII the very exceptional privilege of celebrating Mass, which the mutilated condition of his hands had made canonically impossible; several of his fingers having been eaten or burned off. He was called a martyr of Christ by the pontiff. No similar concession, up to that, is known to have been granted.

In early spring of 1644 he returned to Canada, and in 1646 was sent to negotiate peace with the Iroquois. He followed the same route over which he had been carried as a captive. It was on this occasion that he gave the name of Lake of the Blessed Sacrament to the body of water called by the Indians Horicon, now known as Lake George. He reached Ossernenon on 5 June, after a three weeks' journey from the St. Lawrence. He was well received by his former captors and the treaty of peace was made. He started for Quebec on 16 June and arrived there 3 July.
He immediately asked to be sent back to the Iroquois as a missionary, but only after much hesitation did his superiors accede to his request. On 27 September he began his third and last journey to the Mohawk. In the interim sickness had broken out in the tribe and a blight had fallen on the crops. This double calamity was ascribed to Jogues, whom the Indians always regarded as a sorcerer. They were determined to wreak vengeance on him for the spell he had cast on the place, and warriors were sent out to capture him. The news of this change of sentiment spread rapidly, and, though fully aware of the danger, Jogues continued on his way to Ossernenon, though all the Hurons and others who were with him fled except Lalande. The Iroquois met him near Lake George, stripped him naked, slashed him with their knives, beat him and then led him to the village. On 18 October, 1646, when entering a cabin he was struck with a tomahawk and afterwards decapitated. The head was fixed on the palisades and the body thrown into the Mohawk.
<urn:uuid:626e1722-b981-4436-bc94-803ecdb342fd>
CC-MAIN-2016-26
http://www.catholic.org/encyclopedia/view.php?id=6353
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00046-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981963
1,372
3.421875
3
6 Ways to Avoid Hormone Disrupting Chemicals You've probably heard of BPA, the chemical that was in baby bottles, sippy cups, and water bottles. There's been lots of concern about this chemical as an endocrine disruptor because of its ability to alter hormones, causing all sorts of health problems such as obesity, cancer, heart disease, early puberty and learning disabilities. Back in 2008, consumers learned about the problems of BPA and demanded that companies remove it from products for children. Most did, and in 2012 the FDA banned BPA in baby bottles and sippy cups. Flash forward to 2014, and we have many BPA-free plastics for sale and widely used in bottles and toddler cups. But last week, Mother Jones wrote an extensive piece about the health problems associated with BPA-free plastics-- in fact, all plastics-- that leach hormone disrupting chemicals. According to the Environmental Working Group, endocrine disruptors are compounds that can interfere with the way our hormones work-- and hormones control fetal development, puberty, and metabolism. Chemicals that interfere with these processes may be risky. Hormone disruption has been linked to early puberty, heart disease, cancer, reproductive problems, obesity, and interference with cancer treatments. According to the Mother Jones article, the list is even longer: "Scientists have tied BPA to ailments including asthma, cancer, infertility, low sperm count, genital deformity, heart disease, liver problems, and ADHD. 'Pick a disease, literally pick a disease,' says Frederick vom Saal, a biology professor at the University of Missouri-Columbia who studies BPA." So it makes sense to avoid them-- and to work for the reform of our dysfunctional chemical laws that got us into this situation to begin with. 1. Use stainless steel or glass for baby bottles, sippy cups, and water bottles. There are many great choices out there now. I used glass bottles with both my girls and never had any trouble with breakage or cracks. Now, my girls use small glass bottles with silicone sleeves for school lunches. They used small stainless steel sippy cups when they were toddlers and in PreK. 2. Ditch canned food-- go for fresh or frozen instead. BPA is still used in the lining of most canned foods. Avoiding canned foods is a good way to lower exposure to BPA for your whole family. 3. Filter your water. Our friends at EWG highlighted a couple of chemicals that pollute our drinking water and are endocrine disruptors. These can be avoided by filtering your water. See this post from EWG for more information about 12 hormone-altering chemicals and where to find water filters that will reduce exposure to endocrine disruptors. 4. Buy and use a vacuum cleaner with a HEPA filter. When I did my body burden test, I had high levels of the flame retardant Deca. This and other flame retardants are endocrine disruptors. Most of our furniture contains these chemicals, but a vacuum cleaner with a HEPA filter will pick up more of the toxic dust and lessen exposures. 5. Ditch the non-stick pans and water-resistant coatings on clothes, furniture and carpets. These contain PFCs, and according to EWG, "Perfluorochemicals are so widespread and extraordinarily persistent that 99 percent of Americans have these chemicals in their bodies," and they are linked to "decreased sperm quality, low birth weight, kidney disease, thyroid disease and high cholesterol, among other health issues." Use stainless steel or cast iron instead. 6. 
Eat as much organic produce as possible, at least the (original) Dirty Dozen. A certain class of chemicals used as pesticides in farming--organophosphate pesticides--has been linked to effects on brain development, behavior and fertility, as well as "interfering with the way testosterone communicates with cells, lowering testosterone and altering thyroid hormone levels," according to EWG. Limit exposure to pesticides in produce by eating organic fruits and vegetables whenever possible. The Environmental Working Group shares more information and more ways to limit exposure to hormone disrupting chemicals. Most importantly, to protect all kids and families everywhere, we need strong laws so that products must be proven safe before they are sold in stores. Right now, people assume that products are safe and that someone, somewhere has tested them for safety. This is far from the truth. Let's make choices to protect our families now, and work to change our system to better protect families well into the future.
<urn:uuid:9bbe797e-2e2b-403a-ba9f-7895942f748b>
CC-MAIN-2016-26
http://www.momsrising.org/blog/6-ways-to-avoid-hormone-disrupting-chemicals/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00059-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950338
934
2.890625
3
Michael LaFosse and Richard Alexander are probably the only origami artists in the world today who routinely make custom paper. In fact, other master origami artists prize their papers, which are made with permanent, finely ground pigments so that folded pieces will last hundreds of years. Both Michael and Richard have backgrounds in science, which explains the strong natural history focus to their work. The commonly understood definition of origami is paper folding from a single, uncut square. Folding techniques most likely originated in Asia circa 600 A.D., and came to Europe via the Silk Road. The art of folding paper has roots in Japan but it was German educator, Friedrich Fröbel, who introduced paper folding into early grade school curricula in the 1800s. One is said to "perform" a piece of origami. Michael explains. "The very best origami begins in the design stage, where the folding, from start to finish, is elegant ... the finished piece has to look alive." In preparation for making Wilbur (1991), Michael spent many hours observing piglets at the Topsfield Fair. Having the right paper was critical. Experimenting, he came up with the perfect handmade paper -- pale pink in color, fairly stiff, with fuzziness to its finish. The actual folding of the piece took approximately six hours.
<urn:uuid:4534e061-9f77-46b7-9b79-4419fe6e0ea8>
CC-MAIN-2016-26
http://www.lowellsun.com/folkfestival/ci_26207918/michael-lafosse-and-richard-alexander-origami-and-hand
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393146.70/warc/CC-MAIN-20160624154953-00190-ip-10-164-35-72.ec2.internal.warc.gz
en
0.96482
272
3.296875
3
Seeing the throngs of men, women and children in Chicago’s Grant Park cheering the nation’s first African-American president-elect; hearing civil rights lions like Jesse Jackson, John Lewis, Roger Wilkins and Andrew Young grope for words when describing their feelings about the election; listening to black schoolchildren on television express in simple phrases what Barack Obama ’s achievement meant to them; watching replays of the Rev. Dr. Martin Luther King Jr. declaim “I have a dream” on the steps of the Lincoln Memorial; and downloading videos of jubilant crowds in the Nairobi slums chanting a Kenyan surname over and over—all this made me think of a passage from the New Testament: the Magnificat. Fifty-six million voters did not vote for Senator Obama; some reports claim that almost 50 of the 267 active U.S. Catholic bishops stated that it was a grave sin (some called it cooperation in murder) to cast a vote for the Illinois senator; many priests warned parishioners against making such a choice; and millions of Catholics, even if they did not agree with their pastors, did not vote for Obama because their overall political views were more closely aligned with those of Sen. John McCain . But were there many Christians, even Obama opponents, who watched their African-American brothers and sisters weeping tears of jubilation and pride, whose hearts were unmoved by the transformation among a people who had suffered for so long? Many must have heard echoes of Mary’s words in the Gospel of Luke: “He has...lifted up the lowly; he has filled the hungry with good things....” In Mary’s song of praise, God visits an oppressed people and restores their fortunes “according to the promises he made to our ancestors.” The civil rights movement sprang from African-American churches that believed God would rescue the poor, that the Spirit would lead them and that Jesus loved them. Dr. King used familiar biblical imagery—in particular, the exodus of the Hebrew people out of Egyptian slavery—to call a community to hope in the face of fear. “One day every valley shall be exalted, every hill and mountain shall be made low, the rough places will be made plain, and the crooked places will be made straight, and the glory of the Lord shall be revealed, and all flesh shall see it together,” he said in 1963, paraphrasing Isaiah. This is prophetic language. It looks ahead to the “one day” when God’s justice will set things right. But who would have thought that the upending of the status quo would happen so quickly? Robert F. Kennedy, for one. In 1968 Senator Kennedy said, “Things are moving so fast that a Negro could be president in 40 years.” It must have seemed outlandish at the time. Five years earlier, Dr. King had been arrested in Birmingham. And just a year earlier, riots in Newark and Detroit had stripped the country of hope. But the prophet sees that some day “one day” will be today. John LaFarge, S.J., adverted to this hope in one of his most popular books. Father LaFarge, a longtime editor of America, was deeply involved in interracial issues in the 1930s, when Robert Kennedy was still a boy. In The Race Question and the Negro , published in 1943, he examined the perils of racism and confidently concluded that even someone infected by prejudice will “by the logic of his own principles and by the light of his own experience...come to this road at long last.” That is why the scenes in Grant Park were so moving. The “one day” had come “at long last.” Despite the passionate rhetoric used to describe Mr. 
Obama, he is neither a messiah nor the anti-Christ. But his election is a sign that believers downplay only if they wish to downplay God’s activity in the world. It is a sign that the “lowly” can be lifted up—to previously unimaginable heights. That the “hungry” can be filled with the nourishing food of jubilation, pride and hope. That the valleys shall be exalted. That the mountaintop is a real place. Not every Christian rejoiced in the election results. But every Christian who knows the Gospels, even those who disagree with Barack Obama’s politics, can be gladdened to see this particular sign of progress. “We rejoice with the rest of our nation,” wrote Archbishop Donald T. Wuerl of Washington, D.C., “at the significance of this time.” For this sign our souls should magnify the Lord.
<urn:uuid:2f418342-0c15-40a4-a673-820fb5d9d388>
CC-MAIN-2016-26
http://americamagazine.org/print/issue/677/many-things/many-things
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00052-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966401
994
2.75
3
Using Excel provides users with many ways to store information, and one of the most common uses is a calendar. By using an Excel template for a calendar you will have an easy way to keep track of time in a more organized manner, recording months, weeks and days. Using a calendar template in Excel gives you a simple way to stay on top of significant dates during the year. Calendar Template Excel: Months In the first part of the Excel calendar template, you will have months. This is simply a template that lists the month followed by all of the days that appear in the year's sequence. With the months template you will have a broad, overall view of the days and weeks of the year, and you will also be able to keep track of holidays and any important events that take place during this time. Setting up the months is one of the first things you should do when creating an Excel calendar template. Calendar Template Excel: Days and Weeks The next part of the Excel calendar template is days and weeks. This is the part of the calendar that shows the days and weeks of each month. In this part of the template there are boxes numbered from 1 to 31, and each box represents one day of the month. By using this part of the template you will be able to keep track of each date during the month and record more detailed information for it.
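For readers who would rather generate the month-and-day-boxes layout described above than fill it in by hand, the same structure can be scripted. The snippet below is a minimal sketch only, not part of any particular template: it assumes the openpyxl library is installed, and the year, sheet names and file name are purely illustrative.

```python
# A minimal sketch of the "months" and "days and weeks" layout described above,
# using Python's standard calendar module and the openpyxl library (an assumption --
# any spreadsheet library would do). Year, month and file name are illustrative.
import calendar
from openpyxl import Workbook

def write_month(ws, year, month):
    """Write one month grid: title row, weekday headers, then one box per day (1-31)."""
    ws.cell(row=1, column=1, value=f"{calendar.month_name[month]} {year}")
    for col, day_name in enumerate(calendar.day_abbr, start=1):
        ws.cell(row=2, column=col, value=day_name)           # Mon .. Sun headers
    for week_offset, week in enumerate(calendar.monthcalendar(year, month)):
        for col, day in enumerate(week, start=1):
            if day:                                          # 0 means "outside this month"
                ws.cell(row=3 + week_offset, column=col, value=day)

wb = Workbook()
wb.remove(wb.active)                                         # drop the default empty sheet
for month in range(1, 13):                                   # one sheet per month
    write_month(wb.create_sheet(title=calendar.month_name[month]), 2024, month)
wb.save("calendar_template.xlsx")
```

Opening the saved workbook shows one worksheet per month, with the weekday headers in row 2 and the numbered day boxes laid out week by week beneath them.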
<urn:uuid:260ced47-3fbd-4d31-8570-5ddb9e448f88>
CC-MAIN-2016-26
http://exceltemplates.net/calendar/calendar-template-excel/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.72/warc/CC-MAIN-20160624154955-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93693
329
3.046875
3
Imagine being able to zoom into the brain to see various cells the way we zoom into Google maps of the world and can see houses on a street. And keep in mind that the brain is considered the most complex structure in the universe with 86 billion neurons. Zooming in is now possible thanks to a new brain atlas with unprecedented resolution. BigBrain is the first 3D microstructural model of the entire human brain, and is free and publicly available to researchers world-wide. A new study shows that memory pathology in older mice with Alzheimer’s disease can be reversed with treatment. The study by researchers from the Montreal Neurological Institute and Hospital - The Neuro, at McGill University and at Université de Montréal found that blocking the activity of a specific receptor in the brain of mice with advanced Alzheimer’s disease (AD) recovers memory and cerebrovascular function. What is ALS?Amyotrophic Lateral Sclerosis (ALS) also known as Lou Gehrig’s disease, is a neurodegenerative disease in which progressive muscle weakness leads to paralysis. ALS is a result of the death of motor neurons (nerve cells) in the brain and spinal cord that control voluntary muscle movement. Most people survive less than five years following diagnosis, but a small percentage of patients live for ten years or even longer. So far, there is no cure. Dr. Frederick Andermann, neurologist and researcher at the Montreal Neurological Institute and Hospital – The Neuro, McGill University has been named an Officer of the Order of Quebec. Dr. Andermann is among 33 distinguished recipients who will be decorated by Premier of Quebec Pauline Marois at a ceremony on June 6, 2013 at the salle du Conseil législatif de l’hôtel du Parlement. Dr. Study compares data from hundreds of people in childhood and old age A new study shows compelling evidence that associations between cognitive ability and cortical grey matter in old age can largely be accounted for by cognitive ability in childhood. The joint study by the Montreal Neurological Institute and Hospital, The Neuro, McGill University and the University of Edinburgh, UK was published today, June 4 in Local students compete and put their science skills and knowledge to the test at Let’s Talk Science’s All Science Challenge MONTREAL, QC – Approximately 110 Grade 6, 7 and 8 students from 14 local schools are getting ready to compete in Let’s Talk Science’s All Science Challenge on May 31st at the Montreal Neurological Institute and Hospital – The Neuro, McGill University. Dr. Robert J. Research opens door to new drug therapies for Parkinson’s disease McGill University researchers have unlocked a new door to developing drugs to slow the progression of Parkinson’s disease. Collaborating teams led by Dr. Edward A. Fon at the Montreal Neurological Institute and Hospital -The Neuro, and Dr. Kalle Gehring in the Department of Biochemistry at the Faculty of Medicine, have discovered the three-dimensional structure of the protein Parkin. Live 3D images of brain’s vasculature will improve patient diagnosis and treatment The diagnosis and treatment of potentially life-threatening neurological conditions such as aneurysms and strokes will be significantly improved as a result of cutting-edge technology at the Montreal Neurological Institute and Hospital - The Neuro, at McGill University and the MUHC. The new angiosuite, inaugurated today, offers significant advantages to patients and physicians including most importantly, improved safety and outcomes. 
Creates a 3D “ New study shows what happens in the brain to make music rewarding A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital – The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music. What is Parkinson’s Disease? Parkinson’s disease is a neurological condition related to the death of specific brain cells that produce dopamine, a chemical needed for brain cells to control muscular movement. In Parkinson’s disease, dopamine-producing cells stop functioning for reasons still unknown. Powerful treatment improves patients’ lives and provides new insight into mechanisms of the disease A new study by Multiple Sclerosis researchers at three leading Canadian centres addresses why bone marrow transplantation (BMT) has positive results in patients with particularly aggressive forms of MS. The transplantation treatment, which is performed as part of a clinical trial and carries potentially serious risks, virtually stops all new relapsing activity as observed upon clinical examination and brain MRI scans. The study reveals how th A team of basic and clinical scientists led by the University of Montreal Hospital* Research Centre’s (CRCHUM) Dr. Nathalie Arbour has opened the door to significantly improved treatments for the symptoms of Multiple Sclerosis (MS). March - National Epilepsy Awareness Month The Neuro has been at the forefront of epilepsy treatment and research for over half a century. The development of “The Montreal Procedure” by Dr. Thirty-two students from six Montreal area high schools will assemble at the Montreal Neurological Institute and Hospital – The Neuro at McGill University on February 21st to be quizzed about synapses, axons and other cerebral facts in the international contest known as the Brain Bee.
<urn:uuid:2e65d996-1186-490c-801f-1e5d232f2111>
CC-MAIN-2016-26
https://www.mcgill.ca/channels/section/mni/channel_news?page=7&BIGipServer~CCS_Sties~DRUPAL=867293316.20480.0000
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00073-ip-10-164-35-72.ec2.internal.warc.gz
en
0.923547
1,122
2.5625
3
Definition of brackets plural of bracket 1 The word "brackets" uses 8 letters: A B C E K R S T. Direct anagrams of brackets: Words formed by adding one letter before or after brackets (in bold), or to abcekrst in any order: s - backrests Other words with the same letter pairs: br ra ac ck ke et ts
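Listings like the one above are typically produced by indexing a word list on each word's sorted letters -- the "abcekrst" key given for "brackets". The following is a minimal sketch of that idea in Python; the tiny word list and the function names are illustrative, not the site's actual dictionary or code.

```python
# Group words by their sorted letters, so "brackets" -> "abcekrst";
# adding one letter ("s") to that key finds "backrests".
from collections import defaultdict
from string import ascii_lowercase

WORDS = ["brackets", "backrests", "bracket", "bracketing", "brackish", "einkorn"]

def key(word: str) -> str:
    return "".join(sorted(word))

by_key = defaultdict(list)
for w in WORDS:
    by_key[key(w)].append(w)

def direct_anagrams(word: str) -> list[str]:
    return [w for w in by_key[key(word)] if w != word]

def add_one_letter(word: str) -> list[str]:
    found = []
    for letter in ascii_lowercase:
        found.extend(by_key.get(key(word + letter), []))
    return found

print(direct_anagrams("brackets"))   # [] -- no direct anagrams in this tiny list
print(add_one_letter("brackets"))    # ['backrests']
```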
<urn:uuid:babb9c1d-bc05-4330-a1fa-6ba48c344629>
CC-MAIN-2016-26
http://www.morewords.com/word/brackets/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.798829
161
2.53125
3
(http://www.flickr.com/photos/notionscapital/869847216/ by Mike Licht) I'm going to post a series of mobile learning posts and try to open up the field for myself and for teachers who are not familiar with the practical and educational use of mobile learning devices. I'm heading from theory towards practice, and will finally also post our school's practical experiences of mlearning. First, here are a couple of definitions (there are plenty more) of mobile learning. In the Upside Learning blog post General Considerations for Mobile Learning (mLearning) I found many definitions of mobile learning: Wikipedia defines mobile learning as "Any sort of learning that happens when the learner is not at a fixed, predetermined location, or learning that happens when the learner takes advantage of the learning opportunities offered by mobile technologies". In other words, mLearning decreases the limitation of learning location with the mobility of general portable devices1. Simply put, mobile learning is the acquisition or modification of any knowledge and skill through using mobile technology, anywhere, anytime, and results in the modification of behavior. What is mobile learning? Upper: Thoughts on the state of mobile learning Mobile Learning 1 by TheSophiaLin
<urn:uuid:1e8b06a3-3947-4796-981f-e7c74b0da437>
CC-MAIN-2016-26
http://educationtechnology-theoryandpractice.blogspot.com/2011/09/definitions-of-mobile-learning.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00123-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917651
246
2.71875
3
CASH AND CARRY In the Second World War, it was not naval blockade but air bombardment that proved itself in the end to be the most devastating weapon of warfare against the enemy's economic power. Nevertheless, naval power confuted all those prophets who in enemy countries and elsewhere had foretold its obsolescence in modern war. Fused in new ways with the other elements of warfare, it still maintained its old advantages of flexibility and surprise. At Dunkirk it brought deliverance to an outmatched army; at Salerno and Normandy it assembled the avenging armies and supported their assault. These were the dramatic battles; between them was the never-ceasing battle of supply—the Malta convoys, the Arctic convoys to Russia, the Battle of the Atlantic, which was fought year in year out to safeguard those overseas reinforcements of war-making power that in the end overwhelmed Hitler's Continental fortress. Economists have sometimes attempted to measure the advantage of overseas supply in statistical terms. Even in the dark year of 1941, the economists of the War Cabinet Office, piecing together their knowledge of British figures and their guesses at German and Italian figures, concluded that the war production of the United Kingdom was already closely matching that of her enemies—a prodigy of achievement which a nation so heavily outmatched in population could not even have approached, had not its own efforts been intermeshed with the productive labour of other countries. It would take too much time to confront these estimates of 1941 with our retrospective knowledge of Germany's performance; but it may be confidently asserted that the estimates revealed one important truth. Taking full advantage of the international division of labour, the United Kingdom was enabled in large measure to rectify her numerical inferiority by mobilising in the immediate war zone a much higher proportion of her much smaller population. Not to mention the Allied and associated countries on either side, Greater Germany had in mid-1939 a population of nearly 79½ millions with a total military and working force of about 40½ millions: Great Britain's population was nearly 45½ millions, of whom nearly 20 millions were gainfully employed. But this formidable contrast in human resources shrinks sensationally when the distribution of the two labour forces is examined. Great Britain, for example, employed on the land less than a million people or not quite five percent of her labour force; whereas Germany, to provide her people with food, was employing on the land 11 millions, or twenty-seven percent of her labour force. Again, to take a single minor example: the erection by the Germans of twelve synthetic oil plants with a capacity of 3.3 million tons a year was estimated to require 2.4 million tons of structural steel and 7.6 million man-days of labour; with a much smaller expenditure of man-days of labour, Great Britain was able to procure in British-owned or foreign tankers natural oil from the wells of Iraq, Persia or America.1 The Germans, of course, were able to make other people work for them; by 1944 they were employing in their own country more than five million imported civilian workers and nearly two million prisoners of war—a total of 7.13 million foreign workers, which in large measure explains the gentleness with which the German Government treated its own people, in comparison with Great Britain's relentless mobilisation of manpower. 
Germany, moreover, was able to draw other economic contributions from a lebensraum which after 1940 included the whole of Continental Europe west of the new Russian boundaries. But the productive resources that Germany's land neighbours could make available to the German war economy were immensely inferior to Britain's potential gain from her oceanic neighbours. The agricultural countries of the tropics, although their average economic efficiency was low, could contribute specific commodities of great value, such as rubber and oilseeds, cotton and sisal and cocoa: some of them could contribute valuable minerals as well. The agricultural countries of the temperate zone, such as Australia and New Zealand and the Argentine, had an immensely higher output per man than any of the peasant countries of Europe; they had besides a respectable and increasing manufacturing productivity. And on the continent of North America there was established, both in agriculture and industry, the most formidable concentration of productive power in the whole world. The United Kingdom's economic advantage was therefore great; but in earning it she had chosen to live dangerously. Whereas the Germans held on secure tenure—until the liberating armies at last drew near—their modest profits from Polish or Bulgarian economic effort, the British held on precarious and conditional tenure their much greater benefit from Argentinian or American production. If the Axis powers had been able to break British naval strength they would have turned the tables indeed: the United Kingdom would have then been compelled to struggle for economic self-sufficiency, at so pitiable a level that she could neither have made effective war nor even maintained her civilian population. There was another contrast: Germany was creating in Europe a 'new order' largely subservient to German military command; but the international economic order to which the United Kingdom belonged was still in large measure governed by the notions of economic self-interest held by the individual communities participating in it. Britain might be granted some privileges of deferred payment, her merchant fleet might be reinforced by ships of other nations; but until the coming of lend-lease the strength that she could draw from overseas was sharply limited by her own capacity to pay and to ship. Whereas Germany could use force to exact from her land neighbours a large, if not a full measure of economic collaboration, the United Kingdom must depend upon the good will of her distant oceanic neighbours, and upon their feeling of a common interest between themselves and her. The United Kingdom must in particular attune her economic policy to the political decisions of the United States. In September 1939, each self-governing nation of the British Commonwealth, excepting Eire, had by its own sovereign decision made common cause with the United Kingdom; but the United States had proclaimed a rather complicated neutrality. Judging by the experience of the previous war, this neutrality might possibly make all the difference between victory and defeat. Certainly, nothing short of full United States support would have saved Great Britain and her Allies in 1917, when the German U-boats came close to cutting the oceanic lifelines. 
American supplies had then been made available without stint, along with the credits to pay for them; American yards were then set hard at work to produce millions of tons of shipping to share the dangerous Atlantic passage; American armies were then called up and trained to reinforce the western front. How great was the contrast of 1939 and 1940! The United States were competent, this time, to give far greater help;2 but they had proclaimed their resolve to give far less. Moreover, as if in distrust of their own passionate sympathies for the democratic cause, they had embodied their resolve in formal legislation of Congress. The historian of the British war economy must not presume to expound the history of the Neutrality Acts of 1935-39; but he is bound to discuss the effects of American neutrality policy upon British economic policy. Far back in the nineteen-twenties, the British Government had feared that America's traditional policy of affirming the trading rights of neutral nations, now that it could for the first time in history be backed by a great mass of American naval power, might wreck altogether the British design of economic warfare.3 When Hitler came to power, a drastic reversal of United States policy removed this anxiety; but put a more serious one in its place. The British could now go ahead with their plans of naval blockade without fearing that their attempts to weaken the Axis powers would embroil them with the United States Navy; but they were at the same time given warning not to expect effective economic aid from American democracy if they found themselves at war with Germany, Italy or Japan, or all three together. The new doctrine of American neutrality was thus on balance a discouragement to the democracies, an encouragement to the Fascist powers. Its legislative statement in the Neutrality Act, after reasserting some of the duties that were traditionally incumbent upon neutral powers—for example the duty of refusing refuge and supply to belligerent armed vessels—proceeded to surrender those traditional rights of which the United States had been the foremost claimant for more than 100 years. Instead of claiming for American ships and American citizens the right to pursue their peaceful business in time of war, it forbade them to entangle themselves in the dangers of war. Indeed, the Neutrality Act might very well have been called the Non-Entanglement Act. Its main features reflected the popular conceptions, or misconceptions, of the causes of American entanglement in the First World War. According to a widely diffused opinion, three causes—apart from propaganda, which in the American view had exacerbated each single cause and all three together—had brought about American participation in a European quarrel: first the interest of the munitions industries, secondly the destruction of American ships and the death of American citizens at sea, thirdly the financial interest created by Allied borrowings on the American market. The neutrality legislation attempted to root out all these evils. First, it imposed an absolute embargo on the export of arms to belligerent states. Secondly, it withdrew the protection of the U.S. Navy from American nationals on belligerent ships, and it forbade American ships to enter the combat zones. Thirdly, it obliged belligerent purchasers of American goods to secure a transfer of title before exportation. The first prohibition needs no comment. The second prohibition expressed the policy of 'carry'—i.e. 
that belligerents must carry in their own ships all cargoes procured by them in the United States, even if those cargoes were only apples or tobacco. The third prohibition reasserted the principle of 'cash'—which had already found another expression in the Johnson Act of 1934, prohibiting loans of money from any person under American jurisdiction to any foreign government in default of its payments to the United States.4 It is worth remembering that there were certain gaps in this legislation: most noticeably, the exemptions with which the American republics were favoured, and the discretion entrusted to the President to 'find' or not to 'find' a state of war.5 Far more important from the British point of view was the expectation, which from the early months of 1939 appeared reasonably well founded, that the outbreak of war in Europe would be followed quickly by important modifications of the Neutrality Act. This expectation was justified on 4th November 1939, when the President approved a 'joint resolution to preserve the neutrality and peace of the United States and to secure the safety of its citizens and their interests'. This resolution, called for convenience the 1939 Neutrality Act, removed the arms embargo; but it stiffened the 'cash and carry' provisions.6 In outward appearance, the amending legislation of 4th November made considerable difference to British supply policy. As far back as July the British Government had made preparations to establish a purchasing commission in the United States; but respect for American susceptibilities had prompted it hitherto to place the main emphasis on procurement in Canada. The 'British Supply Board in Canada and the United States' had its headquarters in Ottawa; in New York it had only an inconspicuous branch office, under the direction of Mr. Arthur Purvis. The excision of the arms embargo changed all that. Mr. Purvis was at once instructed to go to Washington and make contact with the United States administration, not merely on behalf of the British Government, but as chairman of the newly constituted Anglo-French Purchasing Commission.7 Nevertheless, the immediate real effects upon British policy were small. The British Government believed itself to be too sparsely supplied with dollars to justify any considerable expenditure upon American finished munitions, and was determined to limit its purchases as stringently as possible to indispensable materials and tools for use by British workers in British factories.8 On the other hand, the Government believed itself to be very well supplied with ships. The validity of these two beliefs will be examined in the following sections of this chapter. In so far as they were valid, they appeared to justify policies of food and raw materials importation by the longer shipping haul from those countries—chiefly in the British Empire or the sterling area—which from the financial point of view were more accommodating than the United States. This approach to the problems of overseas supply was in harmony with the policy of armament in depth and the aim of preponderant military power within the period of a three-year war. If the British and French Governments had realised that Hitler was banking on victory in the west within the first twelve months of war, they would surely have felt themselves compelled to state their import requirements very differently, with a much heavier demand upon the American munitions industries, despite the immediate cost in dollars. 
Indeed, they began to overhaul their programme in this way even before Hitler started his blitzkrieg in the west. In the chapters which discussed the United Kingdom's previous experience of modern war and British studies of war-economic problems during the nineteen-twenties and thirties, apology was made for postponing consideration of the external financial problems.9 The main reason for this postponement was convenience of arrangement; but if any further justification were called for, it would be possible to plead the new circumstances, including the new habits of thought, following Britain's departure from the gold standard in 1931. Only two years previously, the Treasury memorandum on The Course of Prices in a Great War had given international gold movements a central place in its discussion of external financial policy; but all the documents produced after 1931 started from the assumption of a very different monetary order. After 1931, Britain was no longer subject to any obligation, legal, contractual or moral, to maintain the pound sterling at any fixed parity with gold, or with the currency of any foreign country. Moreover, the control of Britain's normal reserve of gold and foreign exchange, having become a risk which the resources of the Bank of England were not competent to sustain, had passed from the Bank to the Treasury, acting through the Exchange Equalisation Account. Rates of exchange were determined by the prices at which the Exchange Equalisation Account bought and sold currency; they could be altered from day to day, or half a dozen times a day. More important still were the changes that had taken place between 1914 and 1939 in the basic conditions of British financial strength. When the First World War broke out, the United Kingdom was at the climax of its exporting power. It was moreover still postponing an important part of its claim upon imports: in the decade 1904-1914, unprecedented sums of British capital—which would better have been used, some critics have said, in modernising the industrial structure at home—were invested in the development of overseas economies. Even in the first year of the 1914-18 war, British investors maintained their capital exports to the tune of about £200 millions. But in 1939 the situation was very different. The old staple export industries had for a long time been languishing, and for some years past a net deficit on the international balance of payments had announced that the nation, even in advance of war, was already beginning the process of overseas disinvestment. Moreover, the aggregate sum of past overseas investment was less in 1939 than it had been a generation earlier: if the nation's holdings of gold were larger, its holdings of useful foreign securities were considerably smaller.10 To cap this dispiriting comparison, there was the plain notice given in the Johnson Act and the Neutrality Act that United States resources would not be made available a second time in support of a British war effort, except upon terms of immediate payment. To earn the means of payment, the British would find themselves compelled to maintain a large flow of exports, thereby diluting the intensity of their war mobilisation, both materially and psychologically—for it would be hard to persuade ordinary people that the workers who were producing luxury frocks for Buenos Aires or fine table linen for New York were serving the nation just as effectively as the workers in the dockyards or the aircraft factories. 
In figuring out this not very exhilarating balance-sheet of external financial prospects, the British Government had one consolation: although the resources which it could now command were smaller than in 1914, it could command them more effectively. After one or two false starts, the twentieth-century state had added to its armoury of defensive and offensive weapons the new and formidable engine of exchange control. Its short modern history may be said to have begun in the years of currency disturbance after the First World War, when some states of continental Europe attempted with poor success to compel their subjects to keep their money at home. In the crisis year of 1931 the British Government itself established an ephemeral exchange control, with the purpose of preventing a collapse of the pound sterling following upon the suspension of the gold standard. This mild British control was not seriously tested.11 Meanwhile, Germany and some other European countries were initiating much more drastic policies. The German Reich under Hitler was pursuing an inflationary employment policy in a country morbidly afraid of inflation; in consequence, it had to block all the escape holes. It established a large and complex administrative machine capable not merely of preventing the flight of capital, but of mobilising for government use all the financial resources accruing externally to German nationals, whether by payment of interest, or sale of exports, or in any other way. It achieved success by inquisitorial and quasi-police action covering every individual transaction in foreign exchange. Would Britain be compelled in time of war to construct the same formidable engine of exchange control? The question was raised by the Bank of England in the summer of 1937, and was discussed between the Bank and Treasury during the next eighteen months. The Treasury, while showing a marked distaste for German methods, nevertheless recognised that it would be essential to mobilise and conserve for war purposes the nation's limited and precious resources of gold and foreign exchange. Six months before the war it had ready the following draft regulations, which could be enforced without delay on all residents in the United Kingdom:
- A regulation making dealings in gold and foreign exchange a monopoly of the Treasury and its authorised agents, and giving power to the Treasury to limit sales to current requirements.
- A regulation requiring that all gold and all holdings of designated foreign currencies be offered for sale to the Treasury.
- A regulation prohibiting all payments to residents outside the United Kingdom, except with Treasury permission.
- A regulation empowering the Treasury to exercise control over all securities marketable abroad, and to call for their registration with a view to their ultimate acquisition by the Treasury.
This network of control, comprehensive though it seems at first sight, contained gaps which did not exist in the German system. Moreover, its administration was not centralised on the German model, but was delegated to the banks, as authorised dealers, acting under detailed Treasury instructions, issued through the Bank of England.12 The draft regulations for the British exchange control reached their mature form in March 1939, when the Germans occupied Prague; they were promulgated in instalments between 24th August and 3rd September, the day when the United Kingdom declared war. On the same day the sterling area was given its wartime definition. 
Neither in September 1939, nor eight years earlier when sterling had separated itself from gold, was the sterling area a new creation: all that happened on both occasions was that a trading and financial partnership already long established took a shape that was more visible to outsiders. The sterling area had grown naturally from the London-centred international market of the nineteenth century, when overseas producers were always able to sell their products for sterling which they could use either to finance their imports from the United Kingdom or to clear their accounts with third parties. Under these circumstances, it was natural for them to hold a considerable part of their monetary reserves in the form of sterling in London. In September 1939 this was still the qualification for membership, as it had always been. Some foreign countries, such as Egypt, still remained in the sterling area; some Empire countries—notably Canada and Hong Kong—had passed outside it;13 but, by and large, the sterling area was now co-terminous with the British Commonwealth and Empire. Its wartime definition was in form the result of Treasury action;14 but behind this were careful discussions which had started six months previously in response to an Australian initiative. The sterling area rested upon the recognition of common interests and responsibilities by an association of sovereign governments. All the associates engaged themselves to impose within their own jurisdictions an exchange control of the United Kingdom brand. None of them was under any obligation to keep its currency unit in any fixed relation to the British £; what united them all was a common code of practice under which they remained unhampered by exchange control in their mutual transactions with each other, but maintained a united front in all their external dealings. They combined their earning power, pooled their earnings of 'hard' currencies, and entrusted them to the Exchange Equalisation Fund, which held them as the reserve of the entire sterling area and issued to each member the sums that it required to satisfy its own economic needs. The sterling area was in fact a financial union, centred on London and managed by London. Its existence freed the British Government from a substantial part of its anxieties on the score of 'cash', seeing that a large part of the world, including some countries of great productive efficiency, were willing to guarantee the flow of supplies on terms of deferred payment. No doubt the United Kingdom would pay for these supplies in part by current sales of British goods and services, and by realising British capital assets;15 but for the rest it would be able to borrow the necessary sums in the form of the sterling balances accumulating in London …. By the end of the war, these balances had accumulated to the tune of £2,723 millions.16 Between the countries of the sterling area, which offered to Britain extensive financial accommodation, and the United States of America, which offered her no financial accommodation at all, there emerged an intermediate group of countries which, on calculation of their own interest, were willing to make specific payments agreements with the United Kingdom. Of course, not all the payments agreements concluded in the opening months of the war were prompted on the British side by strict considerations of supply: some of them served the purposes of economic warfare, and were concluded mainly with the intention of denying supplies to Germany. 
There were in addition two exceptional agreements which were based on full partnership in the war, one with France, the other with Canada. The first will be discussed in a later chapter: of the second it is sufficient to say here that it manifested the determination of the Dominion, though not a member of the sterling area, to allow no financial impediments to thwart the maximum contribution of Canadian agriculture and Canadian industry to the war effort of the British Commonwealth. No overriding purpose of this nature was to be expected from neutral governments. However, on 27th October 1939, the British Government made a very encouraging payments agreement with Argentina, a country which had great importance as a supplier of food. This agreement was later on amended, and became the model of similar agreements with the governments of other neutral countries. Its broad effect was to enable the British Government to continue importing without making immediate payment. The sums accruing to Argentine exporters were paid into a special account in the Bank of England on behalf of the Argentine Banco Central, with a guarantee that they would be available later on at gold value. The neutral Argentinians had thus shown themselves ready, like the members of the sterling area, to lend their resources to the belligerent British: or—to state the situation in reverse—the British had succeeded in softening a currency which they had originally reckoned as 'hard'. The 'hard' currencies had been selected as those eligible for inclusion in the reserve held by the Exchange Equalisation Fund. The first list designated United States and Canadian dollars, Argentinian pesos, Swiss, French and Belgian francs, Swedish and Norwegian kroner, and Dutch guilders. Generally speaking, these were the currencies that were hardest to come by under the conditions of trade and abnormal overseas expenditure that attended the outbreak of war. Even under peace conditions, United States dollars had not been easily earned.17 They now became the hard currency par excellence. The problem of foreign exchange was, above all, an American problem. Almost from the outset of the war, the British Government found itself compelled to review mistrustfully its earlier hopeful plans for keeping its dollar purchases within narrow bounds. As will be seen later, unanticipated shipping difficulties, aggravated in some cases by delays in instituting consumer rationing, compelled it to pay out dollars for supplies it had intended to procure from more distant, but more accommodating countries within the sterling area. More significant still was the steep rise in the requirements upon America for fulfilment of the British munitions programmes. Steel was 'pre-eminently the basic raw material of warfare'; but since the capacity of the British steel industry was below the requirements of the newly-expanded British war plans, there existed a growing deficiency which would have to be made good by heavy imports from America. There would, moreover, be a sizeable bill to pay for machine tools, petroleum products (though the tankers would so far as possible be sent to the Middle East) and some other commodities.18 For these reasons and because of the rise of prices,19 the United Kingdom's dollar commitments began to mount up, even before the British Government saw any need for giving big orders to American armament firms, and long before it saw any prospect of America coming into action as the 'arsenal of democracy'—and its granary. 
During the early months of war, the War Cabinet returned frequently to reckonings of the available 'cash', and the available means of husbanding it. To take the savings first: if indispensable imports were to be secured, it was necessary to prune rigorously those dispensable imports that were a charge upon the nation's limited resources of foreign exchange. In Germany there had been established both a direct control over imports and a direct control over the foreign exchange required to pay for them. In the United Kingdom, on the other hand, the operation of exchange control had been decentralised among the banks. They could not pretend to any exact knowledge of the Government's import policy and could not therefore take responsibility for granting or refusing exchange to their individual clients. In consequence, the British Government decided to adopt measures based on the scrutiny of different classes of imports. These measures were broadly of two kinds, adapted respectively to the requirements of government departments and those of private commercial importers. The demands of the importing departments for foreign exchange were met by the Treasury, after they had been scrutinised by the Exchange Requirements Committee, a body set up on 29th August 1939 with representatives from all the importing departments, the Treasury, and the Bank of England. The demands of private importers were controlled by the Import Licensing Department of the Board of Trade. There was nothing amiss in this mechanism of import control; but there was for many months a good deal lacking in the vigour of its operation. In the first place, there remained throughout the first war winter an unoccupied no-man's land between the territories of the Exchange Requirements Committee and the Import Licensing Department. 'Miscellaneous and unallocated' imports which no government department sponsored and which the Board of Trade had not as yet brought under licence were valued in November at £120 millions, out of a total import programme of about £920 millions20—a ratio which was not substantially reduced until March, when the Ministry of Food and Board of Trade made an agreement whereby the former undertook to sponsor a long list of privately imported foods, and the latter put them under licence. But, in the second place, the licensing system was not in this period particularly drastic within the sphere of its operation. The Import Licensing Department had started work with a short list of commodities which included textiles, apparel, pottery, cutlery, cars, a few luxury foodstuffs, and some assorted manufactures.21 Very few of the items on this list were completely prohibited; under most heads importers were given a ration on the basis of their past trade. It was of course understood from the beginning that the list of licensed commodities would be extended, and the ration made more niggardly, if and when the need for more drastic action was demonstrated; but genuinely drastic action was postponed until 4th June 1940.22 By that time the mechanism of import licensing, which hitherto had been intended and employed for the saving of foreign exchange, was being geared to the additional purpose of economising shipping. In the end, it was the shortage of shipping, far more than the shortage of hard currencies, which was the spur towards a tightening of import control, not only in the spheres which have already been mentioned, but in the third and most important sphere, that of direct departmental procurement. 
Private commercial imports had been by far the smaller part of the total even in the early months of the war; in the mature war economy they were destined to take a rigorously diminished place. However, the assumption by the great importing departments of direct responsibility for the main bulk of overseas supplies did not by itself bring into being an economical, realistic and genuinely national import programme, from which all unessential items were pruned and in which all the essential ones were scientifically balanced in relation to the nation's war needs. As will later appear, that goal was achieved slowly and painfully. Throughout the period of the Anglo-French alliance, the mechanisms that had been established for controlling imports did not prevent a serious leakage of the nation's precious store of foreign exchange upon purchases which were, in the circumstances of the time, luxurious. But, even if all unnecessary imports had been promptly and efficiently stopped, the mounting cost of absolutely indispensable imports would still have been alarming. To begin with, the depreciation of the exchange rate of sterling on the eve of the war had raised by approximately one fifth the sterling price of all imports from the United States. On top of this, the early months of war brought difficulties of supply and transport which raised import prices still further.23 Meanwhile, the claims of the British war economy upon hard currency were expanding even beyond the requirements of materials and tools that have already been described. It had been the original intention of the British Government not to deplete its store of American dollars by the purchase of finished munitions; but a day came when the French Prime Minister declared at a meeting of the Supreme War Council that he would be ready to sell all the pictures in the Louvre if they would procure American aircraft for France. Despite their misgivings about finance, the British felt obliged to join the French in spending dollars to build up the capacity of the American aircraft industry. Against these soaring commitments, there was as yet no adequate balancing force on the dollar earning side of the account. By index of volume British exports in the third quarterly period of the war were still seven points below the quarterly average for 1938; import prices, moreover, had risen much higher than export prices.24 Simultaneously, net current earnings from other sources were being engulfed by war needs; the balance on the shipping services, for example, was being upset by the overriding claims of the war upon British-owned tonnage and the need to hire neutral tonnage, even at extravagant rates.25 Contemporary statistical analysis of the balance of payments situation, both for the United Kingdom and the whole sterling area, had many shortcomings; but two calculations that were made early in 1940 are worth quoting. Lord Stamp calculated that the total adverse balance of the United Kingdom in the first year of the war (later years would be worse) was likely to approach, perhaps even to exceed, £400 millions. According to a Treasury estimate prepared about the same time, the sterling area as a whole was likely to have an adverse balance on current account of approximately the same figure—£400 millions. 
These estimates made the British war effort, when envisaged in terms of external finance, seem pretty hopeless; for both Lord Stamp and the Treasury had concluded, after their separate investigations, that the United Kingdom could not in a three years' war afford to expend more than £150 millions a year from its reserves of gold and foreign exchange, with perhaps an additional £70 or £80 millions a year from the sums realised by the sale of British-owned securities abroad. A conclusion of such deep pessimism might seem at first sight surprising. The total capital value of British external investments was usually reckoned to be above £3,000 millions. But the distribution and the quality of these investments had to be taken into account. More than half of them were located in sterling area countries, where payments difficulties did not arise; to transfer them to American buyers would be a long and difficult process, even if the buyers should be in the end forthcoming. As for the British investments in America itself, the Johnson Act ruled out the possibility of raising money on them as security. But could not some of them be sold outright? That was, indeed, British policy; but the only investments that could be realised quickly and economically were listed securities denominated in American currency and enjoying a free market. Other securities, inside the United States or outside it, might in time be transferred to American ownership: but any attempt to rush the job was likely to result in knock-down prices—fewer dollars for more securities, and therefore a loss rather than a gain to the British war effort.26 For all these reasons, the total to be realised from the requisitioning of British securities marketable abroad was expected to be no higher than £200 or £250 millions. Add to that gold reserves estimated at £450 millions, and—'The sum total of our resources', the Chancellor of the Exchequer concluded, 'is thus not more than £700,000,000…. It is obvious that we are in great danger of our gold reserves being exhausted at a rate that will render us incapable of waging war if it is prolonged.' In February 1940, the Treasury estimated that this total sum, which ought to last three years if prudently husbanded, would at the present rate of expenditure be consumed at the end of two years. After this warning, the War Cabinet ordered an investigation into the possibility of scaling down the armament programmes. This would certainly be an effective way of curtailing dollar expenditure, but it might be equally effective as a way of losing the war. An alternative answer to the insistent problem of foreign exchange was therefore sought by a drive to increase the current earnings of British exports. Despite the plenitude of government exhortations, British exporters had been given little practical encouragement in the opening months of the war. They found themselves hampered by the export licensing mechanism, which had been established by the Board of Trade not primarily to facilitate British exports, but to conserve scarce materials for home use and to prevent exported goods from reaching countries through which they could be filtered to the enemy—i.e. 
to wage economic warfare against the enemy.27 Meanwhile, the new Controllers established in the Ministry of Supply were for the most part intensely preoccupied with Service needs: ignoring the Government's official doctrine about the vital importance of exports, some of them flatly refused to make available the essential materials the exporting industries needed. On top of these frustrations inflicted upon them by the controls, would-be exporters suffered also from the violent disturbance of trade channels and the shipping difficulties of the first war winter. By the late winter and early spring the War Cabinet had made up its mind to clear the ground for a genuine 'National Export Drive'. Lord Stamp, as adviser on economic co-ordination, had produced a series of memoranda stressing the need for an export policy that would be both vigorous and discriminating, choosing with care exportable goods of high conversion value28 and export markets that would yield hard currencies. A sub-committee of ministers, specially appointed to promote the export drive, set greater store upon the vigour recommended by Lord Stamp than upon the discrimination: so too did the Export Council, which was established on 1st February 1940 and at once appealed to 'all industry for all exports'.29 Probably the most important thing this Export Council did was to set up export groups in a number of British industries. At the time, these groups did very little to start a stronger flow of British exports, but some of them proved themselves useful later on, as instruments of the concentration of industry, a policy which aimed at releasing plant, floor space and labour from the production of civilian goods to war industry.30 Indeed, it was the fate of the export drive and all its attendant instruments to be overwhelmed, before their effectiveness could be properly tested, by the tidal wave of military crisis. The Limitation of Supplies Orders illustrate this. One of the most promising things that the Board of Trade had done to foster exports was to set up an Industrial Supplies Department with the specific duty of determining the competing claims upon raw materials advanced on behalf of the home civilian market and the export market. On 16th April 1940, the new department went into action with a Limitation of Supplies Order which cut down by twenty-five percent the supplies of cotton, rayon and linen piece-goods and made-up goods available to British wholesalers for resale to domestic retailers or makers-up.31 After Hitler had let loose his victorious blitzkrieg in western Europe, new and far more comprehensive Orders32 were issued with an additional purpose—to stint British consumers, not primarily for the sake of exports and foreign exchange, but for the sake of British war production. Here was the beginning, or at least the forecast, of austerity. All the main elements of the problem of foreign exchange have now been examined—British exchange control, the sterling area, the payments agreements with foreign countries, the value of British reserves and external investments and the process of turning the latter into current cash, import restrictions, the export drive, the mounting total of overseas war expenditure. The examination has revealed nothing seriously amiss in the mechanism of policy, but a serious deficiency of motive power. The United Kingdom's capacity to wage war on the scale necessary to ensure victory was dangerously constricted by the limits imposed upon her capacity to pay for overseas supplies. 
All the more need, therefore, to generate the maximum intensity of effort within those limits. Before the fall of France the British Government was not achieving this maximum. There was a discrepancy between the financial and the military outlook upon time. To dole out reserves of gold and foreign exchange at the rate of £150 millions a year might be sound policy if the war were likely to last three years; it could not be sound policy if the enemy were planning to win it in one year. This must have been the thought in the French Prime Minister's mind when he declared that he would be ready to sell his nation's art treasures for American aircraft. If only the Americans had been ready to deliver them! They too were clinging, far more intensely than the French or the British, to the commercial, unmilitary notion of time. When in February 1940 the French and British Governments made up their minds to spend their dollars rather more quickly, they had perforce to spend the greater part of them, not on combat aeroplanes and weapons—they were not ready—but on developing America's capacity to produce them. The production came months and years too late to be of any use to France. It would be an interesting exercise in hypothetical statistics to estimate what the eventual size of the British war effort would have been if the United States had not in March 1941 thrown aside the 'cash' provisions of their neutrality legislation and if Canada had not throughout the war overcome every financial impediment to full economic collaboration with Britain. There would perforce have been a smaller R.A.F. and a smaller Navy and far fewer divisions in Normandy—if ever there had been a Normandy. There would have been a much smaller war industry working for these diminished Forces, and a greatly expanded export industry struggling to earn the overseas supplies essential to sustain the United Kingdom's small-to-medium mobilisation. Such a distribution of the national resources—the very contrary of the overstrain and unbalance which were the eventual legacy of the war—would have been highly favourable to British recovery after victory. But here the smooth hypothesis breaks down. Victory was not to be bought on the cheap. Economic prudence, estimating in the long term the interests and bare needs of the people and the interlocking long-term interests and needs of the British Commonwealth and of world society, could not be brought into congruity with military prudence, estimating the immediate, urgent requirements of armed resistance. For the sake of present resistance and future victory, Britain at last threw economic prudence to the winds. When France was already falling, the new British Government discarded the old cautious policy of overseas purchase. On 16th May, six days after the Churchill Government took office, a memorandum from the Stamp Survey proposed that the balance of payments policy that had hitherto been followed ought henceforth to be scrapped, in so far as it impeded the speedy procurement of armaments. Before this document was considered by any committee of the War Cabinet,33 the Prime Minister had secured from his colleagues authority to state Britain's most urgent requirements in a personal communication to the President. His communication contained this sentence: 'We shall go on paying dollars for as long as we can, but I should like to feel reasonably sure that when we can pay no more you will give us the stuff just the same.' 
On 27th May, Lord Lothian, in more formal terms, made a similar communication to the American Secretary of State. Finally, on 3rd July, Lord Lothian presented to the United States Government an aide-mémoire which stated comprehensively the demand that Britain, 'now almost the last free country in Europe', intended to make in the first place upon herself, and secondly upon the United States. His Majesty's Government intended to draw upon American resources to an extent not hitherto contemplated. So long as they were able, they would continue to pay cash for American armaments, materials, tools and foodstuffs. They feel however [the aide-mémoire continued] that they should in all frankness inform the United States Government that it will be utterly impossible for them to do this for any indefinite period in view of the scale on which they will need to obtain such resources from the United States. Their immediate anxiety arises from the necessity of entering into long-term contracts. Dollars would be of no use to the United Kingdom if the German and Italian onslaught rubbed out British national life in 1940 or 1941. And, if this onslaught did succeed, American democracy would find itself in the front line of war before it had armed itself for war. For both countries, now rapidly discovering their deep partnership of strategic interest and ideals, the act of faith was also the act of prudence—of prudence defined (for the United Kingdom) not in economic but in military terms. It must not be imagined that the British were magically freed from all their difficulties of external payment either in the summer of 1940 or even in the early spring of 1941, when the Lend-Lease Act was passed. In subsequent phases of the war they found themselves, as will later be shown, constantly compelled to exercise great care in husbanding and allocating their resources of foreign exchange. Nevertheless, in the summer of 1940 it became probable, and in the following spring it became certain that the British people would not lose the war through scarcity of hard currency. The scarcity of shipping was a very different matter. In 1917 and 1918 mortal peril had been warded off by the Navy's valour and skill in fighting the U-boats, by the Merchant Navy's courage, by convoy and the other apparatus of Admiralty control, and by civilian control both of ships and cargoes. All this experience was available to the British Government when it was making its plans for the employment of the resources of shipping-space available to it in a new war. In its planning of United Kingdom imports (with which the present chapter is most concerned) the Government might have drawn one lesson in particular from previous experience: namely, the inadequacy of partial control. The spasmodic and partial interventions of the earlier years of the last war had cured or mitigated particular scarcities, temporarily at least; but they had created indefensible inequalities in the shipping industry and had aggravated the general scarcity by causing an overall waste of the diminished tonnage available to the nation in its great need. In the end, the Government had been compelled to face the need for total control. Its control over ships was exercised through the requisitioning system operated by the Ministry of Shipping. 
Its control over cargoes did not in practice obtain the same completeness; but the principle of substituting departmental decision for the individual choice of importers and determining conflicting departmental claims by a committee of the War Cabinet was embodied in action at the time and clearly expounded in retrospect.34 In despite of this experience, the United Kingdom entered the Second World War with plans for a partial control of shipping and sea-borne supplies. How is this fact to be explained? Explanation must no doubt be sought in large measure in considerations of an administrative kind. It is only too easy for the historian, with his after-knowledge of eventual achievement, to forget the simple fact that the type of control exercised at the end of a war—in 1918 for example—requires elaborate departmental organisation and staff; these take time to build up, and, until they have been built up, the controls which assume their existence are inappropriate. Bearing this truth in mind, the critical historian may feel justified in arguing that the war planners of the late nineteen thirties would have done well to devote more energy—not only in the sphere of shipping policy but elsewhere—to the building up of skeleton administrative staffs, rather than hypothetical calculations of requirements and supplies. As it turned out, the forecasts of shipping resources and the probable demands upon them suggested that there need be no great urgency in building administrative foundations for controls of the 1918 stamp. The basis of these forecasts was as much strategical as economic. The men responsible for planning the employment of British-controlled tonnage could hardly be expected to anticipate a German occupation of the western coasts of Europe from the Pyrenees to the North Cape. Not that all the advice that came from the strategical experts was optimistic; very serious warnings were given about the damage that might be inflicted by enemy air attacks upon port facilities and shipping in the ports. The Admiralty, however, was optimistic about the Navy's capacity to cope with air attacks upon ships at sea. It was leaving nothing to chance. It intended to introduce convoy at the very beginning of the war. It believed that the convoy system and the anti-submarine patrols would be able to keep U-boat sinkings reasonably low. This confidence was subsequently justified by events, up to the time when British naval losses during the last phase of the Battle of France, the subsequent advance of German bases along a wide Atlantic front, the defection of the French fleet, and the entry on the other side of the Italian fleet completely overturned the strategical assumptions with which the war had begun. Up to the time of this immense reversal of fortune, the gains and losses of merchant ships from all causes roughly balanced.35 Moreover, the Germans still held back the Luftwaffe from attacking British ports. The first half-year of war at sea was, by the standard of previous experience, easy—not at all the kind of war that Britain had fought in 1917-18, and had, after great tribulation, won. And yet, this first half-year witnessed a severe import crisis and a depressing wastage of the precious stocks of food and raw material that were to be of such crucial importance in the harder war that lay ahead. These setbacks took the Government almost entirely by surprise. The explanation of them—since the Admiralty forecasts were proved correct—must be sought in miscalculations on the civilian side. 
At the end of 1938, the problem of British resources of shipping in relation to import needs was being studied by the Committee of Imperial Defence. Earlier in the year,36 the President of the Chamber of Shipping had delivered a speech which alleged that the Merchant Navy had been allowed to decline to a level incompatible with national safety in time of war. The allegation was one-sided and the Mercantile Marine Department produced a document which included evidence on the other side. This was desirable and indeed necessary; but the outcome was a tilting of the balance too far on the side of optimism. The document laid justifiable stress upon the favourable strategical forecasts. There were, on the other hand, certain unfavourable factors which it discussed. The mercantile marine of the United Kingdom was about 1¼ million gross tons smaller in 1938 than it had been in 1914 and the decline in dry cargo vessels was much larger than this, since the United Kingdom tanker tonnage had risen by over 1¾ million gross tons in this period. The annual output of the shipyards had shrunk considerably: whereas between 1911 and 1913 it had averaged two million gross tons a year, in every year since 1931 it had been below the million mark, in some years a good deal below it. Yet there existed some compensating factors. If tonnage on Dominion and Colonial registers were included with the United Kingdom merchant fleet (though the United Kingdom had no direct control over Dominion ships) the total was only about half a million short of the 1914 figure. Moreover, there was included within this total a larger tonnage of ocean-going ships suitable for long voyages. And if the fleet was, on balance, older, it nevertheless contained a larger proportion of the faster vessels. It was, however, not merely the size of the merchant fleet and its peace-time efficiency that needed to be reviewed; what was wanted was an estimate of carrying capacity under war-time conditions. Such an estimate is extremely difficult to make. There are certain things that cannot be predicted in advance of war with any reasonable accuracy: for example, the balance of gains and losses. There are certain other things, such as the savings that may be made by reducing the number of loading and discharging ports, which can be predicted with tolerable correctness by an experienced statistician with a thorough practical knowledge of shipping. The document under discussion did not possess this expert character; but it offered some reassuring estimates. The carrying capacity of available British shipping (after deducting the tonnage required by the Army and Navy and allocated to Empire supply and cross trades) should suffice to bring to the United Kingdom in the first twelve months of war 48 million tons of dry cargo imports.37 British requirements of dry cargo imports for the same twelve months would be 47 million tons. Consequently, there would be a safety margin of one million tons. This satisfactory result could be achieved by British shipping alone—not counting the large tonnage of neutral shipping which, it was confidently expected, would come into British service when the blockade sealed up many of the normal opportunities of shipping employment.38 These forecasts were made nearly two years before war broke out. 
They may be contrasted with an expert estimate which was made in the Ministry of Shipping early in the war—that British and neutral shipping together might be able in the first year of war to bring in 47 million tons of dry cargo imports.39 It was this latter estimate, not the more sanguine one submitted before the war, that was subsequently, in very large measure, proved true. The optimistic forecasts that were current before the war may well have encouraged a disposition to postpone the imposition of complete control over shipping. Even if such a control had been imposed at once, it could not at a stroke have achieved its object, the switch-over of British shipping to its war tasks; for such a switch-over is a large and complicated undertaking which can only produce its full effects cumulatively over a period of months. This was an additional reason for making a prompt beginning; indeed, in the calculations of 1938 it had at the outset been assumed that the shipping industry would be brought under effective control 'from the outset of the emergency'. But this assumption very soon dropped out of sight. Instead it came to be assumed that the British shipowner knew his own business best and should be left as free as possible to follow the normal incentives of his calling. At the beginning of the war the Ministry of Shipping was expected to administer, not the full requisitioning system that its predecessor had instituted and operated in 1917, but the gentler, more negative system of ship licensing.40 There was another weakness in civilian preparations to safeguard overseas supplies. No really thorough attempt was made to calculate how far British imports might under war conditions be limited by shortage of port capacity.41 One of the major factors determining the carrying capacity of a ship is the time she spends in port—in loading or discharging cargo and in other port operations. In peace a liner spends more than half her life in port and a tramp a smaller, though still very considerable, proportion of time. Between 1914 and 1917 the time spent in port had been so much extended that, as a result of the difference, the United Kingdom almost certainly lost more imports, in any single year, than the submarines sank.42 Delay at the ports had occurred principally because of the disorganisation of the normal machinery of trade, combined with the large demands made by the Services on port capacity. In the nineteen-thirties there was visible danger, not merely that this situation might repeat itself, but that it might repeat itself in exaggerated form; for it was realised that in any future war the ports would be heavily bombed. In the years of preparation, the strategical experts had given clear warning that ports rather than shipping might limit British imports. In 1933, the Committee of Imperial Defence set up a sub-committee to review the whole question of the capacity of the ports and inland transport to handle imports, particularly in the event of the diversion of ships from their customary ports. The sub-committee spent four years on its task and its final report was optimistic. It found that even if seventy-five percent of the tonnage which normally entered the south and east coast ports was diverted to the west coast, the port facilities there would be adequate. But the basis of this reasoning was extremely shaky. 
The sub-committee had collected estimates of what each west coast port supposed it could handle regardless of the types of goods imported and the burdens on other ports and upon inland transport. It had collected estimates from the railways about the traffic they could carry from the west coast ports, considering each port in isolation and out of relation to inland transport movements. It had added up the number of deep sea ships that could be accommodated in the west without considering any of the factors which determine the time a ship spends in port. The whole port problem was then remitted to yet another committee which discovered in March 1939 that the estimates of its predecessor were 'complete nonsense'. But by then time was too short. Britain entered the war without any realistic estimate of port capacity if ships should be diverted to the west coast ports. The dangers of this over-confidence were not apparent until the fall of France made diversion necessary; in the winter of 1940-41, the United Kingdom was losing once again as large a volume of imports because of port delays as it was losing because of cargoes sunk. In September 1939, however, no doubts about port capacity clouded the prediction that United Kingdom dry cargo imports would be about 48 million tons in the first year of war. The estimate of British import requirements had no firmer foundation than the estimates of British shipping and port capacity. The origins of the seemingly precise figure of 47 million tons of imports can be traced back to some vague statistical manipulations between 1936 and 1938. In 1936, the figure of 52 millions—about three millions less than average peace-time imports—had been cited to the Committee of Imperial Defence; but the Food Supply Sub-Committee unwittingly complicated the issue by recommending that 'an overall decrease of imports of food of twenty-five percent should be assumed throughout the duration of the war'. On this authority the Mercantile Marine Department cut its estimate of food requirements from 20 million tons to 15 millions, thereby bringing down the total of import requirements to the 47 million figure. But the officials of the Food (Defence Plans) Department had never for one moment imagined that their import programme could be slashed in this way. In so far as they paid any attention to the twenty-five percent estimate, they accepted it as a measure of the losses which enemy action might inflict upon British food supplies if no counter action were taken. They then proceeded to take counter action. By their judgement, if there were indeed a danger of a twenty-five percent fall in arrivals of food owing to destruction and delay at sea, loadings of food in overseas ports must be correspondingly increased. While, therefore, the planners responsible for the nation's ships were scaling down the programme of food imports, the planners responsible for the nation's food were scaling the programme up. Neither party took any notice of what the other was doing; nor did the Committee of Imperial Defence uncover the discrepancy of calculation and planning. And so the word went round that there would be plenty of ships. How far this mood of muddled cheerfulness was the product of the calculations which have been reviewed, how far these calculations were themselves the product of the prevailing mood, need not, and possibly cannot be determined; but some of the clear consequences should be pointed out. 
One consequence was a lack of realism in the zone of import policy that persisted throughout the first period of the war and proved hard to eradicate even after the reverses of 1940. In September 1939, the organisation of the importing departments and of the shipping authorities was admittedly much further advanced than it had been in August 1914; but plans fell a long way short of the 1918 mark. The shipping authorities concluded that a partial control over deep-sea tonnage would be good enough to start with, the importing departments concluded that a partial control over supplies would be good enough, and the War Cabinet was not ready for the task which Lord Milner's committee had undertaken on its behalf in 1917—the scrutiny and adjudication of conflicting departmental claims on shipping, so that out of them might be hammered a national import programme adjusted to the actual facts of the shipping situation. Another consequence was the relaxation of preparations for import-saving production at home. The plans for British agriculture offer a good example; in September 1939 they were less drastic than they had been two years earlier. In 1937, the Committee of Imperial Defence had approved a war agricultural programme dominated by the memories of the 1917 submarine campaign and the wheat famine of the succeeding seasons. The basis of this programme was the conversion of grassland to arable in order to grow crops that would give the largest and quickest return in food value and that were bulky to import. In particular it would be necessary to increase the output of wheat, potatoes and oats for direct human consumption. A large quantity of home-grown corn would also have to be diverted from animal to human consumption. At the same time, a considerable fall in imports of animal feeding-stuffs was expected. All these plans together made inevitable a drastic fall in the number of corn-eating and grass-eating animals—that is, pigs, poultry and sheep. These policies of 1937 were never formally rescinded but, in the growing expectation that there would be plenty of shipping, they were quietly obscured. In 1939, it was thought that temporary interruptions of cargoes of animal feeding-stuffs were still possible. And shortage of foreign exchange might limit imports—imports not of the bulky foods such as wheat but of expensive foods like meat and cheese. Gradually, the necessity of ploughing grassland became accepted mainly as a preparation for a greater production of animal feeding-stuffs in order to maintain the supply of meat and dairy produce. All this was symptomatic of a change in the general tone of agricultural policy which took place between 1937 and 1939 and expressed itself emphatically in the early war months.43 The original idea of a food production campaign concentrating upon crops for direct human consumption had slipped into the background and did not re-emerge until the disasters of May and June 1940 revived the memories, and the policies, of 1917. A more important consequence of the unrealistic forecasting of British importing capacity was the inadequate action taken to build up stocks of food and raw materials. On this subject there had been considerable public discussion from 1936 onwards. 
In the mid-summer of 1939, Sir Arthur Salter, one of the protagonists of a vigorous policy of stock-building, proposed an exact figure: 13 million tons of stocks would, he said, 'enable us to carry on for three years of war with a loss of shipping which, in the absence of such reserves, would have crippled us in little more than a year.' Here it is necessary to make a distinction between a stocks policy that is designed to save shipping and one that is designed to safeguard war production. The authorities responsible for war production will inevitably concern themselves with specific commodities of strategic importance which are likely to become difficult to procure in time of war, either through a rise in total demand or because of enemy domination over important sources of supply. Such commodities are not necessarily the bulky ones. Sheer bulk is, however, the primary concern of the shipping authorities. They have no specific interest in any particular cargo unless it happens to make big demands upon shipping space. Before the war, Sir Arthur Salter and those who shared his opinions concentrated their attention on three commodities which, between them, accounted for nearly half the tonnage of British imports. These three were iron-ore, grain and timber. All of them were primarily tramp cargoes and largely inter-changeable with each other from the shipping point of view, so that it did not matter what emphasis was given in storage policy to any one of them. All that did matter was to bring in 13 million tons, or some other big total, before the outbreak of war. This advocacy made little impression upon the Government. Before Munich, it conflicted with the doctrine of a war of limited liability; for what was the use of accumulating large quantities of iron ore when the nation would have to equip no more than five or six divisions for modern warfare? It conflicted also with the doctrine of normal trade, since the accumulation of stocks by government action might have a disturbing effect on trade prices. And even when these two doctrines went by the board, the Government still rejected the premises underlying this troublesome agitation of economists and M.P.s. If its own experts were right, if shipping were going to be plentiful, why insure against a serious shipping shortage? The Essential Commodities Reserves Act, passed through Parliament in 1938, had a more limited purpose: to give moderate insurance against temporary deficiencies and delays likely to accompany the early months of war.44 Some of the purchases made under this Act (especially the purchases of oils and fats) were negotiated by the food planners with considerable skill and served the country well.45 They did not however constitute an effective reply to the advocates of a large stock-building policy because their total effect in forestalling the strain on shipping was small. When war broke out, the nation was poorly provided with the three bulk commodities mentioned above. It is true that the Government had bought 400,000 tons of wheat (the equivalent of five weeks' consumption); but trade stocks were low. The Government had accumulated no stocks at all of iron-ore and timber. 
Trade stocks of iron-ore at 1.2 million tons (equivalent to ten weeks' supply46) were higher than the normal peace-time average; but trade stocks of timber were far below the average.47 In consequence of all this, the Ministry of Shipping found itself dangerously short of elbow room in its attempt to cope with the flood of difficulties which immediately followed the outbreak of war. In the years before the war, British imports had averaged over 4½ million tons per month, with a lower average for the mid-winter months. The monthly figures of imports up to the fall of France were as follows: The table shows that imports in the first two months of war fell short of peace-time performance by more than a third. In the following months they rose appreciably, despite the seasonal disadvantage; by the spring they were less than half a million tons short of the peace-time average. However, it had by then become quite clear that the accumulated backlog on requirements would never be made up. And a far grimmer battle on the seas and in the ports was now closely impending. Within the general framework of monthly import totals, attention may now be given to the three commodities discussed above, wheat, iron-ore and timber—not because these commodities were the only ones where critical shortages arose, but because their story is quantitatively important and has, besides, special significance for the evolution of policy. To begin with wheat. From the very first weeks of war, consumption went up and imports went down, until by November working stocks in the hands of the trade were reduced to so low a level that some mills actually ran out of wheat and had to stop work. However, in December 1939 the Ministry of Shipping brought into action the weapon of requisitioning, with the result that in each successive month up to the fall of France imports were above consumption. When France fell, a very sound stock position had been established for wheat.48 Not, however, without cost. The Government had been compelled to spend dollars on North American wheat where it had planned to save them by procuring Australian wheat. Moreover, the concentration of requisitioned shipping on overcoming the wheat crisis had given rise to crises in other commodities. Import requirements of iron-ore for the first year of the war, as stated by the Ministry of Supply, were seven million tons, or rather more than 580,000 tons per month. For the first three months of the war, actual imports came in at little more than half this rate, which indeed was never once reached during the first six months of war.49 In February, when the Ministry of Supply appealed to the War Cabinet, stocks had fallen below the ordinary needs of the trade and works were already beginning to close down. Fortunately, by that time the wheat crisis was well on the way to solution, so that it was possible to switch an increasing number of requisitioned tramps to Narvik and Kirkenes, French North Africa, Sierra Leone and Newfoundland, the main sources of supply. But the start had been slower than with wheat, and the backlog was never made up. At the end of the first year of the war, the Ministry of Supply was nearly two million tons short of the imported iron-ore for which it had budgeted. For wheat, the turning point had come in December; for iron-ore it came in February; but for timber it never came at all. 
Month after month, imports of timber were less than a half, sometimes less than a quarter of Ministry of Supply requirements.50 There were no stocks from which the deficiency might be made good; nor were there ships enough to switch from the closed Baltic to the long British Columbian haul. Warnings were frequently given that the timber shortage was jeopardising the military and munitions programmes of the Government and in particular the building of munitions factories and of hutments for the troops. Despite these warnings, timber was sacrificed, and rightly sacrificed, for the sake of wheat and iron. By whose decision? The Ministry of Supply, once it was convinced that its clamours and complaints could not exact more tonnage from the Ministry of Shipping, was certainly competent to decide between the respective claims of iron-ore and timber; just as the Ministry of Food was competent to strike a balance between wheat and feeding stuffs. But there did not as yet exist any authority, short of the War Cabinet, which could decide between feeding-stuffs and iron-ore, or wheat and timber. In consequence, the aggrieved departments kept coming to the War Cabinet with their contending and incompatible claims upon the Ministry of Shipping. In the first months of its history, the Ministry of Shipping achieved a great deal, despite the impediment of those pre-war political decisions that have been described. In its organisation, and in the technical instruments that it commanded—for example, in its complicated and exact apparatus of shipping intelligence—it was able to draw with great profit upon the experience of 1917-18. Working in close contact with the Admiralty, it played its part in the institution of convoy control, in the closing and reopening of the Mediterranean, in the switching of sea traffic from the east ports to the west ports and back again, in the holding of ships in port to be fitted with guns and degaussed against the magnetic mine, and in all the other emergency operations of the early months of war. Its precise arithmetic soon rectified the optimistic forecasts it had inherited. It took realistic measure of the carrying capacity of the British merchant fleet and the aid to be expected from neutral shipping.51 Moreover, in order to get maximum service from the drastically scaled-down total of effective resources available to it, it rapidly refashioned the policy which it had been called into being to administer. Less than a month after the Ministry's inauguration, the Director-General felt constrained to point out that control through the licensing of voyages, whatever might be said in its favour as a transitional measure, was already suffering a change in its original nature and intent: instead of operating mildly and negatively with infrequent interferences with owners' intentions, it was becoming an ill-concealed dictation to all owners as to the voyages they might undertake. Indeed, nothing short of dictation—that is to say, positive government control—was capable of getting the nation's ships to the places where they were needed—to North America for wheat, to Narvik for iron-ore. As has already been seen, the Ministry was compelled to use the weapon of requisitioning in order to overcome the urgent crises of wheat and iron-ore. Nor was its action in these special instances haphazard; from the early days of December it was moving purposefully towards the all-inclusive requisitioning of deep-sea shipping as an objective of fully considered public policy. 
The inauguration of this policy was announced on 4th January 1940. From that day, the Ministry had power to extract much fuller value from the carrying capacity of the merchant navy, since every ship could henceforward be sent to the destination, and loaded with the cargo, that the national interest demanded. But the national interest was not always easy to define; nor was the Ministry of Shipping always the appropriate authority for defining it, even within the sphere which seemed peculiarly its own. For it is wrong to allow large issues of economic policy and the structure of the war economy itself to be determined incidentally by the day-to-day operations of shipping. Such issues occurred frequently through the overlap of 'cash' and 'carry'; considerations of 'carry' demanded concentration on the short hauls; but considerations of 'cash'—or of economic warfare—often demanded the reverse. Again: if the United Kingdom's import programme had been the only test, British tonnage should have been withdrawn completely from the 'cross trades';52 but this policy would have been expensive in 'cash' and would besides have jeopardised the war-making power of the overseas Empire. The Ministry of Shipping, therefore, had to do its best within the limits of policies which originated in the Treasury, the Ministry of Economic Warfare, or elsewhere, and were ultimately decided by the War Cabinet. In those early months, the War Cabinet did not decide enough. Allocations of tonnage by the Ministry of Shipping, in despite of its own desires and explicit protests, were determining not merely short-term loading programmes but long-term import priorities as well. This happened inevitably through the War Cabinet's failure to establish an authority charged with responsibility for scaling down the total of import requirements to fit the total of available capacity. As has been seen, the Ministry of Shipping had given early warning that import requirements would have to be scaled down. It was a warning that the importing departments were most reluctant to observe. They found it hard to free themselves from the great expectations which they had been encouraged to form before the war. They demanded more proof—and so did the War Cabinet itself—that the shipping authorities could not produce less discouraging statistics and prophesy smoother things. In the meantime, they allowed their own calculations of requirements to stand, if indeed they did not increase them.53 However, as the first half year of war drew towards its close, they found themselves compelled to modify these tactics of stone-walling. On 19th December 1939 the War Cabinet had assigned to the Lord Privy Seal (Sir Samuel Hoare) the task of investigating the shipping resources available to the nation. His report, which was presented to the War Cabinet in February 1940, emphatically corroborated the judgement of the Ministry of Shipping. It showed that the shipping situation, so far from improving, would get still worse in the second year of war. It went on to propose drastic cuts in imports, a more realistic policy of agricultural and other import-saving production, and a more provident policy in regard to stocks. In consequence of this report the Lord Privy Seal was invited to review the current import programme as a whole. He remitted this task to a committee of officials, who had their report ready at the beginning of April. The War Cabinet accepted their proposals for scaling down the import requirements for the first year of war. 
In broad outline, these proposals were as follows:
Ministry of Food: 19.00 to 19.95
Ministry of Supply: 43.79 to 44.75
At last a real beginning had been made in lifting the shipping problem above the level of departmental tussle, and in adjusting the total of requirements upon overseas supplies to the total of available tonnage. It was, however, no more than a beginning. The savings suggested in the above figures were to some extent the product of paper adjustments which had no counterpart in the actual importing plans of government departments or private business men. When France fell, war-making power was still being wasted through importation of unessential things, and of essential things in quantities which—in default of a scientific restatement of relative needs in the context of a compulsorily diminished total of imports—were sometimes excessive, and sometimes inadequate. It is not easy to determine how much of this waste of war-making power might have been avoided. By the standards of endeavour that the nation later on accepted, and by its later standards of efficiency, there were in this opening period of the war some extravagances which seem almost bizarre. It would be possible to make a long list of commodities which, though of very indirect value to the war effort, were still being shipped to the United Kingdom in larger quantities than in peace time. Wines and spirits, Spanish onions, canned, bottled and dried fruits would be conspicuous among the food items on the list; there were besides many dubious items, chiefly odds and ends of manufacture, included in the 'miscellaneous and unallocated' imports for which the Board of Trade was officially responsible. According to the tests of necessity that Britain adopted in a leaner time, two or three million tons of shipping-space might possibly have been saved by pruning away this miscellaneous luxuriance. But, under the conditions of administrative organisation that existed at the beginning of the war, pruning operations were always difficult and sometimes impossible. For example: the Board of Trade's acquaintance with the items on the miscellaneous list was very distant; it knew a good deal about their value, but nothing about their weight. It could control them only through the over-worked Import Licensing Department, whose primary task was to save foreign exchange and not shipping. The transfer of formal responsibility for imports of this class to the Ministries of Supply and Food did not by itself make things any better. Such a transfer took place in quite a big way in the spring of 1940; but the Ministry of Food was not yet ready to take direct control over minor items like wines and onions and canned fruits; these items, though they now figured on its programme, continued to be handled by private firms through the normal channels of trade. The Ministry of Supply was even less ready to take over from private importers full responsibility for stating the quantities of all the miscellaneous materials and components that British industry needed. In consequence, the Ministry of Shipping was forced to leave a sufficient margin of unallocated liner space to cover these undefined requirements. The commodities that flowed through this channel were not always the ones that were needed by a nation at war; yet the national effort might well have suffered greater loss if the channel had been abruptly and prematurely blocked. 
Moreover, although ordinary people and the War Cabinet itself were prone to put special stress on the waste of shipping through importation of the mass of miscellaneous 'non-essential' articles, a far more formidable waste occurred through failure to determine the proper relative quantities of those bulk imports whose 'essential' character nobody would deny. In summing up, it may be suggested that, if the pre-war estimates of shipping resources and the claims upon them had been less optimistic, some of the difficulties of the first war winter might have been avoided. Still more might they have been avoided if administrative preparations had been pushed further forward before the war began. However, once the war had begun, resolute action was soon forthcoming on the supply side of the shipping problem; the newly established Ministry of Shipping lost little time in measuring its task and instituting the controls necessary for its performance. It was on the demand side of the problem that action was dilatory. Allowance must no doubt be made for some exceptional requirements of imports to speed the expansion of war production and for the unavoidable time-lags in expanding agriculture and other import-saving industries. What could have been avoided, or at least mitigated, was departmental boggling at the extent of the economies and effort insistently demanded by the facts of the shipping situation. And rationing, as will be shown in a later chapter,54 might have been imposed more speedily. The postponement of decisions which were unwelcome, but in the end inescapable, found support in an unexpected quarter, namely the Admiralty. Perhaps it was felt there that prompt and strict rationing would be a reflection on the Navy's ability to guard the food ships; perhaps anti-austerity preconceptions in statistical dress were the chief influence. In the War Cabinet, the balance of forces during the first war winter favoured laxity of control. It was only by slow degrees that the War Cabinet prepared itself for its task of subjecting contending and excessive departmental claims upon shipping space to an agreed measurement of national necessity. Meanwhile, though there had occurred as yet no serious inflationary pressure against stocks,55 the persistent refusal to scale down import requirements to real importing capacity found its counterpart in a drain upon stocks of imported commodities. The drain was unevenly distributed.56 In the overall stocks position of the Ministry of Supply, the graph shows first a steep decline and then a wide, deep trough. The Ministry of Food, thanks to the tenacity with which it defended its 20 million tons import programme and to the success, from December onwards, of its rationing policies, had more comforting graphs to contemplate: even before the fall of France, it was improving its stocks position and thereby gaining elbow room for the more balanced food policy it subsequently adopted. But of the national position as a whole the graphs tell a depressing story. When all due allowance has been made for the special difficulties of the change-over from peace to war, there still remains the obstinate contrast between a volume of imports far higher in the first year of the war than in any subsequent year, and a seriously weakened stocks position. Government and people had failed in this time of grace to make provident use of British sea power. The nation had not as yet adjusted its imagination and will to the hard realities that would compel it, later on, to live lean. 
1 See U.S. Strategic Bombing Survey, op. cit., Chapter III and appendix: also C. T. Saunders, 'Manpower Distribution 1939–1945' in The Manchester School, May 1946. Owing to statistical difficulties the manpower figures are for Great Britain, not the United Kingdom. 2 In the previous war, American economic aid had been chiefly in materials, food, ships and finance, rather than in finished munitions, for the U.S. had not got to the stage of producing them in large quantities and depended largely on British industry to equip their armies in France. 3 e.g. at the time of the Geneva Disarmament Conference of 1927, the British Government believed that the extreme U.S. doctrine of freedom of the seas underlay the American determination to deny to Great Britain the large force of small cruisers which she wanted. Small cruisers could be used not only to defend trade routes but to enforce a blockade; whereas a small number of large cruisers, which was what the Americans wanted, could be used to prevent British interference with neutral commerce. There will be some discussion of the American background to British blockade policy in Professor W. N. Medlicott's volume in this series. 4 By a ruling of the U.S. Attorney-General, a token payment was held to be equivalent to default. 5 The President 'found' a state of war in the Italian aggression upon Abyssinia, but not in the Japanese aggression against China. 6 In this, the latest version of the neutrality legislation, the 'combat zones' made their first appearance, and foreshadowed the total disappearance of U.S. shipping from all dangerous waters. 7 See below, Chapter VII. 8 The first total statement of British requirements in the United States for the first year of the war (30th January 1940) was as follows: For Service Departments 9 See above, p. 54. 10 There is a basis for comparison in the well-known estimate by Sir George Paish (Supplement to The Statist, 14th February 1914) and Sir Robert Kindersley's articles in the Economic Journal during the nineteen-thirties. The difference in capital value, according to these estimates, is about £500 millions. Sir George Paish's estimate of total British capital invested abroad in 1913 was £3,700 millions. Early in 1940, the War Cabinet was given an estimate of £3,240 millions capital value with an income of £185 millions in 1938. An official retrospective estimate of 1945 put the average annual income from overseas investment for the years 1936-38 at £203 millions but gave no figure for total capital value. See Cmd. 6707, Appendix VII. Reference to the qualitative inferiority of British overseas holdings in 1939 is made on p. 115 below. 11 The Gold Standard (Amendment) Act of 1931 prohibited purchases of foreign exchange or transfers of funds except in satisfaction of legitimate current requirements, namely: (1) normal trading requirements, (2) pre-existing contracts, (3) reasonable travelling or personal expenses. These restrictions might possibly have prevented a flight from the £ if one had been attempted; but persons with transferable money showed themselves more anxious, at any rate after the first three or four anxious months, to run away from the currencies that remained on gold than from sterling. 12 For drastic contemporary criticism see articles by T. Balogh in the Economic Journal, March 1940 and Economica, August 1940. It should be noted especially that the exchange control did not effectively cover non-resident holders of sterling balances. 
Hence arose after the outbreak of war the so-called 'black market' in sterling—a misnomer, since dealings abroad between non-residents, at whatever rates, were not an infringement of the law. In these dealings, sterling was not at first at a heavy discount, but by 27th March 1940 it had fallen to $3.48, in comparison with $4.03, the official middle rate fixed for the dollar. On 12th May foreign-owned sterling securities were blocked. Balances were still left free, but it was believed that they had been by this time reduced almost to the minimum requirements for existing commitments. 13 The reasons for these two omissions, the first by decision of Ottawa, the second by decision of London, were basically the same: namely the powerful influence of geographical and (still more) economic neighbourhood in North America and Asia respectively. To cite the example of Canada only: fifty-nine percent of her visible trade was with the United States, and only thirty-one percent with the United Kingdom; American investments in the Dominion were fifty percent higher than British investments, while Canadian investments were large in the United States but negligible in the United Kingdom. Under these circumstances, Canada was inevitably led to follow 'an intermediate course between the sterling area and the U.S. dollar'. 14 S.R. & O. 1168 of 1939, issued concurrently with the Defence (Finance) Regulations of 3rd September 1939. The Treasury was empowered by the Regulations to issue exemption orders from the prohibition against making payments to residents outside the United Kingdom; in the Order cited, it exempted payments to residents in those countries which held their principal monetary reserves in sterling at London and imposed exchange control similar to that of the United Kingdom. 15 Cmd. 6707 gives for the whole war period the figure of £564 millions for total proceeds of sale or repatriation of British investments in the sterling area (Dominions, £201 millions; India, Burma and Middle East, £348 millions; the rest, £15 millions). 16 Op. cit. Appendix IV. It should be noted that only the smaller part of this immense total of sterling debt was incurred for overseas resources supplied to the United Kingdom: no less than £1,732 millions represented the United Kingdom's efforts in the defence of India, Burma, Egypt and the Middle East. 17 cf. The United States in the World Economy, a study of the U.S. balance of payments between the wars issued by the U.S. Department of Commerce in 1944. 18 See note on p. 106 above. 19 See below, p. 154. 20 The November programme (or rather estimate, since genuine programming of imports had not as yet been developed) was as follows:—
Imports, Ministry of Food and Ministry of Supply Controls
Imports under Import Licensing or soon to be brought under it
Films and tobacco, which were subject to special arrangements
21 S.R. & O., 1939, No. 1054 and following Orders. 22 S.R. & O., 1940, No. 873. By this Order import licensing was made to cover all commodities and was extended to sterling area countries. 23 See Chapter VI, Section (i). 24 See Statistical Tables 3(b) and 1(e) on pp. 79 and 77. 25 See Section (iii) of this chapter. 26 In a return made by the Bank of England (February 1940) of British-owned securities in North America which had been registered in accordance with the regulations, five grades were distinguished. 
Securities in Grade A were readily marketable and those in Grade B fairly valuable; at the other end, securities in Grades D and E were practically unsaleable.

27 S.R. & O. 1939, Nos. 945, 984, 1024 and following Orders. The main Export Control Order, dated 1st September 1939, covered a wide range of raw materials, semi-manufactured and manufactured goods which could not be exported without licence. Destinations were classified into A (all countries outside the United Kingdom), B (all countries outside the British Empire) and C (specified European countries or areas). Although the Export Licensing Department was established in the Board of Trade, the pressure for more stringent control and longer lists of prohibitions came from the Ministries of Economic Warfare and Supply, with which the Board of Trade found itself continuously in dispute.

28 i.e. exports involving the highest possible addition by British labour, management and plant to the value of the raw materials.

29 Cmd. 6183.

30 See below, p. 310.

31 S.R. & O. 1940, No. 561. The reduction of twenty-five percent was on the standard period, 1st April to 3rd September 1939; but, in view of the many exceptions in favour of blackout materials, overalls, the needs of hospitals, the W.V.S., etc., it was in fact a good deal less. Note that the Board of Trade had rejected the project of control at the raw materials stage, choosing instead to limit the manufactured or semi-manufactured articles at the stage of wholesale distribution.

32 S.R. & O. 1940, Nos. 874, 875 and following Orders, covering various kinds of machinery, and consumer goods such as pottery, glass, cutlery, hosiery, toys, games, musical instruments, etc.

33 It was considered by the Ministerial Committee on Economic Policy on 27th May.

34 See above, p. 30; and cf. Sir Arthur Salter, Allied Shipping Control (Carnegie Endowment, O.U.P., 1921).

35 See Table 3(c) on p. 80.

36 31st March 1938.

37 Tanker imports and tanker tonnage, as being the concern of the Oil Board, were not included in the calculation.

38 Before the war, a provisional calculation suggests, neutral ships brought in during an average year about 24 million tons, or something approaching twenty-five percent of British imports.

39 This estimate was repeated in February 1940, subject to the explicit warning that no margin had been left in for unfavourable contingencies which ought to be insured against. Unfavourable contingencies did in fact occur after April 1940. In the event, neutral and British ships brought to the United Kingdom during the first twelve months of the war 43.5 million tons of dry cargo imports.

40 The Ship Licensing system was administered by a committee of owners and civil servants. The Lines were given a general licence, subject to revision, permitting them to operate on their normal berths. They were, however, bound to load their ships according to the guidance given by a priority cargo list, in which was left a certain allowance of free choice which varied from route to route and which was justified by the impossibility of producing at that stage a fully detailed and comprehensive list. In contrast to the liners, the tramps had to get a specific licence for each separate voyage—a contrast which suggests the stock simile in which the liner is said to be like a train and the tramp like a taxi.
41 This problem fell within the jurisdiction of the Ministry of Transport, whose investigations were parallel but not in close co-ordination with those of the Mercantile Marine Department into the carrying capacity of British shipping.

42 In 1917 the United Kingdom imported (excluding petroleum products) some 34 million tons of commodities. In the first four months of the year, at the peak of the U-boat effort, cargoes were being sunk at a rate of about five million tons a year. At the same time the loss from delays in port, taking peace-time performance as a standard, was between four and five million tons. It must of course be remembered, in comparing the losses from sinkings with port delays, that sinkings are cumulative and port delays are not: ships sunk in one year mean so many less the next.

43 The price increases which came into effect in January 1940 represented, when compared with the averages for January 1939, a twenty-five percent increase for sheep and fat cattle and a thirty-three percent increase for pigs. Part of this increase represented the higher cost of feeding stuffs due to the unforeseen shortfall of imports, but part of it was 'incentive'.

44 The plans of the Mercantile Marine Department at this time represented an advance on the 1938 report to the Committee of Imperial Defence, to the extent of assuming for the early months of war a reduction of fifteen percent in the carrying capacity of British ships, owing to the introduction of convoy and other temporary dislocations. The actual reduction in the period September-December 1939 was thirty percent, a figure which the Ministry of Shipping thought might be cut down, under favourable circumstances, to twenty to twenty-five per cent.

45 The Food (Defence Plans) Department sought authority to spend £25 millions and received Treasury sanction for spending £15 millions. In addition to whale oil, it laid in stocks of sugar, which were dissipated in the early weeks of war by the delay in the introduction of rationing, and of wheat, which were engulfed in the shipping shortage.

46 The estimate of ten weeks' supply may be optimistic, since the trade normally holds five weeks' supply for ordinary distributive purposes.

47 In October 1939 trade stocks of timber were 617,000 standards, as against the peace-time average of one million: and yet in the previous June the Government had still been considering 'whether any reserves are desirable in principle, and if so, whether they can be obtained'.

48 On 6th December 1939, the War Cabinet had adopted, as a minimum safety standard, wheat stocks equivalent to thirteen weeks' consumption (in fact more, when home-grown wheat was coming in).

49 Monthly imports rose from 263,350 tons in the first month of war to 443,000 in the sixth (February). April was the first month in which the peace-time average was reached and passed.

50 The September statement of softwood timber requirements for the first six months of war worked out at an average monthly import of 425,000 tons, with which may be contrasted actual imports of 183,300 tons in December and 98,100 tons in January. Even in April the figure was only 180,100 tons.

51 cf. p. 123 above. Before the war it had been expected that the British blockade would aggravate the world's chronic over-supply of tonnage and bring neutral owners in flocks to the Ministry of Shipping, there to be employed on terms not unfavourable to the Treasury.
What the war in fact produced was a world shortage of shipping which sent neutral owners frolicking after high freights. The British Government was unwilling to join the rush into the short-term freight market, partly because of its need to husband the means of payment, partly because of its reluctance to pay foreigners at a vastly higher rate than it was paying its own people. Consequently, it endeavoured to secure blocks of tonnage on a long-term basis at reasonable time-charter rates. This policy necessitated protracted negotiations, which did not produce substantial results until the German invasions of western Europe changed the political atmosphere and the terms of bargaining. Meanwhile, the Ministry of Shipping did its best to fill the gap with voyage-charter arrangements. These were expensive, precarious and inadequate. Attempts to buy neutral ships were also made; but the results were small, for ships had become a good investment again and the neutrals had no inducement to sell except at high and rapidly rising prices.

52 i.e. ships trading between any two ports other than United Kingdom ports.

53 Despite the Ministry of Shipping's figures and its call in December 1939 for adjustment to the shipping shortage by restricted consumption and the increased use of substitutes, the Ministry of Supply in January 1940 put up its import requirements from 23.9 to 30.3 million tons. This put up the total import requirements to 53.7 million tons, which on the more favourable assumption was nearly 7 millions, and on the less favourable one 12 millions, above the estimate of available shipping space (the implied figures are worked out below).

54 See below, Chapter VI, Section (iii).

55 See below, p. 153.

56 For some of the details see Table 3(e) on p. 81.
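The shortfall stated in note 53 can be made concrete with a little arithmetic. The two estimates of available shipping space are not given in the footnote itself; the figures below are simply backed out of the numbers that are given, so they are inferences rather than quotations from the source.

\begin{align*}
\text{Requirements other than Ministry of Supply} &\approx 53.7 - 30.3 = 23.4 \text{ million tons}\\
\text{Shipping estimate, more favourable assumption} &\approx 53.7 - 7 \approx 46.7 \text{ million tons}\\
\text{Shipping estimate, less favourable assumption} &\approx 53.7 - 12 = 41.7 \text{ million tons}
\end{align*}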
Condoms are thin sheaths worn by men during sexual intercourse to prevent pregnancy and venereal infections. According to the 1995 National Survey of Family Growth, conducted by the National Center for Health Statistics in Hyattsville, Maryland, male condoms or prophylactics are the third most popular form of birth control—preceded only by female sterilization (29.5%) and birth control pills (28.5%)—with usage at 17.7%. They are also one of the most effective: research indicates that with correct use, failure rates are 2-3%. Most condoms are made of latex rubber, but they can also be made from lamb cecum or polyurethane. In addition to their contraceptive value, condom use has been found effective in preventing the spread of sexually transmitted diseases. In 1986, the U.S. Surgeon General endorsed the use of condoms as the only currently available effective barrier against the transmission of Acquired Immunodeficiency Syndrome (AIDS). The spread of many other sexually transmitted diseases, such as chlamydia and gonorrhea, can also be virtually eliminated with the use of a latex condom. With the government touting the health benefits of condom use, manufacturers openly advertise their products, and retailers stock condoms in visible, accessible locations. Condoms, previously kept behind the prescription counter, are now found on most store shelves. Today in the U.S., 450 million condoms are sold each year.

Despite the wide variety of styles, there are few differences among the many latex condoms available on the market today. They can be straight-sided, contoured, ribbed, sensitive, or smooth. They may be treated with lubricants or spermicides. They can be blunt-ended or have a reservoir tip. Because the condoms undergo stringent testing before they are sold, quality is generally not a marketable issue. Hence, manufacturers attempt to build brand loyalty and market their products to specific target consumers.

Condoms made from lamb cecums—the blind pouch in which the intestines begin and into which the ileum opens from one side—are also available. However, they are more expensive than latex condoms, and while they prevent pregnancy, "skin" condoms are ineffective in preventing the transmission of sexually transmitted diseases. In 1994, the Food and Drug Administration (FDA) approved a polyurethane condom for sale in the U.S. The new condom has not been extensively tested for effectiveness in preventing pregnancy and sexually transmitted diseases.

The first recorded use of condoms was in Egypt in 1350 b.c. In 1564, the Italian anatomist Fallopius described a linen condom used to prevent venereal disease. The term condom is actually a corruption of the name of a 17th-century British physician, Dr. John Conton, who provided condoms to England's King Charles II. The legendary lover Giovanni Casanova (1725-1798) used pieces of sheep intestine to protect himself against venereal disease.

The first condom manufacturer in the U.S. was Schmid Laboratories, founded by Julius Schmid in 1883. Even as Schmid was marketing his skin condoms, technology was progressing to allow thinner, more pliable, and less expensive condoms to become available. Vulcanization, the chemical linking of rubber particles first developed in 1839 and later central to products such as automobile tires, made condoms strong, durable, and fit for consumer use.
A form of rubber called latex was developed in the 1930s; this new material, combined with a mechanized dipping process, facilitated the mass production of condoms and lowered manufacturing costs. The first condoms manufactured by Julius Schmid were formed from the cecum of lambs. As of 1990, condoms made from lamb cecum accounted for 5.5% of the market, and because of their higher price, for 20% of retail sales. The manufacturing process has remained relatively unchanged since Schmid first manufactured condoms: the cecums are washed, defatted, and salted. The raw skins are then shipped to the finishing plants. New Zealand, which raises large numbers of sheep, is the primary source and initial processing center for most "skin" condoms.

Latex condoms account for most of today's market. Because rubber latex is a natural material, it can vary greatly in strength and elasticity. Manufacturers add chemicals to the latex to stabilize and standardize its composition. Many brands also add talc, lubricants, or spermicides to the condoms before they are packaged.

Condoms are classified as Class II Medical Devices. Under the Medical Device Amendments of 1976, the FDA is required to inspect each condom manufacturing plant at least once every two years. All electrical and mechanical equipment must be impeccably maintained. Condom-dipping machines are designed to operate continuously; if they remain idle, their mechanisms can get clogged and rust. During any downtime, partially cured compound cannot be left in the dip tank because it could contaminate future production.

All condoms sold in the U.S. must comply with specifications that were voluntarily developed by condom manufacturers and adopted by the FDA. Condom measurements can range from 5.8-7.8 inches (150-200 mm) in length, 1.8-2.1 inches (47-54 mm) in width, and 0.001-0.003 inches (0.03-0.09 mm) in thickness (although most condoms range between 0.002 and 0.0024 inches), and the weight cannot exceed 0.07 ounces (2 grams). Additionally, physical characteristics must include a minimum tensile strength of 15,000 pounds per square inch (psi) and elongation before breakage of 625%. The FDA reviews U.S. company records and spot checks batches for cracking, molding, drying, or sticking latex. The agency also tests every lot of imported condoms. Upon sampling, lots will not pass inspection if they reveal greater than 4% failure with respect to the above dimensions, 2.5% failure with respect to tensile strength and elongation, and 0.4% failure due to leakage (these thresholds are illustrated in the sketch below).

Manufactured by Chicago-based Female Health Co., the Reality condom for women has been on the market and available through family-planning clinics in the U.S. since August 1994. It has been sold in 12 European countries since 1993. The female condom is a long polyurethane sheath with one open ring and one closed ring that is anchored between the woman's cervix and vagina. According to Female Health Co., these condoms are 40 times stronger than latex; each costs approximately $3, compared to about $.64 for male latex condoms.

Research started in 1988 led to the development of the new polyurethane male condom, which also went on the market in 1994. The new condom is said to be just as strong but only one-tenth as thick as the latex condom. It is recommended for people who are sensitive to latex condoms.
"Birth Control Update: Specialists Recommend Reviewing Choices As Life Changes." Chicago Tribune, March 5, 1995, pp. 1, 6. — Susan Bard Hall
Pittsburgh Supercomputing Center

FOR IMMEDIATE RELEASE: November 6, 1996
CONTACT: Michael Schneider, Pittsburgh Supercomputing Center

Computers at Work in Pittsburgh

Pittsburgh Supercomputing Center Will Demonstrate Breakthroughs in Earth Science, Design of New Materials, Protein Structure, Heart Modeling, Brain Mapping and Storm Forecasting.

PITTSBURGH -- Who would have thought a few years ago that a supercomputer could reveal the structure of Earth's inner core or predict a severe thunderstorm six hours in advance? Or show how a protein folds? Or create a real-time picture of what parts of the brain are thinking? Or diagnose prostate cancer? In these and many other areas of research, scientific computing is making contributions to knowledge and to bettering everyday life that no one could have anticipated ten years ago when the National Science Foundation supercomputing centers program began.

As host of SC '96, the annual supercomputing conference, held this year at Pittsburgh's David L. Lawrence Convention Center, Nov. 17-22, Pittsburgh Supercomputing Center (PSC) will conduct a series of demonstrations to show some of this activity. These demonstrations, which highlight the conference theme, "Computers at Work," include:

- Faster than a Speeding Storm Front
In Oklahoma, the wind comes sweepin' down the plain, especially during spring storm season. For the past four springs, 1993-1996, the Center for Analysis and Prediction of Storms at the University of Oklahoma used PSC's supercomputers to test its storm-forecasting model, the Advanced Regional Prediction System (ARPS). During 1995 and again, with more success, in 1996, ARPS set milestones in meteorology. Current forecasting gives about 30 minutes warning of an impending severe storm. Powerful computing is essential to doing better, and using PSC's CRAY T3D, ARPS has successfully predicted the location and structure of severe storms six hours in advance, the first time anywhere this has been done.

- When North Goes South
Geological evidence from lava flows and the ocean floor shows that Earth's magnetic field reversed itself many times during Earth's history, but scientists haven't been able to explain how or why. In truly "groundbreaking" research, Gary Glatzmaier of Los Alamos National Laboratory used PSC's CRAY C90 to produce the first fully self-consistent, three-dimensional computer simulation of the "geodynamo," the electromagnetic, fluid-dynamical processes of Earth's core believed to sustain the planet's magnetic field. A stunning result was a simulated magnetic-field reversal. Glatzmaier's results offer the first coherent explanation of this phenomenon. The model also revealed that the Earth's inner core rotates faster than the planet's surface, a finding since confirmed by laboratory analyses of seismic data.

- When the Earth Moves
When the big earthquake comes, how bad will it be? Studies of major quakes in San Francisco (1989) and Mexico City (1985) show that, depending on soil type and other factors, ground motion can vary significantly from one city block to the next. Computer scientists at Carnegie Mellon University are collaborating with seismologists at the Southern California Earthquake Center to develop realistic models that capture these site-specific variations. Using PSC's CRAY T3D, they're studying the Greater Los Angeles Basin.
Their results are expected to give the most detailed data on seismic response ever developed, information that will help engineers design buildings better able to withstand the stress of a severe quake.

- Street Map of the Mind
What parts of the brain are active in different kinds of thinking? Scientists at PSC collaborated with cognitive scientists at the University of Pittsburgh and Carnegie Mellon University to link PSC's highly parallel CRAY T3D (and the newer CRAY T3E) with "brain mapping" experiments on magnetic-resonance imaging scanners at the University of Pittsburgh Medical Center. With this computing capability, the scan data can be processed as fast as the scanner scans, making it possible to see what parts of a subject's brain are active while the subject is in the scanner. Ultimately, this capability has the potential to make brain-mapping viable as a clinical tool to diagnose and treat disturbances in brain function in real time.

- Heart Throb
Streams of red particles emerging from the left ventricle into the aorta -- that's what researchers at New York University saw when they ran their heart model for the first time on PSC's CRAY C90. Improved computing technology led to the first realistic, three-dimensional computational model of bloodflow in the heart, its valves and major vessels. Much like a wind tunnel, the model acts as a test chamber for assessing normal and diseased heart function. It will make it possible to address many questions difficult or impossible to answer in animal research and clinical studies.

- Diagnosing Prostate Cancer
What features from the biopsy of an enlarged prostate gland are important in evaluating whether there is malignancy and how serious it is? It's sometimes a matter of interpretation, and trained pathologists can differ in how they read the visual information revealed under the microscope. In collaboration with the University of Pittsburgh Medical Center, PSC scientists are developing computerized image-classification and pattern-recognition methods to aid in accurate diagnosis of prostate cancer. These methods provide an automated statistical analysis of relative cell locations that rates the degree of malignancy.

- New Twists in Globs and Zippers
A droopy chain of amino acids -- that's what rolls off the assembly line inside a cell when a protein is created. Before it can perform its life-sustaining tasks, this dangly chain must fold into the right shape. How is it that a particular sequence of amino acids uniquely determines this right shape, out of almost unlimited possibilities? It's called the protein-folding problem. Knowing the biological laws that govern this process could make it possible to create new proteins made to order to abate maladies from indigestion to arthritis. Using the CRAY C90, T3D and T3E, Charles Brooks, Erik Bozcko and William Young have carried out simulations that give the most comprehensive picture yet of protein folding.

- Long Distance Charges
Simulations of the structure of DNA and proteins and their dance-like oscillations inside living cells are achieving unprecedented results with a software innovation called particle-mesh Ewald (PME). Devised by Tom Darden of the National Institute of Environmental Health Sciences and initially tested at Pittsburgh, PME is an efficient, accurate method to account for the electrical attractions and repulsions between atoms that aren't bonded to each other in a large biomolecule.
PSC scientists implemented PME on the CRAY T3D, and this one-two punch -- PME and parallel computing on the T3D and T3E -- has made it possible to reproduce solvent-specific and sequence-specific DNA structure and dynamics, which heralds new possibilities for computational biology. (The Ewald decomposition behind PME is sketched in the note following this release.)

- Designing New Alloys
The magnetic materials used in recording devices, computers and other high-technology products represent a multi-billion dollar industry, yet significant gaps remain in our knowledge of how these materials work. To answer these questions, researchers need better understanding of the atom-to-atom interactions that give a material its unique characteristics. Scientists at Oak Ridge National Labs, Sandia National Labs and PSC are collaborating to solve these problems. They have developed an efficient new software method, the Locally Self-Consistent Multiple Scattering method (LSMS), that gives realistic results for the magnetic properties of each atom in a metallic alloy. Coupling LSMS with massively parallel supercomputers, they are making new advances in magnetic materials research.

More information (and graphics) about these demonstrations is available on the World Wide Web: http://www.psc.edu/science/sc96.html

The Pittsburgh Supercomputing Center, a joint effort of Carnegie Mellon University and the University of Pittsburgh together with Westinghouse Electric Corp., was established in 1986 by a grant from the National Science Foundation, with support from the Commonwealth of Pennsylvania.

# # #
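As background to the "Long Distance Charges" item, the following is the standard Ewald decomposition that PME accelerates, written in one common textbook convention (Gaussian units, a charge-neutral periodic system, factors of 1/(4 pi epsilon_0) omitted, and periodic images suppressed in the direct sum for brevity). None of this notation comes from the press release itself, and the grid/FFT remark describes the smooth-PME idea in general terms rather than PSC's specific implementation.

\[
E = E_{\text{direct}} + E_{\text{reciprocal}} + E_{\text{self}},
\]
\[
E_{\text{direct}} = \frac{1}{2}\sum_{i \ne j} q_i q_j \,\frac{\operatorname{erfc}(\beta r_{ij})}{r_{ij}}, \qquad
E_{\text{self}} = -\frac{\beta}{\sqrt{\pi}} \sum_i q_i^2,
\]
\[
E_{\text{reciprocal}} = \frac{1}{2\pi V} \sum_{\mathbf{m} \ne 0} \frac{e^{-\pi^2 |\mathbf{m}|^2/\beta^2}}{|\mathbf{m}|^2}\, \Bigl|\sum_j q_j\, e^{2\pi i\, \mathbf{m}\cdot\mathbf{r}_j}\Bigr|^2 .
\]

The direct sum converges quickly in real space; PME's contribution is to evaluate the reciprocal sum approximately by spreading the charges onto a regular grid with B-splines and using 3D fast Fourier transforms, which reduces the cost of the long-range part from roughly O(N^2) to O(N log N) for N atoms.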
Clarifying Real Ale with Finings

Beer that's unfiltered and unpasteurized (like real ale) still contains millions of live yeast cells in liquid suspension. With the help of gravity, and in due time, beer clarifies all by itself. But to expeditiously clear the beer of all this yeast, brewers use what are called finings. A brewer adds finings to real ale when he racks or transfers the ale in its natural, unfiltered, and unpasteurized state into a cask. These finings basically clot yeast cells and other organic matter and drag them to the bottom of the cask, where they settle and form a jelly-like mass of sediment. When this happens, the beer is said to have dropped bright.

What finings do is fairly uncomplicated; what finings are is a bit more interesting. Here are two of the most common finings:

Carrageen: Also known as Irish moss, carrageen is a species of red algae found in abundance along the rocky shores of the Atlantic coasts of Europe and North America.

Isinglass: Isinglass is a form of collagen derived from the swim bladders of certain fish. After the bladders are removed from the fish, they're processed and dried. Not to gross you out, but prior to the introduction of the less expensive gelatin, isinglass was used in confectionery and desserts, such as fruit jelly and marmalade.

Other less-commonly used beer clarifiers include the following:

Albumen: Albumen is derived from egg whites. Dried albumen is rehydrated with water and added to the beer. Similar to gelatin, albumen is positively charged, so it attracts negatively charged proteins and yeast.

Bentonite: Bentonite is an inorganic clarifier, a form of fine powdered clay. When mixed with water, bentonite is very effective at clarifying liquids.

Gelatin: Gelatin is a colorless, tasteless, and odorless water-soluble protein derived from collagen in animal skin and bones. It attracts negatively charged proteins and yeast.

Pectinase: Pectinase is a general term for the various pectic enzymes that break down pectin, a jelly-like substance found in the cell walls of plants. Pectinase breaks down the pectin haze that can form in beer — especially those that contain fruit.

PVPP (polyvinylpolypyrrolidone): Say that five times fast! Also known by its commercial name, Polyclar, PVPP is made up of minute plastic beads that are statically charged, thereby attracting particulate matter to themselves like electrostatic glue. (Pharmaceutical companies also use this product to produce capsule-type drugs.)

When a brewer adds finings to a cask of real ale, he may also add more hops and priming sugar. The extra dose of hops provides the beer with more hop aroma — not bitterness — and the priming sugar gives the yeast a little something to eat in order to create carbon dioxide within the cask. The cask is then sealed and shipped off to the pub.
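The article notes that priming sugar is added so the yeast can generate carbon dioxide in the sealed cask. As a rough illustration of how a brewer might size that addition, here is a small Python sketch using standard homebrewing approximations: about 3.8 g of sucrose per litre raises carbonation by roughly one volume of CO2, and an empirical curve estimates the CO2 already dissolved at the end of fermentation. The cask-ale target of about 1.5 volumes, the constants, and the function itself are illustrative assumptions, not figures from the article.

def priming_sucrose_grams(volume_litres, target_co2_volumes, beer_temp_f):
    """Estimate grams of sucrose needed to prime a cask of the given size.

    Uses two common homebrewing approximations:
      * residual CO2 (volumes) already in the beer, as a function of the
        temperature (deg F) at which fermentation finished;
      * about 3.8 g of sucrose per litre raises carbonation by one volume.
    """
    # Empirical fit widely used in homebrewing calculators (an approximation).
    residual_volumes = 3.0378 - 0.050062 * beer_temp_f + 0.00026555 * beer_temp_f ** 2
    needed_volumes = max(target_co2_volumes - residual_volumes, 0.0)
    grams_per_litre_per_volume = 3.8
    return needed_volumes * grams_per_litre_per_volume * volume_litres

# Example: a 41-litre (9-gallon) firkin, a cask-ale target of 1.5 volumes,
# fermentation finished at 68 deg F (about 0.86 volumes already dissolved).
print(round(priming_sucrose_grams(41, 1.5, 68)))   # roughly 100 g of priming sugar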