Massive Open Online Courses, known as MOOCs, are online courses aimed at unlimited participation and open access via the web. In addition to traditional course materials such as filmed lectures, readings, and problem sets, many MOOCs provide interactive user forums to support community interactions between students, professors, and teaching assistants. MOOCs are a recent and widely researched development in distance education that has gradually emerged as a popular mode of learning.
Many theories have been proposed to explain the rapid rise of MOOCs in recent times. The backing of top learning institutions, together with an enormous response and high demand, has culminated in the phenomenal growth of this form of open learning in the internet age. Several other vital factors have also added fuel to the growth of MOOCs in the market.
Most of the big and elite colleges and universities have been doling out free e-courses to test their effectiveness and to compare them with a more traditional educational setting, writes USA Today. Most do not want to roll out an online degree program if they can’t be sure they will send the most qualified graduates into the business world. Likewise, students want to obtain degrees that employers will recognize and that properly prepare them for successful careers. This is where accreditation becomes important.
This model is a more specific example of blended learning, where the MOOC is what you study at home for knowledge and exposition, and internal training gets you to practice and adapt that knowledge within your organization. This gives you free external training along with internal relevance and a competitive edge. One can easily imagine a cohort of people within an organization starting a MOOC and moving forward together with mutual support to achieve real learning.
There is already evidence that organizations are looking at MOOC platforms as an alternative to the traditional, expensive Learning Management System (LMS). They are attracted by the low-cost, agile, and scalable nature of these platforms in terms of their coding structure, where rendering and presentation are kept separate from logic and interactions. This is in contrast to the monolithic code and limited single-database design of traditional LMS vendors.
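One practical consequence of this decoupled, API-driven design is that learner activity can be reported to any compliant system over plain web requests. As a concrete illustration, below is a minimal, hedged Python sketch of posting a Tin Can API (xAPI) statement, the interoperability standard discussed next, to a Learning Record Store (LRS). The endpoint URL, credentials, learner mailbox, and course IRI are all hypothetical placeholders, not details from the article.

```python
import base64
import json
import urllib.request

# Hypothetical Learning Record Store (LRS) endpoint and credentials.
LRS_URL = "https://lrs.example.com/xapi/statements"
AUTH = base64.b64encode(b"lrs_user:lrs_password").decode()

# A minimal xAPI statement: "this learner completed this course".
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "http://example.com/courses/intro-mooc"},
}

req = urllib.request.Request(
    LRS_URL,
    data=json.dumps(statement).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": f"Basic {AUTH}",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # the LRS replies with the stored statement's ID

# Because the statement is plain JSON over HTTP, the same record can be
# consumed by a corporate LMS, which is the kind of integration the next
# paragraph describes.
```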
There is also the issue of LMS integration. Companies want data that proves efficacy and competence, and they want it through their LMS. This is merely a technical hurdle that most MOOC platform vendors are already clearing. The Tin Can API promises to provide an interoperability standard well beyond that of the Sharable Content Object Reference Model (SCORM). | <urn:uuid:4005f54b-bcfb-45c7-8d07-cd2266e9e5e9> | {
"date": "2020-01-22T04:36:57",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.962458074092865,
"score": 2.6875,
"token_count": 504,
"url": "http://www.emmersivetech.com/blog/reasons-for-the-rise-of-moocs/"
} |
Ted Marena of Microsemi has written an interesting article in Electronic Design about the RISC-V ecosystem.
Many companies today are exploring free, open-source hardware and software as an alternative to closed, costly instruction set architectures (ISAs).
RISC-V is a free, open, and extensible ISA that’s redefining the flexibility, scalability, extensibility, and modularity of chip designs.
Despite its rich ecosystem and growing number of real-world implementations, there are misconceptions about RISC-V that have companies holding back from fully realizing its benefits.
To read the full article and see the 11 myths, click here. | <urn:uuid:b147a67b-3938-4ec2-bb9d-de40a5bb10f1> | {
"date": "2020-01-22T05:37:34",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9404429197311401,
"score": 2.53125,
"token_count": 171,
"url": "http://www.fast-cpu-models.com/2018/01/"
} |
Under the New Jersey Motor Vehicle laws (Title 39) a motor vehicle is defined as "all vehicles propelled otherwise than by muscular power." I receive many phone calls from parents about whether or not the popular gas and electric scooters (a.k.a. "Go-Peds") are "legal."
Since these scooters are motor vehicles by definition, they fall under the regulations set forth by New Jersey's motor vehicle laws under Title 39. All motor vehicles operated on public roadways must be registered, insured and have the minimum required safety equipment (mirrors, lights, turn signals, etc.). Most, if not all, of these motorized scooters have none of the required safety equipment. Additionally, the Motor Vehicle Commission of New Jersey will not allow these types of scooters to be registered. Insurance companies will not insure them. Unregistered and uninsured motor vehicles cannot be operated on public roadways or sidewalks. | <urn:uuid:b54baf78-5820-4a1d-a598-6b18f7b48b4c> | {
"date": "2020-01-22T05:27:38",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.926194429397583,
"score": 2.546875,
"token_count": 185,
"url": "http://www.holmdelpolice.org/264/Bicycle-Motorized-Scooters-Safety"
} |
Abstract
Common mental disorders (CMD) is a term used to describe depressive and anxiety disorders. It replaces the old term 'neuroses' and is widely used because of the high level of co-morbidity of depression and anxiety, which limits the validity of categorical models of classification of neurotic disorders, particularly in primary care settings. The global public health significance of CMD is highlighted by the fact that in developing countries, depression is the leading cause of years lived with disability in both men and women aged 15-44 years. This oration brings together research evidence, mostly from South Asia, to show that although the aetiology of CMD may lie in the socioeconomic circumstances faced by many patients, biological treatments such as antidepressants may be among the most cost-effective treatments in resource-poor settings. The oration demonstrates the public health implications of CMD by briefly reviewing the burden of CMD in the region and presents evidence linking the risk for CMD associated with two of the region's most important public health risk factors-poverty and gender disadvantage. The oration also presents recent evidence to establish the association of CMD with some of the region's most important public health issues: maternal and child health; and reproductive and sexual health. Next, the evidence for the efficacy of treatments for CMD in developing countries is presented, focusing on a series of recent trials that show that both psychosocial and biological treatments are effective. Finally, the implications for policy and future research are considered.
Keywords: Common mental disorders, biological treatment, depression, anxiety
How to cite this article:
Patel V. Social origins, biological treatments: The public health implications of common mental disorders in India. Indian J Psychiatry 2005;47:15-20
Introduction
Common mental disorders (CMD) are depressive and anxiety disorders that are typically encountered in community and primary care settings. Although depressive and anxiety disorders are classified as separate diagnostic categories in ICD-10, the concept of CMD is acknowledged as being more valid for public health interventions due to the high degree of co-morbidity between subcategories and the similarity in epidemiological profiles and treatment responsiveness. In the South-East Asian region, 11% of disability-adjusted life-years (DALYs) and 27% of years lived with disability (YLD) are attributed to neuropsychiatric disease. Depressive disorders are the most important neuropsychiatric cause of disease burden. CMD lead to profound levels of disability through symptoms such as tiredness and sleep problems, and are associated with increased healthcare costs and reduced economic productivity. A review of 8 epidemiological studies on CMD in South Asia shows that the prevalence in primary care was 26.3% (95% CI 25.3%-27.4%). In a study done in Goa, the rate of CMD was 46.5% in adult primary care attenders. Patients with CMD spent twice the number of days in the previous month being unable to work as usual due to their illness. Over half the cases in primary care remain chronic for up to 12 months.
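As an aside on the statistics quoted above: the 95% confidence interval for a prevalence estimate is conventionally computed with the normal (Wald) approximation, p ± 1.96 × sqrt(p(1 − p)/n). The short Python sketch below illustrates the calculation; the case and sample counts are hypothetical values chosen only so that the output approximately reproduces the pooled figure quoted above, not the actual data from the review.

```python
import math

def prevalence_ci(cases: int, n: int, z: float = 1.96):
    """Wald approximation to the 95% CI for a prevalence estimate."""
    p = cases / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical pooled counts, for illustration only.
p, lo, hi = prevalence_ci(cases=1779, n=6753)
print(f"prevalence = {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# -> prevalence = 26.3%, 95% CI 25.3%-27.4%
```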
Clinical studies show that while somatic symptoms are the commonest presenting complaints, psychological and cognitive symptoms can be elicited in the majority of patients on inquiry. Most patients and general health workers do not view CMD as being psychiatric or mental disorders, which may partly explain the relatively low recognition rates for CMD in primary care. Instead, psychosocial and spiritual models of illness causation and management are often preferred. The clinical validity of the subcategories of CMD is in doubt; in primary care, a dimensional model of distress may be easier and more practical to use. The majority of patients do not receive evidence-based treatments; typically, symptomatic treatments are provided (for example, vitamins and tonics for the complaint of fatigue, hypnotics for sleep difficulties, etc.). It is not surprising that surveys of prescription behaviour in India show that a majority of drugs used are of 'doubtful value'. Inappropriate treatment is associated with chronicity, disability and increased healthcare costs.
Despite this substantial body of epidemiological evidence demonstrating the considerable burden of CMD in developing countries, CMD remain a low priority in public health. In this oration, I will argue that CMD are a high priority in public health, using evidence on the links between CMD and established public policy priorities, and evidence that CMD can be treated effectively using locally available and affordable treatments.
Poverty, Gender Disadvantage and CMD
Diseases that disproportionately affect the poor, or women who face disadvantages on account of their gender, are prioritized by public health policy-makers. Stressful life experiences, such as exposure to violence and poor physical health, which are well-recognized risk factors for mental disorders, are more likely to be experienced by poor people. Thus, it is not surprising that virtually all population-based studies of the risk factors for mental disorders, particularly depressive and anxiety disorders, consistently show that the poor and marginalized are at greater risk of suffering from these. We also know that mental disorders impoverish people, both due to the increased costs of healthcare (often sought through private providers) and lost employment opportunities. Most mental illnesses are relatively simple and cheap to treat, and evidence from clinical trials shows that efficacious treatment is associated with significant reductions in overall healthcare costs. Thus, treating mental disorders, particularly in the poor who bear the disproportionate burden of suffering, would help them work more productively and reduce their healthcare expenditure, facilitating the conditions necessary to rise out of poverty. One of the most consistent risk factors for CMD is female sex; this increased risk has also been replicated in developing countries. A recent review on the possible explanations for these sex differences found no evidence to support a hormonal or other biological mechanism. On the other hand, there is growing evidence that gender disadvantage, as indicated by exposure to intimate partner violence and low levels of autonomy in decision-making, are key risk factors for CMD in women.
CMD and Maternal and Child Health
Maternal and child health is one of India's most important public health priorities. One of the commonest health problems affecting mothers during pregnancy and after childbirth is depression. A large number of studies from most regions of the developing world show that between 10% and 30% of mothers will suffer from depression. Depressed mothers are much more disabled and less likely to take care of their needs. Suicide is a leading cause of maternal death in developed countries. Suicide is now a leading cause of death among young women in the reproductive age group in the world's two most populous countries, India and China. It is plausible that depression in mothers may also lead to increased maternal mortality, both through adversely impacting on their physical health needs, as well as more directly through suicide. A series of studies from South Asia have demonstrated that early childhood failure to thrive, as indicated by undernutrition and stunting in under-1-year-old babies, is independently associated with depression in mothers. For example, a recent population-based cohort study from Pakistan has shown that babies of mothers who were depressed during pregnancy and in the postnatal period were more than 5 times at greater risk for being underweight and stunted at 6 months than babies of non-depressed mothers, even after adjustment for other known confounders such as maternal socioeconomic status. Childhood failure to thrive is a major risk factor for child mortality and thus, it would be plausible to hypothesize that depression in mothers is also associated with increased child mortality. Indeed, evidence shows that depressed mothers are more likely to cease breastfeeding, and their babies are significantly more likely to suffer from diarrhoeal episodes or to not have their complete immunization, all of these being recognized risk factors for childhood mortality. This study also showed that depression during pregnancy was strongly associated with low birth weight, an association that has been replicated in studies in India and Brazil (personal communication).
CMD and Reproductive and Sexual Health
Among the most common complaints in women are those related to their reproductive and sexual health, notably complaints of abnormal vaginal discharge and fatigue. Not surprisingly, such complaints are the focus of reproductive health programmes in the country. These complaints have typically been assumed to be the result of poor reproductive health; the complaint of vaginal discharge, for example, is attributed to reproductive tract infections, while the complaint of fatigue is attributed to anaemia. However, a growing body of evidence has demonstrated that these assumptions are incorrect; for example, a study in Bangladesh found that only 1 in 3 women with the complaint of abnormal vaginal discharge in fact had any infection. A population-based cohort study of nearly 2500 women has recently been completed in Goa, India; its objective was to investigate psychosocial aetiologies for these complaints, focusing on mental health and gender disadvantage. The rationale for these hypotheses was that poor mental health was a recognized risk factor for medically unexplained symptoms in developed countries and was a recognized risk factor for sexual complaints among men in South Asia (the dhat syndrome). These studies have shown that the strongest risk factors for complaints of abnormal vaginal discharge and chronic fatigue are mental health-related factors: somatoform disorders and CMD. Thus, CMD are an integral component of our understanding of the aetiology of common reproductive complaints.
Treatment of Common Mental Disorders
The implication of the evidence linking CMD with social risk factors and other public health priorities is that if we can treat mental disorders effectively, we may find benefits not only to the patient's mental health, but to wider social and health outcomes as well.
Many developing countries have extremely meagre resources for mental illness, and little progress has been made in improving treatment modalities. Although most countries have an essential drugs policy, about 20% do not even have the most commonly prescribed drugs for depression. In approximately half the countries of the world (all in the developing world), there is no more than one psychiatrist and one psychiatric nurse per 100,000 population; the numbers of psychologists and social workers working in mental health are even lower. As a result of this scarcity of mental health resources, the overwhelming majority of persons with depression would have little opportunity for specialist treatment. Thus, given the considerable burden of CMD, the fact that the overwhelming majority of patients are only seen in primary care, and the great shortage of mental health specialists, treatments must be delivered in primary care or community settings by general or community health practitioners. Apart from being effective, such treatments must be affordable and accessible.
Until recently, all the evidence for effective treatment of depression was derived from randomized controlled trials in developed countries, and the cross-national applicability of these studies had been questioned on a number of grounds. These grounds include (i) cultural factors such as the local acceptability of specific interventions, (ii) health system factors such as the availability of human resources to implement the intervention, (iii) costs and availability of medication, and (iv) individual patient factors such as pharmacodynamic variations among populations, all of which could influence the cross-cultural validity of treatment evidence. Three randomized controlled trials that studied the efficacy and cost-effectiveness of the treatment of depression in India, Uganda and Chile have been published. All these trials shared a number of features, including preparatory work in which measures for depression were translated and validated for the local culture and epidemiological studies were undertaken to estimate prevalence and risk factors. All the studies targeted poor populations. The Indian and Chilean trials were located in low-income, urban, primary or general healthcare settings while the Ugandan trial was in a poor rural community. All the studies tested treatment options that were intended to be feasible and affordable to the populations being studied.
So, what worked? All three trials had a psychological intervention; however, only the two trials that employed a group-based intervention found that this intervention was efficacious. The individual, psychological intervention used in India was no better than placebo; this lack of efficacy was, in all probability, due to the culturally unacceptable nature of a purely 'talking' intervention by a professional therapist. On the other hand, group therapy that emphasizes support and sharing among members of the same community was highly effective in Uganda and Chile. These group interventions were also part of a bottom-up approach in which people from the local area led the intervention. Antidepressants were used in two trials, one as a discrete treatment (in India), and one as part of a multimodal intervention along with group therapy (in Chile). The Indian trial demonstrated the superiority of antidepressants as compared to placebo, particularly in facilitating an early recovery. However, adherence to treatment declined rapidly after 2 months and this may have accounted for the absence of any significant effects at the 6 and 12 months' outcome. In the Chilean study, most patients in both groups received medication, the main difference being that the stepped-care group received appropriate doses for longer periods of time. Four factors might have influenced this: structured guidelines for medication, advocacy on behalf of the patient by the group leader when approaching the prescribing doctor, peer support, and empowerment of patients to take an active role to ensure that guidelines were enforced. The Ugandan trial employed no drug therapy at all. All trials had a measure of function or disability which showed significant improvements in the treatment group; the Indian trial showed that treating depression produces a significant reduction in total healthcare costs.
Thus, in all the three study sites, there was evidence for efficacy of depression interventions that were locally feasible and cost-effective among the poorest people in that setting. The associated improvement in function suggests benefits beyond mental health and beyond the individual who was treated, since improved function should benefit both the family and the community, and enable the person to cope better with social and economic difficulties. The studies demonstrate that some interventions found to be effective in developed countries were found effective at these study sites while others were not, perhaps due to local factors such as low adherence and lack of acceptability of specific treatments. Both elements suggest that it is worth trying interventions found to be effective in other cultures, but that their effectiveness needs to be tested when applied to new populations.
The studies also demonstrate that scientific evaluation of interventions, in the form of adequately powered randomized controlled trials with relatively high response rates, is feasible in developing countries from both a practical and an ethical viewpoint. While there continues to be a need for more studies among other populations to determine the cross-cultural applicability of these approaches, and to identify other interventions likely to be effective, the new evidence obliges physicians, policy-makers and donors to take action to reduce the burden of one of the most common and disabling illnesses in developing countries. Above all, it is time to use the new evidence to actively combat the scepticism of policy-makers that there is nothing that can be done for depression in developing countries.
Implications for Theory, Policy and Research
Evidence suggests that although the most consistent risk factors for CMD lie in the social and economic contexts of individuals' lives, both biological treatments (antidepressants) and psychological treatments (group therapy) are efficacious and cost-effective. The apparent divergence of social origins and biological treatments for CMD has parallels with the multifactorial aetiological models well established for other chronic, non-communicable diseases such as diabetes mellitus. The key difference lies in the fact that the pathophysiological processes in diseases such as diabetes are more clearly elucidated. Thus, the role of lifestyle and stressful events, as well as the role of insulin resistance and genetic inheritance, in the aetiology and prevention of diabetes is well established. The basic theoretical rationale is that a social aetiology triggers a biological pathological process in vulnerable individuals, a process which responds to biological treatments even without direct action on the social aetiology. The finding that antidepressants are efficacious is supportive evidence for a biological basis to the pathophysiology of CMD. Similarly, there is evidence that psychological treatments, such as cognitive-behavioural therapy, exert a therapeutic effect on depressive disorders, which is reflected in changes in brain metabolism. However, the precise mechanism through which social and biological factors interact to lead to CMD, or to enable recovery from CMD, remains unclear. I will now consider the implications of this evidence for policy and research.
In societies where mental health services are poorly developed, it may be argued that preventive strategies aimed at strengthening protective factors in local communities may be a more sensible investment of scarce resources than duplicating the extensive mental healthcare systems of the developed world (whose existence has not led to any significant reduction in the prevalence of mental disorders). There is a potential for both primary and secondary preventive strategies.
In terms of primary prevention, two major themes which can be explored are education and economic empowerment. Although health policy has often considered mental health as a 'luxury' item when dealing with the health consequences of poverty, it is clear from the evidence presented that the poor are more likely to suffer from mental illness and tragic outcomes as a result of their illness.
In many developing countries, indebtedness to loan-sharks is a great source of stress and worry. Since the mid-1990s, the seasonal monsoon has consistently failed in some central regions of India, leading to low harvests and, subsequently, lower incomes for farmers. The ones who have suffered the most have been the poorest subsistence farmers: those who were not credit-worthy enough to get bank loans and had to borrow money from loan-sharks at exorbitant rates of interest to tide over the financial crisis. With their crops failing, the farmers were faced with the stark choice of selling whatever few assets they still had or becoming bonded labour to the moneylender until the debt was repaid. It is not surprising, then, that these circumstances lead to severe mental distress and, ultimately, suicide. It is clear that here lies a potential preventive strategy, in that local banks could step in and review their process of assessing credit-worthiness for persons who belong to the poorest sections of society. While there is no evidence specifically demonstrating the link between access to micro-credit and suicide, many non-governmental organizations (NGOs), such as those run by Basix in India and the Bangladesh Rural Advancement Committee (BRAC) in Bangladesh, are involved in setting up such loan facilities in rural areas. Provision of such loans may reduce mental illness by removing a key cause of stress: the threat posed by the informal moneylender. The NGO programmes for poverty alleviation target not only credit facilities, but also gender equity, basic healthcare, nutrition, education and human rights issues. An evaluation of the BRAC poverty alleviation programmes, which reach out to millions of the poorest people in Bangladesh, indicates that the psychological well-being of women who are BRAC members is better than that of those who are not.
The key to secondary prevention is placing mental illness, in particular CMD, onto the priority agenda for primary healthcare by local policy-makers. There is a need to move the subject of CMD from its current home within the isolated and marginalized realm of psychiatry into the broader, community-oriented public health arena. Thus, greater emphasis is required on developing innovative methods of training general health workers to recognize and effectively treat CMD. The author has written Where there is no psychiatrist, a healthcare manual modelled along the lines of the classic manual, Where there is no doctor, which follows these principles. The manual is now being translated into Indian languages by the Voluntary Health Association of India, New Delhi.
Despite the compelling evidence of an association between CMD and economic deprivation, it is important to recognize that the majority of people living even in squalid poverty remain well, cope with the daily grind of existence and do not succumb to the stressors they face in their lives. Indeed, this is the real challenge for public health researchers, i.e. to identify the protective and nurturing qualities in those who do not become depressed when faced with difficult economic circumstances, for therein lies a potential to help and prevent mental health problems. Although we now have enough evidence from efficacy trials to guide our choices for specific treatments for CMD, we still do not have a model through which these can be integrated into routine primary care in an effective and affordable manner. A combination of an antidepressant with a psychosocial intervention, providing more resource-intensive interventions according to individual patient needs (the stepped-care model) may be the ideal way of improving clinical outcomes in patients with CMD in routine primary care. Recent trials reporting the efficacy of a collaborative management protocol may provide a useful route to improving outcomes. An innovative intervention which combines the principles of stepped-care and collaborative care will be evaluated in a new cluster randomized trial to be implemented in Goa, India over the next few years. Finally, there is the need for strong collaboration between biological and epidemiological psychiatric research to uncover the precise pathways through which social factors lead to biological changes, which lie at the heart of the distress and treatment of CMD.
Acknowledgements
I am grateful to the staff and colleagues in Sangath, an NGO in Goa, through which I have implemented most of my research in India. I am also grateful for the support of colleagues in Goa Medical College and the Directorate of Health Services (Government of Goa) for their collaboration in many of the studies cited. The Wellcome Trust has been the principal funder for my research on common mental disorders. Finally, I wish to thank Dr K.S. Shaji for nominating me for this prestigious oration.
References
1. Goldberg D, Huxley P. Common mental disorders: A biosocial model. London: Tavistock/Routledge; 1992.
2. World Health Organization (WHO). The ICD-10 classification of mental and behavioural disorders. Geneva: WHO; 1992.
3. Jacob KS, Everitt BS, Patel V, et al. The comparison of latent variable models of nonpsychotic psychiatric morbidity in four culturally different populations. Psychol Med 1998;28:145-52.
4. Lewis G. Dimensions of neurosis. Psychol Med 1992;22:1011-18.
5. Tyrer P. The case for cothymia: Mixed anxiety and depression as a single diagnosis. Br J Psychiatry 2001;179:191-3.
6. WHO. The world health report 2001. Mental health: New understanding, new hope. Geneva: WHO; 2001.
7. Chisholm D, Sekar K, Kumar K, et al. Integration of mental health care into primary care. Demonstration cost-outcome study in India and Pakistan. Br J Psychiatry 2000;176:581-8.
8. Patel V, Kleinman A. Poverty and common mental disorders in developing countries. Bull World Health Organ 2003;81:609-15.
9. WHO. Mental illness in general health care: An international study. Chichester: John Wiley & Sons; 1995.
10. Patel V. The epidemiology of common mental disorders in South Asia. NIMHANS J 1999;17:307-27.
11. Patel V, Pereira J, Coutinho L, et al. Poverty, psychological disorder and disability in primary care attenders in Goa, India. Br J Psychiatry 1998;171:533-36.
12. Patel V, Todd CH, Winston M, et al. The outcome of common mental disorders in Harare, Zimbabwe. Br J Psychiatry 1998;172:53-7.
13. Patel V, Pereira J, Mann A. Somatic and psychological models of common mental disorders in India. Psychol Med 1998;28:135-43.
14. Patel V, Gwanzura F, Simunyu E, et al. The explanatory models and phenomenology of common mental disorder in Harare, Zimbabwe. Psychol Med 1995;25:1191-9.
15. Patel V, Pereira J, Coutinho L, et al. Is the labelling of common mental disorders as psychiatric illness useful in primary care? Indian J Psychiatry 1997;39:239-46.
16. Linden M, Lecrubier Y, Bellantuono C, et al. The prescribing of psychotropic drugs by primary care physicians: An international collaborative study. J Clin Psychopharmacol 1999;19:132-40.
17. Patel V, Chisholm D, Rabe-Hesketh S, et al. The efficacy and cost-effectiveness of a drug and psychological treatment for common mental disorders in general health care in Goa, India: A randomised controlled trial. Lancet 2003;361:33-9.
18. Patel V, Araya R, Lima MS, et al. Women, poverty and common mental disorders in four restructuring societies. Soc Sci Med 1999;49:1461-71.
19. Piccinelli M, Wilkinson G. Gender differences in depression. Critical review. Br J Psychiatry 2000;177:486-92.
20. Patel V, Kirkwood BR, Pednekar S, et al. Gender disadvantage and reproductive health risk factors for common mental disorder in women: A community survey in India. Arch Gen Psychiatry (in press).
21. Chandran M, Tharyan P, Muliyil J, et al. Post-partum depression in a cohort of women from a rural area of Tamil Nadu, India. Incidence and risk factors. Br J Psychiatry 2002;181:499-504.
22. Cooper P, Tomlinson M, Swartz L, et al. Post-partum depression and the mother-infant relationship in a South African peri-urban settlement. Br J Psychiatry 1999;175:554-8.
23. Patel V, Rodrigues M, De Souza N. Gender, poverty and postnatal depression: A cohort study from Goa, India. Am J Psychiatry 2002;159:43-7.
24. Oates M. Suicide: The leading cause of maternal death. Br J Psychiatry 2003;183:279-81.
25. Aaron R, Joseph A, Abraham S, et al. Suicides in young people in rural southern India. Lancet 2004;363:1117-18.
26. Phillips MR, Li X, Zhang Y. Suicide rates in China, 1995-99. Lancet 2002;359:835-40.
27. Patel V, Rahman A, Jacob KS, et al. Effect of maternal mental health on infant growth in low income countries: New evidence from South Asia. BMJ 2004;328:820-3.
28. Rahman A, Iqbal Z, Bunn J, et al. Impact of maternal depression on infant nutritional status and illness: A cohort study. Arch Gen Psychiatry 2004;61:946-52.
29. Koenig M, Jejeebhoy S, Singh S, et al. Investigating women's gynaecological morbidity in India: Not just another KAP survey. Reproductive Health Matters 1998;6:1-13.
30. Hawkes S, Morison L, Foster S, et al. Managing reproductive tract infections in women in low-income, low-prevalence situations: An evaluation of syndromic management in Matlab, Bangladesh. Lancet 1999;354:1776-81.
31. Patel V, Oomman NM. Mental health matters too: Gynecological morbidity and depression in South Asia. Reproductive Health Matters 1999;7:30-8.
32. Patel V, Kirkwood BR, Weiss H, et al. Chronic fatigue in developing countries: Population based survey of women in India. BMJ 2005;330:1190.
33. Patel V, Pednekar S, Weiss H, et al. Why do women complain of vaginal discharge? A population survey of infectious and psychosocial risk factors in a South Asian community. Int J Epidemiol 2005 (in press).
34. WHO. Atlas: Country profiles of mental health resources. Geneva: WHO; 2001.
35. Patel V. The need for treatment evidence for common mental disorders in developing countries. Psychol Med 2000;30:743-6.
36. Araya R, Rojas G, Fritsch R, et al. Treating depression in primary care in low-income women in Santiago, Chile: A randomised controlled trial. Lancet 2003;361:995-1000.
37. Bolton P, Bass J, Neugebauer R, et al. Group interpersonal psychotherapy for depression in rural Uganda. JAMA 2003;289:3117-24.
38. Goldapple K, Segal Z, Garson C, et al. Modulation of cortical-limbic pathways in major depression: Treatment-specific effects of cognitive behavior therapy. Arch Gen Psychiatry 2004;61:34-41.
39. Sundar M. Suicide in farmers in India. Br J Psychiatry 1999;175:585-6.
40. Chowdhury A, Bhuiya A. Do poverty alleviation programs reduce inequities in health? The Bangladesh experience. In: Leon D, Walt G (eds). Poverty, inequality and health. Oxford: Oxford University Press; 2001:312-32.
41. Patel V. Where there is no psychiatrist. London: Gaskell; 2003.
42. Katon W, Von Korff M, Lin E, et al. Collaborative management to achieve treatment guidelines: Impact on depression in primary care. JAMA 1995;273:1026-31.
Sangath, 841/1 Alto-Porvorim, Goa 403521, India
Source of Support: None, Conflict of Interest: None | <urn:uuid:33a38154-5ca0-455c-9bc0-689e003b4cfa> | {
"date": "2020-01-22T05:55:56",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9380506277084351,
"score": 2.671875,
"token_count": 6294,
"url": "http://www.indianjpsychiatry.org/article.asp?issn=0019-5545;year=2005;volume=47;issue=1;spage=15;epage=20;aulast=Patel"
} |
Technology and education are growing hand in hand. Today's generation is highly informed, which benefits learning outcomes. Given the pace at which students learn, it is necessary that relevant information is delivered and guidance is provided to lead them. In this era, teachers understand the needs of their students and guide them accordingly, so that students can achieve reasonable outcomes.
Convenient Methods of Studying
Technology is a blessing, as it has made the acquisition of information easier for individuals. The intellect of students is growing day by day, and hence students consistently seek better methods for acquiring knowledge. In this era, all information is available and accessible from the internet. Students can even find ready-made solutions for projects and assignments online. Students buy assignment support from the internet so that they can work efficiently and submit the topics assigned to them.
Ease of Access of Information
The internet has enhanced the accessibility of information. Earlier, students had to sit for long hours in libraries in search of the desired information. The entire process was highly time-consuming and generated less effective results. In this era, all students need to do is enter the right set of keywords to acquire the desired information. The biggest benefit is that education has extended its roots, and more and more individuals are attracted towards it. They understand that to compete and excel in the race of life, acquiring an education is a must.
Rectification of Barriers
Education and technology have complemented each other in a prominent manner. This is because the barriers which used to restrain individuals from acquiring education and knowledge have now been removed. Students have the freedom to attend online lectures and grasp information from various portals which support knowledge acquisition and make excelling in this competitive environment easier. All the aspects discussed here have been made practical and implementable with the support of teachers. Teachers have bound students more to the practical submission of assignments, which gives them first-hand exposure to what will be required of them in the actual world.
Enhancing the Scope of Innovation
Through enhanced technology and the portals which make it easy for students to submit assignments, new and innovative ideas are generated. Students get a chance to buy assignments as examples and work on the topic in a different way to come up with inspiring results. The frequency of innovative ideas has increased over time because a better analysis can be conducted of the ideas which come to mind.
In the end, it can be concluded that syncing education and technology has proved extremely beneficial for the betterment of society. Through this, a new scope of innovative ideas has been introduced, which can be seen in the progressive growth rate of society. This is a useful trait which is enhancing the overall intellect and productivity of individuals and is making the world a better place to live.
Melody Wilson is a true professional and a business graduate from a reputable university who has carved her own road towards success. She has analyzed the usage of technology in the field of education under various circumstances. She is working with Buy Assignment as a writer. After completing her master's, she plans to write as a professional blogger and benefit students. | <urn:uuid:83c3c950-e1fc-4598-96b5-8e4c314bcd79> | {
"date": "2020-01-22T06:00:19",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9676318168640137,
"score": 3.21875,
"token_count": 667,
"url": "http://www.jegindustries.com/syncing-technology-and-education-for-the-betterment-of-society/"
} |
21st Century Learning, The Music Classroom, and Workforce Readiness
The key elements or outcomes that our students today should be learning are Learning and Innovation Skills - the 4Cs (Critical Thinking, Communication, Collaboration, Creativity); Core Subjects - the 3Rs and 21st Century Themes; Life and Career Skills; and Information, Media, and Technology Skills. I have conducted some research, and the findings indicate that an education in the arts not only teaches students these key elements but in fact prepares them for college and the 21st century workforce. I invite you to click on the link below to read about the findings I found regarding music and 21st century education and preparation. | <urn:uuid:11795f8b-3d91-44e2-ba5d-0e81a9b516e1> | {
"date": "2020-01-22T04:41:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9212118983268738,
"score": 3.09375,
"token_count": 171,
"url": "http://www.learninginnovationlab.com/samantha-21st-century-workforce-readiness-competencies-4-cs.html"
} |
(StatePoint) The words “veteran,” “hero” and “patriot” usually evoke images of men. Many people are not aware that some 3 million women are currently serving or have served in the U.S. Armed Forces beginning with the American Revolution. Their stories are largely unknown.
“Women have served alongside men to gain and preserve liberty, from the American Revolution to today’s Global War on Terror,” says retired Army Major General Dee Ann McWilliams, president of the Women In Military Service For America Memorial Foundation.
The Foundation aims to bridge the gap in the public’s understanding of women’s military service and encourages everyone to help in the following ways:
Learn Their History
Women’s History Month, celebrated in March, is a great time to learn about trailblazing military women. Here are five you should know about:
• In 1782, Deborah Sampson disguised herself as a man to become the first woman known to enlist as a soldier in the Continental Army. The only woman to earn a full military pension for service during the American Revolution, she served as an infantryman and was wounded in action.
• Minnie Spotted-Wolf enlisted in the Marine Corps Women’s Reserve in 1943, making her the first known Native American woman to do so. Skilled at breaking horses, she described Marine boot camp as “hard but not too hard.”
• Capt. Sunita Williams, an astronaut who served 322 days as commander of the International Space Station, at one point held the record for the most cumulative hours of spacewalking. During her early Navy career, she flew helicopters in Operation Desert Shield.
• Overcoming childhood adversity, in 2010 Lt. La’Shanda Holmes became the first African-American female helicopter pilot in the history of the Coast Guard. She played a vital role in the Global War on Terror.
• During her three deployments to Afghanistan, Air Force Senior Airman Vanessa Velez drove a loaded Humvee into enemy territory on more than 120 missions.
Pay a Visit
Located at the gateway to Arlington National Cemetery, the Women In Military Service For America Memorial (Women’s Memorial) is the only memorial dedicated to honoring the 3 million women who have served or are serving in the U.S. Armed Forces. Preserving the details of their achievements, from clerk typist to fighter pilot, the Memorial aims to integrate military women into the public’s image of courage. When visiting the nation’s capital, consider adding this educational and inspiring institution to your itinerary.
Share Your Story
Military women, past and present, can register their service with the Women’s Memorial and become part of the world’s largest register of U.S. servicewomen and women veterans, which now totals nearly 267,000 members. By sharing your story future generations will come to know the valuable contributions of America’s military women. To register and learn more, visit womensmemorial.org/register-now.
At a time when the Department of Veterans Affairs reports that women veterans are the fastest-growing veteran population, recognizing the collective service of women is more important than ever.
“No matter what you did during your service, it’s an important part of history,” says General McWilliams. “Without your story our history will never be complete.”
Photo Credit: Courtesy of Donna Parry | <urn:uuid:c70c39b8-da06-4d53-b1d5-064186c245f2> | {
"date": "2020-01-22T06:26:12",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9488217830657959,
"score": 3.15625,
"token_count": 722,
"url": "http://www.newsbug.info/online_features/community_cares/honoring-the-achievements-of-women-in-the-military/article_6815acf5-67ce-5c30-960a-4530cc5578b4.html"
} |
They Drive You Batty
Yes, even bats are considered pests. Why? Would you like a family of them flapping around the attic and swooping down on you when you least expect it? They are certainly not the type of house guests that you would want literally hanging around.
Know Your Enemy
Before you look for ways to get rid of the bats hanging around your home, it’s important to know what you’re up against. Bats are the only mammals capable of true flight. The creatures are nocturnal, which means that they only hunt at night. And no, they won’t carry the family cat off.
Bats only eat insects but that doesn’t make them any less annoying. They use a fascinating echolocation system that allows them to locate their food in the dark. Bats also use the echolocation system to communicate with each other through a series of high-pitched noises.
Types of Bats
There are roughly 45 bat species in the United States (and many more worldwide); however, only the colonizing varieties are known to invade attics. The warm, dry atmosphere of an attic is perfect for raising their pups. The enclosed and elevated area also offers them excellent protection from predators. They also go out to hunt in shifts rather than flying off together.
If you have a bat colony in your attic, it’s probably composed of one of the following types of bats –
- Little Brown Bat
These creatures are appropriately named; however, there is nothing little about the havoc they can cause in a home. Their ears are also small, and the bats themselves weigh no more than 5 to 14 grams. Their wingspan can grow to between 222 and 269 mm.
Little brown bats can be golden brown, dark brown or black in color. The fur on their back is darker than the fur on their belly. The wings or membranes between them have no hair.
- Big Brown Bats
These are obviously larger than their smaller counterparts and can grow to 110 to 130mm in length. Big brown bats also have a wingspan of 330mm. Their sharp teeth enable them to bite down hard on larger insects. In addition, they do not have any fur on their faces, ears, wings and tails. These creatures are highly adaptable and can thrive in a variety of habitats such as forested areas, as well as along streets.
- Mexican Free-Tailed Bat
The tail of a Mexican free-tailed bat extends a third beyond its tail membrane. They are no more than 4 1/2 inches long and have a wingspan that can extend to 14 inches. Their wrinkled lips and thin wings give them a hideous appearance, and they can be reddish brown or gray in color. The pests also weigh no more than 0.4 ounces.
This particular species of bat also happens to be extremely fast in flight. Mexican free-tailed bats can also live longer than your average pest. The females can live for 18 years while their male counterparts can live for 13 years.
Most bats like to live in caves. However, a warm attic or basement with a broken window will do just as well, especially during winters. The furry creatures nurse their young and like to reside anywhere warm.
Why You Don’t Want Bats In Your Home
They may be warm-blooded and furry, but you don’t want them in your home. Sure, they might not necessarily try to get in your pantry, like the average rat, but that doesn’t make them any less of a nuisance.
Not convinced? Here are some facts that should change your mind:
- They carry diseases – Even a single infected bat in a colony of 25 should be a cause for concern. They can carry diseases such as rabies. The disease can be transmitted even by a tiny bite.
- Their droppings are toxic – A colony of bats can leave tons of droppings where they live. Bat guano (droppings) can accumulate in your eaves or attic. You don’t want something like that happening during winter, when the pests spend most of their nights indoors.
Why? It is harmful to humans. Bat dung that hasn’t been cleaned up for a while can develop a fungal growth. The growth releases spores that can cause a nasty lung infection: histoplasmosis. The disease can turn deadly if it affects a person who already suffers from another condition like cancer.
The guano also attracts other pests and insects. In other words, rats, cockroaches, and other creepy crawlies might be having a house party upstairs without you knowing it.
Signs Of a Bat Infestation
Homeowners can detect the presence of bats in their homes in several ways. As bats squeeze their way into a house, oil from their fur is deposited at the entry points, usually resulting in black or brown stains. Noise is another factor to watch out for: given their rigorous activity, you might hear them scurrying around. A more direct detection method is their droppings, which you’ll find lingering around the infested area. And seeing is believing: with an infestation on hand, you’re bound to catch them hanging out.
What Do I Do When I Find One?
If you find a bat in your house, don’t make the mistake of assuming that there won’t be a colony of the pests roosting nearby. They might have decided to make your attic their new winter home.
Some states have completely banned the extermination of bats, even when they invade homes. Four of the species are considered endangered, so this doesn’t come as a surprise. However, it is definitely inconvenient for the homeowner who has to deal with them. Most do-it-yourself remedies are either illegal or ineffective.
How Do I Get Rid of Them?
Bats do not behave like common pests, such as rats. They roost, can flap away or even attack with their sharp teeth. The removal process itself is detailed. For example, before you try to get rid of the creatures, you must –
- Find out what species has invaded your home
- Their access and entry point into the attic
- Figure out how to carefully clean up the piles of guano on the attic floor
- Figure out how you can safely remove the entire colony humanely
That is a pretty tall order, and is not something that an average homeowner is equipped to handle. In other words, it is always a good idea to have a pest control service that is experienced in handling wild animals such as bats to do the job. They would also know the most humane techniques to use to effectively get rid of the problem.
There are also no registered pesticides for bat control. However, you can prevent them from entering your home in the first place by sealing any entry points that they might use to infiltrate your home.
Make sure that you do so immediately, especially if you have just moved into a new home and have heard the neighbors complain about their “batty” problem. For example, you can cover your windows with a wire mesh. Any opening that is ¼ inch wide is an open invitation to most species of bats.
Never try to get rid of a bat infestation yourself. Remember, it takes only a tiny bite for a bat to transmit disease. If you absolutely must handle one, make sure that you wear thick gloves when doing so. A plastic container or net will also come in handy for trapping it. | <urn:uuid:d3208c54-3639-4591-895a-fdb0b3003e43> | {
"date": "2020-01-22T04:56:18",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9668736457824707,
"score": 2.734375,
"token_count": 1584,
"url": "http://www.onlinepestcontrol.com/pest/bats/"
} |
Creating a Flash Email Client Application
Almost everyone in the Internet community has, or knows someone who has, a Hotmail email address. Email programs like Hotmail and Yahoo offer users a way to send and retrieve messages online via a central location on the Internet. Hence, their popularity is obvious: people from all over the world can communicate with each other simply by logging into a central location.
This chapter explores the architecture and the means of creating a simple email management program using Flash MX, XML, Java, and Microsoft Access. The program handles email by using the person's pre-existing email account.
First, we'll look at a general overview of the application, and then we'll explore the data transactions and see how everything works. The last section of this chapter reviews Flash-specific programming concepts applicable to this application, including prototyping and local connections, and revisits the custom classes created for this application.
Although the technologies used here are specific to the discussion of Flash MX, you will walk away with an understanding of the architecture and be able to adapt other technologies such as ColdFusion, .NET, mySQL, or whatever you so choose to the same process.
Getting Started: Peachmail Joe
To get things started, let's look at the steps that a user goes through in working with a browser-based email management program like Hotmail. We'll call our application Peachmail. Meet Joe, who is a new user to our application.
Summer break! Joe is going on vacation and must leave his computer at home. Realizing he may not have access to his email, he tries to find a solution. A friend tells him about Peachmail, a service that allows him to access his email simply by getting online and accessing Peachmail's Web site.
He decides to check it out.
Joe gets home, fires up his machine, and loads Peachmail's Web site. First, Joe needs to create an account with Peachmail, entering such details as his username, email address, password, POP server, and SMTP server. Joe selects mejoe as the username that uniquely identifies him at Peachmail. Now that Joe has an account, he logs into Peachmail.
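Behind the scenes, a service like Peachmail simply talks to the user's existing mail servers using the settings captured at sign-up. The chapter's implementation uses Java on the server side; purely as a language-neutral illustration of the same POP3/SMTP round trip, here is a minimal Python sketch. The host names, port choices, credentials, and addresses are hypothetical placeholders, not details from the chapter's code.

```python
import poplib
import smtplib
from email.message import EmailMessage

# Account settings Joe entered at sign-up (all values hypothetical).
POP_HOST, SMTP_HOST = "pop.example.com", "smtp.example.com"
USER, PASSWORD = "mejoe", "secret"

# Receive: count the messages waiting on the user's own POP3 server.
pop = poplib.POP3_SSL(POP_HOST)  # SSL on the default port 995 assumed
pop.user(USER)
pop.pass_(PASSWORD)
count, size = pop.stat()
print(f"{count} messages ({size} bytes) waiting")
pop.quit()

# Send: relay an outgoing message through the user's own SMTP server.
msg = EmailMessage()
msg["From"] = "joe@example.com"
msg["To"] = "friend@example.com"
msg["Subject"] = "Greetings from Peachmail"
msg.set_content("Hey you crazy blokes! It's me Joe!")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:  # STARTTLS submission port assumed
    smtp.starttls()
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)
```

The central service therefore stores only account settings and address-book data (in Microsoft Access, in this chapter), while the mail itself continues to live on the user's existing provider.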
Joe now has access to these Peachmail services:
Address Book Services. Joe can add, edit, and remove contacts from his address book.
Email Services. Joe can view, send, receive, organize, and delete email from his account.
Account Services. Joe can modify his personal information or, heaven forbid, delete his account.
Joe adds a few friends to his address book and sends out an email letting his friends know he still has access to his email during his vacation:
Hey you crazy blokes! It's me Joe! I'm using Peachmail.com to access my email. It's absolutely brilliant! Cheers, Joe
Joe then logs out of Peachmail and feeling special about himself, decides to go spend a week's pay on a coffee at Starbucks. | <urn:uuid:7219793b-e22a-4063-b682-68d1119428b5> | {
"date": "2020-01-22T04:44:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9551557898521423,
"score": 2.703125,
"token_count": 608,
"url": "http://www.peachpit.com/articles/article.aspx?p=30689&seqNum=9"
} |
Hurrah! – It’s Tuesday morning and the sun is officially shining here, radiantly! And whilst vitamin D may not be the first thing on your mind when the sun comes out to play, I would suggest that perhaps it should be…
You may well have heard a lot of talk in the press recently about vitamin D and be wondering what it’s all about? The unfortunate truth is that we now know that vitamin D deficiency has reached pandemic proportions, and that’s having a serious impact on our health.
“Vitamin D deficiency is now recognised as a pandemic, with more than half of the world’s population currently at risk1”
Vitamin D is also fast becoming known as a ‘super-nutrient’. We now know that it’s not just an important nutrient for healthy bones, research is now finding that deficiency of this fat-soluble vitamin can be linked to a wide range of health problems, from cancer and cardiovascular disease to cognitive impairment and problems with auto-immunity such as multiple sclerosis (MS) and Type 1 diabetes.
What many people don’t realise though, is that very few foods naturally contain vitamin D. Fortified milk, egg yolks and oily fish are the best sources, but you certainly cannot rely on food to provide you with optimal amounts of vitamin D on a daily basis. In fact, the major source (80 – 100%) of vitamin D is actually sunshine! Vitamin D is primarily manufactured in the skin on contact with sunshine.
So why in this modern age are we experiencing such epidemic proportions of vitamin D deficiency? The simple answer is that we simply aren’t getting as much sun as we used to. Millions of years ago, our ancestors lived naked in the sun, spending most of the day working and travelling outside. Over the years, we have put on clothes and started working inside, travelling in cars and living in cities where buildings block the sun. In addition to this, in more recent years, skin cancer scares have further minimised sun exposure for all ages, especially for children. The recommended liberal use of high factor sunscreen has had additional negative impacts on the skin’s natural vitamin D production process. Before the sun scare, 90% of human vitamin D stores came from skin production not dietary sources. When you look at how our lifestyles have evolved to cut out the sun’s contact with our skin, it is easy to see why we now have such high levels of vitamin D deficiency.
So how much vitamin D do you need…
- Recent medical research indicates that human daily requirements of vitamin D may be up to ten times more than what is currently recommended.
- The current daily recommendation for vitamin D is 200 IU.
- If you consider that the skin will naturally produce approximately 10,000 IU vitamin D in response to 20 – 30 minutes of summer sun exposure, you can easily see how the current daily recommendation of a worryingly low 200 IU is seriously brought into question by health experts (see the quick comparison sketch after this list).
- Based on information from the most current medical literature, highly respected scientist Dr Joseph Pizzorno recommends that an average daily maintenance dose of 5000 IU vitamin D is more realistic to promote optimal vitamin D levels1,2.
- Yet scientists also agree that there is a great deal of individual variation and with vitamin D it is impossible to recommend a ‘one size fits all’ daily dosage level.
- The correct level of vitamin D is the one which results in bringing blood levels of vitamin D into an optimal range; it is essential therefore that you seek advice from a qualified healthcare practitioner for advice on the best dose for your individual requirements.
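To put those numbers side by side, here is a toy comparison using only the figures quoted above; it is illustrative arithmetic, not dosing advice.

```python
# Figures quoted in the text above (IU of vitamin D per day).
current_recommendation = 200
skin_production_summer_sun = 10_000   # from 20-30 minutes of summer sun
suggested_maintenance_dose = 5_000    # Pizzorno's average maintenance figure

print(f"Skin production vs recommendation: {skin_production_summer_sun / current_recommendation:.0f}x")
print(f"Suggested maintenance vs recommendation: {suggested_maintenance_dose / current_recommendation:.0f}x")
```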
What’s the best way of increasing vitamin D intake? Since vitamin D isn’t naturally present in many foods, it isn’t possible to achieve optimal vitamin D intake from food sources alone. The risk of skin cancer from excessive sunlight or sun-bed exposure opens an important debate over spending more time in the sun to increase vitamin D levels. Therefore, most experts now agree that supplementation is currently the safest and most effective method of achieving optimal vitamin D status. Supplements should contain vitamin D in the form of vitamin D3 (cholecalciferol), since this is the form naturally produced by the skin upon exposure to sunlight and research has shown this is the most efficient form at increasing vitamin D levels3.
How do I know if I’m deficient? The best way to find out if you’re deficient is by asking your GP to check your levels. Alternatively, there’s a simple, non-invasive test you can request privately which will identify deficiency too.
For more information on vitamin D deficiency, testing and supplementation, please feel free to get in touch and I will be happy to help.
- Pizzorno J. Integrative Medicine Vol. 9 No. 1 Feb/Mar 2010 ‘What have we learned about vitamin D dosing?’
- Hall, Kimlin, Aronov et al. Journal of Nutrition Published online ahead of print, doi: 10.3945/jn.109.115253 ‘Vitamin D intake needed to maintain target serum 25-hydroxyvitamin D concentrations in participants with low sun exposure and dark skin pigmentation is substantially higher than current recommendations’
- Trang HM, Cole DEC, Rubin et al. (1998) American Journal of Clinical Nutrition 68, 854-858 'Evidence that vitamin D3 increases serum 25-hydroxyvitamin D more efficiently than does vitamin D2'.
"date": "2020-01-22T05:31:12",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9252637624740601,
"score": 2.671875,
"token_count": 1152,
"url": "http://www.rachelbartholomew.co.uk/enjoy-the-sunshine-and-top-up-your-vitamin-d/"
} |
Squeezed between the brighter constellations of Boötes to the left and Leo to the right, Coma Berenices is completely smothered with galaxies. In the south of the constellation is a sprinkling of galaxies belonging to the Virgo Galaxy Cluster. The part that lies in Coma Berenices is referred to as the Coma-Virgo Galaxy Cluster. This constellation is a treat for galaxy hunters with large telescopes.
The backstory behind this constellation depicts events that actually happened in ancient times. Queen Berenice promised to sacrifice her hair to the gods if her husband, King Ptolemy III Euergetes, returned victorious from battle. He returned victorious and battle-scarred, so the hair was cut off Berenice's head. The constellation was given its name by a Greek called Konon of Samos.
|Blackeye Galaxy (M64):|
Looking like a survivor of an intergalactic scuffle, this galaxy is distinguished by a dark patch of dust. This patch is in fact an example of a phenomenon known as 'Extended Red Emission'. Such regions emit light by the photoluminescence of tiny dust particles that are abundant in the dusty patch. The dust patch is thronged with a large number of red emission clouds that glow brightly due to ultraviolet radiation emitted by young stars.
A strange fact about this galaxy is that it has a pair of counter-rotating disks. The two disks do not rotate in the same direction: the outer gaseous disk rotates in the opposite direction to the inner one. The inner disk also contains more gas and stars. The current hypothesis is that the outer disk was formed when another galaxy collided and merged with M64.
|A Hubble Space Telescope close up of the 'black eye' part of galaxy M64. A swirl of star clouds and nebulae can be clearly seen in immaculate clarity.|
|Image copyright Hubble Heritage Team/NASA|
|Needle Galaxy (NGC 4565):|
Most people have heard of the adage about a needle in a haystack. Well this adage doesn't apply to this galaxy as it is very bright and can even be seen in small telescopes.
The Needle Galaxy is a satisfyingly symmetrical edge-on spiral with an abundance of lilac dust covering most of its disk. At the heart of this spiral is a hungry supermassive black hole emitting copious X-rays, and because of this the nucleus is considered to be an active galactic nucleus.
What contributes to its amazing needle like appearance is a horizontal dark brown dust lane snaking along the middle of the galaxy. There is also a slight bulge in the core area that is more clearly visible in larger scopes.
|Rare and uncommonly found, this barred spiral galaxy comprises only one single lonely spiral arm. The structure of the main part of the galaxy is ring-like and is a site of very active star formation. The ring encircles the bar just like in NGC 2523 in Camelopardalis.|
|The sprawling spiral structure will make you follow a maze of dust lanes until your gaze penetrates the hypnotic spiral-shaped core. The core is the bright part of this galaxy, and the spiral arms literally drip with HII regions.|
M99 is one of the more prominent spiral galaxies in the Virgo-Coma Galaxy Cluster and was also the second galaxy whose spiral structure was discerned, by the sharp eyes and keen observations of Lord Rosse in 1846, a year after he discovered the spiral structure of the Whirlpool Galaxy.
M99 also moves at a fantastic radial speed, and this might indicate an encounter with another galaxy in the past.
|Elliptically shaped and highly symmetric, M88 is a fabulous tightly wound spiral galaxy. It is tilted at an incline, and this adds to its mystique.|
|NGC 4302 and NGC 4298:|
|You could say that this pair of galaxies is a galactic odd couple. Both are 60 million light years away; NGC 4302 is an edge-on spiral similar in texture to the Sandwich Galaxy in Leo, and NGC 4298 is a spiral galaxy similar to NGC 4647 in Virgo.|
|Coma Galaxy Cluster (Abell 1656):|
|Like grains of sand scattered on the galactic beach of the universe, Abell 1656 is a large, weighty collection of thousands of galaxies. The centre of the cluster is dominated by two supergiant ellipticals, NGC 4889 and NGC 4874. These two galaxies are known to be supermassive because, even at the cluster's distance of 400 million light years, they appear much larger than the surrounding ones.|
|The Box (Hickson 61):|
|This is a quartet of galaxies that gives the impression of a box shape. The largest galaxy, NGC 4173, is actually not associated with the group; through the magic of line of sight, it is a foreground object. This is a trait found in many compact galaxy groups, such as Stephan's Quintet in Pegasus. Because it is closer than the 200 million light year distance of the others, it is also the brightest in the group. Of the three that are physically together, two are edge-on spirals and one is a lenticular galaxy.|
Since this nearly face-on spiral galaxy is about 40 million light years away, it would be expected to be brighter than magnitude 11 and have a larger apparent size. The general appearance is of a bright yellow core with two spiral arms tapering outwards.
The galaxy is also known as Arp 189, the reason is because of a very elongated tidal tail that requires large telescopes to be seen. Initially this tidal tail was thought of as being an optical jet emitted by the galaxy but more conclusive studies proved this was not the case.
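To see why the distance sets that expectation, a quick distance-modulus estimate helps; the absolute magnitude of about −21 assumed below is typical of a large spiral, not a measured value for this galaxy:

```latex
% Distance modulus: m - M = 5 \log_{10}(d / 10\,\mathrm{pc})
% d = 40\ \mathrm{Mly} \approx 12.3\ \mathrm{Mpc} = 1.23 \times 10^{7}\ \mathrm{pc}
\mu = 5 \log_{10}\!\left(\frac{1.23\times10^{7}}{10}\right) \approx 30.4
% With an assumed M \approx -21 for a large spiral:
m \approx -21 + 30.4 = 9.4 \quad \text{(noticeably brighter than magnitude 11)}
```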
|The magnificent Needle Galaxy is sometimes referred to as Berenice's Hairclip. The nebulae and star clouds are heavily obscured by thick swathes of dust, which is why they can't be seen as in other edge-on galaxies such as NGC 891.|
|Image copyright J. Schedler|
"date": "2020-01-22T06:05:24",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9411638975143433,
"score": 3.375,
"token_count": 1279,
"url": "http://www.starsurfin.co.uk/constellations/comaberenices.html"
} |
The various cultivars of muskmelon (Cucumis melo L.) make a tasty addition to a summer-time backyard garden. Smooth-skinned cultivars include honeydew melons, casabas and crenshaws. Cultivars with netted skins include cantaloupes.
Climate and Soil
Muskmelons grow best with average temperatures of 65 to 75 degrees Fahrenheit and ripen best in dry weather. They like loose, rich soil with a pH of 6.0 to 6.8 that is well-drained. Add aged manure to the planting beds the fall before you plant.
Plant seeds in full sun, 1 inch deep, three weeks to one month following the last frost in the spring. The seeds will germinate in about 10 days at 65 degrees Fahrenheit. Start seeds indoors in peat or paper pots about six weeks before planting time. Biodegradable pots are required because muskmelon seedlings develop badly if their roots are disturbed. Plant in hills or mounds, or in rows. Mounds should be 24 inches across and four to six feet apart. When the seedlings on a mound produce three or four leaves, use scissors to cut all but the strongest three or four off at ground level. Space the seedlings evenly in rows.
Muskmelons respond well to plastic mulch, which controls weeds and early-season insects. Apply the mulch two to three weeks before you plant the melons, creating holes for the seeds or seedlings. The mulch helps protect the plants from early-season cool spells, and you don't need to remove it before the plants start to flower unless the weather gets really warm.
If you use black plastic mulch, apply 3 pounds of potassium, 2 pounds of phosphorus and 1 pound of nitrogen per 1,000 square feet of plot area before planting. Increase the amount of nitrogen by 25 percent on bare ground. Fertilizers are rated by three figures that show the ratio, by percentage weight, of the macronutrients nitrogen, phosphorus and potassium. Nitrogen encourages the growth of vines; potassium helps the melons develop.
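To translate those nutrient weights into bags of fertilizer, divide each target by the bag's N-P-K percentage. The sketch below assumes a 10-10-10 grade purely as an example; in practice you would choose or blend grades to hit all three targets at once.

```python
# Pounds of fertilizer product needed to supply a nutrient target,
# given the product's grade (percent of that nutrient by weight).
def product_needed(target_lbs: float, grade_percent: float) -> float:
    return target_lbs / (grade_percent / 100.0)

targets = {"nitrogen": 1.0, "phosphorus": 2.0, "potassium": 3.0}  # lbs per 1,000 sq ft
grade = 10.0  # hypothetical 10-10-10 bag: 10% of each nutrient

for nutrient, lbs in targets.items():
    print(f"{nutrient}: {product_needed(lbs, grade):.0f} lbs of 10-10-10 per 1,000 sq ft")
```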
Keep the soil moist. Irrigate with a drip hose or by furrow; overhead watering can cause mildew. Remove weeds from around the vines until the vines are big enough to smother out the competition. Allow four melons to grow on each vine.
Growing in Small Spaces
It’s possible for you to buy seeds for bush or dwarf cultivars ideal for expanding in containers that are 18-inches extensive or bigger. To truly save space and boost produce, train vines to develop close to the container on a support or trellis. Place mesh bags across the creating melons s O the vines will not be damaged by their pounds, and tie them. It’s possible for you to extend the developing period by commencing a container-grown muskmelon indoors exterior when the climate warms in the spring and relocating it. | <urn:uuid:c2f9d601-8c4b-44e6-a0a6-3630dd4b3ba1> | {
"date": "2020-01-22T04:25:41",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8950198292732239,
"score": 3.40625,
"token_count": 634,
"url": "http://www.tesorodelvallesolar.com/cultivation-of-muskmelon"
} |
On the evening of Sep 5, 2018, an odd-radius plate display of great significance was captured in Haikou, China, by photographer Zhan Guorong. The photos, when enhanced, reveal an elusive coloured arc between the 24° and 35° plate arcs, which doesn't fit into any ordinary odd-radius halo family. The arc was later confirmed by Dr. Nicolas Lefaudeux to be the exceedingly rare 28° plate arcs, which previously had only two known records world-wide. They were first observed in the 1997 Lascar display in Chile (http://www.thehalovault.org/2008/12/lascar-display.html), and spotted for the second time in Chengdu, China, by photographer Jin Hui on July 20, 2016. We've got permission from Jin Hui to share his great capture with the world.
© Zhan Guorong, shown with permission
© Jin Hui, shown with permission
Unlike the Lascar display, which lasted for almost a full day and yielded many newly discovered arcs and halos, the displays in Chengdu and Haikou were short-lived, with no new arcs or halos apart from the 28° plate arcs. The lack of associated arcs and the restricted solar elevation make it difficult to fully understand what really happened up in the clouds. Isolated 28° plate arcs can be reproduced in simulations by either triangular pyramidal crystals with (3 0 -3 2) pyramidal faces or octahedral cubic ice crystals with an octahedral face horizontal [1, 2]. Both models require rather restricted shape/orientation conditions.
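For readers new to odd-radius halos: a halo's angular radius follows from the minimum deviation of light through the wedge angle between two crystal faces, and pyramidal faces introduce wedge angles beyond the ordinary 60° and 90° of hexagonal prisms, which is how radii such as 9°, 20°, 24° and 35° arise. A quick sketch (the refractive index 1.31 is a standard mid-visible value for ice; real plate-arc geometry involves oriented crystals and is more involved):

```python
import math

def min_deviation_deg(wedge_deg: float, n: float = 1.31) -> float:
    """Minimum deviation of a ray through a prism of the given wedge angle."""
    a = math.radians(wedge_deg)
    return math.degrees(2 * math.asin(n * math.sin(a / 2)) - a)

for wedge in (60.0, 90.0):  # the two ordinary hexagonal-prism wedges
    print(f"{wedge:.0f}° wedge -> {min_deviation_deg(wedge):.1f}° halo radius")
# prints roughly 21.8° and 45.7°, i.e. the familiar 22° and 46° halos
```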
Dr. Lefaudeux brought up another interesting point. The 9° and 24° plate arcs were totally missing at Lascar, implying the absence of middle column sections in the pyramidal crystals. In Haikou and Chengdu, though, they were present and quite strong.
Are these displays simply variants of the Lascar display with different crystal combinations? Or are we looking at a totally new breed? We'll need more photos at different solar elevations to unravel the mystery. The good news is that we now know such displays can probably occur anywhere. Before the Haikou case, we thought that the responsible crystal clouds were high-mountain related, since Lascar and Chengdu sit beside the Andes and the Himalayas respectively. The clouds responsible for what happened in Haikou, however, had their origin in the middle of the South China Sea.
We encourage skywatchers world-wide to keep an eye out for these elusive arcs. They might just pop up in the next odd-radius display over your backyard.
Nicolas A. Lefaudeux, "Crystals of hexagonal ice with (2 0 -2 3) Miller index faces explain exotic arcs in the Lascar halo display," Appl. Opt. 50, F121-F128 (2011)
M. Riikonen, M. Sillanpää, L. Virta, D. Sullivan, J. Moilanen, and I. Luukkonen, “Halo observations provide evidence of airborne cubic ice in the Earth’s atmosphere,” Appl. Opt. 39, 6080–6085 (2000)
"date": "2020-01-22T05:00:21",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9324231147766113,
"score": 2.90625,
"token_count": 661,
"url": "http://www.thehalovault.org/2018/09/"
} |
The Abergavenny Arms is situated near Tunbridge Wells, which is famous for the natural spring-water wells located in the Pantiles, frequented by Queen Victoria.
Historically, the Abergavenny was built during the reign of Henry VI. From 1442 to 1461 it was a simple framed building that brewed its own beer. The first recorded keeper of the house, then known as the "Apsis", was one William Appes, a former disciple of the Kentish rebel leader Jack Cade, who in 1450, with his raggle-taggle band of followers, marched on London in an attempt to overthrow the Government.
During the late 16th century the Apsis became "The Bull". In the 18th century the lounge bar became the parish courthouse and the cellars became the local jail, where miscreants charged with crimes from drunkenness to sheep stealing were held to await their fate. The latter crime carried the death sentence; legend has it that a yew tree once standing opposite was used to hang the sheep stealers, who were left hanging as an example to other wrongdoers. The cells, which still exist today, were in use until the 19th century.
In 1705, commodious stables were built, and in the mid-18th century The Bull became a posting house offering food, accommodation and fresh horses to travellers.
In 1770, a coachman on an overnight stay died in his sleep. The law of the land at that time decreed that should a death occur in a public house, the house should be closed until after an inquest. Being conversant with the law and having no wish to close, the innkeeper threw the body out of the window and reported a case of suicide. The coachman now celebrates the anniversary of his death by returning each year to haunt the resident innkeeper.
In 1823 "The Bull" became the Abergavenny Arms hotel in honour of Lord Abergavenny, whose family owned the property until 1933. The crest of the Abergavenny family bears the head of a bull, which would account for the name of the inn for over 200 years.
"date": "2020-01-22T06:30:46",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9858893752098083,
"score": 2.609375,
"token_count": 441,
"url": "https://abergavennyarms.co.uk/our-history/"
} |
Cucurbit yellow stunting disorder virus (CYSDV) and Watermelon chlorotic stunt virus (WmCSV) are the most widespread and damaging viruses of cucurbits in the Middle East. Both are cucurbit-infecting, bipartite, whitefly-transmitted viruses. Post-transcriptional gene silencing (PTGS) is a universal mechanism by which plants can systemically switch off the expression of targeted genes by reducing the steady-state levels of specific RNAs. In this study, PTGS was used to control the two viruses, and the efficiency of dsRNA in triggering resistance against CYSDV and WmCSV was investigated. Three regions from three genes of the CYSDV genome were selected: the coat protein (CP) gene, the heat shock protein gene (Hsp70) and ORF3; two regions from two genes of the WmCSV genome were selected: the CP gene and the Rep gene. Bioassay, dot-blot hybridization and polymerase chain reaction (PCR) methods were used to evaluate resistance against the viruses. Clear symptoms on tobacco plants took two to three weeks to appear, and all non-infiltrated tobacco plants (positive controls) showed viral symptoms after inoculation. Most of the plants agro-infiltrated with sense/antisense constructs did not develop symptoms of the viruses. In dot-blot hybridization, negative hybridization was obtained with tobacco plants infiltrated with the prepared constructs, compared to the non-infiltrated tobacco plants used as controls; only one plant out of five gave positive signals, with the construct pasCYSDV-Hsp70. Using PCR, positive reactions of the expected sizes, a 500 bp fragment for WmCSV and 800 bp for CYSDV, were obtained with tobacco plants infiltrated with sense constructs, indicating the presence of the viral genome in challenged tobacco plants. Tobacco plants infiltrated with sense/antisense constructs gave negative PCR results, indicating the absence of the viral genome.
Key words: Cucurbit yellow stunting disorder virus (CYSDV), watermelon chlorotic stunt virus (WmCSV), Post-transcriptional gene silencing (PTGS), coat protein (CP), Hsp70, ORF3, Rep, dot-blot, hybridization.
"date": "2020-01-22T04:47:23",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9288405776023865,
"score": 3,
"token_count": 531,
"url": "https://academicjournals.org/journal/AJB/article-abstract/79F699955915"
} |
Starting with Manfred von Ardenne, early attempts have been reported on the examination of specimens inside "environmental" cells with water or atmospheric gas, in conjunction with conventional and scanning transmission types of electron microscopes. However, the first images of wet specimens in an SEM were reported by Lane in 1970, when he injected a fine jet of water vapor over the point of observation at the specimen surface; the gas diffused away into the vacuum of the specimen chamber without any modification to the instrument. Further, Shah and Beckett reported, in 1977 and in 1979, the use of differentially pumped cells or chambers, presumably to maintain botanical specimens conductive in order to allow the use of the absorbed specimen current mode for signal detection. Spivak et al. reported in 1977 the design and use of various environmental cell detection configurations in an SEM, including differential pumping and the use of electron-transparent films to maintain the specimens in their wet state. Those cells, by their nature, had only limited application use and no further development was done.

In 1974, an improved approach was reported by Robinson with the use of a backscattered electron detector and differential vacuum pumping with a single aperture and the introduction of water vapor around 600 Pa pressure at the freezing point of temperature. However, neither of those approaches produced a stable enough instrument for routine operation. Starting work with Robinson in 1978 at the University of New South Wales in Sydney, Danilatos undertook a thorough quantitative study and experimentation that resulted in stable operation of the microscope at room temperature and high pressures up to 7000 Pa, as reported in 1979. In the following years, Danilatos, working independently, reported a series of works on the design and construction of an environmental or atmospheric scanning electron microscope (ASEM) capable of working at any pressure from vacuum up to one atmosphere. These early works involved the optimization of the differential pumping system together with backscattered electron (BSE) detectors until 1983, when he invented the use of the environmental gas itself as a detection medium. The decade of 1980 closed with the publication of two major works comprehensively dealing with the foundations of ESEM and the theory of the gaseous detection device (GDD).

Furthermore, in 1988, the first commercial ESEM was exhibited in New Orleans by ElectroScan Corporation, a venture-capital company wishing to commercialize the Danilatos ESEM. The company placed an emphasis on the secondary electron (SE) mode of the GDD and secured the monopoly of the commercial ESEM with a series of additional key patents. Philips and FEI companies succeeded ElectroScan in providing commercial ESEM instruments. With the expiration of key patents and assistance by Danilatos, new commercial instruments have recently been added to the market by LEO (succeeded by Carl Zeiss SMT). Further improvements have been reported to date from work on the original experimental prototype ESEM in Sydney and from numerous other workers using the commercial ESEM in a wide variety of applications worldwide. An early comprehensive bibliography was compiled in 1993 by Danilatos, whilst a more recent survey can be found in a Ph.D. thesis by Morgan (2005).
An ESEM employs a scanned electron beam and electromagnetic lenses to focus and direct the beam on the specimen surface in an identical way as a conventional SEM. A very small focused electron spot (probe) is scanned in a raster form over a small specimen area. The beam electrons interact with the specimen surface layer and produce various signals (information) that are collected with appropriate detectors. The output of these detectors modulates, via appropriate electronics, the screen of a monitor to form an image that corresponds to the small raster and information, pixel by pixel, emanating from the specimen surface. Beyond these common principles, the ESEM deviates substantially from an SEM in several respects, all of which are important in the correct design and operation of the instrument. The outline below highlights these requirements and how the system works.
The specimen chamber sustaining the high-pressure gaseous environment is separated from the high vacuum of the electron optics column with at least two small orifices customarily referred to as pressure-limiting apertures (PLA). The gas leaking through the first aperture (PLA1) is quickly removed from the system with a pump that maintains a much lower pressure in the downstream region (i.e. immediately above the aperture). This is called differential pumping. Some gas escapes further from the low pressure region (stage 1) through a second pressure limiting aperture (PLA2) into the vacuum region of the column above, which constitutes a second stage differential pumping (stage 2). A schematic diagram shows the basic ESEM gas pressure stages including the specimen chamber, intermediate cavity and upper electron optics column. The corresponding pressures achieved are p0>>p1>>p2, which is a sufficient condition for a microscope employing a tungsten type of electron gun. Additional pumping stages may be added to achieve an even higher vacuum as required for a LaB6 and field emission type electron guns. The design and shape of a pressure limiting aperture are critical in obtaining the sharpest possible pressure gradient (transition) through it. This is achieved with an orifice made on a thin plate and tapered in the downstream direction as shown in the accompanying isodensity contours of a gas flowing through the PLA1. This was done with a computer simulation of the gas molecule collisions and movement through space in real time. We can immediately see in the figure of the isodensity contours of gas through aperture that the gas density decreases by about two orders of magnitude over the length of a few aperture radii. This is a quantitatively vivid demonstration of a first principle that enables the separation of the high-pressure specimen chamber from the low pressure and vacuum regions above.
By such means, the gas flow fields have been studied in a variety of instrument situations, in which the electron beam transfer has subsequently been quantified.
By the use of differential pumping, an electron beam is generated and propagated freely in the vacuum of the upper column, from the electron gun down to PLA2, from which point onwards the electron beam gradually loses electrons due to electron scattering by gas molecules. Initially, the amount of electron scattering is negligible inside the intermediate cavity, but as the beam encounters an increasingly dense gas jet formed by the PLA1, the losses become significant. After the beam enters the specimen chamber, the electron losses increase exponentially at a rate depending on the prevailing pressure, the nature of the gas and the accelerating voltage of the beam. The fraction of beam transmitted along the PLA1 axis can be seen from a set of characteristic curves for a given product p0D, where D is the aperture diameter. Eventually, the electron beam becomes totally scattered and lost, but before this happens, a useful amount of electrons is retained in the original focused spot over a finite distance, which can still be used for imaging. This is possible because the removed electrons are scattered and distributed over a broad area like a skirt (electron skirt) surrounding the focused spot. Because the electron skirt width is orders of magnitude greater than the spot width, with orders of magnitude less current density, the skirt contributes only background (signal) noise without partaking in the contrast generated by the central spot. The particular conditions of pressure, distance and beam voltage over which the electron beam remains useful for imaging purposes have been termed the oligo-scattering regime, in distinction from the single-, plural- and multiple-scattering regimes used in prior literature.
For a given beam accelerating voltage and gas, the distance L from PLA1, over which useful imaging is possible, is inversely proportional to the chamber pressure p0. As a rule of thumb, for a 5 kV beam in air, it is required that the product p0L = 1 Pa·m or less. By this second principle of electron beam transfer, the design and operation of an ESEM is centered on refining and miniaturizing all the devices controlling the specimen movement and manipulation, and signal detection. The problem then reduces to achieving sufficient engineering precision for the instrument to operate close to its physical limit, corresponding to optimum performance and range of capabilities. A figure of merit has been introduced to account for any deviation by a given machine from the optimum performance capability.
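The rule of thumb can be turned into numbers with the Beer-Lambert-type attenuation used in the ESEM literature: the average number of collisions per electron is m = σ·(p/kT)·L, and the fraction exp(−m) of the beam remains in the focused spot. The cross-section value below is only an order-of-magnitude assumption for air at a few keV, not a measured constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def unscattered_fraction(p_pa: float, length_m: float, sigma_m2: float,
                         temp_k: float = 293.0) -> float:
    """Fraction of beam electrons reaching the specimen without a collision.

    m = sigma * (p / kT) * L is the mean number of collisions per electron;
    exp(-m) of the current stays in the original focused spot.
    """
    m = sigma_m2 * (p_pa / (K_B * temp_k)) * length_m
    return math.exp(-m)

SIGMA = 1.0e-21  # m^2; assumed order of magnitude for air at a few keV

# Only the product p*L matters here, so scan it directly (L fixed at 1 m).
for p_l in (0.1, 0.5, 1.0, 2.0):
    f = unscattered_fraction(p_l, 1.0, SIGMA)
    print(f"p*L = {p_l:3.1f} Pa.m -> focused-spot fraction {f:.2f}")
```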
The electron beam impinges on the specimen and penetrates to a certain depth depending on the accelerating voltage and the specimen nature. From the ensuing interaction, signals are generated in the same way as in an SEM. Thus, we get secondary and backscattered electrons, X-rays and cathodoluminescence (light). All of these signals are detected also in the ESEM but with certain differences in the detector design and principles used.
The conventional secondary electron detector of SEM (Everhart-Thornley detector) cannot be used in the presence of gas because of an electrical discharge (arcing) caused by the kilovolt bias associated with this detector. In lieu of this, the environmental gas itself has been used as a detector for imaging in this mode:
Gaseous detection device
In a simple form, the gaseous detection device (GDD) employs an electrode with a voltage up to several hundred volts to collect the secondary electrons in the ESEM. The principle of this SE detector is best described by considering two parallel plates at a distance d apart with a potential difference V generating a uniform electric field E = V/d, and is shown in the accompanying diagram of the GDD. Secondary electrons released from the specimen at the point of beam impingement are driven by the field force towards the anode electrode but the electrons also move radially due to thermal diffusion from collisions with the gas molecules. The variation of electron collection fraction R within anode radius r vs. r/d, for fixed values of anode bias V, at constant product of (pressure·distance) p·d = 1 Pa·m, is given by the accompanying characteristic curves of efficiency of the GDD. All of the secondary electrons are detected if the parameters of this device are properly designed. This clearly shows that practically 100% efficiency is possible within a small radius of collector electrode with only moderate bias. At these levels of bias, no catastrophic discharge takes place. Instead, a controlled proportional multiplication of electrons is generated as the electrons collide with gas molecules releasing new electrons on their way to the anode. This principle of avalanche amplification operates similarly to proportional counters used to detect high energy radiation. The signal thus picked up by the anode is further amplified and processed to modulate a display screen and form an image as in SEM. Notably, in this design and the associated gaseous electron amplification, the product p·d is an independent parameter, so that there is a wide range of values of pressure and electrode geometry which can be described by the same characteristics. The consequence of this analysis is that the secondary electrons are possible to detect in a gaseous environment even at high pressures, depending on the engineering efficacy of any given instrument.
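The controlled proportional multiplication mentioned above is the Townsend avalanche familiar from gas proportional counters: the electron population grows as exp(α·d), where the first Townsend coefficient α depends on pressure and field. A toy sketch follows; the A and B coefficients are illustrative placeholders, not measured values for water vapor in an ESEM:

```python
import math

def townsend_alpha(p_pa: float, e_v_per_m: float,
                   a: float = 9.0, b: float = 180.0) -> float:
    """First Townsend coefficient alpha = A*p*exp(-B*p/E), in 1/m.

    A (1/(Pa*m)) and B (V/(Pa*m)) are gas-dependent constants; the
    defaults here are placeholders for illustration only.
    """
    return a * p_pa * math.exp(-b * p_pa / e_v_per_m)

def avalanche_gain(bias_v: float, gap_m: float, p_pa: float) -> float:
    e_field = bias_v / gap_m            # uniform field between parallel plates
    return math.exp(townsend_alpha(p_pa, e_field) * gap_m)

# A few hundred volts over a ~2 mm gap at an ESEM-like pressure:
print(f"gain ~ {avalanche_gain(bias_v=300.0, gap_m=0.002, p_pa=500.0):.0f}")
```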
As a further characteristic of the GDD, a gaseous scintillation avalanche also accompanies the electron avalanche and, by detection of the light produced with a photomultiplier, corresponding SE images can be routinely made. The frequency response of this mode has allowed the use of true TV scanning rates. This mode of the detector has been employed by the latest generation of commercial instruments.
The novel GDD has become possible first in the ESEM and has produced a practically 100% SE collection efficiency not previously possible with the Everhart-Thornley SE detector where the free trajectories of electrons in vacuum cannot all be bent towards the detector. As is further explained below, backscattered electrons can also be detected by the signal-gas interactions, so that various parameters of this generalized gaseous detector must be controlled to separate the BSE component out of the SE image. Therefore, care has been taken to produce nearly pure SE images with these detectors, then called ESD (environmental secondary detector) and GSED (gaseous secondary electron detector).
Backscattered electrons (BSE) are those emitted back out from the specimen due to beam-specimen interactions where the electrons undergo elastic and inelastic scattering. They have energies from 50 eV up to the energy of the primary beam by conventional definition. For the detection and imaging with these electrons, scintillating and solid state materials have been used in the SEM. These materials have been adapted and used also in ESEM in addition to the use of the GDD for BSE detection and imaging.
BSE pass through the gaseous volume between the electrodes of the GDD and generate additional ionization and avalanche amplification. There is an inner volume where the secondary electrons dominate with small or negligible BSE contribution, whilst the outer gaseous volume is acted upon mainly by the BSE. It is possible to separate the corresponding detection volumes so that near-pure BSE images can be made with the GDD. The relationship of the relative strength of the two signals, SE and BSE, has been worked out by detailed equations of charge distribution in the ESEM. The analysis of plane electrodes is essential in understanding the principles and requirements involved and by no means indicates the best choice of electrode configuration, as discussed in the published theory of the GDD.
Despite the above developments, devoted BSE detectors in the ESEM have played an important role, since the BSE remain a most useful detection mode yielding information not possible to obtain with SE. The conventional BSE detection means have been adapted to operate in the gaseous conditions of the ESEM. The BSE, having a high energy, are self-propelled to the corresponding detector without significant obstruction by the gas molecules. Annular or quadrant solid-state detectors have already been employed for this purpose, but their geometry is not easily adaptable to the requirements of ESEM for optimum operation. As a result, not much use has been reported of these detectors on genuine ESEM instruments at high pressure. The "Robinson" BSE detector is tuned for operation up to around 100 Pa at the usual working distance of conventional SEM for the suppression of specimen charging, whilst electron collection at the short working distance and high-pressure conditions makes it inadequate for the ESEM. However, plastic scintillating materials, being easily adaptable, have been used for BSE and made to measure according to the strictest requirements of the system. Such work culminated in the use of a pair of wedge-shaped detectors saddling a conical PLA1 and abutting its rim, so that the dead detection space is reduced to a minimum, as shown in the accompanying figure of optimum BSE detectors. The photon conduction is also optimized by the geometry of the light pipes, whilst the pair of symmetrical detectors allows the separation of topography (signal subtraction) and atomic number contrast (signal addition) of the specimen surface to be displayed with the best ever signal-to-noise ratio. This scheme has further allowed the use of color by superimposing various signals in a meaningful way. These simple but special detectors became possible in the conditions of ESEM, since bare plastic is not charged by the BSE. However, a very fine wire mesh with appropriate spacing has been proposed as a GDD when gas is present, and to conduct negative charge away from the plastic detectors when the gas is pumped out, towards a universal ESEM. Furthermore, since the associated electronics involve a photomultiplier with a wide frequency response, true TV scanning rates are readily available. This is an essential attribute to maintain with an ESEM that enables the examination of processes in situ in real time. In comparison, no such imaging has been reported with the electron avalanche mode of the GDD yet.
The use of scintillating BSE detectors in ESEM is compatible with the GDD for simultaneous SE detection, in one way by replacing the top plane electrode with a fine tip needle electrode (detector), which can be easily accommodated with these scintillating BSE detectors. The needle detector and cylindrical geometry (wire) have also been extensively surveyed.
Cathodoluminescence is another mode of detection, involving the photons generated by the beam-specimen interaction. This mode has been demonstrated to operate also in ESEM by the use of the light pipes after they were cleared of the scintillating coating previously used for BSE detection. However, not much is known about its use outside the experimental prototype originally tested. Clearly, ESEM is more powerful and meaningful under this detection mode than SEM, since the natural surface of any specimen can be examined in the imaging process. Cathodoluminescence is a materials property, but with the various specimen treatments required and other limitations in SEM, the properties are obscured or altered or impossible to detect, and hence this mode of detection has not become popular in the past. The advent of ESEM with its unlimited potential may provoke more interest in this area too, in the future.
The characteristic elemental X-rays produced also in the ESEM can be detected by the same detectors used in the SEM. However, there is an additional complexity arising from the X-rays produced from the electron skirt. These X-rays come from a larger area than in SEM and the spatial resolution is significantly reduced, since the “background” X-ray signals cannot be simply “suppressed” out of the probe interaction volume. However, various schemes have been proposed to solve this problem. These methods involve spot masking, or the extrapolation technique by varying the pressure and calibrating out the effects of skirt, whereby considerable improvement has been achieved.
In vacuum SEM, the specimen absorbed current mode is used as an alternative mode for imaging of conductive specimens. Specimen current results from the difference of the electron beam current minus the sum of the SE and BSE currents. However, in the presence of gas and the ensuing ionization, it would be problematic to separate this mode of detection from the generally operating gaseous detection device. Hence this mode, by its definition, may be considered unsustainable in the ESEM. Shah and Beckett assumed the operation of the specimen absorbed current mode if the conductivity of their specimen was assured during the examination of wet botanical samples; indeed, by 1987 Shah still considered the ionisation products in gas by SE and BSE a formidable obstacle, since he believed that the ionisation did not carry any information about the specimen. However, he later embraced the correct role of gaseous ionisation during image formation.
The electron beam impinging on insulating specimens accumulates negative charge, which creates an electrical potential tending to deflect the electron beam from the scanned point in conventional SEM. This appears as charging artifacts on the image, which are eliminated in the SEM by depositing a conductive layer on the specimen surface prior to examination. In lieu of such a coating, the gas in the ESEM, being electrically conductive, prevents negative charge accumulation. The good conductivity of the gas is due to the ionization it undergoes by the incident electron beam and the ionizing SE and BSE signals. This principle constitutes yet another fundamental deviation from conventional vacuum electron microscopy, with enormous advantages.
As a consequence of the way ESEM works, the resolution is preserved relative to the SEM. That is because the resolving power of the instrument is determined by the electron beam diameter which is unaffected by the gas over the useful travel distance before it is completely lost. This has been demonstrated on the commercial ESEMs that provide the finest beam spots by imaging test specimens, i.e. customarily gold particles on a carbon substrate, in both vacuum and gas. However, the contrast decreases accordingly as the electron probe loses current with travel distance and increase of pressure. The loss of current intensity, if necessary, can be compensated by increasing the incident beam current which is accompanied by an increased spot size. Therefore, the practical resolution depends on the original specimen contrast of a given feature, on the design of the instrument that should provide minimal beam and signal losses and on the operator selecting the correct parameters for each application. The aspects of contrast and resolution have been conclusively determined in the referenced work on the foundations of ESEM. Further, in relation to this, we have to consider the radiation effects on the specimen.
The majority of available instruments vent their specimen chamber to the ambient pressure (100 kPa) with every specimen transfer. A large volume of gas has to be pumped out and replaced with the gas of interest, usually water vapor supplied from a water reservoir connected to the chamber via some pressure regulating (e.g. needle) valve. In many applications this presents no problem, but with those ones requiring uninterrupted 100% relative humidity, it has been found that the removal of ambient gas is accompanied by lowering the relative humidity below the 100% level during specimen transfer. This clearly defeats the very purpose of ESEM for this class of applications. However, such a problem does not arise with the original prototype ESEM using an intermediate specimen transfer chamber, so that the main chamber is always maintained at 100% relative humidity without interruption during a study. The specimen transfer chamber (tr-ch) shown in the diagram of ESEM gas pressure stages contains a small water reservoir so that the initial ambient air can be quickly pumped out and practically instantaneously replaced with water vapor without going through a limited conductance tube and valve. The main specimen chamber can be maintained at 100% relative humidity, if the only leak of vapor is through the small PLA1, but not during violent pumping with every specimen change. Once the wet specimen is in equilibrium with 100% relative humidity in the transfer chamber, within seconds, a gate valve opens and the specimen is transferred in the main specimen chamber maintained at the same pressure. An alternative approach involving controlled pumping of the main chamber may not solve the problem entirely either because the 100% relative humidity cannot be approached monotonically without any drying, or the process is very slow; inclusion of a water reservoir inside the main chamber means that one cannot lower the relative humidity until after all of the water is pumped out (i.e. a defective control of the relative humidity).
During the interaction of an electron beam with a specimen, changes to the specimen to varying degrees are almost inevitable. These changes, or radiation effects, may or may not become visible in both SEM and ESEM. However, such effects are particularly important in the ESEM, which claims the ability to view specimens in their natural state. Elimination of the vacuum is a major success towards this aim, so any detrimental effects from the electron beam itself require special attention. The best way around this problem is to reduce these effects to an absolute minimum with an optimum ESEM design. Beyond this, the user should be aware of their possible existence during the evaluation of results. Usually, these effects appear on the images in various forms due to different electron beam-specimen interactions and processes.
The introduction of gas in an electron microscope is tantamount to a new dimension. Thus, interactions between the electron beam and gas, together with interactions of the gas (and its byproducts) with the specimen, usher in a new area of research with as yet unknown consequences. Some of these may at first appear disadvantageous but may later be overcome; others may yield unexpected results. The liquid phase in the specimen with mobile radicals may yield a host of phenomena, again advantageous or disadvantageous.
The presence of gas around a specimen creates new possibilities unique to ESEM: (a) Hydrated specimens can be examined since any pressure greater than 609 Pa allows water to be maintained in its liquid phase for temperatures above 0 °C, in contrast to the SEM where specimens are desiccated by the vacuum condition. (b) Electrically non-conductive specimens do not require the preparation techniques used in SEM to render the surface conductive, such as the deposition of a thin gold or carbon coating, or other treatments, techniques which also require vacuum in the process. Insulating specimens charge up by the electron beam making imaging problematic or even impossible. (c) The gas itself is used as a detection medium producing novel imaging possibilities, as opposed to vacuum SEM detectors. (d) Plain plastic scintillating BSE detectors can operate uncoated without charging. Hence, these detectors produce the highest possible signal-to-noise-ratio at the lowest possible accelerating voltage, because the BSE do not dissipate any energy in an aluminium coating used for the vacuum SEM.
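The 609 Pa threshold in point (a) is simply the saturation vapor pressure of water at 0 °C. A Magnus-type approximation (a standard empirical fit, used here as an assumption rather than the exact formulation from the ESEM literature) reproduces that figure and shows how the required chamber pressure climbs with specimen temperature:

```python
import math

def saturation_vapor_pressure_pa(temp_c: float) -> float:
    """Magnus-type fit for water's saturation vapor pressure.

    Accuracy is within a few tenths of a percent near 0 degC,
    which is all this sketch needs.
    """
    return 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))

for t in (0, 5, 10, 20):
    print(f"{t:2d} degC -> {saturation_vapor_pressure_pa(t):5.0f} Pa")
# 0 degC gives ~611 Pa, matching the ~609 Pa figure quoted above
```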
As a result, specimens can be examined faster and more easily, avoiding complex and time consuming preparation methods, without modifying the natural surface or creating artifacts by the preceding preparation work, or the vacuum of the SEM. Gas/liquid/solid interactions can be studied dynamically in situ and in real time, or recorded for post processing. Temperature variations from subzero to above 1000 °C and various ancillary devices for specimen micro-manipulation have become a new reality. Biological specimens can be maintained fresh and live. Therefore, ESEM constitutes a radical breakthrough from conventional electron microscopy, where the vacuum condition precluded the advantages of electron beam imaging becoming universal.
The main disadvantage arises from the limitation of the distance in the specimen chamber over which the electron beam remains usable in the gaseous environment. The useful distance of the specimen from the PLA1 is a function of accelerating voltage, beam current, nature and pressure of gas, and of the aperture diameter used. This distance varies from around 10 mm to a fraction of a millimeter as the gas pressure may vary from low vacuum to one atmosphere. For optimum operation, both the manufacturer and the user must conform, in the design and operation, to satisfy this fundamental requirement. Furthermore, as the pressure can be brought to a very low level, an ESEM will revert to typical SEM operation without the above disadvantages. Therefore, one may trade-off the ESEM characteristics with those of SEM by operating in a vacuum. A reconciliation of all these disadvantages and advantages can be attained by a properly designed and operated universal ESEM.
Concomitant with the limitation of useful specimen distance is the minimum magnification possible, since at very high pressure the distance becomes so small that the field of view is limited by the PLA1 size. In the very low magnification range of SEM, overlapping the upper magnification of a light microscope, the maximum field of view is limited to a varying degree by the ESEM mode. The degree of this limitation strongly depends on instrument design.
As X-rays are also generated by the surrounding gas and also come from a larger specimen area than in SEM, special algorithms are required to deduct the effects of gas on the information extracted during analysis.
The presence of gas may yield unwanted effects in certain applications, but the extent of these will only become clear as further research and development is undertaken to minimize and control radiation effects.
No commercial instrument is as yet (by 2009) available in conformity with all the principles of an optimal design, so that any further limitations listed are characteristic of the existing instruments and not of the ESEM technique, in general.
The ESEM can also be used in transmission mode (TESEM) by appropriate detection means of the transmitted bright and dark field signals through a thin specimen section. This is done by employing solid state detectors below the specimen, or the use of the gaseous detection device (GDD). The generally low accelerating voltages used in ESEM enhance the contrast of unstained specimens while they allow nanometer resolution imaging as obtained in transmission mode especially with field emission type of electron guns.
Some representative applications of ESEM are in the following areas:
An early application involved the examination of fresh and living plant material including a study of Leptospermum flavescens. The advantages of ESEM in studies of microorganisms and a comparison of preparation techniques have been demonstrated.
In conservation science, it is often necessary to preserve the specimens intact or in their natural state.
ESEM studies have been performed on fibers in the wool industry with and without particular chemical and mechanical treatments. In the cement industry, it is important to examine various processes in situ in the wet and dry state.
Studies in situ can be performed with the aid of various ancillary devices. These have involved hot stages to observe processes at elevated temperatures, microinjectors of liquids and specimen extension or deformation devices.
Biofilms can be studied without the artifacts introduced during SEM preparation; dentin and detergents have likewise been investigated since the early years of ESEM.
The ESEM has appeared under different manufacturing brand names. The term ESEM is a generic name first publicly introduced in 1980 and afterwards unceasingly used in all publications by Danilatos and almost all users of all ESEM type instruments. The ELECTROSCAN ESEM trademark was obtained intermittently until 1999, when it was allowed to lapse. The word “environmental” was originally introduced in continuation to the prior (historical) use of “environmental” cells in transmission microscopy, although the word “atmospheric” has also been used to refer to an ESEM at one atmosphere pressure (ASEM) but not with any commercial instruments. Other competing manufacturers have used the terms "Natural SEM" (Hitachi), “Wet-SEM” (ISI), “Bio-SEM” (short-lived, AMRAY), “VP-SEM” (variable-pressure SEM; LEO/Zeiss-SMT), “LVSEM” (low-vacuum SEM, often also denoting low-voltage SEM; JEOL), all of which seem to be transient in time according to prevailing manufacturing schedules. Until recently, all these names referred to instruments operating up to about 100 Pa and with BSE detectors only. Lately, the Zeiss-SMT VP-SEM has been extended to higher pressure together with a gaseous ionization or gaseous scintillation as the SE mechanism for image formation. Therefore, it is improper to identify the term ESEM with one only brand of commercial instrument in juxtaposition to other competing commercial (or laboratory) brands with different names, as some confusion may arise from past use of trademarks.
Similarly, the term GDD is generic covering the entire novel gaseous detection principle in ESEM. The terms ESD and GSED, in particular, have been used in conjunction with a commercial ESEM to denote the secondary electron mode of this detector.
The following are examples of images taken using an environmental scanning electron microscope (ESEM).
"date": "2020-01-22T05:59:16",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9381404519081116,
"score": 2.78125,
"token_count": 6221,
"url": "https://alchetron.com/Environmental-scanning-electron-microscope"
} |
Potter Wasp is the common name for a group of caterpillar-hunting wasps, known for the pot-shaped mud nests built by some species. Potter wasps are also known as mason wasps. They are found throughout the northern hemisphere, mainly in temperate regions. There are about 270 species in the United States and Canada and about 3,000 species worldwide.
Potter Wasp Characteristics
Potter wasps are medium to large sized wasps, measuring between 9 and 20 millimetres in length. They are black with white, yellow, orange, or red markings.
Potter Wasp Life Cycle
Potter wasp adults feed on flower nectar and collect small caterpillars to feed their young. The caterpillars are paralyzed with the wasp's sting and piled into the brood cell, the compartment in which the wasp larva develops. The female wasp then lays an egg on the stored caterpillars. A potter wasp larva consumes from 1 to 12 caterpillars as it grows. Potter wasps are important in the natural control of caterpillars.
"date": "2020-01-22T06:46:30",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9633665680885315,
"score": 3.921875,
"token_count": 229,
"url": "https://animalcorner.co.uk/animals/potter-wasps/"
} |
Rainforests are home to more than half of the Earth’s plants and animals. With such a diverse array of animals, each layer of the rainforest teems with thousands of unique species. The shrub layer, where saplings and shrubs grow close to the forest floor, is home to large, predatory cats, millions of insects and a wealth of reptiles.
Many of the rainforest’s top predators patrol the rain forest floor and shrub layer. One of the most fierce, the jaguar (Panthera onca), can weigh as much as 300 pounds and is a master of camouflage, stealth and speed. Quiet on the ground, these big cats stalk and ambush their prey (armadillos, peccaries, capybara, tapir, deer, squirrels, birds and even snails and turtles), often climbing trees to do so. Many primates also live in and around the shrub layer. These include tailless species like gorillas, bonobos and chimpanzees.
Grasshoppers, spiders, scorpions, caterpillars, beetles and even wasps make their home in the shrub layer. One such insect, the titan beetle (Titanus giganteus), is the largest beetle in the world. The largest recorded specimen measured almost 7 inches long. These beetles feed on larvae and decaying wood in the rainforest shrub layer and are capable of both flying and loud hissing.
Snakes and lizards all but rule the rainforest floor and shrub layers. From the predatory boa constrictor to color-changing chameleon, this group makes up a large percentage of shrub-layer animals; only insects have larger numbers. Green anacondas (Eunectes murinus), though semi-aquatic, ambush prey such as birds, small mammals and amphibians in the ground and shrub layers. Geckos and chameleons hide in logs, decaying vegetation and small shrubs, changing colors to camouflage, defend territory and communicate with mates.
Most of the birds on the rainforest floor and shrub layer are insect eaters. Multicolored peacocks (Pavo cristatus) are social birds who prefer to live in large flocks. They share the rainforest layer with junglefowls, bowerbirds and cassowaries. Cassowaries are the largest rainforest-dwelling ground bird in the world and can grow to 40 inches tall. They’re quick, agile birds; cassowaries can run at speeds of up to 30 miles per hour.
"date": "2020-01-22T06:37:37",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9240831136703491,
"score": 3.90625,
"token_count": 545,
"url": "https://animals.mom.me/rainforest-shrub-layer-animals-3088.html"
} |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2000 September 13
Explanation: Only last month the stage was set for Comet LINEAR (C/1999S4 LINEAR) to become the first "naked-eye" comet of Y2K. It didn't fill that role, of course, but it did turn in a very dramatic performance. Closely followed by astronomer Mark Kidger and colleagues with the Isaac Newton Group telescopes (La Palma, Canary Islands), comet LINEAR's nucleus apparently fragmented extensively on the night of July 25th. A faint fluorescent cloud fading against a background of stars is all that is still visible in this August 21st telescopic view from Loomberah, NSW Australia. Why did comet LINEAR break up? Comets are conglomerates of ice and rock. A very plausible scenario is that a substantial fraction of LINEAR's icy component was evaporated, leaving too little to hold the rocky material together. In any event, no bright telltale condensations remain. So, following its first tour through the inner Solar System, an encore from comet LINEAR seems unlikely!
Authors & editors: Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/GSFC & Michigan Tech. U.
"date": "2020-01-22T04:42:35",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9351440072059631,
"score": 3.375,
"token_count": 295,
"url": "https://apod.nasa.gov/apod/ap000913.html"
} |
Refa’ah Rafie’ Al Tahtawi recognized the importance of translation in achieving the Arab Renaissance and openness to other cultures. He therefore called for the establishment of a translation college – now Al-Alsun – to graduate translators for different languages. At the beginning of the third millennium, Arab countries, including Egypt and the UAE, realized the importance of translation in supporting cultural advancement. Accordingly, Dr. Jaber Asfour founded the National Center for Translation, while the UAE established the Kalima Center, which has taken part in translating a large number of books.
Kalima Center was established in 2007 and is composed of a jury of 10 specialists and experts in the field of translation. The main task of the jury is to present proposals to translate valuable books and recent publications and to review the lists of books proposed for translation by international publishing houses and translators.
The main objective of the project is to address a thousand-year-old chronic problem, namely, the lack of translation in the Arab world, represented in the scarcity of excellent books translated from foreign languages into Arabic. This problem has prevented Arab readers from enjoying the works of the greatest authors and thinkers.
On the UAE National Day, albawabhnews reviews the most important translated books issued by the Kalima Center.
African Legends of Evolution
The book, written by Stephan Pelger, translated by Mousa Hamoul and published in 2005, consists of seventy-one chapters. Each chapter contains a number of legends that Africans still believe about the origins of their kingdoms, their peoples, their tribes, their livestock, their plantations, or how they learned different crafts. These legends were originally oral tales, until Westerners (merchants, colonists, and later ethnographers and anthropologists) began to communicate with Africans and wrote the tales down.
Euclid’s Dream… Journey to Hyperbolic Geometry
The writer, Maurice Margenstern, invites readers on a journey through the history of mathematics and the biographies of some of its scientists, especially at the time of the discovery of hyperbolic geometry, which helped us understand the internal system of the World Wide Web and has inspired mathematicians and artists.
Islam in Europe: Patterns of Integration
Europe is currently witnessing a debate about Islam, especially as the number of Muslims on the continent has reached twenty million. The book makes a comparison, in dealing with the Islamic phenomenon, between the laws and legislation of France, Germany, England, the Netherlands, Belgium and the Scandinavian countries, as well as Southern European countries such as Italy and Spain.
Language and Myth
The book is written by Ernst Cassirer and translated by Saeed Al-Ghanemi. In the introduction, the writer wonders, “What is the relationship between language and myth? If we recognize that there is a difference in how people understand the world, does this difference come from language or myth? Accordingly, which came first, language or myth, or do both of them arise together from the same source?”
"date": "2020-01-22T04:44:06",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9573578834533691,
"score": 2.75,
"token_count": 609,
"url": "https://aralingua.com/kalima-uae-translation-project-to-address-chronic-issue/"
} |
Today I read an article on Back to School Do’s and Don’ts written by Jerry Bubrick, PhD. He is the Senior Director of the Anxiety and Mood Disorders Center and Director of the Intensive Pediatric Obsessive-Compulsive Spectrum Disorders Program.
Here are some of his suggestions from the article.
Get back into a routine. Change your child’s bedtime from 11 pm to 9 pm. Start waking up your child at school hours. Once they are awake, have your child complete the normal school routine: shower, dress, and eat breakfast. At night, the author suggests limiting screen time: all screens should be off one hour before bedtime. In addition, Dr. Bubrick states parents should shop for school supplies earlier rather than later.
The most important part of the day is to make sure your child fuels their body. Dr. Bubrick says we should be more aware of meals. Why is this important? Dr. Bubrick gave an excellent example. If your child ate at 1 pm and doesn’t get home until 5 pm, then he or she may be ravenous and unable to focus on homework. In order to focus, your child will need a healthy snack; after about an hour your child will be better able to focus on homework.
When asking about your child’s day, instead of asking “Did you make any friends?” (which may embarrass your child), ask “How was your day?” or
“Tell me three things you liked about today.” I like this last prompt: it allows for more conversation to happen. With the question “How was your day?”, if your child is like mine, you will get the answer “Fine,” and then the child will walk away. When you ask for three good things, there are always leading comments to make after you listen first to what your child has to say. You could follow with, “What made this good for you?”
Dr. Jerry Bubrick suggests doing a trial run in order to get off to the right start.
He feels, especially for the child who is very anxious, take a drive by the school, walk into the building and allow the child to become acquainted with the smells and sounds. In addition, map out the classes and know where the locker is located.
For parents, he suggests not being afraid of setbacks.
He feels parents and the child need to “temper your expectations.” The expectation that the first days will be stellar is not realistic. It is important to let kids ease into it and have ups and downs. Remember, as a parent: for every two steps forward there is one step back.
As parents, it is important to help kids manage their commitments.
Usually the first week of work is slow, so it is easy to take on new commitments. However, Dr. Bubrick suggests waiting until mid-October before signing up for new activities. This way you have enough time to adjust the schedules.
Furthermore, it is important for the kids to balance their lives so that they are not coming home at 9pm and then starting homework and then off to bed at 11pm.
Dr. Bubrick believes this leads to depression. He feels children overcommit themselves with activities. It is our job as their parents to show them how to balance.
One of the most important things to remember is that you are your child’s best advocate. If you see a problem but the school hasn’t contacted you, you contact them.
I tried to properly link the article. I had a difficult time. If you copy and paste this link, it should take you to the original article. http://www.childmind.org/en/posts/articles/2011-8-24-back-school-best-results | <urn:uuid:39f8acb1-8b54-46df-889e-d30e21df687d> | {
"date": "2020-01-22T06:58:12",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9686376452445984,
"score": 2.578125,
"token_count": 802,
"url": "https://aspieteenz.com/category/teenage-stress/"
} |
Perineal tear is an unintended laceration of the skin and other soft tissue structures separating the vagina from the anus. Tears vary in severity.
What is a perineal tear?
It's where, as a result of the delivery of a baby (usually one on the larger side), an accidental tear is made in the perineum.
What is the perineum?
It's the wall between the vagina and anus, and everything that is in it.
It mainly occurs in women as a result of vaginal childbirth, which strains the perineum.
In humans, the head of the fetus is so large in comparison to the size of the birth canal that term delivery is rarely possible without some degree of trauma. As the head passes through the pelvis, the soft tissues are stretched and compressed.
What causes a tear in the wall between the vagina and anus?
Childbirth, because the stretching causes straining of this wall. If you think about the big size of the head, giving birth without some degree of trauma is really quite difficult.
Fetal head is oriented OP (occiput posterior, i.e. face forward)
Primip (mother has not given birth before)
Fetus is large
What makes it more likely that you tear the wall between the vagina and anus?
If bub's face is facing forward. Mom who hasn't given birth before. Or a big bub.
1st degree tear, where the laceration is limited to the fourchette and superficial perineal skin or vaginal mucosa
2nd degree tear, where the laceration extends beyond the fourchette, perineal skin and vaginal mucosa, to the perineal muscles and fascia, but not the anal sphincter
3rd degree tear, where the fourchette, perineal skin, vaginal mucosa, muscles, and anal sphincter are torn. These tears can be subdivided into:
3a: Partial tear of the external anal sphincter involving <50% thickness
3b: >50% tear of the external anal sphincter
3c: Internal sphincter is torn
4th degree tear, where the fourchette, perineal skin, vaginal mucosa, muscles, anal sphincter, and rectal mucosa are torn
Whoa... That was a lot of words. So in simple terms, what's the difference between a 1st, 2nd, 3rd, and 4th degree tear?
It's easiest to define it by what it doesn't involve. 1st degree doesn't involve the perineal muscles. 2nd degree doesn't involve the anal muscles. 3rd degree doesn't involve the anal mucosa.
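The grading condenses neatly into a small lookup table. The sketch below is only a paraphrase of the levels listed above, written as hypothetical code for illustration, not a clinical reference:

```python
# Structures involved at each grade of perineal tear (paraphrased from above).
PERINEAL_TEAR_GRADES = {
    "1st": "fourchette and superficial perineal skin or vaginal mucosa only",
    "2nd": "extends to perineal muscles and fascia; anal sphincter spared",
    "3a": "external anal sphincter torn, <50% of its thickness",
    "3b": "external anal sphincter torn, >50% of its thickness",
    "3c": "internal anal sphincter torn",
    "4th": "anal sphincter and rectal mucosa both torn",
}

def describe(grade: str) -> str:
    """Return the structures involved for a given tear grade."""
    return PERINEAL_TEAR_GRADES.get(grade, "unknown grade")

print(describe("3b"))  # external anal sphincter torn, >50% of its thickness
```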
Superficial tears require no Tx
Chronic perineal pain
Dyspareunia (painful sex)
What bad things can happen as a result of a tear in the wall between the vagina and anus?
There can be chronic pain where the tear is. Sex can be painful. And depending on the degree of the tear, there can be a loss of control over poop.
1st and 2nd degree tears rarely cause long term problems
In women who've experienced a 3rd or 4th degree tear, 70% are asymptomatic after 12 months
Severe tears can cause significant bleeding, long-term pain, or dysfunction
The majority of tears are superficial
1st and 2nd degree perineal tears are the most common complicating condition for vaginal deliveries
Episiotomy (intentional laceration, to facilitate delivery) | <urn:uuid:7f7b0d09-3448-4999-96cf-2ff9ad271043> | {
"date": "2020-01-22T06:20:02",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9035928249359131,
"score": 2.53125,
"token_count": 749,
"url": "https://autoprac.com/perineal-tear"
} |
Nestled in the grassy Rose Kennedy Greenway downtown, The Meeting House is a site-specific structure of a precarious, partly sunken building that juts 14 feet into the air. Commissioned by the Rose Kennedy Greenway Conservancy, the piece is a Quaker-style house made up of 20 different parts using traditional woodworking techniques.
Artist Mark Reigelman II constructed the structure in pieces in Brooklyn and assembled it on-site. The massive yellow house is a topsy-turvy reconfiguration of colonial architecture from the surrounding Boston area, imitating the traditional Quaker-style meeting houses that served historically as community centers and gathering places. The piece was inspired specifically by the Pembroke Friends Meeting House, built in 1706, the oldest Quaker meeting house in Massachusetts. It also seeks to remind visitors of the “Big Dig” project that took place from 1991 to 2006, during which the city displaced thousands of citizens from their homes in order to construct an elevated highway, a project which includes the 1.5-mile Greenway on which The Meeting House sits. Whereas such a structure looks out of place surrounded by looming skyscrapers, a couple of decades ago it would have been surrounded by the context of other residential structures.
"date": "2020-01-22T04:43:07",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9272485375404358,
"score": 2.515625,
"token_count": 453,
"url": "https://blog.adafruit.com/2017/09/19/an-off-kilter-homage/"
} |
Teenage acne is caused by a combination of factors: oily skin, genetics, stress, bacteria, and hormones.
A proper skin care regime, along with over-the-counter and prescription medications, can clear up acne affected skin and prevent future breakouts.
Acne treatments may take 6-8 weeks to be effective.
Has your child recently started to suffer anxiety because of adolescent acne blemishes? You’re probably not the only one who has noticed. For teenagers, there’s perhaps nothing more socially awkward — or a more major blow to the self-esteem — than an enormous pimple in the middle of their forehead.
What causes teen acne?
When a child enters adolescence, rising hormone levels stimulate an increase in oil production of the sebaceous glands on the head, neck, chest and back.
About 8 in 10 tweens and teens have acne; several factors influence its development:
- Oily skin
- Genetics
- Stress
- Bacteria
- Hormones
What is an appropriate skin care routine for a typical teenager?
My advice is to wash the face once or twice daily with a lathering soap to remove bacteria, dead skin cells, and excess oil.
Keep hair clean by shampooing daily. Greasy hair contacting the face or neck can increase acne.
And don’t excessively touch the face. Constant poking and prodding may occlude (plug up) follicles more easily, leading to more pimples.
What over-the-counter products do you recommend?
First, discuss with your teen how important it is to stick with a skin care treatment for at least 6-8 weeks to see if it is effective. Often, kids don’t use acne products long enough to see if they are actually working. Acne will not clear up overnight; it takes time.
For daily face washing, a simple soap like Dove for Sensitive Skin, or Cetaphil Foaming Face Wash, are good choices if the skin dries out easily.
For oily skin, or if the acne is considerable, try an over-the-counter (OTC) acne cleanser containing salicylic acid or benzoyl peroxide. Caution: benzoyl peroxide may bleach dark towels and clothing, and about 4% of users become allergic to it. If a rash develops, discontinue use and contact your doctor.
If a teenager has more than just a few pimples, try Differin Gel (adapalene). This is an over-the-counter retinoid, similar to the prescription product Retin-A. After washing and drying the face, apply a pea-sized amount, once daily.
To help or prevent dryness, use a light, non-comedogenic (won’t plug pores) moisturizer like Cetaphil or Cerave Lotion, once daily.
If acne has not improved after using Differin Gel for 6 weeks, add to the routine a topical gel or lotion that contains 5-10% benzoyl peroxide or 2% salicylic acid.
After acne clears, continue using products to prevent new breakouts.
What help can a dermatologist offer?
If a teenager tries the above recommended OTC products and is not improving after a trial of at least 2 months; usually has a dozen or more active pimples at the same time; or is developing deep, hard, cystic or nodular acne lesions (which can lead to scarring), it may be time to visit a dermatologist or family doctor.
Prescription treatments may include topical medications and oral antibiotics, which treat related bacteria, as well as offering anti-inflammatory benefits.
With proper and early treatment, it is usually possible to get acne under control and prevent scarring.
A dermatologist can also help mediate some of the acne-related tension that can occur between parents and teens and remind the teen that maintaining good habits for clear skin, and proper use of acne-fighting products, is their responsibility.
What do you advise teens know about picking or popping their pimples?
If you do decide to pop a pimple, only pop one that has a white head.
Only attempt to pop it once. Picking at or popping a pimple more than once will create a deeper scab and increase the possibility of scarring.
First, wash the area (and your hands) with soap and water. Then, use a pin or needle — cleaned with alcohol — to gently prick the surface of the pimple. Remove contents by pushing two cotton swabs together, rather than using fingernails.
Never attempt to squeeze a deep pimple, as you might cause additional inflammation and longer healing time.
What are some common misconceptions about acne and its treatment?
Diet. There are no scientific studies that prove a relationship between diet and acne. However, some research has suggested that a high glycemic diet, or consuming a lot of highly processed foods, may have some correlation with acne flares. If a teen drinks a lot of milk, organic milk is recommended since it does not have any added, artificial hormones. In general, for healthy skin, eat a sensible, well-balanced diet.
Birth control pills. There is some misunderstanding about how helpful birth control pills may be for teenage girls with acne. The higher estrogen pills may help for mild cases, but these also have additional side effects. If a patient needs to go on birth control for medical purposes other than acne control, ask about those which may be more favorable to clearing acne.
Accutane. Patients with serious acne may benefit from treatment with this medication. The generic form is called isotretinoin; the brand name is no longer made. One controversy about this drug is that it has been claimed to cause teen depression. However, the association has not been scientifically proven. Moreover, effective acne treatment improves mood and self-esteem. Ask your child’s dermatologist if this medication is an option, and be sure to tell the dermatologist and other providers about any new depressive symptoms in your child if he or she is taking isotretinoin. The American Academy of Dermatology recommends this medication for treatment of severe acne, and of moderate acne that resists treatment or is producing scars or psychosocial distress.
Subscribe to To Your Health and browse more of our best skin care articles.
PacMed dermatologist Barbara Fox sees patients in the greater Seattle area.
This information is not intended as a substitute for professional medical care. Always follow your health care professional's instructions. | <urn:uuid:c3c1972a-e449-4933-889c-82a2daff8ca0> | {
"date": "2020-01-22T06:26:32",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9226804971694946,
"score": 2.734375,
"token_count": 1340,
"url": "https://blog.providence.org/washington/how-to-help-clear-up-your-teen-s-acne-outbreak"
} |
The year was 1918. The United States, under the leadership of President Woodrow Wilson, had struggled to remain neutral in a conflict that had engulfed the European powers and their colonial empires in war. For three years, Wilson successfully navigated his nation on the path of peace, but by 1917 it was painfully clear that the United States could not condone the belligerency of Germany. The sinking of passenger liners such as the Lusitania and provocations like the infamous Zimmerman Note had infuriated American officials. On April 6, 1917, Congress declared war against Imperial Germany.
World War I witnessed shocking innovations in the realm of warfare. German U-Boats patrolled beneath the waves of the Atlantic for unsuspecting targets. The Allies and the Central Powers alike shelled their opponents from miles away with debilitating chemicals. Yet perhaps one of the most influential shifts in modern warfare theories arrived on the wings of the airplane. All nations, including the United States, understood that future military victories would require control of the skies.
Thousands of miles away from the nearest battlefield, in the small town of Cooper, Texas, Greaver Lewis Miller was preparing to fulfill his civic duty. At twenty years old, Miller enlisted with the Army’s Signal Officer’s Reserve Corps with the hopes of becoming a certified pilot. With no prior aviation experience, Miller graduated from the U.S. School of Military Aeronautics at the University of Texas at Austin on July 13, 1918. Armed with the latest aviation theories, Miller put his knowledge to the test at Rich Field.
An airfield near Waco, Texas, Rich Field was devoted to the training of American pilots in the 1910s and 1920s. It was named after Perry Rich, a soldier who had died in a flying exercise in 1913. Abandoned shortly after the war, the airfield was used as a civilian airport for a number of years. (And for our Waco readers—yes, Richfield High School was constructed on part of its site.)
In its prime, Rich Field was home to some of the best pilots the U.S. military had to offer. Flying an airplane was an art, and Miller excelled at it. On December 13, 1918, he officially became a “Reserve Military Aviator” by passing the required examinations. While Miller’s papers don’t tell us much about the particulars of his WWI service, we know he continued to impress his superiors—he rose to the rank of Second Lieutenant on February 15, 1919.
Like many young boys, Miller had a dream to one day soar through the skies. Thanks to his determination and the opportunities that pilots had during the First World War, Miller’s dream became a reality. He had earned his wings.
The Greaver Lewis Miller papers, a small collection of Miller’s personal records, are available for research at The Texas Collection, thanks to the generosity of his son, Jerry. As we prepare to celebrate Independence Day, The Texas Collection thanks Greaver Lewis Miller and all those who have served our country.
By Thomas DeShong, Library Assistant | <urn:uuid:c30ae6e1-f138-4a8d-9834-c624e5a243d3> | {
"date": "2020-01-22T06:21:41",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9824085235595703,
"score": 3.203125,
"token_count": 638,
"url": "https://blogs.baylor.edu/texascollection/category/waco/historic-waco/page/7/"
} |
This term, Primary 5a have been learning about the Scottish Wars of Independence as part of our Social Studies. We have compared the armour and equipment used by the Scottish soldiers and the English soldiers.
As part of our Expressive Arts curriculum we have designed and created our own shields using a range of materials. We will use our finished products as props in an improvised dramatization of the Battle of Bannockburn.
This term Primary 5a have been learning about the life cycle of plants. We have investigated how a plant reproduces through pollination. In class we identified and labelled the reproductive parts of a plant; the anther, style, stigma, filament and ovary, discussing the role of each.
Since we have had beautiful weather this week, we decided to visit the Chestnut Garden and combine the subject areas of Science and Art to create detailed line drawings of different plants we could see. We paid special attention to the different parts involved in pollination.
Primary 5a have been learning Mandarin as part of our Modern Language studies. We have been practising some basic greetings and conversation as well as learning more about the Chinese culture.
In order to celebrate Chinese New Year, Primary 5a created some Chinese Dragon symbols using craft materials.
Our Mandarin teacher was very impressed with our knowledge and understanding of the Chinese New Year celebrations! | <urn:uuid:02abd8b5-217a-4fa6-b388-933b6c3b9968> | {
"date": "2020-01-22T05:41:46",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9579830765724182,
"score": 3.421875,
"token_count": 274,
"url": "https://blogs.glowscotland.org.uk/er/snholclassof2013/category/expressive-arts/"
} |
Lesson 6 Plan
Lesson Topic: Access to education - changes and continuities since the Civil Rights Movement; tactics of the grassroots youth movement.
Unit’s Essential Questions: How has access to education changed since the Civil Rights Movement? To what extent do schools remain segregated today? What was the grassroots movement and what were its tactics?
2. People, places, environments.
3. Individual development and identity.
4. Power, authority, and governance.
5. Civic ideas and practices.
Objectives: Your objectives should be measurable, contain an observable verb, and be written in student-friendly language.
Students will know or be able to:
1. Discuss and diagram how access to education changed since the Civil Rights Movement.
2. Interview each other to decide the extent to which schools remain segregated today.
3. Define the grassroots movement and examine its tactics.
Mechanism of assessment for measuring each objective:
1. Students will create diagrams of change in education since the Civil Rights Movement.
2. Students will interview two people and write down their responses.
3. Students will write down a definition of the grassroots movement and list its tactics.
Instructional methods used: Look at your “Teaching Methods to Try” list and choose methods that will best help you reach your objectives.
Methods evaluation: After teaching, reflect on how well each method worked and what you would do to refine or build on each method.
Folders, paper, pencils, flip chart, markers, handouts of exit tickets and homework
Evidence of differentiation: Based on your assessment of student learning, what are you going to do to accommodate the range of needs in your classroom?
What will you be doing?
What will the students be doing?
10:30-10:35 Notebooks and binders distributed. Objectives are read.
10:35-10:40 Attendance - Shrek or Wall-E?
10:40-11:00 School segregation conversation continued.
“Do you feel your school is segregated?”
You may not interview a person who interviewed you. Based on the responses you received and your personal experience (the adjective/noun/verb), draw a diagram/picture/table or symbols to show segregation before and after the Civil Rights Movement.
11:20-11:35 Create your contribution to fight for equality. It can be a poem, song, dance, picture, or plain text. Prepare to share.
1. Education is power
2. Quality education is a human right
3. It is an individual’s choice of what school they go to
4. Segregation is over
5. It is my responsibility to make sure I receive a quality education
10:30-10:35 Students will get their notebooks and folders as they come into the classroom. A student will volunteer to read daily objectives written on the board.
10:35-10:40 Students will choose between Shrek and Wall-E when I call their name.
11:20-11:35 Students will work independently.
11:35-11:50 Students will share with the class.
11:50-12:00 Students will receive their exit ticket. Students will help us draw a Venn diagram on the board.
ACCORDION: Students will move from one side of the classroom to the other and share their responses voluntarily or upon request.
"date": "2020-01-22T06:28:27",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 5,
"language": "en",
"language_score": 0.9272574782371521,
"score": 4.53125,
"token_count": 717,
"url": "https://brown.digication.com/alla_chelukhova_teaching/Artifact_I_Lesson_Plan"
} |
If there is a tree that we can say is part of the nature of the Mediterranean culture, that undoubtedly is the olive tree.
What a wonderful tree. I have always been fascinated by its hold on life; there is no other like it. You uproot it, you leave it lying on the ground, you think it is dead, and it takes its time, perhaps even a few years; then, if the weather helps a bit, it revives, and one day the green resurfaces among its dry roots. Its capacity for resistance is amazing.
That is why today we can enjoy authentic living treasures. True survivors of centuries of raw winters covered in snow and summers under scorching suns, and yet tireless workers whose fruit has delighted rich and poor, nobles and slaves alike, regardless of whether those who cared for them were Iberians or Romans, Arabs or Christians. We have a lot to learn.
They are not majestic trees, and they do not reach impressive heights. They are rather rough and robust, with broad, coarse trunks, sometimes rolled and folded on themselves into strange figures. When young, at a few decades old, they tend to have a compact trunk, which over the years tends to open, leaving hollows where birds nest and small animals such as mice and snakes find shelter; then those spaces widen into small caves used by larger animals such as wild cats, badgers and foxes. Finally, after centuries of watching sunsets, they become dreamlike places where children fly with their imagination between medieval castles and ships crossing seas, or find the perfect shelter to prepare their magic potions. A fantastic space to play in while the rest of the family picks the olives.
There is a lot to tell about the olive trees, and we will tell it; that is our commitment.
"date": "2020-01-22T04:54:34",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9705637097358704,
"score": 2.640625,
"token_count": 372,
"url": "https://castelldelacosturera.com/gb/content/13-the-olive-trees"
} |
Separate insurance contracts (i.e., insurance policies not bundled with loans or other kinds of contracts) were invented in Genoa in the 14th century, as were insurance pools backed by pledges of landed estates. The first known insurance contract dates from Genoa in 1347, and in the next century maritime insurance developed widely and premiums were intuitively varied with risks. These new insurance contracts allowed insurance to be separated from investment, a separation of roles that first proved useful in marine insurance.
The cost of insurance is on the rise: the price for auto insurance rose 3.6% between 2011 and 2012, and 3.1% for homeowners and renter’s insurance, according to the Insurance Information Institute. In fact, auto liability insurance premiums alone have been increasing by 2.8% annually for the past three years. This makes choosing the right coverage and provider all the more crucial to save money without sacrificing important aspects of coverage. | <urn:uuid:74e21685-3a72-47ac-8a48-c793220820e9> | {
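As a quick illustration of what a steady 2.8% annual increase does to a bill over time, here is a minimal sketch; the $1,000 starting premium is an invented figure, not taken from the article:

```python
# Compound a premium at the 2.8% annual rate quoted above for three years.
premium = 1000.00  # hypothetical starting auto liability premium, in dollars
for year in range(1, 4):
    premium *= 1.028
    print(f"after year {year}: ${premium:,.2f}")
# after year 3: ~$1,086.37
```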
"date": "2020-01-22T06:14:35",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.966949999332428,
"score": 2.703125,
"token_count": 190,
"url": "https://cheapcarinsurancekey.com/carinsurance2/auto-insurance-premium-decrease-car-insurance-companies-jamaica.html"
} |
[Über den Nachweis und das Verhalten der bei der Bestrahlung des Urans mittels Neutronen entstehenden Erdalkalimetalle]
O. HAHN AND F. STRASSMANN
Die Naturwissenschaften 27, p. 11-15 (January 1939).
[Translation in American Journal of Physics, January 1964, p. 9-15]
This article in the original German.
[N.B. The placement of the figures bears no relation to their placement in the original. The isotopes discussed below are as follows: "Ra I" = unclear, "Ra II" = Ba-141, "Ra III" = Ba-139, "Ra IV" = Ba-140 and "Ac I" = unclear, "Ac II" = La-141, "Ac III" = La-139, "Ac IV" = La-140.]
In a recent preliminary article in this journal2 it was reported that when uranium is irradiated by neutrons, there are several new radioisotopes produced, other than the transuranic elements -- from 93 to 96 -- previously described by Meitner, Hahn, and Strassmann. These new radioactive products are apparently due to the decay of U239 by the successive emission of two alpha particles. By this process the element with a nuclear charge of 92 must decay to a nuclear charge of 88; that is, to radium. In the previously mentioned article a tentative decay scheme was proposed. The three radium isotopes, with their approximate half-lives given, decay to actinium isotopes, which in turn decay to thorium isotopes.
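For orientation, the bookkeeping implied by this (later disproved) hypothesis can be written out; this is an editorial illustration, not part of the original paper. Each alpha emission lowers the nuclear charge by 2 and the mass number by 4, so two successive alpha decays of U-239 would run

$$
{}^{239}_{92}\mathrm{U} \;\xrightarrow{\;\alpha\;}\; {}^{235}_{90}\mathrm{Th} \;\xrightarrow{\;\alpha\;}\; {}^{231}_{88}\mathrm{Ra},
$$

with the intermediate (charge 90) being a thorium isotope.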
A rather unexpected observation was pointed out, namely that these radium isotopes, which are produced by alpha emission and which in turn decay to thorium, are obtained not only with fast but also with slow neutrons.
The evidence that these three new parent isomers are actually radium was that they can be separated together with barium salts, and that they have all the chemical reactions which are characteristic of the element barium. All the other known elements, from the transuranic ones down through uranium, protactinium, thorium, and actinium have different chemical properties than barium and are easily separated from it. The same thing holds true for the elements below radium, that is, bismuth, lead, polonium, and ekacesium (now called francium). Therefore, radium is the only possibility, if one eliminates barium itself.
In the following, the separation of the mixture of isotopes and the isolation of each species is described. From the changes in the activity of the various isotopes their half-lives can be found, and also the decay products can be determined. The half-lives of the daughter decay products cannot be fully described in this article, however, because of the complexity of the process. There are at least three, and probably four, isomeric decay chains, each one with three species. The half-lives of all the daughter products have not yet been thoroughly investigated.
Barium was of course always used as a carrier for the "radium isotope." As a first step, one can precipitate the barium as barium sulfate, which is the least soluble barium salt after the chromate. However, due to previous experience and some preliminary work, this method of separating the "radium isotope" by means of barium sulfate was not used. The reason was that this precipitate also carries with it a small amount of uranium, and also a not negligible quantity of actinium and thorium isotopes. These are the supposed decay products of the "radium isotope," and therefore they would prevent making a pure preparation of the primary decay products. Instead of the sulfate precipitate, barium chloride, which is only very slightly soluble in strong hydrochloric acid, was chosen as the precipitating agent. This method worked very well.
When uranium is bombarded with slow neutrons, it is not easy to understand from energy considerations how radium isotopes can be produced. Therefore, a very careful determination of the chemical properties of the new artificially made radioelements was necessary. Various analytic groups of elements were separated from a solution containing the irradiated uranium. Besides the large group of transuranic elements, some radioactivity was always found in the alkaline-earth group (barium carrier), the rare-earth group (lanthanum carrier), and also with elements in group IV of the periodic table (zirconium carrier). The barium precipitate was the first to be investigated more thoroughly, since it apparently contains the parent isotopes of the observed isomeric series. The goal was to show that the transuranic elements, and also U, Pa, Th, and Ac could always be separated easily and completely from the activity which precipitates with barium.
1. For this reason, the irradiated uranium was treated with hydrogen sulfide, and the transuranic group was separated with platinum sulfide and dissolved in aqua regia. Barium chloride was precipitated from this solution with hydrochloric acid. From the remaining filtrate, the platinum was precipitated again with hydrogen sulfide. The barium chloride was inactive, but the platinum sulfide still had an activity of about 500 particles/minute. Similar experiments with the longer-lived transuranic elements gave the same result.
2. A precipitate with barium chloride was made using 10 g of unirradiated uranium nitrate. The U was in radioactive equilibrium with UX1 + UX2 (thorium and protactinium isotopes) and had an activity of about 400 000 particles/minute. The precipitate showed an activity of 14 particles/min; that is, practically no activity. That means neither U, nor Pa, nor Th, comes down out of solution when the barium chloride crystallizes.
3. Finally, using a solution of actinium (MsTh2) having an activity of about 2500 particles/min, a barium chloride precipitate was separated. This gave only about 3 particles/min which is also practically inactive.
In a similar way, the barium chloride precipitates obtained from the irradiated uranium solution were carefully investigated. However, sulfide precipitates made from the radioactive barium solution were practically inactive. Also, lanthanum and zirconium precipitates had only slight activities whose origin could easily be traced to the activity of the barium precipitates.
A simple precipitate with BaCl2 from a strong hydrochloric acid solution naturally does not allow one to distinguish between barium and radium. According to the reactions very briefly summarized above, the radioactivity which precipitates with the barium salts can only come from radium, if one eliminates barium itself for the time being as altogether too unlikely.
We now discuss briefly the graphs of the activities obtained with the barium chloride. They enable us to determine the number of "radium isotopes" present, and also their half-lives.
Figure 1 shows the activity of the radioactive barium chloride after a 4-day irradiation of uranium. Curve a gives the measurements for the first 70 h; curve b gives the measurements on the same sample continued for 800 h. The lower curve is plotted on 1/10 the scale of the upper one. At first there is a rapid decrease of activity, which becomes a gradual increase after about 12 h. After about 120 h, a very gradual exponential decrease of activity begins again, with a half-life of about 13 days. The shape of the curves shows clearly that there must be several radioactive species present. However one cannot tell for sure what they are. They might be several "radium isotopes," or one "radium isotope" with a series of radioactive daughter products.
The three "radium isotopes" which were previously reported in the earlier article were confirmed here. They are designated for the time being as Ra II, Ra III, and Ra IV (because of a presumed Ra I reported below). Their identification and the determination of their half-lives is explained briefly with the help of the figures. Figure 2 shows the radioactive decay of the "radium" after a 6-min irradiation of uranium. Curve a is the total activity, measured for 215 min. This curve is a composite of the activity from two "radium isotopes," Ra II and Ra III (compare Fig. 3), and also a small amount of actinium, which is formed by decay of Ra II. This latter substance, which is designated as Ac II, has a half-life of about 2 1/2 h. This was shown in another experiment, which is not described here. The theoretical growth curve for such an actinium isotope resulting from Ra II is shown in the figure as curve b. Here the half-life of Ra II is taken to be 14 min, in anticipation of later results. When curve b is subtracted from curve a, then curve c in Fig. 2 is obtained. This remaining activity must now come from the radium isotopes, mostly being due to the short-lived Ra II, and with a slight contribution from Ra III with its longer half-life. The latter has a half-life of about 86 min, as is seen in Fig. 3 later on. Curve d in
Fig. 2 shows the activity due to Ra III. When d is subtracted from c, one finally obtains curve e, which is the activity due to pure Ra II. It has an exponential decrease with a half-life of 14 min. This value should be correct within ±2 min.
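The curve arithmetic just described can be mimicked numerically. The sketch below is an editorial illustration, not part of the paper: the initial activities (1000 and 150 counts/min) are invented, the half-lives are the ones quoted in the text (Ra II 14 min, Ra III 86 min, Ac II about 2.5 h = 150 min), and the daughter's growth is the standard parent-daughter (Bateman) solution, which is what the authors' "theoretical growth curve" amounts to:

```python
import numpy as np

LN2 = np.log(2.0)

def parent_activity(a0, t_half, t):
    # Simple exponential decay of a parent isotope.
    return a0 * np.exp(-LN2 * t / t_half)

def daughter_activity(a0_parent, t_half_parent, t_half_daughter, t):
    # Activity of a daughter grown from a pure parent sample (none at t = 0).
    lam_p = LN2 / t_half_parent
    lam_d = LN2 / t_half_daughter
    return a0_parent * lam_d / (lam_d - lam_p) * (np.exp(-lam_p * t) - np.exp(-lam_d * t))

t = np.linspace(0.0, 215.0, 500)                  # minutes, the span of Fig. 2
ra2 = parent_activity(1000.0, 14.0, t)            # invented initial activity
ra3 = parent_activity(150.0, 86.0, t)             # invented initial activity
ac2 = daughter_activity(1000.0, 14.0, 150.0, t)   # grows in as Ra II decays

curve_a = ra2 + ra3 + ac2   # total measured activity
curve_c = curve_a - ac2     # subtract the Ac II growth (curve b)
curve_e = curve_c - ra3     # subtract Ra III (curve d); a pure 14-min decay remains
```

Subtracting the computed curves in this order reproduces the logic of Fig. 2: what is left after removing the daughter growth and the 86-min component is a clean 14-min exponential.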
Now we come to the identification and half-life determination of Ra III. A uranium sample is irradiated for one hour or several hours. One finds a rapid decrease in activity at first, then a still rather intense activity which decreases to one-half in about 100-110 min, and then a further decrease. In order to show that this activity was also mostly due to a radium isotope, the following procedure was used. The "radium" was separated from the irradiated uranium sample with barium chloride; after 2 1/2 h, the barium chloride was dissolved again, and reprecipitated. The short-lived Ra II has completely decayed during this time, and the Ac II (2 1/2 h half-life) which was formed from Ra II in the barium chloride is removed in the recrystallization process. The barium chloride still has considerable activity, so a "radium isotope" must still be present. The procedure here is like that used by Meitner, Strassmann, and Hahn3 for the investigation of the artificial radioactive daughter products of thorium. The resulting activity which remains is shown in Fig. 3, curve a.
During the first hour, the rate of decrease is almost exactly exponential, with a half-life of about 86 min. A small residual activity remains, which is no doubt due to a long-lived "actinium isotope" formed by the decay of Ra III. The decay of the actinium activity can be roughly determined by the departure of curve a from a pure exponential. This is shown in Fig. 3 as curve b. (It was also shown chemically that the decay of Ra III leads to an "actinium isotope" with a relatively long life.) If one subtracts b from a, one obtains curve c for Ra III alone. It shows a very nice exponential decrease with a half-life of 86 min. This value should be correct within ±6 min.
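The half-life read off curve c is simply the slope of the decay in log space. A minimal sketch, again an editorial illustration with synthetic, noise-free "measurements" standing in for the counter data:

```python
import numpy as np

t = np.arange(0.0, 120.0, 10.0)                      # minutes
activity = 800.0 * np.exp(-np.log(2.0) * t / 86.0)   # synthetic Ra III counts/min

# A pure exponential is a straight line in semi-log coordinates, so a
# least-squares fit of ln(activity) against t recovers the decay constant.
slope, _intercept = np.polyfit(t, np.log(activity), 1)
print(f"estimated half-life: {-np.log(2.0) / slope:.1f} min")  # -> 86.0
```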
Now we come to the third "radium isotope," which is designated here as Ra IV. In Fig. 1, curve b, a substance with a half-life of about 12 to 13 days was indicated. In a manner quite similar to that used for Ra III, it was shown that this more slowly decreasing activity must be practically all due to a "radium isotope." A lengthy irradiation of uranium was made, then the neutron source was removed, and by waiting about one day the isotopes Ra II and Ra III were allowed to decay completely. If one makes a barium precipitate now, and carefully recrystallizes again, then any activity found with the barium chloride can only be due to another "radium isotope." Such an activity was always found, even after several days of waiting. The decay rate follows a characteristic pattern. It increases gradually for several days, reaches a maximum, and then decreases with a half-life of about 300 h (12.5 days).
In Fig. 4 are shown several such curves. The sample for curve c was prepared from a uranium solution which had been irradiated with low intensity, and the other curves are due to barium precipitates from more intensely irradiated uranium solutions. (The curves cannot be used to determine the relative intensity factor directly, since the geometrical arrangement was not identical. Under identical conditions, such as equal amounts of uranium being irradiated, etc., we found that the relative intensity factor was about 7.) The shapes of the three curves are very similar. The growth of activity has a half-life of less than 40 h, and the decay about 300 h. However, the long-lived "Ra IV" doubtlessly has a half-life of somewhat less than 300 h, because the Ac IV which is mostly responsible for the initial growth of activity probably decays to a long-lived "thorium isotope." Therefore the half-life of Ra IV cannot be determined precisely, but a value of 250-300 h is probably close to being correct. From curves a, b, and c, one can see clearly that the beta rays of Ra IV are much less penetrating than those from its daughter product, since otherwise such a sharp increase would not occur.
To summarize our results, we have identified three alkaline earth metals which are designated as Ra II, Ra III, and Ra IV. Their half-lives are 14±2 min, 86±6 min, and 250-300 h. It should be noted that the 14-min activity was not designated as Ra I nor the other isomers as Ra II and Ra III. The reason is that we believe there is an even more unstable "Ra" isotope, although it has not been possible to observe it so far. In our first article about these new radioactive decay products we reported an actinium isotope with a half-life of about 40 min. Our initial assumption was that this least stable actinium had resulted from the decay of the least stable radium isotope. In the meantime, we have determined that the 14-min radium (previously given as 25 min) decays to actinium with a 2.5-h half-life (previously given as 4 h). However the less stable actinium isotope mentioned above is also present. Its half-life is somewhat shorter than previously reported, perhaps a little less than 30 min. This "actinium isotope" cannot result from the decay of the 14-min, 86-min, or the long-lived "Ra." Also this "actinium isotope" can be shown to be present after only a 5-min irradiation of uranium. The simplest explanation is to assume the formation of a "radium isotope" whose half-life must be shorter than 1 min. If it had a half-life longer than 1 min, we should have been able to detect it. We searched for it very carefully. Therefore we designate this heretofore unknown parent of the least stable "actinium isotope" as "Ra I." With a more intense neutron source it should no doubt be detectable.
The decay scheme which was given in our previous article must now be corrected. The following scheme takes into account the needed changes, and also gives the more accurately determined half-lives for the parent of each series:

[The decay-scheme diagram is not reproduced here. As reconstructed from the text, the four parallel chains are: Ra I (very short-lived) → Ac I (< 30 min); Ra II (14 ± 2 min) → Ac II (about 2.5 h); Ra III (86 ± 6 min) → Ac III (long-lived); Ra IV (250-300 h) → Ac IV; each chain ending in a "thorium" product.]
The large group of transuranic elements so far bears no known relation to these isomeric series.
The four decay series listed above can be regarded as doubtlessly correct in their genetic relationship. We have already been able to verify some of the "thorium" end products of the isomeric series. However, since the half-lives have not been determined with any accuracy yet, we have decided to refrain altogether from reporting them at the present time.
Now we still have to discuss some newer experiments, which we publish rather hesitantly due to their peculiar results. We wanted to identify beyond any doubt the chemical properties of the parent members of the radioactive series which were separated with the barium and which have been designated as "radium isotopes." We have carried out fractional crystallizations and fractional precipitations, a method which is well-known for concentrating (or diluting) radium in barium salt solutions.
Barium bromide increases the radium concentration greatly in a fractional crystallization process and barium chromate even more so when the crystals are allowed to form slowly. Barium chloride increases the concentration less than the bromide, and barium carbonate decreases it slightly. When we made appropriate tests with radioactive barium samples which were free of any later decay products, the results were always negative. The activity was distributed evenly among all the barium fractions, at least to the extent that we could determine it within an appreciable experimental error. Next a pair of fractionation experiments were done, using the radium isotope ThX and also the radium isotope MsTh1. These results were exactly as expected from all previous experience with radium. Next the "indicator (i.e., tracer) method" was applied to a mixture of purified long-lived "Ra IV" and pure MsTh1; this mixture with barium bromide as a carrier was subjected to fractional crystallization. The concentration of MsTh1 was increased, and the concentration of "Ra IV" was not, but rather its activity remained the same for fractions having an equivalent barium content. We come to the conclusion that our "radium isotopes" have the properties of barium. As chemists we should actually state that the new products are not radium, but rather barium itself;
other elements besides radium or barium are out of the question.
Finally we have made a tracer experiment with our pure separated "Ac II" (half-life about 2.5 h) and the pure actinium isotope MsTh2. If our "Ra isotopes" are not radium, then the "Ac isotopes" are not actinium either, but rather should be lanthanum. Using the technique of Curie,4 we carried out a fractionation of lanthanum oxalate, which contained both of the active substances, in a nitric acid solution. Just as Mme. Curie reported, the MsTh2 became greatly concentrated in the end fractions. With our "Ac II" there was no observable increase in concentration at the end. We agree with the findings of Curie and Savitch5 for their 3.5-h activity (which was however not just a single species) that the product resulting from the beta decay of our radioactive alkaline earth metal is not actinium. We want to make a more careful experimental test of the statement made by Curie and Savitch that they increased the concentration in lanthanum (which would argue against an identity with lanthanum), since in the mixture with which they were working there may have been a false indication of enrichment.
It has not been shown yet if the end product of the "Ac-La sample," which was designated as "thorium" in our isomeric series, will turn out to be cerium.
The "transuranic group" of elements are chemically related but not identical to their lower homologs, rhenium, osmium, iridium, and platinum. Experiments have not been made yet to see if they might be chemically identical with the even lower homologs, technetium, ruthenium, rhodium, and palladium. After all one could not even consider this as a possibility earlier. The sum of the mass numbers of barium+technetium, 138+101, gives 239!
As chemists we really ought to revise the decay scheme given above and insert the symbols Ba, La, Ce, in place of Ra, Ac, Th. However as "nuclear chemists," working very close to the field of physics, we cannot bring ourselves yet to take such a drastic step which goes against all previous experience in nuclear physics. There could perhaps be a series of unusual coincidences which has given us false indications.
It is intended to carry out further tracer experiments with the new radioactive decay products. In particular a combined fractionation will be attempted, using the radium isotope resulting from fast neutron irradiation of thorium (investigated by Meitner, Strassmann, and Hahn6) together with our alkaline earth metals resulting from uranium. At places where strong neutron sources are available, this project could actually be carried out much more easily.
In conclusion we would like to thank Miss Cl. Lieber and Miss I. Bohne for their efficient help in the numerous precipitations and measurements.
1 From the Kaiser Wilhelm Institute for Chemistry, at Berlin-Dahlem. Received 22 December 1938.
2 O. Hahn and F. Strassmann, Naturwissenschaften 26, 756 (1938).
3 L. Meitner, F. Strassmann, and O. Hahn, Z. Physik 109, 538 (1938).
4 Mme. Pierre Curie, J. Chim. Phys. 27, 1 (1930).
5 I. Curie and P. Savitch, Compt. Rend. 206, 1643 (1938).
6 L. Meitner, F. Strassmann, and O. Hahn, loc. cit. | <urn:uuid:2d902a54-c532-4715-887c-291d0e3337be> | {
"date": "2020-01-22T05:50:55",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9590300917625427,
"score": 2.765625,
"token_count": 4596,
"url": "https://chemteam.info/Chem-History/Hahn-fission-1939a/Hahn-fission-1939a.html"
} |
John Dewey is not even mentioned in the Wikipedia article on Logic. That’s an oversight that I’m tempted to remedy, but it also reflects the fact that the 20th century development of logic in the tradition of Frege, (early) Wittgenstein, Russell, Gödel, and Tarski has largely ignored Dewey’s work, conceiving it in various ways, but above all, as not part of Logic. His idea that logic is the theory of inquiry is deemed to be a non-starter.
Dewey’s new logic
Bertrand Russell, in particular, took pains to explain why Dewey's logic (1938) was not real logic, how it failed to address the fundamental questions of truth conditions or the relation between propositions and meaning, an idea that Tarski had already developed in his model theory. Logicians should focus on concepts such as truth conditions, consistency of logical systems (that not all statements are provable), and completeness (that true statements are provable).
The development of model theory as a basis for semantics meant that the direct connection with the world was severed; logicians could now focus on the structure and operation of logical systems per se, without concern for real-world consequences. In terms of academic logic, it's clear that Russell won the battle; Dewey's "new logic," as Russell demeaned it, especially with its insistence on connection to lived experience, is now judged irrelevant by virtually all mathematical logicians, and most philosophical logicians.
However, despite the great achievements of Tarski and others to follow, the standard account of logic has encountered obstacles. Kurt Gödel proved that any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. For most systems of greater complexity, it’s not possible to say what consistency and completeness even mean.
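For readers who want the precise claim being invoked, here is one compact modern statement — the Gödel–Rosser form, with $\mathsf{Q}$ denoting Robinson's weak arithmetic; the formulation is mine, not a quotation from Gödel:

```latex
% Goedel--Rosser incompleteness, schematically:
\[
  T \supseteq \mathsf{Q},\ T \text{ consistent and effectively axiomatized}
  \;\Longrightarrow\;
  \exists\, G \;\bigl( T \nvdash G \ \text{and}\ T \nvdash \neg G \bigr)
\]
```

That is, any such theory leaves some sentence undecided — the sense in which the Hilbert–Russell program could not be completed.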
Logicians began to see that formal logic was inadequate for the goals that David Hilbert, Russell and Whitehead, and others had proposed. Moreover, it was completely inadequate for that part of the universe that isn’t elementary arithmetic, i.e., social relations, history, culture, language, art, learning, nature, and all the other things that most people care about.
In recent years, these inadequacies of the formal semantics approach have led to a reconsideration of Dewey's theories. Thomas Burke, among others, has called for a critical re-examination of logic as the theory of inquiry. In Dewey's new logic: A reply to Russell, he analyzes the debate between Russell and Dewey that followed the publication of Dewey's Logic: a theory of inquiry in 1938. He concludes that although Russell won the battle, Dewey won the war, in the sense that his logic holds more promise for the future, especially as a logic for work in the social sciences and humanities, or for practical concerns.
Dewey’s unread book
In the preface to his 1938 book on logic, Dewey says,
This book is a development of ideas regarding the nature of logical theory that were first presented, some forty years ago, in Studies in Logical Theory; that were somewhat expanded in Essays in Experimental Logic and were briefly summarized with special reference to education in How We Think.
There are many proposed encapsulations of Dewey's vast body of work. If I had to choose one, it might be logic, which Dewey himself saw as a 40-year project. His early training, an academic context that sought a logical basis for knowing and life, and the ways in which his logic integrates across his ideas in art, education, political theory, morality, and other areas, all suggest to me that logic could be the strongest connective thread.
As he develops his logic, one can see the core behind many of Dewey's major ideas, such as warranted assertions, situation, ends-in-view, habits, the continuum of inquiry, facts and meanings, and the relation between natural and social science. He also confronts major issues in logic as they are conceived by Russell et al., but always with a twist, which, not surprisingly, makes his views unacceptable to that community. Nevertheless, I agree with Burke et al. that Dewey offers us the best option for a usable logic for the problems of today.
Reading Dewey’s Logic: A theory of inquiry
Some of Dewey’s Logic: a theory of inquiry can be a slow read. Published 71 years ago, the style is often pedantic. Dewey’s characteristic lack of references, diagrams, compelling metaphors, and good examples doesn’t help. His attempts to speak to the world of Russell and Tarski often get in the way. Nevertheless, the ideas are powerful, and deserve the reconsideration mentioned above.
Much of the book can seen as explaining one of the few definitions Dewey ever provides:
Inquiry is the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole.
The book is 556 pp. (my copy), divided into four parts. Part I is probably the most useful for most readers. It’s here that he provides the rationale for conceiving logic as inquiry, and discusses topics such as common sense in relation to scientific inquiry.
Part II defines inquiry and explores the construction of judgments. Part III on propositions and terms is a shorter section, and probably the most technical in the book. It’s also the one that speaks most to Tarski, although in a way that I suspect he rejects. Part IV focuses on mathematics and science. I found it to be the most interesting, especially as it deals with scientific methods, scientific laws, theories of knowledge, and social inquiry.
My recommendation on reading is to slow-read Part I, in order to understand what Dewey is trying to do. Use Part II as a way to see how the theory plays out, devoting effort to chapters differentially; e.g., I find chapter 8 on understanding and inference to be especially good. Part III could be left for a more advanced read. Part IV is very good, especially the last three chapters.
Table of Contents
Here is the TOC for Logic: a theory of inquiry. The links are to the Past Masters collection at the University of Illinois (login required).
- Burke, F. Thomas (1994). Dewey’s new logic: A reply to Russell. Chicago: University of Chicago Press.
- Burke, F. Thomas; Hester, D. Micah; Talisse, Robert B. (Eds.) (2002). Dewey’s logical theory: New studies and interpretations. Nashville: Vanderbilt University Press.
- Dewey, John (1938). Logic: a theory of inquiry. New York: Henry Holt.
- Talisse, Robert T. (2002). Two concepts of inquiry. Philosophical Writings, 20, 69-81.
- Tarski, Alfred (1983). Logic, semantics, metamathematics: Papers from 1923 to 1938 (2nd ed.). Indianapolis: Hackett.
"date": "2020-01-22T05:30:50",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9565435647964478,
"score": 2.9375,
"token_count": 1512,
"url": "https://chipbruce.net/tag/gottlob-frege/"
} |
Graphic Design Production Basics educates instructional designers on a few basic graphic design production concepts. It helps to bridge the communication gap between instructional designers and graphic designers, leading to stronger team development.
This course allows learners to experience the production dynamic between instructional designers and graphic designers. A comprehension of graphic design terminology leads to more common ground and fewer frustrations or misconceptions about what is needed from each party to complete a project. If you would like to take the course on ScormCloud, please email me: [email protected].
"date": "2020-01-22T05:03:27",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9188095927238464,
"score": 2.78125,
"token_count": 107,
"url": "https://christinamooredesign.com/2017/11/28/graphic-design-production-basics/"
} |
The pace of change in our world is accelerating and designing classroom instruction with a 20th-century mindset no longer prepares students for future success. While not everyone agrees on a single solution, sitting in rows and listening to content delivered through a lecture is slowly being replaced with active learning environments where students are prompted to ask questions, seek out relevant information, and apply information, not just remember it.
“The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” --Alvin Toffler
In 2002, the Partnership for 21st Century Skills (P21) began looking at the skills students need to be successful citizens beyond school. After interviewing leaders in a range of fields and working with schools to implement the P21 framework, they identified a set of four essential skills they call the 4 C’s: critical thinking, creativity, collaboration and communication.
The rest of this article describes steps you can take to reinforce the 4 C’s in your classroom and provides ideas you can use to change instruction to make the 4 C’s an integral part of the learning process.
The often-cited, but nevertheless accurate, reality is that today's wealth of information makes it essential that every student be able to compare and evaluate facts and opinions and make decisions based on that analysis. We need citizens who know more than "about" something; we need citizens who can take information and apply it to solve problems and create solutions.
Developing students’ thinking skills is not a revolutionary concept. Benjamin Bloom and colleagues first published the eponymous Bloom’s Taxonomy, the ubiquitous methodology for classifying educational learning objectives, in 1956! You’re likely also familiar with Webb’s Depth of Knowledge, released almost 30 years ago, and the revised Bloom’s Taxonomy from 2001. No matter which system you use, educational models like these can help you plan instruction that promotes higher levels of thinking.
Today’s presentation tools make it easy for students to generate flashy presentations. Students can simply mine data during a brief web search and copy and paste it into a presentation with lots of animation and movement. We can be so enamored with design and production value that we overlook the fact that the content was simply a dump of facts and information and shows no evidence of thought and understanding.
It is easy — and essential — to move beyond “about” projects; simply ask students to demonstrate knowledge AND thinking with end products that require both. Educator Elizabeth Allen has a list of fun products students can create that push them beyond copying and pasting, requiring them to think, internalize, and contextualize.
We need to create classroom cultures that value questions more than answers. If we want students to analyze, evaluate, and work with challenging ideas and problems, we need to equip them with skills for categorizing, deconstructing big ideas into component parts, identifying relationships, and asking more questions. Great thinking starts with effective inquiry.
The US Patent Office evaluates new ideas using the criteria of 1) originality, 2) usefulness, and 3) novelty. This sort of innovation is a result of creativity in practice. If we are going to be able to address the needs and issues in our highly complex and rapidly changing world, we need to stop thinking about creativity as just art, or as a unique character trait.
To promote creativity in our students, we need to create a learning culture that values and promotes creative behaviors. One of the biggest predictors of a person’s creative capacity is their openness to experience. Ensuring that the classroom culture values risk-taking and difference can help students overcome their reluctance to try new things, especially those things at which they might not initially excel. Make it clear that creativity is a positive, valued attribute of EVERY student. “Expect that your students can do it.”
To promote student creativity, require students to create work that is uniquely theirs. This sounds easy, but making it happen requires you to change as well. Let go of giving students exact instructions that, when followed, are guaranteed to meet your expectation of success. Let students take control of project design; let them define what, where, when, and how during ongoing discussions about why.
Student work, whether done individually or in teams, should not look like work done by other students on the same topic. Sameness is a symptom…when the processes and the resulting product(s) all look the same, there is too little control in students’ hands and too many instructions being followed. Create an environment where student creativity can flourish by transferring responsibility for learning and demonstrations of that learning to students.
Students often dislike working in groups because they do not know how to collaborate productively. Take the time to develop norms for group work and discuss behaviors and actions that result in successful team projects.
Learning to build on one another's knowledge and expertise involves respect, listening, and contributing. You might scaffold the learning process by assigning roles, allowing students to see the different tasks needed to complete a project and understand how their strengths can contribute to the overall success of their group. Make sure students have time and opportunities to reflect on their own strengths and weakness, as well as how to utilize the strengths of their team members for maximum effect.
While building skills for successful teamwork is important, it is just the beginning of collaboration in a 21st century classroom. If we want to prepare students for high-level thinking and work, we need to give them access to real work with experts and colleagues in a field of study. They need to work alongside professionals on tasks for a real audience who values that work.
Including communication as one of the 4 C’s underscores the changing nature of literacy. Powerful literacy skills have always included the ability to read and to share thoughts, questions, ideas, and solutions in ways others can understand. Literacy includes traditional speaking and writing as well as new modes of communication made possible by the widespread, affordable availability of video and multimedia tools.
Regardless of the medium, students must still be capable of clear, concise writing and the correct use of topic-specific vocabulary. Today’s students must also build skills with multimedia forms of communication, requiring the ability to “show rather than tell” using pictures, music, intonation, and more.
Technology has changed how we communicate. Pew Research Center findings suggest that students prefer writing on the computer to writing on paper and that they will write and edit more when writing on a device rather than by hand.
If we want students to work hard, we need to give them audiences for their work that value the content and delivery for more than academic purposes. Technology makes it easy to connect students to the world around them, providing an authentic audience for their communication. The Web makes it easy to connect to a specific audience, allowing students to share an idea or solution that make a real difference in the lives of real people.
A 21st century classroom provides students with the dispositions and skills to meet both the 3 R’s and the 4 C’s. Don’t worry…this isn’t yet another requirement you’ll need another teacher to cover. To help students gain these essential 21st century skills, you can adapt the process of learning, not the content they learn. By adding in both maker movement ideals and elements of project-based learning, you can provide learning opportunities that require students to learn and apply these skills.
Students want to be producers, not consumers. If you have the budget and the space, you could create a Maker Space in your classroom where students can use paper and other recycled materials to build prototypes of their ideas as well as products that solve real problems.
If you don’t, you can take advantage of technology tools that start with a blank page and let students develop their own curriculum products that show their knowledge and passion. Open-ended tools like Frames and Wixie require students to think creatively as they develop, implement, and effectively communicate new ideas to others.
Having students create with video, audio, text, and images provides an opportunity to exercise higher-order thinking skills. Students must critically evaluate both content and media as they frame, analyze, and synthesize information to solve problems and answer questions.
Project-based learning (PBL) connects students to real world issues and problems with an authentic audience. A project-based approach to teaching and learning requires students to question, think, and work together to apply learning. As they apply knowledge, explore relationships between ideas, and develop solutions, they engage deeply with and make personal connections to the curriculum.
“Creating authentic learning tasks... prompts this important process of meaning-making, because they establish relevance; the question, ‘why is this important to me?’ does not go unanswered.” — Virginia Padilla Vigil
This type of project work requires students to employ flexibility and adaptability as they reevaluate their work throughout the project process, becoming self-directed learners as they produce quality results. Working in diverse teams to complete a project on time and meeting assessment requirements helps to build leadership, responsibility, social skills, collaboration skills, and cultural awareness.
Whether you start with small steps for each of the 4 C’s or tackle a large instructional shift to move toward a 21st century classroom, the important thing is to embrace the goal of providing students with the content and skills they need to succeed in our complex and changing world.
Jonassen, D. H., & Reeves, T. C. (1996). Learning with technology: Using computers as cognitive tools. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 693-719). New York: Macmillan.
Lehrer, R. (1993). Authors of knowledge: Patterns of hypermedia design. In S. P. Lajoie & S. J. Derry (Eds.), Computers as Cognitive Tools (pp. 197-227). Hillsdale, NJ: Lawrence Erlbaum.
Virginia Padilla Vigil, Ph.D. Encouraging Rigor and Excellence in the Classroom.
"date": "2020-01-22T06:08:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.942708432674408,
"score": 3.765625,
"token_count": 2154,
"url": "https://creativeeducator.tech4learning.com/2016/articles/create-a-21st-century-classroom"
} |
Dandelion root has been used as a therapy for many years. It can treat allergies, lower cholesterol levels, improve bile production, and detoxify the liver. It also has diuretic properties and is good for pregnant women and for those in menopause.
Spring is the best time to harvest dandelion roots, particularly at the beginning of April. It is best to pick them from less polluted places, such as areas away from towns and roads.
You may not know that all parts of dandelion have medicinal properties. Its leaves are rich in vitamins and you can use them in a salad, or with potatoes and eggs.
The dandelion stem has the power to relieve stomach problems, boost the function of the gallbladder, regulate metabolism, and purify the blood. The stem can also be used to treat diabetes, while the milk from the stem can be used to remove warts.
In addition, people can use dandelion flowers to prepare the homemade dandelion syrup. It will help your body to purify the blood, it relieves a cough and improves digestion.
Recipe for dandelion syrup
Pour 3 liters of water over 400 yellow dandelion flowers. Next, cut 4 oranges and 4 lemons into slices and add them to the mixture. Leave it for 24 hours.
After 24 hours, strain the mixture and place it in a pot. Add 2 cups of sugar to the pot and cook for 30 minutes.
Remove it after it boils and gets thick enough. Then, place the syrup in sterilized jars. Treat colds, coughs, or bronchitis with this syrup.
Dandelion root – its health benefits and properties that can fight cancer
Dandelion has long been used and valued for its medicinal properties. Nowadays, modern research is examining the health benefits of this flower, and some findings suggest it may even help fight cancer.
If you want to prepare and store dandelion roots, you need to peel, cut, and dry them in fresh air. Leave them to dry for 2 weeks or until they become brittle under the fingers. When they are dry, place them in a jar and keep it in a dark and cool place.
Dandelion roots are able to cleanse the kidneys, liver, lymph, and gallbladder. They are helpful in treating gallstones, constipation, hepatitis, acne, edema, and rheumatism. Additionally, they are useful for women in the prevention and treatment of problems with breastfeeding, cysts, tumors, and cancer.
This is how you can prepare dandelion tea:
Dry, chop and mince some dandelion leaves. Place the mixture in a jar and keep it for future use. Add half a teaspoon of this mixture to a glass of water and your tea is ready.
Another option is to combine 60 grams of a fresh mixture and 30 grams of dried dandelion root. Place the mixture in a pan with 2.5 ounces of water with a pinch of salt. Bring to a boil, cover the pan, and simmer for around 20 minutes. Then, strain the mixture and drink three cups every day. | <urn:uuid:1cccfe9c-0ea3-4f35-a5ba-f0b267233b24> | {
"date": "2020-01-22T06:36:21",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9374778270721436,
"score": 2.515625,
"token_count": 658,
"url": "https://curesforhealthylife.com/dandelion-cures-cancer-hepatitis-liver-kidneys-stomach-heres-prepare/"
} |
Here’s the transcript (starting around the 0:33 time stamp).
The negationists of India are not just tolerated, they are effectively celebrated. For they are rewarded by the establishment and often placed in the top echelons of power. Once placed, they go about the task of rewriting history and conjuring up centuries of Hindu-Muslim unity… out of thin air!
Indeed, the Muslim conquests down to the 16th century were for the Hindus a pure struggle of life and death. Entire cities were burned down, the populations massacred, hundreds of thousands killed in every campaign and similar numbers deported as slaves. Every new invader made often literally his hills of Hindu skulls. The Bahmani sultans made it a rule to kill 100,000 Hindus in a year. In 1399, Timur killed 100,000 captives in a single day and many more on other occasions.
The conquest of Vijayanagara in 1565 left large areas of Karnataka depopulated and so on. The American historian Will Durant summed it up as follows:
The Islamic conquest of India is probably the bloodiest story in history. It’s a discouraging tale, for its evident moral is that civilization is a precious good whose delicate complex of order and freedom, culture and peace can at any moment be overthrown by barbarians invading from without or multiplying within.
Yet these traumatic events of the past that pushed Hindu civilization to the brink of extinction don’t find a place in the collective memory of the inheritors of this civilization. The credit for this goes to three sources of negationism:
- Indian National Congress,
- Aligarh Muslim University and
- the Marxist historians.
The rewriting of history textbooks began even before India attained independence. The Congress supported the Khilafat movement with the aim of encouraging the Muslims to join the struggle for freedom but their strategy backfired by further intensifying the separatist tendencies among the Muslim community.
At that time, Congress leaders were not yet actively involved in the rewriting of history. They were satisfied to quietly ignore the true history of Hindu-Muslim relations. After the communal riots of Kanpur in 1931, a Congress report advised the elimination of the enemy image by changing the contents of the history books. Subsequent generations of Congress leaders would profess negationism very explicitly.
The second major source of negationism is Aligarh Muslim University, often described as the cradle of Pakistan. Unlike their more orthodox allies in the Deoband school, intellectuals of Aligarh found it difficult to reconcile their agenda of modernizing the Muslim community with the blood-stained history of Muslim rulers.
Around 1920, Aligarh historian Muhammad Habib launched a grand project to rewrite the history of the Indian religious conflict. The main points of this version of history are as follows:
- Trivializing the original accounts of Islamic chroniclers describing the slaughter of Hindus, the abduction of their women and children, and the destruction of their places of worship, by calling these accounts exaggerations.
- Downplaying the religious zeal of the conquerors by attributing the loot and plunder to economic motives.
- Bringing in the racial factor and portraying the barbarism of the conquerors as unrelated to the doctrines of Islam.
- Portraying the violence of Islamic warriors as not having played an important role in the establishment of Islam in India.
These arguments cannot stand the test of historical criticism. We can demonstrate this with the example of Mahmud Ghaznavi, who ravaged the lands of Gujarat, Sindh and Punjab.
Ghaznavi was a Turk, yes, but certainly not a barbarian. He patronized the arts and literature and was a fine calligraphist himself. The barbarity of his campaigns cannot be attributed to his ethnic stock, nor did he care for material gains. He left the rich mosques on his path untouched and even turned down a Hindu offer to give back a famous idol in exchange for a huge ransom: “I prefer to appear on Judgment Day as an idol breaker rather than an idol seller.”
The final boost to negationism was delivered by the Marxist historians who took over the reins of India’s educational and research institutes and built a reputation for unscrupled history writing, in accordance with the party line. They took negationism to a whole new level. For Marxists like Bipin Chandra, communalism is not a dinosaur, meaning that it is a strictly modern phenomenon. They explicitly denied that before the modern period there existed such a thing as Hindu identity or Muslim identity.
Even now, negationism in India is practiced with the utmost prowess by historians and writers under the spell of Marxism. It would be wrong to expect that it will die a natural death because it has become a deeply entrenched bias and a thought habit for many people. Children usually survive their parents and negationism will survive Marxism for some time unless a serious effort is made to expose it on a grand scale.
Read more, think critically, gather the courage to face the truth. Even the Upanishads say Satyameva Jayate, the truth shall prevail.
I am Koenraad Elst for Upword. | <urn:uuid:88df4990-608b-4dff-b5c0-e2847c972544> | {
"date": "2020-01-22T05:15:35",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9517055153846741,
"score": 2.6875,
"token_count": 1072,
"url": "https://deeshaa.org/2019/06/20/koenraad-elst-negatisionism-in-india-hiding-the-real-history-of-islam/"
} |
By Kevin McElwee
Only about 10 percent of the human genome consists of genes. The other 90 percent? Once dismissed as “junk DNA,” this genetic material is now known to contain on-off switches that can activate genes. But how these segments, called enhancers, find and activate a target gene in the crowded environment of a cell’s nucleus is not well understood.
Researchers have now filmed the enhancers as they find, connect and activate genes in living cells. The study was published in the journal Nature Genetics in July 2018 by an international team led by Thomas Gregor, an associate professor of physics and the Lewis-Sigler Institute for Integrative Genomics.
The malfunction of gene activation causes the development of many diseases, including cancer.
“The key to curing such conditions is our ability to elucidate underlying mechanisms,” Gregor said. “The goal is to use these rules to regulate and re-engineer the programs under-lying development and disease processes.”
Since enhancers are sometimes located far from the gene they regulate, researchers have been puzzled by how the two segments find each other. Previous studies conducted on non-living cells provided only snapshots that omitted important details.
In the new study, researchers used imaging techniques developed at Princeton to track the position of an enhancer and its target gene while simultaneously monitoring the gene’s activity in living fly embryos. Hongtao Chen, an associate research scholar and lead author on the study, attached fluorescent tags to the enhancer and its target gene. He also attached a separate fluorescent tagging system to the target gene that lights up when the gene is activated.
Video of the cells let researchers observe how two regions of DNA interact with each other, said Michal Levo, a postdoctoral research fellow. “We can monitor in real time where the enhancer and the gene are physically located and simultaneously measure the gene’s activity in an attempt to relate these processes,” she said.
The researchers found that physical contact between the enhancer and gene is necessary to activate transcription, the first step in reading the genetic instructions. The enhancers stay connected to the gene the entire time it is active. When the enhancer disconnects, gene activity stops.
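To make the logic of that test concrete, here is a deliberately toy sketch in Python — synthetic traces, an invented 0.5-micron proximity threshold, and arbitrary units, none of it drawn from the study’s actual data — showing the kind of question the measurement answers: do transcriptionally active time points coincide with enhancer–gene contact?

```python
import numpy as np

# Synthetic stand-ins for the two traces described in the article:
# enhancer-gene separation over time, and a transcription readout.
rng = np.random.default_rng(0)
t = np.arange(0, 600, 5.0)                                             # seconds
separation = 0.6 + 0.2 * np.sin(t / 60) + rng.normal(0, 0.05, t.size)  # microns
contact = separation < 0.5                                             # assumed proximity threshold

# In this toy model, activity is high only during contact, plus noise.
activity = np.where(contact, 1.0, 0.05) + rng.normal(0, 0.05, t.size)
active = activity > 0.5

# The test: what fraction of transcriptionally active frames occur in contact?
fraction = contact[active].mean() if active.any() else float("nan")
print(f"active frames: {active.sum()}, fraction in contact: {fraction:.2f}")
```

In this toy model the coincidence is built in by construction; in the experiment, it was the finding.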
Their observations contradict a favored hypothesis known as the “hit-and-run model,” which suggested that the enhancer does not need to stay attached to the gene during transcription.
The team discovered that sometimes the enhancer and gene met and connected but gene activation did not occur, a finding they plan to explore further.
The study, which included work by Princeton graduate student Lev Barinov and collaborators at Thomas Jefferson University, was funded by the National Institutes of Health and the National Science Foundation. | <urn:uuid:20dec56f-6179-47f7-86e3-a4dfbf38a660> | {
"date": "2020-01-22T05:50:21",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9502117037773132,
"score": 3.625,
"token_count": 570,
"url": "https://discovery.princeton.edu/2018/12/02/finding-meaning-among-the-junk/"
} |
Dental implants are the go-to solution for missing teeth. They are fitted in place of the root of the tooth, giving a solid anchor for an artificial tooth to attach to. Once installed, they are just as effective as all of your other teeth, giving you a secure tooth to bite, chew and smile with.
Losing a tooth can be a traumatic experience, whether it is due to prior tooth decay, gum disease or an accident. Our modern implants can replace everything from a single missing tooth to a full mouth of teeth. A long-lasting solution that looks great and functions as if it has always been a part of your mouth, implants are the best option for replacing damaged or missing teeth.
What is a dental implant?
Dental implants are replacements for missing teeth; they are fixed in place and supported by your jawbone. The implant itself is an artificial replacement for a tooth root and is placed under the gums. An implant is used to support an artificial tooth or multiple teeth. Another word sometimes used to describe an implant is a fixture. If the implant is supporting only one tooth, then a crown is placed on top of it – this is the visible part of the tooth replacement in the mouth, above the gum. Implants can also be used to support multiple teeth. They can support fixed bridges (similar to crowns joined together) and removable dentures (false teeth which can be removed from the mouth for cleaning, but which are held in the mouth far more securely than dentures without implants). To attach your implant (under your gum) to your crown, bridge or denture (on top of your gum), many implants use a connector called an abutment.
What are tooth implants made from?
Most modern implants are made from titanium. Titanium is a very biocompatible material which is both strong and durable. On a microscopic level, new bone cells can actually grow onto the surface of titanium implants during a process called osseointegration, which basically means your implants integrate securely with your jawbone. Some implants may also be made from zirconia, although this is not as widely used as titanium.
Am I suitable for Dental Implant Treatment?
Most adults with good overall health are suitable for treatment. In general implants are not suitable for children and adolescents as they are usually only used when the jaws have stopped growing. Smoking or excessive alcohol intake may increase the risk of problems with both the initial healing of implants and their long term maintenance. A few medical issues can increase the risk of complications from dental implants such as poorly controlled diabetes and certain drugs used for the treatment of cancer and heart disease. Every case is individual and there are very few absolute contraindications to the placement of implants so don’t be put off if you are concerned about any lifestyle or medical issues, the vast majority of people are still suitable for treatment. It is important that your gums and any remaining teeth you may have are as healthy as possible so professional cleaning of your gums and teeth may be required prior to treatment.
An overview of the implant dentistry process
There are a few different stages to dental implant treatment. It is important not to rush things to get a successful and long-lasting result, so from start to finish treatment can usually take anywhere between three and nine months. In some rarer cases, overall treatment time may be shorter or sometimes even longer than this. There are a number of approaches to treatment and a lot will depend on your individual circumstances, but the typical process usually involves the following:

1. Your implant consultation – You will be given an initial consultation to discuss implants. Your teeth and gums will be examined and some initial X-rays may be taken. It is usually possible to make an assessment of your suitability for implant treatment at this point. Often, an estimate of how many implants you will need, how long treatment will take and the overall treatment cost can also be given.
2. Records and treatment planning – To properly plan your treatment it is important to take records of the condition of your teeth, your gums and your jawbones. This way implants can be placed in the optimum positions in your mouth and avoid accidentally placing them in an awkward or potentially harmful position. Moulds or scans of your teeth and gums will be taken and x-rays or CT scans (which is similar to an x-ray but in 3D) can be taken of your jawbones. Proper positioning for the implants and replacement teeth can then be planned, and an accurate written plan detailing the sequence and cost of treatment can be formulated.
3. Implant placement – Implant placement surgery is a relatively straightforward procedure that can be performed in a dentist’s surgery. A local anaesthetic similar to that used for a tooth filling is most commonly used during placement. You don’t need to be put to sleep, although some patients prefer to have sedation for the treatment. If during the treatment planning phase it was deemed you did not have enough bone for placement of implants, there are a number of ways that the volume of bone can be increased or augmented. Bone augmentation if usually performed either before or at the same time as implant placement depending on your specific case.
4. Integration period – The time it takes for an implant to fuse with your bone can vary anywhere between six weeks and six months. During this time it may be possible to place temporary teeth in your mouth to hide the gap. These usually take the form of a removable denture or fixed bridge, but sometimes temporary teeth may be fitted onto the implants themselves. In some circumstances you may not need any temporary replacement, such as where teeth are towards the back of the mouth and cannot be seen easily.
5. The restorative phase – After the implants have integrated with your jawbone, the process of making the definitive replacement teeth can begin. A number of factors affect what type of final teeth are made. These include, but are not limited to, how many teeth are missing, the amount of teeth and gum remaining in your mouth, the way you bite together and your budget. Usually a mould, scan or series of moulds or scans of the implants in your mouth will need to be made for the production of your final teeth. This process can take anywhere from a few days to a few months, depending on your specific case, but for many cases takes a few weeks. At the end of treatment your new replacement teeth will be checked in your mouth to make sure that the fit and appearance is correct before they are either screwed or cemented securely into place.
6. Maintenance – Having invested time and money into your dental implants it is important that you look after them. It is very important to regularly clean your new replacement teeth on your implants, any remaining natural teeth you may have and your gums. Regular visits to your dentist are important so that your implant teeth and gums can be checked, and any minor issues dealt with quickly before they become bigger problems. Regular trips to a dental hygienist to have your teeth, implants and gums cleaned are also advisable.
How do you know if you have enough bone for dental implants?
A routine dental x-ray can give some information on how much bone is available for dental implant placement. Unfortunately, traditional x-rays only provide 2D images and therefore only give an estimate of bone height and width, not of bone thickness or depth. A CBCT scan of your jawbone can give more information. It provides a three-dimensional image of your jawbone and also gives information on the strength or density of the bone available. The other advantage of CBCT scans is that they can highlight areas where important anatomical structures such as nerves and blood vessels are, so that these can be avoided during surgery. We use CBCT technology for the vast majority of implant treatments.
What can cause bone loss in the first place?
If you are a little low on bone in your jaws, you may be wondering why this has happened. Whenever you lose a tooth or have one taken out, some of the bone around where the tooth root sat is lost. The amount of bone lost varies from person to person and can be quite high initially, then tends to slow down over time. While the rate of bone loss may slow down, it can continue for as long as no tooth is in place, as the bone is no longer needed to support a tooth. Having your teeth replaced with removable dentures without implants to support them can actually increase the amount of bone that is lost over time.
Can dental implants preserve bone?
This is one of the big functional advantages of dental implants over other methods of replacing missing teeth. Unlike bridges and dentures not supported by implants, once dental implants are in place, supporting teeth and subject to everyday functional forces during eating, smiling and talking, they can actually stimulate the surrounding bone to become stronger and thicker. However, it is important to remember that everyone is different and the amount of bone preservation you can expect will vary depending on your individual case.
What can be done if there is not enough bone?
A reduced amount of bone does not necessarily mean you cannot have dental implant treatment. It is often possible to replace missing bone or to encourage new bone to grow. Procedures intended to increase the amount of bone available are often referred to as bone augmentation, bone regeneration or bone grafting procedures. The details of the procedure you require will depend on your specific case. Some common procedures used to increase the amount of bone available include the following:
1. Onlay grafting – Also known as block grafting, this is where a piece or block of your own bone is taken from somewhere else and laid on top of the deficient area (hence the term onlay). The block of bone may be taken from another area of your mouth, such as near your chin or the back of your mouth, or, if a larger amount of bone is needed, from another part of your body such as your leg or hip. The new piece of bone is then left to heal and join with the underlying deficient area to increase the amount of bone available.
2. Guided bone regeneration – Your own bone or bone substitute can be placed in a deficient area to bulk up the amount of bone available. The bone placed will usually be in small particles or grains with the aim of new bone filling in the space between these grains. Sometimes a sheet or membrane may be placed over these particles to keep them in place. Occasionally the membrane is placed on its own to promote new bone to grow. Guided bone regeneration often removes the need to have a second surgery to take a block of bone from another part of your mouth or body.
3. Sinus lift – A sinus lift, or sinus augmentation, is used to increase the bone height around the back of your upper jaw. Over time the height of the bone in this area can reduce, and your sinuses – air-filled spaces in your jawbone – can increase in size, further reducing the amount of bone available. It is possible to lift your sinuses up and either graft bone into the space created by lifting your sinuses or promote new bone to grow in this area.
What is bone substitute?
As an alternative to a second surgery to harvest your own bone from another area of your mouth or body, several bone substitutes are available. Special synthetic bone substitutes have been manufactured, and bone products have also been produced from bovine (derived from cow), porcine (derived from pig) or human sources. These materials have all been specially prepared to make them safe for use in people.
When is bone grafting performed?
If a relatively small amount of bone needs to be added then it is often possible to add this at the same time that the implants are placed. As the implant is integrating new bone can grow around it and grow into the scaffold created by the bone grafting procedure. This has the advantage that it does not slow down the overall treatment time.
When a larger amount of bone is needed it is often better to graft bone before placing the implants. A larger amount of bone takes longer to mature than a smaller amount and it may be between 3 and 12 months before the grafted area is suitable to receive an implant. While you will be keen to get your new teeth as soon as possible it is important not to rush things to ensure a successful outcome in the long term.
Dental Implant Surgery
The implant surgery itself usually involves making a small nick in the gum and then gently lifting the gum away from your jawbone. This allows the bone to be properly visualised. A small channel is then created to allow the implant to be inserted. After the implant is inserted, the gum may be replaced to completely cover the implant. Alternatively, a special cover called a healing cap may be placed over the implant or a temporary replacement tooth may be placed immediately on the same day as the surgery. The exact details of the surgery will vary on a case by case basis.
Do you have to have a gap during implant treatment?
It is possible to wear replacement teeth during implant treatment. This is most common where the teeth due to be replaced are towards the front of the mouth, where they are more visible. Teeth can be replaced on a temporary basis by removable dentures or fixed bridges, and in some circumstances a temporary tooth may be placed on the implant itself.
Can dental implants be placed next to natural teeth?
Dental implants are routinely placed next to natural teeth and it is usually very safe to do so. Where the roots of adjacent teeth are very curved and lie in the path where an implant may go it is usually still possible to place implants with a few tweaks to the treatment. Implants can be tilted to stay clear of the roots or a shorter implant may be used.
Do dental implants hurt?
Implant surgery is a relatively straightforward procedure which can be completed with a local anaesthetic similar to that used for a routine dental filling. Depending on how complex your case is, placing implants can take anywhere from half an hour for one or two to several hours for multiple implants and major bone grafting.
Like any surgery, you can expect a bit of discomfort afterwards. Many patients are surprised as the amount of discomfort is often less than expected. If required, you can take painkillers similar to those you might take for a headache such as ibuprofen or paracetamol.
Stitches are often used to secure the gum around the implants following the surgery. Sometimes dissolvable stitches are used; if non-dissolvable stitches are used instead, they may pull a little afterwards and are usually removed a week or two later. In the initial days after the surgery you should report any unexpected pain, swelling or prolonged bleeding or bruising.
Can I be put to sleep for the implant surgery?
Many implants are placed with a simple local anaesthetic (injection) alone. If you prefer, you can have the implants placed under conscious sedation. When sedation is used, the patient is still awake but in a relaxed state; many people compare it to being a little tipsy. Often patients don’t remember having the treatment done. For larger treatments, such as those that involve major bone grafting, you can be given a general anaesthetic. This will usually require hospital admission.
Do Implants always work? Can anything go wrong?
Where the human body is involved, unfortunately nothing is 100% guaranteed to work first time. A number of factors may lead to implants being rejected by your body, including smoking and excessive alcohol intake, so you will be encouraged to quit smoking and reduce your alcohol intake if these apply to you. Sometimes, if implants are not placed correctly in the first place, problems can arise. Some of the problems we see in implants NOT placed by us include:
1. Implants being rejected by your body – If your implant is not placed in the correct position in the bone it may fail to integrate, be rejected and could fall out.
2. Ugly looking teeth – The implant essentially replaces the root of the tooth, then artificial teeth are placed on top of this. Implants placed in the wrong position or at the wrong angulation can mean that the teeth on top of them are also wrong, giving an unpleasant appearance.
3. Damage to important nerves and blood vessels – Your jaws are not only made of bone. They also have important nerves and blood vessels running through them. If an implant is accidentally placed into a nerve or blood vessel it could have serious negative consequences. We use advanced digital technology to place our implants in the correct position to help avoid these problems.
How we use the advanced digital technology for successful dental implants in London
We have invested in advanced 3D scanning and printing technology to help ensure your implant treatment is a success. Our workflow for successful dental implant treatment planning is as follows:
1. We scan your jawbone – We use a CBCT scan to provide information on the strength of your bone and to highlight the location of nerves and blood vessels so we can avoid placing implants there.
2. We scan your teeth and your gums – To make sure that your new implants fit in with your remaining teeth and gums and don’t look ugly. Don’t worry, if you are missing all your teeth we can take a scan of only your gums so that your new teeth fit properly and look natural.
3. We combine your jawbone scan with your teeth and gum scan – Very few dentists do this and it is actually one of the most important aspects of planning your treatment. To get teeth that both look natural and are long lasting it is important that they fit with your remaining teeth, gums and bone. If you don’t consider all three, you risk problems in the future.
4. We plan exactly where we want your new teeth to go – This part is crucial. It may seem crazy, but many treatments are planned with the teeth as an afterthought. We think this is wrong. We plan the teeth first and then work backwards.
5. Then we plan the position of the implants to match the final teeth – Ultimately you want new teeth, not new screws in your jaw. Failing to consider the final result from the outset can leave implants being placed where teeth won’t fit properly, leaving an ugly appearance and risking problems in the future. After planning the position of the teeth we then plan the position of the implants so they are placed in the optimum location to support the teeth you want.
6. Time to turn plan into reality – It’s all well and good having a plan, but we then need to be able to execute it. Using the information gained from our advanced 3D scanning procedures, we use 3D printing technology to manufacture a guide that is used in surgery to place the implants in the position that we planned. We can also use the 3D plan to make new teeth ahead of your surgery and deliver them to you immediately, on the same day.
Looking after your dental implants
Having invested your time and money in implant treatment, it is important that you look after your implants. Like natural teeth, implant teeth can suffer from issues such as gum recession, gum disease and fractures. One key element of maintaining your implants is to practice good oral hygiene. Gum disease around implants is one of the major long-term causes of implant failure and is largely a result of implants not being cleaned properly. Like cleaning your teeth, cleaning your implants is not difficult. The vast majority of patients will be able to clean around their implant teeth just as they should around natural teeth, by brushing and flossing. In some cases special floss and brushes for in between teeth and implants may be required.
It is likely that cleaning will take a little longer initially. However, once you are into a good routine the process will become much easier. It is also important to see your dentist and hygienist for regular check-ups to keep an eye on your implant teeth. This way, if any problems do arise they can be dealt with at the earliest opportunity.
London Dental Implant FAQ
What is the cost of dental implants?
The cost of treatment can vary depending on the amount of treatment required and its complexity. Often records such as a CBCT scan of your jawbone will need to be taken to properly assess your case before an accurate treatment cost can be given. This helps avoid any unforeseen additional treatment and the extra costs that would come with it.
Is the treatment painful?
You will be given a local anaesthetic for the implant placement itself and therefore you should not feel any pain. Afterwards, like any surgical procedure you can expect a degree of discomfort. Most patients are surprised at how little discomfort they experience after implant placement. Some patients experience discomfort similar to having a tooth out. Any discomfort experience can usually be managed by simple painkillers like those you would take for a headache.
Am I too old for implant treatment?
You cannot be too old for implant treatment. As long as you are in reasonably good health, implants can still be placed. There is no upper age limit for treatment.
Will I be able to eat whatever I want?
The aim of any implant treatment is to restore both the function and aesthetics of your teeth. In this way we don’t only want your teeth to look good but we want you to be able to eat whatever you want as well. During treatment, there may be some times when it will be easier for you to stick to a soft diet and you may be advised to avoid certain foods for short periods of time. However, once treatment is completed, patients should be able to eat a wide range of foods without any issues.
How long will the treatment take?
Treatment usually takes multiple appointments over anywhere from a few weeks to several months. Treatment time varies on a case-by-case basis and is usually shorter for simpler treatments and longer for more complicated cases. In general, treatment takes between three and nine months, with the majority taking less than six months.
How long will the implants last?
The aim of every treatment is to provide a result that will last as long as possible. If properly cared for dental implants can last a lifetime. It is important to attend the dentist and hygienist on a regular basis for check-ups on your implants. | <urn:uuid:0437e392-746b-4a2c-8746-0d5e78fdc671> | {
"date": "2020-01-22T05:52:31",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.944985032081604,
"score": 2.96875,
"token_count": 4547,
"url": "https://drjohnfagbemi.co.uk/cosmetic-dentistry/dental-implants/"
} |
The photographer may have told the children to hold hands and grab the waist belts to keep their arms still during the exposure.
This ambrotype is not only endearing, but raises several points of interest: First, both children were stood atop chairs, and with that placement came the possibility that either could topple—especially the younger child, who looks about age two or three. The belts around the siblings’ waists were not part of their costumes—it is likely they were both strapped to metal stands that photographers used to provide stability for their subjects, as well as to keep young sitters like these from wandering out of the frame. (That this was sometimes necessary is illustrated by the adorable image below. How refreshing it is to see a mother cracking up at the antics of her toddler whilst Daddy or a studio assistant tries to keep the child from escaping.) The photographer may also have told the siblings to hold hands and grab the waist belts to keep their arms still during the exposure.
Both children may also have been head clamped, as can be seen in the Victorian cartoon below. Contrary to what duplicitous eBay sellers and 14-year-old goth bloggers might propose, these metal stands were not used to hold up dead bodies. The cartoon below clearly shows how posing stands worked to help keep sitters still.
Secondly, it is unclear whether the child on the right of the ambrotype is a boy or a girl. The center-parted hair argues female, but the rest of the outfit says boy despite the floral top and long cotton bloomers under a buoyant checked skirt.
Also tantalizing are the partial words visible at the edges of the image. At one point, the sticky back of the ambrotype was covered by newspaper. If still intact, this may have yielded a clue about this image’s precise date and location of origin. Ω
The sap of another generation,
fingering through a broken tree
to push fresh branches
towards a further light,
a different identity.
—John Montague, “The Living and The Dead”
This wonderful outdoor image, circa 1910, shows a bonneted babe sitting in a wicker pram on an early spring day in the eastern United States. The child’s pudgy hand appears lightly pinched, rather than held, by the arthritic fingers of his or her grandmother—perhaps great-grandmother. The old woman, who was probably born in the 1830s, is magnificent with her weathered face and carefully coiffed, almost ruched white hair in contrast to her elaborate dark clothing. She seems quite elderly, but sturdy and strong. A house, possibly the family home, can be glimpsed through the leafless trees behind her.
The next image is of a multigenerational British family posed on a ground-floor window ledge on a pleasant day during the mid-1860s. Grandmother, who is dressed in black-and-white widow’s clothing, sits in a wicker chair, whilst Father and Mother lean into the picture from inside the home. Mum’s hand rests possessively on the shoulder of her youngest son, whilst the eldest brother perches on the sill and the middle son sits cross-legged below him. The daughter of the house, a tween in a jaunty summer dress, looks very much a mini-me of her mother.
The third image, which is marked “J. McCornick, Photographer, 3 The Bridges, Walsall,” is more somber. One subject is a young girl of about 12 years beside an elderly gentleman who is likely her grandfather. The seated female may be the girl’s mother or her grandmother—it is hard to be sure, although they are clearly related.
The members of this family group are dressed in mourning, but nothing more of the nature of their loss can be supposed, except that the mother or grandmother was not mourning for her husband. The prevailing custom for widows’ bonnets was to include a white inset to frame the face.
Grandfather, whose hand appears to rest protectively on the small of his granddaughter’s back, holds in his other hand some type of folded document or wallet. The message he conveyed with this prop is now inscrutable, but it would have been understood by the carte de visite’s viewers.
The final image is a four-generation portrait, identified on the reverse as “Elizabeth Stokesbury, age 79 years; Clarissa Stokesbury, age 51 years; Extonetta Book, age 29 years; Esther Cook Book, age 3 years.”
At the far left is Elizabeth Clark (11 April 1824-5 Oct. 1910), born in Fayette County, Ohio, to Welsh native Joshua Clark (1795-29 March 1830) and his wife Mary Blaugher (1795-16 March 1879).
Elizabeth Clark married farmer John S. Stokesbury (7 Sept. 1819-12 May 1867), the son of Robert Stokesbury (1790–1839) and Anna Baughman (1794–1870). In 1850, the Stokesburys farmed in Jefferson, Green County, Iowa; by 1860 they had moved to a new farm in the county of Wayne. The couple had eleven children to assist them: Robert (b. abt. 1842); Angeline (b. abt. 1844); Mary Ann (b. abt. 1846); Joseph (b. abt. 1848); Sarah (b. abt. 1850); Clarissa (12 Sept. 1851-8 March 1935); Harvey (b. abt. 1853); John (b. abt. 1859); Elizabeth Ann (28 June 1861-9 Aug 1946); Clark D. (b. abt. 1863); and Launa (1865-1939).
At age 16, Clarissa, second from left, married a cousin, Jesse Bush Stokesbury (24 Jan. 1843-18 Dec. 1918), the son of James Madison Stokesbury (1813–1869) and Phoebe Painter (1819–1902). By 1870, Clarissa and Jesse had migrated to Chariton, Iowa, where the family farm was enumerated on the 1870 Census. However, their days on the land were ended by 1880, when Jesse was recorded on the census as a laundry man, and on the 1900 Census he was enumerated as a day laborer. His widowed mother-in-law, Elizabeth Stokesbury, was also in residence, along with her youngest children.
Clarissa and Jesse had the following sons and daughters: Bryant W. (b. abt. 1868); Hillary Edwin (13 April 1870-8 Feb. 1950); Theodosia (b. abt 1872); and Extonetta (b. Dec. 1873), second from right in the photograph, who was known as “Nettie.”
On 24 November, 1898, Nettie married harness maker and saddler John Atwater Book (Sept. 1864-17 April 1924), son of Harlan and Emmaline Book. By 1900, the Books and their first child, Esther Cook (far right—and yes, Cook Book) all lived with Jesse and Clarissa Stokesbury. Nettie and John had two more children: Sarah E. (b. 6 Feb. 1902); and Jesse H. (b. 24 Dec. 1903). Sarah married Loren L. Adams on 12 September, 1935; Jesse married Fae Arza Wicks in June 1929. He died in January 1970 in Seymour, Indiana, and was buried at Chariton Cemetery.
Nettie’s brother Edwin Stokesbury, who became a broom maker and married Ollie B. Ritter on 20 February, 1894, had set up house in Chariton by 1900. The couple had four children, but shortly thereafter the marriage failed. Ollie married as her second husband a man named Van Trump and Edwin’s children took their step-father’s surname. By 1920, widowed Clarissa and her son Edwin lived together.
In 1920, Esther Book worked as a bookkeeper in a Chariton store along with sister Sarah. On the 1930 Census, Nettie, Esther, and Sarah were enumerated in one household, with Nettie working as a sales lady in a variety store; Esther worked as a bookkeeper in a bank and Sarah was a tailoress in a dry goods store.
The Des Moines Register of 25 December, 1935, featured a testimonial advertisement by Nettie in which she was quoted, “I like the simplicity of operating the Colonial Furnace and the way it holds fire. The damper enables one to feed the fire so that no smoke, soot, or gas escapes into the rooms. And I like the draft in the feed door, which can be opened to prevent puffing.”
Extonetta Book died on 8 May, 1962, and was buried in Chariton Cemetery. It appears that her daughter Esther never married. She worked for many years as the secretary of the Farmers Mutual Insurance Association and died 25 March, 1965, three years after her mother. Esther is also buried in Chariton Cemetery. Ω
This is a puzzling image—and one for which I am interested in reader input. The inscription on the image, printed in pencil, reads: “Mother, Me, Duncan (died 10-19), and Nanny McFalls.”
When I purchased the cabinet card, I presumed that it was a postmortem image showing a deceased child guarded by his or her nanny, who wore a black bow on her white cap as well as a black dress with a white pin-front apron. The child’s well-heeled mother, in a proper dark dress, raised her eyes to heaven as if for angelic support, clutching her remaining offspring, who held a large china doll and looked warily at the camera.
The baby rested upon a draped piece of furniture in a position that suggested the illusion of sleeping rather than in-one’s-face death, a style of Victorian postmortem imagery that grew increasingly popular as the turn of the century approached.
The infant showed no visible signs of illness, rigor mortis, or decomposition. The child was not dressed for burial but wore regular clothing for an infant of his age, including little hard-soled leather walking shoes. The nanny’s hand rested on his arm while she faced the camera without any grief apparent. If the baby was not dead but sleeping, why was he laid on a covered cushion or small table instead of being held in his nanny’s arms? Also, he was old enough to be woken to have his picture taken. Why would he have been posed this way if he was just having a wee nap?
The fashions shown in this image date it, I am confident, between 1887 and 1890. This accords exactly with the presence of photographer Edmund Geering in Aberdeen, Scotland. Geering was an Englishman born in Sussex in about 1843. He was active as a photographer in Kincardineshire by 1871. He married a Scotswoman and was, according to Aberdeen city directories, operating out of 10 Union Place from the early 1880s to about 1889.
So the fashions, the type of photo, and the career of the photographer all place the image in the late 1880s. This brings me to the death date noted in the inscription: “10-19.” What does it mean? October 19? October 1919? If the latter, this is not a postmortem image at all and is instead simply a photo of an affluent woman, her children, and her servant. If the date refers only to a month and a day, why is there no year?
One possibility is that Duncan was not the baby, but the child. The baby grew up to become the writer of the inscription and Duncan was actually the child in the frilly dress holding the doll. In fact, the child’s hair was parted on the side, which was one indicator of maleness in an age where boys and girls dressed alike during the first years of life. In this scenario, it was the baby’s brother, Duncan, who died as an adult in October 1919.
My fellow Flickr historian and actual cousin, Laura Harrison, opined, “If you look at the order of names, it would seem ‘Me’ is the tot and ‘Duncan’ is the baby. With October 1919 being the date of death, and assuming the picture was taken between 1881 and 1891, the baby could have served in World War I and died in 1919 from battle injuries. A lot of soldiers died in the years after the war due to injuries.”
Good point, cousin.
After looking at the reverse inscription, Flickr user Christie Harris chimed in, “The inscription looks like it was probably written well after the photo was taken; I think the 1919 [death date] would be more likely.” I agree with Christie that the handwriting of the inscriber was quite modern and was added many years later.
And so we are left with a mystery. Actually, two: I genuinely want to know more about Nanny McFalls. I searched for her as best I could, but with so little to go on, I could not identify her. In the image, she seems a cheerful, young Scottish woman who cared about her charges and who was loved enough in return to earn a place in her employer’s family portrait. Ω
"date": "2020-01-22T06:15:54",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9804546236991882,
"score": 3.078125,
"token_count": 2892,
"url": "https://dyingcharlotte.com/tag/child/page/2/"
} |
In the classroom, it can be hard for instructors to know whether their teaching methods are resonating with students. That’s why bug-in-ear coaching, a technique that allows teachers to receive real-time coaching during a lesson, is gaining popularity in school districts across the United States, writes Madeline Will for Education Week.
With bug-in-ear coaching, a teacher wears an earpiece while teaching a lesson, which is livestreamed for an instructional coach, writes Will. The instructional coach then delivers live feedback during the lesson, allowing the teacher to apply the recommendations in real time.
Currently, about a dozen states use bug-in-ear coaching. The practice is especially beneficial for rural and underserved school districts, who may not be able to hire their own coaches.
“It was really nice to feel supported and get direct feedback in the moment, because as much as you can do that through somebody being there and watching you, they always do it afterwards or by interrupting [the lesson],” says Michael Young, who teaches special education at Elk Ridge Elementary School in Buckley, Washington. “It was helpful information that changed the way I taught.”
That’s because bug-in-ear coaching “correct[s] behaviors on the spot,” says Mary Catherine Scheeler, an associate professor of special education at Pennsylvania State University and a pioneer of bug-in-ear coaching. “I like to say practice makes permanent. If people are practicing things incorrectly, they become part of the repertoire.”
The research on bug-in-ear coaching suggests that it works. Instructors coached through this method not only use evidence-based practices in their lessons more frequently, but also tend to keep up with the improvements they learned after their coaching sessions end.
Experts also suggest that bug-in-ear coaching can improve teachers’ tendencies to ask meaningful questions and increase student talk time. According to research by the University of Washington‘s (UW) College of Education, instructors coached through an earpiece “significantly increased the number of opportunities they created for children with disabilities to communicate.” And students were better able to use language to advocate for themselves.
The instructors explained that their virtual coach gave them “encouragement to keep going, even if [the teaching technique] feels like it’s not working,” says Kathleen Artman Meeker, the director of research at UW’s Haring Center for Inclusive Education. “You have somebody there with you, shoulder to shoulder, helping you think through it without disrupting the flow of the day.”
At first, some instructors are hesitant to try bug-in-ear coaching because they worry it will be distracting. But research suggests that “teachers acclimate very quickly to the bug-in-ear technology,” says Paula Crawford, the section chief of program improvement and professional development at the exceptional children division of the North Carolina Department of Public Instruction. “I’ve observed how within five to 10 minutes, teachers have been able to take the feedback and immediately change their practice within the classroom.”
Nancy Rosenberg, the director of the UW College of Education’s Applied Behavior Analysis program, suggests that real-time coaching has a much bigger impact than other forms of coaching. “[Meaningful feedback is] really hard to do after the fact, to watch a video and say, ‘Oh, you could have done this [here],’ or, ‘You could have responded more quickly’,” she says (Will, Education Week, 2/26).
"date": "2020-01-22T05:08:21",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9733188152313232,
"score": 3.0625,
"token_count": 758,
"url": "https://eab.com/insights/daily-briefing/academic-affairs/they-became-better-teachers-in-just-10-minutes-heres-how/"
} |
I participated in an Influencer Activation on behalf of Influence Central for MedImmune. I received product samples to facilitate my review as well as a promotional item to thank me for my participation. When I had my twins two years ago, I was very aware of the risk of RSV. As anticipated, our babies came early, making them even more susceptible to this common seasonal illness. However, to premature babies, this illness is serious: it causes mild to moderate symptoms in full-term, healthy babies, but can be dangerous and life-threatening to babies born prematurely.
In fact, RSV disease is the leading cause of hospitalization for babies during their first year of life in the United States, with approximately 125,000 hospitalizations and up to 200 infant deaths each year. Despite this, one third of mothers do not even know about RSV, and that’s troubling.
I was one of those mothers when my first was born 5 years ago. I had never heard of the virus and only learned about it when a co-worker’s child passed away from the illness. It was the worst possible way to learn about the virus and left me frightened for my then 3-month-old son. Fortunately, when I knew my twins would most likely arrive early, I knew about this virus and could take appropriate precautions (which you can read about below). November 17th is World Prematurity Day, so learning about the link between premature infants and RSV is especially important to share!
What is RSV and why are preemies at-risk?
RSV is a common seasonal virus that occurs in epidemics from November to March, though this can vary. While every baby is at risk of contracting RSV, those born prematurely are more at risk of developing more severe symptoms, which include:
- Persistent coughing or wheezing
- Bluish color around the mouth or fingernails
- Rapid, difficult or gasping breaths
- Fever (especially if it is over 100.4°F [rectal] in infants under 3 months of age)
Since RSV can mimic a cold, it is important to watch children extremely closely for early signs of these more severe symptoms.

How can you protect your child? RSV is extremely contagious. It is easily spread through touching, sneezing and coughing. The virus itself can also live on the skin and surfaces for hours. Once it is contracted, there is no treatment for RSV, so prevention is key. Here are some things you can do, as a parent, to help minimize the spread of RSV:

- Wash hands and ask others to do the same
- Keep toys, clothes, blankets and sheets clean
- Avoid crowds and other young children during RSV season
- Never let anyone smoke around your baby
- Steer clear of people who are sick or who have recently been sick

Help inform other parents and protect other children by sharing the infographic below. If you want to learn more about RSV and get more facts, visit www.RSVprotection.com
"date": "2020-01-22T05:48:33",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9733014702796936,
"score": 2.734375,
"token_count": 623,
"url": "https://elleoliveco.com/2014/11/rsv-awareness/"
} |
Cambodian People's Party
Vice Presidents: Say Chhum
Founded: 28 June 1951
Headquarters: Street 2004, Sangkat Teuk Thla, Khan Sen Sok, Phnom Penh
International affiliation: Centrist Democrat International
Seats in the Senate: 58 / 62
Seats in the National Assembly: 125 / 125
Commune chiefs: 1,645 / 1,646
Commune councillors: 11,051 / 11,572
Provincial, municipal and district councillors: 4,034 / 4,114
The Cambodian People's Party (CPP; Khmer: គណបក្សប្រជាជនកម្ពុជា, Kanakpak Pracheachon Kâmpuchéa; French: Parti du peuple cambodgien, PPC) has been the ruling political party of Cambodia since 1979. Founded in 1951, it was originally known as the Kampuchean People's Revolutionary Party (Khmer: គណបក្សប្រជាជនបដិវត្តន៍កម្ពុជា, KPRP). After toppling the Khmer Rouge regime with the Vietnamese-backed liberation of Phnom Penh, it became the ruling party of the People's Republic of Kampuchea (1979–1989), which was later renamed the State of Cambodia (1989–1991). The current name, CPP, was adopted during the final year of the State of Cambodia, when the one-party system as well as the Marxist–Leninist ideology were abandoned. Originally rooted in communist and Marxist–Leninist ideologies, the party took on a more reformist outlook in the mid-1980s under Heng Samrin. In 1991, the CPP officially dropped its commitment to socialist ideology, and has embraced a free market economy.
History

The original Kampuchean People's Revolutionary Party (KPRP) was founded on 28 June 1951 by Cambodian nationalists who struggled to free Cambodia from French colonial rule. Nationalists in Cambodia, Vietnam and Laos shared the same belief that to free themselves from France successfully they needed to work together; thus the Indochinese Communist Party (ICP) was formed in 1930. However, the triumph of the Japanese during the early stage of World War II crippled French rule and helped to nurture nationalism in all three Indochinese countries. Consequently, the idea of an Indochina-wide party was submerged in the rhetoric of fierce nationalism. In Cambodia, growing nationalist sentiment and national pride combined with historical mistrust and fear of neighbouring countries, which turned out to be a stumbling block for the ICP.
In 1955, a subsidiary party named the People’s Party was established to contest the national election that year. The name of the party was changed to the Workers’ Party of Kampuchea (WPK) on 28 September 1960 and then to the Communist Party of Kampuchea (CPK, whose followers were dubbed the Khmer Rouge by Prince Norodom Sihanouk) in September 1966, with its headquarters in Ratanak Kiri province. The rise of Pol Pot and his group to the party leadership in 1963 divided the party between true nationalists and the genocidal perpetrators who eventually killed their own people in the genocide of 1975 to 1979. In one sense, the CPK was a new organization.
Constitution and early days

Pen Sovan's leadership (1979–1981)
In early 1979, the KPRP members who had toppled Democratic Kampuchea, the Khmer Rouge regime, to end the genocide held a congress. At this gathering, they declared themselves the true successors of the original KPRP founded in 1951 and labelled the congress as the Third Party Congress, thus not recognizing the 1963, 1975 and 1978 congresses of the CPK as legitimate. The party considered 28 June 1951 as its founding date. A national committee led by Pen Sovan and Roh Samai was appointed by the Congress. The women's wing of the party, the National Association of Women for the Salvation of Kampuchea, was also established in 1979, with a vast national network of members that extended to the district level.
The existence of the party was kept secret until its 4th congress in May 1981, when it appeared publicly and assumed the name KPRP. The name change was stated to be carried out "to clearly distinguish it from the reactionary Pol Pot party and to underline and reassert the continuity of the party's best traditions". Very little is known about the Third Party Congress, also known as the Congress for Party Reconstruction, except that Pen Sovan was elected first secretary of the Central Committee and that the party had between sixty-two and sixty-six regular members. The congress elected a Central Committee of seven members: General Secretary Pen Sovan; permanent members Heng Samrin and Chea Sim; and central members Hun Sen, Bou Thang, Van Son and Chan Sy. Chan Kiri was head of the Party Commission for Inspection. In March 1979, Van Son and Chan Kiri were dismissed, and Pen Sovan, Chea Sim and Say Phouthang (chair of the Party's Central Organization Committee) formed the Party's Central Standing Committee.
Fourth Party Congress: change of strategy (1981)
In Pen Sovan's political report to the Fourth Party Congress held 26 to 29 May 1981, he was careful to distance the KPRP from Pol Pot's CPK and he denounced the CPK as a traitor to the party and to the nation. The KPRP decided at the Fourth Party Congress to operate "openly". This move seemed to reflect the leadership's growing confidence in its ability to stay in power despite the ongoing guerrilla war with the Khmer Rouge. The move may have had a practical dimension as well because it involved the people more actively in the regime's effort to build the country's political and administrative infrastructure after the Khmer Rouge genocide from 1975 to 1979. The Fourth Party Congress reviewed Pen Sovan's political report and defined the party's strategy for the next several years. The Congress adopted five "basic principles of the party line", which were to uphold the banners of patriotism and of international proletarian solidarity; to defend the country (the primary and sacred task of all people); to restore and to develop the economy and the culture in the course of gradual transition toward socialism; to strengthen military solidarity with Vietnam, Laos, the Soviet Union and other socialist nations; and to develop "a firm Marxist–Leninist party".
At the Congress, it was decided that henceforth the party would be known as the KPRP in order to distinguish it from "the reactionary Pol Pot party and to underline and reassert the community of the party's best traditions". The Fourth Party Congress also proclaimed its resolve to stamp out the "reactionary ultra-nationalist doctrine of Pol Pot", to emphasize a centralized government and collective leadership and to reject personality cults. The "ultra-nationalist doctrine" issue was an allusion to Pol Pot's racist, anti-Vietnamese stance. The Congress, attended by 162 delegates, elected twenty-one members of the party Central Committee, who in turn elected Pen Sovan as general secretary along with the eight members of the party's inner circle, the Political Bureau: among them Heng Samrin (ranked second), Chea Sim (fourth), Hun Sen (sixth) and Chan Sy (seventh; minister of defense and, from December 1981, prime minister), as well as Chea Soth (deputy prime minister) and Bou Thang (chair of the Party's Central Propaganda Committee, deputy prime minister from 1982 to 1992 and minister of defense from 1982 to 1986). It also elected seven members of the Secretariat (including Hun Sen) and adopted a new statute for the party, but did not release the text. According to Michael Vickery, veterans of the independence struggle of the 1946 to 1954 period dominated the party Central Committee. A majority of the Central Committee members had spent all or part of the years 1954 to 1970 in exile in Vietnam or in the performance of "duties abroad".
Heng Samrin's leadership (1981–1991)
The KPRP's pro-Vietnamese position did not change when Heng Samrin suddenly replaced Pen Sovan as party leader on 4 December 1981. Heng Samrin and Chea Sim held the first and second positions in the Politburo. Pen Sovan, who was reportedly flown to Hanoi under Vietnamese guard, was "permitted to take a long rest". In any case, the new general secretary won Hanoi's endorsement by acknowledging Vietnam's role as equal partner in the Cambodian-Vietnamese relationship.
In mid-1981, the KPRP had countrywide party branches in Phnom Penh, in Kampong Som and in the eighteen provincial capitals. Party membership was estimated at between 600 and 1,000, a considerable increase over 1979, but still only a fraction of the number of cadres needed to run the party and the government. In 1981, several of the 18 provinces had only one party member each and Kampong Cham, the largest province with a population of more than 1 million, had 30 regular members, according to Cambodia specialist Ben Kiernan.
Since 1 February 1983, Say Phouthang (secretary and head of the Party Commission for Inspection), Chan Sy and Bou Thang occupied respectively the third, fourth and fifth positions in the Politburo. On 1 May 1985, Hun Sen was ranked fourth, just behind Heng Samrin, Chea Sim and Say Phouthang.
The party held its Fifth Party Congress from 13 to 16 October 1985 to reflect on the previous five years and to chart a new course for the next several years. The party's membership had increased to 7,500 regulars (4,000 new members joined in 1985 alone). The party had an additional pool of 37,000 "core" members from which it could recruit tested party regulars. There were only 4,000 core members in mid-1981. According to General Secretary Heng Samrin's political report, the KPRP had twenty-two regional committees and an undisclosed number of branches, circles and cells in government agencies, armed forces units, internal security organs, mass organizations, enterprises, factories and farms. The report expressed satisfaction with party reconstruction since 1981, especially with the removal of the "danger of authoritarianism" and the restoration of the principles of democratic centralism and of collective leadership. However, it pointed out "some weaknesses" that had to be overcome. For example, the party was "still too thin and weak" at the district and the grass-roots levels. Ideological work lagged and lacked depth and consistency; party policies were implemented very slowly, if at all, with few, if any, timely steps to rectify failings; and party cadres, because of their propensities for narrow-mindedness, arrogance and bureaucratism were unable to win popular trust and support. Another major problem was the serious shortage of political cadres (for party chapters), economic and managerial cadres and technical cadres. Still another problem that had to be addressed "in the years to come" was the lack of a documented history of the KPRP. Heng Samrin's political report stressed the importance of party history for understanding "the good traditions of the party".
The report to the Fifth Congress noted that Heng Samrin's administration, in coordination with "Vietnamese volunteers", had destroyed "all types" of resistance guerrilla bases. The report also struck a sobering note: the economy remained backward and unbalanced, with its material and technical bases still below pre-war levels and the country's industries were languishing from lack of fuel, spare parts and raw materials. Transition toward socialism, the report warned, would take "dozens of years".
To hasten the transition to socialism, the Fifth Congress unveiled the PRK's First Plan, covering the years 1986 to 1990. The program included the addition of the "private economy" to the three sectors of the economy mentioned in the Constitution (the state sector, collective sector and the family sector). Including the private economy was necessary because of the "very heavy and very complex task" that lay ahead in order to transform the "nonsocialist components" of the economy to an advanced stage. According to the political report submitted to the congress, mass mobilization of the population was considered crucial to the successful outcome of the First Plan. The report also noted the need to cultivate "new socialist men" if Cambodia were to succeed in its nation-building. These men were supposed to be loyal to the fatherland and to socialism; to respect manual labor, production, public property and discipline; and to possess "scientific knowledge". Heng Samrin's political report also focused on foreign affairs. He recommended that Phnom Penh strengthen its policy of alliance with Vietnam, Laos, the Soviet Union and other socialist countries. He stressed—as Pen Sovan had in May 1981—that such an alliance was, in effect "a law" that guaranteed the success of the Cambodian revolution. At the same time, he urged the congress and the Cambodian people to spurn "narrow-minded chauvinism, every opportunistic tendency, and every act and attitude infringing on the friendship" between Cambodia and its Indochinese neighbors.
The KPRP's three objectives for the period 1986 to 1990 were to demonstrate military superiority "along the border and inside the country" for complete elimination of all anti-PRK activities; to develop political, military and economic capabilities; and to strengthen special relations with Vietnam as well as mutual cooperation with other fraternal countries. Before Heng Samrin's closing address on 16 October, the 250 party delegates to the congress elected a new Central Committee of 45 members (31 full members and 14 alternates). The Central Committee in turn elected Heng Samrin as general secretary, a new Political Bureau (nine full members: Heng Samrin, Chea Sim, Hun Sen as Second Secretary, Say Phouthang, Bou Thang, Chea Soth, Men Sam An, Math Ly, Ney Pena and two alternates), a five-member Secretariat (Heng Samrin, Hun Sen, Bou Thang, Men Sam An and Ney Pena) and seven members of the Central Committee Control Commission.
After the Fifth Congress, the party's organizational work was intensified substantially. The KPRP claimed that by the end of 1986 it had more than 10,000 regular members and 40,000 candidate members who were being groomed for regular status. As of 1990, members of the Politburo were Heng Samrin (General Secretary), Chea Sim, Hun Sen, Chea Soth, Math Ly, Tea Banh, Men Sam An, Nguon Nhel, Sar Kheng, Bou Thang, Ney Pena, Say Chhum and alternate members included Sing Song, Sim Ka and Pol Saroeun. Members of the Secretariat were Heng Samrin, Say Phouthang, Bou Thang, Men Sam An and Sar Kheng.
Hun Sen's leadership (1991–present)
In 1991, the party was renamed the Cambodian People's Party (CPP) during a United Nations-sponsored peace and reconciliation process. The Politburo and the Secretariat were merged into a new Standing Committee, with Chea Sim as president and Hun Sen as vice-president. Despite its roots in socialism, the CPP was not ideologically blind. In fact, it has always adopted a pragmatic approach to protect and promote the interests of the nation and the party. For instance, the CPP played an indispensable role in the Cambodian peace negotiation process, which led to the signing of the Paris Peace Accords on 23 October 1991 and the creation of the second Kingdom of Cambodia. CPP leaders have been among the most reformist in steering Cambodia towards a free market economy and regional integration.
Under the leadership of the CPP, Cambodia has been transformed from a war-torn country to a lower-middle-income economy in 2016. It aims to turn Cambodia into a higher-middle-income country by 2030 and high-income country by 2050. Ideologically, an increasing number of CPP senior leaders claim that the Cambodian ruling party has adopted a centrist democracy. They believe that it is the middle path between extreme capitalism and extreme socialism, with the emphasis on the values and principles of social market economy in which free market economy goes hand in hand with social and environmental protection, and the promotion of humanism guided by Buddhist teaching.
Having said that, academics such as John Ciorciari have observed that the CPP still continues to maintain its communist-era party structures and that many of its top-ranking members came from the KPRP. Also, despite Hun Sen being only the deputy leader of the party until 2015, he had de facto control of the party. Prime Minister Hun Sen has continued to lead the party to election victories after the transition to democracy. It won 64 of the 123 seats in the National Assembly in the 1998 elections, 73 seats in the 2003 elections and 90 seats in the 2008 elections, winning the popular vote by the biggest margin ever for a National Assembly election with 58% of the vote. The CPP also won the 2006 Senate elections. The party lost 22 seats in the 2013 elections as the opposition made gains. Since 2018, the party commands all 125 seats in the National Assembly and 58 of 62 seats in the Senate. Hun Sen, the Prime Minister of Cambodia, has served as the party's President since 2015.
The PRK in power (1979–1992)
- Heng Samrin: General Secretary of the KPRP (1981–1991); Chairman of the Revolutionary Council (later the Council of State) (1979–1992)
- Chea Sim: Minister of the Interior (1979–1981); President of the National Assembly (1981–92); Chairman of the Council of State (1992–1994)
- Pen Sovan: Minister of Defense (1979–1981); General Secretary of the KPRP (1979–81); Prime Minister (1981)
- Hun Sen: Minister of Foreign Affairs (1979–1986; 1987–1990); Deputy Prime Minister (1981–85); Prime Minister (1985–1993)
- Chan Sy: Minister of Defense (1981–1982); Prime Minister (1981–1984)
- Say Phouthang: Vice President of the State Council (1979–1993)
- Chea Soth: Minister of Planning (1982–1986); Deputy Prime Minister (1982–1992)
- Bou Thang: Deputy Prime Minister (1982–1992); Minister of Defense (1982–1986)
- Math Ly: Vice President of the National Assembly
- Kong Korm: Minister of Foreign Affairs (1986–1987)
- Hor Namhong: Minister of Foreign Affairs (1990–1993)
List of party leaders
| Leader | Took office | Left office | Time in office | Other positions held |
|---|---|---|---|---|
| Pen Sovan | 5 January 1979 | 5 December 1981 | 2 years, 334 days | Prime Minister (1981); Minister of Defence (1979–1981) |
| Heng Samrin | 5 December 1981 | 17 October 1991 | 9 years, 316 days | Chairman of the People's Revolutionary Council (1979–1992) |
| Chea Sim | 17 October 1991 | 8 June 2015 | 23 years, 234 days | Chairman of the National Assembly (1981–1993); Chairman of the Council of State (1992–1993); President of the Senate (1999–2015) |
| Hun Sen | 20 June 2015 | Present | 4 years, 215 days | Prime Minister (1985–present) |
Organization

The party is headed by a 34-member Permanent Committee, commonly referred to as the Politburo (after its former Communist namesake). The current members are (with their party positions in brackets):
- Hun Sen (Chairman)
- Heng Samrin (Honorary Chairman)
- Sar Kheng (Deputy Chairman)
- Say Chhum (Chairman of the Standing Committee)
- Say Phouthang
- Bou Thang
- Tea Banh
- Men Sam An
- Nguon Nhel
- Ney Pena
- Sim Ka
- Ke Kim Yan
- Pol Saroeun
- Kong Som Ol
- Im Chhun Lim
- Dith Munty
- Chea Chanto
- Uk Rabun
- Cheam Yeap
- Ek Sam Ol
- Som Kim Suor
- Khuon Sudary
- Pen Pannha
- Chhay Than
- Hor Nam Hong
- Bin Chhin
- Keat Chhon
- Yim Chhay Ly
- Tep Ngorn
- Kun Kim
- Meas Sophea
- Neth Savoeun
Election results

National Assembly election
| Election | Seats won | Position | Notes |
|---|---|---|---|
| 1981 | 117 / 117 | 1st | New; KPRP (sole legal party) |
| 1993 | 51 / 120 | | |
| 1998 | 64 / 122 | | |
| 2003 | 73 / 123 | | |
| 2008 | 90 / 123 | | |
| 2013 | 68 / 123 | | |
| 2018 | 125 / 125 | | |

Commune council elections

| Election | Commune chiefs won | Commune councillors won |
|---|---|---|
| 2002 | 1,598 / 1,621 | 7,552 / 11,261 |
| 2007 | 1,591 / 1,621 | 7,993 / 11,353 |
| 2012 | 1,592 / 1,633 | 8,292 / 11,459 |
| 2017 | 1,156 / 1,646 | 6,503 / 11,572 |

Senate elections

| Election | Seats won |
|---|---|
| 2006 | 45 / 57 |
| 2012 | 46 / 57 |
| 2018 | 58 / 58 |

See also
- Kampuchean United Front for National Salvation
- Modern Cambodia
- People's Republic of Kampuchea
- Politics of Cambodia
Notes

- Niem, Chheng (26 June 2019). "CPP set to mark anniversary, vows to maintain public trust". The Phnom Penh Post. Retrieved 26 June 2019.
- Ven, Rathavong (5 June 2018). "CPP determined to maintain Kingdom's peace and development". Khmer Times. Retrieved 26 June 2019.
- Prak, Chan Thul (2 February 2018). "Cambodian government criminalizes insult of monarchy". Reuters. Retrieved 21 June 2019.
- Hul, Reaksmey (27 October 2018). "Hun Sen, Former Opposition Leader in Row Over 'Loyalty to Royals'". Voice of America. Retrieved 21 June 2019.
- Quackenbush, Casey (7 January 2019). "40 Years After the Fall of the Khmer Rouge, Cambodia Still Grapples With Pol Pot's Brutal Legacy". TIME. Retrieved 7 December 2019.
- "Khmer People's Revolutionary Party (KPRP)". Global Security. 6 February 2012. Retrieved 30 July 2019.
- Diamond, Larry (April 2002). "Elections Without Democracy: Thinking About Hybrid Regimes" (PDF). Journal of Democracy. 13 (2): 31, 32. Retrieved 27 January 2014.
- McCargo, Duncan (October 2005). "Cambodia: Getting Away with Authoritarianism?" (PDF). Journal of Democracy. 16 (4): 98. doi:10.1353/jod.2005.0067. Retrieved 27 January 2014.
- Hughes, Caroline (January–February 2009). "Consolidation in the Midst of Crisis" (PDF). Asian Survey. 49 (1): 211–212. doi:10.1525/as.2009.49.1.206. ISSN 1533-838X. Retrieved 27 January 2014.
- Khorn, Savi (11 June 2019). "Ministry: Councillors to be appointed by next Monday". The Phnom Penh Post. Retrieved 17 June 2019.
- "Report on the Commune Council Elections – 3 February 2002" (PDF). comfrel.org. Committee for Free and Fair Elections in Cambodia (COMFREL). March 2002. Retrieved 4 September 2018.
- "Final Assessment and Report on 2007 Commune Council Elections" (PDF). comfrel.org. Committee for Free and Fair Elections in Cambodia (COMFREL). 1 April 2007. Retrieved 4 September 2018.
- "Final Assessment and Report on 2012 Commune Council Elections" (PDF). comfrel.org. Committee for Free and Fair Elections in Cambodia (COMFREL). October 2012. Retrieved 4 September 2018.
- "Final Assessment and Report on 2017 Commune Council Elections" (PDF). comfrel.org. Committee for Free and Fair Elections in Cambodia (COMFREL). October 2017. Retrieved 4 September 2018.
Bibliography

- Guo, Sujian (2006). The Political Economy of Asian Transition from Communism. Ashgate Publishing, Ltd. ISBN 0754647358.
"date": "2020-01-22T07:04:34",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9299405813217163,
"score": 2.78125,
"token_count": 5185,
"url": "https://en.wikipedia.org/wiki/Cambodian_People%27s_Party"
} |
Eragrostis curvula is usually a long-lived perennial grass, but it is sometimes an annual plant. It is variable in appearance, and there are many different natural and cultivated forms.
In general, it forms tufts of stems up to 1.9 metres (6.2 ft) tall. The tufts may reach a diameter of 38 centimetres (1.25 ft).
The grass grows from a thick root network. Plants have been noted to have roots penetrating over 4 metres (13 ft) deep in the soil and 3 metres (9.8 ft) laterally. The roots can grow 5 centimetres (2.0 in) per day. The first root to grow into the soil from a seedling can send out up to 60 small rootlets per inch. The dense root system forms a sod.
The drooping leaves of the grass are up to 65 centimetres (2.13 ft) long but just a few millimeters wide, and they may have rolled edges. The inflorescence is a panicle with branches lined with centimeter-long spikelets. Each spikelet may contain up to 15 flowers. One panicle may produce 1000 seeds. Cultivated plants may produce two crops of seed per year. The plant self-fertilizes or undergoes apomixis, producing seed without fertilization.
This grass is valuable as a forage for livestock in Africa, its native range. There are many ecotypes. Several of these ecotypes were collected and introduced in the United States as cultivars. The grass was first planted in the United States in Stillwater, Oklahoma, in 1935. It was good for livestock, and its massive root network made it a good plant for erosion control.
It spread quickly as it was planted for ornamental purposes. It reached New York in the 1960s and in the 1970s and 80s it was planted alongside many highways such as the Long Island Expressway. Today it occurs as an invasive species in wild habitat from the southwestern United States to the East Coast. It can be found in woodlands, chaparral, prairie, grassland, and disturbed areas. It is tolerant of very acidic and very basic soils; it grows easily in mine spoils. This species may hybridize with other Eragrostis, such as Eragrostis caesia, E. lehmanniana, and E. planiculmis.
Cultivars of this grass include 'South African Robusta Blue', 'Witbank', 'Ermelo', 'Kromarrai', 'American Leafy', and 'Renner'. Cultivars may be selected for yield, palatability for livestock, and drought resistance. It is planted along waterways in Sri Lanka and mountainsides in Japan, and it is used for oversowing fields in Argentina. In the United States it is often planted alongside Korean lespedeza. It is planted as a nurse crop for sericea lespedeza, coastal panic grass, and switchgrass.
It is an invasive species in some regions, such as parts of the United States and Victoria and other Australian states. It is aggressive and can crowd out native plants. Its drought resistance helps it to survive in dry environments.
- Gucker, Corey L. (2009). Eragrostis curvula. In: Fire Effects Information System, [Online]. U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory. Retrieved 12-22-2011.
- Ncanana, S., et al. (2005). Development of plant regeneration and transformation protocols for the desiccation-sensitive weeping lovegrass Eragrostis curvula. Plant Cell Rep 24 335-40. Retrieved 12-22-2011.
- Halvorson, W. L. and P. Guertin. (2003). USGS Weeds in the West project: Status of Introduced Plants in Southern Arizona Parks. Archived 2012-04-26 at the Wayback Machine USGS. Retrieved 12-22-2011.
- Eragrostis curvula. FAO Plant Profile. Retrieved 12-22-2011.
- Eragrostis curvula. USDA NRCS Plant Fact Sheet. Retrieved 12-22-2011.
- Parsons, W. T. and E. G. Cuthbertson. Noxious weeds of Australia. Csiro Publishing 2001.
- Eragrostis curvula. USFS Weed of the Week. Retrieved 12-22-2011.
- Eragrostis curvula. Purdue University Center for New Crops and Plants Products. Retrieved 12-22-2011.
"date": "2020-01-22T07:11:01",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8609380125999451,
"score": 3.859375,
"token_count": 981,
"url": "https://en.wikipedia.org/wiki/Eragrostis_curvula"
} |
flowers round in red, white, or blue. To all artists this is repulsive, and whence could have originated such a barbarous custom (I will not call it taste) is a matter of wonder. Certainly not from Paris, where flowers are one of the necessaries of life; nor London, where they are used more in garden or hothouse decorations than agrémens for the drawing-room. The true artist will not degrade art by following "the fashion." This may suit the modes of millinery, but Art should not wear the paint or mince the gait of Fashion.
MATERIALS ESSENTIAL FOR IMITATING FLOWERS IN WAX.
A pair of scissors, light and thin, such as used by surgeons, are the best adapted for the purpose; they should be thin in the blades and rather loose in the rivets, so as to cut easily round the paper pattern; a cup to hold water; a pallet; three or four steel pins, with bead heads of different sizes; six or eight bristle brushes; two or three small sable pencils; three rings of green wire of different thicknesses; two wooden molds for forming bell-shaped flowers, such as the Lily of the Valley or Stephanotas; a small quantity of gum arabic dissolved in pure water; some white wax in sheets of a thin texture, also some of the extra thick or double wax; a few
"date": "2020-01-22T05:58:20",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9370413422584534,
"score": 2.625,
"token_count": 298,
"url": "https://en.wikisource.org/wiki/Page:The_Art_of_Modeling_Flowers_in_Wax.djvu/15"
} |
The relationship between the earlier settlers and later Americans and the Native inhabitants of this country is an ongoing topic that we revisit in the texts we study in class. In our class discussions we have noted the use of stereotypes when referring to Native Americans – the term ‘Native American’ itself in its homogeneous application, the ‘Noble Savage’ and ‘Wise Chief’, the ‘Indian Princess’ and the ‘Squaw’, and the barbarous ‘Savage’. We have also discussed how Native American individuals themselves seemed to purposely play into these stereotypes, knowing that, unless they appeared to fulfill the preconceived notions of the white settlers and early Americans, there was a greater chance of their protests and pleas being ignored (see the post over Red Jacket’s speech for more information on this).
While many of the founders of the nation practiced Deist principles regarding religion, Christianity was still the dominant religion and touchstone for most Americans. The conflict between Puritan ideals and the Catholics and Quakers eventually shifted into a conflict between Protestants and all other religions (even other sects of Christianity) by the time of expansionism. In the early days of settlement, the conversion of the Native American was seen as a vital step to ‘civilizing’ the new world (as we discussed in our readings of Mary Rowlandson’s Captivity Narrative and Ben Franklin’s ‘Notes Concerning the Savages’), and as America set her eyes westward to expand, so the need to convert and assimilate the Native people of the American west to Christianity became another vital step in expansion.
One of the earliest accounts of this attempt at conversion took place at The Foreign Mission School. As we discuss and analyze the writings of Native American and Hawaiian students of this school, it is important to have a deeper understanding of its history and historical context.
The Foreign Mission School in Cornwall, Connecticut was founded with the plan that it would draw young men from world cultures, educate them, convert them to Christianity, and then send them back to their native lands to spread their newfound religion. Click here to listen to the podcast episode over ‘The Heathen School’.
And click here to read and listen to a recent interview with the author of the new book “The Heathen School: A Story of Hope and Betrayal in the Early Republic” and read a short excerpt from the book.
We will be examining the letters of two Cherokee students at The Foreign Mission School, David Brown and Elias Boudinot, to a Swiss baron who wanted to fund the school. These letters were written at the insistence of the school’s principal, who claimed that the letters were the students’ own writing except for the changing of “a very few words”.
You will be working together in small groups to conduct a rhetorical analysis of the letters, focusing on the syntax, diction and rhetorical devices used by the students of the mission school to achieve their purpose of securing further funding for the school. Be prepared to discuss as a class the effectiveness of the authors’ writing during the time period and compare that to its effectiveness for a modern readership. Also be able to discuss the reliability of the letters as primary source documents, and cite specific evidence from the text that adds to or detracts from their credibility.
*Note: If you are interested in researching or learning more about the issue of religious tolerance in America, this article from The Smithsonian can provide a jumping off point for more information – Click here for the Smithsonian article.
- How does the school attempt to achieve its purpose of securing further funding?
- How would you rate the effectiveness of the author’s writing during the time period and compare that to its effectiveness to a modern readership?
- Cite specific evidence from the text that adds to or detracts from its credibility.
"date": "2020-01-22T05:47:17",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9603776335716248,
"score": 3.453125,
"token_count": 804,
"url": "https://englishwithmrspierce.com/2017/02/01/the-foreign-mission-school-and-religious-tolerance-in-america-3/"
} |
This wonderful invention has helped generations of businesses and patrons ever since 1888. What made this invention such a success? Was its original value in keeping out horses (as some legends suggest), or was it purely the benefit of people exiting and entering simultaneously that made this invention so popular? Who knows, but either way, revolving doors have taken society, businesses and governments by storm (in a very quiet way).
You might think revolving doors have not changed in the last 100 years, and you’d be wrong. As the world has changed and science and technology have developed, the additional features and technologies integrated into this type of door have evolved significantly too (companies like Advance Access have done revolutionary work with glass revolving doors in recent years).
What’s the history behind revolving doors?
As mentioned, revolving doors can be traced back to the 1880s. In Germany, a patent was granted in 1881 to H. Bockhacker, and in America in 1888 to T. van Kannel. The patent papers describe the goals of revolving doors as:
- Having a noise-free operation.
- Minimising collisions between people entering and exiting.
- Limiting the effects of the weather on buildings (a revolving door can’t be blown open, and rain or snow can’t be forced inside by wind).
The original doors built on the premises above have now very much evolved into the modern equipment that we see across the globe, including buildings and offices in London (even coworking spaces). Whether they’re being used as metal security doors, as extra space for marketing, or as turnstiles that enable better flow into sports venues or theme parks, revolving doors provide some core benefits that are often overlooked.
Creating a strong impression
Glass revolving doors make an impression on anyone entering your building. Revolving glass doors can make your structure seem formidable as well as ultra-stylish, showing that your organisation is one to be taken seriously and respected. Revolving glass doors can also be customised with colour, posters and other marketing assets, enabling your doors to act as brand ambassadors to anyone who walks in.
Light and thus glass also never go out of fashion, so this is a long-term investment that you likely won’t need to upgrade anytime soon.
Reducing energy consumption
A set of revolving doors can help to significantly reduce energy costs, and thus utility bills, for any organisation that owns or operates a building or business premises. This is because:
- A sealed door won’t allow drafts to enter the building, so it’s easy to maintain the temperature inside without the additional heating or cooling that more traditional doors make necessary (in hotter climates, this reduces the use of air conditioning).
- Glass doors let in far more light than wooden doors, so your entrance hall will be naturally bright, meaning you’ll require fewer light fittings and less energy to light it up.
- Revolving doors also reduce the amount of dust, mud and debris that enters the building from outside, acting as a sort of airlock for larger particulates! This will reduce the need for cleaning, reducing the electrical cost associated with cleaning appliances.
Overall revolving doors can enable you to significantly reduce your energy consumption within the lobby of a building.
Enabling a safer environment
Glass doors are an effective way of managing the flow of foot traffic in and out of a building, and in doing so they offer several safety advantages over more traditional doors, such as:
- Delaying the entrance of an individual, allowing security to quickly check the individual and potentially meet them at the door if they are determined to be a possible threat.
- Allowing people in and out of the building at a manageable pace, mostly eliminating the problem of people getting stuck in the doorway when trying to enter or exit the building.
In addition to these advantages, modern revolving doors let you control the speed of rotation, so intruders can’t storm in and will only be able to enter with one or two others. Your new decorative feature just turned into a safety and security perk.
It’s also worthwhile to note that while they promote flow, they can be hindrances during emergencies. For this reason, it’s a good idea to have these doors to be flanked by normal doors or be easily collapsible (in many jurisdictions this is also legally required for fire and health/safety reasons). Now, when throngs of people need to exit during a fire, they won’t be trapped inside because of a revolving door that only moves at a certain pace.
A better customer experience
If you’re in retail, creating a positive customer experience is essential. What better way to help people, revolving doors enhance the experience for customers by allowing them to easily and seamlessly enter or exit a store (even while carrying many bags). Additionally, for differently abled customers, such as those with crutches or even a wheelchair it’s easier to navigate revolving glass doors compared to a normal door.
Get the most out of your revolving doors
Aside from the energy-saving and experience benefits, revolving doors can be a great addition to any commercial or office building. Finally, when talking to your supplier about revolving doors, it’s worth asking some key questions and making a few requests:
- Does it seal well to prevent leakages?
- How often should you replace or check these seals?
- Request signage (or obtain it elsewhere) to remind patrons to use your new revolving door.
- Some people avoid revolving glass doors, especially if they’re not automated, because they have to expend energy to use them.
Hopefully, you’ve learnt by now that not all doors are created equal and that alongside more traditional options, sliding doors can offer some major benefits to any organisations or business running a commercial building or non-commercial building. | <urn:uuid:2bd9adf8-206f-42ae-839a-c9eb07d50244> | {
"date": "2020-01-22T05:28:45",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9508593678474426,
"score": 2.96875,
"token_count": 1228,
"url": "https://entrepreneurhandbook.co.uk/4-benefits-revolving-doors-offer-over-more-traditional-solutions/"
} |
The Infrared Speckle Interferometer
The first version of the Infrared Speckle Interferometer was developed by ESO in late 1979, for use with the ESO 3.6-metre telescope at La Silla Observatory. During the first year of operation the prototype instrument, in a preliminary state, made it possible to record sufficient results to prove its feasibility. The first astrophysically usable results from this instrument were obtained in 1981.
The Infrared Speckle Interferometer was based on the ESO 3.6-metre Infrared Photometer using a specific high frequency boosted preamplifier, additional apertures and the f/35 wobbling secondary mirror in a slit-scanning mode. These additions were developed at ESO with the joint support of the Lyon Observatory, and the Meudon Observatory in France.
The instrument was based on the speckle imaging technique, which in the infrared used short-exposure one-dimensional (1D) images to “freeze” the variation of atmospheric turbulence, theoretically increasing the resolution up to the diffraction limit of the telescope. Exposures were obtained by digitally converting the signal in perfect synchronisation with the scanning, the rate of which could be tuned over a wide range so that it was always “freezing the seeing”.
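To illustrate the underlying idea (many short exposures are combined in Fourier space, so the random atmospheric blur averages out while the object's diffraction-limited signal survives), the following is a minimal modern NumPy sketch of Labeyrie-style 1D speckle processing. It is a toy illustration, not the instrument's actual HP 1000 data-reduction software:

```python
import numpy as np

def mean_power_spectrum(scans):
    # scans: one short-exposure 1D scan per row, each taken fast enough
    # to "freeze" the seeing.
    spectra = np.abs(np.fft.rfft(scans, axis=1)) ** 2
    return spectra.mean(axis=0)  # speckle noise averages; object signal remains

def object_power_spectrum(target_scans, reference_scans, eps=1e-12):
    # Dividing by the same statistic measured on an unresolved reference star
    # removes the atmosphere/telescope transfer function (Labeyrie's method).
    return mean_power_spectrum(target_scans) / (mean_power_spectrum(reference_scans) + eps)
```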
The Infrared Speckle Interferometer had optimised sensitivity, dynamics and frequency response for observations from 2200 nm to 4800 nm, with various slits adapted to the resolutions at these wavelengths. The filter was selected between a set of broad-band photometric filters and two circular variable filters (CVF) — interference filters designed such that the wavelength transmitted varies as a function of the position of the filter. It could achieve the full resolution of the ESO 3.6-metre telescope and store instantaneous 1D images of the objects under study taken in the direction of the scanning. Two observing modes were available to achieve either maximum resolution for bright objects or lower resolution of faint objects. Observing with the maximum resolution mode meant that the adopted slit width and step as well as a fast scanning rate allowed very compact objects to be measured. Observing in imaging mode meant the use of a large slit and a slower scanning rate in order to measure faint objects.
The theoretical limiting magnitudes were 7.7 at 2200 nm, and 4.0 at 4800 nm under good seeing conditions. The performance of the Speckle Interferometer was, for a given seeing, a compromise between sensitivity and resolution, i.e. the wider the slit, the higher the limiting magnitude and the lower the resolution.
The Infrared Speckle Interferometer was remotely controlled through a specific acquisition programme, available on a computer terminal, in a very similar manner to that already used at the ESO 3.6-metre telescope. An online simplified version of the data treatment software was also available to the observer and an offline version developed for HP 1000 systems could be used for the final data reduction process.
This prototype version of the Infrared Speckle Interferometer, which was never available as a general-user instrument was replaced by a new version in 1984 (see The Infrared Specklegraph).
Infrared Speckle Interferometer at the ESO 3.6-metre telescope
This table lists the global capabilities of the instrument. | <urn:uuid:e59102ef-3f62-43f9-9208-98266d341cde> | {
"date": "2020-01-22T04:58:16",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9432973861694336,
"score": 3.28125,
"token_count": 698,
"url": "https://eso.org/public/brazil/teles-instr/lasilla/36/infrared-speckle-interferometer/?lang"
} |
5 Tips to Be Able to Attract More Butterflies Into Your Yard
Butterflies are beautiful insects. They come in all shapes, sizes, and colors. They are wonderful to watch and admire. Unfortunately, a lot of butterflies are dying off. Monarch butterflies have been slowly declining because their main food source, milkweed, is no longer in full abundance. Farmers that produce GMO crops spray the milkweed with pesticides because it is considered a pest plant. This significantly reduces the amount of food monarch butterflies have. Here are some ways to attract more butterflies into your yard, which will help them get more food and have a better survival rate.
How to Attract More Butterflies Into Your Yard
Use Fewer Pesticides
Sometimes pesticides are necessary. You don’t want your garden or yard to be overtaken by pests, but some pesticides will kill butterflies. Do some research about which pesticides will get rid of the pests you are having problems with, but won’t hurt butterflies. There are a lot of natural alternatives to pesticides you can use. These natural alternatives are effective at repelling specific pests and won’t hurt other insects.
The More Sun the Better
Butterflies crave the sun. They absolutely love it. If you want to attract more butterflies into your yard, this is important. Your yard should have plenty of sun for the butterflies to sunbathe in. They love sitting on rocks and getting warm. If your yard already has a lot of shade, consider removing old trees or bushes to create more sunlight.
Butterflies are attracted to a lot of colors. They love a yard with pinks, oranges, yellows, reds, or purples. Plant flowers that are bright and attractive and the butterflies will come. You can also make sure that the flowers have flat tops; butterflies like to be able to land right on top of open flowers.
Image Source: Pixabay
Butterflies eat from milkweed. You need to have milkweed in your yard if you want butterflies to frequently come there. However, you need to have the right kind of milkweed. Check and see what kind of milkweed grows in your region. If you start growing a milkweed that the butterflies in your region are unfamiliar with, they probably won’t eat it.
Create a Butterfly Retreat
Create a place that butterflies love going to, and they’ll continue to go there. Use all of these tips to create a yard that butterflies enjoy coming to. Have brightly colored flowers, some milkweed they can eat from, and sun they can rest in. They’ll love it! They also like to sit in very shallow pools of water. You can put out a small dish with water in it for the butterflies to get a drink from and cool off. If you create a wonderful place for the butterflies to relax in, they’ll definitely be back.
Butterflies are so wonderful and beautiful to watch. Having these insects in your yard will benefit them, and benefit you too. You’ll give them food, water, and a place to rest, and they’ll repay you by gracing your yard with their beauty. Try these tips and tricks to attract more butterflies into your yard and see if your yard can become a butterfly habitat.
Image Source: Pixabay
"date": "2020-01-22T05:36:38",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9503105282783508,
"score": 2.78125,
"token_count": 696,
"url": "https://everythingbackyard.net/5-tips-to-be-able-to-attract-more-butterflies-into-your-yard/"
} |
It is especially helpful when it comes to printing and framing your work. Following our steps here takes your image from any size into the exact ratio you need for any purpose.
[Note: ExpertPhotography is supported by readers. Product links on ExpertPhotography are referral links. If you use one of these and buy something, we make a little bit of money. Need more info? See how it all works here.]
What Are Ratios?
A ratio is the relationship of the measurements of a particular object. For us, it is the Height and Width of our photographs.
Every image has these two attributes. You can decide what these numbers are, either when you capture the shot or during post-production editing.
These numbers are significant in a few ways. Firstly, it comes down to the style of print you want to create. A panoramic image, for example, is supposed to be much wider than it is high.
Secondly, it is especially important when it comes to framing your print. You might have the frame and want to fit an image to it.
What makes it confusing is that many framing companies use a different ratio guide than camera manufacturers do.
We suggest sticking with the preselected image and ratio size in-camera. That way you get the biggest, high-quality image possible. Then, find the frame to fit it. It is easy to have a frame made for a specific size. It isn’t necessarily cheap, however.
The ratios can be used for both landscape and portrait orientated images, and are designed to focus on the interesting elements in your scene.
Find out below where and why you should use each one.
How to Change Aspect Ratio in Lightroom
Lightroom excels in many areas, including organization and non-destructive editing. Another thing it does really well is changing the aspect ratio of the images you take.
Changing it in this editing program is the easiest way. Adobe Photoshop makes it a little more complicated, even though you can use it for the same outcome.
First, open the image you want to change the aspect ratio of. Head over to the Develop module.
Next, you need to click on the Crop tool, which looks like a square made from ‘marching ants’.
Inside the drop-down area, you’ll see Aspect and Angle. We will be using the Aspect section.
Clicking on the two arrows next to ‘Original’ will drop down a menu. This allows you to change the aspect ratio of your image with some common, preset ratios.
Original is the aspect ratio that the image was imported in at.
As Shot is the aspect ratio you captured the image at.
Custom allows you to drag and drop to change the ratio of the scene.
Personally, I feel this image would look better with an aspect ratio of 16:9. Clicking on the preset ratio of 16:9 will create a grid over your image, showing you a preview of the new ratio.
You are free to move this grid around the image to select the best composition. You can even click and drag on the blocks to extend or shrink the aspect ratio.
Once you are happy with your selection, click on Done.
This is the final image. It started out as 3:2, and finished with an aspect ratio of 16:9.
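If you want to sanity-check the numbers, the arithmetic behind such a crop is simple. Here is a small, hypothetical Python helper (the function name and the sample pixel dimensions are illustrative, not anything Lightroom exposes):

```python
def largest_crop(width, height, ratio_w, ratio_h):
    # Largest crop with aspect ratio ratio_w:ratio_h that fits inside width x height.
    target = ratio_w / ratio_h
    if width / height >= target:
        # Image is wider than the target ratio: the height limits the crop.
        return round(height * target), height
    # Image is taller than the target ratio: the width limits the crop.
    return width, round(width / target)

# A 3:2 image of 6000 x 4000 pixels cropped to 16:9 keeps 6000 x 3375 pixels.
print(largest_crop(6000, 4000, 16, 9))  # -> (6000, 3375)
```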
Creating a Custom Aspect Ratio in Lightroom
To make a custom aspect ratio for your image, open the image as you would normally. Go to the Develop module and click on the Crop button like we did the first time.
In the drop-down menu, select Enter Custom…
An Enter Custom Aspect Ratio box will open allowing you to enter in the new aspect ratio. You can enter values of up to three decimal places.
Once you have found the aspect ratio you are looking for, click on Done.
Check out our post on how to choose the best aspect ratio for landscape photography next! | <urn:uuid:509a019a-69e3-4354-b019-2a82a8d17c69> | {
"date": "2020-01-22T05:14:11",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9354886412620544,
"score": 2.578125,
"token_count": 826,
"url": "https://expertphotography.com/aspect-ratio-in-lightroom/"
} |
Language is defined as a method of human communication through organized words, either spoken or written. Lateralization is referred to as the localization of functions in the brain, commonly attributed to its left hemisphere and right hemisphere.
The area of the brain that is responsible for both spoken and written language is the Wernicke’s Area in the left hemisphere. Analysis of written words is performed in the temporoparietal cortex as well as in the anterior inferior frontal cortex.
On the other hand, speech production is initiated in the Broca’s Area. This area is assisted by the premotor area in selecting and sequencing speech sounds. The arcuate fasciculus makes it possible for the language information in the Wernicke’s Area to be transmitted to the Broca’s Area to produce speech.
Human split-brain studies have helped develop knowledge about language and lateralization. In split-brain studies, the corpus callosum (a group of nerve fibers connecting the two brain hemispheres) is cut. These studies have proven that the left and the right brain hemispheres have specific language functions.
Naming objects is one of the language-related functions of the left hemisphere. Objects placed in the right visual field are easily recognized by both normal people and split-brain subjects. However, split-brain subjects cannot name objects located in the left visual field, unlike normal subjects. This shows that naming identified objects is a language function of the left hemisphere. Logic, critical thinking, and reasoning are also functions that are dominantly processed in the left hemisphere.
Verbal identification of objects presented in the left visual field is impossible for people who have undergone split-brain surgery. However, they can identify these objects by means of their sense of touch. Some words can also be comprehended through the right hemisphere. The split-brain studies show that the figurative side and context of language are understood via the right hemisphere. In addition, the emotional expression of language is processed in the right hemisphere. Also, music stimulates the right hemisphere more than spoken words do.
Hemispheric lateralization is important in both the parallel processing and sharing of information. While the left hemisphere is mostly concentrated on the interpretation of information through logic and analysis, it coordinates with the right hemisphere which is focused on the interpretation of experience in its totality via synthesis.
In terms of handedness, most people who are right-handed show left-hemisphere language dominance. This means that they tend to be better at logical and critical thinking than at creative and expressive thinking. Contrary to common belief, most left-handed people do not show right-hemisphere language dominance. Rather, 70% of left-handed people still demonstrate left-hemisphere language dominance.
This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.
That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution). | <urn:uuid:cf8a526b-a6a8-42c1-8b27-fbafd9ef9d52> | {
"date": "2020-01-22T06:52:01",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9366791248321533,
"score": 4.28125,
"token_count": 679,
"url": "https://explorable.com/language-and-lateralization"
} |
Being able to get out and about and stay active is really important, particularly as you get older. It enables you to keep doing things you enjoy and stay connected with people.
- Keeping active and taking regular exercise is important and has many benefits, even if you have been inactive for years
- Regular exercise helps to strengthen muscles and improve fitness, balance, stamina and suppleness at any age, as well as reducing joint pain
This all helps you to stay independent and keep doing the things you enjoy, as well as reduce your risk of having a fall.
This section will help you assess your general mobility, physical activity, strength and balance levels and offer appropriate activities and exercises to build/maintain your strength and fitness.
There will also be the opportunity for you to test yourself (which we hope you take) to see how you fare when it comes to strength and balance in comparison with other people. | <urn:uuid:797b87fc-8817-47b6-9f2e-7b7f7aacd7d9> | {
"date": "2020-01-22T05:12:41",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9781827330589294,
"score": 2.8125,
"token_count": 184,
"url": "https://fallsassistant.org.uk/falls-assessment/staying-steady/"
} |
Homeschool Hands-On Science Curriculum
Homeschool Hands-On Science Curriculum, What makes "difficult subjects", such as science, so difficult to teach and to learn? It could be that the concepts aren't quite as easy to learn via rote memorization or by simply reading about them in books. Based on how people learn, science is a subject that is best taught by using hands-on experimentation and relating it to the physical world around us.
Homeschool Hands-On Science Curriculum-Why Is Science So Hard?
Science often gets a bad rap as one of those subjects we need to teach our kids - but since we don't understand it all that well ourselves, we dread teaching it and our kids can't get excited about learning it, either. And that's a real shame, because children love science before any formal schooling takes place. According to the Science Education National Academies Press book, How People Learn, gaining knowledge about science is a natural part of development: "Developmental researchers have shown that young children understand a great deal about basic principles of biology and physical causality, about number, narrative, and personal intent, and that these capabilities make it possible to create innovative curricula that introduce important concepts for advanced reasoning at early ages."
Homeschool Hands-On Science Curriculum Understanding
So we know that young kids can innately understand science... why do they have a difficult time learning it in school? One problem is that homeschool teachers expect to be able to purchase a science textbook, open it up, and start teaching straight from its pages. This presents several problems, however. For one, it's boring. Kids don't get excited about seeing things on the pages of a book. For another, it's ineffective. This type of teaching promotes rote memorization. That's difficult for most of us because we already have so much data in our brains, and years later we usually forget most of that information because it's not tied to other, more important areas of our lives.
Homeschool Hands-On Science Curriculum Teaching
Fortunately, you don't have to thoroughly understand the topics you teach. What if you could learn along with your child? There is a way to do this, and it involves finding a curriculum that supports this type of study. It's referred to as "building block" methodology that introduces students to essential topics first, then builds on that foundation to introduce more and more complex topics. This logical progression allows students to begin understanding science at an early age and even learn college-level science by the time they are in middle school.
Homeschool Hands-On Science Curriculum Importance
Not only should science education use a building block approach, it should serve as more of a guide rather than a listing of dry, hard facts. In order to help students come to the "right" conclusions, an effective science curriculum sets up the scenario, then allows students to explore the concepts in a hands-on manner. This occurs while connecting the new concepts to information they already know, thus allowing them to assimilate science in real life situations. In other words, students form a hypothesis based on science learning, then perform an experiment to prove or disprove the belief. This is how real scientists work and how kids are best able to understand real science.
Homeschool Hands-On Science Curriculum Summary
When you are considering any homeschool science curriculum, look for a depth of understanding over a breadth of knowledge. It's not important to cram a bunch of science "facts" into a student's head; what is important is allowing him or her to explore and have fun with the process of science. Doing science rather than simply memorizing it is what results in true comprehension. | <urn:uuid:13d2a11c-f332-4ec4-8070-8bf53263f760> | {
"date": "2020-01-22T04:25:37",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.953567624092102,
"score": 2.90625,
"token_count": 771,
"url": "https://familyhomeschooler.com/homeschool-hands-on-science-curriculum/"
} |
Esteban, a businessman that is used to manage popular bars, has just bought his first gas station. It is a very old gas station, which has been in the same place for several decades, even before some of us were born. Esteban was then in need of remodeling it and for this he had to apply for a loan.
During his first week of operation, while still filling out the paper work for the loan, Esteban received a visit from the Ministry of Environment and Energy (“MINAE” for its acronym in Spanish). Government officials told him that they had to take water samples from the wells to verify that his gas station was not contaminated. For his misfortune, government officials found high amounts of hydrocarbons outside the storage tanks.
MINAE officials told him that he should conduct further studies to determine the impact and extent of leaks. Esteban had to use all the loan’s money, and more, to pay for the studies and the subsequent remediation.
Like many businessmen, who did not conduct a preliminary study of the sites they wanted to acquire, Esteban had to pay large amounts of money to remedy a pollution he did not cause.
These preliminary studies, which experts call Due Diligence, can be applied both in the purchase and sale of a site. Due Diligence is a process that typically consists of three phases: a qualitative study, a quantitative study, and remediation. Each phase depends on the previous one, so it is not always necessary to perform all of them.
Qualitative analysis, also called Phase I, requires an experts’ exhaustive site visit, in addition to information, photographs and historical data review. In this phase, the consultants are dedicated to finding possible existing or historical sources that could cause contamination in the soil or in the groundwater.
As this phase is qualitative, it is very important that whoever carries it out has prior knowledge of how to conduct the studies and the type of industry to be evaluated. At this stage, it is not possible to ensure whether or not there is contamination, but you can determine the places that may have been affected.
If experts have detected possible sources of contamination, it will be necessary to continue with the quantitative phase. In this phase, through the drilling of groundwater wells and soil borings, soil and groundwater samples are taken to analyze them and determine if there are concentrations of contaminants. The location, quantity and depth of these wells will be directly related to the findings of the first phase.
Finally, if significant concentrations of contaminants are detected in the second phase, either in the soil or in the groundwater, it will be necessary to carry out the remediation phase. This phase will depend on the type and quantity of contaminants found, as well as the type of soil impacted or the use of groundwater. The most widely used remediation technique in Costa Rica is bioremediation; however, it cannot always be used due to the type of soil or contaminant.
The next time you need to buy or sell a land, remember to carry out a Due Diligence with a team of professionals who have extensive experience in the domain. Avoid repeating Esteban’s story. | <urn:uuid:bfe4ec0e-fa43-4009-af0f-9b4dee634031> | {
"date": "2020-01-22T06:22:32",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9717791080474854,
"score": 3.15625,
"token_count": 655,
"url": "https://futurisconsulting.com/anticipate-or-go-bankrupt/"
} |
Asparagus is the Latin name of the lace fern, which in turn comes from the Greek sparassein, ‘rip’. The specific setaceus, from Latin, refers to the fine appearance, like a horsehair, of their leaves.
It is a typically ornamental and shade plant native to South Africa, related to the common asparagus (Asparagus officinalis), a species that has been cultivated as a vegetable for its edible young shoots and also used as a medicinal plant. Several species of wild asparagus, green asparagus in particular, are collected in the Mediterranean region for the same purpose. Another species, Asparagus densiflorus, is also grown in pots in the Real Alcázar.
Francisco Hernández (1517-1587), scholar, humanist and physician of King Philip II, recorded a thin asparagus in Mexico for its astringent properties. Hernandez led a major expedition to America designed to investigate the medicinal properties of plants of the New World. He had to write a list of plants for medicinal use and had to report on how to cultivate them. In turn, Hernandez had to send the Iberian Peninsula those novel plants that did not exist there, besides writing a natural history of the territory. This expedition, born of the will of Philip II to know and exploit the resources of their domains, can be considered as the first one, with a scientific goal that was made in America, prologue to that in the eighteenth century financed by the Bourbon monarchs, in line with the Enlightenment spirit. | <urn:uuid:2b0cf814-ffd4-437b-ab2e-68db8f65da81> | {
"date": "2020-01-22T06:23:28",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.961345374584198,
"score": 3.4375,
"token_count": 325,
"url": "https://gardenatlas.net/garden/palacio-de-las-duenas/species/lace-fern/?filter=all"
} |
Validation is probably one of the most important techniques a data scientist uses, as there is always a need to validate the stability of a machine learning model: how well it would generalize to new data. We need to be sure that the model has picked up most of the patterns in the data correctly and is not picking up too much of the noise; in other words, that it is low on both bias and variance. The goal of this article is to present the concept of cross-validation.
Table of Contents
- What is cross-validation?
- Why is it helpful?
- What is overfitting & underfitting?
- Different Validation strategies
- When to use each of the above techniques?
- In KFold, how many folds to use?
What is cross-validation?
Cross-validation is a model validation technique for assessing how the results of a statistical analysis (model) will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice.
The goal of cross-validation is to define a data set to test the model on during the training phase (i.e. a validation data set) in order to limit problems like overfitting and underfitting, and to get an insight into how the model will generalize to an independent data set. It is important that the validation and training sets be drawn from the same distribution, otherwise it would make things worse.
Why is it helpful?
- Validation helps us evaluate the quality of the model
- Validation helps us select the model which will perform best on unseen data
- Validation helps us avoid overfitting and underfitting
What is overfitting & underfitting?
- Underfitting refers to not capturing enough patterns in the data. The model performs poorly on both the training and the test set.
- Overfitting refers to: a) capturing noise, and b) capturing patterns which do not generalize well to unseen data. The model performs extremely well on the training set but poorly on the test set.
The optimal model performs well on both the train and the test set.
Different Validation strategies
Typically, different validation strategies exist based on the number of splits being done in the dataset.
Train/Test split or Holdout: # groups = 2
In this strategy, we simply split the data into two sets, train and test, so that the samples in the train and test sets do not overlap; if they do, we simply can’t trust our model. That is the reason why it is important not to have duplicated samples in our dataset. Before we make our final model, we can retrain the model on the whole dataset without changing any of its hyperparameters.
But train/test split has one major disadvantage:
What if the split we make isn’t random? What if one subset of our data has only people from a certain state, employees at a certain income level but not others, only women, or only people at a certain age? This will result in overfitting, even though we’re trying to avoid it! It is not certain which data points will end up in the validation set, and the result might be entirely different for different sets. Thus, it is a good choice only if we have enough data.
Implementation in python:
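For instance, a minimal holdout sketch with scikit-learn (the Iris toy dataset stands in for your own features X and labels y):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 20% of the samples as a test set; shuffling makes the split random.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```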
K-fold: # groups = k
As there is never enough data to train a model, removing a part of it for validation poses a problem of underfitting. By reducing the training data, we risk losing important patterns/trends in the data set, which in turn increases the error induced by bias. So what we require is a method that provides ample data for training the model and also leaves ample data for validation. K-fold cross-validation does exactly that.
It can be viewed as a repeated holdout, where we simply average the scores over K different holdouts. Every data point gets to be in a validation set exactly once, and gets to be in a training set k-1 times. This significantly reduces underfitting, as we are using most of the data for fitting, and it significantly reduces overfitting, as most of the data is also being used in a validation set.
This method is a good choice when we have a limited amount of data and we get a sufficiently big difference in quality, or in optimal parameters, between folds. As a general rule, we choose k=5 or k=10, as these values have been shown empirically to yield test error estimates that suffer neither from excessively high bias nor from high variance.
Implementation in python:
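A minimal KFold sketch with scikit-learn, again on a toy dataset:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# 5 folds: each sample is validated on exactly once and trained on 4 times.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kf)
print("per-fold accuracy:", scores)
print("mean accuracy:", np.mean(scores))
```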
Leave one out : # groups = len(train)
It is a special case of KFold in which K is equal to the number of samples in our dataset. This means we iterate through every sample in the dataset, each time using all the other samples as the training set and the single remaining sample as the test set.
This method can be useful if we have too little data and a fast enough model to retrain.
Implementation in python:
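A leave-one-out sketch with scikit-learn; it fits one model per sample, which is why it only suits small datasets and fast models:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)

# One fit per sample: 150 fits for Iris, so cost grows with dataset size.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```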
Extra : Stratification
Usually, when we use train/test split or KFold, we shuffle the data to reproduce a random train/validation split. In that case, different folds may end up with different target distributions. With stratification, we achieve a similar target distribution over the different folds when we split the data.
It is useful for:
- Small datasets
- Unbalanced datasets
- Multiclass classification
In general, for a big, balanced dataset, a stratified split will be quite similar to a simple shuffled (random) split.
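For instance, scikit-learn's StratifiedKFold preserves the class proportions of y in every fold (a sketch on a toy dataset):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)

# Each fold keeps roughly the same class proportions as the full label vector y.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=skf)
print("stratified 5-fold accuracy:", scores.mean())
```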
When to use each of the above techniques?
If we have enough data and we are likely to get similar scores and optimal model parameters for different splits, train/test split is a good option. If, on the contrary, scores and optimal parameters differ for different splits, we can choose the KFold approach, while if we have too little data, we can apply leave-one-out. Stratification helps to make validation more stable, and it is especially useful for small and unbalanced datasets.
In KFold, how many folds to use?
As the number of folds increases, the error due to bias decreases but the error due to variance increases; the computational price goes up too. Obviously, you need more time to compute it, and you would need more memory.
With a lower number of folds, we reduce the error due to variance, but the error due to bias will be bigger. It will also be computationally cheaper.
General advice for big dataset usually k = 3 or k = 5 is a preferred option while in small datasets it is recommended to use Leave one out.
Cross-validation is a very useful tool for a data scientist when assessing the effectiveness of a model, especially for tackling overfitting and underfitting. In addition, it is useful for determining the hyperparameters of a model, in the sense of finding which parameters will result in the lowest test error.
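As an example of this, scikit-learn's GridSearchCV scores every hyperparameter combination by cross-validation and keeps the best one (the small parameter grid below is purely illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Every (C, gamma) pair is scored with 5-fold CV; the best mean score wins.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5)
search.fit(X, y)
print(search.best_params_, "CV accuracy:", search.best_score_)
```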
If you liked this article, please consider subscribing to my blog. That way I get to know that my work is valuable to you and also notify you for future articles.
Thanks for reading, and I am looking forward to hearing your questions :)
Stay tuned and Happy Machine Learning.
Originally published at :https://towardsdatascience.com/cross-validation-70289113a072 on Aug 16, 2018. | <urn:uuid:30333b81-549e-453f-a0eb-8eb9056408c6> | {
"date": "2020-01-22T05:34:50",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9191501140594482,
"score": 3.484375,
"token_count": 1527,
"url": "https://gdcoder.com/importance-of-cross-validation/"
} |
macrophotography popular medias
5 months ago
DescriptionOrb-weaver spiders or araneids are members of the spider family Araneidae. They are the most common group of builders of spiral wheel-shaped webs often found in gardens, fields and forests. "Orb" was previously used in English to mean "circular", hence the English name of the group.
📸 iPhone X <———————————« | <urn:uuid:d5e4d132-a775-4a52-8ac4-d8008b9fc5d5> | {
"date": "2020-01-22T06:15:20",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9311878681182861,
"score": 2.640625,
"token_count": 90,
"url": "https://gramrix.com/tag/macrophotography"
} |
Resilience is the capacity to recover from adversity and return to well-being. Paul Tough, in his book How Children Succeed, explains that even kids who grow up in the most difficult situations of poverty, abuse, neglect, and stress can rise up from the ashes. It may not be the norm for kids of adversity, but with help, they can do it. “The teenage years are difficult for almost every child, and for the children growing up in adversity, adolescence can often mark a terrible turning point, the moment when wounds produce bad decisions. But teenagers also have the ability—or at least the potential—to rethink and remake their lives in a way that the younger children do not.”
Young teenagers who are supported by family and adults who empower them will face life’s challenges with more guts and stamina than those who fly solo. Those who have a strong sense of belonging, hope, and purpose will hold up better in the face of obstacles. Good parenting can transform a child into a happy, healthy, successful young person.
Resilience is not callousness. It is toughness. I think of certain people in my life who exhibit toughness when it is necessary and sweet sensitivity when it is called for. I call it “kind strength.” Kids can learn to be strong and kind, but it will not come naturally. Parents must model that dichotomy, and it should be directly taught to their children.
I hope my son is kind enough to recognize when a classmate is being bullied and strong enough to help the kid deal with the bully. And if he fails in helping the bullied kid, then I hope he is resilient enough to seek help from an adult and deal with the consequences. It takes confidence, empathy, courage, and inner toughness to be that kind of person. It may take a decade to get to that place, but that is my hope for him.
One day, our children will be on their own. They will live on their own with their own friends, their own responsibilities, their own troubles. Equipping them for independence requires us to guide them, not rescue them, as they handle adversity. We’ll have to help them help themselves.
Pain is Healthy
Dr. Paul Brand, in his brilliant book The Gift of Pain, wrote about the need to manage pain, not just avoid it, as we grow up. “Modern parents lavish sympathy every time their son or daughter suffers any slight discomfort. Subliminally or overtly, they convey the message that ‘Pain is bad.’”
So, what should we do to help kids? Should we make their lives miserable to toughen them up? No. Should we place artificial trials in their lives to prepare them for the real world? No. Life is tough enough. It will kick them around in time, if we just let it.
They will get plenty of practice dealing with trouble if we would just stop rescuing them at every turn. Your daughter will forget her lunch and homework. So be it. Let her deal with it, and then help her communicate her feelings. Listen to her and help her learn. But don’t take off from work to run home to get her lunch and homework and deliver it to her locker. Maybe what she really needs is a zero in the grade book and to miss a meal. Then, when her next trial comes and she fails to make the volleyball team, she’ll be a little more able to deal with the pain. Again, it is wise to listen to her and help her communicate her feelings. But don’t go meddling in her trouble, trying to fix it all up for her. Don’t call the coach demanding to know why your daughter was so unfairly assessed. Instead, teach your daughter something greater: she is loved by God, loved by her family, and has talent that will shine in some other area. A loss is followed by a gain. Those are life lessons so valuable that it costs pain and suffering to learn them.
There is great value in pain. Saint Augustine wrote, “Everywhere a greater joy is preceded by a greater suffering.” Let your kids experience a greater joy by allowing them to work through some pain. Let them make their mistakes while they are in our care so that we can love and support them through it. | <urn:uuid:2c6da222-ed43-44e8-9e32-3dd1353b42b0> | {
"date": "2020-01-22T04:47:54",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9746818542480469,
"score": 3.078125,
"token_count": 893,
"url": "https://growingupwell.org/2014/03/04/raising-resilient-children/"
} |
by April Reinhardt
(last updated September 5, 2008)
While at my desk last week, assembling booklets for a board meeting, my work involved jogging piles of papers into uniform stacks. Invariably while jogging papers, I develop paper cuts on the inside of my forefingers and on the web of skin between my forefingers and thumb. While not a serious laceration, paper cuts sting intensely and can bleed a little or a lot, depending on how deep the cut.
A paper cut is just one example of a simple cut. Scissors, box cutters, broken glass, three-pronged metal fasteners, safety pins, cat claws, kitchen knives, tools—basically, any sharp instrument or object can cause a minor cut. There are a few simple rules when caring for simple cuts:
Some cuts are not minor, although they appear simple at first. While washing dishes with Dad after a large Thanksgiving meal, I sliced the outside of my right pinky finger with a small, sharp knife. I didn't feel the cut at first, but saw the blood washing down the drain with the rinse water. I immediately grabbed the clean dish towel and clamped it over my finger, holding it tightly for several minutes. Although the cut was less than one-half inch long, it was deep, bled profusely, and a half-hour later required twelve stitches in the emergency room.
Where simple cuts are concerned, there are times when you should not hesitate to seek medical attention. If you've been clawed or bitten by an animal, acquire a cut from a rusty instrument, or can't remove debris from a cut, see a doctor immediately to ascertain if a tetanus shot is required.
"date": "2020-01-22T06:34:44",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9451902508735657,
"score": 2.515625,
"token_count": 442,
"url": "https://health.tips.net/T003637_Caring_for_Simple_Cuts.html"
} |
If you think that you may have allergy induced asthma, but aren't ready to visit the doctor's yet, you really need to be aware of what the symptoms are. By being able to positively identify these symptoms you can easily tell whether you have a severe case of allergies, or if you may have allergy induced asthma. Remember, the only person that can really tell you whether you have this kind of asthma is a doctor, so if you have a few of the symptoms make an appointment with your physician to have the proper tests conducted.
- Sneezing. When dealing with allergy induced asthma, periodic sneezing isn't really that much of an issue. Instead, what you need to be aware of is the longer bouts of sneezing where you can't really do a whole lot else.
- Wheezing. Wheezing when you breath, particularly after you have been around common allergens, is often a symptom of this particular type of asthma. Often this particular symptom will also appear with a rattling sound in your chest as well.
- Nasal stuffiness. Anyone that has ever been around something that makes them sneeze too much, or that has had a cold, can attest to what nasal stuffiness feels like. By itself, or even together with sneezing, this does not indicate a possible case of allergy induced asthma. Instead, if it crops up in conjunction with any of the breathing issues, then you may have something to worry about.
- Itchy, watery, or burning eyes. This is a common ailment of anyone that has ever had allergies or hay fever. In fact, if you have ever cut an onion then you have an idea of what to look out for. Again, this could be simply a symptom of hay fever or seasonal allergies, it can also be a symptom of asthma when it is found in conjunction with any of the breathing issues.
- Coughing. Coughing has long been a traditional symptom of asthma, and when it is found in conjunction with a couple of these other symptoms, you can be pretty sure that you need to have a test done. Keep in mind, this isn't going to be a simple cough, but rather a long series of coughs that make it extremely difficult to do anything, or a series of coughs that simply won't quit easily.
- Itchy mouth or throat. If you feel a tickling in the back of your throat or mouth, and it starts to turn into an itchy feeling, then you may want to look into whether you have allergies. If you have this feeling at the same time as some of the breathing issues then you really need to have a couple of tests performed.
- Tightness of the chest. Perhaps one of the most common, and traditional, symptoms of possible asthma and an asthma attack is a feeling of tightness in the chest. Keep in mind that this feeling of tightness has best been described as though the person afflicted has steel bands or cables wrapped around their chest and is unable to breathe at all.
- Runny nose. Just as with nasal stuffiness, this is something that you can experience quite easily if you are subject to colds, hay fever, or seasonal allergies. While this by itself is not a symptom of asthma, it is definitely one if it has a tendency to appear at around the same time that any of the traditional asthma symptoms show up.
- Difficulty breathing. Often times simply having a bit of difficulty breathing, particularly when being exposed to something like pollen, allergens, or even pet dander, can be a rather simple indication that you have asthma. Difficulty breathing is one of the more traditional symptoms of asthma, and something that you should be on the look out for. | <urn:uuid:03232827-9b2c-41e0-8549-d078c6d9b7f0> | {
"date": "2020-01-22T06:37:57",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9722542762756348,
"score": 2.8125,
"token_count": 767,
"url": "https://health.tips.net/T008587_Allergy_Induced_Asthma.html"
} |
Your liver helps produce urea, stores vitamins and minerals and aids in maintaining a steady level of glucose in the blood. When it becomes diseased, numerous health problems can arise. Fatty liver disease is one cause of hepatomegaly, but there are other conditions that cause your liver to become enlarged as well 1. Some people experience no symptoms with a fatty liver while others may experience severe problems. Consult with your health care provider about your risk factors for fatty liver disease and hepatomegaly.
If you are experiencing serious medical symptoms, seek emergency treatment immediately.
What is Hepatomegaly?
Hepatomegaly is the clinical term for an enlarged liver 1. According to the Mayo Clinic, an enlarged liver is not the problem, but rather it is a sign of an underlying problem that needs to be addressed 1. Liver diseases like cirrhosis, fatty liver disease, hepatitis and benign liver tumors can cause your liver to be enlarged, as can cancer and cardiovascular problems 1. Symptoms of this condition may include fatigue, stomach pain or jaundice; if you have these symptoms call your doctor for an examination.
What is a Fatty Liver?
Fatty liver disease involves a buildup of fat in the liver and can be classified as nonalcoholic fatty liver disease (NAFLD) or alcoholic fatty liver disease 3. Most individuals with NAFLD experience no complications, but there is a more severe form of the disease called nonalcoholic steatohepatitis, the Mayo Clinic explains. Symptoms of NAFLD include stomach pain in the upper right quadrant, fatigue and weight loss. Alcoholic fatty liver disease occurs when large amounts of alcohol are consumed over a period of time and is usually reversible once an individual decides to abstain or drink in moderation, the Cleveland Clinic explains 2. Symptoms of alcoholic fatty liver disease can include fever, muscle wasting, an enlarged spleen and jaundice 1.
Treatment for hepatomegaly involves finding out the underlying condition causing the enlargement and treating that condition accordingly. There is no treatment for NAFLD, but you may be urged to minimize your risk factors, like obesity and diet, the Mayo Clinic advises. Sometimes a medication may contribute to NAFLD; if this is true in your case, your medication may be changed. Alcoholic liver disease is typically treated by abstaining from alcohol, and if there are complications like jaundice or dehydration, hospitalization may be necessary.
Preventing Hepatomegaly and Fatty Liver
It is possible to prevent the development of a fatty liver and hepatomegaly. Eating a healthy diet that emphasizes fruits, vegetables and whole grains; staying at a healthy weight; quitting smoking; using only the recommended dosages of medications and minimizing your consumption of alcohol, if you drink at all, can help keep your liver healthy. Talk with your health care provider about your risk factors for developing liver disease and any other ways you can reduce your risk of liver problems.
- beer in beer-mug image by Witold Krasowski from Fotolia.com | <urn:uuid:78448b7a-74ad-43f8-995f-2a5628f088f3> | {
"date": "2020-01-22T04:25:25",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9260600209236145,
"score": 2.9375,
"token_count": 632,
"url": "https://healthfully.com/hepatomegaly-and-fatty-liver-7451591.html"
} |
Unusual Facts About Cavities And Fillings
Cavities are one of the most pervasive issues in dentistry. The problem is that you can have one or many without ever even knowing. That’s why you need to call our highly trained dentists at 610-628-9337 today for a dental cleaning and dental exam. If you catch tooth decay soon enough, you can get a dental filling to repair the tooth.
How Cavities Get Started
Why do teeth get cavities? No, it’s not sugar. At least, it’s technically not sugar.
Everyone has some harmful bacteria that live in their mouth. They live off food particles that get trapped on your teeth and gums after you eat. They’ll eat just about anything, but sugar and carbohydrates really help them thrive.
They secrete an acid that damages your enamel. That’s what cavities are — small holes created when acid erodes the enamel. Unfortunately, enamel does not grow back. The only way to repair the damage is by
- Removing the bacteria there
- Filling in and sealing up the cavity
Since enamel cannot be regrown, prevention is very important. That’s why you need to call our Hellertown, PA dental office today for dental cleanings and dental exams. Small cavities are much easier to treat than large ones.
Did You Know These Facts?
Although cavities are probably the most common dental problem, there are things most people do not know about cavities and fillings. Here are some of the little-known facts.
1. You Don’t Have To Get Metal Fillings Anymore
In the past, metal amalgam fillings were the only choice. You might be able to choose between silver and gold, but that’s about it. They work well, but they do not exactly look natural in your smile.
Thankfully, Hellertown Dental Group offers two modern alternatives. Porcelain and composite resin both repair cavities just as well. However, both are color-matched to your teeth. Once these are in place, your smile will still look natural.
2. Fillings Are Not Permanent
No matter the kind, fillings are meant to last a long time — but they are not permanent. That’s because your teeth are constantly being used to crush, tear, and chew up food. All kinds of pressure are hitting that filling.
In addition, decades of drinking very hot and very cold drinks expand and contract that metal a tiny bit. It’s definitely too small to notice, but over time, this will weaken the bond between the healthy enamel and the dental filling. This means you’ll need to replace your fillings eventually.
3. Cavities Keep Growing Until Treated
Very rarely will you find a problem that solves itself. Cavities are no exception.
Because cavities are caused by harmful bacteria thriving on your teeth, any cavity they are causing with acid will just keep growing. As long as the bacteria are there, you will lose more enamel.
But here’s the thing: enamel has no nerve endings. You could have a big cavity right now and not know it. That’s why you need regular dental exams at our Hellertown, PA dental office. The only way to stop a cavity from growing is to get rid of those bacteria.
4. Untreated Cavities Can Lead To Root Canals
If you skip your routine dental exams long enough, a cavity that could have been stopped when it was small can turn into something very big. In fact, it can grow so deep that it breaks through to your dental pulp.
This is where the nerve endings and blood vessels of your teeth can be found. Once a cavity reaches here, the bacteria infect your dental pulp. This means you will need either a root canal or to have that tooth removed.
5. Dental Crowns Can Repair Large Cavities
When a cavity gets too big, you have two problems preventing you from getting a regular dental filling. First, there’s not enough healthy enamel to bond with. That filling will likely fall out soon. Second, the whole tooth is in danger of fracturing open one day while you’re chewing.
That’s where a dental crown comes to the rescue. These are caps designed to look and feel like a real tooth. It fits snugly over your damaged tooth, sealing up the cavity. A dental crown also holds the tooth together so it won’t fracture.
Treating Cavities In Hellertown, PA
Call us TODAY at 610-628-9337 or contact us online to schedule your next dental cleaning and dental exam. Our dentists are very experienced, so they know how to spot cavities while they are still small and manageable. | <urn:uuid:079f4f78-9258-4e8b-8f52-1f1cbd863766> | {
"date": "2020-01-22T06:19:32",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9432299137115479,
"score": 2.78125,
"token_count": 1007,
"url": "https://hellertowndentalgroup.com/blog/unusual-facts-about-cavities-and-fillings/"
} |
Yesterday I heard a presentation by educator Marla Rea about students in Texas who are learning English; she spoke of the many obstacles for these youngsters. Her talk was definitely an eye-opener (an expression that would be difficult for a non-English speaking person to define) regarding education for those who speak little or no English.
Ms. Rea is a doctoral student in bi-lingual education and is dean of instruction for English language learners in Bryan, Texas.
My interest in Marla’s talk increased when she mentioned a book I read a few years ago about a boy named Enrique who came to America to find his mother. Enrique’s mother left her family in Honduras when she came to the United States to have a better life for herself and for her family in Honduras. Although Enrique’s mother regularly sent money to provide for her family, Enrique missed a mother – he missed his mother.
Enrique’s Journey by Sonia Nazario left an indelible impression on me. Ms. Rea’s talk left an impression on me. Her knowledge and excellent teaching skills, combined with her passion for these children in our country who obviously need to have good English skills, were inspiring.
Ms. Rea conducted research at two literacy programs in Texas on the acquisition of English literacy among U.S. immigrants in Texas. The purpose of her study was to explore the journey to English language literacy, the challenges parents and children face during the process, and the role reversals that happen during the process and their impact on the children. During the year, Marla participated in several scholarly and academic activities. Overall, Marla continues to make excellent progress towards her doctoral studies, and she is using the knowledge and experiences acquired during the fellowship program to expand literacy programs in the community in her professional role as the director of English Language Literacy Programs with the local independent school district. [Source: Texas Center for the Advancement of Literacy and Learning] In 2008-2009, Ms. Rea was one of three recipients of the Barbara Bush Family Literacy Fellowship.
According to the 2000 census, the main languages by number of speakers older than five are:
- English – 215 million
- Spanish – 28 million
- Chinese languages – 2.0 million+ (mostly Cantonese speakers, with a growing group of Mandarin speakers)
- French – 1.6 million
- German – 1.4 million (High German), plus German dialects like Hutterite German, Texas German, Pennsylvania German, Plautdietsch
- Tagalog – 1.2 million+ (most Filipinos may also know other Philippine languages, e.g. Ilokano, Pangasinan, Bikol languages, and Visayan languages)
- Vietnamese – 1.01 million
- Italian – 1.01 million
- Korean – 890,000
- Russian – 710,000
- Polish – 670,000
- Arabic – 610,000
- Portuguese – 560,000
- Japanese – 480,000
- French Creole – 450,000 (mostly Louisiana Creole French, 334,500)
- Greek – 370,000
- Hindi – 320,000
- Persian – 310,000
- Urdu – 260,000
- Gujarati – 240,000
- Armenian – 200,000
“Children who speak English as their first language
are now a minority in inner-city London primary schools . . .”
“There are over 600,000 non-English-speaking students
in the Texas education system.”
"date": "2020-01-22T04:42:09",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9531841278076172,
"score": 2.609375,
"token_count": 745,
"url": "https://hopeseguin2010.wordpress.com/tag/children/"
} |
In the event of the Government rationing anti-viral treatments for swine flu, elderly victims should be put at the back of the queue, a study suggests.
Italian researchers say reserving drugs such as Tamiflu for the younger population could be the most effective way to save lives and prevent illness.
The controversial strategy from the Bruno Kessler Foundation in Trento, Italy, is based on the great 1918 pandemic, which was most lethal to younger adults.
Mathematician Stefano Merler focused his study on Italy, which is said to have only enough anti-virals to treat seven million people, or 12 per cent of the population.
Predictive models showed that governments should stockpile enough drugs to treat at least a quarter of their populations, assuming moderate levels of infectivity.
Mr Merler argues that if supplies are lower than this, it makes sense to ration the anti-virals according to age-specific fatality rates.
“Although a policy of age-specific prioritisation of anti-viral use will be controversial ethically, it may be the most efficient use of stockpiled therapies,” he explained.
Copyright Press Association 2009
"date": "2020-01-22T04:34:41",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.938632071018219,
"score": 3.203125,
"token_count": 247,
"url": "https://hospitalpharmacyeurope.com/news/editors-pick/save-anti-virals-for-the-young/"
} |
A second person has died in the US from a severe lung disease thought to be caused by vaping, public health officials have revealed.
It has prompted the US Centers for Disease Control and Prevention to tell people to stop using e-cigarettes pending investigation.
“While this investigation is ongoing, people should consider not using e-cigarette products,” said Dr Dana Meaney Delman, who is leading an investigation into the 450 cases.
The person was said to have fallen ill after trying a product bought at a dispensary for recreational cannabis.
Experts are currently investigating a mysterious lung disease linked to e-cigarette use; the Centers for Disease Control and Prevention said 149 people nationwide had contracted a severe respiratory illness after vaping.
In the past e-cigarettes have been marketed as a way to enjoy smoking with fewer of the health risks of traditional cigarettes, but news of a second death will no doubt have people questioning their safety.
So just how safe is vaping?
There’s no doubt the advice surrounding the safety of vaping is somewhat conflicting.
"Numerous studies from health groups in the UK have concluded that using an e-cigarette, known as vaping, is better for your health compared to smoking,” explains Dr Diana Gall, from www.doctor-4-u.co.uk.
“A report by Public Health England, which was compiled by several UK-based academics, found that vaping is 95% safer than smoking. Cancer Research UK has also given its support to people using vaping as a means of fighting against smoking related diseases.”
Experts believe one of the main reasons is that e-cigarettes don’t burn tobacco, which should eliminate the risks associated with tar.
“When you smoke or use an e-cigarette, you inhale nicotine, but unlike smoking, the nicotine from e-cigarettes comes in a vapor and doesn’t require burning tobacco,” Dr Gall explains.
“That means that vaping does not expose the body to unpleasant substances such as tar and carbon monoxide, which can cause cancers and are among the biggest threats to health when smoking cigarettes or other tobacco products.”
Surely nicotine has some health risks though?
“Some people do mistakenly believe that nicotine itself is dangerous, which would make vaping almost as dangerous as smoking,” Dr Gall explains.
“But while nicotine is addictive, it cannot cause smoking-related diseases such as cancers or heart disease.”
According to Dr Gall pure nicotine is a toxic compound, but the nicotine found in tobacco, e-liquids and nicotine replacement therapies (NRT) is not pure enough to be poisonous.
“So reports of this being an issue for consumers are exceptionally rare,” she adds.
Vaping isn’t without health risks
While many experts believe vaping is much less damaging to health compared to smoking, recent research has cast an element of doubt over those health beliefs.
Researchers from the University of Athens recently found that flavourings in e-cigarettes harm the lungs by causing inflammation.
Smokers looking to quit often turn to vaping in the belief that it is better for their health, but the analysis, conducted on mice, showed that even in the short term the inflammation vaping caused was similar to or worse than that caused by conventional cigarettes.
Researchers compared several groups of mice that received whole-body exposure to varying chemical combinations four times each day, with every session separated by 30-minute smoke-free intervals.
The results, published in the American Journal of Physiology-Lung Cellular and Molecular Physiology, found that even short-term use causes as much or even more damage than the real thing.
Other research has raised questions about the chemicals in e-cigarettes.
“Some studies have found chemicals in e-cigarette vapour that are the same as those found in cigarette smoke, but at much lower levels,” explains Dr Gall.
Further research has found that nicotine or other molecules found in e-cigarettes can still impact lung health.
The US is so concerned about the potential health implications of vaping they are considering whether to impose a ban.
According to latest figures from the US, e-cigarettes are now the top high-risk substance used by teenagers despite laws prohibiting sales to those under 18.
The US Food and Drug Administration (FDA) is now warning that if the trend continues there could be an outright ban.
READ MORE: What do e-cigarettes really do to your body?
Being vape safe
While the information about the health risks associated with vaping may be somewhat confusing, there are some steps you can take to ensure you’re being as vape-safe as possible.
“The safest thing to do is to use high quality, official e-cigarette products,” advises Dr Gall.
“There are tight regulations for selling e-cigarettes in the UK, so you’re likely to be safe using them as long as you use them properly and avoid bootleg vaping products which could contain more harmful substances.”
It’s important to follow guidelines with regards to charging too.
“There have been reports of e-cigarettes catching fire or exploding, but the main cause of this issue is using the wrong charger,” Dr Gall says.
“As long as you use the right charger for your e-cigarette and avoid leaving them charging unattended or overnight, then nothing should go wrong with them.”
According to Dr Gall if you are pregnant, then leading UK baby charities recommend using NRT products such as gum and patches to stop you smoking.
“However, if you find vaping useful to stay smoke-free, then this is much safer for you and your baby compared to smoking," she adds.
"date": "2020-01-22T05:34:09",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.957942008972168,
"score": 2.90625,
"token_count": 1195,
"url": "https://in.style.yahoo.com/health-risks-vaping-e-cigarettes-120410683.html"
} |
Prior to World War II, women in Canada were looked down upon in the work force. They were expected to be homemakers whose number one priority was taking care of the household. When the war broke out, however, women were desperately needed in both factory and field as hundreds of thousands of men went to war.
Women in the workforce started out with just factory jobs, but their roles developed into much more. Women were offered work that was far more complicated than their housewife duties. Instead of cooking, cleaning and raising children, they took factory jobs that had usually been reserved for men. There they helped build planes, packed parachutes and worked as secretaries. Women also worked in skilled jobs as mechanics, engineers, carpenters, code-breakers, farmers and pilots. Women became more confident and independent, and showed this through their new-found skills.
Many women were paid five times more per week than they had been before the war, although they were still paid less than men. The jobs they took on while the men were away, however, made women the equals of men. Women assumed higher-ranking positions: they were trained in many roles and were put in charge of running offices, designing planes and managing logistics. Women were also trained as pilots and ambulance drivers. Women proved themselves and showed they could learn new skills and work safely and quickly around a factory or shipyard. In Toronto, Ontario, for example, women drove buses and streetcars.
At the end of the war, women were expected to go back to their traditional roles, and as the soldiers returned, women went back to the household in large numbers. But for the first time women had had a taste of paid work, and they knew it was better than just staying home and scrubbing, cleaning, looking after kids and preparing meals. They fought back, and some did get jobs. This was only the beginning of women’s fight for rights in the workforce.
After the World War Two, women increasingly decided to stand up for themselves. They knew they were capable of doing anything a man could do and their evidence was the work they accomplished during the war. With every generation, women are proving they are quite capable of doing many jobs and holding high positions in business and the public sector. This all started with the independent women who proved themselves to be amazing workers when people doubted them. Women today are scientists, doctors, lawyers, etc. which they should have been all along!
– Alyssa Shaw
"date": "2020-01-22T05:55:40",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.995307445526123,
"score": 4.1875,
"token_count": 511,
"url": "https://inequalitygaps.org/first-takes/women-and-work/wartime-work-and-demobilisation/"
} |
You could win $500 and a Trip to Washington, D.C.
OWN IT on March 17 - 23 and take ownership of poison prevention.
National Poison Prevention Week (NPPW), the third week in March each year, is a week nationally designated to highlight the dangers of poisonings and how to prevent them. However, every day people can and do prevent poisonings. We invite you to become actively involved in helping ensure the safety of children and adults in your home and your community.
To help spread the word of this important week, the NPPW Council is holding a contest for grades K - 12. To enter, kids must create a video that helps spread the word about poisoning prevention. Videos must be no longer than two minutes, and should present their interpretation of the theme, Poison Prevention/Safety - Own It, and must be submitted to the contest website by Sunday, February 24th. Videos can be created by individuals or as a team. Student videos should illustrate a message that communicates ways we can all OWN IT and take responsibility for preventing poisonings. The following are just a few video theme ideas:
- Storing packages in safe places, away from the reach of children
- Keeping potentially dangerous products in their original protective packaging
- Knowing the phone number for the Poison Control Center
- Educating loved ones about the dangers of household poisons
As a manufacturer of child resistant closures, it's important to our team at Mold-Rite to also take the next step in keeping kids safe. Our goal in this effort is to help spread the word about safe storage and proper use of packaging to help create awareness on how to prevent accidental poisonings. For more information, please visit our website.
If you have any questions about NPPW or the contest,
please call: 800-286-6107 or email: [email protected]
"date": "2020-01-22T05:05:03",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9417580962181091,
"score": 2.515625,
"token_count": 384,
"url": "https://info.mrpcap.com/blog/its-not-too-late-to-enter-into-this-years-nppw-2019-contest"
} |
Researchers at MIT have found a way to make one of the world’s strongest materials even stronger. Graphene, a two-dimensional form of carbon that gets its strength from a unique honeycomb structure, was made even more durable by compressing and fusing it into a 3D sponge-like configuration. The ultralight material has just five percent the density of steel, yet could be as much as 10 times stronger.
A two-dimensional sheet of graphene measures one atom in thickness but is known as one of the strongest materials in the world. Using a combination of heat and pressure, a team of MIT researchers led by Markus Buehler, head of MIT’s Department of Civil and Environmental Engineering (CEE), was able to produce an even stronger version which resembles the form of some corals and microscopic creatures called diatoms, both of which have enormous surface area by volume but are lightweight due to their porous structure. Similarly, the 3D form of graphene has shown to be even stronger than its two-dimensional form.
“Once we created these 3D structures, we wanted to see what’s the limit—what’s the strongest possible material we can produce,” said Zhao Qin, a CEE research scientist and one of the study’s co-authors. “One of our samples has five percent the density of steel, but 10 times the strength.”
The potential applications for graphene are nearly endless. The super-strong, lightweight material can be used in ultra-fast charging supercapacitors to create batteries that last essentially forever, can improve the energy efficiency of desalination processes, and can even help solar panels convert more energy into usable electricity. Graphene is very expensive, though, so researchers are continuing to work on ways to enhance its value by bolstering its strength.
The research results were published this week in the journal Science Advances.
Images via Melanie Gonick/MIT and Zhao Qin
"date": "2020-01-22T04:41:08",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.960034966468811,
"score": 3.828125,
"token_count": 407,
"url": "https://inhabitat.com/mit-researchers-unveil-ultralight-material-10-times-stronger-than-steel/"
} |
Pathetic Fallacy is a literary device in which nature is described as having human emotions, feelings or characteristics.
It is a specific type of personification, but whereas personification covers all objects to which human emotions may be applied, pathetic fallacy relates specifically to nature (e.g. flora and fauna, weather).
The term was first devised by John Ruskin, a social critic of the Victorian era, who used it when criticising the popular poetry of the late 18th century and early Romantic works.
Why is Pathetic Fallacy used?
- Communicating Meaning: As with all metaphors, this literary device is especially useful at emphasising meaning. The figurative nature of the tool makes it particularly effective at communicating experiences or emotions that may otherwise be difficult to explain.
- Creativity: Whole poems can be created around just one key metaphor and associated use pathetic fallacy, allowing for in depth exploration of a key idea or concept. This was particularly common as part of the Romantic movement.
In “The Rime of the Ancient Mariner” (English Romantic Verse), Samuel Taylor Coleridge uses pathetic fallacy to help create effective imagery.
“All in a hot and copper sky”
“The bloody Sun, at noon”
These descriptions, utilising pathetic fallacy, highlight the relentless heat and conditions that the sailors in the poem have to endure. Giving nature human characteristics helps a reader to better understand what is being described, and therefore to picture the scene more effectively.
William Wordsworth is another Romantic poet who used this device in his poem ‘I wandered lonely as a cloud’, in this instance for creativity and to help communicate emotions.
“I wandered lonely as a cloud that floats on high o’er vales”
There is a whole range of examples of pathetic fallacy which rely on clichés (e.g. ‘furious storm’), and these are particularly apparent given the breadth of literature available today; however, Wordsworth’s ‘lonely’ cloud description still stands out as creative and imaginative. This description also encourages empathy with the cloud and its desires, which are likely to be shared by many readers.
Lord Tennyson uses the technique in ‘Maud’:
“The red rose cries”
Emily Bronte also uses the technique in ‘Wuthering Heights’:
“There was a violent wind”
"date": "2020-01-22T04:58:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.930469810962677,
"score": 3.546875,
"token_count": 506,
"url": "https://interpreture.com/pathetic-fallacy-explained/"
} |
Curated legal resources & perspectives on the profession
First Amendment: Speech
Abrams v. U.S., 250 U.S. 616, 630 (1919) (Holmes, J., dissenting): protection of speech for free trade in ideas, even those that are distasteful to people
Schenck v. U.S., 249 U.S. 47, 52, 39 S.Ct. 247, 63 L.Ed. 470 (1919): free speech & falsely shouting “fire” in a theater
Meyer v. Nebraska, 262 U. S. 390 (1923): Due Process clause of 14th Am “prevents States from forbidding the teaching of a foreign language to young students.” (506) (Also: we’re not in the business of creating cookie-cutter citizens; see Meyer, at 402.)
Pierce v. Society of Sisters, 268 U. S. 510 (1925): First Amendment rights of parents to direct their children’s education
Whitney v. CA, 274 U. S. 357, 374 (1927) (Brandeis, J., concurring): State can’t prohibit speech even if it’s socially abhorrent.
Stromburg v. California, 283 U. S. 359 (1931): protected speech: displaying a red flag (Communist party association) against CA state law.
West Virginia v. Barnette, 319 U. S. 624 (1943): types of symbolic acts protected by First Amendment: compulsory flag salute.
Thornhill v. Alabama, 310 U. S. 88 (1940): speech acts protected by First Amendment: picketing.
Chaplinsky v. New Hampshire, 315 U.S. 568 (1942): limited occasions when state may regulate speech: fighting words as unprotected speech (Word Doc Summary)(PDF Summary)
McCollum v. Board of Education, 333 U. S. 203 (1948): religious education in public school setting violated student’s First Amendment rights
Terminiello v. Chicago, 337 U. S. 1 (1949): city’s breach of peace ordinance violated right of expression
Sweezy v. New Hampshire, 354 U. S. 234 (1957): state could not compel disclosures from witness (Due Process case)
Engel v. Vitale, 370 U. S. 421 (1962): reading of school prayer & establishment clause of 1st Am
Edwards v. South Carolina, 372 U. S. 229 (1963): speech acts protected by First Amendment: Civil Rights protesters—freedom of speech, assembly, and petition.
NYT v. Sullivan, 376 U.S. 254 (1964): defamation as unprotected speech
Cox v. Louisiana, 379 U. S. 536, 554 (1965): rights of free speech & assembly don’t give people permission to give speeches anywhere and whenever they please.
Brown v. Louisiana, 383 U. S. 131 (1966): speech acts protected by First Amendment: Civil Rights
Burnside v. Byars, 363 F. 2d 744 (5th Cir. 1966): the wearing of symbols can’t be prohibited unless there’s a substantial disruption of the learning environment.
Keyishian v. Board of Regents, 385 U. S. 589 (1967): NY’s provisions that public servants (such as profs at State University) renounce Communism are so overly broad and vague that they are unconstitutional. (Marketplace of ideas necessary for robust nation; Brennan, at 603.)
U.S. v. O’Brien, 391 U. S. 367, 376-377 (1968): Draft card burning: governmental interest outweighed freedom of expression. Symbolic speech. (Word Doc Summary)(PDF Summary)
Ginsberg v. New York, 390 U. S. 629 (1968): sale of sexually explicit magazines to a teen; obscenity not within area of protected speech when it came to minors. Stewart concurred, emphasizing minors’ incapacity for decisionmaking.
Brandenburg v. Ohio, 395 U.S. 444 (1969) (per curiam): incitement to lawlessness as unprotected speech
Watts v. U.S., 394 U.S. 705 (1969): speech with threatens unlawful violence may be unprotected: gov’t may criminalize true threats, but not political hyperbole (Word Doc Summary)(PDF Summary)
Tinker v. Des Moines Independent School District, 393 U.S. 503 (1969): if student speech does not cause substantial disruption in the school setting, it is protected. (Word Doc Summary)(PDF Summary)
Cohen v. CA, 403 U. S. 15, 20 (1971): Court struck down adult’s conviction for disorderly conduct because he wore a jacket with an obscenity about the draft on it into a courthouse.
Miller v. CA, 413 U.S. 15 (1973): obscenity as unprotected speech
Thomas v. Bd. of Educ., Granville Cent. Sch. Dist., 607 F.2d 1043 (2nd Cir. 1979): Students suspended for producing satirical publication, some of which was worked on at the school. Distribution happened off-campus and after hours, and most of the work was done away from school. Nexus was de minimis.
Hazelwood School District v. Kuhlmeier (U.S. 1988): school newspaper, articles on pregnancy & divorce. School control over school-sponsored speech.
Texas v. Johnson, 491 U.S. 397 (1989): state may not proscribe speech that society finds offensive
R. A. V. v. City of St. Paul, 505 U.S. 377 (1992): state may not proscribe speech it doesn’t like. Cross-burning & overbroad ordinance targeting certain content of speech/nonverbal expression. (Word Doc Summary)(PDF Summary)
Capitol Square Review and Advisory Bd. v. Pinette, 515 U. S. 753, 771 (1995): The burning of a cross is a “symbol of hate”
Saxe v. State College Area School District, 240 F.3d 200, 213 (3d Cir. 2001):re. lewd, vulgar, offensive speech (not protected); reasonable projection of disruption is sufficient to restrict student speech
State v. Perkins, 243 Wis. 2d 141 (2001): objective “true threat” test—reasonable listener
J.S. ex rel. H.S. v. Bethlehem Area Sch. Dist., 569 Pa. 638, 807 A.2d 847 (Pa. 2002): First Amendment and true threat doctrine: okay to regulate harmful speech (Word Doc Summary)(PDF Summary)
Virginia v. Black, 538 U.S. 343 (2003): true threat doctrine & intent to intimidate (cross-burning as presumptively indicating intent to intimidate in VA statute, but SCOTUS disagreed because it could be just an “ideological statement,” and no statute should suppress ideas). (Word Doc Summary)(PDF Summary)
Morse v. Frederick, 551 U.S. 393, 127 S.Ct. 2618, 168 L.Ed.2d 290 (2007): banner promoting drugs at off-campus school event. Special school environment & interest in NOT promoting drug use meant that Frederick’s 1st Am rights were not violated by punishment. LOCATION of event (outside schoolhouse gates) was superseded by PURPOSE (school event).
Wisniewski v. Bd. of Educ. Of Weedsport Cent. Sch. Dist., 494 F.3d 34 (2d Cir. 2007): student speech on internet
Doninger v. Niehoff, 527 F.3d 41 (2d Cir. 2008): student speech on internet: class officer posted insults towards admins on blog, and was prevented from running for senior class secretary. Foreseeable risk of substantial disruption due to off-campus speech.
U.S. v. Parr, 545 F.3d 491 (7th Cir. 2008): subjective/objective test of true threat—it’s perceived as threatening (objective), and was intended as such (subjective).
Layshock ex rel. Layshock v. Hermitage School District, 650 F.3d 205 (3d Cir. 2011) (en banc): First Amendment student speech (MySpace profile of principal): off-campus, did not cause substantial disruption, so protected. (Word Doc Summary)(PDF Summary)
J.S. v. Blue Mountain School District, 650 F.3d 915 (3d Cir. 2011) (en banc): no substantial disruption, so off-campus speech was protected. Interesting issues raised in concurrence & dissent, though, about the ubiquitous nature of the internet.
Elonis v. U.S., 575 U.S. ___ (2015): “holding that, under longstanding common-law principles, a federal anti-threat statute which does not contain an express scienter requirement implicitly requires proof of a mens rea level above negligence.” (Knox, 1157) (Word Doc Summary)(PDF Summary)
Commonwealth v. Knox, 190 A.3d 1146 (Pa. 2018): true threat doctrine and rap lyrics. See notes on dissent by Justice Wecht for variety of subjective/objective evaluations of true threats across circuit courts. (Word Doc Summary)(PDF Summary)
"date": "2020-01-22T06:33:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8606546521186829,
"score": 2.6875,
"token_count": 2022,
"url": "https://ipsedixitpaiges.com/caselaw-families/first-amendment-speech/"
} |
Every textbook on American music mentions the importance of William Henry Fry (1813-1864), some even devote whole chapters to him. In these texts, however, the emphasis on Fry has been as a critic and an out-spoken champion of American music rather than as a composer. This is for a good reason. Fry was a widely read, prolific, and influential writer. Because of his position as an editor and music critic for the New York Tribune, the most powerful newspaper in the country at the time, Fry’s columns were read as far north as Boston and as far west as Chicago. His articles were also copied by many musical journals including Dwight’s Journal of Music, The Musical Review and Choral Advocate, and The Musical World and New York Musical Times.
Fry’s musical credentials were also impressive. In fact, it was Fry’s musical background that gave his articles their authority. Fry considered himself primarily an opera composer. He composed three operas, one of which, Leonora (1845), was the first “grand opera” by an American to be performed in America. But, he also composed seven symphonies, at least one string quartet, a Mass, an oratorio, a march for band, and several choruses and solo songs.
Despite these accomplishments, little is known about Fry’s music today. While there is a biography on Fry, two dissertations, and several articles, there are no musical editions of his works, most of which still exist in manuscript form. Therefore, much of what has been written in the textbooks about Fry’s music has either come from Fry’s writings or been based on contemporary accounts of performances.
The least studied area of Fry’s repertoire has been his symphonic works, which were by far his most popular and widely heard compositions. One of the reasons for this discrepancy is that three of Fry’s four most popular symphonies were believed to be lost. In 1865, a year after Fry’s death, Edward P. Fry presented his brother’s musical works to The Library Company of Philadelphia. Eighty-one years later, Fry’s biographer, William Treat Upton, cataloged those works. He noted that three symphonies were missing from the collection: A Day in the Country, The Breaking Heart, and Grand Symphony—Childe Harold. While Upton speculated that the single Adagio movement found in the collection “may possibly prove to be the “Adagio” from the Symphony The Breaking Heart,” he concluded that it seemed “rather doubtful.”
The “Adagio” to which Upton was referring is an enigmatic piece known as Adagio Sostenuto. A note in the score of this work, initialed “E. P. F.” and dated May 1865, speculates that the manuscript “appears to be a full score in W. H. Fry’s hand writing [sic] of some overture composed by him and a copy by a copyist of the same piece . . . Quere? [sic] Is this The Breaking Heart?” Despite Edward Fry’s suspicions, William Henry Fry’s biographer, William Treat Upton, concluded that the work could not be the missing symphony because it only generally matches other works composed during that period.
According to Upton, Adagio Sostenuto
might seem to belong among Fry’s earlier works, (The Breaking Heart was composed in 1852), careful study of its details points in another direction. It was only in his later works that Fry discovered the potentialities of the intervals of the major and minor ninth which he exploits so successfully in Notre Dame (composed in 1862-63). Before this he had been long preoccupied with the interval of the sixth (major and minor), an all absorbing interest which reached its climax in the revision of Leonora in 1858, only to be replaced in Notre Dame by a similar obsession with the interval of the ninth. This would seem to suggest that the Adagio, with its emphasis on the interval of the sixth, might be tentatively dated as about 1858—rather than either 1852 or 1863.
Upton also cited two examples from Adagio Sostenuto that paralleled Fry’s revisions of Leonora, completed in 1858:
Then, too, the long elaborate cadenza-like flute passage in even thirty-second notes set against the main theme exactly parallels a passage in the new portion of Act IV of Leonora (1858). Not only that, but at its close Fry introduces the identical effect produced in this same new portion of Leonora, by plunging the whole orchestra from 12/8 time (or 9/8 as the case may be) into a vigorous climactic measure in 6/4 time with its sturdy sequence of six highly percussive quarter notes.
Upton ends his argument by claiming that Adagio Sostenuto “does not fit the contemporary description of that work as given in Dwight’s Journal for February 4, 1854.” However, Fry’s lectures of 1852-53 provide evidence that contradicts these assertions and confirms Edward Fry’s suspicions about the identity of The Breaking Heart.
Adagio Sostenuto and Cantabile Adagio Molto
Among Fry’s compositions in The Library Company of Philadelphia are the illustrations and various musical selections from his lectures delivered in the winter of 1852-53, but neither A Day in the Country nor The Breaking Heart are among them. The collection does, however, include a set of orchestral parts from the first lecture at which these works were performed. These parts include three distinct musical selections identified by numbers—1, 3, and 5—and a fourth by a tempo indication, Cantabile Adagio Molto. It is also identified as “No. 4” in pencil in the cello and bass part. Half of the orchestral parts represented in the lecture notes contain the example Cantabile Adagio Molto including the strings, cornet, and timpani—enough to reconstruct the major portions of the piece.
Even though they do not contain titles, the numbered compositions can easily be identified using a contemporary account of the first lecture in The New York Daily Tribune on 1 December 1852. Selection 1 consists of an opening chord labeled “and there was light” which was most likely used to represent the “universal presence of Music in Nature,” as described in the Tribune. This is immediately followed by the “Star Spangled Banner” (also number 1), which was used to illustrate the major chord. Selection 2, not represented in the orchestral parts, is represented by a score. It consists of short groupings of various instruments and was used as an introduction to musical sounds. Selection 3 is clearly the mixed chorus “Laurels Twined Around the Warrior’s Brow,” from Fry’s Aurelia the Vestal, which was used to illustrate the chromatic scale and the minor mode. Selection 5, a men’s chorus in 6/8 meter, “Each Merry Moss Trooper Mounted His Steed” illustrated compound meter. This piece was paired with an example in common meter the newly composed A Day in the Country. Because of its placement adjacent to the Merry Moss chorus, it is conceivable that Cantabile Adagio Molto could be either A Day in the Country or The Breaking Heart, which was used to represent the “varieties of musical quantity and expression.” A recent discovery, however, narrows the choice.
After being lost for close to 125 years, a manuscript of A Day in the Country was recently discovered in Theodore Thomas’s personal orchestral collection by Brenda Nelson-Strauss, the director of the Rosenthal Archives at the Chicago Symphony Orchestra. This discovery occurred when the archives were being prepared for a move to a more modern facility. I would like to take this opportunity to thank Kile Smith of the Edwin A. Fleisher Collection of Orchestral Music at The Free Library of Philadelphia for relaying this valuable information to me and for putting me in contact with Brenda Nelson-Strauss, who was kind enough to provide me with a copy of the work for my study.
The last known performance of A Day in the Country occurred in 1876 when Theodore Thomas conducted it at the Centennial Exposition in Philadelphia. Thomas performed the work at its premiere twenty-four years earlier as a member of the violin section at the first lecture on 30 November 1852. Later, Thomas played the work many times as a member of Louis Jullien’s orchestra. When and where Thomas actually acquired the work is not yet known. It is possible that The Library Company of Philadelphia, who, as previously mentioned, received most of Fry’s works after his death, loaned A Day in the Country to Thomas for the Centennial performance. Unfortunately, it is not known whether the work was part of the original collection, as the accessioning papers have not been found.
It does not take long to determine that the work in question, Cantabile Adagio Molto, is not the newly found A Day in the Country, but rather the elusive Adagio Sostenuto, the work Edward Fry believed to be The Breaking Heart. While there are various measures which are shortened and combined and some octave shifts, all the major themes, keys, harmonies, and meter changes are present and in the same order as in Adagio Sostenuto.
Adagio Sostenuto and The Breaking Heart
Judging by the striking similarities and differences, it is clear that Cantabile Adagio Molto is an earlier version of Adagio Sostenuto. It is also clear that The Breaking Heart was performed at the first lecture. But are these works one in the same? The change in meter, thinning texture, and the fade in the dynamic level in the last five bars of Adagio Sostenuto seems to indicate that this movement does not stand on its own. There are various accounts of performances of a separate “Adagio” from Fry’s symphony The Breaking Heart, such as at the “Musical Congress of 1854” at Metropolitan Hall in New York. But, we know from Fry’s writings that he was not an advocate of the traditional multi-movement symphonic framework. Fry made this clear in the infamous debates forged in the pages of The Musical World and New York Musical Times between himself and Richard Storrs Willis.
By February of 1854, Fry had composed three symphonies— none of which contain autonomous movements. In one of Fry’s infamous verbal battles, this one with the New York Philharmonic Society, Fry went as far as offering to compose a symphony specifically in four movements if the Philharmonic would agree to perform it, suggesting he had not written one up to that time. Speaking of a performance of The Breaking Heart in Boston, Fry wrote in The Musical World and New York Musical Times: “I know my symphony in one slow movement, excited in Boston as profound a sensation as any instrumental piece presented by M. Jullien.” We know from Dwight’s response that this symphony “in one slow movement” was The Breaking Heart.
Why, then, are there continuous references to an “Adagio” from The Breaking Heart? It is plausible that the “Adagio” label was simply misidentified on Jullien’s program in November of 1853. Jullien developed a unique philosophy in crafting his programs. This philosophy mixed a lighter fare of galops, waltzes, polkas, and quadrilles with more substantial works such as overtures, symphonies, and instrumental solos. As was shown by Eugene Frey, this mixture was not concocted haphazardly; it adhered to a strict formula. The program began with an overture that was immediately followed by a quadrille by Jullien. Then came a movement from a well-known symphony, an aria from an opera, and dance piece. An instrumental solo and another quadrille led up to the intermission. When Jullien first performed The Breaking Heart in Boston, it appeared on the program in the third slot, the position typically reserved for single movements from well-known symphonies. Thus, given Jullien’s strict adherence to his formula (and the expectations of his audience), it is understandable why Fry’s one-movement symphony would be labeled “Adagio” from the “dramatic symphony” The Breaking Heart, as it was in November of 1853.
Despite Upton’s claim, Fry’s in-depth description of The Breaking Heart in the 4 February 1854 issue of the Dwight’s Journal of Music provides the decisive link between these two works. In this discussion, Fry describes a key relationship that is directly represented in Adagio Sostenuto:
The Breaking Heart . . . begins in seven flats before it gets into four, the key,—but that is to express the mysticism of the place with the uncertain wandering of the sufferer. But fairly afloat, the classical modulations are followed . . . Its Agnus dei, is in three flats, the classical relation to four—and then we get back to A flat by classical recurrence—and the piece, after several transitions, ends on the key note.
If one had to place the opening chord of Adagio Sostenuto within a key, it is true that seven flats would be the only one that could accommodate the F-flats (see musical example 1). However, this chord is best described as a flat-VI in the key of A-flat major, the key in which it is written. After resolving, a dominant prolongation occurs which finally settles in the tonic key, A-flat major, in measure nine, as Fry describes. The Agnus Dei in E-flat major described by Fry, refers to his programmatic explanation for the work. In the 21 January 1854 issue of The Musical World and New York Musical Times, Fry wrote that
The Breaking Heart represents a tragedy in a cathedral—that materialized home of eternity—where the senses of the neophyte in religion or architecture, are appalled—subdued by such colossal evidence of the grandeur of human genius. I shall never forget my sensations in visiting for the first time Cologne Cathedral, where the forest is wreaked upon stone, the vault of heaven idealized in dizzy arches—the sunset and clouds hurled into the circular windows, and all breathing a Faith which no longer can evolve such an idea. Of course when I take an educated, delicately reared young lady—not simply a young woman—and put her to die of love and melancholy in such a cathedral—when I arrest her ear by an Angus Dei [sic] (Lamb of God who takest away the sins of the world!) as played on the organ poetised [sic],—for such I consider the heroic plaints and thunders of the mighty brass instruments as I have treated them in the orchestra, where the human breath inspires the sound, and not a pair of bellows as the organ—when I write as has never been done before, the double elegy of violoncellos in deepest double octaves, fortified by Bottesini’s [the virtuoso bass player in Jullien’s orchestra] bass playing the melody and not with the other basses. . .
It seems more than likely that the hymn-like theme in E-flat major presented in mm. 23 to 31 by the brass is the one described here (see musical example 2). After a deceptive cadence (m. 31) and a transition (mm. 31 to 37), the piece expands to full orchestra where the “double elegy” of violoncellos and basses in “deepest double octaves” supports a rousing rendition of the second theme (see musical example 3). This section is followed by a retransition, which leads to a return of theme one in the key of E major, followed immediately by a modulation back to A-flat major through the dominant, the “classical recurrence” of the tonic key described by Fry.
In the same article, Fry describes the final measures of The Breaking Heart:
It is true its last notes are not preceded by the dominant chord or the cadence plagale [sic]—but by an enharmonic transition, leading to the final chord on A flat, with the tender third C above;—but we must remember the symphony began with a breaking heart—seeking God—in anguish and mysticism—and so we end, the third representing Love—for it is Love’s note—which did not fail in death.
The “enharmonic transition” described by Fry in the closing measures is also represented in Adagio Sostenuto. It is not, however, an enharmonic transition in the modern sense, Fry’s understanding was somewhat different. To Fry, any chord that functioned differently than the way it was spelled constituted an enharmonic transition. Here, Fry is trying to describe the final chord progression or cadence of the work—using terms like “plagal” and “authentic” cadence. He is not describing a transitional passage. Adagio Sostenuto ends just as Fry describes, with a non-traditional penultimate chord, a mediant in first inversion. This leads to the final sonority on the tonic with the third on top (see musical example 4).
Other contemporary accounts of The Breaking Heart also help confirm its identity. Willis, one of Fry’s harshest critics, stated in his review of Fry’s first three symphonies:
The Breaking Heart . . . shows an unquestionable improvement upon the ‘Day in the Country’ . . . The parts move more freely, the melodies are of a broader style, and the various departments of the orchestra are more dexterously called into use. We like much this symphony. There is warm feeling in it, and the theme expresses emotions which music is perhaps better able to express than poetry.
It is not difficult to locate the items described by Willis in Adagio Sostenuto. The mere title of the work, Sostenuto, evokes a broad style. Both the first and second themes are in a sustained, lyrical style—especially the first theme presented in the violins, which contains many tied notes and dotted-half notes. The 12/8 meter adds to this sustained quality—specifically when compared directly with a piece in common meter like the opening of A Day in the Country. And surely the statement of the hymn-like second theme in the four-part horns could convey the “warm feeling” described by Willis. A contemporary account of a performance of The Breaking Heart by Jullien’s orchestra in New York provides the final clue for the identity of Adagio Sostenuto.
On these two evenings there was opportunity to give careful hearing of some of Fry’s music. An Adagio pleased me much . . . some new effects were striking—for instance, an oboe solo with a sort of obbligato arpeggio (if that be a proper term) accompaniment by a flute running up and down through some three octaves. . . A deep, delicious melancholy seemed rather the character of the piece, than the powerful anguish and struggle indicated by its title, The Breaking Heart.
The oboe solo with obbligato flute described here is undoubtedly the same one in mm. 71-78 of Adagio Sostenuto (see musical example 5). Without hesitation, Edward Fry’s one-hundred and thirty-five year old question can finally be answered in the affirmative: Adagio Sostenuto is Fry’s lost symphony The Breaking Heart.
The events surrounding the composition and performance of The Breaking Heart and A Day in the Country are the events that propelled Fry into the textbooks of American music. They were the most performed of Fry’s symphonies. Both had their premieres and were played at Fry’s infamous lecture series on music in 1852-53. And, Jullien’s orchestra played both numerous times throughout the United States. Thus, they were, perhaps, the first symphonies by an American to achieve national exposure and success, making Fry the most performed—and the most famous—American symphonic composer of the mid-nineteenth century.
With the identification of these symphonies and the recently released first recording of three of Fry’s symphonies on the NAXOS label, it is hoped that a reevaluation of Fry’s music can now take place, and that this process will be done by first placing our modern presumptions to the side so that we may view the works, without bias, within their historical framework. For it seems clear that this is where their significance lies. When Theodore Thomas chose to play A Day in the Country at the Centennial Celebration of 1876 in Philadelphia, almost twenty-five years after his first encounter with the work, he did so not because of the symphony’s musical importance but because of its historical importance and unquestionable American character. In the end, the reevaluation process will lead not only to a better understanding of Fry’s works, but to a more complete understanding of Fry’s principal place in the annals of American music: that of the outspoken advocate and critic.
Joe Harvey has a Master of Arts in Music History from West Chester University, PA. This article is part of original research done for his thesis: Rethinking William Henry Fry: Uncovering Two Lost Symphonies.
"date": "2020-01-22T06:36:28",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9643048048019409,
"score": 3.234375,
"token_count": 4631,
"url": "https://joeharvey.net/2016/03/25/the-lost-symphonies-of-philadelphia-composer-william-henry-fry/"
} |
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
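To make the four feature families named above concrete, the sketch below shows one plausible way to compute them for a single image with OpenCV and NumPy. These operationalizations (Canny edge density for contrast changes, Hough-detected segments for straight-line density, HSV statistics for saturation and hue diversity) are illustrative assumptions, not necessarily the authors' exact pipeline.

```python
# A minimal sketch of plausible feature operationalizations (not
# necessarily the authors' exact pipeline): edge density, straight-line
# density, mean saturation, and hue diversity.
import cv2
import numpy as np

def naturalness_features(path):
    bgr = cv2.imread(path)                        # load image (BGR order)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    n_pixels = gray.size

    # Density of contrast changes: fraction of pixels marked as edges.
    edges = cv2.Canny(gray, 100, 200)
    edge_density = np.count_nonzero(edges) / n_pixels

    # Density of straight lines: total length of Hough-detected
    # segments, normalized by image area.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                               minLineLength=20, maxLineGap=5)
    if segments is None:
        line_density = 0.0
    else:
        lengths = [np.hypot(x2 - x1, y2 - y1)
                   for x1, y1, x2, y2 in segments[:, 0]]
        line_density = sum(lengths) / n_pixels

    # Average color saturation (HSV S channel, rescaled to 0-1).
    saturation = hsv[:, :, 1].mean() / 255.0

    # Hue diversity: Shannon entropy of the hue histogram
    # (OpenCV 8-bit hue runs from 0 to 179).
    hist, _ = np.histogram(hsv[:, :, 0], bins=36, range=(0, 180))
    p = hist / hist.sum()
    hue_diversity = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    return edge_density, line_density, saturation, hue_diversity
```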
Citation: Berman MG, Hout MC, Kardan O, Hunter MR, Yourganov G, Henderson JM, et al. (2014) The Perception of Naturalness Correlates with Low-Level Visual Features of Environmental Scenes. PLoS ONE 9(12): e114572. https://doi.org/10.1371/journal.pone.0114572
Editor: Lawrence M. Ward, University of British Columbia, Canada
Received: September 3, 2014; Accepted: November 11, 2014; Published: December 22, 2014
Copyright: © 2014 Berman et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. The low-level visual feature data are contained in the Supporting Information files as well as all of the images that were used in the analysis.
Funding: This work was partially funded by the The Tom and Kitty Stoner Foundation (TKF) and the University of South Carolina to MGB. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Research has demonstrated that interacting with natural environments can have beneficial effects on memory and attention for healthy individuals and for patient populations. In addition, views of natural settings have been found to reduce crime and aggression, and also to improve recovery from surgery.
All of this evidence points to the importance of interacting with natural environments to promote mental and physical health. Yet, it is not clear exactly what it is about natural environments compared to urban or built environments that leads to these benefits. Problematically, there are numerous dimensions that differentiate natural from urban environments, so uncovering the most salient features that define natural environments would seem important given that there is something about natural environments that leads to salubrious effects for both cognitive and affective processing.
While there are a number of theories that posit why nature is restorative, it would be difficult to use these theories to inform the design of green spaces because these theories tend not to outline in a prescriptive way how to design a natural space to obtain the most benefit. In his seminal 1995 paper, Kaplan does list some criteria that would appear to be important for a natural environment being restorative: the environment must have sufficient extent, the environment must be compatible with one's goals, the environment must give people the sense of being away, and the environment must be fascinating. For the most part, it is currently not known how some of these concepts could be used to design a greenspace in a way to optimize psychological functioning.
The purpose of this research is to define low-level visual features that define objective and subjective measures of naturalness. We are not the first to examine how objective measures may characterize classes of natural and urban scenes as this has been done with great success and sophistication in the context of computer vision and mammalian vision. However, the purpose of those studies was to classify/categorize scene types or to relate the biology of primary vision to statistical regularities of natural scenes. Here our purpose was not the classification of scenes, but rather identifying simple, low-level visual features that related to subjective perceptions of naturalness and could be readily manipulated in visual stimuli. Future research could then use such features, which can be easily manipulated, to test and design new environments in ways that may improve psychological functioning.
We accomplished this in three experiments. In the first experiment we had participants rate the similarity of images of parks that had varied natural and built content. Afterwards we examined these similarity data using a multidimensional scaling analysis (MDS). This technique was used to identify the underlying featural dimensions that participants relied on when making their similarity estimates (similar procedures have been utilized by Ward and colleagues). To obtain explicit labels for the uncovered dimensions from MDS (the makeup of MDS dimensions must be inferred from the organization of the space), we conducted a second experiment in which naïve participants examined the MDS output, and labeled the dimensions according to their subjective impression of how the space was organized.
The most common label for the first dimension, i.e., the dimension that explained the most variance in similarity, was naturalness. Importantly, these dimension weights correlated strongly with direct measures of naturalness on each of the images as determined by a second group of independent raters (Experiment 3). Finally, we quantified low-level visual features for all of our images and were able to relate some of these low-level visual features to direct measures of naturalness and also to the first MDS dimension that represented a latent measure of naturalness.
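As an illustration of how such a prediction could be set up in outline, here is a minimal cross-validated classifier. The excerpt does not specify which learning algorithm or validation scheme the authors used, so a standardized logistic regression stands in, and the feature matrix and labels below are random placeholders.

```python
# Hedged sketch: predicting binary natural-vs-built labels from
# low-level visual features with cross-validated accuracy. The paper's
# exact classifier is not specified in this excerpt; logistic
# regression is used purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 4))      # stand-in: 70 images x 4 features
y = rng.integers(0, 2, size=70)   # stand-in: 1 = perceived as natural

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```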
Importantly, it has not been determined the degree of correspondence between subjective and objective measures of ‘naturalness’ and this study addresses this point head on. Our results show a strong correspondence. Another equally important point is that by uncovering the features that are most related to perceived naturalness, or defining what a natural environment is, these features may be found to be causal in producing the positive effects of interacting with natural environments. The features that define perceived naturalness could then be manipulated in future work to determine how they impact the restorativeness of natural environments.
Experiment 1: Spatial Multidimensional Scaling (MDS)
Materials and Methods
Twenty participants from the University of Michigan took part in this study (mean age = 19.8; # female = 20). This research was approved by the Institutional Review Board of the University of Michigan (IRB #HUM00006681). All participants provided written informed consent as administered by the Institutional Review Board of the University of Michigan (IRB# HUM00006681). Participants were compensated $10/hour for their participation.
The stimuli used in this study were photographs (.BMP format) taken from parks built by the TKF Foundation, a private foundation based in Annapolis, MD. The parks were from a range of locations around the Baltimore, Washington D.C. and Annapolis area. The photographs were resized to a maximum of 200 pixels along either dimension, maintaining the original aspect ratio, and ensuring that the images were large enough so that the visual information in the images was easily detectable. They were shown on a monitor that was 41 cm × 30 cm, at a resolution of 1280×1024.
In order to assess the dimensions that characterized the TKF sites we composed a paradigm to compare images from 70 different TKF sites. We performed this experiment twice using different images of each site for each iteration of the experiment. We did so to ensure that our results were not idiosyncratic to the particular pictures that were selected; 62 sites had multiple pictures, but the other sites did not. Therefore, we used pictures from 16 additional TKF sites that only had single images (8 for each set). One set of images we labeled set 1 and the other set of images was labeled set 2.
On each trial, fifteen different pictures (pulled from a set of seventy total pictures) were shown to the participant, arranged in 3 discrete rows, with random item placement. Fifteen images was the largest set of images that could be displayed simultaneously to the participants without overcrowding the display. Participants were instructed to drag and drop the images in order to organize the space such that the distance among items was proportional to each pair's similarity (with closer in space denoting greater similarity). Participants were given as much time as they needed to scale each set; typically, trials lasted between 2 and 5 minutes. The x- and y-coordinates of each image were then recorded and the Euclidean distance between each pair of stimuli was calculated (for 15 stimuli there are 105 pairwise Euclidean distances). This procedure was performed repeatedly (over 29 trials), but with different image sets on each trial, so that all pairwise comparisons among the 70 total images were recorded. Thus, this provided a full similarity matrix comparing the ratings of each image to all of the other images (i.e., all 2415 comparisons) for each participant. This took participants about an hour to complete; similar rating procedures have been used by other researchers.
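A minimal sketch of this aggregation step, assuming each trial yields the indices of the 15 displayed images and their final (x, y) pixel placements:

```python
# Sketch: convert one trial's (x, y) image placements into pairwise
# Euclidean distances, and accumulate them into a full dissimilarity
# matrix, averaging pairs that were rated on more than one trial.
import numpy as np
from itertools import combinations

def accumulate_trial(dist_sum, dist_count, ids, xy):
    """ids: indices of the images shown this trial; xy: (15, 2) placements."""
    for (i, a), (j, b) in combinations(zip(ids, xy), 2):
        d = np.hypot(a[0] - b[0], a[1] - b[1])
        dist_sum[i, j] += d; dist_sum[j, i] += d
        dist_count[i, j] += 1; dist_count[j, i] += 1

n = 70
dist_sum = np.zeros((n, n))
dist_count = np.zeros((n, n))
# ... call accumulate_trial(dist_sum, dist_count, ids, xy) once per trial ...
# Mean dissimilarity, leaving never-paired cells and the diagonal at zero:
with np.errstate(invalid="ignore"):
    dissim = np.where(dist_count > 0, dist_sum / dist_count, 0.0)
```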
We controlled the selection of images on each trial by employing a Steiner System; these are mathematical tools that can be used to ensure that each item in a pool is paired with every other item (across subsets/trials) at least once. A Steiner System is denoted S(v, k, t), where “v” is the total number of stimuli, “k” is the number of items in each subset, and “t” is the number of items that need to occur together. Thus for us, v, k, and t, are 70 (total images), 15 (images per trial), and 2 (denoting pairwise comparisons), respectively. Simply put, the Steiner System provides a list of subsets (i.e., trials) identifying which items should be presented together on each trial. For some combinations of v and k, there may exist a Steiner set that does not repeat pairwise comparisons (i.e., each pair of items is shown together once and only once). For other combinations (including ours), some stimuli must be shown with others more than once. Because this leads to multiple observations per “cell”, we simply took the average of the ratings for the pairs that were shown together more than once. Across participants, images were randomly assigned to numerical identifiers in the Steiner System, which ensured that each participant saw each pair of images together at least once, but that different people received different redundant pairings.
After the similarity matrices were composed, we performed multidimensional scaling on the pairwise Euclidean distances using PROXSCAL, implemented in SPSS. PROXSCAL allows for both metric and non-metric MDS, and we chose the metric version since pixel distances should be equivalent across the screen (i.e., the distance between 10 and 15 pixels should be psychologically the same as the distance between 30 and 35 pixels). We did not rotate the spaces because the group plots in MDS utilize orthogonal dimensions.
To determine the appropriate dimensionality for our data, we created Scree plots for each MDS space, plotting the model's stress against the number of dimensions used in the space. Stress functions vary across scaling algorithms (PROXSCAL uses “normalized raw stress”), but all are designed to measure the agreement between the estimated distances provided by the MDS output and the raw input proximities themselves (lower stress values indicate a better model fit). Scree plots are often used to determine the ideal dimensionality of the data by identifying the point at which added dimensions fail to improve the model fit substantially. For all four datasets, we found that stress levels plateaued at 4 dimensions (see Fig. 1); thus, the data were analyzed in 4 dimensions.
Experiment 1 Results
The results of the MDS analysis on the first set and the second set are displayed in Figs. 2 and 3. In those figures the scenes are superimposed on the resulting MDS plot so that the images are plotted based on their weights on dimension 1 and dimension 2. The data were scaled in 4 dimensions, as previously stated, in order to obtain the most appropriate organization of the space overall. However, we limited our forthcoming analyses to the weightings of dimensions 1 and 2, as those were the dimensions that explained the most variance in similarity.
The pictures are placed in the image based on their weights on dimension 1 and 2. A subset of the 70 images is plotted here because there are too many images to make this plot readable.
The pictures are placed in the image based on their weights on dimension 1 and 2. A subset of the 70 images is plotted here because there are too many images to make this plot readable.
To our eye, dimension 1 seemed to code for the naturalness of the images, with more ‘natural’ images having smaller/negative weights on dimension 1 and ‘built’ images having larger weights on dimension 1. It should be noted that the particular orientation of the dimensions is unimportant; what matters is the placement of items relative to other items. Thus, if the poles were reversed (i.e., ‘natural’ images had larger weights and ‘built’ images had smaller weights), the interpretation of a “naturalness” dimension would be unchanged.
Dimension 2 was a bit more difficult to characterize. Importantly, dimension 1 appeared to be similar for both the first and second sets, suggesting that there was not something idiosyncratic about one set of images that produced these results. To validate that dimension 1 was coding for the naturalness of the scenes, we conducted a second experiment.
Experiment 2: Subjective Labeling of Dimensions
Materials and Methods
To validate that dimension 1 (in both spaces) represented naturalness, we had naïve participants label dimensions 1 and 2 for the first and second sets of images to identify the most commonly produced labels.
Fifty-seven students from the University of Michigan and the University of South Carolina participated in our study (Michigan: 43 participants across two samples: Sample 1, 20 total, # female = 15, mean age = 19.95; Sample 2, 23 total, # female = 14, mean age ≈ 24 (ages were not recorded, but these were masters students, so the mean age should be around 24); South Carolina: 14 participants, # female = 9, mean age ≈ 20 (ages and gender were not recorded, but these were undergraduate students matching the typical demographics of the experimental pool at the University of South Carolina)). This research was approved by the Institutional Review Board of the University of Michigan (IRB #HUM00006681) and the Institutional Review Board of the University of South Carolina (IRB #Pro00028529). All participants provided written informed consent as administered by the institutional review board of the University of Michigan (IRB# HUM00006681) and the University of South Carolina (IRB# Pro00028529).
We had participants provide subjective labels for dimensions 1 and 2 from the first set of images and from the second set of images. Participants viewed posters of Figures 2 and 3 and were instructed to: “Come up with a single word or phrase that best describes to you what the difference is between the left and right of each poster, and what the difference is between the top and bottom of each poster. This means that you will be coming up with two labels for each poster, one for left/right, and one for top/bottom.” Participants then told an experimenter their labels for dimensions 1 and 2 for the first and second posters. These labels were then aggregated for analysis.
From our reading of the generated labels, it appeared that the words nature or natural occurred frequently as descriptions of dimension 1. We performed a simple analysis in which we counted the number of words that were repeated in the labels for the first dimension for the first and second sets of images. In addition, we removed function words from the labels, such as ‘to’ and ‘left’, to restrict the analysis to words with semantic content.
Experiment 2 Results
Some of the most common themes that were uncovered from our analysis were: buildings (10 times listed), nature (9 times listed), space (8 times listed), paths (7 times listed), pathways (7 times listed), path (6 times listed), natural (5 times listed), gardens (5 times listed), manmade (4 times listed), organic (4 times listed), softscape hardscape (3 times listed) and difference (3 times listed). No other word was listed more than once. From this analysis it appeared that naïve participants were seeing what we were seeing, i.e., that dimension 1 seemed to be coding for something more vs. less natural or more built vs. more organic.
Experiment 3: Rating the Perceived Naturalness of the Images
Based on the results of the poster labeling experiment, it appeared that MDS dimension 1 was coding for naturalness (low-scores) vs. manmadeness (high-scores). To test this directly, we had participants rate the perceived naturalness of each image.
Materials and Methods
Fourteen participants from the University of Michigan participated in our study (mean age = 19.2; # female = 7). This research was approved by the Institutional Review Board of the University of Michigan (IRB #HUM00006681). All participants provided written informed consent as administered by the institutional review board of the University of Michigan (IRB# HUM00006681).
For this experiment, we added 50 high-natural and 50 low-natural images that were used in previous work. These include scenery of Nova Scotia and pictures of Ann Arbor, Detroit and Chicago. We added these images to 207 images that were from 87 TKF sites (images from 87 areas in urban parks from Annapolis, Baltimore and Washington), giving us a total of 307 images. All of the TKF images from Experiment 1 were included, along with additional TKF images that were not in Experiment 1, so that participants rated naturalness on a larger set of data. In addition, whereas in Experiment 1 the images were re-sized to be smaller, the images in this experiment were shown in their native resolution and were in three different sizes: 512*384, 685*465, and 1024*680 pixels. Importantly, all image features were normalized to the size of the images.
Participants provided their ratings of naturalness on all 307 images. Participants were shown a single image at a time and rated it on a scale of 1 to 7 for how natural they considered the image to be. A ‘1’ indicated that the participants considered the image to be very manmade and ‘7’ indicated that participants considered the image to be very natural. A ‘4’ indicated that the image was not judged to be very natural or manmade (therefore anything below 4 was judged to be more manmade and anything above 4 was judged as being more natural). We then correlated these values with the weights on dimension 1 to check for correspondence between the ratings of naturalness and the latent variable ‘naturalness’ as revealed in the MDS analysis.
Significant correlations were found between perceived naturalness ratings and weights on dimension 1 for both the first set, r(70) = −.84, p<.0001 and the second set, r(70) = −.75, p<.0001. The scatter plots can be seen in Fig. 4. The correlations are negative because negative weights on dimension 1 indicate more ‘naturalness.’
The significant correlation between weights on dimension 1 from MDS and the direct naturalness ratings suggests that MDS dimension 1 is coding for naturalness. In many respects, weights on dimension 1 can be interpreted as representing latent naturalness as participants were simply rating the similarity of the images, and naturalness could have been one of many factors that was used to rate similarity. Taken together, this suggests that the MDS analysis produced highly interpretable and reliable dimensions.
Classifying Naturalness with Low-level Visual Features
The analyses thus far have established that individuals have consistent perceptions of what they consider to be natural images and that this dimension explains a good deal of variance in people's ratings of similarity of urban parks. Another question that we asked is whether low-level, objective visual features were related to subjective measures of naturalness. If we can find significant relationships between visual features and perceived naturalness, then it is possible that those features most related to naturalness may produce the positive psychological benefits that are attained from interactions with natural environments.
Low-Level Visual Features
Ten low-level visual features were used in our analysis and were correlated with the perceived naturalness ratings to see whether any were related to perceived naturalness. These features were divided into color properties and spatial properties.
Color properties of the images were calculated based on the standard HSV model (Hue, Saturation, and Value) using the MATLAB image processing toolbox built-in functions (MATLAB and Image Processing Toolbox Release 2012b, The MathWorks, Inc., Natick, Massachusetts, United States). 1) Hue is the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, green, or blue. Hue describes a dimension of color that is readily experienced (i.e., the dominant wavelength in the color). We calculated the average hue across all image pixels and the average standard deviation of hue across all of an image's pixels for each image. The average hue represents the hue level of the image and the 2) standard deviation of hue (SDhue) represents the degree of diversity in the image's hue. 3) Saturation (Sat) is the degree of dominance of hue mixed in the color, or the ratio of the dominant wavelength to other wavelengths in the color. We calculated the average saturation of each image across all image pixels, as well as the 4) standard deviation of saturation for each image (SDsat). We also measured the overall darkness-to-lightness of a pixel's color depending on the brightness of the pixel. This dimension of color is called 5) Brightness (Bright) or the value of the color. We computed the average brightness of all pixels for each image, as well as the 6) standard deviation of brightness in each image (SDbright). Fig. 5 shows hue, saturation, and brightness maps of a sample image in our experiment, and Fig. 6 compares two images in terms of their color diversity (SDHue, SDSat and SDbright).
a) A sample image (b) Image's saturation map (c) Image's hue map (d) Image's brightness map.
In this section we describe how we calculated the spatial features of our images. A grayscale histogram of an image shows the distribution of intensity values of the pixels that construct an image. Each pixel can have an intensity value of 0 to 255 (8-bit grayscale), and for a histogram with 256 bins, the probability value of the nth bin of the histogram (p_n) is the number of pixels in the image that have an intensity value of n-1 divided by the total number of pixels in the image. 7) Entropy of a grayscale image is a statistical measure of randomness that can be used to characterize part of the texture of an image using the intensity histogram. We used a simple definition of entropy: E = -Σ_n p_n log2(p_n) (Eq. 1), where p_n is the probability value of the nth bin of the histogram. Entropy shows the average “information” content of an image. The more the intensity histogram resembles a uniform distribution (all intensity values occur with the same probability in the image), the greater the entropy value of the image. We calculated the entropy of the images as a measure of uncertainty or “information” content (versus redundancy) in the image's intensity values. More comprehensive and sophisticated definitions of image entropy have previously been applied to natural images, but those are outside the scope of this study. Here we aim to define simple features that are not computationally intensive to calculate and can be readily manipulated in visual stimuli. Fig. 7 shows a comparison of high vs. low entropy in two images.
Another image feature that we calculated in this study concerned the spatial or structural properties of images as provided by image gradients. An image gradient is a map of the image's brightness intensity or color changes in a given direction. The points of discontinuity in brightness (rapid brightness or color changes) mainly consist of object, surface, or scene boundaries and fine details of texture in an image, and are called edges. Images in this study (especially the more natural scenery) contain complex detailed texture and fragmentations, which could lead to some complexities in edge detection.
The most commonly used method for edge detection is the Canny edge detection algorithm. This usually consists of five stages: first, blurring (or smoothing) an image with a Gaussian filter to reduce noise; second, finding the image gradients using derivatives of Gaussian operators; third, suppressing non-maximum gradient values; fourth, double thresholding weak and strong edges; and finally, edge tracking of weak or disconnected edges by hysteresis. This method is therefore less likely than other methods to be influenced by noise, and more likely to detect true weak edges. We used MATLAB's built-in function ‘edge’ and set the method to ‘canny’ to calculate lower and upper thresholds to be used by the Canny edge detection for each image. Then, the same function was used for each image with the determined sensitivity thresholds multiplied by either 0.8 (high sensitivity threshold) or 1.6 (low sensitivity threshold). We weighted faint and salient edges differently, so that each pixel could have a value of 0, 1, and 2 depending on how sharp of an edge it belonged to. Pixels assigned values of ‘0’ were not identified as edges by the Canny edge detection algorithm at high sensitivity thresholds; pixels assigned values of ‘1’ were only detected as edges when using the high sensitivity threshold and not when using the less sensitive threshold (and therefore were less salient edges); finally, pixels assigned values of ‘2’ were detected as edges with the lower sensitivity threshold and therefore were the most salient.
Next, we quantified the pixels belonging to straight lines (horizontal, vertical and oblique lines) so that straight edge density and non-straight edge (curved or fragmented edges) density of images could be quantified and separated. Because of the complexity of the images, a typical Hough transform-based method could not detect straight lines accurately. Instead, we used a simple gradient-based connected component algorithm to detect straight lines in the images.
First, the images were convolved with the derivative of a Gaussian filter in the X and the Y directions to compute the gradient directions for Canny edges. Then each edge was assigned to one of 8 directions based on its value of arctan(Gy/Gx), where Gy and Gx are the y and x gradients. Then the connected components for the edge pixels in each direction were determined and labeled using MATLAB's ‘bwconncomp’ function. Finally, the eigenvalues of the covariance matrix of the X and the Y coordinates of the points for each connected component (i.e., edge) were used to compute the direction (i.e., the direction of the first principal component vector) and the straightness of the components. The first PC of the edges' coordinates should be parallel to the edge's direction and the second PC captures the variability of the edge's coordinates perpendicular to its direction. Pixels of a connected component above a threshold of straightness (i.e., the singular value for the first principal component needed to be greater than 10^4 times the singular value for the second component) met the criterion of a “straight edge.”
The number of pixels on straight edges and those on non-straight edges were divided by the total number of pixels in the image to create 8) Straight Edge Density (SED), and 9) Non-straight Edge Density (NSED). Fig. 8 shows maps of detected edges and straight edges in a sample image. Importantly, all of these features were normalized to the number of pixels in the image.
Fig. 9 displays the correlation of the low-level visual features with perceived naturalness ratings. Hue, NSED, SED, SDhue, and SDsat all significantly correlated with naturalness. These data suggest that low-level visual features (objective measures) can be used to predict individuals' perceptions of naturalness. To test this more directly, we trained a linear discriminant classification algorithm to predict whether an image was rated as natural or not based on these low-level visual features.
Linear and Quadratic Discriminant Classification of Perceived Naturalness
To examine how reliably these low-level visual image features predicted the perceived naturalness of the images, we trained two multivariate machine-learning algorithms, the linear discriminant classifier and the quadratic discriminant classifier, utilizing the low-level visual features to predict the perceived naturalness of the images. Utilizing a leave-one-out framework we could test how well each classifier could accurately predict the perceived naturalness of the image.
We implemented two multivariate machine-learning algorithms to classify individual's perceptions of naturalness of images based on low-level visual features of the images. The first classifier that we used was a Linear Discriminant (LD) classification algorithm that attempts to define a plane that separates two classes. Implementation of LD classification was performed using the ‘classify’ function in the Statistics toolbox in Matlab (the classifier type was set to ‘linear’). LD classifiers use a multivariate Gaussian distribution to model the classes and classify a vector by assigning it to the most probable class. The linear discriminant classification model contains an assumption of homoscedasticity, i.e., that all classes are sampled from populations with the same covariance matrix. For our purposes, this assumption means that (a) the variance of each low-level visual feature does not change for high vs. low natural images and (b) the correlation between each pair of features is the same for high and low natural images. Importantly, the assumption of homoscedasticity is equivalent to separating the two classes, (high and low natural images), with a linear plane in feature space. The plane is defined as a linear combination of features; the weight of each feature reflects the contribution of this feature to classification (that is, most relevant features have the highest absolute value of weight).
The second classification algorithm that we implemented was a Quadratic Discriminant (QD). Implementation of QD classification was performed using the ‘classify’ function in the Statistics toolbox in Matlab (the classifier type was set to ‘quadratic’). Like the LD classifier, the QD classifier uses a multivariate Gaussian distribution to model the classes and classifies a vector by assigning it to the most probable class. However, the QD model contains no assumption of homoscedasticity, and instead estimates the covariance matrices separately for each class (that is, the variances of and the correlations between features are allowed to differ across high vs. low-natural images). This means that when implementing QD the two classes are separated by a non-linear curved surface. Both LD and QD algorithms have been implemented with great success to classify brain states and participants' brain activity patterns.
We evaluated the success of each classifier using a cross-validation approach. A subset of images was used to train the classifier, and the image type (high natural vs. low natural) was predicted for the images that were not included in the training set. At each iteration, two images (1 high-natural and 1 low-natural) were held out for testing, and the remaining 305 were used to train the classifier; this process was repeated so that all combinations of high- and low-natural images were tested.
For each combination of left-out high- and low-natural image we computed whether the image type was predicted accurately. The proportion of images that were accurately predicted was our metric of prediction accuracy, our main measure of the efficacy of the classifier.
The LD classifier was able to successfully predict whether an image was perceived as high- vs. low-natural with 79% accuracy. This prediction accuracy is well above chance performance (50%) and suggests that these low-level visual features reliably predict individuals' perceptions of naturalness. When we examined the features that appear most critical to classification, we found that edge density, the number of straight edges, and the standard deviation of hue were the most critical features. These feature weights are displayed in Fig. 10. Greater edge density, fewer straight edges and lower standard deviations in hue (less hue diversity) were all related to greater perceived naturalness.
A high absolute value of the weight indicates that the feature is important for classification. A positive weight indicates that increasing this feature would lead to increased perceived naturalness; a negative weight indicates that increasing this feature would lead to a decrease in perceived naturalness. Error bars reflect 2 standard deviations from the mean.
In addition, we noticed that some features, such as the number of curved edges, had a non-linear relationship with ratings of naturalness. Therefore, we ran the classification analysis a second time using a non-linear classifier, i.e., the quadratic discriminant, which capitalizes on these non-linear relationships as well as their interactions. When doing so, classification accuracy increased to 81%.
Researchers have shown that the principal components of natural images, derived from their frequency spectral maps, can capture some of the systematic properties of natural versus man-made scenery. In order to find out how the visual features used in this study correlate with the spectral principal components of the images, we ran a principal component analysis on the images similar to that of previous work. We resized each image to 256*256 pixels, performed a discrete Fourier transform on each image, and then reshaped it to a single column vector (65536*1). All 307 spectral images were then aggregated into a 65536*307 matrix and a principal component analysis was performed on the concatenated image matrix.
The first 4 principal components explained 95% of the variability in the magnitude of Fourier coefficients between images and were correlated with the low-level visual features. Fig. 11 shows the correlation matrix of these PCs with the simple image features. The PCs did correlate significantly with some of our features, such as many of the color features, entropy, SED and NSED. As such, we performed another classification analysis utilizing these 4 principal components to predict perceived naturalness. When doing so we obtained above-chance accuracy for both LD (prediction accuracy = 60.4%) and QD (prediction accuracy = 64.0%). However, classification with these PCs was not as strong as the classification accuracy calculated with the derived low-level features. In summary, these features significantly predicted ratings of perceived naturalness, linking low-level objective measures to subjective measures of naturalness.
The color bar indicates the strength of the correlation from -.4 to +.4
To further inspect the relation of these visual features with the naturalness dimension we previously obtained from the MDS analysis, we also regressed the weights on dimension 1 on the image features for both the first and second sets of stimuli. The results show that these features explain a significant amount of variance in dimension 1 weights (40% for the first set and 35% for the second set; p < 10^-6 and p < 10^-5, respectively). The results for the regressions are shown in Tables 1 and 2. These results complement the results from the classification analysis and show that low-level features do reliably predict subjective perceptions of naturalness.
Previous research has shown that interacting with natural environments can have a salubrious effect on cognitive and affective processing compared to interacting with more urban/manmade environments. This suggests that there is something about natural environments that differs from urban environments and that could improve psychological functioning.
The problem is that finding such features is difficult given how many features differ between these two environments. In this work, we found that individuals' perceptions of naturalness are quite consistent. This is corroborated by the fact that in the MDS analysis, the dimension that explained the most variance in individuals' perceptions of similarity in scenes was strongly related to direct ratings of naturalness. To take this one step further, we were able to link perceptions of naturalness with objective low-level visual features such as the density of edges, straight lines and hue diversity. This means that we have objective measures that significantly predict perceived naturalness and therefore may also be features that could be manipulated to improve psychological functioning.
Notably, our work replicates important previous research by Ward and colleagues. In that work, a sample of 20 photographs was used with more extreme levels of natural content (e.g., inside a rainforest, the Grand Canyon) and urban content (e.g., an aerial view of San Francisco, a smoggy freeway). With these images, Ward and colleagues had participants make pairwise similarity judgments between the 20 images (i.e., 190 pairs) and then ran an MDS analysis. Importantly, the first dimension that was uncovered from those experiments coded for the naturalness vs. constructed character of the images. In another study, naturalness was also correlated with dimension 1 weights, but was confounded with the openness vs. enclosedness of the images (it is also worth noting that the number of significant dimensions in those studies was around 4–5, which is similar to our study). The results presented here replicate this earlier work on a much larger set of images and with less variability in image content (i.e., our images were not just at the extremes of naturalness vs. manmadeness, but had a large distribution with many intermediately rated images). Therefore, our more restricted range of environments may better represent the types of environments encountered in daily life. In addition, the results also demonstrate that the phenomena are measurable even across a more subtle range of naturalness.
Importantly, this work extends upon the previous findings by identifying low-level visual features that could be manipulated to uncover whether any of these features may improve psychological outcomes (i.e., attention and mood). Without identifying the features that are related to naturalness, it would be difficult to construct an experiment aimed at uncovering physical features of the environment that may lead to improvements in psychological functioning, and even more difficult to design a future built environment to improve psychological functioning. Our work helps to provide a foundation and methods for classifying and quantifying our physical environment in psychologically and behaviorally meaningful ways.
Additionally, many researchers have achieved great success in training machine-learning algorithms on low-level features to classify natural vs. man-made environments using sophisticated analyses. Here we utilized simpler metrics to link with subjective measures of perceived naturalness, because our goal was to define simple, objective measures that could be easily manipulated (e.g., the number of curved and fragmented edges, average color diversity, or the number of straight lines) in order to study whether those features may improve psychological functioning.
One limitation of this study is the fact that we used images of natural and urban environments, which are by definition abstractions of the true environment. More specifically, JPEG and other lossy image compression schemes can alter spatial and color image statistics. One could draw conclusions about human perceptions of naturalness for compressed scenes in terms of their statistics, but this may not capture human perception of natural and urban scenes in the “wild.” However, the fact that previous work used some of the images that were also used here and found restorative effects (i.e., improvements in memory and attention) after exposure to those natural images suggests that compressed images do preserve at least some of the visual properties of scenes that lead to psychological benefits. Additionally, there are practical considerations when performing such studies, and it would be difficult to quantify many of these low-level features in the wild.
Another potential limitation is that one may be able to create abstract images that contain many of the features of nature but are not perceived as being natural (e.g., a Jackson Pollock painting). This would make classifying those images difficult, but it is possible that exposure to abstract art that contains many of the low-level features of nature could be restorative. This is a topic that we plan to pursue in the future, i.e., whether the low-level features on their own are restorative or whether the semantic meaning of nature is necessary for restoration.
Along these same lines, it is not clear if the low-level visual features used in our study objectively measure naturalness. For example, our data suggest that environments perceived as more natural contain more non-straight edges, fewer straight edges, and less color saturation, but it could be that other non-natural environments could also be found to share some of these distributional properties that we are finding for our natural environments (e.g., some of Antoni Gaudi's architecture in Barcelona). As such it is not necessarily the case that these low-level visual features define naturalness per se. However, based on the success of our algorithms in predicting the perceptions of naturalness, we are confident that these low-level features are related to naturalness, but may not be exclusive to natural environments. More images and scene-types would be needed to draw more definitive conclusions.
In his seminal 1995 paper, Kaplan lists some criteria that would appear to be important for a natural environment to be restorative: the environment must have sufficient extent; the environment must be compatible with one's goals; the environment must give people the sense of being away; and the environment must be fascinating. We have attempted to take the work of Kaplan one step farther by defining the low-level visual features that may drive perceptions of naturalness with the hope that these features may be causal to the beneficial effects of interacting with natural environments. It is our hope that in future work we can more fully identify the features of nature that may lead to psychological benefits so that those features can be utilized in future designs of the built environment.
Feature values for the images used in our study.
This work was supported in part by a grant from the TKF Foundation to M.G.B. and an internal grant from the University of South Carolina. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We thank Marie Yasuda for her help in data collection.
Conceived and designed the experiments: MGB MCH MRH JJ OK. Performed the experiments: MGB MCH MRH JJ OK HK TH. Analyzed the data: MGB MCH OK GY. Contributed reagents/materials/analysis tools: MGB MCH OK JJ. Wrote the paper: MGB MCH OK JJ HK JMH GY MRH TH.
- 1. Berman MG, Jonides J, Kaplan S (2008) The Cognitive Benefits of Interacting With Nature. Psychological Science 19:1207.
- 2. Kaplan S, Berman MG (2010) Directed Attention as a Common Resource for Executive Functioning and Self-Regulation. Perspectives on Psychological Science 5:43.
- 3. Berto R (2005) Exposure to restorative environments helps restore attentional capacity. Journal of Environmental Psychology 25:249.
- 4. Taylor AF, Kuo FE (2009) Children With Attention Deficits Concentrate Better After Walk in the Park. Journal of Attention Disorders 12:402.
- 5. Berman MG, Kross E, Krpan KM, Askren MK, Burson A, et al. (2012) Interacting with nature improves cognition and affect for individuals with depression. Journal of Affective Disorders 140:300–305.
- 6. Cimprich B, Ronis DL (2003) An environmental intervention to restore attention in women with newly diagnosed breast cancer. Cancer nursing 26:284.
- 7. Kuo FE, Sullivan WC (2001) Environment and crime in the inner city - Does vegetation reduce crime? Environment and Behavior 33:343.
- 8. Kuo FE, Sullivan WC (2001) Aggression and violence in the inner city - Effects of environment via mental fatigue. Environment and Behavior 33:543.
- 9. Ulrich RS (1984) View through a window may influence recovery from surgery. Science 224:420–421.
- 10. Kaplan S (1995) The Restorative Benefits of Nature - Toward an Integrative Framework. Journal of Environmental Psychology 15:169.
- 11. Ulrich RS, Simons RF, Losito BD, Fiorito E, Miles MA, et al. (1991) Stress Recovery during Exposure to Natural and Urban Environments. Journal of Environmental Psychology 11:201.
- 12. Mayer FS, Frantz CM, Bruehlman-Senecal E, Dolliver K (2009) Why Is Nature Beneficial? The Role of Connectedness to Nature. Environment and Behavior 41:607.
- 13. Wilson EO (1984) Biophilia. Cambridge, MA: Harvard University Press.
- 14. Kellert SR, Wilson EO (1993) The Biophilia Hypothesis. Washington, D.C.: Island Press.
- 15. Oliva A, Torralba A (2001) Modeling the shape of the scene: A holistic representation of the spatial envelope. International journal of computer vision 42:145–175.
- 16. Torralba A, Oliva A (2003) Statistics of natural image categories. Network: computation in neural systems 14:391–412.
- 17. Fei-Fei L, Perona P. A Bayesian hierarchical model for learning natural scene categories; 2005. IEEE. pp. 524–531.
- 18. Huang J, Mumford D. Statistics of natural images and models; 1999. IEEE.
- 19. Ruderman DL (1994) The statistics of natural images. Network: computation in neural systems 5:517–548.
- 20. Field DJ (1987) Relations between the statistics of natural images and the response properties of cortical cells. JOSA A 4:2379–2394.
- 21. van Hateren JH, van der Schaaf A (1998) Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London Series B: Biological Sciences 265:359–366.
- 22. Olshausen BA (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381:607–609.
- 23. Baddeley RJ, Hancock PJ (1991) A statistical analysis of natural images matches psychophysically derived orientation tuning curves. Proceedings of the Royal Society of London Series B: Biological Sciences 246:219–223.
- 24. Shepard RN (1980) Multidimensional scaling, tree-fitting, and clustering. Science 210:390–398.
- 25. Hout MC, Papesh MH, Goldinger SD (2013) Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science 4:93–103.
- 26. Ward LM (1977) Multidimensional scaling of the molar physical environment. Multivariate Behavioral Research 12:23–42.
- 27. Ward LM, Porter CA (1980) Age-group differences in cognition of the molar physical environment: A multidimensional scaling approach. Canadian Journal of Behavioural Science/Revue canadienne des sciences du comportement 12:329.
- 28. Ward LM, Russell JA (1981) Cognitive set and the perception of place. Environment and Behavior 13:610–632.
- 29. Goldstone R (1994) An efficient method for obtaining similarity data. Behavior Research Methods, Instruments & Computers 26:381–386.
- 30. Hout MC, Goldinger SD, Ferguson RW (2013) The versatility of SpAM: A fast, efficient, spatial method of data collection for multidimensional scaling. Journal of Experimental Psychology: General 142:256.
- 31. Kriegeskorte N, Mur M (2012) Inverse MDS: inferring dissimilarity structure from multiple item arrangements. Frontiers in psychology 3.
- 32. Doyen J, Hubaut X, Vandensavel M (1978) Ranks of incidence matrices of Steiner triple systems. Mathematische Zeitschrift 163:251–259.
- 33. Busing F, Commandeur JJ, Heiser WJ, Bandilla W, Faulbaum F (1997) PROXSCAL: A multidimensional scaling program for individual differences scaling with constraints. Softstat 97:67–74.
- 34. Jaworska N, Chupetlovska-Anastasova A (2009) A review of multidimensional scaling (MDS) and its utility in various psychological domains. Tutorials in Quantitative Methods for Psychology 5:1–10.
- 35. Kersten D (1987) Predictability and redundancy of natural images. JOSA A 4:2395–2400.
- 36. Chandler DM, Field DJ (2007) Estimates of the information content and dimensionality of natural scenes from proximity distributions. JOSA A 24:922–941.
- 37. Klette R, Zamperoni P (1996) Handbook of image processing operators. Chichester; New York: Wiley.
- 38. Canny J (1986) A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8:679–698.
- 39. Berman MG, Yourganov G, Askren MK, Ayduk O, Casey BJ, et al. (2013) Dimensionality of brain networks linked to life-long individual differences in self-control. Nature Communications 4.
- 40. Yourganov G, Schmah T, Churchill NW, Berman MG, Grady CL, et al. (2014) Pattern classification of fMRI data: Applications for analysis of spatially distributed cortical networks. NeuroImage 96:117–132.
- 41. Yourganov G, Schmah T, Small SL, Rasmussen PM, Strother SC (2010) Functional connectivity metrics during stroke recovery. Archives italiennes de biologie 148:259–270.
- 42. Field D (1999) Wavelets, vision and the statistics of natural scenes. Philosophical Transactions of the Royal Society of London Series A: Mathematical, Physical and Engineering Sciences 357:2527–2542.
- 43. Hancock PJ, Baddeley RJ, Smith LS (1992) The principal components of natural images. Network: computation in neural systems 3:61–70.
- 44. Craik KH (1973) Environmental psychology. Annual review of psychology 24:403–422.
- 45. Leykin A, Cutzu F. Differences of edge properties in photographs and paintings; 2003. IEEE. pp. III-541–544.
"date": "2020-01-22T05:09:06",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9306795001029968,
"score": 3.046875,
"token_count": 10841,
"url": "https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0114572"
} |
Facts about Whippet Dogs: the scientific name for the Whippet Dog, or domestic canine, is Canis lupus familiaris. The Whippet Dog was bred for well over a hundred years before it received official recognition in 1891 from the English Kennel Club (EKC).
The Whippet Dog was originally bred for racing, and the breed was nicknamed "the poor man's race horse." Whippet Dogs were first brought to the United States by English mill operators in Massachusetts, and that is where racing this dog breed began in America.
The Whippet Dog combines great speed and balance. The Whippet is a miniature English Greyhound and the fastest domesticated animal in its weight class, capable of speeds up to 35 m.p.h. The phrase "whip it" means to move suddenly or fast in a specified direction. Whippet Dogs come in a wide variety of colors and patterns.
The Whippet Dog is a medium-sized dog that looks very similar to its cousin, the Greyhound. The male weighs around 25 to 45 pounds (11.3 to 20.4 kg) and stands 19 to 22 inches (48.2 to 55.8 cm) tall. Its female counterpart is around 18 to 21 inches (45.7 to 53.3 cm) tall.
The Whippet Dog has a long and lean skull with a wide space between the ears. Its nose is black, dark blue or brown in color. The ears are small, folded and held back. The eyes are oval in shape and dark. The feet are thick, and the front legs are straight, looking more like those of a cat or hare. The Whippet Dog's tail is normally long and held low with a slight curve at the end. The coat is smooth and comes in many colors: brindle, red, tiger white, fawn or slate blue.
Whippet Dogs are very intelligent, affectionate, sweet and lively. The Whippet is also docile and a devoted companion to its owner. It loves quiet and calm places, which is why it does so well living in homes. The Whippet Dog is sensitive, both physically and mentally; for this reason it should not be trained roughly.
The Whippet Dog can be a good watchdog, and it is reserved with strangers. When training the dog, include a variety of activities. The Whippet Dog is good with kids as long as they don't tease or mishandle it. It is a perfect dog for traveling thanks to its cleanliness, as it does not leave a bad odor.
Whippets get along easily with other pets that have been raised together with them, but given the chance, a Whippet Dog may harass unfamiliar animals such as cats.
Whippet Dogs' sweat glands are between their paw pads.
A Whippet Dog sees in color and has better low-light vision than a human. Whippet Dogs have three eyelids: a lower lid, an upper lid and a third lid called a haw or nictitating membrane, which keeps the Whippet Dog's eye protected and moist. Whippet Dog eyes also have a special membrane for seeing better at night, called a tapetum lucidum: a reflective layer in the choroid, found chiefly in nocturnal animals, that causes the eyes to glow when light hits them at night. It consists of several layers of smooth flat cells covered by a section of doubly refracting crystals.
Whippets do well in apartments or yards as long as they are exercised. However, the breed is sensitive to cold, and wearing a coat is advisable to keep it warm.
The lifespan of the Whippet is around 12 to 15 years.
A Whippet Dog's mouth can apply approximately 150 to 200 pounds of pressure per square inch, while an American Pit Bull Terrier, German Shepherd Dog or Rottweiler can average 320 pounds of pressure.
Whippet Dogs have twice as many ear muscles as people, and a Whippet Dog can hear a sound at four times the distance a human can. Sound frequency is measured in Hertz (Hz); Hertz is a measure of frequency, specifically one cycle per second. The higher the Hertz, the higher-pitched the sound. Whippet Dogs hear best at 45,000 Hz to 65,000 Hz, while humans hear best at around 20 Hz to 20,000 Hz.
All dogs, big or small, are identical in makeup: 42 permanent teeth and 321 bones. Whippet puppies have 28 teeth, and as adult dogs they have 42. Female Whippet Dogs are in heat for mating for about 20 days, twice a year. For their first few weeks, Whippet puppies sleep ninety percent of the day, and their vision is not fully developed until after the first month. Female Whippet Dogs are pregnant for about 60 days before their puppies are born.
The number one health problem among Whippet Dogs is obesity, so always make sure your dog doesn't get too fat. Many foot problems that Whippet Dogs have are simply an issue of long toenails. The breed is also prone to skin diseases, anesthesia sensitivity, deafness, eye diseases, von Willebrand's disease (VWD) and stomach upsets.
When purchasing a Whippet Dog from a breeder, make sure to find a good breeder with references, and check on at least two to three of the puppies that were purchased from this breeder.
The Whippet Dog belongs to the Hound Group and was recognized by the American Kennel Club (AKC) in 1888.
The average body temperature for a Whippet Dog is between 101 and 102.5 degrees Fahrenheit.
A Whippet Dog is an omnivore (definition: an animal that eats both other animals and plants). All dogs are direct descendants of wolves.
Whippet Dogs pant to keep cool, taking 10 to 35 breaths per minute with an average of 24 breaths per minute. A large dog breed's resting heart beats between 60 and 100 times per minute, while a small dog breed's heart beats on average between 100 and 140 times per minute.
Only humans and dogs have prostates, but a dog doesn't have an appendix.
A Whippet Dog's nose print can be used to identify it; nose prints are like a human's fingerprints. A Whippet Dog's sense of smell is more than 1,000 times stronger than that of a human. A Whippet Dog's nose secretes a thin layer of mucous that helps it absorb scent; after that, the dog licks its nose and samples the scent through its mouth.
"date": "2020-01-22T06:16:08",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9544954895973206,
"score": 2.890625,
"token_count": 1405,
"url": "https://knowledgebase.lookseek.com/Whippet.html"
} |
Powder coating metallic surfaces for corrosion protection and aesthetic purposes has grown by leaps and bounds in the United States, and even more so in other parts of the world. The use of powder coating on aluminum and steel sheet products has led the way to better technology and reduced costs as demand has grown. Powder coating has long had traction for galvanized items, but the learning process often resulted in a step back for every few steps forward. A major reason was that failures occurred and it was not always evident how to avoid them in the surface preparation and application processes. This article provides some insight into the powder coating process for galvanizing and begins with a brief discussion of what powder coatings are and are not, as well as some of the characteristics of hot dipped galvanized steel that make it difficult to coat properly.
Powder coatings can be divided into two categories, thermoplastic and thermoset, which are essentially the same as non-convertible and convertible liquid coatings, respectively. Thermoplastic powders maintain their chemical identity, since it does not change when the dry film forms. Rather than curing by solvent evaporation, the film solidifies upon cooling. Thermoset products do undergo a chemical change (are converted) as they cure and solidify. Simply stated, thermoplastic powder melts into a liquid when heated and cools into a film of the same chemical form. Thermoset powder coatings are not fully cured as supplied; upon heating they melt, the heat provides the energy needed for chemical crosslinking to occur, and the coating cures (and cools) into a new polymer.
Thermoplastic Powders (non-convertible)
Nylon, Polyester, Polyolefins, Polyvinyl Chloride, Polyvinylidene Fluoride
Thermosetting Powders (convertible)
Acrylic, Epoxy, Epoxy-Polyester Hybrid, Polyester TGIC, Urethane Polyester
The powder is the result of a melt-mix of ingredients that include the resin system and many of the functional additives employed for liquid coating that influence flow, leveling, color and cure. The mixed material is ground to a fine, flour-like powder and packaged.
The commercial use of powder coating developed initially in Europe, where thermoplastic materials in powder form were being flame sprayed onto metallic surfaces. The fluidized bed process developed by Dr. Gemmer in Germany was patented in 1953 and remained the mainstay of the application process into the early 1960s, when electrostatic spray became available. The most common thermoplastic materials of the 1940s and early 1950s were cellulose acetate butyrate (CAB), nylon 11 (a grade developed in France), polyester, plasticized polyvinyl chloride (PVC) and a few others. Thermoset epoxy also became available during this time.
Powder coatings, as the name states, are powders, not liquids. However, for them to become effective as protective and attractive finishes, they must form a continuous film on the items being coated. They necessarily go through a liquefaction process by use of heat energy. Powders can be flame sprayed, applied using fluidized beds and sprayed electrostatically.
- Flame spray application is accomplished by feeding powder from a hopper into a (clean) air stream that carries it through a fueled flame; the flame melts (liquefies) the atomized droplets, which are applied to the piece being coated. The coating flows out, creating a continuous film on pieces that are often pre-heated to allow for continued flow. The film solidifies as it cools below the melting (gel) point. The temperature of the item, duration of application and cooling rate determine the film thickness of the coating. Thicknesses of 8 to 10 mils are normally required to achieve a pinhole-free film. Generally, thicknesses are limited to about 25 mils. Application rates of 50 square feet (SF) per hour to achieve 10-15 mils of coating are common. Flame spray is used in the field as the primary means of application or to effect repairs in the field for shop powder coated items.
- Fluidized bed application also involves pre-heating the item to be coated, then lowering the item into a cloud of powder suspended by air jets. The heat content of the item and the time in the fluidized powder establish the thickness of the film formed on the surface. The thickness builds rapidly at first but generally approaches a maximum within 10 seconds. Typical thicknesses are 10-20 mils, although multiple heating and dipping cycles can produce thicknesses of 100 mils. High mass [steel] items typically contain enough heat to produce a fused coating film. Low mass items (screens, tubes) may require post heating to complete the fusion and cure. Electrostatic fluidized beds are also available, generally for small and two-dimensional parts (wire screens). The most commonly used powder is epoxy, although some thermoplastic resins may be applied in this manner. The powder cloud is charged using ionized air and the particles are deposited on the grounded piece. This technique is widely used to insulate electric motor rotors and allows the removal of powder by vacuum from surfaces that are not to be powder coated.
Electrostatic spray of powder coating, like electrostatic spray of liquid coatings, involves air spray of electrically charged [dry] particles onto grounded pieces. A feed hose carries the powder to the spray gun, which discharges the particles through a corona discharge, imparting a charge to the powder particles; these are propelled to the grounded object, where they are deposited. The electric attraction depends on the electric potential and influences the thickness of the powder layer on the surface. The coated pieces may be pre-heated (such as FBE on pipe) but typically go through a post-application heating to liquefy the coating and then go through a cooling process. The process is easily automated for production facilities that can make use of several designs of robotic application nozzles.
Alternatively, the application may be performed by individuals using electrostatic spray guns for piece work.
The keys to proper film formation by powder coating are coverage, temperature and time. Manufacturers provide instructions for proper heating temperatures, temperature hold times and any cooling steps that may be necessary. Misapplications can occur due to temperature being too low or too high, improper heating periods and uneven coverage of the powder. As with liquid coatings applied electrostatically, the "Faraday cage effect" can result in low coverage in corners and edges.
Hot Dipped Galvanizing
The American Galvanizers Association is an excellent source for information about the galvanizing processes, standards, inspections, testing and what constitutes a properly galvanized product. Note, however, that properly galvanized does not mean paintable. A short discussion of what the galvanizing process is, for those not already familiar with galvanizing, may be useful.
Galvanizing is the process used to apply a metallurgically bonded layer of zinc [metal] to properly cleaned steel plates, rods, shapes and articles. The zinc [galvanizing] includes several metallurgically distinct layers and functions as a protective coating by being both corrosion resistant and anodic (sacrificial) to steel in wet environments. The layers that form begin with a gamma layer of about 21-28% iron (balance zinc) just above the base steel, with additional layers (delta and zeta) having lesser amounts of iron, until an essentially pure zinc layer (eta) forms on the top. See Figure 1.
Steel surfaces are typically prepared and galvanized in the sequence shown in Figure 2. Grease, oils and soils are removed in alkali solutions after which the steel is rinsed and subject to pickling to remove rust, mill scale, and other foreign matter. The steel is rinsed again, immersed in a flux bath removed and allowed to dry. In alternative processes, the flux solution floats on the top of the molten zinc and the steel item receives the flux layer as it enters the hot dip tank to be galvanized. An alternative to pickling is abrasive blast cleaning to a degree necessary for coating, as described in SSPC-SP 16.
Following galvanizing, the items may be quenched with water or with water solutions containing rust inhibitive surface treatments, or may be air cooled without surface treatment. Inspection of the galvanized items should be performed to ensure conformance with the specified galvanizing standard. However, it is important to recognize that a properly galvanized surface is not the same as a galvanized surface properly prepared for painting or coating. The requirements are different and critical to successfully producing a painted galvanized product.
Powder coating can be applied to a variety of surfaces, including ferrous and non-ferrous metals, plastics, fiberboard and wood. Ferrous metal is by far the most common; consider all the fusion bonded epoxy pipe installed each year. Hot dip galvanizing has become quite common for light posts, traffic signals and sign trusses, handrails, etc. in cities and along highways. These are commonly powder coated.
Painting Hot Dipped Galvanizing
The “go-to” document for painting hot dip galvanizing in the United States has historically been ASTM D6386, “Standard Practice for Preparation of Zinc (Hot-Dip Galvanized) Coated Iron and Steel Product and Hardware Surfaces for Painting”. A 2010 standard, SSPC-SP 16, “Brush-Off Blast Cleaning of Coated and Uncoated Galvanized Steel, Stainless Steels, and Non-Ferrous Metals”, references ASTM D6386. These documents address application of liquid coatings. ASTM D7803, “Standard Practice for Preparation of Zinc (Hot-Dip Galvanized) Coated Iron and Steel Product and Hardware Surfaces for Powder Coating”, addresses powder coating. Even though this standard is specific to powder coating of galvanized steel, it contains the same cleaning and preparation techniques that ASTM D6386 and SSPC-SP 16 prescribe for surfaces coated with liquid products. However, ASTM D7803 includes a pre-heating step to drive off air and moisture from the prepared galvanized surface prior to powder coating. This step is described as appropriate and necessary to prevent pinholes and blisters in the film. Solvent entrapment is not an issue, since the coating products are solvent free. However, recall that the powder coating process requires the powder particles to be converted into a liquid so that cross-linking (thermoset resins) and flow-out and wetting (thermoset and thermoplastic resins) occur to achieve coverage (continuity) and adhesion. The pre-bake temperature is recommended to be about 70 °F above the curing oven temperature.
Allowing the surface temperature to fall all the way back to room temperature after the pre-bake step, before coating and curing, could be self-defeating. However, the items should be cooled to below the melt/cure temperature of the powder coating before application. The pre-bake step will also mollify non-visible salts (which are hygroscopic) and volatilize associated surface-bound water molecules.
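The temperature logic above is simple enough to restate in a few lines of code. The Python sketch below is only an illustration: the 70 °F offset is the guidance quoted above, while the cure and part temperatures are hypothetical example values, since the powder manufacturer's published cure schedule always governs.

```python
def prebake_setpoint_f(cure_oven_temp_f: float, offset_f: float = 70.0) -> float:
    """Pre-bake setpoint: roughly 70 °F above the curing oven temperature."""
    return cure_oven_temp_f + offset_f

def ready_for_powder(part_temp_f: float, melt_cure_temp_f: float) -> bool:
    """After pre-baking, let the part cool to below the powder's melt/cure
    temperature (but not all the way to room temperature) before coating."""
    return part_temp_f < melt_cure_temp_f

cure_oven = 400.0                      # hypothetical cure schedule, in °F
print(prebake_setpoint_f(cure_oven))   # 470.0 -> pre-bake around 470 °F
print(ready_for_powder(350.0, 400.0))  # True -> cool enough to apply powder
```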
Review of the above references helps illustrate that hot dip galvanizing has always been difficult to paint properly. This is directly related to the surface chemistry of galvanizing. Metallic zinc is a reactive surface which is easily oxidized upon exposure to air and moisture. Zinc oxides and hydroxides that form on the surface interfere with adhesion and do not inhibit further surface oxidation. Wet environments promote the formation of zinc corrosion products (e.g., white storage stain). A period of up to two years in atmospheric service is necessary for the gray zinc patina (zinc carbonate; ZnCO3) to form over the zinc surface. This zinc salt is insoluble in water and inhibits further corrosion (oxidation) at the zinc metal surface below. It is well bonded to the surface and is considered a good substrate film for painting. Coating recommendations are that new galvanizing be coated within 48 hours following initial galvanizing. After that period, and until the patina is fully formed, the zinc surface requires thorough cleaning and/or roughening to remove zinc oxides and hydroxides.
ASTM D6386-16a, Section 6.1, states: “In some atmospheric conditions, such as high humidity or high temperature, or both, the formation of zinc oxide on the blasted surface will begin very quickly so the paint coating should be applied within 30 min after sweep blasting. Zinc oxide formation is not visible to the naked eye; therefore, in any atmosphere, painting should be as soon as possible after surface preparation.”
Similar language is found in ASTM D7803: “Whenever galvanized steel is rinsed, it is desirable to use heated drying to accelerate the complete removal of water from the surface.” and “Powder coating shall take place soon after treatment to avoid pick up of surface contaminants.”
The galvanizer should be notified that the finished pieces are going to be painted and told what handling and treatments should be avoided. Both ASTM standards caution against water quenching and/or chromate conversion coatings following galvanizing. These treatments render the surface unsuitable for coating adhesion.
SSPC-SP 16 in the non-mandatory Appendix, Section A9.2 Zinc Oxides: states, “Newly exposed zinc surfaces will oxidize rapidly, especially in the presence of moisture. During brush-off blast cleaning and subsequent painting of galvanized steel, the surface temperature should be a minimum of 3 °C (5 °F) above the dew point, in order to retard the formation of zinc oxides. To limit the amount of zinc oxide on the cleaned surface, galvanizing should not be permitted to get damp after cleaning and should be painted as soon as possible within the same work shift that the surfaces were cleaned.”
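Because the dew-point margin in SSPC-SP 16 is a simple quantitative rule, it can be illustrated with a short, self-contained Python sketch. Dew point is estimated from air temperature and relative humidity using the widely published Magnus approximation; the 3 °C (5 °F) margin is simply the SSPC-SP 16 guidance restated in code. This is an illustration only, not a substitute for measuring surface temperature and dew point with calibrated instruments.

```python
import math

# Magnus approximation constants (a commonly published parameter set)
A, B = 17.62, 243.12  # B in degrees C

def dew_point_c(air_temp_c: float, rh_pct: float) -> float:
    """Approximate dew point (°C) from air temperature and relative humidity."""
    gamma = math.log(rh_pct / 100.0) + A * air_temp_c / (B + air_temp_c)
    return B * gamma / (A - gamma)

def surface_ok_to_work(surface_temp_c: float, air_temp_c: float,
                       rh_pct: float, margin_c: float = 3.0) -> bool:
    """SSPC-SP 16 guidance: surface temperature should be at least
    3 °C (5 °F) above the dew point to retard zinc oxide formation."""
    return surface_temp_c >= dew_point_c(air_temp_c, rh_pct) + margin_c

# Example: 20 °C air at 75% RH gives a dew point near 15.4 °C, so an
# 18 °C surface is inside the margin -- too risky to blast and paint.
print(round(dew_point_c(20.0, 75.0), 1))     # ~15.4
print(surface_ok_to_work(18.0, 20.0, 75.0))  # False
```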
The urgency for coating galvanized steel so soon after surface preparation is a testament to how reactive zinc metal is.
Clearly, there are a multitude of steps to produce a properly powder coated galvanized item. This should not be lost on specifiers or purchasers.
From a design standpoint, the key to producing a properly powder coated galvanized product is to remember: a properly galvanized surface is not the same as a galvanized surface properly prepared for painting. There are too many galvanizers and too many powder coaters who do not understand this.
Case in point: The images below show powder coated surfaces. There is a weld that was very rough but galvanized without dressing it. Is there a continuous [powder] coat film? Visually, what other surfaces raise concerns?
Was the item below properly pre-baked? Coated? Cured? Is the appearance simply an aesthetic issue or is there also a risk of reduced corrosion protection?
When the opportunity arises to review a powder coating specification, make note of how infrequently holiday testing is required and when multiple coats are required.
Painting galvanizing has long been a difficult task, with liquid coatings applied over inadequate surface preparation or after aging until the carbonate patina has formed. Because metallic zinc is so reactive, careful surface cleaning and removal of zinc oxides and hydroxides is essential; when this is not properly done, the applied organic coatings are doomed to failure. Powder coating requires the same degree of cleaning and preparation as liquid coating, plus the additional step of preheating to remove vapor and gases that would otherwise be trapped by the powder melt process, leading to pinholes, bubbles and eventual delamination. While there are numerous advantages to powder coating (lower VOCs, less wasted material, etc.), the challenges of successfully coating galvanized steel with organic films are no less complex.
About the Author:
Rich Burgess is a senior consultant for KTA-Tator, Inc. where he has been employed for over 23 years. He is a member of SSPC and NACE and an active committee member for joint standards. Burgess is an SSPC-Certified Protective Coatings Specialist, a NACE-Certified Coating Inspector Level 3 (Peer Review) and an SSPC C-3 Supervisor/Competent Person for Deleading of Industrial Structures. In his current position, he performs coatings evaluations, coating failure analysis, specification preparation, expert witness and project management services for clients in the transportation, power generation, water/wastewater, shipping, marine and aerospace industries. Burgess is a principal instructor for the SSPC C-1, C-3, and C-5 courses, for the NACE CIP Program and a variety of KTA-offered training seminars. He holds a Bachelor of Science degree in Environmental Science from Rutgers University and a Master of Science in Operations Management from the University of Arkansas.
Propane and oxygen/acetylene are commonly used.
Powder Coating Institute (PCI), Powder Coating: The Complete Finisher’s Handbook, 3rd Edition. PCI Alexandria, VA. (www.powdercoating.org)
American Galvanizers Association (AGA), Centennial, CO 80112, https://www.galvanizeit.org/
Brush-Off Blast Cleaning of Coated and Uncoated Galvanized Steel, Stainless Steels, and Non-Ferrous Metals
ASTM International, West Conshohocken, PA 19428-2959, http://www.astm.org
Society for Protective Coatings (SSPC) Pittsburgh, PA 15222-4656, http://www.sspc.org.
Metals in the electromotive series, from most easily oxidized (reactive) to least easily oxidized: lithium, potassium, calcium, sodium, magnesium, aluminum, zinc, chromium, iron, cobalt, nickel, lead, hydrogen, copper, mercury, silver, platinum, gold.
The Archipel de la Guadeloupe Biosphere Reserve, situated on Guadeloupe Island in the Caribbean Sea, comprises two geographically separate sites: Guadeloupe National Park, which encompasses a tropical forest and includes La Soufrière volcano, along with the Grand Cul-de-Sac Marin Nature Reserve, a marine protected area adjacent to the park.
The transition areas of the biosphere reserve include numerous small towns and villages with many tourist facilities. Some 225,500 inhabitants live permanently in the biosphere reserve, and there are about 20,000 visitors per year to the marine part of the area (as of 2000). Threats to the biosphere reserve include hurricanes, tourism, anchorage on coral reefs, deforestation and water pollution.
Guadeloupe National Park, on Basse-Terre, comprises a tropical forest located in the west of the island and watched over by the still-active volcano of La Soufrière (1,467 m or 4,813 ft above sea level). The tropical forest, which is completely uninhabited, is home to over 300 species of trees and bushes.
The park's tropical rain forest varies in its character and species among several sub-ecosystems, depending heavily on elevation.
The lower elevations (up to 500 m or 1,600 ft) of the park's buffer zone support a mesophilic forest, featuring trees such as white and red mahogany, rosewood, and jatobá. This ecologic area is also used for agriculture, including banana plantations and other food crops.
A montane moist forest covers 80% of the core area of the park, at elevations between 300 m (980 ft) and 1,000 m (3,300 ft). This dense and luxuriant ecosystem harbors a great diversity of plant species: very large trees that grow above 30 m (98 ft) (tabonuco, acomat boucan, chestnut); mid-level trees between 6 m (20 ft) and 10 m (33 ft) (bois bandé, oleander); shrubs and herbaceous plants below 10 m (mountain palm, heliconia, ferns); and epiphytic species (giant philodendron, aile-à-mouche, orchids).
The high-elevation forests above 1,000 m (3,300 ft) are much less dense than the park's other forests, due to the extremely wet conditions and constant cloud cover. These forests resemble savannas.
Grand-Cul-de-Sac Marin Nature Reserve encompasses a vast bay of 15,000 ha (37,000 acres) between Basse-Terre and Grande-Terre which includes coral reefs, mud flats, sea-grass beds and mangrove forests, freshwater swamps, forests and marshes. In the lagoon, sea-floor 'meadows' provide habitat to turtles and teem with fish.
Giant sponges and soft corals, urchins and fish are abundant. The mangrove hosts many sedentary and migratory birds (pelicans, terns, moorhens, ducks, herons and kingfishers).
The Grand Cul-de-Sac Marin Nature Reserve includes coastal wetland forests that are flooded either permanently or intermittently by fresh or salt water, comprising nearly half of Guadeloupe's mangrove swamps.
Vegetation in the coastal zone faces the challenges of salinity in the air and soil, intense heat from the sun and its drying effect, and the constant wind. Notable plant species in this environment include seagrape and pear.
What impact does posting the aim, or central question of a lesson, have on teaching and learning? What purpose does it serve?
I’ve heard throughout my career that “you need to have your aim posted” at the start of every lesson. @stoodle got at this idea recently and made me realize that I myself have been pondering this for quite some time.
A year ago someone at a PD mentioned that they never post the day’s aim. Nor do they “announce” it at the beginning of class. Instead, the aim is elicited from students during the learning process. The essential question is built upon their prerequisite knowledge and pulled from their comprehension of what they learn from the lesson. It is never given, but rather discovered by the students.
When I heard this, I had an ah-ha moment. It made complete sense. Other than in the classroom, how often are we informed of what we’re going to learn before we actually learn it? Sure, you may have a goal you want to accomplish (e.g. complete yard work before 1 pm), but what you actually learn in the process (e.g. how to mow my lawn as efficiently as possible) is often unknown at the onset. We notice, strategize, experiment, learn, and then realize what we’ve learned.
Recently, I didn't post the aim of a lesson on arithmetic sequences. Instead, I required my students, as part of their exit slip, to write what they thought the aim was for the lesson. Not only did 90% of the kids nail it, but one response was even better, and more creative, than what I originally intended for the lesson.
(This is directly related to the overarching problem from the lesson)
This made me think. Whatever a student feels the aim is (during or at the end of a lesson), provides remarkable feedback as to the effectiveness of the lesson.
Another thing. I'm a firm believer that lessons should be based purely on questions. One question should lead to another, and then another, and then another. Ultimately, the central question – the heart of any lesson – should be provoked. Because of this, I want my students to need the central question of a lesson to accomplish a task or goal. They can't need it if I openly post it.

I'm left with many questions about this widely-adopted practice of aim-posting. What are the consequences of openly telling students the aim of a lesson? Conversely, what are the consequences of structured learning that promotes the discovery of the aim? If I don't tell my students the aim, how do I frame a lesson from the onset? Does explicitly stating the aim perpetuate a top-down approach to learning? How can we use student-generated aims to inform our teaching?
What are the Manual Therapy Precautions and Contraindications When Working the Neck?
The neck contains many structures whose locations are important to know for reasons of client safety. Many of these structures are sensitive neurovascular structures (nerves, arteries, and veins) that contraindicate pressure. Others are similarly sensitive structures that require gentle pressure. The majority of these structures are located anteriorly (Fig. 13).
For this reason, it is essential to exercise caution when working the anterior neck of a client. However, even though caution is called for, it should not prevent therapeutic work entirely, as happens with some therapists. This is unfortunate, because anterior neck work can be extremely valuable, especially to clients who have experienced a whiplash accident in the recent or distant past. Knowledge of the anatomy of the anterior neck can allow work to be performed therapeutically and safely. (Future blogs will discuss other cautions and contraindications for specific pathologic conditions.)
Figure 13. Structures of the anterior neck. Many anterior neck structures are sensitive; therefore, caution is required when working this region. The thyroid cartilage, cricoid cartilages, trachea, and thyroid gland are located at midline. The common carotid artery and jugular vein are located slightly lateral to midline. The brachial plexus and the subclavian artery are located inferolaterally. (Courtesy of Joseph E. Muscolino.)
Anterior Structures: Common Carotid Artery and Jugular Vein
Most notably, the common carotid artery and jugular vein are located in the anterior neck, slightly lateral to midline, running inferiorly/superiorly. The following are some general precautions/guidelines for work in this area:
- Avoid working on these structures. It is usually easy to know when the fingers are pressing on the carotid artery because a pulse can be felt.
- When palpating for an artery, it is usually better to use a finger than the thumb because the thumb’s pulse is fairly strong and may be confused with the client’s pulse.
- Do not palpate too deeply for a pulse because it is possible to compress the artery and block its blood circulation, thereby blocking its pulse as well.
- If you detect the pulse of the client’s carotid artery while you are working on the area, do not stop working. Instead, either slightly move your palpating fingers, or gently displace the vessel to one side or the other and continue working in that spot.
The Carotid Sinus Reflex
In the common carotid artery in the anterior neck, the region called the carotid sinus (approximately halfway up the neck) contains stretch receptors located in the wall of the vessel. These receptors are involved in a neurologic reflex called the carotid sinus reflex, which can lower blood pressure. The mechanism works as follows. The stretch receptors are sensitive to stretching of the artery wall, which they interpret as high blood pressure within the artery distending the wall outward. However, if the wall is stretched or distended inward (rather than outward) because of manual pressure, the receptors are fooled into thinking that high blood pressure is causing the distortion of the vessel wall. Consequently, they trigger the reflex that results in lowering the client's blood pressure. Although this can actually be used positively (e.g., intensive care nurses are trained to do this when a patient's blood pressure is rising), it can also be seriously detrimental if the client is older and/or weak. Lowering the blood pressure excessively could cause the client to pass out and/or cause the heart to stop.
Anterior Structures: Midline
Located midline in the anterior neck are the thyroid cartilage, cricoid cartilage, and trachea. The following are general precautions/guidelines for working near these structures:
- Do not place pressure on these structures. Note their location in the anterior midline of the neck. As with blood vessels, it is best to avoid these structures altogether.
- If the client is comfortable with your working in this area, you can gently displace these structures to the side (be aware, however, that moving or pressing on them may cause a cough reflex). For example, if you are working on the anteromedial neck musculature, such as the longus colli, it may be helpful to gently displace these structures toward the other side to allow full access to the musculature.
- Avoid the thyroid gland, which is located in the lower anterior neck.
- Use only light pressure over the hyoid bone. The hyoid bone is located more superiorly in the anterior neck and serves as an attachment site for many muscles. Although the attachments of these muscles on the hyoid bone can and should be worked, the pressure used should not be very deep.
Anterior Structures: Brachial Plexus and Subclavian Artery
Located inferiorly and laterally in the anterior neck are the brachial plexus and the subclavian artery. These structures pass between the anterior and middle scalene muscles and then continue inferolaterally to pass deep to the clavicle. If appreciable pressure is placed on the brachial plexus, the client will often report a shooting pain that runs into and/or down the same-sided upper extremity. This is likely to happen when doing deep specific work to the scalenes. Guidelines for working the scalenes in the lower anterior neck are as follows:
- Begin with light to medium pressure before transitioning to deeper pressure.
- If pressure on the scalenes causes the client to experience referral of pain or some other sensory disturbance (e.g., tingling) down into the upper extremity, slightly change the location of your pressure because you might be placing your pressure directly on the brachial plexus nerves.
Therapist Tip: Scalene Work and Referral Symptoms
- Pain or other referral symptoms experienced into the upper extremity when applying pressure to the scalenes can result from pressure directly on the brachial plexus. However, pressure to the scalenes can also refer symptoms into the upper extremity because of trigger point (TrP) referral. Therefore, it can be difficult to be certain of the cause of the referral. Referral caused by direct nerve pressure tends to feel like a shooting pain; however, this is not always the case. Consulting a TrP referral illustration may help (see Chapter 2 for illustrations of TrPs and their referral zones). If your client’s pain falls within the typical TrP referral pattern, it is more likely that the pain is a TrP referral, but this is not definite. If the referral does not coincide with the typical TrP referral pattern, then you are most likely pressing directly on the brachial plexus and should move your pressure slightly so as to remove pressure from the nerves. When in doubt, it is always wise to be cautious and change the location of your pressure.
Lateral Structures: Transverse Processes
The transverse processes of the cervical spine have already been discussed, but it is worthwhile to mention them again in the context of precautions and contraindications when working the neck. The transverse processes are split into anterior and posterior tubercles whose sharp points make them very sensitive to your pressure. If you are massaging the attachments of the scalenes or other muscles, it may be necessary for you to work the soft tissue attachments that are directly on the transverse processes. If this is the case, it is essential to consider their sensitivity and adjust your pressure accordingly. However, never use the transverse processes as contact points when administering a force to stretch or perform joint mobilization of the neck. There is no justification for this. Stretching and joint mobilization are better and more comfortably accomplished by contacting the articular processes and laminar groove of the client’s cervical spine.
In the posterior neck, be aware of the location of the suboccipital nerve and vertebral artery (Fig. 14). These two structures are located in the suboccipital triangle, the triangular space bordered by the rectus capitis posterior major and obliquus capitis inferior and superior muscles. Further, the greater occipital nerve is also present in this region. Although deep tissue work in the posterior upper neck can be extremely valuable and may be necessary for the client, it is essential to take into consideration the location of these nerves and this artery when performing such work.
Figure 14. Neurovascular structures of the posterior neck. The suboccipital and greater occipital nerves and vertebral artery are demonstrated. Caution should be exercised when working in the upper posterior cervical region. OCI, obliquus capitis inferior; OCS, obliquus capitis superior; RCPMaj, rectus capitis posterior major. (Courtesy of Joseph E. Muscolino.)
Precaution with Extension and Rotation Motions
Another caution should be mentioned, even though it does not involve an anatomic structure per se. When treating a client’s neck, be aware that many clients do not tolerate well any extension beyond anatomic position and/or any extreme or fast rotation motions. This is especially true with elderly clients, but it may also be true for middle-aged or younger clients, especially if they have recently experienced a traumatic neck injury. For this reason, it is always wise to be aware of this possibility. It is advisable to increase these ranges of motions gradually over the span of several visits if necessary.
(All figure credits: Courtesy of Joseph E. Muscolino. Originally published in Advanced Treatment Techniques for the Manual Therapist: Neck. 2013.)
Note: This blog post article is the sixth in a series of six posts on the
Anatomy / Structure of the Cervical Spine for Manual Therapists.
The Six Blog Posts in this Series are:
- Introduction to the Cervical Spine
- Cervical Spinal Joints
- Motions of the Cervical Spine
- Musculature of the Cervical Spine
- Ligaments of the Cervical Spine
- Precautions When Working the Neck
With the season just beginning, sailors are on the water, which is still cold: around 7 degrees C on Lake Ontario.
If you fell in, we used to think of hypothermia as the principal cause of death; however, in very cold water below 10 degrees the principal cause of death is drowning caused by cold shock. For about 1 minute you gasp for air, often inhaling water, especially if you are not wearing a lifejacket to keep your mouth out of the water. Within 10 minutes you lose the ability to move your arms and legs, so if you can't reach somewhere to get out of the water by then, you won't! You have around 1 hour before hypothermia sets in and around 2 hours before your heart stops and you die.
It's really important to wear a lifejacket; with it you can survive for at least an hour.
For more information Google "Cold Water Bootcamp" or see
Identity

More than 3,200 Tai Gapong people live in Southeast Asia. At least 2,000 inhabit a single village in Thailand—Ban Varit in Waritchaphum District of Sakhon Nakhon Province. There are about 500 homes in Ban Varit, most of which are inhabited by Tai Gapong families, along with some ethnic Phutai and Yoy people.
History

The Tai Gapong say that they originated in Borikhamxai Province, Laos, in a district known as Gapong, from which they took their name. No district or town called Gapong exists in Laos today; the 1,200 Tai Gapong now in Laos inhabit Ban Nahuong, about 25 kilometres (15 mi.) south of the town of Ban Nape in eastern Borikhamxai Province, near the Lao border with Vietnam.
According to Joachim Schliesinger, 'the ancestors of the Tai Gapong in Thailand migrated westwards from central Laos, crossed the Mekong and settled in their present location in 1844 or 1845. The reason for their migration is unclear; they may have been taken as war captives and resettled across the Mekong River by the Siamese army, or migrated voluntarily.'
In their language, Gapong means 'brain'—therefore the autonym of this interesting group means 'Brainy Tai'. Other Tai groups call them Phutai, but although the Tai Gapong say they are distantly related to the Phutai, they are now a distinct tribe with their own customs, history and dialect. In fact, even the Phutai who live in the Tai Gapong village in Thailand consider them different.
Customs

For generations, Tai Gapong women have worn an elaborate traditional dress that sets them apart from other tribes. It consists of a short skirt that falls just below the knees, 'with white, red, brown and yellow horizontal stripes at its lower part, a long-sleeved dark coloured vest buttoned in the middle with silver coins and decorated with red bands along the hem, collar and sleeve ends. In the past, Tai Gapong women wore a silver belt, silver earrings, silver necklaces and silver anklets.'
Religion

All Tai Gapong people in Thailand are Buddhists, while among those in Laos the situation is not so clear. Although many Tai Gapong families in Laos claim to be Buddhists, their ceremonies and rituals are dominated by animistic practices.
Even in Thailand the Tai Gapong reportedly 'believe in an array of spirits, such as the spirit of the village, the spirit of the house, the spirit of the water, the spirit of the tree, but their most important spiritual being is chao pu mahaesak, an angel-like being, humanized in the form of a man-like statue in his shrine. The Tai Gapong honour chao pu mahaesak annually on a specific day with flowers, whisky, rice and other small sacrifices.'
Christianity

Because few people are even aware of the existence of the Tai Gapong people, little or no Christian outreach has ever been conducted among them. Only a very few Tai Gapong have heard the gospel. They continue—as they have for centuries—to live their lives without the slightest knowledge of Jesus Christ or his salvation.
Title of Scheme: Textiles – Investigating linear structures of the school building through the manner of drawing with pins and thread

No. of Lessons: 15

Total Time: 7½ hours (2 single and 1 double over 5 weeks)

Group: First Year

No. of Pupils: 20

Aims:
- Create a concentrated observational drawing that explores the linear structures of the school building
- Develop skills for textiles that includes design and planning and involves using pins and thread to draw and create lines
- Complete a well made drawing with pins and thread that displays close observation of the surrounding building, strong composition and includes rich details of the architectural aspects.
Overall Learning Outcomes for the Scheme:
- Examine the work of Debbie Smyth a contemporary textile artist
- Demonstrate awareness of the shapes and structure and details of the school building in a accurate observational drawing
- Apply design skills in planning out sections, scale and accurate translation of the visual information
- Applies various stages of design in preparation for drawing on material such as creating an observational drawing and tracing which show precision and care
- Examine the school building with a view to capture a visually interesting perspective for the textile work
- Complete a well-designed drawing with pins and thread that describes the architecture of the school in rich detail.
- Increased ability to conduct field work
Please click here to view the full textile scheme
Of course we all want snow, and even rain, to fall as we think of enjoying our mountain pursuits.
After all, carving our way down the slopes brings a certain feeling of freedom that really can't be found in any other activity.
But as we slip over the snow, let's spare a thought for what is under our skis and boards. Snow is much more than just the source of great recreational activities and indeed whole industries; it is a fundamental source of water.
Recently, we learned from the Whistler Naturalists that our two glaciers are continuing to recede, though unevenly, for myriad reasons ("Glacier monitoring update shows Wedgemount continues to recede," Pique, Nov. 17, 2019).
These melting glaciers are stark reminders of the climate crisis we are facing. We can see and measure the change in a way that can't be ignored. Wedgemount has been measured across 46 years, starting in 1965, and since then has receded 585 metres. Overlord's monitoring began in 1986; when it was last assessed, in 2017, it had receded 264 m.
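To put those figures on a common scale, a quick back-of-the-envelope calculation converts each total retreat into an average annual rate. The short Python sketch below uses only the numbers quoted above, taking the Overlord span as the 1986-2017 monitoring period.

```python
# Average annual retreat computed from the figures quoted above.
glaciers = {
    "Wedgemount": {"retreat_m": 585, "years": 46},           # measured over 46 years
    "Overlord":   {"retreat_m": 264, "years": 2017 - 1986},  # 31 years of monitoring
}

for name, data in glaciers.items():
    rate = data["retreat_m"] / data["years"]
    print(f"{name}: about {rate:.1f} m of retreat per year on average")
# Wedgemount: about 12.7 m/yr; Overlord: about 8.5 m/yr
```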
Glaciers are an important source of water during melt times, powering hydro generation, irrigating lowland farms and gardens, and providing water to wildlife and local streams and rivers. They are in many ways the perfect storage mechanism, capturing water in times of plenty and releasing it when the weather is hotter and drier and the need greater.
Glaciers seem so majestic and huge at a global level that it is hard to imagine they too are a threatened entity, and that their demise globally threatens the lives of 1.9 billion people.
This hard fact was brought home in an even more concrete way this week with a report published in the journal Nature titled: "The importance and vulnerability of the world's water towers."
A group of 32 international scientists, including a professor at the University of B.C. (UBC) wrote the report to highlight the importance and vulnerabilities of what they call Earth's 78 water towers.
The authors concluded that it's essential to develop international, mountain-specific conservation and revamp climate-change policies and strategies to protect ecosystems and people downstream from glaciers.
The world's most relied-upon mountain system—and also one of the most vulnerable, according to the report—is the Indus water tower in Asia, made up of vast areas of the Himalayan mountain range and covering portions of Afghanistan, China, India and Pakistan.
Other high-ranking water tower systems are the southern Andes, in Latin America; the Rocky Mountains, in North America; and the European Alps.
Researchers investigated how important the water from the glaciers was to those living at the base of the mountains.

"It was more about really reinforcing that the conversation needs to move beyond the changes that are happening to the cryosphere [frozen water on Earth] and what are those downstream impacts of these changes to people," UBC geography professor Michele Koppes, who was part of the study, told CBC on Dec. 8.
She said that as populations grow and climate change affects these water towers, other associated events such as floods, landslides and water turbidity will occur. As examples, Koppes pointed to a pair of massive landslides that decimated the northeast slope of Joffre Peak, resulting in the closure last May of the Nlháxten/Cerise Creek Conservancy, located north of Pemberton.
She also said there will be less water in the summer and that has implications for how we use water to make electricity.
And what will be the long-term impact on snowmaking, more and more the saviour of ski resorts facing low precipitation levels in the early ski/board season?
"There [are] all these cascading impacts," said Koppes.
"It's really important to understand that we're vulnerable to these changes."
You may recall that in 1992, countries around the world adopted the United Nations Framework Convention on Climate Change (UNFCCC), the first near-universal international agreement to tackle global climate change. The objective of the treaty was to reduce greenhouse-gas emissions and prevent the dangerous effects of climate change. Every year since 1995, the Conference of the Parties, or COP, has been held to make progress towards this objective.
This week, COP 25 took place in Madrid and part of that was the presentation of Blue COP, which is focused on the world's most important carbon sponge—the oceans. Healthy oceans absorb carbon and provide a buffer against climate chaos, so damage to them is damage to the climate, and vice versa.
This all goes to remind us that water, whether fresh or salt, solid or liquid, is foundational to our existence.
So, as we do the Ullr snow dance and enjoy that glass of fresh, cold water, take a moment to consider how to honour this precious resource.
Four of the five main organs in Chinese Medicine have a season they correlate with, but the Spleen is unique in that it doesn’t have a designated season. Some say that late summer (also known as Indian summer) is the season of the Spleen. Others have said the Spleen’s energy dominates the end of each season.
Regardless of what time of year it is, these foods that support the Spleen can usually help improve digestion as the Spleen is the center of the digestive system in Chinese Medicine. More detailed nutritional suggestions are made based on a patient’s specific diagnosis, but if you’ve been having digestion problems you might want to start by incorporating small amounts of these foods into your diet. Many of them have a yellow color, which is the color associated with the Spleen, and most of them are slightly sweet in nature because that is the flavor of the Spleen.
(Guest post written by Jacqueline Gabardy, L.Ac. of FLOAT: Chinese Medical Arts; Infographic also by Jacqueline Gabardy, L.Ac.)
The U.S. has laws designed to protect children, who are presumed to lack well-formed judgment. In divorce cases that go to court, a parent cannot represent a child’s interests, and a parent’s interests may even conflict with the best interests of the child. In these cases the court will appoint a “guardian ad litem”(GAL) to act in the child’s best interests during the trial and provide reasoned advice from an independent individual who does not owe allegiance to either party. (Ad litem is a Latin term meaning “for the lawsuit.”)
Some states require that GALs be attorneys; in other states, lay people are eligible to volunteer as GALs, subject to GAL training and certification. Note: a GAL is appointed solely for the court proceedings, and is not a “guardian” appointed to manage a child’s interests in general. The appointed GAL participates, as appropriate, in pre-trial conferences, mediation and negotiations. The guardian has authority to conduct an independent investigation to ascertain the facts of the case, to investigate the child’s family background, and to meet and interview the child and the parents face-to-face. In court, GALs can cross-examine witnesses called by the parents’ attorneys and can call their own witnesses.
Other GAL duties may include:
- Advising the child, in terms the child can understand, of the nature of the court process, the child’s rights, the role of the GAL, and the potential outcome of the legal action.
- Filing appropriate petitions, motions, pleadings, briefs, and appeals on behalf of the child and ensuring the child is represented by a guardian ad litem in any appeal.
- Advising the child, in terms the child can understand, of the court's decision and its consequences for the child and others in the child's life.
In some states, GALs make recommendations to the judge as to how they think child custody should be decided in the child’s best interests—not necessarily according to what the child prefers, if he or she is old enough to have an opinion. Usually the child does not appear in court, but if the child is to testify, the GAL will help prepare the child when necessary and appropriate.
There is no attorney-client privilege between the GAL and the parent, so nothing that is said to the GAL is privileged, and it can be shared.
The seasoned family law and divorce lawyers at the McGrath Law Firm, founded by attorney Peter McGrath, will walk you through every step of the challenging divorce process to address your concerns and achieve your goals as efficiently as possible. From spousal support, child support, fault, and equitable division of property and debt to valuations, pre-nuptial agreements, annulments, and restraining orders, the experienced attorneys at McGrath Law Firm have a successful track record in all aspects of divorce law. Call us to schedule your consultation at (800) 283-1380.
What if I told you that improving your posture will improve your life?
If good posture isn’t important to you, it should be.
There are many reasons why your mother or teacher nagged you to sit up straight and it has nothing to do with looking good (although looking good could be a reason unto itself J)
Sitting, standing and moving well is incredibly important because doing these things incorrectly can result in discomfort, stiffness, and pain.
- Sitting or standing with poor posture can impede healthy breathing. Slouching puts pressure on the ribs, lungs, diaphragm and the other muscles associated with breathing. Try this little experiment: where you are right now, slouch or slump, then take a breath. See how shallow your breathing is? When you slouch or slump you cannot breathe deeply and easily.
- Slumping can cause pain. Poor posture bypasses your postural muscles and puts undue stress on your ancillary muscular system, and this causes pain. To understand why, let's talk a little bit about anatomy. You have two types of muscles used to support and move your skeleton: postural muscles and phasic muscles. Your postural muscles include spinal muscles, some of your abdominal muscles, hip flexors, calf muscles, and so on. These muscles are designed to work all day long maintaining your posture. Your phasic muscles are your biceps, trapezius, triceps, etc. They are designed to perform short sprints of work, such as carrying the groceries or washing a window. When your posture is poor, you rely less on your postural muscles and more on your phasic muscles to maintain your posture. The phasic muscles don't like to work long hours. Remember, they only like to do short stints of work. That's why your arm aches when you carry a gallon of milk for 20 blocks and don't change arms! Poor posture shortens postural muscles and limits their mobility, and this affects strength and can also cause pain and stiffness. Good posture does the opposite and lengthens postural muscles.
- Slouching impedes digestion. According to an article in the Harvard Health Letter, poor posture can lead to incontinence: "Slouching increases abdominal pressure, which puts pressure on the bladder." Poor posture can also lead to heartburn and slowed digestion, according to Dr. Kyle Staller. "Slouching puts pressure on the abdomen, which can force stomach acid in the wrong direction. And some evidence suggests that transit in the intestines slows down when you slouch." It just makes sense that standing up taller makes digestion easier.
- Better posture means better balance. Think about it. Your head weighs anywhere from 10-15 pounds. Essentially you have a bowling ball sitting on the top of your spine. If your head isn’t beautifully poised on top of the spine then you will be off balance. Your body will contort itself in every conceivable way to maintain balance so that your head doesn’t hit the ground.
- Better posture translates into dynamic movement. When you have poor posture, your body has to work overtime to maintain balance. This causes excessive muscular tension throughout the body. Tense muscles don’t move easily. It is that simple. Try this. Tense your toes either by lifting them up or scrunching them. Now try to walk. What happens? You can’t walk freely or easily. Or try this. Clench your jaw. Now turn your head. Same thing! Right? You cannot move your head as easily. This is a basic tenet of the Alexander Technique: letting go of unnecessary muscular tension frees up your movement and makes moving easier. Moving more easily makes your movement more dynamic and coordinated.
So why not make this year the year to improve the way that you sit, stand, walk, and even run? Your health depends on it.
Teaching people how to move well is my passion. Sign up for posts that teach you how to be more comfortable in your body! Click here to sign up or use the form to the right of this post!
How You Move Matters! You can learn how to move better with my Amazon bestselling book Agility at Any Age: Discover the Secret to Balance, Mobility, and Confidence. My book is illustrated with 40 videos that you access with your iPad or smartphone!
You can purchase it here.
My name is Mary Derbyshire. I am a fitness and movement coach. My methodology is the Alexander Technique, a mindfulness practice that teaches you how to move better. When you move better you feel better and when you feel better your whole life improves! Let me know what you think or ask a question! I love to hear from my readers! Feel free to post in the comments section below and feel free to share this with your friends!
You can learn more about the Alexander Technique here.
In 1600s Paris, one woman undertook an act of rebellion. Her weapon was fairy tales.
Marie-Catherine d’Aulnoy — who’d been married off at 15 to an abusive man three decades her elder — slipped messages of resistance into her popular stories, risking jail in the process.
D’Aulnoy lived in a punishing patriarchy: women couldn’t work or inherit money, and were forbidden from marrying for love.
Through her work, she showed an alternative.
“She subversively wrote against some of the cultural norms for women at the time,” says Melissa Ashley, whose book The Bee and the Orange Tree is a fictionalised account of d’Aulnoy’s life.
“She was incredible.”
Going against the grain to write strong women
D’Aulnoy was born in 1650 and grew up to work in the “golden age of fairy tale writing”.
She even coined the term ‘fairy tale’ — ‘conte de fée’.
“We have this idea that fairy tales came from the Grimm Brothers in the 19th century and Hans Christian Andersen,” Ashley says.
But Ashley says it was d’Aulnoy who wrote “the very first fairy tale” — The Isle of Happiness.
It tells the story of a prince who travels to an enchanted island and meets Princess Felicity, who’s never seen a human. She entertains the prince with operas and lavish art, and before he knows it he’s been on the island for 300 years.
It was published in 1690 — seven years before fairy tales took off with the publication of Tales of Mother Goose by Charles Perrault, who also wrote Sleeping Beauty, the Little Glass Slipper and Puss in Boots.
For the remainder of the article, please go to
Pat Farrell from the Environment topic group invites us to think about our wildlife and how it can be integrated into new housing developments.
The Environment group is considering the impact on wildlife of any developments – housing or otherwise – in the parish. Is it important to retain wildlife habitats such as hedges, trees, meadows and verges? Should we be looking to create new areas for our wildlife to flourish safely?
There are ways in which preservation of our environment and natural habitats benefit the community as a whole. For example:
- Cornish hedges and rows of trees help reduce traffic noise.
- Areas of meadow and verges absorb rainfall and reduce surface flooding.
- Trees intercept rain as it falls and can help prevent soil erosion.
- The presence of bushes and trees improves air quality by absorbing toxins and replenishing oxygen levels.
- Green ‘corridors’ enable wildlife to move through developments and access nearby habitats safely. The more we plan for wildlife alongside human habitation, the more likely we are to see a diverse range of species from our doorsteps and in our gardens.
- Greenery promotes a sense of mental wellbeing; we all like to see the trees and wild flowers that grace our landscape.
It isn't hard to make room for wildlife in new developments, and with careful planning the environment of a new housing development can be a place in which wildlife flourishes.
Bat roosts and bird nesting boxes can be incorporated into roof areas. Hedgehog highways can be encouraged by creating small doorways in impermeable boundaries. The seeding of verges and front lawns with ‘meadow mixes’ instead of rye grass encourages bees and butterflies, which in turn help to pollinate garden vegetable patches. Birds and butterflies can be encouraged by planting communal areas with native species, preferably berries or fruit; species which can withstand the unpredictable Cornish weather and look attractive.
What do you think? Does wildlife matter to you? Is it more important that we can build the new housing we need, where we need it? How can we achieve a balance?
State Parks contain a diversity of habitats, from forest and fields, to shrub swamp, marshes and streams. All these landscapes support a wide variety of native plants.
As part of efforts at Parks to restore land and protect biodiversity, it is important to have the right plants for the right habitats in order to support healthy ecological function, provide critical habitat for wildlife and reduce the threat from invasive species.
Such projects require a source of plants that are native to the area. Since it can be difficult to find such plants commercially, the Plant Materials Program was started at Sonnenberg Gardens & Mansion State Historic Park in 2016.
This program was created by the Finger Lakes Environmental Field Team, which was working at Ganondagan State Historic Site in Victor, Ontario County, to restore a grassland habitat critical for endangered birds. And the job called for 200 different species of native plants.
Since then, such plants have been grown in the Sonnenberg’s historic greenhouses in Canandaigua, at the north end of its namesake lake in the Finger Lakes region, to cover parks projects in the eastern part of New York. Many of the Sonnenberg greenhouses had been vacant for years, so this was a perfect match for the facility.
Plant Materials Program Coordinator Brigitte Wierzbicki, Lead Technician David Rutherford and technician Elizabeth Padgett, supported by seasonal staff, partners, and interns, run the program. To fill orders, they identify native species in the field, sustainably collect seeds, propagate those seeds in the greenhouses, and deliver plants back to project sites.
Now in its fourth year, the Ganondagan project aims to recreate the oak savanna grasslands found there in the 1600’s, when the land was managed by the Onöndawá’ga (Seneca) people. This last season, the Plant Materials Program provided more than 5,000 plants towards this project, and over 100 pounds of hand-collected seed have been sown on site.
Currently, the Plant Materials Program provides for environmental stewardship projects across six State Park regions of the state, from the Finger Lakes Region and eastward to the Taconic Region. The program also works with Parks Western District Nursery and its Native Landscape Resource Center, managed by Kevin McNallie at Knox Farm State Park in Erie County, which provides native plantings for the western regions of the state.
Additional guidance on plant suitability for specific habitats or sites is provided by NY Natural Heritage Program.
Why Native Plants?
A wealth of literature points to native plants and species diversity as critical factors for successful restoration. Native plantings are better able to compete against invasive species than non-native plants. Planting more native species also increases both plant and animal diversity. Ensuring that plants are not only native, but regionally appropriate and genetically diverse increases the likelihood that the plantings will be successful and contribute to their local ecosystem.
Plant Materials Program staff search for wild, naturally-occurring populations for each project within the same ecoregion. Ecoregions are zones defined by their plants, soil, geography, geology, climate, and more. Plants that live in the same ecoregion have adaptations that help each species survive in those precise conditions, so seed has the best chance of survival if it is replanted within that zone.
New York State is split into 42 different ecoregions, with each region warranting a different seed collection so that seed is often not shared across projects. In the Sonnenberg greenhouses, plants are not allowed to hybridize (or cross-pollinate) with plants from other regions. Preserving the plant genetics of each ecoregion is important to maintain each unique habitat.
Science of Collecting Native Seeds
Seed collection involves more than just taking a seed from a plant. Our collectors ensure collections aren’t harming the population. Only a small fraction of seed is taken from each plant, so that enough seed remains to support that population, and to serve as food for insects and other animals.
Populations of a plant must be large enough to support seed collection. Areas are monitored before and after collection, and they are not collected from again for multiple years. The conservation of intact ecosystems is more effective than planting and restoring ecosystems, so it is important that seeds are collected in a way that protects existing plant populations.
Measures are also taken to capture genetic diversity, including collecting multiple times a season and using field techniques to collect evenly or randomly across a population. Collectors avoid selecting for specific traits, as that can reduce a population’s ability to adapt, and can in turn negatively impact other populations.
Native Plants Help an Endangered Butterfly
In the Capital Region, the Plant Materials Program collects wildflower seed to support Parks Stewardship staff in restoring rare butterfly habitat. Saratoga Spa State Park is home to the state and federally-endangered, and globally-rare Karner blue butterfly. This small butterfly lives in pitch pine-scrub oak barrens, and during its caterpillar stage, it feeds on only one wildflower: the blue lupine (Lupinus perennis).
Picking seeds from the right lupine plants is extremely important, as the chemical makeup of lupine has been shown to vary across the range of the species. Introducing a new strain of lupine might be harmful or even toxic for the butterflies. For example, the same species of lupine growing in another state could be different enough from the ones growing at Saratoga Spa State Park that, if planted there, it could be toxic to the Karner blue butterflies living in the park.
A 2015 study found that survival and development of the Karner blue was linked to which lupines caterpillars had fed upon. Expanding lupine at Saratoga Spa through local seed is the safest option to protect the unique genetics of both the butterflies and lupine.
New Life for Sonnenberg’s Historic Greenhouses
Each spring, the Plant Materials Program grows a new cycle of plants in Sonnenberg’s historic Lord & Burnham greenhouses. These are greenhouses which date back to the Gilded Age of the early 1900s and reflect the botanical passions of the home’s original residents, Frederick Ferris Thompson and Mary Clark Thompson, two prominent philanthropists.
At the time of construction between 1903 and 1915, the greenhouses at Sonnenberg reflected state-of-the-art technology. Only a handful of other such Lord & Burnham structures survive today, with some major examples found at the New York Botanical Garden in The Bronx, The Buffalo and Erie County Botanical Gardens, the United States Botanic Garden in Washington, D.C., and the Phipps Conservatory and Botanical Gardens in Pittsburgh.
The federal government acquired the Sonnenberg grounds in 1931 and passed it to a not-for-profit preservation organization in 1972. State Parks bought the property in 2005, while the not-for-profit group continues to manage it and raise funds to support the restoration of these historically-significant greenhouses.
This 50-acre estate and its greenhouses, gardens, and Queen Anne-style mansion are all open to the public from May through October. A portion of the greenhouses interprets the legacy of the site, including a palm house, orchid house, and cactus house.
Patrons can tour the greenhouses utilized by the Plant Materials Program and learn about the thousands of plants grown for restoration of native ecosystems. Housing the program at Sonnenberg expands the interpretative value for park visitors and supports the restoration of these historic structures.
During this long winter, know that the next generation of native plants for New York State Parks projects is being nurtured in a historic greenhouse complex that dates to the Gilded Age, and come spring, will be ready to preserve and protect some of our most precious places.
Post by Brigitte Wierzbicki, Plant Materials Program Coordinator
Cover photo by Sonnenberg Gardens and Mansion State Historic Site
Consider Native Species When Planting At Home
- Check if you have natives already coming up in your garden or yard. It is likely that you already have some native plants that are providing habitat, and these will be best adapted to your local ecosystem. Use identification resources to see what is from NY or New England. Apps like iNaturalist, online guides like GoBotany, or field guides like Newcomb’s (Newcomb’s Wildflower Guide, Lawrence Newcomb) are great resources for getting started.
- Native Plantfinder is a great resource to choose which plants are native to your zip code! It also ranks plants based on the number of native butterflies and moths that can use them—meaning you will bring more wildlife into your garden, including pollinators and birds. It is still in development and only a small fraction of these plants will be available commercially, so double check your favorites against what’s available.
- Use the New York Flora Atlas to ensure the plant you’re interested in is native to the state. Even better if it’s native to the county you’re planting in!
- Utilize the New York State Department of Environmental Conservation Saratoga Tree Nursery. The 2020 Seedling Sale is currently ongoing and is an affordable way to purchase native plants and support environmental conservation work in the state.
- Check out the Native Plant Nursery Directory to find your local native plant nursery. Request that your local garden center carries native plants, and ideally, ones that are from New York. Often the native species in nurseries are sourced from outside of New York, or even the southern U.S. These won’t be as well adapted to New York.
- Avoid cultivars of native species. You may find some natives in nurseries with different names signifying they have been bred for different colors or flower shapes. These changes can reduce the ecosystem function of the plants, or even populations beyond your garden if they are able to breed. Our native species evolved with the native pollinators, and changes can make the plants completely unusable for native pollinators.
- Do not collect from the wild for your garden. Taking from the wild can be more damaging to the ecosystem than the benefit that it may bring to your garden. Collecting from the wild is also often illegal. Many factors need to be considered for safe harvests, and many of our plant populations are experiencing declines due to development, habitat fragmentation, invasive species, deer overabundance, climate change, and more. It can be hard to know if the seeds you’re taking will damage the population or remove a critical food source, so don’t take the risk!
Bakker, J.D. & Wilson, S.D. (2004) Using ecological restoration to constrain biological invasion. Journal of Applied Ecology, 41, 1058–1064.
Fargione, J.E. & Tilman, D. (2005) Diversity decreases invasion via both sampling and complementarity effects. Ecology Letters, 8, 604–611.
Handel, K. (2015) Testing local adaptation of the federally endangered Karner blue butterfly (Lycaeides melissa samuelis) to its single host plant the wild lupine (Lupinus perennis). (Electronic Thesis or Dissertation).
Hereford J. 2009. A quantitative survey of local adaptation and fitness trade-offs. American Naturalist 173:579-588.
Johnson R, Stritch L, Olwell P, Lambert S, Horning ME, Cronn R. 2010. What are the best seed sources for ecosystem restoration on BLM and USFS lands? Native Plants Journal 11(2): 117-131.
Kline, V.M. (1997) Orchards of oak and a sea of grass. In: Packard, S.; Mutel, C.F., editors. The Tallgrass Restoration Handbook. Washington, DC: Island Press:3-21.
Omernik, J. M. (1987). Ecoregions of the conterminous United States. Annals of the Association of American geographers, 77(1), 118-125.
Plant Conservation Alliance. (2015). National Seed Strategy for Rehabilitation and Restoration 2015-2020. Bureau of Land Management. Available at: https://www.blm.gov/programs/natural-resources/native-plant-communities/national-seed-strategy.
"date": "2020-01-22T06:10:34",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9085770845413208,
"score": 3.375,
"token_count": 2603,
"url": "https://nystateparks.blog/tag/native-plants/"
} |
Anutan is the language spoken on Anuta Island in the remote Temotu Province of the Solomon Islands. In English Anuta was previously also called Cherry Island. Its area is less than half a square kilometre and the resident population is nearly two hundred and fifty. The only guaranteed contact the island has with the outside world is an infrequent cargo ship from Honiara.
The linguistic lineage for Anutan is: Austronesian, Malayo-Polynesian (or Extra-Formosan), Central-Eastern Malayo-Polynesian, Eastern Malayo-Polynesian, Oceanic, Central-Eastern Oceanic, Remote Oceanic, Central Pacific, Eastern Fiji-Polynesian, Polynesian, Nuclear Polynesian, Samoic-Outlier, Futunic. The term "outlier language" refers to the Polynesian languages spoken outside the so-called Polynesian triangle Hawaii-New Zealand-Easter Island. Anutan, therefore, is closely related to Tikopian and other Futunic languages, and more distantly to other Samoic-Outlier languages such as Tuvaluan and East Uvean. However, although basically Nuclear Polynesian, Anutan also exhibits a substantial Tongic element that is possibly due to settlement from or early contacts with Tonga.
An interesting feature of Anutan is its small consonant inventory: with eight phonemes it is one of the smallest among the world's living natural languages. Hawaiian and a few other Polynesian languages also have eight but only Pirahã (Amazonas, Brazil), Rurutu (Austral Islands, French Polynesia) and Rotokas (Bougainville, Papua New Guinea) seem to have fewer consonants. The Anutan vowel system, on the other hand, is of the normal Polynesian size, consisting of the five cardinal vowels plus their corresponding long counterparts.
The small number of consonants in Anutan is apparently the result of phonemic reductions (i.e. mergers and losses) that have eliminated all the three voiceless fricatives, as well as the lateral and the glottal stop of Proto-Polynesian (which is conventionally written with ´q´). More specifically, in Anutan /p, f/ have become /p/, and /t, s/ have become /t/, while /r, l/ have become /r/. The glottal stop and /h/ have become zero. Also /s/ has in some instances become zero. In addition, Proto-Polynesian /w/ has changed into Anutan /v/ (the language's only fricative).
The following Tuvaluan words and their Anutan cognates illustrate this (the third example is taken from the northern Tuvaluan dialect of Nanumanga): fafine - papine (woman, female), vasa - vata (open sea, ocean), lahi - rai (big). An example for the loss of the glottal stop would be Proto-Polynesian *leqo vs. Anutan reo (voice). Proto-Polynesian *waka is Anutan vaka (canoe).
The stops /p, t, k/ are slightly voiced at times and /p, t/ also have the fricative allophones /f, s/. Following /o, u/ the voiced labiodental fricative /v/ is realised as /w/, while /r/ sometimes surfaces as /l/. Some of this allophonic variation is due to the contact with Tikopian, a language with growing phonological and lexical influence on Anutan. As in other Polynesian languages, unstressed vowels are optionally devoiced or even deleted between identical consonants and word-finally.
Stress in Anutan most often falls on the first syllable. Disregarding vowel deletion and related processes, the Anutan syllable structure can be summarized as (C)V(:).
In its morphology, Anutan exhibits considerable similarity with some other Futunic languages, especially neighboring Tikopia, but also with a number of more distantly related Polynesian languages. Nominal particles, verbal particles and prepositions, for example, are often identical with their corresponding Tuvaluan forms. The causative prefix is paka-.
The word order in Anutan can be rather flexible but SVO (subject-verb-object) seems to prevail. Ergative constructions of the type PVA (patient-verb-agent) are also quite common.
The Anutan lexicon, previously more distinct, has recently come under Tikopian as well as Pijin and English influence, so that mana (father as a term of address), for example, is often replaced by taati 'daddy'.
Note: The geminated vowels are normally, but not consistently, written with double letters, and the velar nasal in some sources appears as ´g´ or ´ŋ´.
Ko te penua e tokorai.
The island is populous.
Te tangata e rerei.
The man is good.
Te vaka ne ngoto.
The canoe sank.
Ku oti ne kau oru ki ei.
I have gone there.
Ko ia e poto i te ta o te vaka.
He is an expert canoe builder.
Kairo na iroa te kakau.
He doesn't know how to swim.
E pia te ra?
What is the time?
Te ra e pitu.
It is seven o'clock.
Aumai poi rau paka ma aku!
Bring hither a tobacco leaf for me!
Ko te tangata makeke e aro i te vaka rai o te ariki.
The strong man is paddling in the chief's large canoe.
Ko ia ne ariki i te vatia koi tamaaroa.
He became chief while still a bachelor.
Ko te mako ka pete e au.
The dance song will be sung by me.
Ko te toa ne taia e natou.
The warrior was slain by them.
Kau piipia ki te kaiapi rakau ke momori mai ki a te au.
I want a wooden pipe to be sent hither to me.
E tapa aku mea mai.
I have more than you.
Ko ai te mea e ke karanga ki ei?
About whom are you speaking?
Mana ne karanga atu pakapepeeki?
What did father say?
Te tangata e tai.
One man.
Te ika e pua te rau e pitu mo te mata rua maa varu.
Seven hundred twenty-eight fish.
(Information on Anutan provided by Emanuel Fuchs and mainly based on: Richard Feinberg, 1977, The Anutan Language Reconsidered: Lexicon and Grammar of a Polynesian Outlier. New Haven, Connecticut: Human Relations Area Files Books, 272 pp., and on personal communication with the author)
Na tamana ne karanga atu ko ia ke aru o kake i te niu. Te niu nei, maatea nga manumanu i ei. Nga roo ata mo nga morokau. Na tamana ne piipia ki nga manumanu ke naatou taa matea ko Motikitiki. Ko ia ne kake i te niu. Ne oko atu ki te poi niu. Nga manumanu ne o mai o uuti te tino o Motikitiki. Nga manumanu ne taa matea e Motikitiki. Nga roo ata mo nga morokau. Ko ia ne tori ipo te poi niu. Ko ia ne ipo ki raro. Ne au mo nga niu ki na tamana.
His father told him to go and climb a coconut tree. This coconut tree had many animals in it. Carpenter ants and centipedes. His father wanted the animals to kill Motikitiki. He climbed in the coconut tree. He reached the coconut. The animals approached to bite Motikitiki's body. Motikitiki killed the animals. The carpenter ants and the centipedes. He picked a coconut. He descended to the bottom. He came with the coconuts to his father.
(Adapted from: Richard Feinberg, 1998, Oral Traditions of Anuta. New York, New York: Oxford University Press, p. 26.) | <urn:uuid:38dfd971-74e7-4a46-8855-796746634ffa> | {
"date": "2020-01-22T05:36:55",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8808888792991638,
"score": 3.53125,
"token_count": 1840,
"url": "https://omniglot.com/writing/anutan.htm"
} |
Children are inquisitive. They love information. They ask constant questions because their minds are continuously processing everything they see and hear. Children come to their parents all the time to clarify the things they don’t really understand. As parents, we want to help our children learn and understand yet sometimes when they ask an ‘awkward question’ it is tempting to gloss over it.
There are so many issues that parents can find difficult to discuss with their children. Depending on our own experiences and beliefs, how ‘awkward’ a question is for us as individuals can vary hugely. For many parents, those awkward questions may include: “Where do babies come from? What is sex? What does gay mean? Why does he have two mammies? Why don’t I have a mammy? Why don’t I have a daddy? Why are some people homeless?”.
This week we offer ’10 ways to’ support us in answering those awkward questions:
- Don’t try to fob a child off by changing the subject or saying they are too young. If they are old enough to ask, they are old enough to get some information. By not answering awkward questions and telling children they are ‘too young’ to know such things, we are making taboos of so many subjects that are normal in our society. Children will learn quickly not to ask us anymore, and they may find other, perhaps unreliable, sources to answer their questions. A question will not go away until your child is satisfied with the answer they find.
- Be honest in an age appropriate way. This does not mean you wait until they are teens to tell them details (when you may be even more embarrassed). Give children little bits of information to match what they can understand as they develop. Plant the seeds and build the tree over time with them.
- At times a question may upset you yet this is no reason to not answer it. You may have to explain to your child that this question makes you a little sad but that you will talk with them about it. A parent absent from your child’s life is often very difficult to talk about and many parents worry that their child will feel the rejection they themselves may have experienced. But remember that children have a different relationship with and perception of an absent person in their life. They will not feel the same as you. Here we explore ways to explain an absent parent.
- Be factual and try not to make the information about any subject into a fairy tale. Educate your child about families and all the diverse families in our society.
- Try to have an open relationship with your child from the first days. Once they start talking to you, start talking and sharing with them. Remember, even though it may seem a long time away now, you don’t know what choices your child will make as they grow up and you don’t want them to think that you may be unsupportive of them in the future.
- Just because you explain once, that probably won’t mean that you’re off the hook. Children take pieces from each and every conversation. Some bits they recall and other bits get left behind. They will ask you again so try to be patient and answer them again. Maybe you can add in additional age appropriate detail the next time.
- There are many excellent books out there to support parents in talking with children about almost every topic. Perhaps you can get some books in the library and introduce them during story time.
- If your child has wrong information or understanding then correct them from the first error. Try to keep the information clear. Be open and honest or you will only create more awkward situations in the future. Always try to build your relationship based on trust.
- At times your child’s other parent might object to you answering these awkward questions. Try to talk with them and help them to understand why it is important to answer your child’s questions honestly. Provided you are sharing age appropriate information then you need not worry.
- Seek support from service providers such as One Family if you would like support in talking with your child about challenging situations. Once you start to talk openly with your child, and believe that you are the right person to help them understand the very complex world we live in, it will become easier for you.
This article is part of our weekly ’10 Ways to’ series of parenting tips, and is by One Family’s Director of Children and Parenting Services, Geraldine Kelly.
Next week we talk about teen relationships and sexuality.
Find out more about our parenting skills programmes and parent supports. For support and information on these or any related topics, call askonefamily on lo-call 1890 66 22 12 or email [email protected]. | <urn:uuid:5b14285b-a211-4852-a58f-abbb4a2fa964> | {
"date": "2020-01-22T06:35:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9685726165771484,
"score": 2.859375,
"token_count": 999,
"url": "https://onefamily.ie/parenting-how-to-answer-the-awkward-questions/"
} |
According to The United Nations Rights of the Child, it is the right of the child to have contact with both parents after parental separation; yet many parents see it as their right, as parents, to have contact with their child.
When it comes to contact with children, mums can hold the power from day one: they carry the baby for nine months so straight away they make the very first decisions about the baby. All too easily, fathers can take a back seat in parenting and when a separation occurs they can struggle to assert their position as an involved father. So many separated fathers, whom I work with, want to be hands-on fathers. Men are as capable as women but culturally we are often led to believe they are not.
It is not good for children to see two parents without equal status. If society doesn’t encourage fathers to play an active role in parenting then we are not allowing children the full opportunities they are entitled to: the right to both parents provided it is safe for the child.
We need to separate out poor partners from poor parents: it is a different relationship. Children only have two biological parents; allowing them every opportunity to have a relationship with both parents is important to the positive outcome of their lives. Here we offer ’10 ways’ to support your child through shared parenting:
- Explore what prevents you from allowing the other parent to have an active parenting role. Is this a genuine concern based upon facts or an opinion you have formed? Does your child feel safe and happy with the other parent? Try to follow their lead. Take small steps to try and build confidence in their ability.
- Start with small changes in contact. Talk with your child about what they would like to happen.
- Reassure your child that you trust that their other parent loves them and therefore you want both parents to be active in their life.
- Ask the other parent to do practical things to support parenting rather than only getting involved for the fun parts.
- Allow them to have opportunities to take children to and from school, to the doctor, the dentist and to after-school activities. Your child only has one life, it does not need to be separated into mum’s time and dad’s time.
- Share practical information with the other parent about your child’s development and everyday life. Know what stage your child is at. Don’t expect to be told everything, find things out for yourself, ask questions, read up on child development and talk to the school if you are a legal guardian.
- Pay your maintenance and don’t argue over the cost of raising a child. If you receive maintenance be realistic about what the other parent can afford. If you were parenting in the same home you would do everything you possibly could to ensure your child has what they need. It cannot be any different just because you parent separately.
- Buy what your child needs and not what you want to buy for your child. It is always lovely to treat children but not when it means they have no winter coat. Talk with the other parent about what the child has and what they need.
- Ask your family to respect your child’s other parent. They are, and always will be, the parent of your child. Children need to know that family respect their parents. It is not healthy for the extended family to hold prejudice over parents.
- If you are finding it really difficult to allow your child have a relationship with their other parent, seek professional support to explore the reasons for this. There is obviously a lot of hurt and I am not dismissing this in anyway but if you can move on you will allow your child to have positive experiences.
This ’10 Ways to’ article is by One Family’s Director of Children & Parenting Services, Geraldine Kelly, as part of our weekly ’10 Ways to’ series of parenting tips. You can read the full series here.
Join the One Family Parenting Group online here | <urn:uuid:57ef64bb-8866-45e0-a38c-a3e3947bcca6> | {
"date": "2020-01-22T06:36:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9686030745506287,
"score": 3.09375,
"token_count": 824,
"url": "https://onefamily.ie/supporting-your-children-through-shared-parenting/"
} |
A successful weight management program requires a long-term approach, one designed to modify the behaviors that can influence our ability to lose or gain weight.
The most important factors in eating for weight loss include maintaining energy and nutrient balance. Severe caloric restrictions will slow down the metabolism, making weight loss harder to achieve. For women this means a minimum of 1200 and for men, 1500 nutrient dense calories a day.
To maintain energy, the nutrient balance should be roughly 65-70% carbohydrate, 15-20% protein, and 20-25% fat. Carbohydrates remain the best choice for fueling muscles and promoting a healthy heart. A 20% fat diet can assure you are not denied the foods that nurture you but limits fat intake to levels that support weight loss.
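To put numbers on that: on a 1,500-calorie plan, a 20% fat target works out to about 300 calories from fat, or roughly 33 grams, since fat supplies about 9 calories per gram. Carbohydrate and protein each supply about 4 calories per gram, so 65% carbohydrate on the same plan is roughly 975 calories, or about 244 grams.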
It’s also important to maintain frequency of meals. Three meals a day is standard in our society but no law says you can’t eat more often. It’s particularly wise to avoid the all-too-common pattern of no breakfast, little or no lunch, and a huge dinner. Several mini-meals of 300-400 calories keep the body’s metabolism elevated.
A varied diet is also important for long term weight loss. Avoid eating large amounts of one type of food–even if it is a nutrient dense food–to the exclusion of others.
Some people have the opposite energy problem. They weigh less than they should and have difficulty putting on weight. Some of the aids to gaining weight are the reverse of techniques suggested for losing weight.
First, start with a nutritionally adequate diet and eat larger meals more often, increasing the energy density of the food. Then, consider a progressive strength training program to add body weight in the form of lean tissue (muscles) while you strengthen the body. If implementing these suggestions does not achieve goal weight, you may need to accept the fact that your body is genetically regulated at a lower level of fatness and maintaining a greater amount of body weight may require more time, effort, and expense than are worthwhile.
Regardless of whether you need or want to lose or gain weight, exercise remains the basis for any long term lifestyle goals. A balanced exercise program is the key component of any successful weight loss program. Weight loss without exercise can have a negative effect on body composition, especially if weight is repeatedly lost and regained.
So, exercise, and eat a balanced and varied diet, low in fat, low in sugar and high in fiber. If you maintain that regimen, the body will find its own genetic set point.
"date": "2020-01-22T05:03:16",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9316255450248718,
"score": 2.859375,
"token_count": 519,
"url": "https://pinihealthclub.com/index.php/frequent-questions/"
} |
Feather Rainbow Color Sorting Activity for Preschoolers
Kids can easily create a beautiful rainbow with colorful feathers. This is a fun sensory activity for toddlers and preschoolers that combines singing, bright colors and soft feathers.
This post contains affiliate links. Privacy and Disclosure
Are you familiar with the song I Can Sing a Rainbow?
It's a catchy tune that includes the colors of the rainbow in the lyrics. You can find several different versions of the song on YouTube.
Listen to the song together before doing this activity. Then sing along as you choose feathers for the colors of the rainbow as depicted in the lyrics of the song.
Feather Rainbow Activity
Note: This post has been updated from its original format to provide better content. The images have been improved and the text clarified to optimize the activity for you and your early learners.
In this post we show how to use feathers and crayons in a craft and music activity.
Supplies for feather rainbow
- construction paper
Instructions for a feather rainbow activity
There are a few different ways to engage kids, with options for activities and a craft.
1. Draw lines with crayons on a sheet of paper to represent the colors of the rainbow.
You can draw the lines beforehand for younger children but older kids can do this step themselves.
As you sing the song, kids can match a feather to the corresponding line on the paper.
You'll notice I have a pink feather added to the rainbow. Since it's named in the song I wanted it to be an option as a color choice. There have been sightings of rainbows with marvelous pink hues. The pink appears with a little help from the blending of red and violet so it's a fun bit of learning to add to the activity.
I chose to eliminate indigo from this activity.
2. Use self-adhesive dots to display the colors of the rainbow. Kids can match a feather to each dot.
Older children can print the words for the colors next to the dots with a matching crayon or colored pencil.
3. Glue feathers in place to make a rainbow picture.
Start with a red feather, the first color named in the song. Glue the feather onto your paper. Continue playing the song, pausing the music after each color so you can glue the corresponding feather to the picture.
Point to each feather already glued onto the paper as you sing.
Don't be too concerned about the order of the colors. Allow your child the freedom to explore the materials and sensory experiences.
It's OK to be creative while enjoying the music and the sensory play, and engaging fine motor and language skills.
Rainbow board on Pinterest
Rainbow crafts provide wonderful opportunities for kids to experience colors and textures.
It's not important to get the rainbow "right". Kids will benefit from strengthening cognitive skills as they sing, and sort and count the feathers.
Above all, kids should have fun exploring the materials and creating a beautiful picture.
More from Preschool Toolkit
Rainbow crafts from kid-friendly bloggers
Craft sticks rainbow from Easy Peasy and Fun
Printable rainbow crafts from Kids Activities Blog
Tissue paper rainbow from Happy Hooligans
Fused bead rainbow from Fireflies and Mudpies | <urn:uuid:9e1aaf44-a9cd-4be5-9a78-5f22e306ff6b> | {
"date": "2020-01-22T06:06:51",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9091095924377441,
"score": 3.421875,
"token_count": 676,
"url": "https://preschooltoolkit.com/blog/feather-rainbow-craft/"
} |
During the 16th century, Europe experienced radical economic, social, and political changes. These changes were associated with the Industrial Revolution, which took place between 1500 and 1800. Based on the changes that took place in Europe during the 16th century, it is safe to state that Europe in the 16th century was expansive.
In the previous centuries, many of the European countries relied on agriculture for economic development. Many people lived in small villages, where they ploughed their land using ancient farming tools, mainly for subsistence purposes. However, in the 16th century, focus shifted from subsistence farming to commercial farming (Lambert, n.d). This was made possible through the introduction of better farming equipment, thanks to the industrial revolution. Commercialization of agriculture led to development of trade and industries.
According to Lambert (n.d), many industries in Europe rose during the 16th century. For instance, mining of coal started in the early 16th century. This led to the opening of coal mining operations across different parts of Europe where coal was available. Industries which had already started to develop before the 16th century experienced rapid growth during this period. Such industries included the iron, lead, and tin industries. Consequently, the development of industries and the flourishing of agriculture led to the development of many urban centers, which were used by Europeans for trading activities. Farmers would transport their produce to urban areas to trade it for manufactured commodities. These trading activities enabled Europe to grow richer and richer during the 16th century (Lambert, n.d).
Early in the 14th century, Europe had experienced population decline due to a plague that swept across the continent (Lambert, n.d). However, in the 16th century, the population of Europe rose very steadily. By 1525, the population of Europe rose to three million, from 2.5 million in 1475. By the end of the 16th century, the population of Europe stood at four million. Increase in population was due to abundance in food commodities, and improved standards of living (Lambert, n.d).
Technological innovations were also experienced during the 16th century. One of the technological innovations that took place in the 16th century in Europe was the invention of the printing press. This resulted into printing of the Bible in the vernacular languages. Printing of the Bible led to the reformation of the church. The church had acted as the basic cultural pillar in Europe for many centuries. However, printing of the Bible saw the church experience some division. “The cultural consensus of Europe based on universal participation in the Body of Christ was broken” (The sixteenth Century, 2000). Protestant movements were formed, which introduced different methods of worshiping, and new cultural norms.
During the 16th century, Europe also experienced numerous social changes. In the beginning of the century, life was good; even the poor managed to afford meat in their meals. However, in the mid-1500s, life started to change as the population grew more and more, and the cost of living started to rise. The poor were the most affected by the increased cost of living because real earnings fell by huge margins. Commercialization of agriculture saw many peasants lose their land to the bourgeoisie. This led to a rise in homelessness. Since the homeless did not have any means of livelihood, vagrancy was on the rise, especially in urban areas, as the homeless looked for means of earning income. In contrast, changing economic conditions led to the rise of the middle and upper classes. These classes of people lived in the urban areas and their main economic activity was trade. They developed new ways of spending leisure time: drinking, gaming, and gambling in taverns (Lambert, n.d).
Politically, Europe experienced major dynastic struggles during the mid-1500s. Many of the European nations were "born" during this period. Many of these nations were subjected to harsh monarchs (The sixteenth Century, 2000). Those who were in power divided the territories amongst themselves and their family members. For instance, King Charles V divided his empire among his sons and brothers (The sixteenth Century, 2000). The actions of King Charles V led to the birth of Spain, the Netherlands, and Austria/Hungary. The bourgeoisie also started becoming interested in politics, actively participating as they struggled to protect their land and trade interests (The sixteenth Century, 2000).
The numerous changes that took place in Europe during the 16th century provide evidence that, indeed, Europe was expansive during the 16th century. These changes resulted in numerous economic, social, and political transformations that shaped the continent for centuries to come.
"date": "2020-01-22T04:33:01",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9828091263771057,
"score": 3.96875,
"token_count": 954,
"url": "https://primeessays.com/samples/history/the-sixteeth-century.html"
} |
Diana's brain can only think about a few things at the same time. All human brains are like that. If she tries to think about too many things at the same time, she won't be happy.
So how can Diana do complicated tasks? The key is what we said earlier.
Diana's brain can only think about a few things at the same time.
Thunking is short for thinking-in-chunks. It's how brains do tasks that are too complex to fit in brains all at once.
- Break up the task into chunks.
- Work on each chunk separately.
- Put the chunks together.
Working on task pieces separately means that the brain is never overloaded.
Let's see how you do it. But first, before you do anything else…
Define the goal
You work out what the end product will be. That's the very first thing. If the assignment is one thing, and you do another, you'll have wasted your time. That's a Bad Thing.
Misunderstanding business goals is the number one reason programming projects fail.
So, how to define the goal. Take the Aussie rules program. You might start by drawing out the spreadsheet. What it looks like before the program runs, and after:
The drawing shows what the program is supposed to do.
In RL (real life), you often don't know exactly what your program will be doing. You need to allow for that uncertainty. For this intro course, we'll assume that we know outcomes before we start.
Choose the big chunks
After choosing the outcome, you decide on the big chunks. But how? In sewing, there are patterns that show you the chunks. Sleeves, cuffs, etc.
There are patterns in programming, too. In programming, a pattern is a common way to do a task, that people find useful. You've already seen a pattern. Here's the tip code.
'Declare variables
'Get data from worksheet
'Compute
'Output to worksheet
This is the most common pattern for programs. It's called Input-Processing-Output (IPO). Each chunk is a group of statements. Put the chunks together, and you have a program.
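To make the pattern concrete, here's a minimal sketch of a complete tip program built on IPO. The cell positions and the 15% rate are illustrative assumptions, not taken from the original lesson; your worksheet may differ.

Option Explicit

Sub CalculateTip()
    'Declare variables
    Dim amount As Double
    Dim tip As Double

    'Get data from worksheet
    amount = Cells(1, 2)

    'Compute
    tip = amount * 0.15

    'Output to worksheet
    Cells(2, 2) = tip
End Sub

Each comment marks one chunk. The variables amount and tip carry data between the chunks.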
You'll see other patterns in the course. They're documented like this:
Each pattern has:
- A name
- Situation where it applies
- How to use it
What you see above is a summary of the pattern. See the More… link? Open it in a new tab (middle-click the link, CTRL+click the link, or right-click the link and choose New tab or New window). You'll see the full details.
The patterns are "language agnostic," that is, they can be used with any programming language. In VBA, you have to declare variables before you use them when
Option Explicit is present (which we always do). However, in PHP, Python, and other languages, you can't declare variables in the same way.
That's why "Declare variables" isn't in the pattern: it doesn't apply in some common languages.
The pattern catalog
When you're working on an exercise, in this course or another, it can help to have a list of patterns. You can quickly scan the list, and see if there are patterns that apply to the task you're working on.
That's what the pattern catalog is for. There's a link to it in the Tools menu in the right-hand toolbar. You'll find links to where each pattern is used in the course, and keywords to find related patterns.
Back to Diana and Ben…
The chunks need to work together. Here's the tip program again.
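(A minimal sketch, using the same illustrative cell layout as above.)

'Get data from the worksheet
amount = Cells(1, 2)

'Compute results
tip = amount * 0.15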
The variable amount is how those two chunks connect together.
Yes, it's the variables that connect the chunks together.
Look at this code:
'Get data from the worksheet
amount = Cells(1, 2)

'Compute results
tip = mealAmount * 0.15
That wouldn't run. The chunk "Get data from the worksheet" puts the cost of the meal in the variable amount. The chunk "Compute results" expects the cost of the meal in the variable mealAmount. The two chunks aren't connected right.
If one human was writing the input chunk, and another human was writing the processing chunk, they should get together before they start coding, and agree on how the chunks connect. In this case, they would agree on a variable.
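The fix is a one-word change in whichever chunk is out of step. For example, the compute chunk could be rewritten to use the agreed variable (again, an illustrative sketch):

'Compute results
tip = amount * 0.15

Now both chunks use amount, and the data flows through.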
Patterns aren't Legos
With Legos, the bricks are fixed. You can't change them. You connect them together; that's the whole point. But you don't start carving pieces off one, and gluing them onto another.
Patterns aren't like that. They're starting points. You melt them to change their shape. You blend them together. Until you get what you need.
Here's the official summary of thunking. You can tell it's official, because it has a badge.
- Specify the goal.
- Decide on an overall pattern for a program that meets the goal.
- For each chunk in the pattern, decide how it will connect to other chunks.
- If a chunk is too big to think about all at once, apply these same steps to that chunk.
Big chunks are broken down into smaller ones. Small chunks are broken down into even smaller chunks, and so on. Until the tiny chunks fit into your head. If you've done the connections right, the chunks should all fit back together to make the entire program.
Before you write a program, be sure to understand the goal. Drawing input and output on paper can help.
Humans thunk when they design things. They divide the task into chunks, so they can think about one chunk at a time. Before they start working on a chunk, they should decide how the chunks will fit together.
Identify the big chunks for a task. Look for existing patterns you can use. A pattern is a way of doing something that has proved successful in the past.
This Web site has a pattern catalog. Use it if you need ideas about how to tackle a task. | <urn:uuid:16e77e21-fb3c-45aa-b415-876931e66a57> | {
"date": "2020-01-22T05:24:40",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9437668919563293,
"score": 3.53125,
"token_count": 1290,
"url": "https://programming.cybercour.se/course/chunks-and-patterns"
} |
Universe in the Classroom: using robotic telescopes in primary schools
Universe in the Classroom is a science engagement programme that provides under-served Welsh primary schools with improved teaching methods and innovative tools, including access to robotic telescopes, to modernise and enhance the way science is taught to young children aged 4-11 years. As a result, the programme has engaged with 133 primary schools from 21 of the 22 Welsh counties, 47% of whom receive below average budget per student per year. In addition to the provision of high-quality resources, Universe in the Classroom offers teacher training workshops to improve teachers’ scientific knowledge and confidence, with 75% stating improved confidence using science resources post-training event. The programme has also engaged with a total of 22,083 schoolchildren, successfully improving their understanding of the Universe and challenging perceptions of scientists, with an additional 10% of girls describing scientists as female after a workshop hosted by our diverse and enthusiastic team of undergraduate role models. Furthermore, the number of complex scientific concepts discussed by students tripled post-workshop. Although we identified several potential deterrents affecting the uptake of robotic telescopes in primary school classrooms, these were addressed by the programme and 25% of teachers claim to have used the telescopes in their schools, with an additional 75% stating their school would find a second LCO account useful.
"date": "2020-01-22T05:48:42",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9626200795173645,
"score": 3.265625,
"token_count": 267,
"url": "https://rtsre.net/ojs/index.php/rtsreconfproc/article/view/31"
} |
Sex in fungi.
Sexual reproduction enables genetic exchange in eukaryotic organisms as diverse as fungi, animals, plants, and ciliates. Given its ubiquity, sex is thought to have evolved once, possibly concomitant with or shortly after the origin of eukaryotic organisms themselves. The basic principles of sex are conserved, including ploidy changes, the formation of gametes via meiosis, mate recognition, and cell-cell fusion leading to the production of a zygote. Although the basic tenets are shared, sex determination and sexual reproduction occur in myriad forms throughout nature, including outbreeding systems with more than two mating types or sexes, unisexual selfing, and even examples in which organisms switch mating type. As robust and diverse genetic models, fungi provide insights into the molecular nature of sex, sexual specification, and evolution to advance our understanding of sexual reproduction and its impact throughout the eukaryotic tree of life.
Ni, M; Feretzaki, M; Sun, S; Wang, X; Heitman, J
Volume / Issue
Start / End Page
Pubmed Central ID
Electronic International Standard Serial Number (EISSN)
Digital Object Identifier (DOI) | <urn:uuid:316d757b-af7c-4ef5-9a70-7ce8ce511e94> | {
"date": "2020-01-22T06:22:35",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8634399175643921,
"score": 3.359375,
"token_count": 253,
"url": "https://scholars.duke.edu/display/pub776290"
} |
The United States Antarctic Program (USAP) was formed in 1959 with the purpose of studying Antarctica’s unique environment. The goal of the program is to study its ecosystems and the effects of global processes. The critical research done by the USAP is key to understanding human and natural causes and their effects on our planetary ecosystem.
Sonic Enclosures designed and manufactured mobile sediment and chemical laboratory Milvans for Raytheon for use on the USAP. The harsh environment called for a very in-depth design for both transport and the human working environment.
Sonic’s extensive engineering and design experience allowed us to manufacture top quality modules that will withstand the Antarctic’s harsh climate. | <urn:uuid:a2d41c42-ac0f-44f8-9833-7e41a4d071ff> | {
"date": "2020-01-22T04:48:28",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9185142517089844,
"score": 3.015625,
"token_count": 139,
"url": "https://sonicenclosures.com/portfolio/united-states-antarctic-program-usap/"
} |
The German Pinscher is a small but very energetic, watchful, alert, fearless, highly intelligent, and loyal breed.
The German Pinscher originated in Germany some 400 years ago, and has been used in the development of the Miniature and Doberman Pinscher breeds, as well as other breeds.
The German Pinscher was originally used as a vermin hunter and stable guard, amongst other things. The breed nearly became extinct during World War II; Werner Jung is credited with revitalizing the breed.
Though she be little, she be fierce! The German Pinscher is a muscular and powerful little dog, both in terms of body build and personality, making them excellent for endurance and agility.
German Pinschers are wonderfully devoted companions, extremely active, and intelligent.
- Active owners / families who enjoy including their dog in sports and outdoor activities.
- Experienced dog owners who have good dog training knowledge and know how to be “alpha” in the home.
- Homes with 5’ + fenced yards and willing to keep the dog on lead when not in a fenced area.
- Dog-savvy children.
- First time dog owners.
- The “stay at home” person or family.
- Families with cats or other small animals / pets.
- People who don’t have the time to exercise, socialize, or train their dog.
- Smart, learns rapidly.
- Little to no grooming required.
- Devoted to its owner, they LOVE “their” people.
- Great for anyone wanting an active, working dog.
- Very high energy level.
- Incredibly athletic: they can jump a 6 foot fence.
- High prey drive.
- Intelligent to the point of being too smart: they work hard to get what they want, and will rule the house if allowed.
Both males and females average 17-20 inches at the shoulder and weigh 25 to 40 pounds. They can have cropped ears and a docked tail, natural ears and a docked tail, or all natural (ears and tail).
Black and Tan/Red, Red to Stag Red, Fawn (Isabella) and Blue.
12 to 15 years.
They are known for their love and devotion to their family and are reliable with dog savvy children.
The German Pinscher is generally accepting of other dogs, though they do have a strong prey drive which can be a negative for other small pets or animals.
German Pinschers are known for their versatility; they excel in obedience, rally, agility, tracking and much more.
They are light shedders, bathe only when needed; nail trim weekly.
This is a relatively healthy breed.
"date": "2020-01-22T05:21:57",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9472580552101135,
"score": 2.796875,
"token_count": 575,
"url": "https://spdrdogs.org/breed-german-pinscher/"
} |
The Single Trooper Aerial Platform, or simply STAP, was a repulsorcraft used by the Trade Federation and later the Confederacy of Independent Systems. It was often piloted by a single B1 battle droid as a reconnaissance and patrol vehicle.
The STAP was a Baktoid Armor Workshop product designed by Trade Federation engineers, who were inspired by civilian airhooks. These similar craft were re-engineered for greater performance and reliability, and were purpose-built to be piloted by a B1 battle droid, as their humanoid frames were more suited to control the vehicles. Slim and lightweight, the tiny repulsorlift craft was fueled by high-voltage energy cells, which powered drive turbines providing the STAP with impressive speed and maneuverability. The agile vessels were typically bolstered by signals from an orbital Droid Control Ship, which skillfully guided the STAP's pilots. However, battle droids were exposed to enemy fire while riding the fragile craft. The vehicles were armed with a pair of forward-mounted blaster cannons.
Designed as a reliable reconnaissance and patrol vehicle, the STAP was typically deployed by the Trade Federation Droid Army at landing zones, where its pilots could survey the area and transmit data back to a Droid Control Ship. STAPs were often deployed as support vehicles alongside other, larger craft which bore the brunt of combat, and only occasionally forayed into battle in order to harry enemy forces. STAPs were also utilized for "mopping up" missions once a battle was finished.
The Trade Federation deployed STAPs alongside its invasion forces during the blockade of Naboo. Numerous STAPs patrolled the swamps surrounding the landing zone, and accompanied Multi-Troop Transports into the city of Theed, where Viceroy Nute Gunray declared victory and occupation. The Jedi Qui-Gon Jinn and Obi-Wan Kenobi were harried by STAPs in the swamps, but Jinn destroyed the vehicles by deflecting their pilots' blaster bolts back into the craft with his lightsaber. STAPs surveyed the Great Grass Plains and transmitted data back to the orbital Droid Control Ship.
STAPs were later used by the Confederacy of Independent Systems during the Clone Wars. General Grievous, Supreme Commander of the Droid Army, utilized a combat speeder that was bulkier than the standard STAP.
- Star Wars: Episode I The Phantom Menace (First appearance)
- Star Wars Battlefront II (DLC)
- Jedi of the Republic – Mace Windu 1
- Jedi of the Republic – Mace Windu 2
- Jedi of the Republic – Mace Windu 3
- Star Wars: The Clone Wars – "The Hidden Enemy"
- Star Wars: The Clone Wars film
- "501 Plus One"—Age of Republic Special 1
- Star Wars: The Clone Wars – "Blue Shadow Virus"
- Star Wars: The Clone Wars – "Liberty on Ryloth"
- Star Wars: The Clone Wars – "Counterattack"
- Star Wars: The Clone Wars – "Citadel Rescue"
- Star Wars: The Clone Wars – "A Necessary Bond"
- Star Wars: The Clone Wars – "A Death on Utapau"
- Star Wars: The Clone Wars – "Crystal Crisis"
- Star Wars: The Clone Wars – "The Big Bang"
- Kanan 10
- Thrawn 2
- Ultimate Star Wars
- Star Wars: The Visual Encyclopedia
- Star Wars: On the Front Lines
- Star Wars: The Last Jedi: Incredible Cross-Sections
- Star Wars Encyclopedia of Starfighters and Other Vehicles
- Star Wars: Geektionary: The Galaxy from A - Z
- Ultimate Star Wars, New Edition | <urn:uuid:fc445691-0fa7-40da-9264-8ea175af879c> | {
"date": "2020-01-22T06:09:16",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9483975768089294,
"score": 2.59375,
"token_count": 770,
"url": "https://starwars.fandom.com/wiki/Single_Trooper_Aerial_Platform"
} |
According to the special days listing for April, there are several important people’s birthdays. https://stilllearningsomethingnew.com/2014/03/28/special-days-in-april-2/
Last year we learned everything “middle ages”. And, April was a fantastic birthday month for our studies. Here’s the links to the April birthday biographies we worked on complete with resource links.
Raphael April 6, 1483 https://stilllearningsomethingnew.com/2013/04/06/raphael/
Leonardo da Vinci April 15, 1452 https://stilllearningsomethingnew.com/2013/04/12/leonardo-da-vinci/
William Shakespeare April 23, 1564 https://stilllearningsomethingnew.com/2013/04/23/happy-birthday-william-shakespeare/
Our study focus this year is American History. This means that our birthday studies center around presidents. These are the presidents that we’ll be learning about during April.
Thomas Jefferson April 13, 1743
James Buchanan April 23, 1791
Ulysses S. Grant April 27, 1822
James Monroe April 28, 1758
This link takes you to the resource list of what we use as we learn about a president. https://stilllearningsomethingnew.com/2014/02/14/resources-for-presidents-day/
And cake! Baking and decorating cakes or cupcakes to eat while studying makes lessons delicious. We like this site for online baking classes. The 3 cake decorating PDFs are wonderful guides. http://www.kingarthurflour.com/baking/online-baking-classes.html
Of course everyday is historic and special somehow but we think birthdays are the best! | <urn:uuid:ca90c798-1406-4483-9be1-4330bd127af3> | {
"date": "2020-01-22T04:26:59",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9005008339881897,
"score": 2.921875,
"token_count": 386,
"url": "https://stilllearningsomethingnew.com/tag/middle-ages/"
} |
Halliday & Matthiessen (2014: 398-9):
The logical structure of the verbal group realises the system of tense. … Thus tense in English is a recursive system. The primary tense is that functioning as Head, shown as α. This is the Deictic tense: past, present or future relative to the speech event. The modifying elements, at β and beyond, are secondary tenses; they express past, present or future relative to the time selected in the previous tense. … In naming the tenses, it is best to work backwards, beginning with the deepest and using the preposition in to express the serial modification. | <urn:uuid:b4194639-b5be-443d-aa95-87cf0428c1a2> | {
"date": "2020-01-22T05:49:19",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9060549139976501,
"score": 3.4375,
"token_count": 131,
"url": "https://systemictheory.blogspot.com/2019_04_15_archive.html"
} |
This course is specifically intended for teachers and students of AP United States History. The course includes over 240 pages of teacher class notes broken down into 52 lessons which are designed for use in classes that run from 45 to 90 minutes in length. The Student Workbook includes the essential elements of each lesson in an outline format. Each Unit has homework and classroom assignments that are specifically designed to teach students the skills and knowledge necessary to score well on the advanced exam. Unit exams (multiple choice, essay, and DBQ) and the grading rubric are based on the latest advanced model. The visuals that are included with the curriculum are integrated into the teacher lesson plans. The author taught Advanced US History for 25 years, conducted advanced teacher workshops, and has been an Advanced US History exam grader. | <urn:uuid:71625c53-f96a-4e78-8957-e2983e4f69e8> | {
"date": "2020-01-22T06:20:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9629229307174683,
"score": 3.671875,
"token_count": 157,
"url": "https://teaching-point.net/product/ap-us-history/?add_to_wishlist=2940"
} |