ESG considerations are incorporated into our investment decision-making process at the stock selection stage as a mandatory part of writing an Investment Case and, where possible, assessed for their potential financial impact on the investment. Typically, ESG analysis will draw information from a variety of sources, including (but not limited to) the company itself, specialist research providers, brokers and academics. We will utilise internationally recognised benchmarks, codes and standards as guidelines for corporate best practice within our ESG company analysis, but we are pragmatic in our recognition that no “one” model of ESG management can apply to every company, and that each company has to be considered in respect of the industry and markets in which it operates. Our ESG-integrated approach is relevant across all the asset classes, sectors and markets in which we invest. At TT, we believe that high standards of corporate responsibility will generally make good business sense and have the potential to protect and enhance long-term investment returns. Consequently, our investment process integrates ESG considerations from the inception of each new stock idea. The investment philosophy targets companies that have persistently demonstrated high-quality and repeatable returns, which also provides a natural bias towards those with better standards of ESG. Although ESG considerations have always been integrated within the process, the firm and team continually consider improvements to the ESG process. In late 2016, for example, a discussion of ESG issues became mandatory in each Investment Case written on a stock, and an ESG ‘scorecard’ was developed to quantitatively score each holding according to 14 metrics.
Although our approach to ESG is not driven by quantitative analysis, the scorecard does help to highlight where a company may fall short (or may simply not publish data) on a particular issue, and therefore which topics require further fundamental analysis.

TT is required to identify conflicts of interest that might arise between TT, the funds it manages and its clients, and between one client and another, and to manage these conflicts fairly in accordance with FCA Principle 8. TT’s Conflicts of Interest Policy is appropriate for its size and organisation and the nature, scale and complexity of its business. Compliance identifies, maintains and regularly reviews a record of the types of activities undertaken by or on behalf of TT in which a conflict of interest arises, to assess whether the controls effectively meet regulatory requirements and expectations. A written report is prepared on a quarterly basis for the Partners Operating Group (“POG”) on activities which have given or will give rise to a conflict of interest entailing a material risk of damage to the interests of one or more of the funds or their investors. If conflict management procedures are not sufficient to ensure, with reasonable confidence, that risks of damage to client interests will be prevented, TT must clearly disclose the general nature and/or sources of the conflict to the client before undertaking any business for the client, or else refrain from the activity entirely.
https://reporting.unpri.org/surveys/PRI-reporting-framework-2018/136E16A3-CFE5-43A2-85DF-F834D17186DF/bf735de92be04caa8c32fcbc25cbdd2c/html/2/?lang=en&a=1
Description and analysis of the flow of goods in, through, to and from Skåne, with base year 2003 and comparative year 2013. General trends, the internationalisation of trade and transport, freight transport, national development, transport structure, the border region, the logistics and port region, as well as transport flows and key figures, were analysed.

Market research for Ystad Port | 2015 | Ystad Port
Market analysis for Ystad Port concerning its future development, including a potential relocation of the port. The study researched potential ways of strengthening the port based on the future scenarios drafted in other projects in the Baltic Sea region. Specific actions and tools to shape strategies for the port’s future development were also suggested.

Demand in Danish ports up to 2030 | 2011 | Danske Havne
Demand for maritime transport via Danish ports, with forecasts for 2020 and 2030 under different scenarios. The scenarios are described separately for different maritime sectors and for different types of goods, together with the demand that can be expected in the future.

Area analysis of Danish ports | 2009 | Ministry of Transport, Denmark
Tetraplan conducted a spatial analysis of 12 Danish ports for the Transport Ministry. The focus of the investigation was on commercial land use, including the conflicts that arise when a number of stakeholders are interested in the same land areas. Through focus group interviews with port authorities and port companies, land requirements and land use in the ports were identified, along with the conflicts and their themes. The analysis concluded with a number of recommendations, not least to the public authorities, aimed at ensuring the best possible future development of the ports’ business areas and reducing potential conflicts.
https://newthinking.nu/index.php/sea-freight/
Torkamani A, Muse ED, Spencer EG, et al. Molecular Autopsy for Sudden Unexpected Death. JAMA. 2016;316(14):1492–1494. doi:10.1001/jama.2016.11445 Approximately 11 000 individuals younger than 45 years in the United States die suddenly and unexpectedly each year from conditions including sudden infant death, pulmonary embolism, ruptured aortic aneurysm, and sudden cardiac death (SCD). Sometimes the cause of death is not determined, even after a clinical autopsy, leaving living relatives with an inaccurate or ambiguous family health history. Moreover, the rate of clinical autopsy has declined from approximately 50% fifty years ago to less than 10% in 2008, contributing further to uncertain family health histories.1 This uncertainty may be partially resolved with postmortem genetic testing (“molecular autopsy”).2 Initial studies, limited to cardiac channelopathy and epilepsy genes, have yielded molecular diagnoses in approximately 25% of cases.3,4 A more comprehensive molecular autopsy program, expanded beyond SCD, has the potential to provide more accurate family health information to a wider spectrum of afflicted families. Here we report preliminary results from a systematic, prospective, family-based, molecular autopsy study. Exome sequencing was performed on blood or tissue samples collected from deceased persons aged 45 years or younger, with sudden unexpected death, sequentially referred to Scripps Translational Science Institute by the medical examiner between October 2014 and November 2015. Deaths from an external cause or in persons with known comorbid conditions were excluded. Exome sequencing of saliva samples from parents, when available, was also performed.
Full details of the genome sequencing, analysis, and interpretation methodology have been previously described.5 Mutations were categorized as likely cause of death (mutation previously reported or expected in an SCD-related gene); plausible cause of death (mutation of unknown significance in an SCD gene); or speculative cause of death (mutation previously reported in other disorders). Reported allele frequencies are the highest population-specific frequencies observed in the Exome Aggregation Consortium Browser. The study was approved by the Scripps institutional review board. Written informed consent was obtained from all participants and from next of kin for deceased individuals. Twenty-five cases (80% male) were sequenced, with 9 including both parents of the deceased. Clinical autopsies discovered the likely cause of death in 5 cases. A likely cause of death was identified by molecular autopsy in 4 cases (16%), a plausible cause in 6 (24%), and a speculative cause in 7 (28%); no mutations were identified in 8 (32%) (Table). The likely genetic cause of death was corroborated with clinical autopsy findings in 2 of 5 cases. All other clinical autopsy findings (3 cases) could be linked to a plausible or speculative genetic cause. Seventy percent (7/10 cases) of likely and plausible pathogenic mutations were inherited from relatives who did not die suddenly, as determined either by direct observation of the variant in a family member4 or inference based on previous observation in reference populations.3 Molecular autopsy was able to uncover a likely or plausible cause of death in 40% of cases (10/25). Many of the findings were variants of unknown significance inherited from relatives not affected by sudden death and present at population frequencies incompatible with full disease penetrance. 
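The category counts and percentages reported above can be cross-checked with a few lines of arithmetic. This is only a sanity check: the counts are taken from the text, and the variable names are purely illustrative.

```python
# Molecular autopsy outcomes among the 25 sequenced cases, as reported above.
counts = {"likely": 4, "plausible": 6, "speculative": 7, "no mutation": 8}

total = sum(counts.values())  # 25 cases in all
pct = {k: round(100 * v / total) for k, v in counts.items()}
# pct -> {"likely": 16, "plausible": 24, "speculative": 28, "no mutation": 32}

# "Likely or plausible cause of death" corresponds to the study's 40% yield.
diagnostic_yield = 100 * (counts["likely"] + counts["plausible"]) / total  # 40.0
```

The 70% inheritance figure (7 of the 10 likely/plausible mutations) is reported directly in the text and is not derivable from these category counts alone.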
Although this study is limited by its small sample size and potentially by selection bias, the observation of likely and plausible pathogenic variants in unaffected relatives is consistent with recent large-scale studies that identified clinically relevant variants in living relatives of SCD cases.4,6 Our study suggests similar findings may be observed in non-SCD sudden death. It should be noted that these speculative and plausible findings cannot definitively be linked to sudden death. The ambiguity associated with some of these genetic findings should be balanced against the potential for clinical follow-up, active surveillance, or preventive interventions in living relatives. Although molecular autopsies may help identify genetic causes of sudden unexpected death, a comprehensive and systematic effort to collect and share genetic and phenotypic data is needed to more precisely define pathogenic variants and provide quantifiable risks to living relatives. Corresponding Author: Ali Torkamani, PhD, 3344 N Torrey Pines Court, Ste 300, La Jolla, CA 92037 ([email protected]). Author Contributions: Dr Torkamani had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: Torkamani, Muse, Spencer, Wagner, Topol. Acquisition, analysis, or interpretation of data: All Authors. Drafting of the manuscript: Torkamani, Muse, Wagner. Critical revision of the manuscript for important intellectual content: All Authors. Statistical analysis: Torkamani, Rueda. Obtaining funding: Torkamani, Topol. Administrative, technical, or material support: Torkamani, Spencer, Rueda, Wagner, Lucas. Study supervision: Torkamani, Muse, Topol. Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported. 
Dr Torkamani reports grants from the National Institutes of Health, National Center for Advancing Translational Sciences, National Human Genome Research Institute, and other funding from Cypher Genomics. Dr Topol reports personal fees from Illumina and nonfinancial support from Genapys and Edico Genome. No other disclosures were reported. Funding/Support: This work is supported by a National Institutes of Health and National Center for Advancing Translational Sciences clinical and translational science award (5-UL1-RR025774) and grants U01HG006476 and U54GM114833 from Scripps Genomic Medicine. Role of the Funders/Sponsors: The funders/sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Additional Contributions: We thank the deceased patients’ families for their cooperation. We also thank Galina Erikson, BS (Scripps Genomic Medicine), for her work on the pilot program and Dov Fox, JD (San Diego State University), for his input. These persons received no compensation for their contributions.
https://jamanetwork.com/journals/jama/article-abstract/2565740
The future is copper
A theatre and research collaboration exploring the uses, abuses and future of copper mining in a globally changing environment.

The challenge
Copper is an invaluable natural resource due to its remarkable physical and chemical properties. As we transition to a sustainable world, copper demand will grow. In fact, projected demand in a sustainable world powered by renewable energy is 4-20 times current demand, easily exhausting known copper reserves by the end of the century. This raises a number of urgent issues surrounding copper exploration and extraction:
- Environmental damage caused by mining, and possible cleaner/more efficient means of extraction.
- Potential conflicts between global mining corporations and local communities.
- Global civilization’s dependency upon copper versus increasing opposition to extraction (‘extractivism’) and resource-based economics.
Copper provides an outstanding example of the complex issues surrounding resource extraction in a changing climate. Yet these issues are largely absent from public discourse.

What we're doing
We aim to address this problem by creating an interdisciplinary artwork that engages the public with these issues through a newly devised physical theatre performance, created by the award-winning company Mechanimal. Despite its pivotal role in our understanding of both resources and climate change, earth science is a topic rarely explored in theatre – we are excited to see how this project can provoke new artistic, research and public dialogues around the critical role of resources at a time of environmental crisis.

How it helps
Natural resources benefit humanity, but the damage to the environment associated with their extraction can be severe. The present climate emergency could quickly become a resource emergency if we fail to locate (or substitute) resources, such as copper, needed to drive sustainable economies.
This project combines disciplines in order to raise awareness of a growing environmental problem: how, where and why we extract copper. It is important that we find cleaner and more efficient ways of extracting copper in the short term, and, in the long term, review humanity’s relationship with resource extraction within the framework of Earth’s limited natural resources. A sustainable world requires societal change and greater public awareness of the key issues. Theatre is an art form with the ability to engage the wider public. Creating art that takes research and expertise beyond the confines of the university makes this project a unique opportunity.
http://bristol.ac.uk/cabot/what-we-do/copper-future/
The focus of promotion review is on professional growth and achievement. The award of promotion in rank represents a prediction that the individual will continue to make substantive contributions to the University and the profession. The University of Kansas, along with most other research universities in the United States, evaluates museum staff primarily on the quality of their performance as curators or specialists in their professional assignments. In addition to the traditional role of an art museum (to acquire, conserve, exhibit, research, and educate about works of art), a university art museum has the added expectation of successfully integrating the museum’s collection into curricula across many disciplines, of playing a leading role in university cultural life, and of providing a forum for the exploration of scholarly questions of the day. Members of the curatorial and specialist staff must maintain the standards by which the museum achieves these purposes. Museum professionalism, scholarship and research are central to all the functions by which the museum’s purpose is defined, and are thus the basis upon which the evaluation of the academic staff’s annual efforts is made. A combination of six different categories may constitute the professional, scholarship and research criteria: acquisitions, conservation, exhibitions, scholarly publications/creative works, education/teaching, and interdisciplinary programming/audience development. Unclassified academic staff are not required to allocate effort across all three primary faculty responsibilities (professional performance, research, and service); thus, professional performance will be defined contingent on the individual assignment. The SMA bases its evaluation of an individual's professional performance on the annual reports submitted by that individual detailing the person's activities from year to year, the promotion packet submitted by the candidate, and extramural evaluations provided by peers.
Curators and Specialists hold 12‐month appointments. It is expected that Curators/Specialists will fulfill their professional duties at a high level of effectiveness resulting in significant accomplishments. Curators/Specialists’ duties can be varied, and can change frequently, due to the changing nature of the environment in which they work. While no single definition or standard of excellence can adequately address all aspects of curatorial or specialist professionalism, effective Curators/Specialists will need to demonstrate competence, currency in one’s area, creativity, and initiative. The quality of professional performance and competence in carrying out one’s assigned responsibilities, coupled with the candidate’s strengths, are the focal points for evaluation of professional performance. A key component of the SMA mission is fostering and supporting relationships with contemporary artists. As such, Curators/Specialists may conceptualize, organize, and host local, regional, national, and/or international artists to commission a new work of art with or without a formal on-site artist residency. Teaching is a central element in every activity at the museum and the University. Although not formally a part of the Curator/Specialist’s museum duties, opportunities do arise for Curators/Specialists to teach courses and give guest lectures in other KU departments and units. It is expected that Curators/Specialists give gallery talks, class-led discussions, and study center presentations for students, faculty and the public. This type of teaching can be a significant portion of a Curator/Specialist’s workload during the semester. Acquisitions, both purchases and gifts, form a significant part of curatorial duties. The museum exists to activate, make relevant and protect its permanent collection, and the development of the collection is important to the museum’s continued vitality.
Curators/Specialists are expected to be cognizant of the market in their areas of responsibility and to propose acquisitions when they are relevant to the mission of the museum. The recommendation of an acquisition at the Spencer Museum of Art can represent a significant responsibility. Acquisition recommendations are made through a combination of connoisseurship, scholarship, and educational possibility; no one of these is adequate in itself, and together they represent some of the most sophisticated research techniques in the world of art history. Curators/Specialists must be able to locate works of art appropriate for acquisition, determine their authenticity and condition, and locate them within a cultural context as well as the context of existing museum collections. Gifts are also important to the museum’s continued vitality; Curators/Specialists are expected to participate in solicitations of gifts with the Director, when requested. Objects curators are responsible for maintaining the collection in good condition. They must identify condition and environmental problems and take measures to correct them, and they must do whatever they can to provide an environment in which the objects are secure. They must know how to handle, store and display objects properly and must be sure their colleagues do the same. The candidate’s record shall demonstrate effective museum practice as reflected in such factors as command of responsibilities, the ability to communicate effectively, and a demonstrated commitment to the mission of the Spencer Museum of Art, the University, and the profession or area of specialization in a related field. The candidate’s record shall demonstrate achievement in and document evidence of distinguished professional performance in a successfully developing career, with evidence of sustained productivity. The candidate shall have demonstrated continued effectiveness and growth as a professional.
Such effectiveness and growth will be reflected in such factors as mastery of museum practice and professional and technological skills, a demonstrated and ongoing commitment to the mission of the SMA and the University, and a measure of national recognition. Art museums are expected to engage in research and scholarly activity. The wide range of Curators/Specialists’ assignments at KU and the demands of 12‐month appointments lead to variation in the type and amount of scholarly or creative activities in which they engage. It is expected that the outcomes of these activities will be disseminated and subject to critical peer evaluation. While productivity is expected, quantity per se is not a singular measure. In this way, museum academic staff will contribute to enhancing the profession of museums in society or a related area, or a specialized subject area, in which they conduct research. The concept of “scholarship” encompasses not only traditional academic research and publication, but also the creation of artistic works or performances and any other products or activities accepted by the academic discipline as reflecting scholarly effort, artistic rigor, and achievement for purposes of promotion. Curators/Specialists are encouraged to develop a research program sustained and strengthened over time; however, the rapidly changing nature of museums may open up new areas of research that may be reflected in the Curator/Specialist’s research program. The research program should contribute, at least initially, to the field of museums or a related area, and be consistent with the mission of the Spencer Museum of Art. However, because curatorial/specialist work does not exist in isolation from the community it serves, but rather co‐exists with and contributes to all disciplines, the scholarly endeavors of Curators/Specialists may reflect this symbiosis and cross disciplinary boundaries.
Curators/Specialists should begin their research program early and establish a sustained program of scholarly activity. Documented activities should demonstrate that the candidate’s experience has led to a broad understanding of the field, that the candidate has mastered a part of it, and that there has been intellectual development and contributions beyond those called forth by routine daily assignments. Exhibitions are the primary means within the museum context by which academic staff carry out and publish original research. Besides significant scholarly ability, exhibitions require substantial organizational skills and training and experience in the handling of works of art with significant aesthetic, cultural and historical value. Decisions involving the choice of objects appropriate to an exhibition require scholarship that is not necessarily evident in publications, since research that results in eliminating works from an exhibition due to inferior quality, inappropriate subject matter, or questionable authenticity or provenance may not be published. The same is true of research aimed at locating unknown or unpublished works. Major scholarly exhibitions take three to five years to organize, contain significant numbers of works of art, and are accompanied by scholarly catalogues or the equivalent. Exhibition projects and catalogues are reviewed not only by a critical internal curatorial committee, but also by the agencies responsible for supporting them through grants. Thus, exhibitions and their related publications are considered at least the equivalent of scholarly books. Curators are expected to organize at least one significant exhibition during the first five years of employment. Criteria for evaluation will include originality and creativity, breadth of dissemination, and impact on scholarship and/or practice in the candidate’s field. One type of work of merit is the peer-reviewed publication.
It should be emphasized that quality of scholarship is not measured in numbers; however, quantity of contributions is useful in demonstrating the candidate's growth as a scholar, ongoing dissemination of research findings, and continuing commitment to scholarship. During the promotion process, Curators/Specialists submit research for consideration as major or minor works based on a variety of factors including, but not limited to, the reputation of the venue of publication/presentation, the depth and rigor of the research, and the impact of the research on the discipline and society. Determination of which category to use is made by the individual Curator/Specialist. New scholarship in books, articles, exhibition catalogues, or electronic media, all of which are subject to outside peer review. This category includes exhibitions and their catalogues at the Spencer that are supported by grants from such federal entities as the National Endowment for the Arts and National Endowment for the Humanities, as well as scholarly articles and other publications. These projects are evaluated by committees of the curator’s peers with rigor comparable to that of reviewers of a journal or university press. Publications not supported by government grants and articles for Spencer Museum publications, such as the Register, the SMA’s journal. These are subjected to internal review by the museum staff and relevant faculty, as well as occasional outside review. They may present new research on objects in the museum’s collections or new ways of understanding groups of works in the collections, and are final, not preliminary, publications. Edited volumes of articles by colleagues or other peers. These can be produced either within the Spencer Museum or outside of it. Essays and articles for various museum publications, such as newsletters, publicity, and gallery guides. While these also often present original material, they are subject to limited internal review and the publications are ephemeral.
Scholarly presentations. This includes sharing active research at professional conferences, colloquia, and other academic settings through formal presentations. Scholarly digital projects. This category includes digital scholarship that disseminates and/or promotes new research, including online exhibitions, publications and academic resources. The evaluation of creative and scholarly research at the SMA requires the broad judgment of professionals and peers, which reflects the diversity of our practices. Under the University standards for the award of promotion to the rank of Associate Curator/Specialist, the record must demonstrate a successfully developing scholarly career, as reflected in such factors as the quality and quantity of scholarly/creative research activities and publications, external reviews of the candidate’s work by respected scholars or practitioners in the field, the candidate’s regional, national, or international reputation, and other evidence of an active, focused, developing and productive scholarly agenda. Under the University standards for promotion to the rank of Full Curator/Specialist, the record must demonstrate an established scholarly career, as reflected in such factors as a substantial and ongoing pattern of scholarly/creative research activity and publications, external reviews of the candidate’s work by eminent scholars or practitioners in the field, the candidate’s national or international reputation, and other evidence of a substantial, ongoing, active and productive scholarly career. Service expectations for Unclassified Academic Staff are contingent on the individual assignment and generally should be consistent with the equivalent expectations listed below. A strong service profile is highly valued by the museum.
Curators/Specialists are expected to demonstrate a consistent record of service beyond their assigned museum responsibilities, with contributions to the Spencer, the University, and to professional organizations. Under the University standards for the award of promotion to Associate Curator/Specialist, the record must demonstrate a pattern of service to the University at one or more levels, to the discipline or profession, and/or to the local, state, national, or international communities. Under the University standards for promotion to the rank of Full Curator/Specialist, the record must demonstrate an ongoing pattern of service reflecting substantial contributions to the University at one or more levels, to the discipline or profession, and/or to the local, state, national, or international communities. “Excellent” means that the candidate substantially exceeds expectations for promotion to this rank. “Very Good” means the candidate exceeds expectations for promotion to this rank. “Good” means the candidate meets expectations for promotion to this rank. “Marginal” means the candidate falls below expectations for promotion to this rank. “Poor” means the candidate falls significantly below expectations for promotion to this rank.
http://policy.ku.edu/spencer/discipline-expectations
This handbook has a special focus on how to design engagement processes – for example, which method to use in different types of situation and how to keep track of stakeholder participation. Dynamics of Rural Innovation - a primer for emerging professionals Authors: Pyburn, R. & Woodhill, J. (eds.) Publication date: 2014 Dynamics of Rural Innovation – a primer for emerging professionals is a co-publication of KIT and Wageningen University’s Centre for Development Innovation (CDI) that brings together the experiences of over 40 conceptual thinkers and development practitioners to articulate lessons on agricultural innovation processes and social learning. Aid and the Islamic State Author: Svoboda, E. & Redvers, L. Publication date: 2014 The IRIN/HPG Crisis Brief is a new product designed for aid workers, policy makers and donors to address a gap in current analysis of humanitarian research and action. This pilot examines the flows of international aid into parts of Iraq controlled by militants from the so-called Islamic State (IS). A Peacebuilding Tool for a Conflict-Sensitive Approach to Development: A Pilot Initiative in Nepal Author: Asian Development Bank (ADB) Publication date: 2014 The peacebuilding tool is an analytical tool for assisting project team leaders and social experts in understanding the local context, and in identifying potential risks to implementation of development projects that are linked to social conflicts, as well as in formulating mitigation measures for addressing these risks. Inclusion, Resilience, Change: ADB’s Strategy 2020 at Mid-Term Author: Asian Development Bank Publication date: 2014 The midterm review of Strategy 2020 provides the Asian Development Bank (ADB) a precious opportunity to draw on its vast experience over the first 5 years of the strategy's implementation. From best practice to best fit: understanding and navigating wicked problems in international development Authors: Ramalingam, B., Laric, M. & Primrose, J.
Publication date: 2014 This Working Paper summarises the findings of a series of small-scale pilots of selected complex systems methods in DFID’s wealth creation work. The pilots contributed to improved analysis and understanding of a range of wicked problems, and generated tangible findings that were directly utilised in corporate and programmatic decisions. Women’s participation in green growth – a potential fully realised? Author: von Hagen, M. & Willems, J. Publication date: 2012 This study analyses opportunities and challenges for women’s participation in green growth in developing countries. The purpose of the study is threefold: - to shed more light on the gender dimension of green growth, especially in the context of private sector development and thus fill a knowledge gap in the green growth discourse - to validate women’s contributions to green growth and sustainable private sector development - to promote women’s empowerment and gender equality Civicus Civil Society Index - rapid assessment - West Africa regional report Author: CIVICUS: World Alliance for Citizen Participation Publication date: 2014 In 2013, civil society in six West African countries - Benin, Ghana, Liberia, Nigeria, Senegal and Sierra Leone - undertook an assessment of the health and conditions of civil society. They applied the CIVICUS: World Alliance for Citizen Participation Civil Society Index-Rapid Assessment (CSI-RA) tool. This summary report draws from the six country reports to present key common findings, particularly as they relate to the health of and conditions for civil society organisations (CSOs). Facilitating Multi-stakeholder Partnerships: Lessons from PROLINNOVA Authors: Critchley, W., Verburg, M. & Veldhuizen, L.v. (for PROLINNOVA) Publication date: 2006 This booklet gives you practical lessons and insights into partnership building gained within the PROLINNOVA network over the last four years.
The booklet is written for anyone who is involved in trying to bring stakeholders together into effective partnerships, and is looking for practical ideas and lessons on what works and what does not. Two case studies are included. FAMILY-CENTERED, CULTURALLY COMPETENT PARTNERSHIPS in Demonstration Projects for Children, Youth, and Families Author: Institute for Educational Leadership This toolkit is designed to provide ideas and linkages to other resources that will increase the capacity of US based demonstration projects engaged in systemic reform efforts to partner with communities and families in the development of family-centered, culturally competent approaches. It offers case study examples and a variety of tools communities may want to use as they consider plans for implementing, monitoring and institutionalizing family partnership and culturally competent policies and practices.
http://www.mspguide.org/resources?filter=290%2C285
“Arbeit macht kapital”: so reads the neon work by Claire Fontaine at S.a.L.E.-Docks in Venice, a space for the promotion of contemporary art and cultural production. It opened last October and is self-managed by a group of students and militants with a background in community centers. Arbeit Macht Kapital (“work makes capital”) is Claire Fontaine’s détournement of the slogan placed at the entrances to a number of Nazi concentration camps. It is a détournement that invites us to reflect on the capital valorisation processes of cultural production and creativity in the Post-Fordist era, that is, a production model in which languages, affections, relationships and, last but not least, creativity are used to generate exploitation, uncertainty and new kinds of alienation. Claire Fontaine’s work therefore opens a new perspective on the (very serious) problem of political impasse in the field of contemporary art production, and it also introduces the theme of the capital valorisation of cultural production and, above all, of the subversive power that cognitive work in the metropolis can represent. These political questions prompted “Multiversity, or the Art of Subversion”, the event that took place at S.a.L.E.-Docks between May 16th and May 18th. The event can be seen as an important stage of a project that is not merely an exhibition project; on the contrary, it has a militant nature in Venice, where the field of cultural production is becoming more and more important. The perspective, however, is not localist; it underlines the fact that every kind of knowledge, including the artistic one, must be used, in situations of struggle, to take a stand (as Michel Foucault said) and to be tested every day in the real world.
Thus, Multiversity aimed at bringing the discussion about art and cultural production into a militant context, through the meeting of twenty speakers and a comparison among different kinds of movement research and institutional points of view. The event was organized into three seminar sessions: art and activism, art and the market, art and multitude. Among the speakers were Antonio Negri, Brian Holmes, Giovanna Zapperi, Osfa, Hans Ulrich Obrist, Matteo Pasquinelli, Gigi Roggero and Anna Daneri. But I would like to focus on two speakers and their seminars, which underline the importance of a theme that is crucial to those interested in activism in the so-called “culture factory”. The first is Maurizio Lazzarato, a Paris-based Italian philosopher who introduced an interesting vision of art practices as a governmental tool. Going back to Félix Guattari, Lazzarato underlined the coexistence of two dimensions in the artistic context: the molar and the molecular. The first represents functions (work, artist, public), tools (festivals, museums, biennials) and assessment criteria. The second represents what Guattari defined as the freedom, diversity, heterogeneity and subjectivity of every art practice. But there is a problem: there is no dialectical relation between the two dimensions, and the molecular is constantly absorbed into the molar, which makes it impossible to subvert art as an institution. In art, the molar and the molecular exist at the same time, without cancelling each other out, and we can find the same process in the valorisation of cultural production, where the diversity of subjectivity production is absorbed and subsumed into the culture industry, which has become a means of promoting intellectual tourism and luxury business. Judith Revel, a sociologist at the Centre Foucault, started from where Lazzarato stopped.
While Lazzarato concentrated on analysing the relation between the capitalist apparatus and art production, Judith Revel underlined how art can confront that apparatus with subjectivity. Revel claimed that contemporary capitalism equates production with creation, that is, it valorises relationships, affections, languages and cooperation; and this means that capital depends on what cannot be dominated (even if it can be governed): the production of subjectivity, experimentation, invention, the human capacity to create ontology, to produce new forms of existence that can defeat and subvert power. This subversive feature, according to Revel, derives from Michel Foucault’s concept of power: power is always exercised over free movements (otherwise we should speak of domination). Capital and, in this specific case, the cultural industry and art as an institution act as parasites of these free movements, feeding on them and valorising them; but at the same time these elements of freedom represent an enormous power in conflict with capital, which depends on them and always comes after them, able to reproduce something but never to create it. Creation, then, is the prerogative of men and women, and it is denied to capital. Art and politics share the potential for being fields of experimentation, of (potentially subversive) invention of new spaces in the urban context, of new kinds of subjectivity and of metropolitan communities. Revel’s speech ended with a warning (one that S.a.L.E. in Venice has already understood) about the importance of investigating the social composition of the metropolitan Post-Fordist working class. According to Revel, this investigation is not just a post-workerist desire; it is an essential part of a politics that wants to generate real conflicts rather than self-referential knowledge.
S.a.L.E.-Docks hopes that the Multiversity debate will soon find its way into publications and, above all, into the reasons behind large and small uncontrollable conflicts.
http://digicult.it/hacktivism/militant-art-cultural-subversion/
I recently gave presentations at the University of Wisconsin Milwaukee for GIS Day, and took the opportunity, as most geographers would, to get out onto the landscape. I walked on the Lake Michigan pier at Manitowoc, enjoying a stroll in the brisk wind to and from the lighthouse there, recording my track on my smartphone in an application called Runkeeper. When my track had finished and been mapped, it appeared as though I had been walking on the water! Map of my walk from Runkeeper.com. Photograph of my destination: the lighthouse at the North Pier, Manitowoc, Wisconsin. According to my map, I walked on water. Funny, but I don’t recall even getting wet! It all comes down to paying close attention to your data and knowing its sources. Showing these images provides a teachable moment in a larger discussion on the importance of scale and resolution in any project involving maps or GIS. In my case, even when I zoomed in to a larger scale, the pier did not appear on the Runkeeper application’s base map. It does, however, appear in the base map in ArcGIS Online. In the book that Jill Clark and I wrote, The GIS Guide to Public Domain Data, we discuss how scale and resolution can be conceptualized and put into practice in both the raster and vector worlds. We cite examples where neglecting these important concepts has led not only to bad decisions, but has cost people their property and even their lives. Today, while GIS tools allow us to instantly zoom to a large scale, the data being examined might have been collected at a much smaller scale. Much caution therefore needs to be used when making decisions at an analysis scale larger than the collection scale. What example have you used in class that well illustrates the importance of scale and resolution? I recently created a map in ArcGIS Online and a series of videos that show the location of what may be the biggest city that never was: Cairo, Illinois.
During the mid-1800s, many believed that this city, founded at the confluence of the Ohio and Mississippi Rivers, gateways to settlement of the central and western United States, could someday surpass Philadelphia or even New York City. I created the map for several reasons. First, like many of you, I am fascinated by maps. Mapping is a natural way to tell a story, and Cairo has a very interesting story to tell. For several geographic reasons, Cairo not only failed to live up to these expectations, but has been declining by 10% to 20% per decade for the past 70 years (2010 population: 2,831). While Cairo has a good situation on the point of land divided by the rivers, the site is flood-prone. In addition, the rise of St Louis upstream on the Mississippi River posed challenges for Cairo. In fact, socioeconomically, Cairo remains one of the poorest communities in the region, which you can investigate for yourself by pulling up the “USA Demographics for Schools” layer in ArcGIS Online and investigating median income and median home value. It nevertheless has a fascinating and unique character steeped in history and geography. The second reason I created the map was because ArcGIS Online allows for the easy integration of multimedia elements to tell a story. In my case, I created the map only after having the opportunity to visit Cairo this year en route to Murray State University, taking videos and photographs to be sure, but also getting a “sense of place” for Cairo. During my visit, my discovery of a tiny community just north of Cairo dubbing itself “Future City” seemed to fit perfectly with the above themes. At the river confluence, a weathered monument in the shape of Lewis and Clark’s boat the Merrimack, standing in a rather forlorn state park, seemed to reinforce the fact that this was the Biggest City That Never Was. The photographs and videos I took there were easily integrated into my ArcGIS Online map.
What important places on the landscape have you visited or read about, and how might you create stories about them using ArcGIS Online? I have created a data set containing electoral history for the past 56 years in ArcGIS Online, so you and your students can interact with it, teach with it, and explore patterns. To accompany the data set, I wrote a lesson entitled "Which states went for which candidate?", available in the ArcLessons library. What is the difference between the popular vote and the electoral vote? What influences voting patterns at present, and what influenced the patterns in the past? Why do electoral votes sometimes exhibit a regional or national pattern and sometimes exhibit no pattern? After examining the maps dating back to 1956, which election years would you say were the closest in terms of the electoral vote, and which were the most one-sided? Which states voted consistently Republican, or Democratic, in the past? When have third-party candidates been a factor? When did a candidate lose his “home state”? Which states change back and forth in terms of political party over time, and do these correspond to what are referred to as “swing states”? How does population distribution influence the electoral vote and where candidates spend their time and money? These questions and many more can be effectively analyzed by using the above maps and lesson. ArcGIS Online provides an excellent platform for learning about issues, patterns, and phenomena. Because elections data in the USA are tied to administrative boundaries, elections maps can be easily created. Examining election data in ArcGIS Online allows the data to be effectively and easily used by educators, students, and others, anywhere around the world. Another map and data set containing electoral votes by state for the upcoming election, along with demographic information and much more, was compiled by my colleague Charlie Fitzpatrick, and makes an excellent accompanying data set.
These data sets can be used with an accompanying blog post describing what is there and how to use it. It is my hope that these data sets and lessons will be helpful in teaching and learning in these next few weeks, and beyond.
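The popular-vote versus electoral-vote distinction raised in the questions above can be demonstrated in a few lines of code. The sketch below uses entirely invented state names and vote counts, purely to show how a winner-take-all tally can split the two measures:

```python
# Toy illustration: electoral vs. popular vote (hypothetical numbers).
# Each state awards all of its electoral votes to its popular-vote winner.
states = {
    # state: (electoral_votes, votes_for_A, votes_for_B)
    "State1": (10, 51_000, 49_000),   # A wins narrowly
    "State2": (10, 52_000, 48_000),   # A wins narrowly
    "State3": (8, 20_000, 80_000),    # B wins by a landslide
}

electoral = {"A": 0, "B": 0}
popular = {"A": 0, "B": 0}
for ev, a, b in states.values():
    popular["A"] += a
    popular["B"] += b
    electoral["A" if a > b else "B"] += ev

print(electoral)  # A carries more electoral votes...
print(popular)    # ...while B wins the popular vote
```

With these toy numbers, candidate A wins the electoral count 20 to 8 while losing the popular vote, which is exactly the pattern students can hunt for in the historical maps.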
https://community.esri.com/community/education/blog/2012/11
Rivers, lakes and streams are considered sentinels of environmental change. Deforestation, urbanization, and nutrient runoff are increasingly recognized as drivers of change for freshwaters, yet most research analyzing the impact of these forcings occurs at the watershed scale. While smaller scale studies provide valuable insight into physical processes, few studies describe the vulnerability of inland water quality (WQ) to climate change and anthropogenic activities at larger scales. This type of synthesis knowledge is crucial for informing policy-making, water resources management and conservation, yet is lacking at a national scale. However, advances in cloud-based data analytics have created a new research landscape, making possible the rapid analysis of public datasets to monitor changes in surface waters at large spatial and temporal scales. This project seeks to create a national tool for relating changes in water quality signals to land use and precipitation change, representing a significant step forward for understanding the impact of human activities and climate change on US surface waters. In our data synthesis and visualization system, large datasets will be queried to create simple geovisualizations of historic WQ changes and to establish foundations for distilling broad national patterns related to satellite remote sensing. We hypothesize regions with rapid land use change will also experience shifts in WQ signals.
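As a minimal illustration of how the closing hypothesis could be screened, the sketch below correlates a land-use-change indicator with a water-quality trend across a handful of regions. All numbers are invented purely to show the computation, not results from this project:

```python
# Minimal sketch: correlate land-use change with a water-quality trend
# across regions. All numbers are invented for illustration.
land_use_change = [0.02, 0.15, 0.30, 0.05, 0.25]  # fraction of area converted
wq_shift = [0.1, 0.9, 1.8, 0.2, 1.5]              # e.g. turbidity trend (NTU/yr)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(land_use_change, wq_shift)
print(f"Pearson r = {r:.2f}")  # strongly positive for these toy data
```

A national-scale system would of course replace these toy vectors with queried public datasets and add significance testing, but the core screen is the same correlation.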
https://escience.washington.edu/incubator-17-freshwater/
To Streamline Machine Learning Operations, You Need to Flip Software Development on Its Head

Data scientists are often armed with the necessary tools to explore data and train models, but what can be overlooked is the creation of the right environments and processes needed to streamline the full AI/ML model production process. This article illuminates the challenges data science teams currently face due to this oversight and recommends leveraging best-in-class agile software development concepts to build the right environments and processes to support a streamlined, mature AI/ML capability. In our previous blog post, we described the Data Maturity Curve (Figure 1) in detail and talked about its use in assessing our customers’ capabilities. This approach provides us a baseline for developing the recommended path forward to becoming a more data-driven organization. Based on these assessments across a number of customers in a variety of industries, WWT is observing interesting trends in the machine learning (ML) space. Over the last several years, there has been a dramatic shift in the maturity of ML and its use in large enterprises. A majority of companies have gone from experimentation in small pockets (maturity level 0-1) to having an established team of data scientists/engineers and a platform in place to allow for exploratory analysis and ML model training (maturity level 2-3). Of course, there are still those companies at a “0” that haven’t yet made ML a priority, and there are those unicorns at a “5” with ML at their core, but these companies are in the minority right now. Figure 2 below shows this shift conceptually overlaid on top of WWT’s data maturity curve. As the data maturity of companies increases, the speed at which they get value out of their data should theoretically increase as well.
The process by which a data science team does their ML work will dictate how efficiently they are able to create and deploy models that bring value to the organization. Shown below in Figure 3 are the typical steps in a model building process (this is more specific to supervised learning; other ML model types may have different steps not shown here). Each bar shows the relative amount of time data scientists spend in each area; note that about 50 percent of the time is spent on the upfront discovery and model training process (this will become an important point later in this post). One of the biggest factors holding companies back from leveraging ML to its fullest potential is having the right environmental setup to support a streamlined and holistic process across these five steps. Data scientists with access to the right set of data and tools may be able to do endless data exploration, and they also may have the compute resources necessary to train a variety of models. The end-to-end process, however, is typically manual, and there is limited standardization across the enterprise. Moreover, the ability to test trained models thoroughly and promote them into production is typically handled in an ad-hoc fashion, with limited to no testing performed on the data, the model and its integration into the production environment. How can that be? We have been developing software and putting it into production for many years. There are tried and true methodologies, tools and processes available at our fingertips. Can’t we just take what we have done in the software development space and do the same thing for ML models? Let’s explore this question a bit deeper.
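Before turning to that question, the multi-step model building process referenced above can be sketched as a sequence of explicit stages. The function names and the trivial mean-predictor "model" below are illustrative placeholders, not the article's actual pipeline:

```python
# Skeleton of a supervised-learning model pipeline, one function per
# stage. Stage names and logic are simplified placeholders.
def discover(raw):
    """Explore and clean the data: here, just drop missing records."""
    return [r for r in raw if r is not None]

def engineer_features(rows):
    """Derive candidate predictors from the cleaned data."""
    return [(x, x * x) for x in rows]          # toy derived feature

def train(features, targets):
    """Fit a model; here, a trivial predict-the-mean baseline."""
    mean = sum(targets) / len(targets)
    return lambda _features: mean

def evaluate(model, features, targets):
    """Score the model before any promotion toward production."""
    errs = [abs(model(f) - t) for f, t in zip(features, targets)]
    return sum(errs) / len(errs)               # mean absolute error

raw = [1.0, None, 2.0, 3.0]
rows = discover(raw)
feats = engineer_features(rows)
targets = [2.0, 4.0, 6.0]
model = train(feats, targets)
print("MAE:", evaluate(model, feats, targets))
```

Even at this toy scale, the point of the article holds: each stage hands an artifact to the next, and the discipline lies in where (and in which environment) each hand-off happens.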
The software development process and the promotion through environments

Note: If you are a software developer, or have some software development experience, this section may be pretty obvious to you, so feel free to just skim it over or skip directly to Why developing and productionizing machine learning code is different.

Software development has gone through a dramatic paradigm shift over the years, with the main goals of accelerating time-to-market and allowing for extensibility of applications and services. Software applications traditionally built with a waterfall approach and a monolithic architecture had long development cycles and hard-coded dependencies, making it difficult to get new features and functionality into production. Now, with the ability to build microservices-based applications in an agile fashion, leveraging continuous integration/continuous development (CI/CD) tools, development cycles have decreased significantly and services are abstracted away from each other, allowing for updates/changes to be made without affecting the entire system. Even with all of these dramatic shifts in architecture and methodologies, one aspect has remained unchanged — the promotion pathway of software code through the different environments:

- Development
- Quality assurance (QA)
- Production

Promotion may happen more rapidly and with more iterations, but the overall path has remained unchanged. For software development, these three environments have distinct characteristics. For some organizations, these characteristics may be just rules-of-thumb; for others they may be memorialized as policy. In general, however, the three environments have a few commonalities.

Development environments are “quick and dirty”

As a software developer begins his/her journey building an application, they start in a development environment which allows for exploration and free-flowing ideation.
Typically, development environments are small in size and have just enough horsepower to try different features and functionality. Security measures are light (if any) and backup and recovery is typically non-existent. Because of these characteristics, the developer is typically not permitted to bring in production data. As long as the simulated data has a similar look and feel to what the production data will be, a small amount can be used to test out different features and functionality of the software application. Once unit testing has been performed and the developer feels their code is production-ready, it is promoted to the QA environment for acceptance and functional testing.

QA environments should focus on testing code for production use cases

A software application will encounter a variety of situations in production that the code should be tested for to ensure behavioral reliability and end-user satisfaction. QA engineers perform acceptance and functional testing in a QA environment to ensure that the code itself is behaving as intended and that it performs seamlessly when integrated with the rest of the environment. The tests can be performed in an automated or manual fashion, and the suite of tests performed should be managed and updated as new features and functionality are written. Overall, the QA environment should mimic the production environment, and data should be production data to ensure the code is robust. Once the software application code passes all QA testing, user-acceptance testing and user experience testing, it gets promoted to production.

Production environments lock down the code and are ready for users to interact

Production is where all of the hard work gets put into action. Production code should be secure, and the environment should be sized to handle production data and workloads.
Overall, a production environment will be reliable, fast, secure and flexible in order to deliver the intended user experience and have the ability to grow and scale as needed. Once a software application is in production, its code is versioned and any changes or updates must go through all of the previous stages mentioned above (we realize we are simplifying what is typically a very complex version control branching strategy, but this section is meant merely to set up the discussions below).

Why developing and productionizing machine learning code is different

“Machine learning systems differ from traditional software-based systems in that the behavior of ML systems is not specified directly in code but is learned from data” (Breck, Cai, Nielsen, Salib, & Sculley, 2017). This simple quote elegantly describes the subtle difference between software application development and ML code development. In other words, for traditional software development, humans develop all of the code and directly program in rules and logic. For ML development, however, the developer programs a methodology for the computer to learn from data, and the computer develops the ML code to mimic patterns it uncovers during the learning process. This subtle difference leads to major differences in the characteristics of the environments and processes needed for code development and production.

Development environment for ML is not so “quick and dirty”

The development environment in the ML space is a place for ML developers to perform two tasks:

- Discovery — Explore the historical production data and generate insights that will inform the training process.
- Model training — Build and run code that allows the computer to learn patterns from historical data.

As mentioned above in Figure 3, these two tasks take a significant portion of a data scientist's time when developing ML models.
Let’s take a step back and talk through some of the foundations of supervised ML models before we continue, to put some context around the details of the discovery and model training processes…

Typical supervised ML models have two aspects: targets and predictors. Targets are the information you are trying to predict (e.g., the likelihood that a customer will buy a certain product) and predictors are the information with a high likelihood of predicting the target (e.g., the number of similar products that customer has bought in the recent past). The target should be chosen based on the business value of the actions that can be taken based on the predictions. The predictors should be chosen based on their predictive power. Predictors can be very basic data elements but can also be complex aggregations of data elements. The overall goal of building a valuable ML model is to find the right predictors for a given target amongst the endless combinations of data elements available. This can be a daunting task.

…now back to our regularly scheduled programming.

Discovery process

The discovery process allows ML developers to “get a feel” for the data before diving in to training a model. Typical investigations include understanding the data quality, examining statistical distributions and finding correlations between predictors and other predictors, as well as between predictors and targets. ML developers will create visualizations to interrogate the data from different angles and generate many new data elements through manipulation and aggregation of the provided data set. A major aspect of the discovery process called “feature engineering” is where new potential predictors are created for the ML model to use during training. Tens of thousands of features may be engineered during discovery; many, however, will not show a high relative predictive power and will be dropped from the final model.
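The winnowing described above, in which many candidate features are generated and only those with real predictive power survive, can be sketched with a simple correlation filter. The data, feature names and 0.8 threshold below are all invented for illustration; real pipelines use richer measures of predictive power:

```python
# Toy feature-engineering pass: generate candidate features, then keep
# only those whose absolute correlation with the target clears a
# threshold. Data, feature names and threshold are invented.
base = [1, 2, 3, 4, 5, 6]                     # a raw data element
target = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]      # roughly 2 * base

candidates = {
    "identity": [float(x) for x in base],
    "squared": [float(x * x) for x in base],   # engineered aggregate
    "noise": [5.0, 1.0, 4.0, 1.0, 5.0, 9.0],  # unrelated digits
}

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / var ** 0.5

kept = {name: f for name, f in candidates.items()
        if abs(corr(f, target)) > 0.8}
print(sorted(kept))  # 'noise' falls below the threshold and is dropped
```

Scaled up from three candidates to the tens of thousands mentioned above, this filtering step is a large part of why discovery consumes so much of a data scientist's time.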
Model training process

After the ML developer has a good feel for the data and has engineered a number of features, he/she is ready to train a model. There are many algorithms that can be trained, each with their own nuances and trade-offs. The ML developer must be acutely aware of these trade-offs and understand the business use case to ultimately select the right model to be used in production. Of course, predictive performance is the overall goal, but aspects like over-fitting and the quality and speed of data available in production should also be considered. Depending on the model and size of data, the training process can be compute-intensive; depending on the code efficiency and hardware available, the ML developer may face extremely long iterations between trainings. In the end, a model will be chosen and moved into testing and production. Now, let’s understand what is needed for a development environment in ML, knowing that it will be used for discovery and model training. Both discovery and model training require an environment that allows for hands-on access to large amounts of production data, extensive computing power and a set of ML, statistical and visualization tools. Because large amounts of production data are absolutely necessary, the development environment for ML should be secure, have strict access controls, and backup and recovery should be required. This is the exact opposite of the development environment for traditional software engineering, and in fact looks more like a production environment. This major difference can get companies hung up on what exactly an ML environment is. Is it a development environment or a production environment? Is it some new hybrid of the two? Mature companies handle this in different ways, but the bottom line is that this is a different environment than what is needed for software development.
Companies should be aware of this as they build out their ML capabilities, tackle it head on and be ready to make some decisions that are outside of the norm.

QA environments should focus on testing code and the data used for training

As mentioned above, the ML model code is essentially written by the computer to codify what it learns from the historical patterns found in the large amount of data available. This subtle difference from traditional software development leads to the need for QA testing above and beyond the typical unit and integration tests. If proper QA testing is not performed for ML models, the company is at risk of creating tremendous amounts of technical debt that will ultimately have to be unwound (Sculley, et al., 2014). Google has developed a rubric of 28 actionable tests (Breck, Cai, Nielsen, Salib, & Sculley, 2017) that they recommend be performed before productionizing an ML model, and they are building a platform in TensorFlow to automate some of these tests (Baylor, et al., 2017). We find this rubric to be an excellent starting point for companies aspiring to build a robust QA pipeline for ML models. Some examples of the new dimensions to test based on this rubric are:

- data efficacy and quality used for training the model;
- quality of code used to engineer the features used as predictors in the model;
- thorough hyperparameter tuning;
- model quality across different slices of data; and
- integration testing of the full model pipeline (assembling training data, feature generation, model training, model verification and deployment to a serving system).

We encourage you to read the full paper here.

ML models are not applications and may not move to a production environment immediately

A production-ready ML model is essentially a sophisticated calculation that intakes data and outputs a prediction.
The models themselves are not production-ready applications, but rather a production-ready service that will be leveraged by the enterprise within production applications, data pipelines and/or reports. When a model is ready for production, it may be leveraged by an enterprise in three ways:

- Embedded in an Extract, Transform, Load (ETL) pipeline for use in reports, other models, etc.
- Embedded within a production application
- Exposed as an API for ad-hoc calculations by a variety of applications and/or end-users

There will have to be tight coordination between the software development process of the application hosting the ML model and the actual ML model development. The software itself will go through its own development/QA/production process while the ML model is being trained and tested simultaneously. The details of the agile coordination process are out of scope for this article, but will be discussed in a future post.

Initial guidance for integrating ML and software development processes

Developers should think through the integration of ML and software development prior to going off into their separate ML and software worlds. This article won't go deep into operational models for these teams to work together, but offers some initial guidance to ensure a streamlined path to production.

- Collaborate closely, especially upfront: The ML and the software development teams should work closely, especially in the beginning phases of development when several critical decisions are being made on tools, standards, etc.
- Align on the datasets: Historical datasets used to train the model will have several nuances that the ML development team will have to make assumptions on and adjust for while training the model (formatting, missing data points, etc.)
— these nuances and adjustments need to be known by the software development team in real time while developing the software.
- Align on performance needs: In production, ML models will typically have a large throughput of data with complex calculations that make the predictions in a timely fashion. To meet the prediction cadence required, infrastructure performance requirements should be discussed upfront to handle the volume, velocity and variety of data that will be coming through the ML model in production.

Model refresh cycle should be considered upfront before putting a model into production

As shown above in Figure 3, the model building process is cyclical in nature. At the end of the process the model needs to be refreshed in order to maintain its predictive power (the predictive power of models decreases over time due to changes in the environment and/or data quality/availability). Moreover, new data may become available that could boost the overall predictive power of the model and should be introduced. Models may be refreshed at different rates (daily, weekly, bi-annually, etc.) depending on the nature of the data and model being used. The governance for refreshing models should be thought through as part of putting models into production. Processes should be put in place to monitor the overall model performance in production, and thresholds should be set to determine the best cadence for a refresh. In addition, once a model is in production, a team should be devoted to starting the exploration process all over again to ensure the most predictive features are being developed and selected for the next refresh cycle.

Conclusions

The ML model development process has some subtle differences from the traditional software development process. However, these subtle differences drive major changes to the processes and environments typically used to develop software.
In order for companies to mature their ML development teams and streamline the time-to-market for ML models, they should embrace these changes and build out the right environments with the right governance and processes. Not doing so will stifle innovation in the ML space and slow the ability to make valuable predictions. Moreover, the complexity and sheer amount of technical debt that can be created within a full ML pipeline, across both data and software, may lead to untrustworthy recommendations and disastrous business decisions.

References

Baylor, D., Breck, E., Cheng, H.-T., Fiedel, N., Foo, C. Y., Haque, Z., . . . Roy. (2017). TFX: A TensorFlow-Based Production-Scale Machine Learning Platform. Google AI R&D.

Breck, E., Cai, S., Nielsen, E., Salib, M., & Sculley, D. (2017). The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction. Google AI R&D.

Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., . . . Young, M. (2014). Machine Learning: The High-Interest Credit Card of Technical Debt. Google AI R&D.
https://www.wwt.com/article/streamline-machine-learning-operations-software-development
The proliferation of computing has created an enormous amount of data. There is data about everything from sensors that track whales in the ocean to data about visitors to web sites. Below is a picture of whale tracking.

Computers are used in an iterative and interactive way when processing digital information to gain insight and knowledge. Iterative means that computers can go through all data in large data sets to filter and clean it. Combining data sources, clustering data and data classification are part of the process of using computers to process information. Interaction means that people can gain insight and knowledge from translating and transforming digitally represented information. Patterns can emerge when data is transformed using computational tools.

Computing allows people to share data to collaborate, such as by shared Internet access to large databases, or by using a shared Google Sheet spreadsheet. Collaboration is an important part of solving data-driven problems. Collaboration facilitates solving computational problems through multiple perspectives, experiences, and skill sets. Communication between participants working on data-driven problems gives rise to enhanced insights and knowledge. Collaboration in developing hypotheses and questions, and in testing hypotheses and answering questions about data, helps gain insight and knowledge. Collaborating face-to-face and using online collaborative tools can facilitate processing information to gain insight and knowledge. Investigating large data sets collaboratively can lead to insight and knowledge not obtained working alone.

Visualization

Visualization tools and software can communicate information about data. Tables in a document, diagrams generated from a spreadsheet, and textual displays in a presentation can be used in communicating insight and knowledge gained from data.
Summaries of data analyzed computationally, as opposed to showing all of the vast data, can be effective in communicating insight and knowledge gained from digitally represented information. Transforming information can be effective in communicating knowledge gained from data. Interactivity with data, such as showing a colleague the effects of changing one cell in a spreadsheet and its impact on related cells, is an aspect of communicating with computing. Below is a picture of formulas which are impacted if one cell is changed.

Metadata

Metadata is data about data. Metadata can be descriptive data about an image, a word-processed document, or other complex objects. Metadata can increase the effective use of data or data sets by providing additional information about various aspects of that data.

Large Data

Computing facilitates exploration and the discovery of connections in information. The use of large data sets, such as the logs of all visitors to a web site, provides opportunities and challenges for extracting information and knowledge. Below is a visualization of data collected by Google Analytics about visitors to a web site. Large data sets, like all of the Google searches done in a two-day period, provide opportunities for identifying trends, making connections in data, and solving problems. Computing tools facilitate the discovery of connections in information within large data sets. Search tools, such as the Google Search Engine, are essential for efficiently finding information. Information filtering systems, which take large data sets and eliminate data that is not of interest, are important tools for finding information and recognizing patterns in the information. Software tools, including spreadsheets and databases, help to efficiently organize and find trends in information. Large data sets include data such as transactions, measurements, text, sound, images, and video.
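The information filtering idea can be sketched in a few lines of Python; the visitor-log records and field names below are invented for illustration:

```python
# Hypothetical sketch of an information filtering step: reduce a large
# log of web-site visits to only the records of interest.
visits = [
    {"page": "/home", "country": "US", "seconds": 5},
    {"page": "/docs", "country": "DE", "seconds": 120},
    {"page": "/home", "country": "US", "seconds": 2},
    {"page": "/docs", "country": "US", "seconds": 90},
]

# Keep only engaged visits (long dwell time) to a given section of the site.
engaged_docs = [v for v in visits if v["page"] == "/docs" and v["seconds"] > 60]
print(len(engaged_docs))  # 2
```

The same pattern scales conceptually to much larger logs: the filter eliminates data that is not of interest, leaving a set small enough to inspect for trends.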
The storing, processing, and curating of large data sets is challenging simply because of the amount of data it is now possible to obtain. For instance, NASA obtains incredibly vast amounts of data from its satellites, but much of that data is redundant among satellites and/or not of use; NASA's information filtering systems seek to eliminate redundant and useless data to help manage the size and complexity. Structuring large data sets for analysis can be challenging. Maintaining privacy and cyber security of large data sets containing personal information can be challenging. Scalability of systems is an important consideration when data sets are large: techniques that worked on a smaller data set may not work when the size of the data set increases. Analytical techniques to store, manage, transmit, and process data sets change as the size of data sets scales. The size or scale of a system that stores data affects how that data set is used. The effective use of large data sets requires computational solutions.

Trade-offs and Concerns

Digital data representations involve trade-offs related to storage, security, and privacy concerns. Security and privacy concerns, as described in the chapter on Cyber Security, arise with data containing personal or otherwise sensitive information and engender trade-offs in storing and transmitting it. For instance, storing and transmitting encrypted data is more secure, but makes the data slower to access. There are other trade-offs, such as using lossy and lossless compression techniques, as described in the chapter on Data In The Computer, for storing and transmitting data. Lossless data compression reduces the number of bits stored or transmitted, but allows complete reconstruction of the original data. Lossy data compression can significantly reduce the number of bits stored or transmitted at the cost of being able to reconstruct only an approximation of the original data.
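The lossless trade-off can be demonstrated in a few lines of Python using the standard zlib library: the stored form is smaller, yet decompression reconstructs the original exactly.

```python
import zlib

# Lossless compression demo: fewer bits stored, yet the original data
# is reconstructed exactly (a lossy scheme, by contrast, would only
# recover an approximation of the original).
original = b"AAAAABBBBBCCCCC" * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))  # the compressed form is much smaller
print(restored == original)            # True: exact reconstruction
```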
Data is stored in many formats depending on its characteristics, such as size and intended use. The choice of storage media affects both the methods of and costs of manipulating the data it contains. Reading data, which multiple users can do concurrently, and updating data, which typically only one user at a time can do, have different storage requirements.

Database

A database is a system for storing and taking care of data (any kind of information). A database engine can sort, change or serve the information in the database. The information itself can be stored in many different ways; before digital computers, card files, printed books and other methods were used. Now most data is kept in computer files. A database system is a computer program for managing electronic databases. A very simple example of a database system would be an electronic address book.

The data in a database is organized in some way. Before there were computers, employee data was often kept in file cabinets. There was usually one card for each employee. On the card, information such as the date of birth or the name of the employee could be found. A database also has such "cards". To the user, the card will look the same as it did in old times, only this time it will be on the screen. To the computer, the information on the card can be stored in different ways. Each of these ways is known as a database model. The most commonly used database model is the relational database model; it uses relations and sets to store the data. Normal users talking about the database model will not talk about relations; they will talk about database tables.

Uses for database systems include:
- Storing data
- Storing special information used to manage the data. This information is called metadata and it is not shown to all the people looking at the data.
- Solving cases where many users want to access (and possibly change) the same entries of data.
- Managing access rights (who is allowed to see the data, who can change it).
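The "card file" analogy can be made concrete with a tiny relational example using Python's built-in sqlite3 module (the table and column names here are invented for illustration):

```python
import sqlite3

# A minimal relational "card file": one row per employee, managed by a
# database engine rather than a filing cabinet.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, birth_date TEXT)")
con.executemany(
    "INSERT INTO employee VALUES (?, ?)",
    [("Ada", "1815-12-10"), ("Grace", "1906-12-09")],
)

# The engine can sort and serve the stored information on demand.
rows = con.execute("SELECT name FROM employee ORDER BY birth_date").fetchall()
print(rows)  # [('Ada',), ('Grace',)]
```

Each row plays the role of one card; the table is the drawer, and the engine handles sorting, changing and serving the cards.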
https://computing-concepts.cs.uri.edu/wiki/Data_and_Its_Analysis
Abstract

BACKGROUND AND PURPOSE: Focal cortical dysplasias are the most common resected epileptogenic lesions in children and the third most common lesion in adults, but they are often subtle and frequently overlooked on MR imaging. The purpose of this study was to evaluate whether MP2RAGE-based morphometric MR imaging analysis is superior to MPRAGE-based analysis in the detection of focal cortical dysplasia. MATERIALS AND METHODS: MPRAGE and MP2RAGE datasets were acquired in a consecutive series of 640 patients with epilepsy. Datasets were postprocessed using the Morphometric Analysis Program to generate morphometric z score maps such as junction, extension, and thickness images based on both MPRAGE and MP2RAGE images. Focal cortical dysplasia lesions were manually segmented in the junction images, and volumes and mean z scores of the lesions were measured. RESULTS: Of 21 focal cortical dysplasias discovered, all were clearly visible on MP2RAGE junction images, whereas 2 were not visible on MPRAGE junction images. In all except 4 patients, the volume of the focal cortical dysplasia was larger and mean lesion z scores were higher on MP2RAGE junction images compared with the MPRAGE-based images (P = .005, P = .013). CONCLUSIONS: In this study, MP2RAGE-based morphometric analysis created clearer output maps with larger lesion volumes and higher z scores than the MPRAGE-based analysis. This new approach may improve the detection of subtle, otherwise overlooked focal cortical dysplasia.
ABBREVIATIONS: - FCD - focal cortical dysplasia - MAP - Morphometric Analysis Program Footnotes Disclosures: Andreas Schulze-Bonhage—UNRELATED: Board Membership: advisory boards on antiepileptic drugs of pharmaceutical companies; Consultancy: pharmaceutical and medical device consulting; Grants/Grants Pending: research on seizure detection and neurophysiologic correlates of cognition*; Payment for Lectures Including Service on Speakers Bureaus: lectures on epileptology. Tobias Kober—UNRELATED: Employment: Siemens Healthcare AG Switzerland, Comments: full employment. Horst Urbach—UNRELATED: Board Membership: coeditor Clinical Neuroradiology; Payment for Lectures Including Service on Speakers Bureaus: Bayer AG, Stryker, UCB, Eisei Co, Bracco; OTHER RELATIONSHIPS: shareholder of VEObrain GmbH. *Money paid to the institution. Paper previously presented as a poster at: Annual Meeting of the German Society of Neuroradiology, October 3–6, 2018; Frankfurt, Germany.
http://www.ajnr.org/content/early/2020/06/04/ajnr.A6579
GIS Data - Category View

The Bureau of Land Management Oregon data library contains spatial data of the Oregon and Washington BLM. The Data Library allows the user to obtain datasets and metadata via download. Please note that the available data below does not represent the BLM Oregon's entire Data Library. The GIS data of the BLM Oregon consist of 1) statewide or regional data captured at a scale of 1:100,000 (or smaller) and 2) framework, or base, data captured at the 1:24,000 scale (or larger) that have been built and maintained by BLM Oregon; 3) the data cover mostly BLM-managed lands, with some private lands included.

Data Standards

Data Downloads

Spatial data is provided in ESRI ArcGIS 10.1 file GeoDatabase format, which is available for download as a compressed WinZip file. All data is in the Geographic (GCS) NAD83 projection, unless otherwise noted in the provided metadata. Metadata is embedded within each spatial dataset. It is also available to view and download by following the dataset links from the "GIS Download Listing" link below. The data listed in the link below is what the BLM is currently making available for download.

Data Library Contact Information

For questions about the BLM Oregon data library and metadata contact: Eric Hiebenthal at 503-808-6565 or Roger Mills at 503-808-6430. We have grouped our datasets into the following categories to help you quickly locate data useful to your interests. You can also display an alphabetized list.
https://www.blm.gov/or/gis/data.php
Machine learning has the potential to radically change the application and development of software. However, it is not a fully autonomous process. There is still very much a human element to machine learning, and the overall process involves several precise steps. Here is a broad overview of the different steps in the machine learning process.

- Data Acquisition: the results of machine learning are only as good as the data it has access to. The first step of machine learning is acquiring relevant data sets. Datasets should be relevant to the topic at hand; however, they don't need to be overly organized or examined, as reviewing the information is part of the machine learning process. And because machine learning can process a great deal of information, these datasets can be quite large. The more information the better, as machine learning works with the information it is provided to produce results.
- Data Cleaning: is the next step after gathering relevant information. Because machine learning relies on broad data sets, the acquisition and collection phase should cast a wide net and gather as much data as possible. Data cleaning should remove repeated data, remove information that is not needed, remove any data pattern organization (as you don't want to influence the results), and lastly, the data should be properly formatted and stored as needed on your servers or storage of choice.
- Training: is the most involved part of the machine learning process. For a person, learning any skill, such as playing a musical instrument or riding a bike, requires practice and taking in new information. Machine learning is similar: its first output is not going to be completely exact, but it improves over time. The point of the training process is to improve the learning algorithm. The learning algorithm tells the machine what patterns it's looking for in a larger dataset. Subtle adjustments to the learning algorithm allow for more accurate results.
Over time a well-trained algorithm can pull information out of datasets that it has never encountered, and it 'learns' what to expect.

- Test For Accuracy: related to training is the accuracy test. An accurate model should be able to produce factually correct results when presented with an entirely new dataset. Machine learning results should be predictive and applicable to datasets that were not used for training. In real-world applications, new data will have to be tested constantly, and if an algorithm is accurate when applied to sets of ever-changing information, it can then be applied in real-world situations. In general, 70 to 80 percent correct is considered accurate for current machine learning applications; this is likely to increase in the future.
- Predict: the end goal of all of this testing and algorithm correction is to generate results and use the information to answer a question you have. The predict step can be considered the final step in the process, and one that only a successful application of machine learning can accomplish: it allows not only the analysis of data but also the ability to predict future occurrences or trends.

Summation

Machine learning is an involved process. The chief concern is that the results it produces are not only accurate but also repeatable with accuracy. Only through careful testing and well-sourced information can machine learning produce the results it is capable of.
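As a toy illustration of the test-for-accuracy step, here is a sketch using a one-nearest-neighbour rule written from scratch; the article does not prescribe any particular algorithm or library, and the data and labels below are invented:

```python
# Toy sketch of the train/test split and accuracy check: the model is
# fit to one set of examples and its accuracy is measured on a
# different set it has never seen.

def nearest_neighbour_predict(train, x):
    """train: list of (value, label); returns label of the closest value."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

train = [(1.0, "small"), (2.0, "small"), (9.0, "large"), (10.0, "large")]
test = [(1.5, "small"), (8.5, "large"), (9.5, "large"), (5.4, "large")]

correct = sum(
    nearest_neighbour_predict(train, x) == label for x, label in test
)
accuracy = correct / len(test)
print(accuracy)  # 0.75, within the 70-80 percent band mentioned above
```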
https://www.imre.uk/2019/06/machine-learning-process-data-clean-train-test-accuracy-predict/
Project EDDIE (Environmental Data-Driven Inquiry & Exploration) is a community effort aimed at developing teaching resources and instructors that address quantitative reasoning and scientific concepts using open inquiry of publicly available data. Project EDDIE modules are designed with an A-B-C structure to make them flexible and adaptable to a range of student levels and course structures.

Summary

Many species' life cycles are strongly influenced by temperature, but other cues, like day length and precipitation, can also trigger life cycle changes. Phenology is a way of recording the time when events, like bud break or insect emergence, occur, and these events can be important for everything from predicting the timing of disease or insect outbreaks to predicting the impacts of climate change on particular species. This activity explores the question: how is bumblebee phenology affected by climate, and are patterns in the phenology of an organism better explained at smaller scales?

Strengths of Module

Students should be able to clean and wrangle data, create and interpret figures, compare R2 values among bivariate relationships and think about data reliability.

What does success look like

Through this module, students should develop data analysis skills that help them to evaluate the relationship between a variety of climate-related environmental cues and a taxon's phenology. In the context of climate change, they will be able to make an argument using data about whether changing climates are likely to impact the phenology of a particular species of interest. They will compare the results using different subsets of a large dataset and make decisions about how to create subsets of data for the analyses they plan to complete. Students will be able to compare the strength of the association between temperature/climate-related variables and phenology for different species.
To achieve these goals, students will develop abilities to generate, read, and evaluate scatterplots and regressions between sets of variables.

Context for Use

Description and Teaching Materials

Why this Matters: Quick outline/overview of the activities in this module

- Worksheet Part A: Orientation to phenology via the National Phenological Network website and critical thought about phenology, season and latitude.
- Worksheet Part B: Determine whether there is a detectable trend in bumblebee emergence date in the Spring over time for a continental and a regional dataset.
- Worksheet Part C: Compare the date of bumblebee emergence with a variety of climate-related site traits, including Winter and Spring max/min temperatures and precipitation, using scatterplots and regressions. Then, explore smaller-scale variation within a chosen climate variable via latitude, elevation, or state (USA).

Worksheet Part A

Students learn to define phenology and understand how phenological data are collected. Then, they evaluate the role that phenology plays in human society and begin to think about the broad roles of climate in phenological behavior.

Worksheet Part B

In this section, students manipulate data to make and interpret scatterplot graphs and regressions about the phenology of bumblebees across time for a large dataset and a provided subset (Minnesota) of the dataset. They will interpret and compare the biological patterns in both datasets, and evaluate their confidence in their conclusions given the nature of the available data.

Worksheet Part C

In activity B, students tested change over time, but it is uncertain whether changing climate is responsible for the patterns observed. In activity C, students use environmental data recorded for each site to identify how much variation in bumblebee emergence phenology is explained by other climate-related variables. Students are required to justify their chosen climate variable.
Additionally, students are asked to explore smaller-scale sources of variation within the relationship of bumblebee phenology and their climate variable of choice.

Teaching Materials:
- Phenology Data Exercise (handout, Microsoft Word 2016 and PDF)
- phenology_data_student.xlsx (Microsoft Excel 2016)
- pre-lab_presentation_v2 (PowerPoint 2016, not specifically used but kept as a resource from the original module)

Teaching Notes and Tips

Workflow of this module:
- This module is designed to be completed in a 2.5-hour laboratory session or asynchronously
- Give students digital access to the handout and dataset when they arrive to class
- Instructor gives a brief PowerPoint presentation with background material (not used). The instructor may need to introduce concepts of phenology, climate vs. weather, or summary statistics depending on the level of the students
- Students can then work through the module activities.

Notes on the student handout: The handout is meant to increase in difficulty and complexity from one question to the next. Students should easily be able to complete Part A without assistance. The final parts of Part B and most of Part C will likely require guidance from the instructor, depending on the students' skill level with Excel.

Potential pre-class readings: Instead of attaching one specific paper for pre-reading, I suggest that instructors find a very recent paper that discusses insect/plant phenology and global change. There are many wonderful papers on bee phenology that are continually being added to the scientific literature.

Measures of Student Success

Success is shown if:
- Students can clean, manipulate and subset data from the NPN network before visualizing or analyzing data trends.
- Students can create, compare, and interpret scatterplots and correlations between pairs of variables.
- Students can articulate the relationship between temperature, environmental cues, and a taxon's phenology, and use these relationships to predict how the taxon will be affected by climate change.

Notes

For this module I combined the pre-class handout and the student handout parts A & B into one exercise. I stripped away the questions asking students to hand-draw figures and instead focus more on working specifically with the data. I did not ask the students to subset specific years of phenology data (with clearer climate patterns) because I disagree with that premise. I do ask students to explore smaller-scale sources of variation within the broad climate dataset (state, elevation, latitude).

Cite this work

Researchers should cite this work as follows:
- Campany, C. (2021). Climate Drivers of Phenology Across Scales (adapted during Project EDDIE FMN). Project EDDIE Faculty Mentoring Network Spring 2021, QUBES Educational Resources. doi:10.25334/NN2P-ZK50
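As an aside for instructors: the regression and R2 comparisons at the heart of Parts B and C can also be reproduced outside Excel. A minimal Python sketch, with invented temperature and emergence values, might look like:

```python
import statistics

def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line y = a + b*x."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# e.g. spring minimum temperature (x) vs. bee emergence day-of-year (y);
# these values are made up purely to illustrate the computation.
temps = [2.0, 4.0, 5.0, 7.0, 9.0]
emergence = [130, 124, 121, 116, 109]
print(round(r_squared(temps, emergence), 3))  # 0.997
```

Computing R2 for each candidate climate variable and comparing the values is exactly the decision students make when choosing which cue best explains the phenology pattern.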
https://qubeshub.org/publications/2328/about/1
If you're looking to accurately model a specific real-world area in a model railroad layout, you'll likely be considering building in a smaller model train scale such as N gauge. It's one of the most popular smaller-scale options and allows hobbyists the opportunity to model more of a specific area in a smaller space. But just how much space do you actually need? In this article we'll take a look at how many feet are in a mile of N scale trains and whether this allows us to accurately model the area in the space available.

N scale miles

N gauge is the second smallest commercially available model train scale (after Z gauge). It allows us to fit greater amounts of N scale track, landscaping, buildings and infrastructure into a much smaller area in comparison to larger model train scales (e.g. HO gauge or OO gauge). This makes it an attractive scale for modellers looking to build a railroad layout in a smaller space. However, before you make any decisions regarding the most suitable model train scale for you, it would be a good idea to calculate how many feet there are in a mile for N scale trains. How many feet and inches do you need in order to model one mile of N gauge track or landscaping? The results may surprise you...

How many feet and inches is an N scale mile?

In the real world one mile is equivalent to 5,280 feet. This fact allows us to calculate how many feet and inches there are in a mile for N gauge railroads. However, before we do that we need to consider one important point. US and European N scale differs slightly from British N scale. We need to be careful to distinguish between the two variants when doing the calculations. Let's take a look.

US & European N scale

Remember, we said earlier that there are 5,280 feet in a mile in the real world. American and European N gauge has a scale ratio of 1:160. So, if we divide 5,280 by 160 that gives us 33.
5280 / 160 = 33

Therefore, in US and European N scale you require 33 feet in order to model a real-world mile.

British N scale

However, British N scale has a scale ratio of 1:148. So, using the same calculation, we need to divide by 148 instead.

5280 / 148 = 35.68

Therefore, in British N scale you'll need 35.68 feet to model a mile.

Conclusion

These results are really quite eye-opening. N gauge is a popular choice for those wanting to model at a smaller scale, however it's surprising just how much space is required per real-world mile. You certainly need a greater area than most modellers anticipate. Fortunately, using the scale ratio and a technique called compression, we're able to model the real world in significantly less space than 33-36 feet!

Summary of N scale miles

We've calculated the number of feet and inches in an N scale mile. However, it's important to note the subtle difference between the US/European and British variants.

- There are 33 feet in an N scale mile for US and European model railroads.
- There are 35.68 feet in an N scale mile for British model railways.

This gives us a good indication of the amount of space that an N gauge layout will take up in the real world. However, we can reduce this by using the compression method to only model what's necessary to achieve a realistic outcome. If you're unsure whether N gauge trains are right for you or would like some comparisons as to which train gauge is best for different use cases, we have a complete guide to model train scales where we cover this in great detail.
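For anyone who fancies checking the arithmetic (or trying other ratios) programmatically, the calculation above reduces to a one-line function; this is just a convenience sketch:

```python
# Feet of layout needed to model one real-world mile at a given
# model ratio (1:160 for US/European N scale, 1:148 for British).
REAL_FEET_PER_MILE = 5280

def scale_feet_per_mile(ratio):
    """Layout feet required per real-world mile at scale 1:ratio."""
    return REAL_FEET_PER_MILE / ratio

print(scale_feet_per_mile(160))            # 33.0  (US/European N scale)
print(round(scale_feet_per_mile(148), 2))  # 35.68 (British N scale)
```

Passing other ratios (87 for HO, 76 for OO, 220 for Z) gives the equivalent figures for the larger and smaller scales mentioned above.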
https://www.modelrailwayline.com/how-many-feet-in-a-mile-n-scale/
Unified Modeling Language (UML) is a third-generation modeling language in object-oriented software engineering. It provides constructs to specify, construct, visualize, and document artifacts of software-intensive systems. This paper presents a technique that uses Offutt's state-based specification test data generation model to generate test cases from UML statecharts. A tool, TCGen, has been developed to demonstrate this technique with specifications written in the Software Cost Reduction (SCR) method and the Unified Modeling Language (UML). Experimental results from using this tool are presented.

ISE-TR-99-08

Clustering is a widely used knowledge discovery technique. It helps uncover structures in data that were not previously known. Clustering of large datasets has received a lot of attention in recent years. However, clustering is still a challenging task, since many published algorithms fail to scale well with the size of the dataset and the number of dimensions that describe the points, to find arbitrary shapes of clusters, or to deal effectively with the presence of noise. In this paper, we present a new clustering algorithm based on the fractal properties of the datasets. The new algorithm, which we call Fractal Clustering (FC), places points incrementally in the cluster for which the change in the fractal dimension after adding the point is the least. This is a very natural way of clustering points, since points in the same cluster have a great degree of self-similarity among them (and much less self-similarity with respect to points in other clusters). FC requires one scan of the data, is suspendable at will (providing the best answer possible at that point), and is incremental. We show via experiments that FC effectively deals with large datasets, high dimensionality and noise, and is capable of recognizing clusters of arbitrary shape.
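The FC placement rule described in this abstract can be sketched in a few lines. The following is a toy reconstruction using a box-counting dimension estimate over fixed scales, not the authors' implementation:

```python
import math

def box_count(points, eps):
    """Number of grid boxes of side eps occupied by the points."""
    return len({tuple(int(c // eps) for c in p) for p in points})

def fractal_dimension(points, scales=(0.5, 0.25, 0.125, 0.0625)):
    """Least-squares slope of log N(eps) against log(1/eps)."""
    xs = [math.log(1.0 / e) for e in scales]
    ys = [math.log(box_count(points, e)) for e in scales]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def fc_assign(point, clusters):
    """FC placement rule: index of the cluster whose fractal dimension
    changes least when the point is added."""
    def delta(cl):
        return abs(fractal_dimension(cl + [point]) - fractal_dimension(cl))
    return min(range(len(clusters)), key=lambda i: delta(clusters[i]))

# Points along a line have a box-counting dimension close to 1.
line = [(i / 100, i / 100) for i in range(100)]
print(round(fractal_dimension(line), 2))  # 1.0
```

The real algorithm is far more careful (it works incrementally in one scan and handles noise), but this captures the self-similarity intuition: a point that fits a cluster's shape barely perturbs that cluster's fractal dimension.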
ISE-TR-99-07

With the development of the Internet and the on-line availability of large numbers of information sources, the problem of integrating multiple heterogeneous information sources requires reexamination, its basic underlying assumptions having changed drastically. Integration methodologies must now contend with situations in which the number of potential data sources is very large, and the set of sources changes continuously. In addition, the ability to create quick, ad-hoc virtual databases for short-lived applications is now considered attractive. Under these new assumptions, a single, complete answer can no longer be guaranteed. It is now possible that a query could not be answered in its entirety, or it might result in several different answers. Multiplex is a multidatabase system designed to operate under these new assumptions. In this paper we describe how Multiplex handles queries that do not have a single, complete answer. The general approach is to define flexible and comprehensive strategies that direct the behavior of the query processing subsystem. These strategies may be defined either as part of the multidatabase design or as part of ad-hoc queries.

ISE-TR-99-06

As the software industry has matured, we have shifted our resources from developing new software systems to making modifications in evolving software systems. A major problem for developers in an evolutionary environment is that seemingly small changes can ripple throughout the system to cause major unintended impacts elsewhere. As such, software developers need mechanisms to understand how a change to a software system will impact the rest of the system. Although the effects of changes in object-oriented software can be restricted, they are also more subtle and more difficult to detect. Maintaining current object-oriented systems is more of an art, similar to where we were 15 years ago with procedural systems, than an engineering skill.
We are beginning to see "legacy" object-oriented systems in industry. A difficult problem is how to maintain these objects in large, complex systems. Although objects are more easily identified and packaged, features such as encapsulation, inheritance, aggregation, polymorphism and dynamic binding can make the ripple effects of object-oriented systems far more difficult to control than in procedural systems. The research presented here addresses the problems of change impact analysis for object-oriented software. Major results of this research include a set of object-oriented data dependency graphs, a set of algorithms that allow software developers to evaluate proposed changes on object-oriented software, a set of object-oriented change impact metrics to evaluate the change impact quantitatively, and a prototype tool, ChaT, to evaluate the algorithms. This research also results in efficient regression testing by helping testers decide what classes and methods need to be retested, and in supporting cost estimation and schedule planning. ISE-TR-99-05 As we move from developing procedure-oriented to object-oriented programs, the complexity traditionally found in functions and procedures is moving to the connections among components. More faults occur as components are integrated to form higher level aggregates of behavior and state. Consequently, we need to place more effort on testing the connections among components. Although object-oriented technology provides abstraction mechanisms to build components to integrate, it also adds new compositional relations that can contain faults, which must be found during integration testing. This paper describes new techniques for analyzing and testing the polymorphic relationships that occur in object-oriented software. The application of these techniques can result in an increased ability to find faults and overall higher quality software. ISE-TR-99-04 Many techniques for watermarking of digital images have appeared recently. 
Most of these techniques are sensitive to cropping and/or affine distortions (e.g., rotations and scaling). In this paper we describe a method for recognizing images based on the concept of an identification mark; the method does not require the use of the original image; it requires only a small number of salient image points. We show that, using our method, it is possible to recognize distorted images and recover their "original" appearance. Once the image is recognized we use a second technique based on the normal flow to fine-tune image parameters. The restored image can be used to recover the watermark that had been embedded in the image by its owner. ISE-TR-99-03 Many real datasets behave in a fractal fashion, i.e., they show an invariance with respect to the scale used to look at them. Fractal sets are characterized by a family of fractal dimensions, each with a particular interpretation. In this paper we show examples of how the fractal dimension(s) can be used to extract knowledge from datasets. Techniques that use the fractal dimension to detect anomalies in time series, characterize time patterns of association rules, discover patterns in datacubes, and cluster multidimensional datasets are described as part of an on-going research effort. ISE-TR-99-02 Exploratory Data Analysis is a widely used technique to determine which factors have the most influence on data values in a multi-way table, or which cells in the table can be considered anomalous with respect to the other cells. In particular, median polish is a simple, yet robust method to perform Exploratory Data Analysis. Median polish is resistant to holes in the table (cells that have no values), but it may require a lot of iterations through the data. This factor makes it difficult to apply to large multidimensional tables, since the I/O requirements may be prohibitive. This paper describes a technique that uses median polish over an approximation of a datacube, easing the burden of I/O.
The results obtained are tested for quality, using a variety of measures. The technique scales to large datacubes and proves to give a good approximation of the results that would have been obtained by median polish in the original data. ISE-TR-99-01 This report presents results for the Rockwell Collins Inc. sponsored project on generating test data from requirements/specifications, which started January 1, 1998. The purpose of this project is to improve our ability to test software that needs to be highly reliable by developing formal techniques for generating test cases from formal specifications of the software. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide in a form that can be easily manipulated by automated means. This Phase II, 1998 report presents results and strategies for practically applying test cases generated according to the criteria presented in the Phase I, 1997 report. This report presents a small empirical evaluation of the test criteria, and algorithms for solving various problems that arise when applying test cases developed from requirements/specifications. One significant problem in specification-based test data generation is that of reaching the proper program state necessary to execute a particular test case. Given a test case that must start in a particular state S, the test case prefix is a sequence of inputs that will put the software into state S. We have addressed this problem in two ways. The first is to combine various test cases into test sequences, ordered so that each test case leaves the software in the state necessary to run the subsequent test case. An algorithm is presented that attempts to find test case sequences that are optimal in the sense that the fewest possible number of test cases are used.
To handle situations where it is desired to run each test case independently, an algorithm for directly deriving test sequences is presented. This report also presents procedures for removing redundant test case values, and develops the idea of "sequence-pair" testing, which was presented in the 1997 Phase I report, into a more general idea of "interaction-pair" testing.
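The "interaction-pair" idea described above generalizes pairwise testing: every combination of values for every pair of parameters should be exercised by at least one test. As a generic illustration (not the report's actual algorithm), a small greedy generator can be sketched in Python; the function name and the dict-based parameter model are assumptions for the sketch:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy sketch of pairwise ("interaction-pair") test generation:
    build tests until every value pair of every two parameters is
    covered by at least one test. `parameters` maps name -> values."""
    names = list(parameters)
    # Enumerate every (param i, param j, value a, value b) pair to cover.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((i, j, va, vb))
    suite = []
    while uncovered:
        best, best_gain = None, -1
        # Pick the full assignment covering the most still-uncovered pairs.
        for candidate in product(*(parameters[n] for n in names)):
            gain = sum(1 for (i, j, va, vb) in uncovered
                       if candidate[i] == va and candidate[j] == vb)
            if gain > best_gain:
                best, best_gain = candidate, gain
        suite.append(dict(zip(names, best)))
        uncovered = {p for p in uncovered
                     if not (best[p[0]] == p[2] and best[p[1]] == p[3])}
    return suite
```

For three two-valued parameters, a suite of this kind typically needs only four to five tests instead of the eight exhaustive combinations, which is the economy the interaction-pair criterion aims for.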
https://cs.gmu.edu/techreports/1999
SpaceStat software, first released in 1991, is the international standard for spatial econometric modeling. Until SpaceStat, there was no comprehensive software package that covered a reasonable range of techniques in spatial statistics, geostatistics and spatial econometrics. Time is an integral part of SpaceStat. All views of the data can be animated, from maps to histograms to scatter plots, and all analytics are accomplished through time. All views of the data, including animations and statistical results, can be linked together to enable data discovery and reveal new insights. SpaceStat provides an extensive suite of spatiotemporal and statistical tools including: exploratory spatial data analysis; spatial econometric analyses; and the creation of spatial weights sets and variogram models. Use SpaceStat by itself or alongside Esri’s ArcGIS to substantially extend your GIS analysis capabilities. SpaceStat allows you to view the data in your project, display these data in maps and graphs, and to perform statistics to evaluate spatial patterns. What makes this software unique is that it is a space-time information system. In SpaceStat, time is a dimension of the data, rather than an attribute linked to each object in the dataset. Thus, almost all views of the data can be animated, from maps to histograms to tables. But, SpaceStat does more than just animate your data, it allows you to make statistical inferences about patterns in the data as well. SpaceStat allows you to view, interact with and statistically analyze your space-time data so that you can make informed decisions. Over the past year, we have added powerful new tools for manipulating data and performing statistical analyses, including several which are not available in any other software package. 
In addition, we have improved data import by adding the option to import geographies or attribute data directly from Microsoft Office Excel files, and made it easy to copy graphs or maps from SpaceStat and paste them into other programs. Here we provide an overview of the features we have available in SpaceStat. SpaceStat allows you to aggregate (change the spatial support of) your data in four different ways: from a point or polygon geography to a different point or polygon geography, or from points to polygons and vice versa. To simplify the process of importing data, we now provide the option of importing data directly from Microsoft Office Excel. You can use this option to bring in a new point geography, or to add datasets to existing geographies in your SpaceStat project. Although certainly not unique to SpaceStat, aspatial regression tools, including linear, Poisson, and logistic regression, allow you to go from spatio-temporal data exploration to model building within one software package. In addition to the "simple" forms of these regression tools, SpaceStat also allows you to use model building tools such as best subset and forward and backward stepwise regression. All of these tools can be applied to datasets that encompass multiple times, which allows you to quickly evaluate how model fit and other parameters change over time. In SpaceStat, you can also perform geographically-weighted regression (local, rather than global regression analyses); as with aspatial regression, your GWR models can have a linear, Poisson, or logistic form. Using SpaceStat, you can not only build both spatial and aspatial models, but also run models developed using one method (e.g., as an aspatial linear model) in another form (e.g., aspatial Poisson or linear GWR), and use our wealth of data exploration tools to explore how this change in tools influences the results of your analysis over all of the time periods in your datasets.
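To make the distinction between global and geographically weighted regression concrete, here is a minimal sketch (not SpaceStat's actual implementation) of the linear GWR form: a separate weighted least-squares fit at every location, with Gaussian weights that decay with distance. The function name and the fixed-bandwidth kernel are assumptions for illustration:

```python
import numpy as np

def gwr_linear(coords, X, y, bandwidth):
    """Minimal geographically weighted regression (linear form):
    fit weighted least squares at each location with a Gaussian
    distance kernel, yielding local rather than global coefficients."""
    coords = np.asarray(coords, dtype=float)
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    local = np.empty((len(coords), X1.shape[1]))
    for i, (ci, cj) in enumerate(coords):
        d = np.hypot(coords[:, 0] - ci, coords[:, 1] - cj)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)  # Gaussian kernel weights
        XtW = X1.T * w                           # X'W, row-scaled by w
        # Solve the weighted normal equations (X'WX) b = X'Wy
        local[i] = np.linalg.solve(XtW @ X1, XtW @ y)
    return local
```

The bandwidth controls how local the fits are: as it grows, every local estimate converges to the single global least-squares solution, which is the sense in which GWR generalizes aspatial regression.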
Following the publication of advances in quantifying health disparities by Goovaerts et al. (2007), we are the first to make these advances available in a software package. The Map/Graph menus now include an option to copy an image of a given graph or map to the clipboard so that you can paste the image into word processing or presentation (e.g., Microsoft PowerPoint) files open in different software programs. You can perform a similar function with the copy option in the Edit menu for tables, and copy tabular information directly into Microsoft Excel. SpaceStat 4.0 represents a major reworking of the underlying architecture of the application. Multithreading has been introduced, improving the performance of many methods. A LeSage-Pace estimator for spatial-error and spatial-lag analyses has been added to the spatial regression method. We have designed feature enhancements in SpaceStat 4.0 that improve the appearance, functionality and performance of maps and graphs. You will also find that the extensive help documentation has been updated, revised and expanded. SpaceStat requires a Microsoft Windows™ operating system. SpaceStat is protected by U.S. patents 6,360,184, 6,460,011, 6,704,686, and 6,738,728.
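The "spatial weights sets" mentioned above underpin most spatial analyses of this kind. As a generic illustration (not SpaceStat's code), a distance-band weights matrix and the global Moran's I statistic, the standard first check for spatial autocorrelation, can be sketched as follows; the function names are invented for the sketch:

```python
import numpy as np

def distance_band_weights(coords, threshold):
    """Binary spatial weights: w_ij = 1 if locations i and j are
    within `threshold` distance of each other (and i != j), else 0."""
    coords = np.asarray(coords, dtype=float)
    d = np.hypot(coords[:, None, 0] - coords[None, :, 0],
                 coords[:, None, 1] - coords[None, :, 1])
    return ((d > 0) & (d <= threshold)).astype(float)

def morans_i(values, w):
    """Global Moran's I: a cross-product statistic over neighboring
    deviations from the mean, scaled to behave like a correlation."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    n = len(z)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)
```

Positive values indicate that similar values cluster in space, negative values that neighbors tend to differ; spatial-lag and spatial-error regression models build directly on such a weights matrix.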
https://www.tstat.it/software/spacestat-4/
Laura Hollink, group leader of CWI's Human-Centered Data Analytics group, was interviewed by I/O magazine. Below is the article 'In Conversation with Laura Hollink: Diving into cultural data', which appeared in I/O Magazine (vol. 19, nr. 2), July 2022, p. 16-17 (reproduced with permission).
Help the culture and media sector in the Netherlands to make optimal use of innovations in data science and AI. That is what drives the work of Laura Hollink, group leader of the Human-Centered Data Analytics group at CWI. ‘Since the cultural sector upholds high moral standards, it is the perfect context to research how AI and data science impact real people in the real world.’
What is your group’s research about?
‘Our main research question is how to responsibly apply AI and data science in the culture and media sector. That sector is characterized by high ethical standards and clear moral principles. We translate those high-level principles, like inclusivity or transparency, into demands for the AI pipeline. And we evaluate to what extent certain data science methods can help cultural heritage organizations achieve their goals.’
What kinds of topics do you study?
‘One of our projects is on the use of controversial terms in cultural heritage collections. Since those collection datasets may be the basis for AI systems that produce automatic descriptions or query extensions, it would be good to be able to automatically detect at a large scale which terms are controversial given the context that they are in. A second project is about evaluating the inclusivity of a recommender system for libraries, assessing for example whether or not it favors authors of a specific gender or origin.’
Could you name any striking results?
‘In a study on the online newspaper archive Delpher we were able to identify distinct search patterns depending on people’s interests.
We discovered that people looking for WWII-related subjects use very complex search behaviour, whereas people interested in their ancestry typically have short sessions with only a couple of simple queries and very few clicks or downloads. These results led to recommendations to digital archives on how to best support the different search behaviours of their users.’
What is the biggest challenge in this field?
‘Combining heterogeneous and often cross-media collections, data modelling, and semantic search. Our data consists of physical objects, short pieces of text in natural language, newspapers, audiovisual archives… And they do not always completely meet our purpose. For example, in the book recommender project, we have data about which people read which books. But we do not have any information about the authors. A lot of our work therefore involves finding smart ways of combining datasets to distill the information we need.’
Your group is part of two larger-scale research labs. What is their added value?
‘Both the Cultural AI Lab and the AI, Media and Democracy Lab bring together different scientific disciplines and societal organizations to collaborate for the longer term. That allows us to build a deep mutual understanding of each other’s language, challenges and expertise, which is essential if you want to make true impact.’
Brief biography
Laura Hollink obtained a Master of Arts in Information Science with a minor in Urban Sociology from the University of Amsterdam and a PhD in Computer Science from VU University. After holding research positions at TU Delft and VU University, she came to CWI in 2015. There she has been leading the Human-Centered Data Analytics group since the winter of 2021.
https://www.cwi.nl/news/blogs/laura-hollink-in-i-o-magazine-diving-into-cultural-data
Data change, all the time. In this project we want to explore and understand those changes. We call this activity change exploration: for a given, dynamic dataset, we want to efficiently capture and summarize changes at instance- and schema-level, enable users to effectively explore this change in an interactive and graphical fashion, and analyze patterns in the changing data. The basic unit of our data model is the change record <Time, ID, Property, Value>, or in brief <t, id, p, v>. Its semantics is: at time t, the property p of the entity identified by id was created as or changed to v. A change-cube is a set of such changes. For more details on our data model see our vision paper at VLDB 2019 (see below).
Bleifuß, Tobias, Leon Bornemann, Dmitri V. Kalashnikov, Felix Naumann, and Divesh Srivastava. DBChEx: Interactive Exploration of Data and Schema Change. In Proceedings of the Conference on Innovative Data Systems Research (CIDR), 2019.
Bleifuß, Tobias, Leon Bornemann, Theodore Johnson, Dmitri V. Kalashnikov, Felix Naumann, and Divesh Srivastava. Exploring Change - A New Dimension of Data Analytics. Proceedings of the VLDB Endowment (PVLDB), 12(2):85-98, 2018.
Data and metadata in datasets experience many different kinds of change. Values are inserted, deleted or updated; rows appear and disappear; columns are added or repurposed, etc. In such a dynamic situation, users might have many questions related to changes in the dataset, for instance: which parts of the data are trustworthy and which are not? Users will wonder: How many changes have there been in the recent minutes, days or years? What kind of changes were made at which points of time? How dirty is the data? Is data cleansing required? The fact that data changed can hint at different hidden processes or agendas: a frequently crowd-updated city name may be controversial; a person whose name has been recently changed may be the target of vandalism; and so on. We show various use cases that benefit from recognizing and exploring such change.
We envision a system and methods to interactively explore such change, addressing the variability dimension of big data challenges. To this end, we propose a model to capture change and the process of exploring dynamic data to identify salient changes. We provide exploration primitives along with motivational examples and measures for the volatility of data. We identify technical challenges that need to be addressed to make our vision a reality, and propose directions of future work for the data management community.
Bornemann, Leon, Tobias Bleifuß, Dmitri Kalashnikov, Felix Naumann, and Divesh Srivastava. Data Change Exploration using Time Series Clustering. Datenbank-Spektrum, 18(2):1-9, 2018. DOI: https://doi.org/10.1007/s13222-018-0285-x.
Analysis of static data is one of the best-studied research areas. However, data changes over time. These changes may reveal patterns or groups of similar values, properties, and entities. We study changes in large, publicly available data repositories by modelling them as time series and clustering these series by their similarity. In order to perform change exploration on real-world data, we use the publicly available revision data of Wikipedia Infoboxes and weekly snapshots of IMDB. The changes to the data are captured as events, which we call change records. In order to extract temporal behavior, we count changes in time periods and propose a general transformation framework that aggregates groups of changes to numerical time series of different resolutions. We use these time series to study different application scenarios of unsupervised clustering. Our explorative results show that changes made to collaboratively edited data sources can help find characteristic behavior, distinguish entities or properties, and provide insight into the respective domains.
Bleifuß, Tobias, Theodore Johnson, Dmitri V. Kalashnikov, Felix Naumann, Vladislav Shkapenyuk, and Divesh Srivastava. Enabling Change Exploration (Vision).
In Proceedings of the Fourth International Workshop on Exploratory Search in Databases and the Web (ExploreDB), pages 1-3, 2017. Data and metadata suffer many different kinds of change: values are inserted, deleted or updated, entities appear and disappear, properties are added or re-purposed, etc. Explicitly recognizing, exploring, and evaluating such change can alert to changes in data ingestion procedures, can help assess data quality, and can improve the general understanding of the dataset and its behavior over time. We propose a data model-independent framework to formalize such change. Our change-cube enables exploration and discovery of such changes to reveal dataset behavior over time.
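As a concrete illustration of the change-cube model described above, a cube is simply a set of <t, id, p, v> tuples, and small groupings over it already support exploratory questions such as "which entities change most?" or "how did one property evolve over time?". The helper names below are assumptions for the sketch, not part of the project's actual system:

```python
from collections import defaultdict

# A change-cube: a set of change records (t, id, p, v), read as
# "at time t the property p of entity id was created as / changed to v".

def changes_per_entity(cube):
    """Count changes per entity - a first cut at spotting volatile
    (and possibly controversial or vandalized) entities."""
    counts = defaultdict(int)
    for t, eid, p, v in cube:
        counts[eid] += 1
    return dict(counts)

def history(cube, eid, prop):
    """Value history of one property of one entity, ordered by time."""
    return sorted((t, v) for t, e, p, v in cube if e == eid and p == prop)
```

Because the model is data-model-independent, the same primitives apply whether the records come from Wikipedia infobox revisions, database snapshots, or any other versioned source.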
https://hpi.de/naumann/projects/data-profiling-and-analytics/change-exploration.html
Routine collection of quality structural data is frequently overlooked or undervalued in our industry at present, and this lack can often lead to poor interpretations, ineffective targeting, and unnecessary additional costs. There are several challenges that can prevent exploration, mining and database geologists from maximising the value of their structural data sets. These include what type of structural information should be collected, how much, and how to assess the quality of data collected from drill core. This course teaches participants how to improve their structural data collection, recognise and manage quality control issues in oriented drill core, and demonstrates how structural data can be analysed to allow a better understanding of structural controls on mineralisation. The course ensures the structural data you are collecting is useful to all stakeholders in the life-of-mine value chain.
Learning Outcomes
Upon completion of this course, you will have a solid grounding in the following:
- Quickly assess the quality of structural data collected from drill core;
- Learn methods of applying core orientation confidence;
- Identify problem data in the database for filtering during analysis and/or interpretation;
- Understand how planar and linear data are plotted on a stereographic net;
- Learn how to examine populations of planar structures;
- Recognise what can be achieved with effective analysis of structural data;
- Extract meaningful geometric outcomes from analysis of structural data;
- Learn different methods of sub-dividing data;
- Discover how to recognise important patterns within data sets;
- Learn approaches to extracting value from both individual oriented drill holes and extensive oriented drill hole datasets.
Our facilitators
Our facilitators are experienced practitioners with a robust mix of academic and practical expertise.
In Australia
JAMIE ROBINSON PhD, BSc (Hons), MAIG
Jamie is a principal consultant geologist with extensive experience in the analysis of structurally controlled mineral systems and exploration targeting. He has worked extensively in exploration focused research and development project roles, as well as precious and base metal exploration. Jamie has strong skills in data integration, mineral system analysis and 3D modelling and is a Principal Consultant in exploration with CSA Global.
ROB HOLM Ph.D. Earth Science, P.G.Dip. Engineering Geology, B.Sc. Hons. Geology, GAIG
Rob is an expert geologist with an extensive and diverse background in the minerals as well as oil and gas sectors. He specialises in structural geology, geophysical interpretation, petrology and geochemistry, coupled with strong analytical and problem-solving skills to deliver integrated geological solutions. Rob holds extensive experience in geoscience training and education, and is currently a consultant with CSA Global.
In Canada
LUKE LONGRIDGE PhD Geology, BSc (Hons) Geology, BSc Geology/Chemistry
Luke is a structural and economic geologist with technical and project management experience in both exploration and mining. He has worked on development of exploration projects, district-scale structural geology problems, discovery of new mineral deposits, integrating multivariate datasets for district-scale exploration targeting, and improving efficiencies on active mines. Luke has advanced structural geology, GIS and 3D modelling abilities and experience in a wide range of commodities.
Who should attend
This one-day course will teach participants how to improve their structural data collection and assist them to gain a working knowledge of how structural data can be analysed to allow better understanding of structural controls on mineralisation. This course is available on-site or as a customised training program.
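For readers unfamiliar with the stereographic plotting mentioned in the learning outcomes, the core arithmetic is compact: a plane is reduced to its pole (the normal to the plane), and the pole is projected onto a lower-hemisphere equal-area (Schmidt) net. A hypothetical Python sketch, with function names invented for illustration:

```python
import math

def pole_to_plane(dip_direction, dip):
    """Pole (normal) to a plane given dip direction / dip in degrees:
    the pole trends opposite the dip direction and plunges 90 - dip."""
    trend = (dip_direction + 180.0) % 360.0
    plunge = 90.0 - dip
    return trend, plunge

def equal_area_xy(trend, plunge, radius=1.0):
    """Lower-hemisphere equal-area (Schmidt net) coordinates of a line:
    radial distance r = sqrt(2) * sin((90 - plunge) / 2), plotted at
    azimuth `trend` (x positive east, y positive north)."""
    t = math.radians(trend)
    r = radius * math.sqrt(2.0) * math.sin(math.radians(90.0 - plunge) / 2.0)
    return r * math.sin(t), r * math.cos(t)
```

For example, a plane dipping 30° toward 090° has its pole at 270°/60°, which plots west of the net's centre; a vertical line plots at the centre itself.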
For more information, please contact Magda Fimmano, Marketing Communications Manager on: +61 89355 1677 or by email.
https://www.csaglobal.com/training/structural-data-collection/
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST’s ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST’s ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST’s response to less resolved data. Finally, we evaluate RST’s performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. 
Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad-scale understanding of animal movement ecology.
Citation: Torres LG, Orben RA, Tolkova I, Thompson DR (2017) Classification of Animal Movement Behavior through Residence in Space and Time. PLoS ONE 12(1): e0168513. https://doi.org/10.1371/journal.pone.0168513
Editor: Mark S. Boyce, University of Alberta, CANADA
Received: August 24, 2016; Accepted: December 1, 2016; Published: January 3, 2017
Copyright: © 2017 Torres et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data used in this study are available on Movebank (movebank.org, study name "Grey-headed albatross, New Zealand") and are published in the Movebank Data Repository with DOI 10.5441/001/1.694p666h.
Funding: Funding was provided by New Zealand's Ministry for Business, Innovation and Employment for albatross data collection (C01X0905). The National Science Foundation (NSF) REU Site program (NSF OCE-1263349) supported IT. Funding for collection of the fisher tracking data was provided by National Geographic Society Waitt Grant #W157-11; the African buffalo data collection was supported by NSF and National Institutes of Health Ecology of Infectious Disease program DEB-0090323; collection of the Galapagos tortoise track was funded by the Max Planck Institute of Ornithology, NSF (1258062), The Galapagos Conservation Trust, Swiss Friends of Galapagos, e-obs GmbH, Galapagos National Park, and The Charles Darwin Foundation; The blue whale research was conducted under U.S. National Marine Fisheries Service permit No. 369-1757 authorizing the close approach and deployment of implantable satellite tags on large whales, issued to Dr. Bruce Mate.
Support was provided by the Tagging of Pacific Pelagics (TOPP) program of the Census of Marine Life, the Office of Naval Research (Grants 9610608, 0010085 and 0310861), the National Science Foundation, the Alfred P. Sloan Foundation, the Moore Foundation, the Packard Foundation, the National Geographic Society, and private donors to the Oregon State University Endowed Marine Mammal Institute. The National Institute of Water and Atmospheric Research, Ltd. (NIWA) provided support in the form of salaries for authors LGT and DRT, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section.
Competing interests: The National Institute of Water and Atmospheric Research, Ltd. (NIWA) provided support in the form of salaries for authors LGT and DRT, and the collection of the Galapagos tortoise track was supported by e-obs GmbH, but these commercial affiliations do not alter our adherence to PLOS ONE policies on sharing data and materials.
Introduction
Time and space are fundamental to animal ecology, as these factors limit and scale behavior patterns. Animal-borne location tags are prolifically used to capture animal movement in both of these dimensions, yet behavioral analyses of these data have primarily focused on the assessment of temporal patterns across space (i.e., first passage time; residence time; time-in-grid). While informative, the omission of analogous cumulative spatial metrics limits the ability of these methods to discriminate between time intensive behaviors such as rest and area restricted search (ARS; [1, 4]), which can have variable distance values, but similar time values.
Additionally, commonly applied inputs to describe behavior states such as step-length and turning angle are often constrained to the scale of the sampling interval rather than a scale selected based on the movement or perception of the animal (behavioral change point analysis; hidden Markov models). Therefore, classification of behaviors can be enhanced by describing both spatial and temporal occupancy patterns, while also considering both the temporal and spatial scale of the analysis. To illustrate this, consider an area of constant scale (e.g., 1 x 1 km), within which animal behaviors differentiate based on the relationships between the total distance traversed and the amount of time spent in the area of constant scale (Fig 1). The axes of this schematic scale from low to high distance (x-axis) or time (y-axis) so that when an animal’s spatial and temporal occupancy patterns are related, behavioral groupings emerge. The corners of this schematic represent the polar, dichotomous behavior states of (1) transit, near the origin, where the animal incurs low time and low distance in the area, (2) time intensive behaviors such as rest, in the upper left, where the animal incurs high time in the area but covers little distance, and (3) time & distance intensive behaviors in the upper right where the animal incurs high time and distance covered within the area, representing behaviors such as ARS that are influenced by any combination of reduced speed, increased turning, and increased time spent in the area. Given the inability to move large distances in short time periods (teleportation), it is impossible to fall within the ‘black hole’ of our schematic in the bottom right. Within the boundaries of these three dichotomous behavior groupings, multiple other behavior states can be identified and grouped also based on the comparative amount of time and space within the area such as graze, feast, and quick search.
Three polar behavior states across this continuum are represented in the corners: Transit (low time, low distance in an area), time intensive behaviors such as rest (high time, low distance), and time and distance intensive behaviors (high time, large distance) such as area restricted search (ARS). Three other possible behavior states are denoted within the continuum of this schematic. When applying RST, the origin will be double the sampling interval (y-axis) and double the R applied (x-axis), which are the minimal scales at which behaviors can be described. Examination of behavioral subsets of movement data allows focused and comparative studies. Thus, behavior classification is often an early and critical component to movement data analysis that guides further analysis pathways. While behavioral interpretation is often intuitive upon visual assessment of each track, the classification of behavior states can be difficult to automate and objectively quantify. Many quantitative methods to classify behavior states are in use but, in addition to being biased toward temporal metrics, these are often statistically complex (e.g., Bayesian state-space models; biased random bridges; tortuosity entropy) or require advanced programming skills and ample time to run the models, especially for first-time users (e.g., hidden Markov models; wavelet analysis). Therefore, there is a need for a simple and quick method to explore, segment, and behaviorally annotate movement data with limited supervision. Additionally, these methods may lack transferability between taxa or studies, or be difficult to successfully apply to large and varied datasets with high individual variability. These challenges are becoming increasingly salient with the increasing number and size of animal movement datasets due to miniaturization, and increased resolution, memory capacity, and battery life.
Over 3,500 animal movement studies containing over 260 million locations have been contributed to movebank.org, seabirdtracking.org, and OBIS-SEAMAP (tabulated on 31 March 2016). The growth of biotelemetry offers immense opportunities for discovery, yet ‘methodological ambiguity’ for data exploration leads to confusion and inconsistency, and movement ecologists may struggle to balance the analytical demands of Big Data with the individuality of each track. In this study, we offer an efficient, objective and broadly applicable method to explore and identify behavior patterns at multiple scales in movement data. Building off the concept of residence time, we first develop a metric of residence distance. These two metrics quantify cumulative area occupancy in time and distance respectively, and when related to each other, behavioral groups can be discerned (Fig 1). The method identifies three fundamental movement states: transit, time intensive movement, and time & distance intensive movement. These states are identified on a continuous scale that can be applied in further post hoc analyses. Initially, we develop and test our Residence in Space and Time (RST) method using a highly resolved grey-headed albatross (Thalassarche chrysostoma) GPS track. We discuss the impact of scale on RST behavior classifications and present methods to evaluate scale choice. Next, we demonstrate the ability of RST to discriminate between three discrete behavior states of an albatross (rest, ARS and transit) relative to other classical movement metrics. The RST method is then applied to 24 albatross tracks to assess the method’s ability to describe population-level behavior grouping while assessing individual variation. Next, we explore RST’s ability to accurately describe behavior states in movement data from less temporally resolved and temporally intermittent datasets (mimicking Argos/PTT tracks).
Finally, we apply the RST method to animal movement datasets from diverse taxa and ecosystems to evaluate performance and versatility. This exploration demonstrates that RST is flexible and robust for application to multiple taxa and movement data types, enabling an efficient initial data exploration method to inform subsequent hypothesis testing, data partitioning, and appropriate analyses.

Materials and Methods

Ethics statement

All handling of albatrosses was conducted under permit issued by the New Zealand Department of Conservation and was approved by the NIWA animal ethics committee. All effort was made to minimize handling time and any suffering to animals.

RST development and dataset

During October and November 2013, grey-headed albatrosses breeding at Campbell Island in the New Zealand sub-Antarctic were tagged with igotU GPS archival tags (GT-600; http://www.i-gotu.com/), set to record a position and time every five minutes. We recorded incubation foraging trips of adult albatrosses (n = 24) after securing the GPS tag to back feathers using Tesa® tape. To focus on at-sea behaviors we removed all points within 5 km of the colony. We completed all analyses in R, with core routines implemented in C, adapting code from Chirico and from Kahle and Wickham. We then calculated residence distance (RD) and residence time (RT) for all points along the track. A circle of radius R is constructed around every point, and the distance traveled (RD; sum of path lengths between consecutive points within the circle) and the time spent (RT; sum of time elapsed between consecutive points within the circle) are calculated. Unlike the Residence Time method of Barraquand and Benhamou [2], our calculations of RT and RD do not include the ‘tails’, which are the path segments between the first or last point in the circle and the perimeter. With our approach, any point alone within its circle is assigned a value of zero for both RT and RD.
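To make the RD/RT construction concrete, the calculation described above can be sketched in Python (the authors' implementation is in R and C; S3 Appendix). The function name and the simple O(n²) neighbour search are illustrative only; path segments crossing the circle boundary ('tails') are ignored, and the re-entry threshold discussed below is taken as zero.

```python
import math

def residence_metrics(xs, ys, ts, radius):
    """For every focal point, build a circle of the given radius and sum
    the path lengths (RD) and elapsed times (RT) of consecutive point
    pairs that both fall inside it. A point alone in its circle gets
    RD = RT = 0, which RST later labels a transit point."""
    n = len(xs)
    rd = [0.0] * n
    rt = [0.0] * n
    for i in range(n):
        # indices of all track points inside the focal point's circle
        inside = [j for j in range(n)
                  if math.hypot(xs[j] - xs[i], ys[j] - ys[i]) <= radius]
        for j in inside:
            # count a segment only when both of its endpoints are inside
            if j + 1 in inside:
                rd[i] += math.hypot(xs[j + 1] - xs[j], ys[j + 1] - ys[j])
                rt[i] += ts[j + 1] - ts[j]
    return rd, rt
```

On a toy track of four points spaced 1 km apart except for a distant final fix, a 1.5 km radius yields zero RD and RT for the isolated point, matching the transit definition.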
If the path trajectory exits and reenters the circle after traveling no more than a threshold distance (Th) outside, the stretches of track outside the circle are also included in the RD and RT values. We include the option to set a threshold distance in the RST method for consistency with the original Residence Time method, yet within the RST method its usefulness for behavior classification is limited. Therefore, in the following examples we set Th equal to zero. To test the hypothesis that variation between RT and RD is related to movement behavior, we calculated the residuals (difference in value) between these metrics for each point. First, RD and RT values were normalized by dividing by the maximum respective value within each track, so that distance and time values were unit-less and therefore comparable, and so all values consistently ranged between 0 and 1. Then the residual for each location was calculated by subtracting normalized RT from normalized RD:

residual_i = RD_i / max(RD) − RT_i / max(RT)    (1)

We used the difference between RD and RT to describe behavior patterns, rather than a proportion, sum, or other more complex comparison, because this approach (a) results in a consistent range of residuals between -1 and 1 that is comparable between individuals and datasets, and (b) limits the chance that the same value will result from different combinations of RD and RT (S1 Appendix). Speed also describes the relationship between distance and time, but is not directly suitable for behavior classification because speeds at large and small scales can be equivalent and therefore difficult to relate to behavior states. The scale-dependence of RST relates to R, and both RD and RT assign zero to locations that are more than R away from all other points. The appropriate R value depends on the temporal sampling interval and the animal behavior patterns captured by the data. We offer two approaches to the selection of R based on animal transit speed.
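A minimal sketch of the residual calculation in Formula 1, assuming RD and RT have already been computed per point (Python used for illustration; the function name is hypothetical):

```python
def rst_residuals(rd, rt):
    """Normalize RD and RT by their track maxima and subtract:
    residual_i = RD_i/max(RD) - RT_i/max(RT), giving values in [-1, 1].
    Positive -> time & distance intensive; negative -> time intensive;
    zero -> transit (a point alone in its circle has RD = RT = 0)."""
    max_rd = max(rd) or 1.0  # guard against an all-zero track
    max_rt = max(rt) or 1.0
    return [d / max_rd - t / max_rt for d, t in zip(rd, rt)]
```

A point with relatively high RT but low RD (a rest-like point) comes out negative, while an isolated point keeps a residual of exactly zero.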
Transiting is a fundamental behavior shared between animals, constrained by physiology, morphology and environment. Within RST, transit points separate positive (time & distance intensive) residuals from negative (time intensive) residuals, so the classification of transit points influences the behavior types described. One approach to R selection is derived from the following formula:

R = (mean transit speed × mean sampling interval) / 2    (2)

which assumes that the average distance between transit points is approximately equal to the average transit speed multiplied by the sampling interval, divided by two to uncouple two consecutive points. This approach assumes a priori knowledge of transit speed. Alternatively, we apply a diagnostic tool that calculates the percent of points with positive, negative and zero residual values at multiple (user-defined) scales to assess the impact of R selection. We then apply Formula 2 again, taking as the numerator the scale at which the number of transit points approaches zero (i.e., where all points have at least one other point inside their circle). Extremely fast movements or large data gaps prevent this value from actually reaching zero, so we use <5% transit points as the cutoff. A benefit of this approach is automated dynamic scaling for each track.

Example application of RST to one albatross track

Grey-headed albatrosses have three dominant and discrete behavior states at sea: transit, ARS foraging, and rest, which are linked to strong diurnal patterns of limited activity during darkness. We illustrate the behavioral classification capability of RST using one albatross GPS track (Bird 23059) by assessing the relationship between RD and RT, and the variation in residual values relative to day and night. A static R of 1.935 km was applied based on Formula 2, using a mean transit speed of 45 km/hr and a mean time interval between locations of 5.16 ± 1.0 min. The dynamic scaling approach was also applied to this albatross track for comparison of radii values.
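Both routes to R can be sketched as follows. `radius_from_transit_speed` is Formula 2 directly; `dynamic_radius` is a simplified stand-in for the authors' diagnostic (it just picks the smallest candidate radius whose transit fraction falls below the 5% cutoff), not their exact procedure:

```python
def radius_from_transit_speed(speed_km_hr, interval_min):
    """Formula 2: R = (mean transit speed * mean sampling interval) / 2,
    so two consecutive transit points fall just outside each other's
    circles. Units: speed in km/h, interval in minutes, R in km."""
    return speed_km_hr * (interval_min / 60.0) / 2.0

def dynamic_radius(candidate_radii, transit_fractions, cutoff=0.05):
    """Simplified dynamic-scaling diagnostic: given the fraction of
    zero-residual (transit) points observed at each candidate radius,
    return the smallest radius whose transit fraction drops below the
    cutoff; fall back to the largest candidate otherwise."""
    for r, frac in sorted(zip(candidate_radii, transit_fractions)):
        if frac < cutoff:
            return r
    return max(candidate_radii)
```

With the albatross values from the text (45 km/hr, 5.16 min), Formula 2 reproduces the static R of 1.935 km.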
Comparison of metrics in three behavior states

RST's ability to discriminate between three discrete behavior states along this grey-headed albatross track (Bird 23059) was directly compared to the classical movement metrics of speed, path straightness (straight-line distance between points / cumulative path length between points), and residence time and residence distance calculated using the Barraquand and Benhamou approach that includes the ‘tails’. Three experienced seabird ecologists, all very familiar with albatross movement data (L.G.T., R.A.O. and D.R.T.), manually and independently classified each GPS location into rest, transit or ARS behavior states. Without direct observation it is nearly impossible to know the true behavior state of a tracked animal. Therefore, we assumed the points with matching behavior state assignments between the three classifiers to be ‘true’, and compared frequency histograms of speed, path straightness, residence time, residence distance, and RST in the three behavior states rest, transit, and ARS. All metrics were calculated using R = 1.935 km.

From individual to population

To evaluate RST's ability to classify behavior states within movement data from a sampled population, we applied the method to all albatross incubation trips (n = 24). We analyzed the albatross tracks with Th = 0 and (1) a constant R based on a transit speed of 45 km/hr and a mean GPS fix interval of 5.63 ± 0.59 min, and (2) the dynamic scaling method for each track. As before, we assessed behavior classification based on residual variation relative to daylight. We also timed this analysis to demonstrate the method's speed.

Impact of temporal resolution on RST

To evaluate RST's ability to classify behaviors using less temporally resolved data, we completed two subsampling exercises.
First, we subsampled all albatross tracks across a range of increasing temporal intervals (10, 20, 30, 60, 120, 180 min) and applied the dynamic scaling method to choose an appropriate R for each sampling interval and individual combination (S2 Appendix). Secondly, we stochastically subsampled the 60-min subsample of a single albatross track (Bird 23059) 100 times, randomly selecting 1/3 of the locations. These subsampled tracks mimic the erratic sampling of commonly used satellite telemetry. For each subsampled track, we calculated the percent of locations matching the residual state (positive, negative, or zero) of the original 5-min sampling interval track to assess the variance of behavior classification relative to the temporal resolution of the tracking data.

Application of RST to diverse taxa

To evaluate and expand the application of the RST method, we used movement datasets from four taxa with diverse life-history patterns (predator, prey, grazer, migrator), with variable home range scales, from terrestrial and marine ecosystems, and of different data types. Three datasets were freely downloaded from the Movebank Data Repository (https://www.movebank.org/), which proved to be a powerful resource for our exercise: (1) a 2-month GPS track of a medium-sized carnivore, the fisher (Martes pennanti), tagged in New York, USA, in March 2011, with dynamic sampling informed by tri-axial accelerometer data (2-min sampling when moving; 1-hr sampling when resting; tag M4 [22, 23]); (2) a 2-month GPS track of an African buffalo (Syncerus caffer) collected in Kruger National Park, South Africa, from 10 October to 7 December 2005, with a 1-hr sampling interval (tag 1764827); (3) a 5-year GPS track of a Galapagos tortoise (Chelonoidis vandenburghi), tracked on Isabela Island, Galapagos, beginning in October 2010, with 1-hr sampling intervals and a duty-cycle shutdown period from 0100 to 1100 GMT when the animal is generally stationary (tag 1388).
Additionally, we analyzed a satellite telemetry track of a blue whale (Balaenoptera musculus) tagged off Southern California, USA, with movement data from September 2007 to February 2008 (tag 23043 [26, 27]). We analyzed these four datasets using the RST method and a dynamic scaling approach (Th = 0), as we assumed no a priori knowledge of animal transit speed.

Results

Application of RST to one albatross track

The dynamic scaling approach selected a very similar R value for the albatross track (R = 1.9 km) to the static value calculated through Formula 2 (R = 1.935 km). The resulting scale plot (Fig 2) illustrates that as R increases the number of transit points decreases while positive and negative residuals increase. Fig 2: dark gray bar = fixed radius (R = 1.935 km); light gray bar = dynamically scaled radius (R = 1.9 km); dashed line indicates 5% transit points. Overall, the responses of RD and RT to albatross track geometry agree during daylight (Fig 3a). However, during nighttime, RT values are elevated while RD values remain closer to their daytime average. The inflation of RT illustrates the behavioral bias of a time metric toward resting behavior, in which albatrosses are generally engaged at night; RD is insensitive to this bias. Behavioral separation of the movement data is evident when RD and RT are compared using the RST method (Fig 3b). Time intensive behaviors, representing rest periods in this case, are evident at night with RT > RD, yielding negative residuals. Positive or zero-value residuals generally occur during daylight, when albatrosses are travelling or engaged in ARS. Correspondence between behavior and residual groups is visually evident (Fig 3c and 3d), with transit between foraging areas (black), clustered ARS (blue), and interspersed rest segments (red).
Fig 3: day and night (shaded) periods compared to (a) normalized residence distance (black) relative to normalized residence time (blue), and (b) residuals of normalized residence distance minus normalized residence time (positive = blue, negative = red, zero = black). (c) GPS track color-coded by residuals (black = transit, red = rest, blue = area restricted search). The three movement states identified by RST are illustrated, and (d) enlarges a region of the track to demonstrate the classification of three locations into these movement states within the applied radius. Grey arrows indicate direction of travel. Green star is the colony location at Campbell Island, New Zealand.

Comparison of metrics in three behavior states

Behavior states matched between the three expert classification efforts for 66% of locations (2336 of 3548 points; n = 708 transit; n = 1080 rest; n = 548 ARS), which were considered the ‘true’ behavior states. The variability in behavior state classification of the remaining 1212 ‘ambiguous’ points is likely due to (1) differences in the inferred scale of assessment by each classifier, (2) the presence of points recorded during transitions between states, and (3) the inherent ambiguity of assigning to one discrete behavior group points that simultaneously represent multiple behavior states (e.g., slightly sinuous travel, which can be interpreted as either transit or ARS). RST residuals aligned with our manual classification effort for 90% of the locations (2112 of 2336 points; Fig 4a). The majority of the discrepancy occurred due to RST's tendency to identify points as time & distance intensive movement (n = 143) where the classifiers labeled them transit. Similarly, RST classified the majority of ambiguous points as time & distance intensive (black bars in Fig 4b). Fig 4: (a) depicts only the ‘true’ behavior states of rest (red), transit (black), and area restricted search (blue) as agreed on by the expert classifiers.
Bars are colored based on RST classification with transparency so that overlap between distributions is visible. (b) Describes the distribution of all points along the track (white) and the ambiguous points where the classifiers did not agree on behavior state assignment (black). When compared to the other time-series metrics, RST residuals discriminated between the three ‘true’ behavior states with little overlap. Residence time as calculated by Barraquand and Benhamou also shows little overlap between ‘true’ behavioral states (Fig 4a). However, determining breakpoints between behavior states from the continuous range of residence time values is difficult (white bars, Fig 4b). Furthermore, a high residence time does not equate to a distinct behavioral state (it could reflect either rest or ARS). Speed is almost discrete between the three ‘true’ behavior states as color-coded by RST classification but, like residence time, is unable to independently group behavior states or classify the ambiguous points (Fig 4b). Path straightness and residence distance were both unable to distinguish between transit and time intensive behaviors, because points in both states have relatively straight paths and low accumulated distance. Behavior classification based on RST benefits from its integration of multiple movement measurements into one combined metric. Because its metrics are calculated within an area, RST's classification of each point depends on its neighboring points, which results in more stable behavior states than point-based approaches [11, 28] that produce more erratic behavior state switching between points.

Population-level performance of RST

To evaluate the population-level performance of RST, all incubation albatross tracks (n = 93,481 locations) were analyzed. Using a fixed R = 2.11 km, behavioral classification of locations resulted in 28.0% transit (residual = 0), 48.8% ARS (residual > 0), and 23.2% rest (residual < 0).
Using the dynamic scaling approach to determine R for each track (mean R = 2.55 ± 0.41 km), behavioral classification of locations resulted in 22.9% transit, 50.9% ARS, and 26.2% rest. Using the fixed radius and dynamic scaling, respectively, 74.4% and 76.5% of the negative residuals (rest) occurred at night, while 82.6% and 82.6% of positive residuals (ARS) occurred during the day. Similar R values, proportions of behavioral classifications, and diurnal behavioral assignments were obtained by both methods of R selection, indicating that dynamic scaling can perform well when animal speed is unknown. Running the RST code to identify the dynamically scaled radii for each of the 24 tracks using 44 radii options took 52 s (CPU time = 9 s; processor = 2.66 GHz Intel Core 2 Duo), and once the preferred radius for each track was identified, analyzing all 24 tracks took only 22 s (CPU time = 1.8 s).

RST's response to less temporally resolved data

The RST behavior class (ARS: residuals > 0; rest: residuals < 0; transit: residuals = 0) agreement test between each location in the original 5-min interval track and the temporally subsampled tracks demonstrates the impact of behavior bout length on behavior class detection (Fig 5a). At longer time intervals, time intensive behaviors (rest) remain relatively well classified, but behaviors with shorter bout lengths (ARS and short transits) are increasingly misclassified as the sampling interval grows longer than the bout length (S2 Appendix). In this example, albatross ARS bouts appear to occur at temporal scales < 30 min, while transit periods longer than 60 min, likely representing persistent travel to and from the colony, are consistently identified (Fig 5a). The satellite telemetry simulation of stochastically sampled data reiterates this pattern: negative values (rest) remain well classified, while positive (ARS) and zero (transit) residuals are misclassified more than half the time (Fig 5b; S2 Appendix).
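The two subsampling schemes used in these exercises (regular thinning to a coarser interval, and stochastic thinning mimicking intermittent Argos/PTT fixes) can be sketched as follows; the function names are illustrative, not from the authors' code:

```python
import random

def subsample_interval(ts, interval):
    """Keep the first location, then greedily keep each subsequent
    location only once at least `interval` time units have elapsed
    since the last kept fix, mimicking a coarser programmed sampling
    rate. Returns the indices of retained locations."""
    kept = [0]
    for i in range(1, len(ts)):
        if ts[i] - ts[kept[-1]] >= interval:
            kept.append(i)
    return kept

def subsample_random(indices, fraction, seed=0):
    """Stochastic subsample: randomly retain the given fraction of
    locations (without replacement), returned in track order."""
    rng = random.Random(seed)
    k = max(1, round(len(indices) * fraction))
    return sorted(rng.sample(list(indices), k))
```

Thinning a 5-min track to a 10-min interval, for example, retains every second fix; the stochastic variant can be re-run with different seeds to replicate the 100-iteration exercise.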
This exercise demonstrates that behavioral analysis of satellite telemetry data may indicate where animals spend greater time, but not necessarily where they conduct ARS. Speed-filtered satellite telemetry data may reduce spatial error and provide more accurate behavior classification. Additionally, track interpolation would decrease the sampling interval, reducing R (Formula 2) and increasing the percent of transit points (Fig 2). Fig 5: blue = area restricted search (positive residuals); red = rest (negative residuals); black = transit (zero residuals).

RST analysis of diverse datasets

Analysis of the high-resolution fisher track (R = 40 m) through an urban habitat reflects discrete and clustered locations of periodic short-term resting places, with more dispersed searching/foraging locations interspersed with relatively linear transit segments (Fig 6a). RST classification of resting/stationary behavior states in this fisher track was not influenced by the less frequent GPS sampling caused by the accelerometer-informed data logger, because RT is a cumulative measure of time spent within a circle of radius R and a resting fisher therefore accumulates the same RT value regardless of GPS sampling frequency. Fig 6: (a) 2-month GPS fisher track in an urban area of New York, USA, and residuals (tag M4 [22, 23]). (b) 2-month GPS African buffalo track and residuals, split at 11 Nov 2005 to demonstrate the behavior and distribution change with the onset of the wet season (tag 1764827). (c) Residuals from the 5-year GPS Galapagos tortoise track, and spatial representation of the track segment from 1 Aug 2011 to 30 Mar 2012; inset map shows fine-scale movements in the southeastern area (tag 1388). (d) 5-month satellite telemetry blue whale track starting off southern California and ending near the Costa Rica Dome, and residuals (tag 23043 [26, 27]). Maps produced using R code by Kahle and Wickham.
RST analysis of the African buffalo track (R = 375 m) effectively identifies transit locations between areas of increased RT or RD. Additionally, the RST analysis highlights a behavior shift around November 11 with the onset of the wet season (rains began in early Nov 2005) to predominantly time & distance intensive behaviors (positive residuals; blue locations) and altered distribution patterns as the animal moves away from river beds and spends more time in the plains (Fig 6b), matching the species' known ecology. Evaluation of the long-term tortoise track (R = 25 m) revealed oscillation of residual values and intensities relative to the animal's location in NW and SE seasonal areas, indicating different movement strategies between habitats (Fig 6c). During the migration cycle depicted (Aug 2011–Mar 2012), transit points are identified between the two areas, and fine-scale assessment of the SE area reveals discrete areas of time intensive and time & distance intensive behaviors. RST analysis of the lower-resolution blue whale track (R = 35 km) identifies alternating time intensive and time & distance intensive behaviors while foraging off Southern California and central Baja California, interspersed with transit periods (Fig 6d). The animal switches to mainly time & distance intensive behaviors off central Mexico, and then to transit behavior during migration toward the Costa Rica Dome, where time intensive behavior is exhibited. At this scale of analysis, the shifts between time intensive and time & distance intensive behaviors may represent two different scales of area restricted searching by this whale. Considering the results of our satellite telemetry simulation, behaviors with bout lengths shorter than the temporal sampling interval may be misclassified, yet the results coincide with known blue whale ecology in this region. Overall, the application of the RST method to these various movement datasets illustrates its flexibility and explanatory power.
For each taxon, RST describes alternating behavior states that correspond to its known ecology, and comparatively reveals the fisher's striking preference for distance intensive movement patterns (Fig 7). Fig 7: the comparison illustrates how R influences the proportion of positive (blue), negative (red) and zero (black) residuals. Dashed line indicates 5% transit points. Light gray line indicates the dynamically scaled R for each track: fisher (R = 40 m), African buffalo (R = 375 m), Galapagos tortoise (R = 25 m), blue whale (R = 35 km).

Discussion

Given the large and increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Indeed, movement ecology needs unifying paradigms to bring diverse studies together and foster a mature scientific discipline. The RST method offers a fast approach to the analysis of movement data that requires little computational power and time investment, while also allowing individualization by track through the dynamic scaling approach. Therefore, we advocate RST as an effective and efficient method for initial exploration of movement data to inform hypothesis testing, data partitioning, and the choice of modeling or statistical framework for subsequent analyses. Such close and detailed exploratory analysis of behavior state and scale before fitting complex movement models is critical because movements are often hierarchical and cyclical. Furthermore, RST appears to be robust across taxa, ecosystems, and movement data types, and generates a consistent range of residual values that are comparable between datasets, making it an appropriate method for meta-analyses of movement data. RST is based on our conceptual schematic illustrating how the comparison of animal movement patterns through space and time can discriminate between the behavior states resolved in the data (Fig 1).
RST is a composite of other movement analysis metrics (RT, RD, speed, and path straightness) that integrates these descriptions of movement through both space and time to distinguish between multiple behavior states. RST allows behavior classification to move beyond the dichotomy of ‘travel’ and ‘resident’ (e.g., ), and is a one-step method of behavior classification, unlike many other methods that first require metric calculation and then the application of a subsequent time-series or clustering algorithm to define breakpoints (e.g., [2, 5, 11, 28]). Our novel method is intuitive and simple to implement, offering a flexible framework to quickly and objectively characterize behavior states, point by point, in diverse movement data types. The premise of all movement analyses is that animals change movement patterns relative to different behavior states; ultimately, however, it is the scale of analysis that determines the movement patterns described, and therefore the behaviors characterized. RST allows various scales (R) to be examined simultaneously, and we offer two approaches to help the researcher discern an appropriate scale. The first approach assumes a priori knowledge of the animal's mean transit speed and applies a constant scale across a single-taxon dataset. The dynamic scaling approach offers two benefits: (1) it allows for scale-dependent comparison of behavior states, similar to Postlethwaite et al., but with objective discrimination between behaviors, and (2) it adjusts R for each track, enabling flexible application of scale that accounts for inherent individual movement patterns, such as speed and tag variability. Dynamic scaling prioritizes the classification of transit points at all scales analyzed, and therefore performs best on tracks containing some transit behavior.
Nonetheless, one scale is unlikely to be appropriate for long-duration tracks with high sampling resolution, because various behavior patterns are layered in the data at multiple scales and transit speeds vary across life-history stages. In such cases, tracks may be split by phase (e.g., migration, breeding, season) prior to final RST analysis, or multiple R values can be applied to resolve behaviors at different scales. This is exemplified by our choice to limit RST analysis of albatross tracks to movement behavior at sea. Had we included incubation periods (high RT, low RD), the RST values of at-sea resting behavior would have been biased towards positive values, especially as resting at sea is not stationary. Ultimately, partitioning of tracks and scale choice are case-dependent and should be based on the study questions, taxa, and environment. However, the primary determinant of the minimum scale is data resolution: only behaviors that occur at spatial and temporal scales larger than the sampling interval and spatial resolution of the movement data are recorded, and hence described. This effect is emphasized by our subsampling analysis. With less resolved data, behaviors with long bout lengths remain well described, but short-term behaviors, such as ARS, are not consistently captured. Researchers often make logistical trade-offs in tag deployments between cost, battery power, tracking duration, recapture probability, and data resolution. Yet sampling interval should not be sacrificed lightly, given its implications for the ability to record shorter-term behaviors. For instance, if fine-scale management schemes are to be derived from movement data, deployment durations may need to be sacrificed in favor of a higher sampling resolution. RST's value can be broadly extended toward habitat and distribution studies to better connect movement patterns with resource selection.
To understand the behavioral mechanisms of animal space use, species distribution models and resource selection functions should be calibrated using behaviorally partitioned movement data. Such partitioning allows ecological questions to be addressed, such as elucidating the environmental covariates of resting and foraging areas, and how animals use wind, currents and topography during transit. RST can efficiently contribute to these efforts, allowing researchers to dedicate more time to ecological models and interpretation. Although RST describes three discrete behavior groups (time & distance intensive positive points, time intensive negative points, and transit points where residuals equal zero), the residual values are continuous between -1 and 1, which offers more descriptive capacity for functional response curves derived in modeling studies. Furthermore, the confidence of RST's behavior state assignment for each point can be described by examining the mean and sd of residuals across variable R, enabling the identification of locations with simultaneously mixed behavior states (e.g., transit and ARS) or locations in transition between behavior states. As expected with hierarchical analyses, RST behavior groupings, as described by residuals, change with scale (Fig 7), and quantifying the confidence of each point assignment as described here will help movement ecologists move away from the identification of dichotomous behavior states and toward a continuum approach to behavior description (e.g., ). Additionally, the normalized and continuous range of RST residuals allows further examination based on range, clusters, percentage and intensity to compare patterns across individuals, populations, seasons, habitats, life-history groups and movement association with anthropogenic entities (e.g., fishing vessels, trash dumps, urban areas).
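The scale-sensitivity check suggested here, examining the mean and sd of residuals across several values of R, might be sketched as follows (an illustrative sketch, not the authors' code; rows are radii, columns are track locations):

```python
import statistics

def classification_confidence(residuals_by_radius):
    """Given residual values for each track location computed at several
    radii (one row per radius), summarize the per-location mean and
    standard deviation. A small sd indicates a stable behavior
    assignment across scales; a large sd flags mixed or transitional
    behavior states."""
    per_point = list(zip(*residuals_by_radius))  # transpose to per-location
    means = [statistics.fmean(v) for v in per_point]
    sds = [statistics.stdev(v) for v in per_point]
    return means, sds
```

A location whose residual stays at -0.5 across all radii has zero sd (confident rest-like assignment), whereas one drifting from 0.2 to 0.6 shows an appreciable sd.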
Unlike most other behavioral classification methods, RST's functionality is based on the classification of transit points (residuals = 0) as determined by the choice of R. These transit points then partition time & distance intensive positive residuals from time intensive negative residuals. While these positive and negative residuals identify groups of behaviorally similar points within a track, it is up to the user to interpret the meaning of the time & distance intensive and time intensive classifications based on scale and ecological knowledge of the study species. For example, time intensive points indicate where the animal spent more time over less distance within the analysis circle relative to other areas where the distance traveled was larger; these negative residuals are interpreted as rest locations in our fine-scale albatross track example, but more likely represent areas of concentrated feeding behavior in the larger-scale blue whale track. Locations with positive residuals along both the albatross and blue whale tracks indicate where distance traveled was relatively large at the scale of analysis and therefore describe more intensive searching behaviors, but at two different scales. Additionally, given the great diversity in how animal movement patterns relate to behavior states, such as the unusual resting behavior of frigate birds (Fregata minor) while in flight, the RST user must interpret the meaning of residuals based on the scale of analysis and the study animal's ecology. As a new method, we promote the cross-assessment of RST relative to other behavioral analyses of movement data, as such efforts frequently reveal the strengths and weaknesses of different approaches [14, 35]. To focus analyses and limit time investment, it is important to understand the nuances of both the behavior of the tracked animal and the dataset to be analyzed before implementing hypothesis testing and computationally intensive analysis.
It is here that the RST method can provide insight into the individuality of each track. Furthermore, we encourage other researchers to implement RST on movement data across taxa, scales and ecosystems to examine method performance and to conduct meta-analyses. With diverse datasets, if a desired scale of analysis is undefined, application of the track-specific dynamic scaling approach will allow description of scale consistency across the movement datasets and identification of outliers that require data exploration and possible correction. Once reliable RST behavior classifications are derived for each track, comparisons are feasible because of the normalized values of RD, RT and residuals. Additionally, complementary biologging data, such as immersion, accelerometer, and time-depth recorder data, can be used to further describe taxon-specific behaviors and movements related to the residual results (e.g., ) or be incorporated into the RST method. For example, RST could be extended from 2D to 3D by converting from a circle-based to a sphere-based analysis, complementary to spherical first passage time.

RST recommendations

The RST code is freely available (S3 Appendix) and we recommend the following initial settings:
- Implement the dynamic scaling approach with a range of R based on prior knowledge of animal movement patterns and the scale of sampling (how far is the animal likely to move between locations?).
- Visually inspect the classification of the tracks.
- Assess the consistency of the choice of R across individual tracks.
- Investigate tracks with outlier values for R.
- Interpret states.

Despite these recommendations, no one setting fits all data, but RST analysis of movement data is fast, allowing users the freedom to iterate analyses to test and refine parameters; this flexibility lets the user home in on the behavioral profile of interest and the appropriate spatio-temporal scales, thus focusing subsequent analyses.
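A self-contained toy sketch of this recommended workflow, assuming Th = 0 and using a hypothetical `classify_track` helper that scans candidate radii and classifies points at the first radius where the transit fraction falls below a cutoff (the authors' actual implementation is their R/C code in S3 Appendix):

```python
import math

def classify_track(xs, ys, ts, radii, cutoff=0.05):
    """Compute RD and RT at each candidate radius, keep the smallest
    radius at which fewer than `cutoff` of points are transit
    (RD = RT = 0; largest radius as fallback), then label each point
    'transit', 'time-intensive' (rest-like) or
    'time&distance-intensive' (ARS-like) from the residual sign."""
    def metrics(radius):
        n = len(xs)
        rd, rt = [0.0] * n, [0.0] * n
        for i in range(n):
            inside = [j for j in range(n)
                      if math.hypot(xs[j] - xs[i], ys[j] - ys[i]) <= radius]
            for j in inside:
                if j + 1 in inside:  # both endpoints of segment inside
                    rd[i] += math.hypot(xs[j + 1] - xs[j], ys[j + 1] - ys[j])
                    rt[i] += ts[j + 1] - ts[j]
        return rd, rt

    for radius in sorted(radii):
        rd, rt = metrics(radius)
        transit = sum(1 for d, t in zip(rd, rt) if d == 0 and t == 0)
        if transit / len(xs) < cutoff:
            break
    res = [d / (max(rd) or 1) - t / (max(rt) or 1) for d, t in zip(rd, rt)]
    states = ['time&distance-intensive' if r > 0 else
              'time-intensive' if r < 0 else 'transit' for r in res]
    return radius, states
```

On a toy track with a transit leg, a tight dwell cluster and a sinuous search segment, the selected radius separates the three states as expected; as recommended above, the output should still be visually inspected before interpretation.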
Conclusions Animal tracking is revolutionizing our understanding of animal ecology in a myriad of ways including behavior, social systems, habitat use, and population connectivity. Yet, choosing and applying the appropriate analytical method can be challenging and cumbersome, making the simplest approach often the most desirable [11, 14, 37]. The RST method offers an intuitive, rapid, iterative and flexible approach to explore movement data, with limited a priori assumptions (except the assumption that the sampling interval of the data is short enough to capture meaningful movement behaviors), that can assist more sophisticated explanatory and predictive analyses. As a stand-alone method, RST analysis provides the ability to standardize movement data exploration across taxa, ecosystems, and data-types, offering immense opportunities for meta-analyses and initial steps toward answering pressing ecological questions regarding animal movement drivers, response and scale.
Supporting Information S1 Appendix. Probability of equal residual value resulting from different combinations of Residence Distance (RD) and Residence Time (RT). https://doi.org/10.1371/journal.pone.0168513.s001 (DOCX) S2 Appendix. Temporal sub-sampling of gray-headed albatross GPS tracks using Residence in Space and Time (RST) method. https://doi.org/10.1371/journal.pone.0168513.s002 (DOCX) S3 Appendix. Zip file containing R code, documentation and example dataset for running Residence in Space and Time (RST) method. https://doi.org/10.1371/journal.pone.0168513.s003 (ZIP)
Acknowledgments We thank the following animal movement data contributors: S. LaPoint (fisher), P. Cross (African buffalo), S. Blake (Galapagos tortoise), and B. Mate (blue whale). We are grateful to D. Palacios and R. Phillips for insightful comments on earlier drafts of this manuscript, and the RV Tiama, C. Kroeger, L. Sztukowski, R. Buchheit, A. Larned, and the New Zealand Department of Conservation for field support.
Author Contributions - Conceptualization: LGT RAO IT. - Data curation: IT RAO. - Formal analysis: LGT RAO IT. - Funding acquisition: LGT DRT. - Investigation: LGT RAO DRT. - Methodology: LGT RAO IT. - Project administration: LGT DRT. - Resources: LGT DRT. - Software: RAO IT. - Supervision: LGT RAO DRT. - Validation: LGT RAO IT. - Visualization: LGT RAO. - Writing – original draft: LGT RAO. - Writing – review & editing: LGT RAO DRT. References - 1. Fauchald P, Tveraa T. Using first-passage time in the analysis of area-restricted search and habitat selection. Ecology. 2003; 84: 282–8. - 2. Barraquand F, Benhamou S. Animal movements in heterogeneous landscapes: Identifying profitable places and homogeneous movement bouts. Ecology. 2008; 89: 3336–48. pmid:19137941 - 3. Pedersen MW, Patterson TA, Thygesen UH, Madsen H. Estimating animal behavior and residency from movement data. Oikos. 2011; 120: 1281–90. - 4. Kareiva P, Odell G. Swarms of Predators Exhibit "Preytaxis" if Individual Predators Use Area-Restricted Search. Am Nat. 1987; 130: 233–70. - 5. Gurarie E, Andrews RD, Laidre KL. A novel method for identifying behavioural changes in animal movement data. Ecol Lett. 2009; 12: 395–408. pmid:19379134 - 6. Dean B, Freeman R, Kirk H, Leonard K, Phillips RA, Perrins CM, et al. Behavioural mapping of a pelagic seabird: combining multiple sensors and a hidden Markov model reveals the distribution of at-sea behaviour. J R Soc Interface. 2012. - 7. Jonsen ID, Flemming JM, Myers RA. Robust state-space modeling of animal movement data. Ecology. 2005; 86: 2874–80. - 8. Benhamou S. Dynamic approach to space and habitat use based on biased random bridges. PloS one. 2011; 6: 1–8. - 9. Liu X, Xu N, Jiang A. Tortuosity entropy: A measure of spatial complexity of behavioral changes in animal movement. J Theor Biol. 2015; 364: 197–205. pmid:25261731 - 10. Sur M, Skidmore AK, Exo K-M, Wang T, Ens BJ, Toxopeus A. Change detection in animal movement using discrete wavelet analysis. 
Ecol Inform. 2014; 20: 47–57. - 11. Garriga J, Palmer JRB, Oltra A, Bartumeus F. Expectation-Maximization Binary Clustering for Behavioural Annotation. PLoS ONE. 2016; 11: e0151984. pmid:27002631 - 12. Jonsen I. Joint estimation over multiple individuals improves behavioural state inference from animal movement data. Sci Rep. 2016; 6: 20625. pmid:26853261 - 13. Nathan R, Getz WM, Revilla E, Holyoak M, Kadmon R, Saltz D, et al. A movement ecology paradigm for unifying organismal movement research. Proc Natl Acad Sci USA. 2008; 105: 19052–9. pmid:19060196 - 14. Gurarie E, Bracis C, Delgado M, Meckley TD, Kojola I, Wagner CM. What is the animal doing? Tools for exploring behavioural structure in animal movements. J Anim Ecol. 2016; 85: 69–84. pmid:25907267 - 15. Hampton SE, Strasser CA, Tewksbury JJ, Gram WK, Budden AE, Batcheller AL, et al. Big data and the future of ecology. Front Ecol Environ. 2013; 11: 156–62. - 16. Torres LG, Thompson DR, Bearhop S, Votier SC, Taylor GA, Sagar PM, et al. White-capped albatrosses alter fine-scale foraging behavior patterns when associated with fishing vessels. Mar Ecol Prog Ser. 2011; 428: 289–301. - 17. R Development Core Team. R: A language and environment for statistical computing. 2015; R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org - 18. Chirico M. Sourceforge.net. 2004; URL http://souptonuts.sourceforge.net/code/sunrise.c.html - 19. Kahle D, Wickham H. ggmap: Spatial Visualization with ggplot2. The R Journal. 2013; 5: 144–61. - 20. Phalan B, Phillips RA, Silk JRD, Afanasyev V, Fukuda A, Fox J, et al. Foraging behaviour of four albatross species by night and day. Mar Ecol Prog Ser. 2007; 340: 271–86. - 21. Wakefield ED, Phillips RA, Matthiopoulos J, Fukuda A, Higuchi H, Marshall GJ, et al. Wind field and sex constrain the flight speeds of central-place foraging albatrosses. Ecol Monogr. 2009; 79: 663–79. - 22. LaPoint S, Gallery P, Wikelski M, Kays R. 
Animal behavior, cost-based corridor models, and real corridors. Landsc Ecol. 2013; 28: 1615–30. - 23. LaPoint S, Gallery P, Wikelski M, Kays R. Data from: Animal behavior, cost-based corridor models, and real corridors. Movebank Data Repository. 2013; - 24. Cross P, Bowers J, Hay C, Wolhuter J, Buss P, Hofmeyr M, et al. Data from: Nonparameteric kernel methods for constructing home ranges and utilization distributions. Movebank Data Repository. 2016; - 25. Blake S, Yackulic CB, Cabrera F, Tapia W, Gibbs JP, Kümmeth F, et al. Vegetation dynamics drive segregation by body size in Galapagos tortoises migrating across altitudinal gradients. J Anim Ecol. 2012; 82: 310–21. pmid:23171344 - 26. Bailey H, Mate BR, Palacios DM, Irvine L, Bograd SJ, Costa DP. Behavioural estimation of blue whale movements in the Northeast Pacific from state-space model analysis of satellite tracks. Endang Species Res. 2009; 10: 1–14. - 27. Irvine LM, Mate BR, Winsor MH, Palacios DM, Bograd SJ, Costa DP, et al. Spatial and temporal occurrence of blue whales off the U.S. West Coast, with implications for management. PLoS One. 2014; 9: e102959. pmid:25054829 - 28. Berman GJ, Choi DM, Bialek W, Shaevitz JW. Mapping the stereotyped behaviour of freely moving fruit flies. J R Soc Interface. 2014; 11. - 29. Blackwell PG, Niu M, Lambert MS, LaPoint SD. Exact Bayesian inference for animal movement in continuous time. Methods Ecol Evol. 2015; 7: 184–95. - 30. Bar-David S, Bar-David I, Cross PC, Ryan SJ, Knechtel CU, Getz WM. Methods for assessing movement path recursion with application to African buffalo in South Africa. Ecology. 2009; 90: 2467–79. pmid:19769125 - 31. Levin SA. The problem of pattern and scale in ecology. Ecology. 1992; 73: 1943–67. - 32. Postlethwaite CM, Brown P, Dennis TE. A new multi-scale measure for analysing animal movement data. J Theor Biol. 2013; 317: 175–85. pmid:23079283 - 33. Wilson RR, Gilbert-Norton L, Gese EM. 
Beyond use versus availability: behaviour-explicit resource selection. Wildlife Biol. 2012; 18: 424–30. - 34. Weimerskirch H, Bishop C, Jeanniard-du-Dot T, Prudor A, Sachs G. Frigate birds track atmospheric conditions over months-long transoceanic flights. Science. 2016; 353: 74–8. pmid:27365448 - 35. Benhamou S. How to reliably estimate the tortuosity of an animal's path: straightness, sinuosity, or fractal dimension? J Theor Biol. 2004; 229: 209–20. pmid:15207476 - 36. Bailleul F, Lesage V, Hammill MO. Spherical First Passage Time: A tool to investigate area-restricted search in three-dimensional movements. Ecol Model. 2010; 221: 1665–73. - 37. Thiebault A, Tremblay Y. Splitting animal trajectories into fine-scale behaviorally consistent movement units: breaking points relate to external stimuli in a foraging seabird. Behav Ecol Sociobiol. 2013; 67: 1013–26.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0168513
There is a vast and rapidly increasing quantity of scientific, corporate, government and crowd-sourced data published on the emerging Data Web. Open Data are expected to play a catalyst role in the way structured information is exploited at large scale. This offers great potential for building innovative products and services that create new value from already collected data. It is expected to foster active citizenship (e.g., around the topics of journalism, greenhouse gas emissions, food supply-chains, smart mobility, etc.) and world-wide research according to the “fourth paradigm of science”. The most noteworthy advantage of the Data Web is that, rather than documents, facts are recorded, which become the basis for discovering new knowledge that is not contained in any individual source, and solving problems that were not originally anticipated. In particular, Open Data published according to the Linked Data Paradigm are essentially transforming the Web into a vibrant information ecosystem. Published datasets are openly available on the Web. A traditional view of digitally preserving them by “pickling them and locking them away” for future use, like groceries, would conflict with their evolution. There are a number of approaches and frameworks, such as the LOD2 stack, that manage the full life-cycle of the Data Web. More specifically, these techniques are expected to tackle major issues such as the synchronisation problem (how can we monitor changes), the curation problem (how can data imperfections be repaired), the appraisal problem (how can we assess the quality of a dataset), the citation problem (how can we cite a particular version of a linked dataset), the archiving problem (how can we retrieve the most recent or a particular version of a dataset), and the sustainability problem (how can we spread preservation, ensuring long-term access).
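The synchronisation and archiving problems above amount to tracking and replaying changesets between dataset versions. A minimal baseline sketch, treating an RDF graph as a set of triples (the `ex:`/`foaf:` names are illustrative placeholders):

```python
def diff(old, new):
    """Changeset between two graph versions: (added triples, removed triples)."""
    return new - old, old - new

def apply_delta(graph, delta):
    """Replay a changeset to reconstruct the next version from an archive."""
    added, removed = delta
    return (graph - removed) | added

# Two versions of a tiny linked dataset, each a set of (subject, predicate, object) triples.
v1 = {("ex:alice", "foaf:knows", "ex:bob")}
v2 = {("ex:alice", "foaf:knows", "ex:bob"),
      ("ex:alice", "foaf:knows", "ex:carol")}

delta = diff(v1, v2)              # an archive can store only the delta
assert apply_delta(v1, delta) == v2
```

Real archiving systems trade off full materialisation, delta chains, and annotated triples; the set-based delta here is only the simplest point in that design space.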
Preserving linked open datasets poses a number of challenges, mainly related to the nature of the LOD principles and the RDF data model. In LOD, datasets representing real-world entities are structured; thus, when managing and representing facts we need to take into consideration possible constraints that may hold. Since resources might be interlinked, effective citation measures need to be in place to enable, for example, the ranking of datasets according to their measured quality. Another challenge is to determine the consequences that changes to one LOD dataset may have for other datasets linked to it. The distributed nature of LOD datasets furthermore makes archiving a headache. == TOPICS == - Change Discovery * Change detection and computation in data and/or vocabularies * Change traceability * Change notifications (e.g., PubSubHubbub, DSNotify, SPARQL Push) * Visualisation of evolution patterns for datasets and vocabularies * Prediction of changes - Formal models and theory * Formal representation of changes and evolution * Change/dynamicity characteristics tailored to graph data * Query languages for archives * Freshness guarantees for query results * Freshness guarantees in databases - Data Archiving and preservation * Scalable versioning and archiving systems/frameworks * Query processing/engines for archives * Efficient representation of archives (compression) * Benchmarking archives and versioning strategies Ideally the proposed solutions should be applicable at web scale. == SUBMISSION GUIDELINES == Papers should be formatted according to the Springer LNCS format. For submissions that are not in the LNCS PDF format, 400 words count as one page. All papers should be submitted to https://easychair.org/conferences/?conf=mepdaw2016.
We envision four types of submissions in order to cover the entire spectrum from mature research papers to novel ideas/datasets and industry technical talks: A) Research Papers (max 15 pages), presenting novel scientific research addressing the topics of the workshop. B) Position Papers and System and Dataset descriptions (max 5 pages), encouraging papers describing significant work in progress, late-breaking results or ideas of the domain, as well as functional systems or datasets relevant to the community. C) Industry & Use Case Presentations (max 5 pages), in which industry experts can present and discuss practical solutions, use case prototypes, best practices, etc., in any stage of implementation. D) Open RDF archiving challenge (max 5 pages), intended to encourage developers, data publishers, and technology/tool creators to apply Semantic Web techniques to create, integrate, analyze or use an archive of linked open datasets. Thus, we expect submissions demonstrating one (or all) of: - useful functionality over RDF archives - a potential commercial application of RDF archives - tools to support/manage RDF archives at Web scale (*) A list of recommended datasets for the challenge is available at the workshop homepage: http://eis.iai.uni-bonn.de/Event/mepdaw2016.html#challenge All accepted papers will be published in the CEUR workshop proceedings series.
http://wikicfp.com/cfp/servlet/event.showcfp?eventid=51929&copyownerid=85446
Probability Sampling – In probability sampling, every item in the universe has the same known, non-zero chance of being selected for the research. Its counterpart is non-probability (non-random) sampling. The primary types of probability sampling are simple random sampling, stratified sampling, cluster sampling, and multistage sampling.
Simple random sampling. Simple random sampling is the purest and clearest probability sampling design: every single member of the population is chosen randomly, merely by chance, so each has an equal probability of inclusion. It is a fair method that, applied appropriately, helps reduce bias compared to other sampling methods, and it creates samples that are highly representative of the population. It also saves time and resources, since it is usually easy to pick a smaller sample from a large sampling frame. For example, samples for an experimental class and a control class can be drawn by taking sample members at random, without regard to strata in the population. The sample collected through this method is totally random in nature, although there is no guarantee that the data collected in any particular sample is reflective of the community on average. One practical way to select a simple random sample is to number each unit on the sampling frame sequentially and make the selections by generating numbers from a random number generator; the units may be selected either with or without replacement.
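The numbering-and-random-draw procedure just described can be sketched in code. This is an illustrative example (names are my own); Python's `random.sample` draws without replacement:

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw n units from a sampling frame so that every unit has an
    equal chance of selection (sampling without replacement)."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

# Number each unit on the frame sequentially, then draw the sample.
frame = [f"unit_{i}" for i in range(1, 1001)]
sample = simple_random_sample(frame, 50, seed=1)
```

The fixed `seed` makes the draw reproducible; omit it for a fresh random sample each run.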
http://www.nareshcricketevents.com/blog/253a87-random-sampling-techniques
Basic Concepts in Samples and Sampling (cont.)
- Sampling error: any error that occurs in a survey because a sample is used
- Sample frame: a master list of the entire population
- Sample frame error: the degree to which the sample frame fails to account for all of the population
Reasons for Taking a Sample
- Practical considerations such as cost and population size
- Inability of the researcher to analyze the huge amounts of data generated by a census
Two Basic Sampling Methods
- Probability samples: ones in which members of the population have a known chance (probability) of being selected into the sample
- Non-probability samples: instances in which the chances (probability) of selecting members from the population into the sample are unknown
Probability: Simple Random Sampling
- Simple random sampling: the probability of being selected into the sample is “known” and equal for all members of the population (blind draw method, random numbers method)
- Advantage: known and equal chance of selection
- Disadvantages: complete accounting of the population needed; cumbersome to provide unique designations to every population member
Probability: Systematic Sampling
- Systematic sampling: a way to select a random sample from a directory or list that is much more efficient than simple random sampling
- Skip interval: k = 20,000 / 200 = 100
- Advantages: approximately known and equal chance of selection; efficiency; less expensive
- Disadvantage: small loss in sampling precision
Probability: Cluster Sampling
- Cluster sampling: method in which the population is divided into groups, any of which can be considered a representative sample (e.g., area sampling)
- Advantage: economic efficiency
- Disadvantage: cluster specification error
Probability: Stratified Sampling
- Stratified sampling: method in which the population is separated into different strata and a sample is taken from each stratum
- Proportionate stratified sample (F = 60, M = 40); disproportionate stratified sample (F = 50, M = 50)
- Advantage: more accurate overall sample of a skewed population
- Disadvantage: more complex sampling plan requiring different sample sizes for each stratum
Non-probability Methods
- Convenience samples: samples drawn at the convenience of the interviewer. Error occurs in the form of members of the population who are infrequent or non-users of that location.
- Judgment samples: samples that require a judgment or an “educated guess” as to who should represent the population. Subjectivity enters in here, and certain members will have a smaller chance of selection than others (e.g., focus group participants).
- Referral samples (snowball samples): samples which require respondents to provide the names of additional respondents. Members of the population who are less known, disliked, or whose opinions conflict with the respondent have a low probability of being selected.
- Quota samples: samples that use a specific quota of certain types of individuals to be interviewed. Often used to ensure that convenience samples will have the desired proportion of different respondent classes.
Online Sampling Techniques
- Random online intercept sampling: relies on a random selection of Web site visitors
- Invitation online sampling: potential respondents are alerted that they may fill out a questionnaire that is hosted at a specific Web site
- Online panel sampling: consumer or other respondent panels that are set up by marketing research companies for the explicit purpose of conducting surveys with representative samples
- Other online sampling approaches: e.g., “please forward the questionnaire to a friend(s)”
Developing a Sample Plan
- Sample plan: the definite sequence of steps that the researcher goes through in order to draw and ultimately arrive at the final sample
- Step 1: Define the relevant population.
- Step 2: Obtain a listing of the population (incidence rate).
- Step 3: Design the sample (size and method).
- Step 4: Access the population.
- Step 5: Draw the sample (drop-down substitution, oversampling, resampling).
- Step 6: Assess the sample (sample validation).
- Step 7: Resample if necessary.
Case 12.3: The Hobbit’s Choice
- Please read Case 12.3 on p. 369, analyze the case, and then answer questions 1, 2, and 3.
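The skip-interval calculation shown in the slides (k = 20,000 / 200 = 100) translates directly into code; the sketch below is illustrative (names are my own):

```python
import random

def systematic_sample(frame, n, seed=None):
    """Systematic sampling: compute the skip interval k = N / n, pick a
    random start within the first interval, then take every k-th unit."""
    k = len(frame) // n                       # skip interval, e.g. 20000 // 200 = 100
    start = random.Random(seed).randrange(k)  # random start in [0, k)
    return frame[start::k][:n]

frame = list(range(20000))
sample = systematic_sample(frame, 200, seed=7)
```

The random start is what gives every unit an approximately known and equal chance of selection; omitting it (always starting at 0) would bias the sample toward the top of the list.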
https://slideplayer.com/slide/5318980/
The main types of probability sampling methods are simple random sampling, stratified sampling, cluster sampling, multistage sampling, and systematic random sampling. The key benefit of probability sampling methods is that they tend to produce samples that are representative of the population. Consider a worked example in which 100 buyers of each of several car brands are sampled; two candidate explanations for why this counts as random sampling are: (B) yes, because each buyer in the sample had an equal chance of being sampled; and (C) yes, because car buyers of every brand were equally represented in the sample. The sampling strategy that you select in your dissertation should naturally flow from your chosen research design and research methods, as well as taking into account issues of research ethics. To set the sampling strategy that you will use in your dissertation, you need to follow three steps: (a) understand the key terms and basic principles; (b) determine which sampling technique you will use to select the units that will make up your sample; and (c) consider the practicalities of choosing such a sampling strategy for your dissertation (e.g., what time you have available, what access you have, etc.).
In this problem, there was a 100 percent chance that the sample would include 100 purchasers of each brand of car, and zero percent chance that the sample would include, for example, 99 Ford buyers, 101 Honda buyers, 100 Toyota buyers, and 100 GM buyers. Only probability sampling methods permit that kind of analysis. Two of the main types of non-probability sampling methods are voluntary samples and convenience samples. Non-probability sampling methods offer two potential advantages: convenience and cost. Their main disadvantage is that they do not allow you to estimate the extent to which sample statistics are likely to differ from population parameters. Similarly, the fact that each buyer in the sample had an equal chance of being selected is characteristic of a simple random sample, but it is not sufficient. The sampling method in this problem used random sampling and gave each buyer an equal chance of being selected, but the sampling method was actually stratified random sampling.
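The car-buyer design discussed above, with 100 buyers drawn from each brand, is stratified random sampling with equal allocation per stratum. An illustrative sketch (function and data names are my own):

```python
import random

def stratified_sample(frame, stratum_of, per_stratum, seed=None):
    """Draw a simple random sample of fixed size from each stratum
    (equal allocation, as in the 100-buyers-per-brand design)."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {s: rng.sample(units, per_stratum) for s, units in strata.items()}

# A toy frame of buyers: (brand, buyer id), 500 buyers per brand.
buyers = [(brand, i) for brand in ("Ford", "Honda", "Toyota", "GM")
          for i in range(500)]
sample = stratified_sample(buyers, lambda b: b[0], 100, seed=3)
```

Within each stratum every buyer has an equal chance of selection, but across strata the design fixes the brand counts in advance, which is exactly why the overall sample is stratified rather than simple random.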
https://domug.ru/sampling-procedure-research-paper-14488.html
The sample reflects the characteristics of the population from which it is drawn. Sampling methods are broadly divided into two categories: probability and non-probability. In probability sampling, every member of the population has a known, non-zero chance of being selected into the study; probability methods include simple random sampling, systematic sampling, stratified sampling, multistage sampling, and cluster sampling. In non-probability sampling, on the other hand, sample members are selected in a non-random manner, so not every population member has a chance to participate in the study. Non-probability sampling methods include purposive, quota, convenience and snowball sampling. Figure 2 (Categorisation of sampling techniques) illustrates the specific sampling methods belonging to each category, and the accompanying table gives brief definitions, advantages and disadvantages of each technique.
Usually, the population is too large for the researcher to attempt to survey all of its members, but a small, carefully chosen sample can be used to represent the population. The following observations need to be taken into account when determining sample size: a) the magnitude of sampling error can be diminished by increasing the sample size; b) there are greater sample size requirements in survey-based studies than in experimental studies; c) a large initial sample size has to be provisioned for mailed questionnaires, because the percentage of responses can be as low as 20 to 30 per cent; and d) the most important factors in determining the sample size include subject availability and cost.
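Observation (a) above, that sampling error shrinks as the sample grows, follows from the standard error of the sample mean, σ/√n: quadrupling n halves the error. A quick illustration:

```python
import math

def standard_error(sigma, n):
    """Standard error of the sample mean, given population s.d. sigma
    and sample size n."""
    return sigma / math.sqrt(n)

# Quadrupling the sample size halves the sampling error.
assert standard_error(10, 400) == standard_error(10, 100) / 2
```

The diminishing returns are also visible here: going from n = 100 to n = 400 buys the same error reduction as going from n = 400 to n = 1,600, which is one reason cost factors weigh so heavily in sample-size decisions.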
https://srk-msk.ru/sampling-strategy-dissertation-4294.html
One of the bigger tasks the communication researcher faces is obtaining the data to answer the research question or test the hypothesis that motivated the research in the first place. In the field of communication, data comes primarily from two sources – human beings, or some media form (which, it could be argued, ultimately are produced by human beings, so perhaps these are not distinct sources). Human beings may be asked by a researcher to fill out a survey or participate in an experiment. Alternatively, a researcher might seek out political advertisements, newspaper editorials, or speeches to include in a content analysis. Regardless, practical considerations typically require the researcher to limit his or her data collection to a sample drawn from a larger population of interest. For instance, a researcher interested in examining the effectiveness of a communication campaign in reducing unsafe sexual practices among high-school students will not be able to get all high-school students to participate in the study to evaluate its effectiveness. At best, he or she will only be able to get a certain fraction of students who are enrolled in a particular set of high schools, probably in a restricted location of the country, to provide data relevant to studying the effectiveness of the program. Those who participate in the study constitute the sample (those students enrolled in the high schools where data collection occurs) drawn from the broader population of interest (all high school students). Similarly, a content analyst interested in describing themes politicians have used recently in their television campaign advertisements is probably not going to analyze every campaign advertisement produced recently (the population). Instead, he or she may restrict the content analysis to most advertisements that aired during a national network evening news broadcast between 2001 and 2007 (the sample). 
The Nature Of Sampling Researchers rarely care much about the sample per se – what particular people in the study say or do, or the content of those advertisements that the investigator happened to content analyze. Instead, researchers are usually interested in making some kind of an inference from the data obtained from the sample – a “generalization” of some sort. The inference or generalization often focuses on using information from the sample to infer characteristics of the population that the sample hopefully represents – a “population inference.” For example, the investigator might want to make an inference about the fraction of the kids in the population who would be likely to change their behavior as a result of the communication campaign, or what percentage of advertisements in the population include a fear appeal. The ability to make a population inference is going to depend in large part on how the sample was obtained, for the method chosen influences how representative the sample is of the population of interest – how similar the sample is to the population on all dimensions, characteristics, or features that are likely to influence or be related to the measurement of the variables in the study. When population inference is the goal (which it may not be; see Frick 1998; Hayes 2005; Mook 1983), the researcher is well advised to employ some kind of random sampling method. Random sampling (also called “probability” or “probabilistic” sampling) requires that the process through which members of the population end up in the sample be determined by chance. Furthermore, for each member of the population, it must be possible to derive the probability of inclusion in the sample (even if you never actually calculate that probability). Random sampling is extremely important when the goal of the research is population inference, for it is the random sampling process that will, over the long haul, produce a sample that represents the population. 
Although it is possible that, just by chance, a specific sample will be unrepresentative of the population as a whole on one or more relevant dimensions, random sampling ensures that no conscious or unconscious biases the investigator brings into the sampling will influence who ends up included in the sample. This is good, for such biases, when they exist, can limit the generalizability of any study results and limit the ability to make a sound population inference.

Simple Random Sampling

The most basic form of random sampling is “simple random sampling.” With a simple random sample, each member of the population must have an equal probability of being included in the sample. In order to conduct a simple random sample, the researcher must have some means of identifying who is in the population in order to implement a method for making sure that each member has an equal chance of being included. Thus, simple random sampling requires that the investigator have some kind of list of the population prior to sampling – the “sampling frame.” An example of simple random sampling would be sampling members of the International Communication Association (ICA) by obtaining a membership list from the headquarters of the association and then randomly selecting names from the list. If there are 3,000 members and you wanted a sample of 100 members, you might assign each member a unique number between 1 and 3,000 and then have a computer randomly select 100 numbers between 1 and 3,000 to identify who to include in the sample. Many of the statistical methods that communication students learn about in introductory statistics classes and books assume simple random sampling, although simple random sampling is rarely actually done. The problem with simple random sampling is that it requires that all members of the population can be identified and enumerated so that a simple random sampling plan can actually be implemented.
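The ICA membership example can be sketched in a few lines of Python. The membership list here is a hypothetical stand-in for a real sampling frame; only the frame size (3,000) and sample size (100) follow the text.

```python
import random

# Hypothetical sampling frame: 3,000 ICA members, each with a unique ID.
sampling_frame = [f"member_{i}" for i in range(1, 3001)]

# Draw a simple random sample of 100 members without replacement.
# random.sample gives every member of the frame an equal chance of inclusion.
random.seed(42)  # fixed seed only so the draw is reproducible
sample = random.sample(sampling_frame, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 (no member selected twice)
```

Because `random.sample` draws without replacement, no member can appear in the sample more than once, which matches the "select 100 distinct numbers" procedure described above.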
But for many populations that communication researchers would be interested in sampling from, such lists do not exist. Instead, another method of sampling would have to be used. An exception would be media content of certain newspapers, magazines, television shows, or other things that are printed, published, or broadcast regularly and frequently. For example, we know that the Los Angeles Times is printed every day. So if we wanted to analyze the content of the front page of the Los Angeles Times between 1998 and 2007, it would be possible to enumerate the days in that period (there are 3,652 of them), assign a unique number between 1 and 3,652 to each day, and then use a computer to randomly select a few hundred days (i.e., a few hundred numbers between 1 and 3,652), thereby ensuring that each day has an equal chance of being included in the sample.

Stratified Sampling

There are reasons not to use simple random sampling even when it is possible. For instance, it might be particularly important that you not leave it to chance whether the sample is representative of the population on certain variables that you know are likely to be related to what you are measuring. For example, if the goal is to estimate the average number of peer-reviewed publications of ICA members employed at universities in the US, you can be pretty sure that how many publications a person has is related to his or her academic rank. If you were to collect a simple random sample of ICA members in this population, it is possible that your sample, just by chance, may include assistant professors in greater proportion than they exist in the population, which could lead to a substantial underestimate of the quantity of interest. To minimize this likelihood, you could conduct a stratified random sample, using academic rank as the stratification variable.
When conducting a stratified random sample, the population is first split into groups (“strata”) that are homogeneous on the stratification variable. Then a simple random sample of each stratum is taken. So in this example, academic rank would be the stratification variable. The population is then divided up into strata (lecturers, assistant professors, associate professors, full professors). A simple random sample of lecturers is then taken, as is a simple random sample of assistant professors, and so forth. The final sample is then constructed by aggregating the simple random samples of each stratum into a single sample. When collecting a stratified random sample, the sample will contain as many members of the population in each stratum as you desire, with that number being a function of whether the stratified sampling is done proportionally or non-proportionally. The distinction relates to whether the researcher attempts to sample the strata in proportion to their size in the population. For example, if 75 percent of the members of the population are in stratum A and 25 percent are in stratum B, proportional stratified sampling would require the researcher to apportion the total sample in such a way that 75 percent of the total sample is taken from members of stratum A and 25 percent is taken from stratum B. By contrast, non-proportional sampling allows the investigator to concentrate the sampling in a manner disproportionate to the population distribution on the stratification variable. For example, even though 25 percent of the population may be in stratum B, the investigator might intentionally apportion half of the total sample to stratum B. This is called “oversampling,” and can increase estimation precision when oversampled strata are more variable on whatever is being measured compared to the other strata.
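A proportional stratified draw of the kind just described can be sketched as follows. The population, the rank labels, and the total sample size are invented for illustration; only the procedure (stratify, then simple-random-sample each stratum in proportion to its size) follows the text.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical population: 3,000 members, each tagged with an academic rank.
ranks = ["lecturer", "assistant", "associate", "full"]
population = [(f"member_{i}", random.choice(ranks)) for i in range(3000)]

# Step 1: split the population into strata homogeneous on the rank variable.
strata = defaultdict(list)
for member, rank in population:
    strata[rank].append(member)

# Step 2: proportional allocation - each stratum contributes to the total
# sample in proportion to its share of the population.
total_n = 300
sample = []
for rank, members in strata.items():
    n_h = round(total_n * len(members) / len(population))
    sample.extend(random.sample(members, n_h))

print(len(sample))  # close to 300 (rounding can shift it by a member or two)
```

For non-proportional sampling, `n_h` would simply be set by hand for each stratum (e.g., half the sample to one small stratum), with the oversampled cases later down-weighted as described below.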
After the sampling, cases from oversampled strata might then be mathematically underweighted so that cases in the sample from those strata do not “count” as much when deriving the study results statistically, to compensate for their increased likelihood of being included in the sample relative to members of other strata.

Cluster Sampling

Both simple and stratified random sampling require a list of the population prior to sampling. For instance, to conduct a simple random sample of ICA members, you would need to have a list of all ICA members. Stratified sampling imposes the additional requirement that each member of the population be placed into one and only one of perhaps several subpopulations defined by the stratification variable. In order for that to happen, information about each member’s value on the stratification variable must be available. For example, to stratify by academic rank, you would need to know not only who is a member of ICA, but also each member’s academic rank. This information might not be available. A related method easily confused with stratified sampling is “cluster sampling.” To conduct a cluster sample, it must be possible for members of the population to be classified into groups (“clusters”) in some fashion. However, there is no requirement, as there is when doing stratified sampling, that these groups be defined by a measured variable (such as academic rank), or even that you know before sampling which members of the population are in which group. Indeed, you do not even need to know how large the population is, so long as each and every member of the population can be said to be a member of one and only one cluster. When you cluster sample, all you need to have available is the universe of clusters. You randomly sample clusters from the universe of clusters, and for those clusters that are randomly selected, you include each and every cluster member in the sample.
For example, suppose you wanted to sample the residents of a multistory apartment building to measure their attitudes about building management. For privacy purposes, the manager of the building might be reluctant to give the names and contact information for everyone living there. But she or he might be comfortable giving you limited access to the building, allowing you to knock on doors and talk to the residents. In the absence of prior information about who lives in the building and how to get in touch with them, you could not collect a simple or stratified random sample. But knowing that the building contains 30 floors, you could treat the floors as clusters, and then randomly select perhaps five floors. Once those five floors are randomly selected (probably through a simple random sample), you then approach everyone who lives on those five floors and include them in the sample. This would produce a bona fide random sample of residents of the building, even though you did not even know the specific identities of members of this population in advance of sampling from it. A communication researcher interested in sampling the advertising content of newspapers could use cluster sampling quite easily. Perhaps the population is defined as all advertisements published in the London Times in 2007. No doubt it would be difficult to obtain a list of all members of this population (i.e., every single advertisement published during this period), making it impossible to conduct a simple random sample or a stratified random sample. But the investigator could easily conduct a random cluster sample, defining each day of the year as a cluster, randomly selecting perhaps 30 days in the calendar year, and then scanning the paper on those days, including every advertisement that appears on those days in the sample. See Lacy et al. (2001) and references cited within for advice on how to approach the sampling of media content. 
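The apartment-building example translates naturally into code. The floor count (30) and number of selected floors (5) follow the text; the resident identifiers and apartments-per-floor are hypothetical, standing in for whoever actually answers the door.

```python
import random

random.seed(7)

# The universe of clusters: 30 floors. Note we need no resident list in
# advance - only the clusters themselves must be enumerable.
n_floors = 30
clusters = list(range(1, n_floors + 1))

# Stage 1: simple random sample of 5 floors (clusters).
chosen_floors = random.sample(clusters, k=5)

def residents_on(floor):
    """Stand-in for knocking on doors: whoever turns out to live on a floor."""
    return [f"floor{floor}_apt{a}" for a in range(1, random.randint(4, 9))]

# Stage 2: include *every* resident of each selected floor in the sample.
sample = [person for floor in chosen_floors for person in residents_on(floor)]
```

The result is a bona fide random sample of the building's residents even though their identities were unknown before sampling, which is exactly the property the text highlights.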
Random Digit Dialing

The advent of the telephone and its penetration into most households, at least in industrialized countries, has made sampling of people much easier than in the past. By randomly dialing telephone numbers, it is possible to collect random samples of large populations of people who are geographically dispersed. This approach does not require an enumeration of the members of the population in advance of sampling because it relies on the assumption that most people are attached to at least one phone number. Numbers need not be dialed completely randomly, and indeed often are not. For instance, to sample a particular region of a country, one might restrict the dialing to certain area codes or phone exchanges. And because many phones are not connected to residences, it would be advantageous to do list-assisted random digit dialing by purchasing a list of random phone numbers from a company that has already culled numbers that are disconnected or assigned to businesses or fax machines, for instance. Many companies exist that are in the business of constructing specialized lists of phone numbers to sell to researchers interested in sampling populations varying in size and specificity, from entire countries to people with specific occupations or interests.

Multistage Sampling

In practice, random sampling plans are often “multistage,” mixing sampling methods of different types that are conducted at different stages during the sampling process. For example, a researcher who wanted to collect data by doing face-to-face interviews of a random sample of urban city dwellers of an entire country would find it very difficult to collect a simple random or stratified sample of that population. Even if it were possible to enumerate the population, it might be cost-prohibitive to travel to the residences of, say, 1,000 different people dispersed across an entire country.
But it might be possible to divide the population up into clusters such as cities, then randomly select cities, perhaps stratified by number of residents (small, medium, and large, defined in some justifiable manner). Once a small number of cities is randomly selected from the population of cities, the researcher could get a residential phone directory for those cities and conduct a simple random sample of pages of each city’s directory. With pages randomly selected, the researcher could then randomly sample phone numbers from those pages that were randomly selected. To reduce the likelihood of excluding people from the sample who are not listed in the phone directory, the researcher might permute the last two digits of the phone numbers randomly selected from each page. Once this is done, calls are made to these numbers to set up a face-to-face interview of whoever answers the phone. As a result of this process, the researcher would not have to travel the entire country, thereby substantially reducing data-collection costs. And yet this approach would produce a bona fide random sample of the entire population of interest (or at least be very close to one).

Caveats Of Random Sampling

It is important to acknowledge that even if the selection of members of the population for inclusion in a sample is governed by a random process, nonrandom processes can adulterate random samples, and this can disrupt the ability to make an accurate population inference. For instance, an investigator might select a sample of people randomly from a population of interest, but certain people who are approached for inclusion in the study are likely to choose not to participate. The process that drives that choice may not be a random one. An example would be people who have a relatively low level of education being more likely to refuse to participate in the study. This is called “nonresponse bias,” and it is very difficult to avoid entirely.
In this case, the sample of people who ultimately provide data to the investigator will over-represent those in the population who are more educated, and that might be very important if the measurement of the variables of interest to the researcher is likely to vary systematically as a function of level of education of the participants. So random sampling does not guarantee a representative sample, although compared to nonrandom sampling methods, it is superior when representativeness is the goal. When thinking about population inference, it is important to keep in mind whether the population is dynamic or static, for this determines how time-invariant the inference is. A static population is one that does not change over time, whereas a dynamic population changes over time. For example, the political advertisements broadcast on the major television networks prior to the 2004 presidential election represent a static population in both size and any study-relevant feature one can conceive. The number of these advertisements does not change, and the characteristics of those advertisements that a researcher might be interested in measuring are fixed. But the population of adult citizens of a specific nation is dynamic in size and features. The number of adult citizens of a nation fluctuates daily, as people die and adolescents “come of age” and become adults. As the members of a population change, so too will aggregates of the features of this population, some perhaps more than others. It is unlikely that there will be dramatic shifts over time in the distribution of men versus women, for example, in a large dynamic population of people. However, the attitudes of the members of this population may shift considerably with time, some attitudes perhaps more than others. Inferences about a dynamic population must necessarily be conditioned at least to some extent on the time the data was collected.
Any pollster would know, for instance, that one cannot infer much about a politician’s current approval from data collected more than a few weeks prior, as national and world events, even a single event, can move such judgments quickly. Other features of a population, even if dynamic, change much more slowly. It would be fairly safe to infer that if sample data suggests that the population distributes itself equally among a few political parties, for example, that distribution is not likely to shift much in a matter of weeks, months, or perhaps even a year or so. So inferences from “old” data about some features of a population are more reasonably made, even if the population is dynamic on that feature. The point is that random sampling from a dynamic population affords an inference only about the characteristics of that population at the time of data collection. The inference may be ephemeral, depending on what the researcher is measuring and attempting to make an inference about. For a good overview of the details of the methods of random sampling described here, see Stuart (1984). References: - Frick, R. W. (1998). Interpreting statistical testing: Process and propensity, not population and random sampling. Behavior Research Methods, Instruments, and Computers, 30, 527–535. - Hayes, A. F. (2005). Statistical methods for communication science. Mahwah, NJ: Lawrence Erlbaum. - Lacy, S., Riffe, D., Stoddard, S., Martin, H., & Chang, K.-K. (2001). Sample size for newspaper content analysis multi-year studies. Journalism and Mass Communication Quarterly, 78, 836–845. - Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38, 379–387. - Stuart, A. (1984). The ideas of sampling. New York: Macmillan.
https://communication.iresearchnet.com/research-methods/random-sampling/
Stratified random sampling is a type of random sampling or probability sampling. Random sampling ensures that every member of the population has an equal and independent chance of selection. In a population with low heterogeneity, simple random sampling is very effective. If the population contains elements that do not share common characteristics, the researcher should use another sampling method. A very effective alternative is stratified sampling, which helps reduce and manage the heterogeneity of the population. In stratified sampling the strata are mutually exclusive and collectively exhaustive. Because the elements within a stratum share a common basis, the heterogeneous elements of the population are easier to manage. The shared attributes may include income, education, age, gender, race, social status, or common ailments.

Procedure for stratified random sampling

In stratified sampling the researcher divides the population into clearly identifiable strata based on some common characteristics. The process of classifying the elements into separate strata is called stratification. Within each stratum the researcher then selects the sample randomly.

Benefits of stratified random sampling

- It assures that all groups in the population have a chance of representation in the sample. This makes the sample highly generalizable and representative of the real characteristics of the population.
- Since the researcher classifies the population into strata, it is easier to make comparisons between different groups of the population. It also makes it easier to estimate the population characteristics.
- The variability of the estimates is easier to reduce in this way.

Drawbacks of stratified sampling

- Stratified random sampling requires accurate information about the population characteristics in order to classify it into strata.
- Sometimes it is expensive to conduct stratified random sampling, especially when the population is large and widespread.

Example

An example of stratified sampling would be a study of the prevalence of hypertension in a specific population of elderly people. To study this we may need a sample that includes both genders and participants from different socioeconomic backgrounds and occupations. The variation in this population is hard to control with simple random sampling but much easier to manage with stratified random sampling.
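The hypertension example above can be sketched as a stratified draw. The cohort, the two stratification variables (gender and income band), and the per-stratum sample size are all invented for illustration; the point is the stratification step itself.

```python
import random

random.seed(3)

# Hypothetical elderly cohort: each person carries the two characteristics
# we want to stratify on.
people = [
    {"id": i,
     "gender": random.choice(["female", "male"]),
     "income": random.choice(["low", "middle", "high"])}
    for i in range(1000)
]

# Stratification: classify every person into exactly one stratum, defined
# by the combination of both characteristics.
strata = {}
for p in people:
    strata.setdefault((p["gender"], p["income"]), []).append(p)

# The strata are mutually exclusive and collectively exhaustive by
# construction: every person lands in exactly one group.
assert sum(len(group) for group in strata.values()) == len(people)

# Draw 20 participants at random from every stratum, so each combination
# of gender and income band is represented in the final sample.
sample = [p for group in strata.values() for p in random.sample(group, 20)]
```

A simple random sample of the same cohort could, by chance, leave one of these groups badly under-represented; the per-stratum draw rules that out by design.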
https://researcharticles.com/index.php/stratified-random-sampling/
A population is the total number of elements in a group, while a sample is a portion of the population. Sample statistics – quantities such as the sample mean that describe sample data – generalize the information about the population parameter. We draw samples from a particular population mainly for two reasons. To begin with, when a population is large, it is expensive to study each member of the population. Besides, studying each member of a large population is time-consuming. There are two broad types of sampling methods: probability sampling and non-probability sampling. In probability sampling, every member of the population has a known, nonzero chance of being selected. Probability sampling techniques include simple random sampling, stratified random sampling, cluster sampling, and systematic sampling. These techniques will be discussed further in the next reading objective. In non-probability sampling, samples are selected on the basis of judgment or the convenience of accessing data. As such, non-probability sampling largely depends on a researcher’s sample selection skills. There are two types of non-probability sampling methods: convenience sampling and judgmental sampling. All else equal, probability sampling provides a more accurate and reliable representation of the population than non-probability sampling.

Question

A junior analyst wishes to study the spending patterns of employed investment professionals. To ease his data collection process, he selects only investment professionals in his firm. Which of the following is the most likely sampling method the analyst used?

- Cluster sampling.
- Convenience sampling.
- Judgmental sampling.

Solution

The correct answer is B. In convenience sampling, a population element is selected based on how easily a researcher can access it. Clearly, the analyst used convenience sampling since he only focused on the investment professionals in his firm.
Obviously, his choice was guided by the ease with which he could access the investment professionals in his firm. A is incorrect. Cluster sampling is a type of probability sampling, which cannot fit in this context. C is incorrect. Judgmental sampling involves handpicking elements from a sample based on a researcher’s knowledge and expertise.
https://analystprep.com/cfa-level-1-exam/quantitative-methods/probability-and-non-probability-samples/
Sample Selection

Voluntary sampling is a kind of non-probability sampling. The best sampling is probability sampling, because it increases the chance of obtaining samples that are representative of the population. Systematic sampling is simpler and cheaper than many other forms of random sampling, while at the same time providing a more effective sample than a non-probability sample; it is equivalent to random sampling provided that no particular order exists in the list. Two-stage sampling is a strong sample design for systems that are hierarchical in nature. Accidental sampling (sometimes called grab, convenience, or opportunity sampling) is a form of non-probability sampling in which the sample is drawn from that part of the population that is close to hand. Adaptive sampling is perhaps best described by way of an example. Choice-based sampling is one of the stratified sampling strategies. Non-probability sampling also plays a major part in exploratory research.

There are many kinds of sample selection. Ideally you would study the whole population, but you select a sample for practical reasons. Non-probability samples are limited with respect to generalization. A sample can be defined as a more concrete part of a population, or populations, that you choose to represent it. A voluntary sample consists of individuals who self-select into the survey. When determining outputs for many segments, it may be necessary to assemble a sufficiently large sample to permit in-depth analysis.

You have to choose your sample size, and that size will depend in part on what you want to do with your results. To use a sample-size table, you need to determine the size of your sampling frame, and the largest number in your sampling frame needs to be covered by the table. Without objectives, a survey is unlikely to generate usable results; you also need to decide on the best approach to distributing the survey and how to find the right people to answer it. If the wrong questions are posed to the wrong people, the results will not be useful when applied to the whole population.

Nonrandom selection can also creep in at the frame and response stages. A telephone book or a voter registration list will omit residents whose names are not listed, so such sources are not adequate for giving every member of a community an equal chance of selection. In survey sampling, some of the individuals identified as members of the sample may be unwilling to participate, may not have the time to participate (an opportunity cost), or may never be reached by the survey administrators.

Sampling is nevertheless efficient: it is not necessary to look at every message posted in a day to identify the topics being discussed, nor to read every tweet to determine the sentiment on each of those topics. In a well-drawn sample, the variance between individual results within the sample is a good indicator of the variance in the general population, which makes it relatively easy to estimate the accuracy of results.

Stratified sampling offers many potential advantages: within each stratum, every individual is given an equal chance of being selected. Sampling can also be useful when subjects with certain characteristics are not easy to locate. The subjective nature of picking sampling locations, however, can easily introduce bias into the results and preclude being able to assess sampling errors.
https://statisticshelponline.xyz/sample-selection/
Chaplain vs. Pastor: What’s the Difference?

April 7, 2021 | Category: Spiritual Care

More than half of adults in the U.S. consider religion to be an important aspect of their lives. A Pew Research Center study reveals that 47% of adults deem religion to be ‘very important’ and 23% consider it to be ‘somewhat important.’ Individuals with religious affiliations often belong to communities that share the same beliefs and world views. They can look to religious leaders — such as chaplains and pastors — for spiritual guidance, moral support, and advice about life decisions. Chaplains and pastors minister to individuals regularly by leading religious services or offering spiritual guidance. The core responsibilities of the two roles are similar. Each occupation also necessitates an educational background in spiritual studies, such as a master’s in divinity or spiritual care. However, when considering the career path of chaplain vs. pastor, there are some differences.

Defining the Roles of Chaplains and Pastors

Chaplains and pastors play a significant role in the lives of diverse groups of people. They are both theologically educated and certified ministers. However, their job descriptions vary in a few ways. An easy way to remember the difference is that while all chaplains are pastors, not all pastors are chaplains.

What Does a Chaplain Do?

A chaplain is a certified clergy member who provides spiritual care for individuals in a non-religious organization, rather than a church congregation. Chaplains can work in government roles and serve members of the military in different locations. They can serve patients in healthcare or hospice facilities. Working in police departments, fire departments, and prisons is also common for chaplains. Since chaplains are ordained ministers, they can officiate ceremonies such as weddings and funerals. They can lead baptism services and provide final rites for patients who are passing away.
Chaplains can also take on the role of a spiritual leader for individuals who do not belong to a specific religious community. Rather than preaching messages directed toward one religious group, chaplains lead non-denominational religious services that can benefit individuals from a variety of religious or spiritual backgrounds. Chaplains who hold positions at different institutions can also minister to staff members. For example, chaplains at hospitals can provide spiritual care to nurses, doctors, and administrators, as well as to patients and their families.

What Does a Pastor Do?

The main difference between a chaplain and a pastor is that they serve people in different locations. A pastor is an ordained clergy member who works in one religious organization, such as a church or parish. Pastors serve their congregation consistently by planning and overseeing weekly church services. They typically lead worship services and preach sermons. Providing spiritual guidance for specific communities of believers, according to the beliefs of a certain denomination, is the most important duty of pastors. Delegating responsibilities to staff members — to ensure the church can effectively function — is another essential aspect of a pastor’s job. Often, pastors hire worship leaders, youth pastors, administrators, and community outreach leaders to perform various tasks within a church. Volunteers can also take on certain roles during weekly services. Sometimes, pastors can also serve in a chaplain-like role, ministering to individuals at a local hospital, prison, or military base. However, rather than being a permanent board-certified chaplain for an organization, pastors usually volunteer a certain amount of their time each week or month. A pastor can have a different title in different religious settings. For example, many Protestant Christians refer to their religious leaders as pastors, while Catholics refer to theirs as priests.
Believers in non-Christian faiths or other religions also have different names for their spiritual leaders.

How to Become a Chaplain or Pastor

Even though spiritual leaders work in different types of organizations and have different responsibilities, the processes to become a chaplain or a pastor are relatively similar.

Education and Experience

Individuals who are looking to become chaplains or pastors should begin by earning a bachelor’s degree in theological studies, pastoral studies, biblical studies, or religious studies from an accredited institution. Most organizations and churches expect chaplains and pastors to also have an advanced degree in the field. Pastors often attend seminary and earn a Master of Divinity. Chaplains may also earn a Master of Divinity or a Master of Science in Spiritual Care to prepare them for roles outside traditional religious settings. After gaining a strong educational foundation, prospective chaplains and pastors gain experience in the field through internships, residencies, or working under the supervision of a lead chaplain or pastor. Some seminaries offer this hands-on training as part of their master’s degree programs.

Ordination

To find employment in religious or non-religious organizations, pastors and chaplains should seek ordination. They typically go through an application process in which they provide essential documents and sit for interviews. Pastors can become ordained through organizations such as the National Association of Christian Ministers. Specific churches may have different procedures and additional requirements. Chaplains can earn certification through accredited organizations such as the Board of Chaplaincy Certification Incorporated (BCCI) or the National Association of Catholic Chaplains (NACC).

Salary and Job Outlook

The salaries for full-time ministry positions can vary, based on the organization, job location, and position in the organization. However, the U.S.
Bureau of Labor Statistics (BLS) notes the annual median salary for clergy members as $50,400. While the lowest 10% earn an average salary of $26,810, the highest 10% can earn an average of $86,970. The job outlook for clergy members is projected to grow by 6% between 2019 and 2029, which is slightly faster than the average rate for all careers. Provide Spiritual Guidance as a Chaplain Working in a spiritual leadership role is rewarding, and can be more encompassing in non-religious settings such as hospitals or hospice facilities. The Master of Science in Spiritual Care degree at AdventHealth University Online is focused on spiritual care in healthcare settings. Its core curriculum includes courses such as Christian Ethics in Healthcare; Spirituality, Health, and Wholeness; Grief and Loss; and Living from a Pastoral Theology. Electives offer specialized areas of study such as Spiritual Care in Pediatrics, Spiritual Care and Mental Illness, and Spiritual Care in Crisis and Trauma. Explore the university’s spiritual care program today, and learn how you can have a successful and meaningful career as a chaplain. Recommended Readings AdventHealth University’s Master of Science in Spiritual Care Gains Important Accreditation Affirmation Sources:
https://online.ahu.edu/blog/chaplain-vs-pastor/
What are Baptists? There is no one distinctive Baptist belief! Although probably most people think of believers' baptism as the primary Baptist distinctive, Baptists are not the only Christians to practise believers' baptism. Nor are they the only Christians to believe in congregational church government, the priesthood of all believers, or the separation of church and state. It is the combination of these various beliefs which makes Baptists distinctive. Baptist distinctives may be likened to a set of genes which, because of their particular arrangement, produce a family likeness wherever they are found. The Lordship of Christ 'Jesus is Lord' is our distinctive confession of faith. As individuals and as churches, Baptists seek to make Jesus Lord of every aspect of their lives. The authority of the Bible Baptists believe that the Bible is the Word of God and that the Holy Spirit through the scriptures shows us God's way for living. As radical believers, Baptists seek to root their lives in the revelation of God's truth. Believers' baptism On the basis of the New Testament, Baptists claim that baptism is for believers only. Baptism is only for those who are able to declare 'Jesus is Lord'. As a symbol of Jesus' claim on their lives, Baptists practise baptism by 'immersion', in which candidates symbolise their desire to 'die to self' and to live for Christ. A believers' church Baptists understand the church as a community of believers gathered by the Holy Spirit in the name of Jesus Christ for worship, witness and service. Central to Baptist worship is prayer and praise, listening to God's word in preaching and gathering around the Lord's Table. The priesthood of all believers In the Baptist model of a believers' church every member has a role to play, whether in teaching, faith-sharing, evangelism, social action, pastoring, guiding, serving, prophetic insight, praying, healing, administration or hospitality. 
The church meeting In a Baptist church, one expression of the 'priesthood of all believers' is the church meeting. This is the occasion when members come together prayerfully to discern God's will for their life together. In Baptist churches the final authority rests not with the ministers or deacons but with the members gathered together in church meetings. It is the church meeting which, for instance, appoints leaders, agrees financial policy and determines mission strategy. Church meetings tend to take place mid-week, normally on a monthly or bi-monthly basis. Associating together Baptist churches have always come together in regional, national and international 'associations' for support and fellowship. On the basis of the New Testament, Baptists believe that churches should not live in isolation from one another but rather be 'interdependent', both as Baptists and as part of the Church Universal. The missionary task Compelled by the Holy Spirit, Baptists seek to be missionary communities. Baptists believe that each Christian has a duty to share their faith with others. William Carey was a Baptist who is known as the father of the modern missionary movement. Along with this emphasis on evangelism, however, Baptists recognise that mission includes social action and involves promoting justice, social welfare, healing, education and peace in the world. Religious freedom Religious freedom for all has always been a keystone of the Baptist way. Within Baptist churches, tolerance for differences of outlook and diversity of practice is encouraged.
https://www.beestonbaptists.org.uk/about/baptists/
Dane Co. churches must maintain capacity of 50 people or face up to $1K fine Religious entities holding mass gatherings in Dane County must limit attendance to 25 percent of their capacity or 50 people inside their building at a time, whichever is less, or face a penalty of up to $1,000 plus court costs. City of Madison Assistant City Attorney Marci Paulsen insisted to NBC15 News that religious entities are being treated the same as other essential businesses, per Public Health Madison & Dane County's stay-at-home order issued two weeks ago. According to the "Forward Dane" order, all places that hold a mass gathering, including places of worship, concerts, movie theaters, conventions and other venues, are limited to 25 percent of capacity up to 50 people, whichever is less. A "mass gathering" is defined as a “planned event with a large number of individuals in attendance, such as a concert, festival, meetings, training, conference, religious service, or sporting event," according to the "Forward Dane" order. The public health department says it is asking for voluntary compliance, and since issuing the order says it has not issued any citations to any organizations or businesses. "We hope that everyone continues to voluntarily comply with the Order and take an active role in helping with the suppression of this virus," Assistant City Attorney Paulsen said in an email to NBC15 News. But the Rev. Greg Ihm of the Madison Diocese took to social media earlier Friday, alleging that religious entities are not being treated the same as businesses by the stay-at-home order. The Reverend argued that churches, which were previously designated essential organizations before the "Forward Dane" order, are now ordered to maintain the same capacity as previously non-essential businesses like fitness centers and movie theaters. 
Ihm seems to suggest in his social media post that because churches were deemed essential in the eyes of the stay-at-home laws, they should have greater capacity, like grocery stores and pharmacies. Ihm continues that this order is unfair because, for example, a 1,000-person capacity church can now only allow 50 people indoors, which is effectively 5 percent of its capacity, according to Ihm. "As faithful we have a right to the Sacraments and an obligation to the health and safety of the faithful," Ihm posted to social media. "The Church is taking every precaution for the safety and well fair of the faithful that has been placed on other essential businesses but are not receiving equal treatment." Ihm went on to allege that the public health department would send out "government watchers" to parishes to keep tabs on compliance. However, Assistant City Attorney Paulsen as well as Public Health officials tell NBC15 News that enforcement will remain voluntary. "There are no 'government watchers' who will be policing any business or religious entity. In the shared spirit of keeping our friends, neighbors, and loved ones well, we ask everyone to identify ways to comply with these orders to reduce the risk of transmission of COVID-19," according to Public Health in an email.
https://www.nbc15.com/content/news/Dane-Co-churches-must-maintain-capacity-of-50-people-or-face-up-to-1K-fine-570884901.html
Places of worship in the Amsterdam Area

Though the Netherlands isn't a particularly religious country, there are plenty of different places of worship all across the Amsterdam Area. Due to its thriving international community, several of these places also conduct religious services in English. Below you can find a list of churches, synagogues and other religious centres where you can find a service to meet your beliefs in the city.

Catholic churches
Onze Lieve Vrouwekerk, Keizersgracht 220, 1016 DZ Amsterdam
Parish of the Blessed Trinity, Zaaiersweg 180, 1097 ST Amsterdam. Telephone: 020-465 2711. Saturday Mass in English
De Krijtberg, Singel 446, 1017 AV Amsterdam. Telephone: 020-623 1923
International Family Mass in English in the Obrechtkerk, Jacob Obrechtstraat 28, 1071 KM Amsterdam

Non-denominational churches
C3 Church Amsterdam, Paasheuvelweg 24. Telephone: 020-331 2366
Crossroads International Church, Sportlaan 27, 1185 TB Amstelveen. Telephone: 020-545 1444
Hillsong Church, Theater Amsterdam, Danzigerkade 5, 1013 AP Amsterdam
Liberty Church, meets at the Vondelkerk, Vondelstraat 120, 1054 GS Amsterdam
Netherlands Unitarian Universalist Fellowship, Keizersgrachtkerk, Keizersgracht 566, 1017 EM Amsterdam
River Amsterdam, Joop Geesinkweg 313, 1114 AB Amsterdam. Telephone: 020-334 2010. Online church services: River Amsterdam on YouTube
Vineyard Amsterdam, Zuiderkerk, Zuiderkerkhof 72, 1011 WB Amsterdam

Synagogues
Beth Yeshua Messianic Synagogue, Veluwelaan 20, 1079 RA Amsterdam. Telephone: 020-890 6950
Synagogue Kehilas Ja’akow, Gerrit van der Veenstraat 26, 1077 ED Amsterdam. Telephone: 020-676 3602
Portuguese Synagogue, Mr. Visserplein 3, 1011 RD Amsterdam. Telephone: 020-624 5351

Presbyterian churches
English Reformed Church, Begijnhof 48, 1012 WV Amsterdam. Telephone: 020-624 9665

Church of England
Three locations:
https://www.iamsterdam.com/en/living/feel-at-home-in-amsterdam/community/english-church-services
Sarajevo’s churches, mosques and synagogues — often standing just meters apart — are quiet. Worshippers of all faiths have been instructed by their respective religious leaders to shelter, and pray, in their homes and to forgo the traditional communal celebrations of their religious holidays. However, rather than locking the doors of the houses of prayer in Sarajevo, Christian, Jewish and Muslim religious leaders are allowing small groups of worshippers, hand-picked from among healthy community members with lower risk levels, to pray inside. They are livestreaming the weekly prayers and sermons to the homes of their followers from the spacious vastness of their places of worship, to convey a sense of community while following the dictates of social distancing. There will be no gathering of Christians in churches and communal egg-cracking contests this Easter. Muslims will not meet in mosques to usher in Ramadan, nor will they share the fast-breaking meals during the holy Islamic month. Last week, on the first night of Passover, for the first time since 1950, the estimated 500 members of Sarajevo’s small Jewish community did not gather to share the ritual Seder feast.
https://armindurgut.com/portfolio/faith-covid-and-religion/
I believe that the local churches and others all over Canada are doing a good job at helping and integrating people, especially those that were part of the Syrian crisis; however, as you stated in your post, a lot of these people were greeted with hate. Although I read from the article you mentioned that the refugees are not blaming all Canadians, because this was an isolated incident, I still believe that there might be a way to change this perception and that people can become more accepting of the refugees. Fear was most likely what drove the people who committed these shameful attacks, so it is imperative to understand the causes of this fear. Some Canadians probably fear that these refugees might hold terrorists within their ranks; however, these people are simply victims of an atrocious war. By helping Canadians, and people in other countries that accept Syrian refugees, to understand and become aware of their situation, things might turn for the better. I believe that the churches are doing a very good job at helping refugees, but if they held public demonstrations showing how ordinary these people are, the situation might reverse and those who are unwelcoming toward refugees might change their opinion of these victims. Love Thy Neighbour: How the church is getting involved by julie.brown on May 8, 2016 - 9:16pm Over the past few weeks, I’ve had the opportunity to become acquainted with a couple of churches around the South Shore and in Montreal, getting a better sense of what each is doing in order to get involved with their community and with global issues. Growing up in a Christian community, I’d sometimes felt that the church was separate from the outside world. It seemed as though there was an invisible rift between the people I would see during a Sunday service and those that would sit next to me in class; sermons and worship seemingly never intersected with real-world issues. 
However, nothing could be further from the truth. Refugees in the Year of Mercy “The Pope proclaimed this year the year of Mercy,” said Matheus Schultz, a college student and friend that I interviewed who attends Saint-Marc’s Parish in Candiac. “So every church is doing everything they can to demonstrate mercy towards people, towards those who need it.” On Sunday, April 3rd, I helped provide musical entertainment for a large benefit lunch in a Candiac community building hosted by Schultz’s parish. Complete with songs, raffles and a delicious plate of spaghetti, they sold over 300 tickets and made over $4,700. All the funds collected will be used to sponsor a family of Syrian refugees that are moving to Canada in the coming months. The benefit luncheon was the idea of Matheus Schultz and his twin sister, Thabata Schultz, whose family has been attending Saint-Marc’s Parish for three years. Schultz had been volunteering with the Red Cross, welcoming refugees that were entering the country, when he found out that the parish was sponsoring a family as well. “I really wanted to help,” he adds, mentioning the important role he held in the decision-making committee. A large part of Saint Marc’s Parish’s youth group, La Relève, was also involved, most helping in the kitchen or serving tables. “When someone from La Relève does something, everyone wants to go and do it together.” The issue of countries being unwelcoming to those fleeing difficult situations is one the Pope is repeatedly cited as holding dear. Since 2012, a civil war in Syria has pushed over 11 million people out of their homes, and relocated many of them out of their country (Carter). Hearing the call, and noticing the unkind and even hateful reaction in some places – a newly arrived family was attacked with pepper spray in Vancouver (CBC News) – churches in Canada and Quebec are working hard to be welcoming and open-hearted to incoming families. 
A church in Outremont, similarly to Saint-Marc’s Parish, held a welcome luncheon for Syrian Armenian refugees arriving in the area. They saw it as an opportunity “to get to know each other, and talk to each other, and see how we can help them” (CBC News). Involvement: A Church Policy Indeed, hospitality was a recurring theme in the interviews I conducted. Bianca Hébert, the chair of the youth group, Breakaways, at Rosemount Bible Church (RBC), said that the church wants “to be as welcoming as possible, and as warm and open to anyone who comes.” The church is very conscious of the needs of the neighbourhood. Rosemount Bible Church “is located in a place that’s really not as rich,” explains Hébert, so RBC has done its best to help the families and single moms in the area. From corn boils, day care services, parenting conferences and baskets of food passed out on Mother’s Day – “we have all these things [in order] to really reach out to the community in ways that aren’t necessarily Christian.” Breakaways has also done fundraising work for humanitarian trips by cleaning homes, mowing lawns or painting fences. RBC has even been involved with non-profit organizations like Share the Warmth, World Vision, and Welcome Home Mission. On the other side of the river, La Prairie’s Église Évangélique du Semeur has put into motion an initiative called “Générosité Extrême”. They’re challenging Sunday morning churchgoers and teenagers of their youth group, Kontraste Jeunesse, to give more than they regularly would. Last year, the funds collected went to Canoé de l’Espoir, an organization that works with Native American reserves, filling boxes with Christmas gifts for the children there. The church also held a Christmas service, and invited La Prairie locals to attend with boxes containing cookies and comic books. 
Faith, family and community The undercurrent of all these works is, of course, faith; these religious groups unanimously believe in the importance of extending a hand to others, extending beyond those included in the church community. “We consider ourselves a family,” said Schultz, speaking about his parish’s youth group. Bianca Hébert echoed the sentiment about Breakaways. They both emphasize the importance of community within the church, as well as the strength-in-numbers way they’ve been able to work in their neighbourhoods and communities. This is how they are able to move and impact real, important issues; by a cooperative, peer-driven and motivated effort. This is the kind of initiative that reflects the way society works now. Not one person wielding power over others, but the effort of a collective: a community, a family, working together for those who need it. Yes, it takes faith, but it also takes people. And that's something that humanity has got plenty of. Here are the transcripts of the full interviews for interested readers: Sources: Carter, Joe. “Explainer: What you should know about the Syrian Refugee Controversy.” Acton Institute Power Blog, Acton Institute. 20 Nov. 2015. Web. 6 May 2016. http://blog.acton.org/archives/83541-explainer-what-you-should-know-about-the-syrian-refugee-controversy.html?gclid=Cj0KEQjwx7u5BRC1lePz2biJpIYBEiQA-ZeDmjxjqOU24WLT0lydUdV6v-nSl3zbGRsk0_DiY1ok6iMaAgkm8P8HAQ “Syrian Armenian refugees welcomed with mass and meal in Outremont church.” CBC News. 20 Jan. 2016. Web. 6 May 2016. http://www.cbc.ca/news/canada/montreal/syrian-refugees-montreal-1.3397866. “Syrian refugees confused, disappointed by pepper spray attack in Vancouver.” CBC News. 9 Jan. 2016. Web. 
6 May 2016. http://www.cbc.ca/news/canada/british-columbia/vancouver-police-chief-constable-speaks-on-hate-motivated-pepper-spray-incident-1.3397228

Comments

Changing the opinion

Church involvement in the Community

Your volunteer opportunity post is very interesting and original, as I have never thought about the church’s connection with the community. I personally love getting involved in religious events and even non-religious events, and I think that individuals should get involved with their local church and organize events where the community can get to meet new people. This idea can help expand communication and companionship between individuals, which can result in a better atmosphere within society. I personally have never participated in an event organized by a church; however, I have partaken in an event organized by a mosque and I have also organized an event myself, which allowed me to meet new people and strengthen the bonds between myself and the Muslim community. These events are also a great way of clearing up misconceptions about certain religious interpretations made by various individuals. Your post is motivating me to work with a local place of worship as my next volunteer opportunity, as I would like to invite individuals to participate in events created by a church or a mosque so that they may understand that places of worship aren’t only useful for religious people; rather, anyone is welcome to get involved. One does not have to be Christian to enter a church, and that is something that I would like to preach and show to everyone, as places of worship aren’t limited to religious people but are open to anyone willing to get involved. 
A great first step toward making the church and other places of worship known to the community, and allowing them to connect with society, would be to organize a children’s event with fun games and activities so that the parents of the children can come to the event, meet other parents and create a bond between people of different cultures. The event can be an open event that anyone from any religious background can take part in, as this will avoid any discrimination and will create a compassionate relationship between individuals from different backgrounds, as well as a positive association with the church or place of worship. Wow Julie! As always, I am so impressed with your writing. I love how you’ve connected your article with the feelings you’ve had regarding the place of the Church in today’s society (the rift). In Quebec, I also have the feeling that the Church has been pushed out of the “scene” and that it is much less popular. However, we also have to look at Quebec and the historical context regarding religion; since the Révolution tranquille, I feel as if a majority (probably) thinks that the Church isn’t part of the community anymore, or as much, or that it shouldn’t be. This said, I love how your article challenges that idea and shows how involved churches still are. It was really interesting to see the impact that local churches have and how the “Year of Mercy” shapes that impact. Your full-length interviews (I’ve only skimmed them) also look super intriguing. We can really see that you’ve taken a very personal approach with this project, and I love it. I think that it was a brilliant idea to interview younger individuals who are also involved because it perfectly supports your text. In short, I love the way you write, and I love how the approach you have taken is so unique. You’ve helped me see the other side of the coin, and it has slightly changed my opinion regarding churches (not faith or Christianity). Great job! 
Thank you for this post!
http://newsactivist.com/en/articles/champlain-college-2016-newsactivist-contemporary-issues-complementary-course/love-thy
“What are the expectations of membership?” I am often asked that question in one form or another. “What am I getting myself into if I join this congregation?” Although the technical answer will vary from congregation to congregation depending on By-Laws and cultural norms, my answer is always the same. We expect you to be present, I say. This is primarily a religious institution. Come to Sunday services as often as possible. Be an active participant in the worship life of the congregation. Furthermore, this is a self-governing religious institution. We place the democratic process at the very heart of our ethical principles. Be present at congregational meetings and forums; inform yourself, listen and learn, speak your mind, and vote. We expect you to be transformed. Author Michael Durall wrote in The Almost Church, “The congregation of the future is one that will recognize the unique ability of the church to radically alter a person’s worldview, and help people realize they are no longer the people they had once been. Too often we view Unitarian Universalist churches as safe havens, places of comfort that are perceived as a final destination rather than a port of embarkation.” Spiritual transformation is not something that can happen in one hour on Sunday morning in a large room. Seek out a smaller group of people – a covenant group, a social justice task force, a musical group, a religious education teaching team – with whom you can develop your personal spiritual practice. When you fellowship with others who care as deeply as you do, transformation is sure to follow. We expect you to give service to the congregation and to the larger community. “Unitarian Universalism has a proud history and tradition,” Durall goes on to say. “One with its saints and martyrs. But what are our churches called to do in this place and time? The primary purpose of the church is to create a community of compassion. All else flows from this. 
Unitarian Universalist churches should call their members to lead lives of dedication and commitment – lives not just of success, but also of service, and when called upon, sacrifice.” I want to repeat that last phrase. “Unitarian Universalist churches should call their members to lead lives of dedication and commitment – lives not just of success, but also of service, and when called upon, sacrifice.” So volunteer. Some folks look for a volunteer opportunity that particularly fits their talents or experiences. Others say, “I want to do whatever is needed.” Either way, this congregation is eager to help you find your own path to active involvement. “Wow,” you may say. “That’s a lot of expectations.” Yes, and there’s one more because this is also a self-sustaining religious institution. That means we expect you to be faithful and responsible stewards by taking part in the annual budget drive. Read the materials provided, respond promptly to the call of your visiting steward, take seriously the Fair Share Guidelines, and pledge an amount that will realistically sustain the programs, buildings, and staff for another year. The annual Budget Drive begins in March. Many of you will be contacted by a fellow member who has agreed to make several visits to others for one-on-one stewardship conversations. These conversations will not (that is NOT) be mostly about money. They will be about your experience here and how you can make sure that experience continues and is available to others as well. Please respond to the call from your visiting steward. Please have the conversation. Please look carefully at the Fair Share Guidelines and make your pledge out of a sense of abundance and generosity. If every family made a pledge that truly reflected their sense of dedication and commitment – and, as the times call for, a little bit of sacrifice – we would have an operating budget that was robust enough to fund all the programs we cherish and dream of. 
Barry and I have made our pledge of $7500 to this congregation, even though we are only here temporarily. Please join us in sustaining this beloved community. That is what we should expect of each other! That is what membership means. In the interim,
https://uuwestport.org/in-the-interim-february-19-2014/
The Constitution provides for freedom of religion; however, the Government places some limits on this right. The Constitution also provides that the State protect the freedom to practice religion in accordance with established customs, "provided that it does not conflict with public policy or morals." The Constitution states that Islam is the state religion and that Shari'a (Islamic law) is "a main source of legislation." There was no major change in the status of respect for religious freedom during the period covered by this report; however, construction proceeded on three new Shi'a mosques approved in 2001 and an Apostolic Nunciature continued to represent Vatican interests in the region. The generally amicable relationship among religions in society contributed to religious freedom. The U.S. Government discusses religious freedom issues with the Government in the context of its overall dialog and policy of promoting human rights. Section I. Religious Demography The country's total area is 6,880 square miles, and its population is 2.4 million. Of the country's total population, approximately 1.6 million persons are Muslim, including the vast majority of its nearly 900,000 citizens. The remainder of the overall population consists of the large foreign labor force and tens of thousands of "Bidoon" (officially stateless) Arabs with residence ties to the country who claim to have no documentation of their nationality. While the national census does not distinguish between Sunni and Shi'a adherents, the majority of citizens, including the ruling family, belong to the Sunni branch of Islam. The total Sunni Muslim population is well over 1 million, approximately 600,000 of whom are citizens. The remaining 30 to 35 percent of Muslim citizens (approximately 270,000-315,000) are Shi'a, as are approximately 100,000 non-citizen residents. 
Estimates of the nominal Christian population range from 250,000 to 500,000 (including approximately 200 citizens, most of whom belong to 12 large families). The Christian community includes the Roman Catholic Diocese, with 2 churches and an estimated 100,000 members (Latin, Maronite, Greek Catholic, Coptic Catholic, Armenian Catholic, Malabar, and Malankara congregations worship at the Catholic cathedral in Kuwait city); the Anglican (Episcopalian) Church, with 115 members (several thousand other Christians also use the Anglican Church for worship services); the National Evangelical Church (Protestant), with 3 main congregations (Arabic, English, and "Malayalee") and 15,000 members (several other Christian denominations also worship at the National Evangelical Church Compound); the Greek Orthodox Church (referred to in Arabic as the "Roman Orthodox" Church, a reference to the Eastern Roman Empire of Byzantium), with 3,500 members; the Armenian Orthodox Church, with 4,000 members; the Coptic Orthodox Church, with 70,000 members; and the Greek Catholic (Eastern Rite) Church, whose membership totals are unavailable. In September 2001, diplomatic relations between the Vatican and Kuwait were upgraded to ambassadorial status. There are many other unrecognized Christian denominations in the country, with tens of thousands of members. These denominations include Seventh-day Adventists, the Church of Jesus Christ of Latter-day Saints (Mormons), Marthoma, and the Indian Orthodox Syrian Church. There are also communities of Hindus (estimated 100,000 adherents), Sikhs (estimated 10,000), Baha'is (estimated 400), and Buddhists (no statistics available). Missionary groups in the country serve non-Muslim congregations. Section II. Status of Religious Freedom Legal/Policy Framework The Constitution provides for freedom of religion; however, the Government places some limits on this right. 
The Constitution also provides that the State protect the freedom to practice religion in accordance with established customs, "provided that it does not conflict with public policy or morals." The Constitution states that Islam is the state religion and that Shari'a (Islamic law) is "a main source of legislation." The Government observes Islamic holidays. The procedures for registration and licensing of religious groups are unclear. The Ministry of Awqaf and Islamic Affairs has official responsibility for overseeing religious groups. Officially recognized churches must deal with a variety of government entities, including the Ministry of Social Affairs and Labor (for visas and residence permits for pastors and other staff) and the municipality of Kuwait (for building permits). While there reportedly is no official government list of recognized churches, seven Christian churches have at least some form of official recognition that enables them to operate openly. These seven churches have open "files" at the Ministry of Social Affairs and Labor, allowing them to bring in the pastors and staff necessary to operate their churches. Three of the country's churches are widely understood to enjoy "full recognition" by the Government and are allowed to operate compounds officially designated as churches: the Catholic Church, the Anglican Church, and the National Evangelical Protestant Church of Kuwait; however, they face quotas on the number of staff they can bring in, and their existing facilities are clearly inadequate to serve their respective communities. The other four churches--Greek Orthodox, Armenian Orthodox, Coptic Orthodox, and Greek Catholic--reportedly are allowed to operate openly, hire employees, invite religious speakers, etc., without interference from the Government; however, their compounds are, according to government records, registered only as private homes. 
Church officials themselves appear uncertain about the guidelines or procedures for recognition. Some claim that these procedures are purposely kept vague by the Government to maintain the status quo. No other churches and religions have legal status, but they are allowed to operate in private homes. The procedures for registration and licensing of religious groups also appear to be connected with government restrictions on nongovernmental organizations (NGOs), religious or otherwise. In 1993 all unlicensed organizations were ordered by the Council of Ministers to cease their activities. This order has never been enforced; however, since that time all but three applications by NGOs have been frozen. There were reports that in the last few years at least two groups have applied for permission to build their own churches, but the Government has not responded to their requests. The Government announced in October 2001 that all unlicensed branches of Islamic charities would be closed by the end of 2002. During the period covered by this report, the Government removed a large number of unlicensed streetside charity boxes. In August 2002, the Acting Minister of Social Affairs and Labor issued a ministerial decree to create a charitable organizations department within the Ministry of Social Affairs and Labor. The new department has been established with a mandate to regulate Kuwait-based religious charities by reviewing their applications for registration, monitoring the operations of charities, and establishing a new accounting system to ensure compliance with regulations governing charitable operations. The following religious holidays are considered national holidays: Eid al-Adha, Islamic New Year, the Prophet's Birthday, and Eid al-Fitr.
Restrictions on Religious Freedom

Shi'a are free to worship according to their faith without government interference; however, members of the Shi'a community have expressed concern about the scarcity of Shi'a mosques due to the Government's slow approval of the construction of new Shi'a mosques and the repair of existing mosques. (There are approximately 36 Shi'a mosques, compared to 1,300 Sunni mosques, in the country.) During the period covered by this report, no additional Shi'a mosques were approved beyond the three approved for construction in 2001. The Shi'a appellate court for family law cases and the Shi'a charity authority established in 2001 reportedly are operating smoothly. The Government did not, however, approve the Shi'a request for their own Awqaf. Shi'a who aspire to serve as imams are forced to seek appropriate training and education abroad due to the lack of Shi'a jurisprudence courses at Kuwait University's College of Islamic Law, which offers only Sunni jurisprudence courses. The Ministry of Education is still reviewing an application to establish a private college to train Shi'a clerics within the country. If approved, the new college could reduce Shi'a dependence on foreign study for the training of clerics. The Roman Catholic, Anglican, National Evangelical, Greek Orthodox, Armenian Orthodox, Coptic Orthodox, and Greek Catholic Churches operate freely on their compounds, holding worship services without government interference. Their leaders also state that the Government generally has been supportive of their presence, even providing police security and traffic control as needed. Other Christian denominations (including Mormons, Seventh-day Adventists, Marthoma, and Indian Orthodox) are not recognized legally, but are allowed to operate in private homes or in the facilities of recognized churches.
Members of these congregations have reported that they are able to worship without government interference, provided that they do not disturb their neighbors and do not violate laws regarding assembly and proselytizing. Members of religions not sanctioned in the Koran, such as Hindus and Buddhists, may not build places of worship, but are allowed to worship privately in their homes without interference from the Government. In January 2002, after mounting pressure from citizens in the district of Salwa, the Government ordered the closure of the Sikh gurudwara, or temple. Sikhs who had worshipped there were still able to worship at another Sikh temple. During the period covered by this report, the closed temple was allowed to reopen. The Government prohibits missionaries from proselytizing to Muslims; however, they may serve non-Muslim congregations. The law prohibits organized religious education for religions other than Islam, although this law is not enforced rigidly. Informal religious instruction occurs inside private homes and on church compounds without government interference; however, there were reports that government inspectors from the Awqaf Ministry periodically visit public and private schools outside of church compounds to ensure that religious teaching other than Islam does not take place. The Roman Catholic Church has requested that Catholic students be allowed to study the catechism separately during the period in which Muslim students receive mandatory instruction in Islam. During the period covered by this report, the Government still had not responded to the request. The Roman Catholic Church faces problems of overcrowding at its two official church facilities. Its cathedral in downtown Kuwait City regularly draws as many as 100,000 worshippers to its more than 30 weekly services. Due to limited space on the compound, the church is unable to construct any new buildings.
The National Evangelical Church also faces overcrowding at its compound, which serves a weekly average of 20,000 worshippers in 55 congregations. There has been no change in the status of the Coptic Church since the Government notified it last year of its intention to appropriate the parcel of land on which the country's only Coptic church is located for a road project. The Government plans to grant the Church a land parcel of equal or greater size in the same general vicinity to relocate the church, but it has not guaranteed financial assistance to construct a new church. The Government does not permit the establishment of non-Islamic publishing companies or training institutions for clergy. Nevertheless, several churches publish religious materials for use solely by their congregations. Further, some churches, in the privacy of their compounds, provide informal instruction to individuals interested in joining the clergy. A private company, the Book House Company Ltd., is permitted to import a significant number of Bibles and other Christian religious material--including videotapes and compact discs--for use solely among the congregations of the country's recognized churches. The Book House Company is the only bookstore that has an import license to bring in such materials, which also must be approved by government censors. There have been reports of private citizens having non-Islamic religious materials confiscated by customs officials upon arrival at the airport. Although there is a small community of Christian citizens, a law passed in 1980 prohibits the naturalization of non-Muslims; however, citizens who were Christians before 1980 (and children born to families of such citizens since that date) are allowed to transmit their citizenship to their children. According to the law, a non-Muslim male must convert to Islam when he marries a Muslim woman if the wedding is to be legal in the country. 
A non-Muslim female is not required to convert to Islam to marry a Muslim male, but it is to her advantage to do so. Failure to convert may mean that, should the couple later divorce, the Muslim father would be granted custody of any children. Women continue to experience legal and social discrimination. In the family courts, one man's testimony is sometimes given the same weight as the testimony of two women; however, in the civil, criminal, and administrative courts, the testimony of women and men is considered equally. Unmarried women 21 years old and over are free to obtain a passport and travel abroad at any time; however, a married woman who applies for a passport must obtain her husband's signature on the application form. Once she has a passport, a married woman does not need her husband's permission to travel, but he may prevent her departure from the country by contacting the immigration authorities and placing a 24-hour travel ban on her. After this 24-hour period, a court order is required if the husband still wishes to prevent his wife from leaving the country. All minor children must have their father's permission to travel outside of the country. Inheritance is governed by Islamic law, which differs according to the branch of Islam. In the absence of a direct male heir, Shi'a women may inherit all property, while Sunni women inherit only a portion, with the balance divided among brothers, uncles, and male cousins of the deceased. The law requires jail terms for journalists who defame religion. There were no reports during the period covered by this report of Islamists using this law to threaten writers with prosecution for publishing opinions deemed insufficiently observant of Islamic norms as had occurred in the past, nor of religiously based prosecutions of authors or journalists. There were no reports of religious prisoners or detainees.

Forced Religious Conversions

There were no reports of forced religious conversion, including of minor U.S.
citizens who had been abducted or illegally removed from the United States, or of the refusal to allow such citizens to be returned to the United States. There have been cases in which U.S. citizen children have been abducted from the United States and not allowed to return under the law; however, there were no reports that such children were forced to convert to Islam, or that forced conversion was the reason that they were not allowed to return.

Improvements and Positive Developments in Respect for Religious Freedom

The overall situation for Shi'a improved during the period covered by this report. The Government approved the construction of 3 new Shi'a mosques in addition to the 3 that were approved in 2001, bringing the total to 36 Shi'a mosques in the country. The Government is currently considering a request to establish a Shi'a "Supreme Court" to handle matters of family law. The Government now allows Shi'a to follow their own jurisprudence in matters of personal status at the first instance and appellate levels, but not yet at the cassation level. Shi'a leaders no longer express concern that proposed legislation in the National Assembly does not take their beliefs into account. An Apostolic Nunciature, headed by an Apostolic Nuncio accredited to Kuwait, Bahrain, and Yemen, was upgraded from chargé d'affaires to full ambassadorial status in September 2001 to represent Vatican interests in the region. The Vatican Ambassador is resident in Kuwait City. The Catholic Church views the Government's agreement to upgrade to full diplomatic relations with the Vatican as significant in terms of government tolerance of Christianity. The Ministry of Education has announced its intention to combat religious intolerance by clarifying the concept of "jihad" in school curricula; this initiative encountered strong condemnation from Islamist members of parliament. During the year, the Ministry removed teachers thought to be Islamic extremists. Section III.
Societal Attitudes

In general there are amicable relations among the various religions, and citizens generally are open and tolerant of other religions; however, there is a small minority of ultraconservatives opposed to the presence of non-Muslim groups. While some discrimination based on religion reportedly occurs on a personal level, most observers agree that it is not widespread. There is a perception among some domestic employees and other members of the unskilled labor force, particularly nationals of Southeast Asian countries, that they would receive better treatment from employers as well as society as a whole if they converted to Islam; however, others do not see conversion to Islam as a factor in this regard. The conversion of Muslims to other religions is a very sensitive matter. While such conversions reportedly have occurred, they have been done quietly and discreetly. Known converts face harassment, including loss of job, repeated summonses to police stations, and imposition of fines without due process. In May the Awqaf Minister advised Kuwait's imams "not to pray against Christians." In response, however, some Muslim leaders argued that it is the duty of Muslims to foster hatred for Christians and Jews. While some individuals incite hatred for Christians and Jews, in general the society is peaceful and tolerant. Hostility towards Israel is pervasive, but typically comes with a disavowal of hostility towards the Jewish religion. After Kuwaiti Al-Qaeda sympathizers murdered a Marine in October, mainstream Muslim leaders made efforts to teach that Islam forbids such acts and prescribes peaceful relations. During the period covered by this report, on several occasions local newspapers have published photographs of Christian worship in Kuwait in a factual, non-critical manner.

Section IV. U.S. Government Policy

The U.S.
Government discusses religious freedom issues with the Government in the context of its overall dialogue and policy of promoting human rights. U.S. Embassy officials frequently meet with representatives from Sunni, Shi'a, and various Christian groups. Intensive monitoring of religious issues has long been an embassy priority. Embassy officers have met with most of the leaders of the country's recognized Christian churches, as well as representatives of various unrecognized faiths. Such meetings have afforded embassy officials the opportunity to learn the status and concerns of these groups.
https://www.jewishvirtuallibrary.org/kuwait-religious-freedom-report-2003
An assistant pastor is a person who assists the pastor in a Christian church. The qualifications, responsibilities and duties vary depending on church and denomination. A pastor is an ordained leader of a Christian congregation. A pastor also gives advice and counsel to people from the community or congregation. Christianity is a religion based on the life and teachings of Jesus of Nazareth, as described in the New Testament. Its adherents, known as Christians, believe that Jesus Christ is the Son of God and savior of all people, whose coming as the Messiah was prophesied in the Old Testament. Depending on the specific denomination of Christianity, practices may include baptism, Eucharist, prayer, confession, confirmation, burial rites, marriage rites and the religious education of children. Most denominations have ordained clergy and hold regular group worship services. A church building or church house, often simply called a church, is a building used for Christian religious activities, particularly for Christian worship services. The term is often used by Christians to refer to the physical buildings where they worship, but it is sometimes used to refer to buildings of other religions. In traditional Christian architecture, the church is often arranged in the shape of a Christian cross. When viewed from plan view the longest part of a cross is represented by the aisle and the junction of the cross is located at the altar area. In many Christian churches, an assistant pastor is a pastor-in-training, and in most cases, they are awaiting full ordination. In many instances, they are granted limited powers and authority to act with, or in the absence of, the congregation's Pastor.
Some churches that have outreach programs, such as hospital visitations, in-home programs, prison ministries, or multiple chapels, will appoint assistant pastors to perform duties while the Pastor is busy elsewhere. Some churches use the title brother or ordained brother in place of assistant pastor. In larger Roman Catholic parishes, the duties of an assistant pastor can be broken up into duties performed by deacons and non-ordained lay people. Ordination is the process by which individuals are consecrated, that is, set apart as clergy to perform various religious rites and ceremonies. The process and ceremonies of ordination vary by religion and denomination. One who is in preparation for, or who is undergoing the process of ordination is sometimes called an ordinand. The liturgy used at an ordination is sometimes referred to as an ordinal. The term chapel usually refers to a Christian place of prayer and worship that is attached to a larger, often nonreligious institution or that is considered an extension of a primary religious institution. It may be part of a larger structure or complex, such as a college, hospital, palace, prison, funeral home, church, synagogue or mosque, located on board a military or commercial ship, or it may be an entirely free-standing building, sometimes with its own grounds. Chapel has also referred to independent or nonconformist places of worship in Great Britain—outside the established church. A bishop is an ordained, consecrated, or appointed member of the Christian clergy who is generally entrusted with a position of authority and oversight. In the Christian churches, holy orders are ordained ministries such as bishop, priest, or deacon, and the sacrament or rite by which candidates are ordained to those orders.
Churches recognizing these orders include the Catholic Church, the Eastern Orthodox, Oriental Orthodox, Anglican, Assyrian, Old Catholic, Independent Catholic and some Lutheran churches. Except for Lutherans and some Anglicans, these churches regard ordination as a sacrament. The Anglo-Catholic tradition within Anglicanism identifies more with the Roman Catholic position about the sacramental nature of ordination. A priest is a religious leader authorized to perform the sacred rituals of a religion, especially as a mediatory agent between humans and one or more deities. They also have the authority or power to administer religious rites; in particular, rites of sacrifice to, and propitiation of, a deity or deities. Their office or position is the priesthood, a term which also may apply to such persons collectively. Clergy are some of the main and important formal leaders within certain religions. The roles and functions of clergy vary in different religious traditions but these usually involve presiding over specific rituals and teaching their religion's doctrines and practices. Some of the terms used for individual clergy are clergyman, clergywoman and churchman. Less common terms are churchwoman, clergyperson and cleric. A deacon is a member of the diaconate, an office in Christian churches that is generally associated with service of some kind, but which varies among theological and denominational traditions. Some Christian churches, such as the Catholic Church, the Eastern Orthodox Church and the Anglican church, view the diaconate as part of the clerical state; in others, the deacon remains a layperson. Presbyterian polity is a method of church governance typified by the rule of assemblies of presbyters, or elders. Each local church is governed by a body of elected elders usually called the session or consistory, though other terms, such as church board, may apply. 
Groups of local churches are governed by a higher assembly of elders known as the presbytery or classis; presbyteries can be grouped into a synod, and presbyteries and synods nationwide often join together in a general assembly. Responsibility for conduct of church services is reserved to an ordained minister or pastor known as a teaching elder, or a minister of the word and sacrament. The Reverend is an honorific style most often placed before the names of Christian clergy and ministers. There are sometimes differences in the way the style is used in different countries and church traditions. The Reverend is correctly called a style but is often and in some dictionaries called a title, form of address or title of respect. The style is also sometimes used by leaders in non-Christian religions such as Judaism and Buddhism. The Aaronic priesthood is the lesser of the two orders of priesthood recognized in the Latter Day Saint movement. The others are the Melchizedek priesthood and the rarely recognized Patriarchal priesthood. Unlike the Melchizedek priesthood, which is modeled after the authority of Jesus and the Twelve Apostles, or the Patriarchal priesthood, which is modeled after the authority of Abraham, the Aaronic priesthood is modeled after the priesthood of Aaron the Levite, the first high priest of the Hebrews, and his descendants. The Aaronic priesthood is thought to be a lesser or preparatory priesthood and an "appendage" of the more powerful Melchizedek priesthood. In Christianity, a minister is a person authorized by a church, or other religious organization, to perform functions such as teaching of beliefs; leading services such as weddings, baptisms or funerals; or otherwise providing spiritual guidance to the community. The term is taken from Latin minister, which itself was derived from minus ("less"). A sacristan is an officer charged with care of the sacristy, the church, and their contents.
The sacrament of holy orders in the Catholic Church includes three orders: bishop, priest, and deacon. In the phrase "holy orders", the word "holy" simply means "set apart for some purpose." The word "order" designates an established civil body or corporation with a hierarchy, and ordination means legal incorporation into an order. In context, therefore, holy orders designate a group with a hierarchical structure that is set apart for ministry in the Church. The Anglican ministry is both the leadership and agency of Christian service in the Anglican Communion. "Ministry" commonly refers to the office of ordained clergy: the threefold order of bishops, priests and deacons. More accurately, Anglican ministry includes many laypeople who devote themselves to the ministry of the church, either individually or in lower/assisting offices such as lector, acolyte, sub-deacon, Eucharistic minister, cantor, musicians, parish secretary or assistant, warden, vestry member, etc. Ultimately, all baptized members of the church are considered to partake in the ministry of the Body of Christ. "...[I]t might be useful if Anglicans dropped the word minister when referring to the clergy...In our tradition, ordained persons are either bishops, priests, or deacons, and should be referred to as such." A marriage officiant, solemniser, or "vow master" is a person who officiates at a wedding ceremony. Some nonreligious couples get married by a government official like a judge, mayor, or justice of the peace. The priesthood is one of the three holy orders of the Catholic Church, comprising the ordained priests or presbyters. The other two orders are the bishops and the deacons. Only men are allowed to receive holy orders, and the church does not allow any transgender people to do so. Church doctrine also sometimes refers to all baptised Catholics as the "common priesthood". In Christianity, an elder is a person who is valued for wisdom and holds a position of responsibility and authority in a Christian group.
In some Christian traditions an elder is an ordained person who usually serves a local church or churches and who has been ordained to a ministry of word, sacrament and order, filling the preaching and pastoral offices. In other Christian traditions, an elder may be a lay person charged with serving as an administrator in a local church, or be ordained to such an office, also serving in the preaching or pastoral roles. There is technically a distinction between ordained elders and lay elders, but the two concepts are often conflated in everyday conversation. Particularly in reference to age and experience, elders exist throughout world cultures, and the Christian sense of elder is partially related to this. A rector is, in an ecclesiastical sense, a cleric who functions as an administrative leader in some Christian denominations. In contrast, a vicar is also a cleric but functions as an assistant and representative of an administrative leader. The term comes from the Latin for the helmsman of a ship.
https://wikimili.com/en/Assistant_pastor
A place of worship is a specially designed structure or consecrated space where individuals or a group of people such as a congregation come to perform acts of devotion, veneration, or religious study. A building constructed or used for this purpose is sometimes called a house of worship. Temples, churches, and mosques are examples of structures created for worship. A monastery, particularly for Buddhists, may serve both to house those belonging to religious orders and as a place of worship for visitors. Natural or topographical features may also serve as places of worship, and are considered holy or sacrosanct in some religions; the rituals associated with the Ganges river are an example in Hinduism. Under International Humanitarian Law and the Geneva Conventions, religious buildings are offered special protection, similar to the protection guaranteed hospitals displaying the Red Cross or Red Crescent. These international laws of war bar firing upon or from a religious building. Religious architecture expresses the religious beliefs, aesthetic choices, and economic and technological capacity of those who create or adapt it, and thus places of worship show great variety depending on time and place.

Bahá'í Faith
- Bahá'í House of Worship or Mashriqu'l-Adhkár (Arabic: مشرق اﻻذكار, "Dawning-place of the remembrances of God") is a place of worship or temple for the Bahá'í Faith. Eight continental Houses of Worship have been built around the world.

Buddhism
- Vihara, a Buddhist monastery
- Wat, the name for a monastery temple in Cambodia and Thailand

Christianity
The word church derives from the Greek ekklesia, meaning the called-out ones. Its original meaning is to refer to the body of believers, or the body of Christ. The word church is used to refer to a Christian place of worship by some Christian denominations, including Anglicans and Roman Catholics.
Other Christian denominations, including the Religious Society of Friends, Mennonites, Christadelphians, and some unitarians, object to the use of the word "church" to refer to a building, as they argue that this word should be reserved for the body of believers who worship there. Instead, these groups use words such as "Hall" to identify their places of worship or any building in use by them for the purpose of assembly.
- Basilica (Roman Catholic)
- Cathedral or minster (seat of a diocesan bishop within the Catholic church and the Anglican church)
- Chapel ("Capel" in Welsh) – Presbyterian Church of Wales (Calvinistic Methodism), and some other denominations, especially non-conformist denominations. In Catholicism and Anglicanism, some smaller and "private" places of worship are called chapels.
- Church – Anglican, Roman Catholic, Episcopalian, Protestant denominations
- Kirk (Scottish, cognate with church)
- Meeting House – Religious Society of Friends
- Meeting house – Christadelphians
- Meetinghouse and temple – Mormons. Latter-day Saints use meetinghouse and temple to denote two different types of buildings. Normal worship services are held in ward meeting houses (or chapels) while Mormon temples are reserved for special ordinances.
- Temple – French Protestants. Protestant denominations installed in France in the early modern era use the word temple (as opposed to church, supposed to be Roman Catholic); some more recently built temples are called church.
- Orthodox temple – Orthodox Christianity (both Eastern and Oriental). An Orthodox temple is a place of worship with a base shaped like a Greek cross.
- Kingdom Hall – Jehovah's Witnesses may apply the term in a general way to any meeting place used for their formal meetings for worship, but apply the term formally to those places established by and for local congregations of up to 200 adherents.
Their multi-congregation events are typically held at a meeting place termed Assembly Hall of Jehovah's Witnesses (or Christian Convention Center of Jehovah's Witnesses).

Classical antiquity
Ancient Greece
- Greek temple, for religions in ancient Greece
Ancient Rome
- Roman temple, for the religions of ancient Rome
- Mithraeum, for the Mithraic mysteries

Hinduism
- Hindu temple (Mandir)

Islam
- Masjid (mosque)
- Musalla (informal or unsanctified mosque)

Jainism
- Jain temple

Judaism
- Synagogue. Some synagogues, especially Reform synagogues, are called temples, but Orthodox and Conservative Judaism consider this inappropriate as they do not consider synagogues a replacement for the Temple in Jerusalem. Some Jewish congregations use the Yiddish term 'shul' to describe their place of worship.

Norse Paganism
- Hof

Shinto
- Jinja

Sikhism
- Gurdwara

Taoism
- Daoguan

Zoroastrianism
- Fire temple. All Zoroastrian temples fall into the Fire temple category.
- Atash Behram
- Agyari
- Dadgah

Vietnamese ancestral worship
- Nhà thờ họ. Historically, Vietnamese people venerate their ancestors as if they still exist among them. However, there is a large diversity of religions in Vietnam, including Christianity, Buddhism, and the Cao Dai religion.
https://www.speakingtree.in/blog/k1-places-of-worship
- Alternate meanings: see Church of Christ (disambiguation). The Churches of Christ are a body of autonomous Christian congregations. Since the Churches of Christ claim to be a restoration of the first-century church, they trace their origin to the day of Pentecost. The Churches of Christ have the following distinctive traits: the refusal to hold to any creeds other than those specifically mentioned in the Bible itself; the practice of adult baptism for the remission of sins; autonomous congregational church organization, with congregations overseen by a plurality of elders; the weekly observance of the Lord's Supper; and the belief in a cappella congregational singing during worship. The American Restoration Movement of the 19th century promoted returning to the practices of the first-century Christian churches. Other churches that were advanced by the Restoration Movement include the Independent Christian Churches/Churches of Christ (Instrumental) and the Disciples of Christ. The Churches of Christ are distinct in that they believe that they are not another denomination, but rather are striving to be the one, true Church. Many members today describe themselves as "Christians only" ("but not the only Christians" is often added). Some Churches of Christ are called non-institutional and may have strong disagreements with other Churches of Christ. Some members, particularly older members, of this group are apt to object to being referred to as "Protestants", believing that Christ's Church was not founded as a protest against anything, other than perhaps the domination of the present world by Satan. The church of Christ has firm disagreements with the Roman Catholic Church and does not recognize the authority of the Holy See.
Some, and probably most, members would also object to the categorization of their church as a "denomination", as one of the tenets of this movement is that they are not a denomination and that denominationalism is a sinful departure from the original plan laid down in the Bible for the Church.

Church organization
There is no headquarters for the Church of Christ; each congregation has its own structure, consisting of Elders, Deacons, and one or more Preachers/Ministers/Evangelists. Typically, the churches participate in a loose, informal network of other local Churches of Christ. From the beginning of the Restoration Movement, newspapers and magazines edited by church leaders have been important forces in unifying like-minded churches. Also, most congregations value the influence of church of Christ-affiliated colleges and universities, such as Abilene Christian University, Freed-Hardeman University, Harding University, Oklahoma Christian University, and Heritage Christian University (formerly International Bible College (IBC)). Elders are spiritually mature Christian men whose religious work may be in some specialized capacity of a spiritual nature. They provide moral guidance, and they or their designees approve and establish Bible study curriculum, select Sunday school teachers, and select the Preacher/Evangelist when the position becomes vacant. In some congregations, elders also select the deacons. Elders are also called pastors, shepherds, and bishops (all Biblical terms referring to the same office), but the use of "elder" is by far the most common. Elders are selected by the members of a congregation; the method of doing this varies considerably between congregations, but involves confirming that a potential elder does indeed embody all of the characteristics of elders which are listed in the Bible in 1 Timothy and Titus.
In a decreasing number of congregations, the eldership is something of a self-perpetuating board in which its members are the determiners of the qualifications of their successors and announce whom they have selected to join them with little or no congregational input; this practice was at one time fairly widespread but is no longer acceptable to many members of many congregations. Deacons are recognized special servants of the church and most often take care of specialized needs of the congregation. Typically, the physical building in which services are held is overseen by a Deacon. Like Elders, Deacons are generally selected by the congregations in a manner very similar to that of elders. Qualifications of Deacons are also listed in the Bible in 1 Timothy. The Preacher/Evangelist/Minister prepares and delivers sermons, teaches Bible classes, performs weddings, preaches or evangelizes the gospel, and performs baptisms. This position is typically paid. (People associated with the Churches of Christ do not use the title "pastor" to refer to their pulpit minister, as this term is held to refer to the same position as "elder" or "bishop" in the Bible, which they feel requires a certain set of qualifications outlined in 1 Timothy and Titus.) Typically these ministers are not 'ordained' as is the tradition of many denominational organizations, and do not use the salutation 'Reverend' or 'Rev.' before their name, professing that only God should be recognized as Reverend. Many congregations also employ other paid ministers besides the pulpit minister, such as ministers for youth. Some members of the church of Christ, and some groups within the churches of Christ, do not believe in paid ministers or youth ministers.

Hermeneutics
A closer look at the Church of Christ requires an understanding of its historically accepted hermeneutic. This hermeneutic is often summarized in three parts: "Command", "Example", and "Necessary Inference".
- "Command" refers to a direct command found in the Scriptures (this being further complicated by what some mainstream evangelicals would refer to as the dispensation principle; for example, the command to build an ark was directed to Noah specifically, as opposed to being directed to Christians in general. Additionally, commands are classified as 'Specific' or 'Generic' in nature.) - "Example" is sometimes phrased as "an approved Apostolic example." The intent here is that the apostles or 1st century Christians performed some action or engaged in some practice that was approved of (or not condemned). - "Necessary inference" refers to some interpretational conclusion that would be necessary in order to obey a command or example. The principle of silence is also observed by the Churches of Christ, to varying degrees. When the Bible does not specifically or indirectly allow a practice, it is considered forbidden. The disagreements within the Churches of Christ primarily derive from differences in interpretation of the meaning of "necessary inference", and the conclusions which can be rightly drawn from "silence". The non-instrumental Chuches of Christ agree that the absence of references to instrumental music in New Testament worship mean that their use is forbidden. (It should be noted, however, that the Independent Christian Churches and Churches of Christ do use musical instruments and do not see their use as forbidden.) However, the New Testament is necessarily silent about many other issues, such as orphanages/children's homes, Sunday school, and congregationally-owned houses of worship ("church buildings"). In each case, the "mainstream" group has reasoned that "necessary inference" allows their use as a way of providing for otherwise-homeless children, facilitating study of the Scriptures, and providing for a reasonable and convenient setting for worship services. 
In each case, a dissident non-institutional faction, using the "principle of silence", finds these developments to be unwarranted and sinful innovations, although by far the majority (but not all) of the "non-institutional" congregations do own their own buildings for use as houses of worship, and most have likewise come to approve of Sunday school.

Specific teachings and prohibitions
Churches of Christ mostly agree with the theology of other Fundamentalist Christian groups, believing in Jesus as the Son of God, the death of Jesus by crucifixion as atonement for sin, and most other basic Christian teachings. However, there are many specific practices that distinguish them from these other bodies. The Church of Christ believes that the organization and structure of the church was laid down by Jesus Christ himself through his apostles in the form of the New Testament. Since this church has no headquarters and each congregation is independent, the teachings may vary somewhat, but overall there is a remarkable degree of uniformity among Churches of Christ in each region. The common variances are over the institution of Bible classes, the method by which the Lord's Supper is served (whether the fruit of the vine is served in one cup or many), the role of women in public worship, and whether ministers should be paid professionals or serve on a volunteer basis. Common beliefs and practices include:
- The Bible was written by men who were inspired and guided by God the Father through the Holy Spirit. Most believe in "plenary" inspiration, whereby the inspired author is able to use his language to express divine truth, but the ultimate truthfulness is from God; this contrasts with "mechanical" inspiration, where the Biblical author is just a mortal "typewriter" for an immortal God, or a Divine "secretary" merely taking dictation.
- No instrumental music in services (a cappella). The arguments against it are of two categories.
A strong argument claims that it is false doctrine prohibited by a principle or law of silence, and a weak argument, similar to that originally held by Alexander Campbell, holds that such would be materialistic or inappropriate, but not necessarily false doctrine. Many congregations contain members with both positions. Commonly, larger congregations speak so as not to take sides between the weak and strong positions. More conservative congregations may still openly call it a sin, and more liberal congregations openly say that it is not a sin, while usually refusing to adopt instruments.
- Children below the age of accountability are considered in a "safe" position in the eyes of God, and would not be condemned to hell if they died before the age of accountability (a denial of the common doctrine of Original Sin). Additionally, persons lacking the mental capacity to consciously choose between right and wrong are also saved, as they are incapable of truly choosing wrong.
- The requirements for salvation are commonly presented in the following steps:
  - Hearing (the Word of God)
  - Believing (said Word)
  - Repenting (of one's sins)
  - Confessing (that Jesus Christ is Lord)
  - Being baptized (by immersion)
- Continued faithfulness is enjoined because they do not believe in the doctrine of perseverance of the saints.
- Because of the high value attached to the necessity of a believer's baptism by immersion, Churches of Christ are sometimes said to believe in "baptismal regeneration". Members deny that baptism without faith can bring salvation, but point out that the Bible does command believers to be baptized.
- Celebration of religious holidays, such as Christmas and Easter, as religious holidays is often discouraged, although secular observance of such days is usually tolerated.
In recent years, this belief is in decline in many churches, and it is not unheard of for a church to have special events for such holidays or even to celebrate them with traditional religious significance. A number of churches, though, continue to practice complete rejection of holidays.
- Women are not allowed to hold positions of spiritual authority over grown men.
- The "lost" will be condemned to an eternity without God. The vast majority believe in a literal hell, while others believe it is a metaphorical eternity outside of the light of God.
- Most churches forbid women from leading public worship when grown men are present. Women are generally not allowed to serve as elders, deacons, or preachers.
- Worship can take place at any gathering of church members. Baptism can take place in any suitable body of water allowing total immersion, and may be administered by any member at any time of the day or day of the week.
- There is no distinction between clergy and laity; all members are considered to be priests. Certain male members specialize in the field of teaching. These men are often called "Preachers" and, in mainstream Churches of Christ, are generally paid for their work.
- The Lord's Supper can be served anywhere members are gathered on Sunday; no particularly "sanctified" location nor specifically "authorized" individual is needed to administer communion (except that those administering communion are almost invariably male as a matter of tradition in most congregations). The practice is to partake of the Lord's Supper each Sunday.
- Divorce, except for reasons of marital unfaithfulness, is condemned. Remarriage in these cases is considered adultery.
- Abortion in most circumstances is considered to be a sin.
- Homosexual activity is seen as a sin.
They generally differentiate homosexual activity from homosexuality itself or homosexual people, often espousing the idea that while mere sexual orientation is not sinful per se, all homosexual acts are a choice. Many, however, denounce the idea of inherently homosexual people.
- Satan is considered to be a literal being, not just a symbolic or allegorical representation of evil. He is seen as literally tempting Christ's followers away from their chosen path, usually by the use of human agents. His power is considerable, although vastly inferior to that of God, who allows Satan to exist so that God's followers worship and follow Him as a true act of free will, not predestination.
- Many members of the Churches of Christ practice "closed fellowship" (fellowshipping only fellow members of the Churches of Christ), while others would extend the ties of fellowship to members of evangelical Protestant denominations. The issue of "fellowship" is a hotly debated one.
- Generally, a belief that Churches of Christ are not a denomination. Most believe denominationalism itself is sinful, and hold that Christ established only one church. This doctrine is similar to earlier beliefs of Roman Catholicism.
- In terms of eschatology, the Church of Christ is generally amillennial.
- The theology of Churches of Christ is basically Arminian, although probably not often referred to as such. Original Sin and the whole idea of Total Depravity from which it ensues are rejected, although the human predilection to sin due to temptations and the limitations of human nature is affirmed. Election and predestination are functions of the exercise of free will – those who freely choose God's way through Christ are elect and hence saved; others are lost. This decision can be changed based on the believer's behavior – he or she can consciously elect to cease following Christ and hence be lost ("fallen from grace").
God's sacrifice of Christ provided sufficient grace to save all persons from their sins, but it is incumbent upon them to accept Christ's will and follow Him for this grace to save them personally.
- A small subset of congregations are King James Only in orientation. Other, mostly older congregations use the KJV exclusively as a matter of tradition, but most congregations use a variety of translations of the Bible.
- Miraculous gifts – Most members of Churches of Christ do not believe supernatural miraculous events occur in current times. They believe that these gifts died with those who were given supernatural Spiritual gifts during the time of Jesus and the apostles.
- Several members of the Churches of Christ have claimed "conscientious objector" status during wartime. This opinion was "mainstream", at least in some circles, in the late 19th century and was the viewpoint frequently published in mainstream Church of Christ publications such as David Lipscomb's Gospel Advocate. This movement lost most of its currency in the Churches of Christ during World War II, and has been fairly uncommon since – the contemporary Church of Christ is not an historical peace church, but it is still listed as such by the US military for consideration of "conscientious objector" status. Most churches in the UK consisted overwhelmingly of objectors.
- Use of specialized vocabulary to circumvent common English usage which is in conflict with accepted doctrine.
  - "church" - The word is often left uncapitalized in the name "church of Christ" to emphasize that the churches are not a denomination.
  - "member of the church" - Many members of the Churches of Christ believe that only members of the Church of Christ are Christians. However, the English designation of "Christian" generally means anyone who calls himself a Christian; thus the euphemism "member of the church."
  - "obey the gospel" - be baptized
  - "religious" - Used instead of the word "Christian."
For example, a conservative member of the Church of Christ might say "Religious Book Store" or "Religious Music" instead of "Christian Book Store" or "Christian Music," on the premise that only "real Christians", those found within the fellowship of his group, would actually make "Christian Music" or write truly "Christian" books.
- "denomination" - Churches other than the Church of Christ, including the Roman Catholic and Orthodox churches.
- Words and phrases common to most evangelical churches are often absent or modified in the Churches of Christ.
  - "Altar call" becomes "invitation."
  - "Sanctuary" becomes "auditorium," because of the belief that it isn't the building that is holy.
  - "Pastor" is never used to mean "minister." The term "preacher," "evangelist," or "minister" is used instead; e.g. "youth pastor" becomes "youth minister." Consequently, the word "pastor" is rarely used to refer to elders.
  - "Minister of Music" is "song leader" or, in more progressive congregations, "worship leader" or "worship minister."
Because of the autonomous nature of Churches of Christ, practices vary greatly among congregations. As a whole this list reflects practices considered to be standard, with a focus on those beliefs that distinguish the Churches of Christ from Protestant groups.

Other Restoration Movement bodies
The Churches of Christ were advanced during the American Restoration Movement of the 19th century. As in the New Testament, this movement recognized the body as "The Churches of Christ" or "Christian Churches." After the American Civil War, there began to be divisions in this body over the issues of missionary societies and instrumental music in worship, which reached a head in 1906 when the two groups formally split, agreeing to be listed separately in the religious census then conducted by the Bureau of the Census. Those holding to the prohibition of instrumental music are the Churches of Christ of today.
Instrumental congregations began to divide in the 20th century during the fundamentalist response to modernism, which solidified in the 1960s into two groups: the Independent Christian Churches/Churches of Christ and the Disciples of Christ. Other groups related to the Restoration Movement were the Christian Connexion and The Christian Church, both of which merged into the Congregational Church during the 1930s and thus eventually became part of the United Church of Christ, a group now part of the Protestant mainstream and unrelated to the Churches of Christ.

Disputes within the Church
A major disagreement over the establishment of "institutions" at a level above that of the local congregations in order to serve works such as children's homes came to a head in the 1950s and 1960s. Today, those who disagree with this idea are referred to as non-institutional, or often by the pejoratives "anti-cooperation" or "anti." They represent approximately 15% of U.S. membership and are also represented by missionaries in other countries. What was called the International Churches of Christ (sometimes called "The Boston Movement," which was grounded in the Church of Christ "Crossroads Movement"), often labeled a cult by mainstream congregations, had its origins in certain congregations of the Church of Christ. Since the late 1980s, however, some Church of Christ leaders have repudiated the Boston Movement as an apostatized, schismatic cult; the Boston Movement in turn declared itself to be a faithful remnant being called out of a dead or dying church, namely the mainstream Churches of Christ. The Crossroads/Boston/ICOC movement saw tremendous growth in comparison to the congregations led by its "mainstream" Church of Christ critics. (See the Paden article, second link below under the "ICOC" heading, for a fairly impartial examination of this subject.)
Representatives of the ICOC and the mainstream Churches of Christ attended reconciliation meetings at the 2004 Abilene Christian University lectureships.

See Also
- Category:Universities and colleges affiliated with the Church of Christ
- International Churches of Christ

External links
General Websites:
- Church of Christ Network (http://www.church-of-christ.org/)
- Heartlight.org (http://www.heartlight.org/)
- What Do the Scriptures Say? (http://www.scripturessay.com)
- Bibleweb.com (http://www.bibleweb.com/) by Bill Blue. "Provides a detailed analysis of the beliefs and practices of the church of Christ, a non-denominational, non-institutional assembly (or gathering) of Christians who worship God in truth and spirit according to the pattern set forth in the New Testament of the Bible."
- TheBible.net (http://www.thebible.net/)
- PreachersFiles (http://www.preachersfiles.com/) - a site by G. E. Watkins and Kevin Cauley which contains hundreds of Bible studies and sermon outlines on a variety of Bible study topics.
Online Magazines:
- The Christian Chronicle (http://www.christianity.com/christianchronicle/) - a newspaper of the churches of Christ
- Christian Courier (http://www.christiancourier.com/) - investigating biblical apologetics, religious doctrine, and ethical issues
- Restoration Quarterly (http://www.restorationquarterly.org) - a magazine devoted to study of the Restoration Movement and Churches of Christ

Directories:
- Church of Christ Online Network (http://www.cocn.org/congreg.html) - an online directory of US Churches of Christ web pages
- Singapore Churches of Christ (http://www.citivision.org.sg/cocwho.htm)
- Global Directory of Christian Universities Affiliated with the churches of Christ (http://church-of-christ.org/universities/)
- Worldwide Church of Christ Locator (http://church-of-christ.org/churches)

International Churches of Christ:
- Selective amalgam of ICOC-related articles (http://www.icocinfo.org/)
- International Churches of Christ web site (http://www.disciplestoday.com/)
- reveal.org (http://www.reveal.org/) - a support group for former ICOC members
- From the Churches of Christ to the Boston Movement (http://www.reveal.org/library/history/paden.html) by Russell R. Paden. May 1994 thesis from the University of Kansas. An essential and thoroughgoing analysis and comparison of the history and development of the Church of Christ and its offshoot ICOC or Boston Movement faction, including the characteristics of "historylessness" and "traditionlessness." From the abstract: "This thesis argues that while the Boston Movement has introduced some practices that are foreign to and have origins outside the Churches of Christ, both bodies remain quite similar in doctrine and attitude. This conclusion is supported through an historical examination of the Churches of Christ and the Boston Movement detailing the forces that have shaped the attitudes and doctrines of both religious bodies."
http://academickids.com/encyclopedia/index.php/Church_of_Christ
The overriding consideration in family proceedings is the question of "what is in the best interests of the child/children?" In answering this question, the court and other professionals are guided by a set of criteria known as the Welfare Checklist. This page will set out where the Welfare Checklist can be found in statute and will focus on each criterion in greater detail.

Where can I find the Welfare Checklist?
The Welfare Checklist can be found in Section 1 of the Children Act 1989.

The Welfare Checklist Criteria
1. The ascertainable wishes and feelings of the child concerned (considered in the light of his age and understanding);
The court is required to take the wishes and feelings of the child into consideration. It is not defined in law at which age the court will begin to listen to the child, but the court will tend to place more weight on a child's wishes and feelings from the age of 11 or 12 onwards. However, it does depend on the individual circumstances of the child in question; the court will assess their maturity and understanding of the situation. Ordinarily it will be the role of CAFCASS to speak to the child and ascertain their wishes and feelings. In exceptional circumstances the judge may speak to the child themselves. It is important for the court to be satisfied that these are indeed the true wishes and feelings of the child and that they are not mirroring the views of a parent. It is important to be aware that the wishes and feelings of the child are viewed in conjunction with other factors and will not wholly dictate the outcome.

2. his physical, emotional and educational needs;
The court is required to consider the child's short-term and long-term physical, emotional and educational needs. It will consider which parent is best placed to provide for these, usually based on evidence that has been submitted to the court. Physical needs tend to be straightforward, whereas emotional needs may require more investigation.
A child's needs will change as they become older, and the court must therefore be satisfied that the parents can manage these changes and provide stability for the child at the same time.

3. the likely effect on him of any change in his circumstances;
The court is required to consider the potential impact of any change in circumstances on the child. The court will often take the decision that will cause the least disruption to a child's life. An example of this may be where the non-resident parent applies for residence of the child. The court will need to consider the potential impact that the change in residence would cause, e.g. a change of school or a change of social environment.

4. his age, sex, background and any characteristics of his which the court considers relevant;
The court is required to consider the child's age, cultural and religious background and other characteristics which are specific to the child and the wider family.

5. any harm which he has suffered or is at risk of suffering;
The court will examine harm that the child has suffered and harm that the child is at risk of suffering in the future. Harm is defined as "ill treatment or the impairment of health or development". The court will weigh up the potential risk to the child and issue an order which is reflective of this. The order could feasibly contain protective measures which are aimed at safeguarding the child. This particular criterion will require the court to examine allegations of domestic abuse.

6. how capable each of his parents, and any other person in relation to whom the court considers the question to be relevant, is of meeting his needs;
The court will want to ensure that both parents are putting the child first and are able to meet all the child's needs. This criterion will therefore require the court to consider the respective accommodation that both parents are able to provide and the extent to which both parents can meet the child's needs.
This will be case specific: it will depend on the particular needs of the child and the abilities of the parent. There is no assumption that a mother is better placed to meet a child's needs than the father.

7. the range of powers available to the court under this Act in the proceedings in question.
The court will consider every option and can make a wide range of orders, even if they have not been applied for. For example, there may be a case determining contact in which it emerges that the resident parent intends to move abroad permanently with the child without seeking the consent of the other parent with Parental Responsibility. The court may therefore think it appropriate to grant a prohibited steps order preventing the moving parent from leaving the jurisdiction.
https://childlawadvice.org.uk/information-pages/the-welfare-checklist/
Arizona lawmakers recently passed House Bill 2007 (HB2007), adding an aggravating circumstance to the list of 26 that were previously enumerated in the statute. The law became effective August 3, 2018. The newest aggravating circumstance is triggered when a defendant "uses a mask or other disguise to obscure the defendant's face to avoid detection" during a crime. To be clear, the new aggravating circumstance doesn't criminalize wearing a disguise. Rather, it allows for an increased punishment for those who are convicted of committing a crime while wearing a mask or disguise. Opponents of the law expressed concerns that less serious, non-violent offenders may be subject to enhanced penalties because they wore a mask during the conduct giving rise to their arrest. Before finding that the aggravating circumstance exists, a jury must find that the defendant was not only wearing a mask or disguise but also wearing it "to avoid detection." This may provide some relief to those who fear the aggravating circumstance would be applied in cases involving non-violent crimes in which a disguise was worn for reasons other than evasion. The most common crimes that will be impacted by HB2007 are robbery and burglary, since masks and disguises are often used in these offenses. This article includes a discussion of the new aggravated factor law pertaining to masks and disguises; other aggravated and mitigating factors in sentencing; the burden of proof for aggravated factors; examples of aggravated and mitigated factors; sentencing ranges and how penalties are imposed within them; and the role of a criminal defense attorney in the sentencing stage.

What constitutes a mask or disguise?
Currently, no statutory definition exists that describes what constitutes a mask or disguise as it pertains to this law.
Typically, when no definition exists under the law, the court will look to the literal definition, or to previous case law that addresses the question, based on the facts of the case. Significant weight will be placed on the intended purpose of the disguise or mask at the time of the criminal offense for purposes of deciding whether A.R.S. § 13-701(D)(26) applies. The court will determine whether the trier of fact has found, beyond a reasonable doubt, that the disguise was being worn by the defendant in an attempt to avoid detection.

What is an aggravated factor in sentencing?
An aggravated factor in sentencing is a circumstance surrounding a crime that makes it more serious, calling for more severe penalties. After a defendant is convicted of a crime, whether by judge or jury trial or by plea agreement, the court will move on to the sentencing phase. This is when aggravated and mitigated factors are considered by the court. The Arizona Criminal Code Sentencing Provisions include sentencing ranges within which the judge must sentence the defendant. The judge has discretion as to what level of penalties to impose following a conviction, as long as they fall within this statutory sentencing range. A sentencing range is based upon specific classifications of charges. The levels of penalties vary depending on the classification and type of charges. In general they include mitigated, minimum, presumptive, maximum, and aggravated. Mitigated sentences are the least severe within the range, while aggravated sentences hold the harshest penalties. For dangerous offenses the levels are minimum, presumptive, and maximum; more severe penalties fall within the maximum range for dangerous offenses. When aggravated factors exist, the judge will consider imposing the maximum sentence in these cases. Both dangerous and non-dangerous offenses have additional categories for repeat and historical offenses.
What are some other aggravated factors?
Some examples of aggravated factors under A.R.S. § 13-701(D) include:
- Displaying, threatening or using a deadly weapon during the crime;
- The crime involved an accomplice;
- The defendant was convicted of a prior felony within 10 years of the current conviction;
- During the crime the defendant impersonated a police officer;
- The defendant was paid to commit the crime;
- Immediately after the crime was committed, the defendant committed a hit and run; and
- The offense was a hate crime against a law enforcement officer.
The law outlines a list of 27 statutory aggravated factors. However, the court may consider other aggravating circumstances that are not listed in the law, including the defendant's character, criminal history or other relevant circumstances.

What proof is needed to impose an aggravated factor?
In order to admit an aggravated factor in sentencing, the prosecution needs to provide a showing beyond a reasonable doubt based on the facts and evidence. If the aggravated factor involves prior felony convictions, it will be up to the court to determine the validity of the prior convictions based on the information and facts introduced. The exception to this is if the defendant admits to, or validates, the aggravated factor.

What is the difference between a mitigated and aggravated factor?
A mitigated factor is the opposite of an aggravated factor. Mitigating circumstances are those factors that serve to reduce the penalties. The judge will weigh the mitigating factors presented against the aggravated factors to determine whether aggravated sentencing, or another level of sentencing, will apply. If only aggravated factors were presented and no mitigating factors existed, the judge will generally impose an aggravated sentence. If no aggravated factors were presented and one or more mitigating factors exist, the judge will usually impose a mitigated sentence. A.R.S. § 13-701(E) lists a total of 6 mitigating factors, including such things as age, duress, and a trivial amount of involvement in the crime. The court may also consider other relevant mitigating factors not on the statutory list, related to the defendant's character, background, or history.

How can a criminal defense attorney help me resolve my charges?
For any person accused of a crime and facing criminal charges, the ideal outcome is to get the charges dismissed. An effective criminal defense attorney will look for ways to make that happen throughout the criminal justice process. If that is not possible, the prosecution will likely ask you to enter a plea agreement. If an acceptable resolution cannot be reached, you still have the right to take your case to trial. A skilled Arizona criminal defense attorney can help you compile compelling mitigation evidence in hopes of receiving a lenient sentence. Judges will normally look at a defendant's background, including the defendant's contributions to society as well as their prior convictions, when fashioning a sentence. A judge, however, cannot sentence a defendant to the maximum statutory term without a jury finding that an aggravating circumstance exists. Similarly, a judge must document at least one mitigating circumstance if he intends to sentence the defendant to the minimum allowable statutory sentence. The prosecution looks for evidence of aggravated factors in sentencing. However, they are not obligated to help you find or present mitigated factors to offset aggravated factors or reduce your sentence. This is also the case for the presiding judge, who has no obligation to find or introduce mitigated factors to reduce penalties for the defendant. An experienced criminal defense attorney like James Novak will be your voice, look for mitigating circumstances, and provide a showing of the validity of these factors if they are applicable.
If you face felony charges, especially if they involve aggravating circumstances, it is important that you obtain strong criminal defense representation. James Novak of the Law Office of James Novak, PLLC is an experienced criminal defense attorney and former prosecutor. Attorney James Novak provides his clients with upfront and honest advice from the beginning of their case throughout the entire process. If retained, he will protect your rights and defend you against your charges. With zealous representation and a keen knowledge of the relevant procedural and substantive laws, Attorney James Novak diligently represents clients in all types of Arizona criminal cases, including Arizona robbery crimes. James Novak offers a free initial consultation for those who face active charges in Tempe, Mesa, Gilbert, Scottsdale, Chandler, and Phoenix, AZ. To learn more, and to speak with Attorney Novak about how he can help you defend against the charges you are facing, call (480) 413-1499 or complete the contact form on our website.
Additional Resources:
- A.R.S. § 13-701 Aggravated Factors in Sentencing
- A.R.S. § 13-702 First Time Felony Offender Sentencing
- Arizona Criminal Code Sentencing Provisions 2018-2019
- Armed Robbery Laws
- Maricopa County Sheriff’s Office – Inmate Information for Families
- Arizona Department of Corrections – Inmate Programs and Services
- Maricopa County Superior Court – Warrant FAQs
- Maricopa County Clerk of Court – Return and Release of Bond Monies
Other Articles of Interest from The Law Office of James Novak’s Award Winning Blog:
https://blog.arizonacriminaldefenselawyer.com/new-law-makes-wearing-mask-or-disguise-while-committing-a-crime-aggravated-factor-in-sentencing/
Aubrey Dennis ADAMS, Jr. v. Richard L. DUGGER, Secretary, Florida Department of Corrections. No. 88-7140 (A-875). Supreme Court of the United States May 3, 1989 On petition for writ of certiorari to the United States Court of Appeals for the Eleventh Circuit. The application for stay of execution of sentence of death presented to Justice KENNEDY and by him referred to the Court is denied. The petition for a writ of certiorari is denied. Justice BRENNAN, with whom Justice MARSHALL joins, dissenting. I Adhering to my view that the death penalty is in all circumstances cruel and unusual punishment prohibited by the Eighth and Fourteenth Amendments, Gregg v. Georgia, 428 U.S. 153, 227, 96 S.Ct. 2909, 2950, 49 L.Ed.2d 859 (1976), I would grant the motion for a stay of execution and the petition for a writ of certiorari and vacate the death sentence in this case. II Even if I did not take this view, I would grant the petition to consider whether the sentencing procedure in this case violated the Eighth Amendment requirement that a convicted defendant have the opportunity to present any relevant mitigating evidence—not just statutory mitigating factors—at the sentencing hearing. Hitchcock v. Dugger, 481 U.S. 393, 107 S.Ct. 1821, 95 L.Ed.2d 347 (1987). The Florida Supreme Court declined to find petitioner's Hitchcock claim procedurally barred and addressed this claim on the merits. Adams v. State, 543 So.2d 1244, 1247 (1989). Likewise, the District Court addressed the merits, determining that petitioner's claim "is not a proper Hitchcock issue." No. 89-67-Civ.-Oc-16, p. 5 (MD Fla., May 3, 1989). Moreover, since Hitchcock was not decided until after petitioner had filed his second federal habeas petition, I detect no abuse of the writ in petitioner raising this claim now for the first time.
At the time petitioner was sentenced, Florida's "standard jury instructions included a charge which had the effect of limiting the jury's consideration to the statutory aggravating and mitigating circumstances," 543 So.2d, at 1247, reflecting what may have been the general belief in the State, based upon decisions such as Cooper v. State, 336 So.2d 1133 (Fla.1976), that mitigating factors not specifically itemized in the statute, Fla.Stat. § 921.141 (1975), were not to be taken into account in sentencing. Reasonably believing that state law prohibited a jury or court from considering non-statutory factors at sentencing, a Florida lawyer might rationally have declined to divert resources from other aspects of case development into the investigation, development, and presentation of evidence of such factors. Indeed, two of petitioner's counsel have filed affidavits to the effect that, operating on the basis of what they understood to be the law at the time, they did not pursue nonstatutory mitigating evidence because they did not believe it would be admissible. I would grant certiorari in this case to consider whether state-generated disincentives to the pursuit and presentation of mitigating evidence infected petitioner's sentencing with constitutional error. The trial judge in this case initially denied petitioner's request for a sentencing instruction to the jury that they might consider nonstatutory mitigating factors. After the prosecutor's closing argument at sentencing, however, the trial judge announced that he had changed his mind, and that because the prosecutor's closing had listed statutory mitigating circumstances as the only mitigating factors the jury could consider, he would give the requested instruction that " '[t]he aggravating circumstances which you may consider are limited to those upon which I've just instructed you. However, there is no such limitation upon the mitigating factors you may consider.' " See 543 So.2d, at 1247. 
This change of mind appears to have come too late to allow petitioner's counsel to develop the mitigating evidence that the court's prior ruling, and existing Florida law, had reasonably led them to believe would be inadmissible. A belated instruction to consider mitigating evidence cannot cure a defect the effect of which had been to ensure that there is little or no nonstatutory mitigating evidence for the jury or court to consider.
https://m.openjurist.org/490/us/1059
Status of case: Unreported The decision Upper Tribunal (Immigration and Asylum Chamber) Appeal Number: HU/04856/2017 THE IMMIGRATION ACTS Heard at Field House Decision & Reasons Promulgated On 14 January 2019 On 13 February 2019 Before DEPUTY UPPER TRIBUNAL JUDGE NORTON-TAYLOR Between Ms Charmaine Camille Morris (ANONYMITY DIRECTION NOT MADE) Appellant and THE SECRETARY OF STATE FOR THE HOME DEPARTMENT Respondent Representation: For the Appellant: Mr D Sellwood, Counsel, instructed by Bindmans LLP For the Respondent: Mr D Clarke, Senior Home Office Presenting Officer DECISION AND REASONS 1. This is a challenge by the Appellant against the decision of First-tier Tribunal Judge Pacey (the judge), promulgated on 15 June 2018, by which she dismissed the Appellant's appeal against the Respondent's refusal of her human rights claim. 2. In essence the Appellant's claim had been based on the following. She had had a very traumatic life in Jamaica. Having left that country and come to the United Kingdom in December 1999, she established herself here, albeit as an overstayer, and lived with three of her four children and, over the course of time, her six grandchildren. On the basis of her circumstances she relied on paragraph 276ADE(1)(vi) of the Immigration Rules and on Article 8 in its wider context as regards her private and family life in the United Kingdom. The judge's decision 3. The judge deals with the paragraph 276ADE issue. At - she accepts the fact of the Appellant's very difficult childhood, going on to state that it was the Appellant's "choice" to cut all ties with Jamaica after she had come to this country. It is said that the Appellant had spent most of her life in Jamaica and had not severed all cultural links. 4.
The judge concluded that the Appellant could rely on her own resourcefulness in order to help re-establish herself back in Jamaica. It was said that the Appellant had failed to provide evidence to show that she would not be able to find employment of one sort or another. 5. Having rejected the arguments under the Rules, the judge goes on to look at the Appellant's circumstances in this country. Reference is made to the best interests of the Appellant's grandchildren, and the fact that there was a close relationship between them all. It is noted that the Appellant was not the sole carer for any of the grandchildren. Although it was said to be understandable that the Appellant's own children would want the grandchildren to be cared for by a grandmother, there would be no real disadvantage to those children if they had to have professional childcare instead. The judge concludes that the Appellant had not stepped into the shoes of the parents of the grandchildren. 6. Overall, the wider Article 8 claim is also rejected. The grounds of appeal and grant of permission 7. The general thrust of the grounds is as follows. There are said to be errors in respect of the paragraph 276ADE issue, namely that the judge should not have taken account of any "choice" made by the Appellant to sever ties with Jamaica, that relevant factors had not been taken into account adequately or at all, and that the judge had failed to carry out a proper assessment according to the relevant case law (that being Kamara EWCA Civ 813). 8. The second aspect of the grounds relates to a challenge to the wider Article 8 issue. Most significantly, it is said that the judge failed to have any regard to the report of an independent social worker, Ms Pearce, and that this constituted a material error of law in respect of an assessment of the grandchildren's best interests. 9. Permission to appeal was granted by First-tier Tribunal Judge Hodgkinson on 1 August 2018. The hearing before me 10.
At the outset of the hearing Mr Clarke fairly accepted that the judge had failed to deal with the independent social worker's report and that this constituted an error of law insofar as the issue of the grandchildren was concerned. However, he opposed the Appellant's challenge in relation to paragraph 276ADE. 11. Mr Sellwood relied on those grounds relating to this particular issue. He submitted that the judge had not in substance carried out a broad evaluative judgment. He emphasised that all of the relevant factors set out in the grounds had been included in his skeleton argument that was before the judge. He emphasised the fact that the Respondent himself had acknowledged that there were employment difficulties in Jamaica. Aspects of the independent social worker's report related to the Appellant's own view of having to return to Jamaica and this evidence had not been referred to. No proper account had been taken of the Appellant's actual history as that related to the traumatic experiences of her childhood. 12. For his part Mr Clarke quoted from paragraph 14 of Kamara. The judge was entitled to have concluded as he did, particularly as the burden of showing very significant obstacles rested with the Appellant. It was difficult to see quite how her past experiences would be relevant to an assessment of her situation on return to Jamaica. In respect of any security issues in Jamaica, he noted that there was no protection claim here. The fact that the Appellant had spent very many years of her life in Jamaica before coming to this country was clearly relevant as was her potential ability to find work. 13. In reply, Mr Sellwood submitted that the judge had in fact factored in the Appellant's "choice" to sever ties with Jamaica, and this was wrong. The issue of security of returnees in Jamaica went to the issue of whether the Appellant would be considered an insider or an outsider. 
The Appellant's subjective views of trying to re-establish herself in Jamaica were clearly relevant and these had not been adequately dealt with by the judge. Decision on error of law 14. In light of Mr Clarke's properly made concession, I find that the judge materially erred in failing to address the independent social worker's report, particularly as that related to the circumstances of the Appellant's grandchildren and their relationship with her. On this basis alone I would set aside the First-tier Tribunal's decision. 15. However, it is important for me also to deal with the issue of paragraph 276ADE as well. I conclude that there are also material errors in respect of this matter. 16. At the time of the hearing before the judge the Court of Appeal's judgment in Kamara had been out for a considerable period of time. It is perhaps unfortunate that no reference to the Court's guidance was made by the judge, despite it being expressly referred to in Mr Sellwood's skeleton argument. Notwithstanding this, it is almost always more important to look at substance over form, and this I have done when assessing the body of the judge's decision. 17. In my view there is a strong possibility that the judge took account of and placed weight on the fact that the Appellant had made a conscious decision to sever all ties with Jamaica after she left that country in 1999. I base this on what I consider to be a reasonable reading of . This represents something more than a simple statement of bare fact: it appears to me as though this was an aspect (by no means a determinative one) of the assessment being carried out by the judge. I conclude that the so-called "choice" made by the Appellant was irrelevant for the purposes of the assessment. 18. First, in the circumstances of the Appellant's case, it is entirely understandable that she would have wanted to have left behind any contacts with the country in which she suffered so much in the past. 19. 
Second, in any event, choice and/or motive does not come into play when carrying out a broad evaluative assessment of the individual's circumstances. Rather, it is a question of taking into account and weighing up subjective and objective matters in the round. 20. Perhaps more importantly than the first point is my conclusion that the judge has failed to deal adequately with the Appellant's traumatic experiences whilst in Jamaica when undertaking the assessment of her circumstances on return. I fully appreciate that she has accepted the fact of those experiences (). However, what was important in this case was for those circumstances to be actually weighed up. It is clear that not only was there strong evidence from the Appellant about her subjective fear and anxieties about returning, but that this issue was also dealt with in the independent social worker's report, a source that has been overlooked by the judge both in respect of this issue and the best interests of the Appellant's grandchildren. 21. The country information relating to the potential security concerns of returnees (of which the Appellant would be one) was in my view a relevant factor and one that was not addressed by the judge. It is of course the case that there is no protection claim here and there is nothing to show that any and all returnees would face a risk from criminals. Having said that, the reality of the security situation combined with the Appellant's subjective fears was something that needed to be weighed up when considering whether the Appellant would consider herself or be considered by others as an "insider" or an "outsider". This has not been done. 22. There are other factors that the judge has taken account of which would point against the existence of very significant obstacles, and as I indicated to the parties at the hearing, my view on the error of law has been reached by a relatively narrow margin. 
If I had taken the view that this aspect of the Appellant's case would be bound to fail in any event, I would not of course regard any errors as being material. However, there is enough in the evidence to show that what I regard as being an erroneous approach by the judge had a genuine impact on the outcome. 23. In light of the above I set the judge's decision aside in respect of both issues in this appeal. Disposal 24. There was a discussion with the representatives as to what should happen next. Mr Sellwood was of the initial view that the matter could and should be retained in the Upper Tribunal for a resumed hearing. Mr Clarke suggested that fairly significant factual findings would need to be made in light of written and quite possibly oral evidence, and that remittal was appropriate. 25. Having considered this matter with care and with reference to paragraph 7.2 of the Practice Statement, I have concluded that this appeal should be remitted to the First-tier Tribunal. 26. It would appear to be the case that fairly significant findings of fact are indeed needed: there will have to be an assessment of oral evidence relating to both the main issues in the case; there will need to be findings on the independent social worker's report which until now has been overlooked; there are other related issues which require clear findings. 27. On top of this is the fact that the Appellant and all of her family members live up in Birmingham. It is important that the Appellant has the opportunity of presenting her evidence on its best footing and in my view this would be best facilitated by sending the case back to the First-tier in the home city rather than requiring everybody to come down to London (I have not forgotten that the Upper Tribunal also sits in Birmingham, but in light of the factual findings issue this possibility has not altered my view on the correct course of action.) 28. 
There is no good reason to disturb the judge's finding at as to the Appellant's childhood experiences, and I expressly preserve it. 29. I issue relevant directions to the First-tier Tribunal, below. Notice of Decision The decision of the First-tier Tribunal contains errors of law and I set it aside. I remit this appeal to the First-tier Tribunal. No anonymity direction is made. Directions to the First-tier Tribunal 1. This appeal is remitted to the First-tier Tribunal, for re-hearing at the Birmingham centre; 2. The appeal shall not be re-heard by First-tier Tribunal Judge Pacey; 3. Judge Pacey's finding at is preserved; 4. The relevant issues in this appeal are paragraph 276ADE(1)(vi) and Article 8 in its wider context, including the family ties in the United Kingdom.
https://tribunalsdecisions.service.gov.uk/utiac/hu-04856-2017
Welfare of children In family proceedings involving children, the courts must treat the welfare of the child as the paramount concern. The child's welfare is their priority. What is the Welfare Checklist? When the family court is making a decision on matters that will affect a child, it is required to treat the welfare of the child as the paramount consideration. The welfare checklist consists of seven statutory criteria that the courts must consider under the Children Act 1989 when reaching a decision in cases involving children. What are these criteria? The seven criteria set out in the welfare checklist under s1(3) Children Act 1989 are:
- The ascertainable wishes and feelings of the child concerned
- The child's physical, emotional and educational needs
- The likely effect on the child if circumstances changed as a result of the court's decision
- The child's age, sex, background and any other characteristics which will be relevant to the court's decision
- Any harm the child has suffered or may be at risk of suffering
- The capability of the child's parents (or any other person the courts find relevant) of meeting the child's needs
- The powers available to the court in the given proceedings
1. The ascertainable wishes and feelings of the child concerned The court must consider the wishes and feelings of the child, taking into account the child's age and level of understanding in the circumstances. This will normally be determined by the Children and Family Court Advisory and Support Service (CAFCASS) or social services, and reported to the court. In some cases, a judge may speak directly with a child to determine their wishes and feelings if this is deemed necessary. The court will take into account whether or not a child's wishes and feelings are their own, or whether outside factors may have influenced them. There may also be a conflict of opinion between the parents'/guardians' views and that of the child.
The court will balance the views of the parties concerned, including the views of a child who is of an understanding age and mature enough to form their own opinions. 2. The child's physical, emotional and educational needs The court will consider who is in the best position to provide for the child's emotional, physical and educational needs. A child's emotional needs can be more difficult to deal with, and the court will consider who is best able to provide for the emotional needs of the child – both short term and long term. 3. The likely effect on the child of changes in circumstances The potential impact of changes to the child's life will be considered. The courts will aim to make an order that causes the least disruption to a child's life; however, this will be balanced against the other factors to be considered. 4. The child's age, sex, background and other relevant characteristics The court will consider specific issues such as religion, race and culture when making a decision about a child. They may also take the parents'/guardians' hobbies and lifestyle choices into account if they feel these will impact the child's life, either now or in future. 5. Risk of harm to the child The courts will look at the risk of harm to the child. This means immediate risk of harm, as well as the risk of harm in the future. 'Harm' includes physical, emotional and mental harm. The courts will weigh up the potential risk of harm to the child in future and make an order as appropriate. An order may include safety measures to protect the child. 6. Parents' ability to meet the child's needs The courts will consider how able each parent is to care for the child and to meet their particular needs. This will be subjective and depend on the facts and circumstances of each case – the needs of the child and the abilities of the parents concerned. 7.
The range of powers available to the courts The court must weigh up all the factors under the welfare checklist, and consider all available orders within their discretion. It will then make the best order available that is in the best interests of the child. We strongly advise you to take expert legal advice from specialist family lawyers if you have any concerns about family proceedings involving a child.
https://www.inbrief.co.uk/child-law/child-welfare-checklist/
With remote or hybrid working becoming commonplace, many workers no longer need to worry about commuting regularly, if at all. Some are now reconsidering the pros and cons of where they live. The property market remains buoyant thanks partly to individuals and families relocating from cities to greener pastures, where they can afford larger properties at a more affordable price than in metropolitan centres. It is safe to assume many will look North when eyeing up a new place to live, especially if they are prioritising value-for-money housing. For a separated parent, there’s more to worry about than good schools and transport links: they also have to consider the legal implications of moving house with a child or children. Do I need permission to move away from my child’s other parent? ‘Internal relocation’ is the term used for a parent or parents relocating a child from one part of the UK to another. If you are divorced or separated and are named in a child arrangements order (CAO) as a person with whom the child is to live, technically you do not need the permission of the child’s other parent to relocate within the UK in the same way you would if you were moving abroad. However, you must still be able to comply with shared care arrangements. There are practicalities to consider: as a relocating parent, you will need to get permission from the other parent for the child to change schools. If the other parent says they have a problem with you moving a much greater distance from them, there are likely to be problems in the co-parenting relationship. Getting the consent of the other parent, or otherwise obtaining the court’s permission, is therefore almost always advisable when moving house. Getting permission from the court Child welfare and their ‘best interests’ will always be the court’s paramount consideration and in 2015 the court established there is no difference in the basic approach between parents moving within and outside the UK. 
In no strict order, factors considered when determining a child’s ‘best interests’ include: - any harm the child has suffered or could be at risk of suffering - the child’s physical, emotional and educational needs - how any change in circumstances will affect them - their age, sex, background etc. - how the child feels about their living situation (dependent on their age) - the ability of each parent to care for them While there is no presumption in favour of the parent applying to relocate, the potential effect of refusing permission for the applicant can also carry weight. The court will always consider whether there is genuine motivation for the move, or if it is being done as a means of preventing the other parent from seeing their child. The potential disruption to contact with other family members will also be taken into consideration. Ultimately, a judge presiding over a relocation application must determine what is in the child’s best interests. The consideration of the welfare test requires a holistic balancing exercise. The court must balance all the relevant factors (which will change from case to case), weighing those factors against each other to determine which outcome most serves the child’s welfare. How has Covid-19 affected rules around relocating? A case from 2021 confirmed that regardless of the surrounding circumstances, even an unprecedented global pandemic, child welfare will always be paramount when parents relocate within the UK. The application concerned a girl aged three and a boy aged nine; the boy had been diagnosed with autism. The children’s parents separated in 2017, and shared childcare in London between them. When the country went into lockdown in March 2020, the father agreed the mother and children could temporarily move to the countryside, where there would be more space for the children to enjoy. 
By the summer of 2020, the mother had decided she wanted to remain there permanently, partly because of her new partner living nearby. When she made her court application, the father opposed it as he wanted to maintain close connections with the children and was unable to relocate due to work commitments. This case was unusual as it dealt with the two children differently. The father agreed the girl should remain with her mother and spend “as much time as possible” with him, but the court decided to minimise disruption to the boy by ordering he return to London. This meant he could stay at the primary school he had attended since reception. Weighing all the circumstances of the case against the welfare checklist listed above, the judge decided that the balance came down firmly in favour of the boy living with his father and spending “as much time as possible” with his mother and sister. This case confirms the importance placed on child welfare, even if it means one sibling living with their father and the other with their mother. The judge was uneasy about separating the boy and girl, but the order provided for them to spend every weekend and all their school holidays together, even amidst the ‘stay at home’ Covid-19 guidance.
https://www.thebusinessdesk.com/yorkshire/news/2091241-the-effect-of-covid-19-on-divorce-law-separated-parents-moving-house
Careless driving tickets, on their own, typically do not give rise to a license suspension for the defendant, and result only in fines and two points on a New Jersey license. However, pursuant to N.J.S.A. 39:5–31, a New Jersey judge may suspend your license where there has been a willful disregard for the safety of other persons or property. Such a state of mind may be proven by careless driving tickets involving an accident. If you are texting or talking on your phone and accidentally crash into a parked car, you would be vulnerable not only to careless driving charges, but also to the suspension of your driver's license. Whether your careless driving charges are the sole charges or you are also facing charges for speeding, use of a cell phone, or driving while intoxicated, it is imperative that you fight against any license suspension at sentencing. Call the Law Offices of Jonathan F. Marshall at 1 (877) 450-8301 and speak with an experienced traffic attorney about your pending careless driving ticket. What Factors Will the Judge Consider in Suspending My License for Careless Driving under N.J.S.A. 39:4-97? Under N.J.S.A. 39:5–31, as enunciated in State v. Moran, there are certain factors that the judge must consider before imposing any license suspension at sentencing.
These factors include:
- The nature and circumstances of the defendant's conduct, including whether the conduct posed a high risk of danger to the public or caused physical harm or property damage;
- The defendant's driving record, including the defendant's age and length of time as a licensed driver, and the number, seriousness, and frequency of prior infractions;
- Whether the defendant was infraction-free for a substantial period before the most recent violation or whether the nature and extent of the defendant's driving record indicates that there is a substantial risk that he or she will commit another violation;
- Whether the character and attitude of the defendant indicate that he or she is likely or unlikely to commit another violation;
- Whether the defendant's conduct was the result of circumstances unlikely to recur;
- Whether a license suspension would cause excessive hardship to the defendant and/or dependents;
- The need for personal deterrence; and
- Any other relevant factor clearly identified by the court.
How Can an Attorney Help With My Careless Driving Ticket? Beyond helping you avoid points on your driver's license, legal representation may also help you avoid a suspension altogether, particularly if you face careless driving charges involving an accident. If you have a less-than-pristine driving record, you may face a license suspension if you are involved in an accident and are issued a careless driving ticket. By placing mitigating factors on the record for the judge to consider, the Law Offices of Jonathan Marshall may be able to assist you in keeping your license. If you or a family member are facing careless driving charges, contact our office today and speak with an experienced license suspension and traffic ticket attorney about how we can help.
http://njlicensesuspensionattorney.com/careless-driving-with-accident/
Following the case of Mitchell v News Group, the Courts have taken a very strict view indeed of failures to comply with Court directions. This appears to be loosening somewhat. Firstly we would refer the reader to the article on the Nearly Legal Housing Law Website – This is what we always meant – and especially the piece concerning the amendments to the Civil Procedure Rules on 5th June 2014 – see the article at:- http://nearlylegal.co.uk/blog/2014/05/this-is-what-we-always-meant/

The piece on Nearly Legal also refers to the Judgment of Jackson LJ (yes, that Jackson!) in Hallam Estates Limited v Baker EWCA Civ 661. In terms of the amendment to the Civil Procedure Rules, Rule 3.8 (4) will now read:-

(4) In the circumstances referred to in paragraph (3) and unless the Court orders otherwise, the time for doing the act in question may be extended by prior written agreement of the parties for up to a maximum of 28 days, provided always that any such extension does not put at risk any hearing date.

Groarke -v- Fontaine

Additionally, in a case not mentioned in the Nearly Legal piece, Groarke v Fontaine EWHC 1676 (QB), 22 May 2014, concerning a road traffic accident and the question of potential contributory negligence by the Claimant, the Defendant made a very late application (at the outset of the trial!) to amend his Defence. This was initially refused but Sir David Eady, sitting as a High Court Judge, allowed the Defendant's appeal, stating:-

The District Judge (like myself) was doing his best to apply the relevant principles, as expounded in the recent authorities, to the facts of this case. Having considered his reasons, however, my own respectful conclusion is that in examining the trees he ultimately failed to see the wood. Insofar as he balanced the potential prejudice to the Claimant against that to the Defendant, the exercise yielded the wrong outcome.
I believe that justice and fairness required that the amendment should have been allowed so that 'the real dispute' between the parties could be adjudicated upon. It is true that the burden was on the Defendant to establish not only that this objective was desirable but also that it should, in the particular circumstances, prevail. I can see, however, no good reason why it should not. There was no countervailing prejudice to the Claimant. In particular there was no need for any adjournment, any further delay or additional cost. The court was able to accommodate the issues of causation (including those relevant to contribution) on the appointed trial date and (whether it was appropriate to do so or not) the District Judge actually stated what his conclusion would have been on contributory negligence. Thus no court time would have been wasted or court resources diverted. Correspondingly, no other court users would have been inconvenienced. The only concrete result of the District Judge's refusal was that, at least on his (obiter) finding, the Claimant was to gain a windfall payment unjustly (para 32).

See: Groarke -v- Fontaine Judgment

Denton -v- T H White Limited and Others

Further to the case of Groarke v Fontaine, the Court of Appeal have addressed this matter again and sought to give a definitive ruling on the issue in the case of Denton v T H White Limited and Others EWCA Civ 906. This case involved three appeals (all heard together) concerning relief from sanctions for parties who had failed to comply with Court directions. The Master of the Rolls and LJ Vos (giving the Judgment of the Court of Appeal) stated as follows:-

24…We hope that what follows will avoid the need in future to resort to the earlier authorities.

25.
The first stage is to identify and assess the seriousness or significance of the 'failure to comply with any rule, practice direction or court order' which engages rule 3.9 (1)…

26…In these circumstances, we think it would be preferable if in future the focus of the enquiry at the first stage should not be on whether the breach has been trivial. Rather, it should be on whether the breach has been serious or significant….

27…At the first stage, the court should concentrate on an assessment of the seriousness and significance of the very breach in respect of which relief from sanctions is sought. We accept that the court may wish to take into account, as one of the relevant circumstances of the case, the defaulter's previous conduct in the litigation (for example, if the breach is the latest in a series of failures to comply with orders concerning, say, the service of witness statements). We consider that this is better done at the third stage…rather than as part of the assessment of seriousness or significance of the breach.

28. If a judge concludes that a breach is not serious or significant, then relief from sanctions will usually be granted and it will usually be unnecessary to spend much time on the second or third stages. If, however, the court decides that the breach is serious or significant, then the second and third stages assume greater importance.

29. The second stage cannot be derived from the express wording of rule 3.9 (1), but it is nonetheless important particularly where the breach is serious or significant. The court should consider why the failure or default occurred: this is what the court said in Mitchell at para 41.

30. It would be inappropriate to produce an encyclopaedia of good and bad reasons for a failure to comply with rules, practice directions or court orders…

31.
The important misunderstanding that has occurred is that, if (i) there is a non-trivial (now serious or significant) breach and (ii) there is no good reason for the breach, the application for relief from sanctions will automatically fail…rule 3.9 (1) requires that, in every case, the court will consider 'all the circumstances of the case, so as to enable it to deal justly with the application'….

32…Although the two factors may not be of paramount importance, we reassert that they are of particular importance and should be given particular weight at the third stage when all the circumstances of the case are considered. That is why they were singled out for mentioning in the rules. It is striking that factor (a) is in substance included in the definition of the overriding objective in rule 1.1 (2) of enabling the court to deal with cases justly; and factor (b) is included in the definition of the overriding objective in identical language at rule 1.1 (2) (f)…

…

34. Factor (a) makes it clear that the court must consider the effect of the breach in every case. If the breach has prevented the court or the parties from conducting the litigation (or other litigation) efficiently and at proportionate cost, that will be a factor weighing in favour of refusing relief. Factor (b) emphasises the importance of complying with rules, practice directions and orders. This aspect received insufficient attention in the past. The court must always bear in mind the need for compliance with rules, practice directions and orders, because the old lax culture of non-compliance is no longer tolerated.

35. Thus the court must, in considering all the circumstances of the case so as to enable it to deal with the application justly, give particular weight to these two important factors. In doing so, it will take account of the seriousness and significance of the breach (which has been assessed at the first stage) and any explanation (which has been considered at the second stage).
The more serious or significant the breach the less likely it is that relief will be granted unless there is a good reason for it….

…

38. It seems that some judges are approaching applications for relief on the basis that, unless a default can be characterised as trivial or there is a good reason for it, they are bound to refuse relief. This is leading to decisions which are manifestly unjust and disproportionate. It is not the correct approach and is not mandated by what the court said in Mitchell…

40…Nor should it be overlooked that CPR rule 1.3 provides that 'the parties are required to help the court to further the overriding objective'. Parties who opportunistically and unreasonably oppose applications for relief from sanctions take up court time and act in breach of this obligation.

41…The parties should in any event be ready to agree limited but reasonable extensions of time up to 28 days as envisaged by the new rule 3.8 (4).

Lord Justice Jackson, whilst arriving at the same conclusion in all three cases, came to that conclusion in a slightly different way (just to continue the complication of matters). He stated:-

85. I take a somewhat different view, however, in relation to the third stage. Rule 3.9 requires the court to consider all the circumstances of the case as well as factor (a) and factor (b). The rule does not require that factor (a) or factor (b) be given greater weight than other considerations.

We wait to see whether this attempt to clarify the issues really has achieved that objective.
http://www.communitylawpartnership.co.uk/news/case-law-on-compliance-with-directions-post-mitchell-update
If you plead guilty to a federal criminal offense, you will enter your plea at a hearing in front of the judge overseeing your case. The judge will ask you questions to determine whether your guilty plea is voluntary and knowingly entered, and will generally accept your plea. Your case then moves on to the next phase before it can be resolved: sentencing. The federal sentencing process can be quite different from sentencing in a state-level criminal case. There are additional requirements and procedures to follow, so it's critical that you have an attorney on your side who can advocate for your rights throughout the sentencing process.

How Does a Judge Decide Your Sentence?

Both federal law and federal criminal procedure dictate what a judge must consider in imposing a fair and just sentence. First, the court should examine any of the relevant sentencing factors set out in federal law, which include the following:
- The nature and circumstances of the criminal offense
- The character and history of the defendant
- The need for the sentence to serve as punishment, to reflect the severity of the crime, and to encourage respect for the law
- The need to deter future criminal conduct
- The need to protect the public from future acts of the defendant
- The need of the defendant for medical care, education, training, or other effective correctional treatment
- What sentences are available
- The need for restitution to any victims
- Avoidance of sentencing disparities between the defendant and others in similar circumstances with similar convictions

In addition to considering many factors regarding the offense, the defendant, and comparable cases, the judge must consider the Federal Sentencing Guidelines for the offense in question. These guidelines are advisory and do not restrict the judge to a particular sentence for a particular offense.
However, the guidelines factor in the severity of the crime (the "offense level") with your prior criminal history and suggest a range for a just sentence. After considering the guidelines range for the particular circumstances of your case, the judge can then decide on a sentence within that range, or above or below it, based on all additional information at the judge's disposal. Finally, the court will receive several filings for consideration, which can include:

Presentence report (PSR) – The PSR is not prepared by an attorney in the case, but instead by the federal probation office. This report will include various facts about you and your case for the judge to consider. The probation officer will interview you to gather information to include in the report, and you always want an attorney present during such interviews. Your attorney will have a chance to review the PSR and object to any inaccuracies, omissions, or falsehoods that have been included.

Sentencing memorandum from the prosecutor – The prosecutor can submit certain information about you and legal arguments, generally in support of a stricter sentence.

Sentencing memorandum from the defense – Your attorney can submit her own arguments to mitigate the sentence, including character references and expert analysis of your behavior or mental state, or to challenge any wrongful claims made by the prosecutor.

Once she has reviewed everything, a judge will announce your sentence at a sentencing hearing. Judges have discretion when determining federal sentences, except that they may not go below a mandatory minimum sentence, so having an attorney who knows how to protect your rights and advocate for you during sentencing can significantly improve the outcome of a sentencing hearing. If you are facing any type of federal criminal charges, it is imperative to seek assistance from a law firm that has extensive experience dealing with the federal criminal justice system.
The attorneys at Koffsky & Felsen, LLC have worked as a federal prosecutor and a federal defender, so you can trust that we will skillfully guide you through every step of the often-complex federal criminal process. We not only work to defend against your charges; we will also provide aggressive sentencing advocacy when needed. Please call our office for help today.
https://koffskyfelsen.com/brief-look-federal-sentencing-process/
In Australia, a person arrested under an extradition arrest warrant can be remanded in custody or on bail. However, in contrast to usual domestic criminal practice, a person will not be released on bail unless there are special circumstances. Last year, two matters adjudicated by the Federal Court provided greater guidance on the 'special circumstances' requirement and confirmed the limited situations in which bail may be granted. In Tsvetnenko v United States of America ('Tsvetnenko') and Rivas v Republic of Chile ('Rivas'), following arrest and remand in custody, the applicants applied for bail claiming that special circumstances existed. The magistrates in both instances refused the applications and judicial review was sought before a single judge in the Federal Court, and again before the Full Court in Tsvetnenko. The original decisions were upheld on review.

Background

In Tsvetnenko extradition was sought for charges related to wire fraud, identity theft and money laundering. Before the magistrate, the applicant identified a number of considerations for the bail application, namely that: the offences were 'bailable offences' under the laws of Western Australia and the United States, the 'serious risk of deterioration' of the applicant's health, the applicant's supportive family and two dependent young children, and that the applicant was not a fugitive from justice nor a flight risk. In Rivas, extradition was sought for charges of aggravated kidnapping. Before the magistrate, the applicant identified a number of considerations for the bail application including: the applicant's poor health, and the complexity, delay and prospects of success in respect of the substantive extradition matter.
The leading authority on bail in extradition matters is United Mexican States v Cabal (2001) 209 CLR 165 ('Cabal'), where the High Court observed that special circumstances exist when they are different to those a person facing extradition would ordinarily endure: they "need to be extraordinary and not factors applicable to all defendants". In both matters the magistrates concluded that the applicants had not demonstrated circumstances different to the kinds of disadvantage all extradition defendants endure.

Considerations by the Federal Court

Nature of the review

In both matters, the Court stressed that the evaluation being undertaken was a judicial review pursuant to section 39B of the Judiciary Act 1903 (Cth), limited to jurisdictional error. The jurisdiction of an administrative entity will be exceeded where the decision is affected by "an error of law which causes it to identify a wrong issue, to ask itself a wrong question, to ignore relevant material, to rely on irrelevant material, or, at least in some circumstances, to make an erroneous finding or to reach a mistaken conclusion" (Craig v State of South Australia HCA). In undertaking this review, "the reasons of an administrative decision maker are not to be construed minutely and finely with an eye keenly attuned to the perception of error." It was stressed that the applicants had no ability to challenge the merits of the decision generally. In both matters the Federal Court found that the applicants had failed to demonstrate jurisdictional error. In Tsvetnenko the applicant claimed the magistrate had failed to proceed reasonably and on a correct understanding of the law. The Full Court noted that while reasonableness may be accepted as a pre-condition to the magistrate's exercise of power, correctness could not, and thus failed to find any jurisdictional error of law.
In Rivas the Court stated that much of the substance of the applicant's submissions was no more than mere disagreement with the magistrate's conclusions.

Interpreting and applying "special circumstances"

United States authorities are not binding

In Tsvetnenko the Federal Court in both reviews rejected the applicant's proposition that Cabal requires magistrates to follow and apply United States jurisprudence in determining special circumstances. Though Cabal stated that United States case law can provide valuable guidance, this did not suggest that magistrates or Federal Circuit judges are bound by foreign jurisprudence. Regard may be had to the United States' jurisprudence for assistance; however, the magistrate is to perform his or her administrative task according to Australian law.

Risk of flight is not a special circumstance

In Tsvetnenko the Federal Court also rejected the applicant's proposition that a low risk of flight is, in itself, a special circumstance. In Cabal, the High Court had noted that, contrary to the approach of the United States, it is proper "to determine whether special circumstances exist before considering the question of flight". The judge in the first review stated that "it would not be possible to consider special circumstances before flight risk if flight risk was already part of the special circumstances consideration" and held that in Australia, special circumstances is a separate consideration. This approach was confirmed by the Full Court.

What amounts to special circumstances?

Health

As noted in Cabal, "a serious deterioration of health" caused by imprisonment would constitute special circumstances. However the magistrate in Tsvetnenko, accepting that the applicant would suffer an increase in his level of anxiety and depression if he remained in custody without support, found that this was no different to what was faced by others held in custody.
In relation to his back disc problems, which the applicant claimed required hydrotherapy treatment not available in custody, the magistrate was not convinced that imprisonment would cause "a serious deterioration of health". On review, the Full Bench found that these conclusions were reasoned by reference to the evidence in respect of a factual matter entrusted to the magistrate to evaluate. The conclusions were not outside the bounds of reasonableness. Similarly in Rivas the magistrate considered the ongoing pain suffered by the applicant, and concluded that there was nothing to support a finding that the applicant would suffer a serious deterioration of health caused by her incarceration. On review the judge found that the magistrate had considered the material relied on by the applicant and given detailed reasons for rejecting it. The applicant had failed to identify any error in the magistrate's reasons.

Other circumstances

Furthermore, the state magistrate in Tsvetnenko held that it is not unusual or special for a person wanted for extradition to co-operate with authorities. The applicant's close-knit family and young children were also not circumstances out of the ordinary. On first review, the judge found that this conclusion was evaluative and had not been shown to reveal jurisdictional error. In Rivas the Federal Court held that the applicant's denial of the alleged offence was irrelevant to establishing special circumstances. Furthermore, neither the delay nor the complexity of the applicant's matter was any different to what any person facing extradition would ordinarily endure. Finally, the applicant's prospects of success in the substantive proceedings could be a relevant consideration, but without more would not amount to special circumstances.

Key takeaways

These two cases confirmed the narrow circumstances in which bail might be granted in extradition matters, and the limited scope of review available.
Unless the technical grounds for review can be demonstrated, a court will not interfere with the magistrate's decision. These two cases also confirmed that magistrates are not required to consider jurisprudence from the United States and that factors going to flight risk should not inform an assessment of special circumstances. They revealed the severity of the health deterioration an applicant must face before it is considered special, and enumerated a number of other factors which would not be considered circumstances out of the ordinary.
https://ngm.com.au/securing-bail-extradition-cases-special-circumstances/
- Status of case: Unreported

The decision

Upper Tribunal (Immigration and Asylum Chamber) Appeal Number: HU/25362/2016

THE IMMIGRATION ACTS

Heard at Field House
On 22 November 2018

Decision & Reasons Promulgated
On 21 December 2018

Before

UPPER TRIBUNAL JUDGE ALLEN

Between

Waleed Anwar (anonymity direction NOT MADE)
Appellant

and

THE SECRETARY OF STATE FOR THE HOME DEPARTMENT
Respondent

Representation:
For the Appellant: Ms C Jaquiss, instructed by MA Solicitors
For the Respondent: Ms K Pal, Senior Home Office Presenting Officer

DECISION AND REASONS

1. Mr Anwar appealed to a Judge of the First-tier Tribunal against the Secretary of State's decision of 22 January 2016 refusing him leave to remain based on his human rights. That appeal was allowed, but subsequently, in a decision promulgated on 28 August 2018, I found that there were errors of law in the judge's decision and that the Article 8 issue would therefore have to be reconsidered.

2. The appellant had applied for indefinite leave to remain in the United Kingdom on the basis of ten years' residence in the United Kingdom. The respondent noted the requirements of paragraph 276B of HC 395 and in particular the requirement of continuous lawful residence in the United Kingdom. Mr Anwar had entered the United Kingdom on 31 December 2004 with entry clearance as a student and had remained lawfully thereafter. 4 September 2015 was taken to be the date on which his valid leave in the United Kingdom had expired. He had left the United Kingdom for Pakistan on 10 January 2012 before his leave to remain expired but did not return until 12 October 2012, 275 days later, with valid entry clearance as a Tier 4 (General) Student.

3.
Under paragraph 276A(a): "'Continuous residence' means residence in the United Kingdom for an unbroken period, and for these purposes a period shall not be considered to have been broken where an applicant is absent from the United Kingdom for a period of six months or less at any one time …" The respondent noted that his period of absence from the United Kingdom had exceeded the maximum amount allowed on any one occasion in order to meet the long residence requirements and as a consequence he was considered to have broken his continuous residence and could not satisfy the requirements of paragraph 276B(i)(a).

4. He had asked that discretion be exercised in his case on the basis that he had remained in Pakistan longer than he had planned in order to look after his brother, who was bedridden and required constant looking after. He produced medical documents relating to a Ghulam Abbas but the decision maker considered there was no evidence to show that Ghulam Abbas was his brother, and it was noted that in any event the accident which caused Mr Abbas's injuries occurred in 2008 but the appellant had returned to Pakistan in 2012. He said he had had to stay to look after his brother as his mother and siblings were unable to, but it was considered that the family could have continued to do so and no exceptional evidence had been provided that he was required to remain in Pakistan in order to care for Mr Abbas. He had not shown evidence that Mr Abbas's condition had deteriorated since the date of the accident four years previously and there was no evidence that the care his brother required could only have been provided by him or that Mr Abbas was in fact his brother. It was considered that the points he had raised were not sufficiently compelling such as to exercise discretion in his favour.

5. In relation to this point the judge had the benefit of statements from the appellant and his sister and an affidavit from the appellant's father.
He accepted that Ghulam Abbas was the appellant's brother. The sister and the father confirmed that the appellant went to see his family and was asked to stay for longer to care for his brother as there were no other male siblings and their father was elderly. There was also a letter from a consultant neurologist, dated 28 September 2017, which confirmed that he was asked to attend on Ghulam Abbas due to his aggressive behaviour. He confirmed that Mr Abbas was non-communicative, restless, irritable and aggressive and that he could follow one-step commands but remained fully dependent for all activities of daily life. The judge concluded that on the medical evidence before her, when the appellant went to Pakistan in 2012, Ghulam Abbas was incapable of managing his own personal functioning and required 24-hour care.

6. The judge noted the respondent's guidance in cases of long residence, which said that for single absences over 180 days consideration was required to be given to how much of the absence was due to compelling or compassionate circumstances and whether the appellant returned to the United Kingdom as soon as possible. Medical evidence confirmed that the accident to Mr Abbas occurred in September 2008 and that he would require full support in his daily activities and daily care and was fully dependent for all his needs. A letter from a neurologist dated 11 January 2012 stated that Mr Abbas required nursing support 24 hours a day and was unable to eat, wash or manage his own toileting needs.

7.
The judge accepted that the discretion in respect of the policy was the Secretary of State's but, bearing in mind the terms of the policy and her findings of fact, and noting that it would not be unusual for the male line to take over responsibilities, in particular when it came to the toileting and bathing needs of a man in his late 30s, concluded that it would have been appropriate for the respondent to have exercised her discretion in line with her own guidance and to have viewed the period as continuous and lawful. The judge took into account the public interest considerations set out in sections 117A and 117B of the Nationality, Immigration and Asylum Act 2002 and considered that there were no factors going against the appellant and that interference in a case where the Immigration Rules were satisfied was not in accordance with the law. It was considered that had the Secretary of State exercised her discretion lawfully there would have been compliance with the Rules, and the judge said that in any event, as it was a human rights case, the factors in the appellant's favour weighed heavily against the one negative factor. She concluded that the interference was not proportionate.

8. There were other issues relating to jurisdiction which fell for consideration, but it was my conclusion at the error of law hearing that the judge had erred in law in finding that the appeal fell to be allowed because the respondent should have exercised discretion in the appellant's favour and there would have been compliance with the Rules. The question of whether the circumstances were such as to amount to compelling and compassionate circumstances was a matter for the respondent and it was not open to the judge to substitute her own consideration of the matter, in particular taking into account in part evidence that was not before the decision maker in concluding as she did.
Also, there was a failure to refer to the public interest which required consideration in an evaluation of Article 8 outside the Rules.

9. In her submissions Ms Jaquiss relied on and developed points made in her skeleton argument. It was common ground that the judge's findings on the evidence at paragraphs 21 and 22 of her decision stood. Ms Jaquiss observed that, as had been clarified by the Supreme Court in Rhuppiah UKSC 58, the presumptions in the 2002 Act are not necessarily determinative in all cases. Section 117A of the Act required judges to have regard to the statutory public interest considerations and hence the provisions of section 117B could not put decision makers in a straitjacket which constrained them to determine claims under Article 8 inconsistently with the Article itself.

10. Under the Home Office guidance there was a discretion to grant leave in compelling and compassionate circumstances outside the Rules. The guidance was a key consideration for any First-tier Judge, being the Secretary of State's view as to how the Rules were to be interpreted. The Rules and the guidance were the Secretary of State's view as to what was compliant with Article 8. Ms Jaquiss also referred to what had been said in SF and Others UKUT 120: that even in the absence of a "not in accordance with the law" ground of appeal, the Tribunal ought to take the Secretary of State's guidance into account if it pointed clearly to a particular outcome in the instant case, as the only way in which consistency could be obtained. It was also relevant to bear in mind what had been said by the Supreme Court in MM (Lebanon) UKSC 10, setting out guidance as to the relationship between the policy makers and Tribunals as a partnership. The Tribunal was asked to find that there were compelling and compassionate circumstances in the appellant's case, meaning he could not return to the United Kingdom.
His brother was very seriously ill and their father was old, and the appellant was the only person who could look after him. The Tribunal was not invited to substitute its own discretion for that of the Secretary of State, but to look at the terms of the policy; if it found compelling circumstances as to why the appellant could not return, that should inform the proportionality assessment and was a key consideration. It was not for the Tribunal to grant or refuse indefinite leave to remain. That was a matter for the Secretary of State, who might look at the exercise of discretion with regard to indefinite leave to remain; but in circumstances where the appellant had been in the United Kingdom lawfully for ten years except for the 275 days, and given the guidance setting out the circumstances which would make refusing indefinite leave to remain disproportionate, the conclusion urged was that, under the Rules with the guidance to assist, his removal would be disproportionate.

11. Ms Jaquiss had made further points about proportionality at paragraph 19 of the skeleton, including a reference to the financial independence of the appellant and the fact that he was fully supported by his family. Immigration control was a legitimate aim, but he was not a person with a bad immigration history; he had no criminal convictions and was not a burden on the state. As had been said in UE (Nigeria) EWCA Civ 975, a person's value to the community was a factor which might legitimately be considered in the Article 8 balancing exercise.

12. In her submissions Ms Pal relied on the refusal letter of 22 January 2016. This addressed the consideration of the compelling factors relied on with regard to the appellant's brother. The appellant had not put forward any compelling or compassionate grounds for a grant of leave outside the Rules. He had been outside the United Kingdom for over ten months so there was a break in his continuity of residence.
His private life should be given little weight as his leave was always precarious and there was no expectation of a grant of indefinite leave to remain, still more so given his long absence. The Secretary of State had addressed the possibilities and the circumstances with regard to his return to Pakistan. It should be found that the decision to remove was proportionate.

13. By way of reply Ms Jaquiss argued that the period of 275 days was in fact nine months rather than over ten. It had not been argued today that there was any relevance to the delay between his learning of his brother's accident and going to Pakistan. In that regard it was relevant to consider what was said in the appellant's cousin's letter: it was not until he had got to Pakistan that he had realised how bad the situation was. It was not clear what could amount to compelling or compassionate grounds if the circumstances of this case did not. His brother was paralysed and had suffered significant brain damage.

14. I reserved my determination.

15. The effect of my judgment on the error of law is that the issue in this case has to be viewed not on the basis of the Tribunal exercising its own discretion as to the existence or otherwise of compelling or compassionate circumstances, but on the basis that it is fully entitled to take the policy into account when considering the Article 8 claim in this case. As Ms Jaquiss argued, any conclusion I might come to as to the nature of the circumstances can inform the evaluation of proportionality in this case.

16. I have set out above, in quotations from the judge's decision, the nature and extent of the physical and mental problems of the appellant's brother. The appellant explained in his statement of 21 September 2015 why it was that he had to remain for as long as he did: his brother was bedridden and required constant looking after, and there was in effect no-one else who could do it. There was no assistance from his cousins.
The medical report of Dr Al Baqer of 14 September 2008 bears out the nature of Mr Abbas's injuries, as do the medical evaluation of Dr Wasti of 3 November 2008, that of Dr Ahmed of 11 January 2012, and the statements from family members as to the circumstances in Pakistan.

17. I must of course bear in mind the provisions of section 117B of the 2002 Act, which includes at subparagraph (5) the statement that little weight should be given to a private life established by a person at a time when the person's immigration status is precarious. As was clarified by the Supreme Court in Rhuppiah, a person who, not being a UK citizen, is present in the UK with leave to reside here other than indefinitely has a precarious immigration status for the purposes of section 117B(5). It was also however said at paragraph 49 in Rhuppiah that, although the court had defined precarious immigration status in the manner set out above, with a width from which most applicants who rely on their private life under Article 8 will be unable to escape, section 117A(2)(a) necessarily enables their applications occasionally to succeed. The Supreme Court quoted from the judgment of Sales LJ in the Court of Appeal ([2016] EWCA Civ 803, paragraph 53), saying that the effects of section 117B(5) may be overridden in an exceptional case by particularly strong features of the private life in question.

18. The appellant has, it would appear, led a blameless life while in the United Kingdom, and the only reason why he was unable to succeed in his application under paragraph 276B was the amount of time he had spent outside the United Kingdom.
The respondent's policy, which is essentially consistent with what was said by Sales LJ in Rhuppiah in the Court of Appeal, was that "it may be appropriate to exercise discretion over excessive absences in compelling or compassionate circumstances, for example where the applicant was prevented from returning to the UK through unavoidable circumstances". The decision maker is encouraged to consider how much of the absence was due to compelling circumstances, whether the applicant returned to the United Kingdom as soon as they were able to do so, and the reasons for the absences.

19. It is clear that I cannot substitute my own view as to what compelling or compassionate circumstances are for that of the Secretary of State in this case. But, as noted above, they are a relevant factor to bear in mind in considering the proportionality of removal. In that regard I also bear in mind the quotation from the Court of Appeal in Rhuppiah to which I have referred above: the door is, albeit only slightly, open for a person to succeed where they have a private life established at a time when their immigration status is precarious. It is relevant to bear in mind that under section 117B(1) the maintenance of effective immigration control is in the public interest, and also that it is in the public interest that people who seek to enter or remain in the United Kingdom are able to speak English, as the appellant can, and are financially independent, as the appellant is.

20. The emphasis in Sales LJ's judgment was on an exceptional case, with particularly strong features of the private life in question. The private life the appellant has in the United Kingdom is essentially that of a person who has studied, worked and lived here for a number of years. He has family here, including his sister.
It was argued in the skeleton that the best interests of his sister's son lie in preserving the status quo of the family unit they maintain. That is certainly not an irrelevant factor, but it is not a strong one, bearing in mind that the child is the child of his sister and not of the appellant. In my view, though the Sales LJ quotation was aimed at strong features of the private life in question, it can properly be taken to include the whole issue of proportionality, bearing in mind that it is a description of the effect of section 117A(2)(a) and that this is a case where a Tribunal is required to determine whether a decision under the Immigration Act breaches a person's right to respect for private and family life under Article 8. In that regard, I consider that it is appropriate to bear in mind and attach weight to the reasons why the appellant remained outside the United Kingdom for as long as he did. Though I do not seek, as I say, in any sense to trespass on the Secretary of State's territory, these appear to me to be clearly compassionate circumstances of a significantly compelling nature. That is a relevant matter to take into account in assessing the proportionality of the decision to remove. In the highly unusual circumstances of this case, I consider that the proportionality balance comes down in favour of the appellant and against removal; as a consequence his appeal under Article 8 is allowed.

No anonymity direction is made.
https://tribunalsdecisions.service.gov.uk/utiac/hu-25362-2016
When you get a divorce, it’s natural to be curious about how your property and debt will be divided. While married, couples acquire numerous assets, including houses, cars, and retirement plans. In addition, couples also may acquire debts like a home mortgage or credit card debt. During the divorce process, couples must split both their assets and their debts. Here our Somerset County divorce lawyers explain how the division of debt occurs in a New Jersey divorce.

Understanding Equitable Distribution

New Jersey operates under the principle of equitable distribution in order to divide debts and assets in a divorce. In the equitable distribution process, the judge in the case will make a fair and equitable distribution of the couple’s property, which includes all of their marital property and debts. Equitable does not mean equal in this case. Judges will take appropriate steps and look at critical factors to distribute assets and debts in a fair and comparable manner for both parties. However, the monetary values may not be exactly the same for each party.

Determining How Debt is Split

According to New Jersey laws, there are 15 different criteria the judge will look at when determining how to distribute assets and debts for a divorcing couple.
Factors that determine how debt and assets are distributed include:
- The length of the marriage;
- Each party's age and physical and emotional health;
- The income or property that each party brought to the marriage;
- The standard of living established during the marriage;
- Any written agreements, like a prenuptial or postnuptial agreement;
- Each party's economic circumstances after the division occurs;
- Each party's income and earning capacity;
- Any contributions made to the education, training, or earning power of the other;
- Any assistance that increased or decreased the couple's marital property;
- The tax consequences of the proposed distribution of property;
- The current value of the couple's property;
- The custodial parent's need to use the marital home;
- The couple's debts and liabilities;
- The need to create a trust fund to pay for medical or education costs of a spouse or child;
- Whether a party delayed achieving their career goals;
- Any other factors the court deems relevant to the case.

The judge will consider all of these factors when deciding how to divide assets or debts. This decision can also affect other matters in the divorce process, like child support or alimony. Dividing debt in a New Jersey divorce is a complicated matter that is determined by several factors. Working with a skilled Somerset County divorce lawyer can assist you with your property division case by locating all relevant evidence for applicable factors and presenting it to the court in the most favorable way. If you need advice about how debt will be divided in your divorce, call DeTommaso Law Group, LLC today at (908) 274-3028.
https://www.detommasolawgroup.com/blog/2021/january/what-happens-to-debt-in-a-divorce-/
Located just above the kidneys, the adrenal glands are responsible for the production of hormones crucial to regulating stress, blood pressure, and other bodily functions. In patients with Addison’s disease, the adrenal glands don’t make enough of these vital hormones. Individuals with this condition are likely to suffer a range of symptoms, including chronic fatigue, weight loss, hyperpigmentation, and depression. If you think you might be suffering from Addison’s disease or another form of adrenal insufficiency, don’t hesitate to contact your doctor for a checkup.

What Is Adrenal Insufficiency?

A hormonal disorder, adrenal insufficiency results when the adrenal glands fail to produce the right amount of hormones. The adrenal gland consists of two sections: the interior component creates adrenaline-like hormones, while the outer component releases a group of hormones known as corticosteroids. With primary adrenal insufficiency, also known as Addison’s disease, the adrenal glands have suffered damage that renders them incapable of producing a sufficient quantity of the adrenal hormone cortisol, which is responsible for regulating numerous bodily functions.

Symptoms of Adrenal Insufficiency

Adrenal insufficiency can result in numerous symptoms that affect patients’ health and quality of life. Patients with adrenal insufficiency or Addison’s may also experience mental health symptoms like depression, irritability, anxiety, and loss of interest in sex. According to the National Institutes of Health, Addison’s disease impacts more than 100 of every 1 million people in developed countries.

Causes of Addison’s Disease

While many cases of Addison’s disease result from adrenal gland damage, the condition can also develop as a result of an underlying illness.
According to the Department of Health and Human Services, less-common causes of Addison’s include chronic fungal infections, amyloidosis, tuberculosis, AIDS-related infections, genetic defects, and cancer cells spreading to the adrenal glands from other parts of the body. It’s important to know that adrenal insufficiency (AI) can be either permanent or temporary. Patients with permanent AI will likely require lifelong treatment to stay healthy.

Treating Adrenal Insufficiency and Addison’s Disease

Because adrenal insufficiency typically results from a lack of cortisol, most AI/Addison’s patients receive some form of hormone replacement therapy to correct their steroid levels. If you have a diagnosis of Addison’s, your doctor may prescribe oral corticosteroids such as prednisone, hydrocortisone, or cortisone acetate, or corticosteroid injections. The latter treatment is common among patients who are experiencing nausea and vomiting. Unfortunately, corticosteroids don’t always resolve all the symptoms of adrenal insufficiency. Because the body naturally produces more hydrocortisone in response to stress factors like fever, infection, and trauma, patients may need to take more than their usual replacement dose under these circumstances. If you’re sick with a fever or experiencing an increase in the severity of symptoms, don’t hesitate to contact your doctor about adjusting your dose. You may want to pursue additional treatments to mitigate the symptoms of adrenal insufficiency and improve quality of life moving forward.

Adrenal Insufficiency and Amino Acid Therapy

Amino acids are organic compounds that join together to form proteins. While the body creates some amino acids on its own, others can only be sourced through food. These essential amino acids include histidine, isoleucine, leucine, lysine, and methionine, all necessary components of an effective restorative adrenal program.
Cortisol regulates the synthesis of the PNMT enzyme, which in turn controls the synthesis of epinephrine and norepinephrine in the adrenal glands. However, increasing the amount of cortisol in the body won’t necessarily resolve all the symptoms of adrenal insufficiency. If patients want to optimize the whole dopamine to epinephrine pathway, they may need to take a more comprehensive treatment approach that involves a carefully curated balance of amino acids.

According to David S. Klein, M.D., of the Pain Center of Orlando, adrenal fatigue is most often caused by stress. Because amino acids like L-theanine encourage the release of gamma-aminobutyric acid (GABA), which promotes the release of relaxation neurotransmitters such as dopamine and serotonin, amino acids may be able to reduce stress levels and, as a result, alleviate adrenal fatigue symptoms.

Klein isn’t the only medical professional to believe in the power of amino acids to help treat adrenal fatigue. At the Amino Co., we have developed customized amino acid formulas to help a wide range of individuals, including astronauts, athletes, and hospital patients. Along with researching the ways in which amino acids can help adrenal insufficiency patients, we are developing treatments for patients with heart failure, liver problems, and even mental illness. If you’re suffering from one of these conditions, don’t hesitate to get in touch. You can also find out more about amino acid support for adrenal fatigue in this article.
http://52.52.137.81/understanding-adrenal-insufficiency-and-addisons-disease/
Feeling chronically tired? You are not alone. It is one of the most common complaints of modern society. In fact, next to back and neck pain, it is one of the most common reasons patients come into my office.

Feeling tired can be the result of overworked adrenal glands. The adrenal glands are responsible for producing several important hormones and are critical to the stress response. The adrenal glands are the first to be depleted by stress, whether the stress is from an emotional source, a nutrient imbalance, or a mechanical problem. When the brain interprets an event as threatening (stressful), the adrenals begin to make specific hormones to deal with that stress. When this state of emergency is maintained for unrelieved periods of time, the body’s reserves become depleted and the immune system is weakened. Long-term over-activation of these hormones can deplete the adrenal glands, severely impairing the body’s ability to adapt to stress.

One of the most prominent signs of adrenal gland insufficiency is chronic fatigue. Other common symptoms of adrenal stress include:
- A history of low blood pressure problems
- Waking after a few hours of rest and being unable to go back to sleep
- Frequent periods of depression or the inability to think clearly
- Nervousness or frustration
- Sweet cravings
- Becoming light-headed when meals are missed
- Frequent nightmares or panic attacks

When the adrenal glands can no longer effectively deal with stress, they produce these symptoms along with muscle contractions that can be objectively observed. Once correlated with symptoms and physical findings, the nutritional needs of the body can be determined and the appropriate nutrients or change in diet can be used. In some fatigued patients, thyroid problems overlap adrenal problems. Through simple changes in diet and appropriate supplementation, many people have more energy, sleep better, and think more clearly, and their symptoms of fatigue and low energy become a distant memory.
https://www.balancewellnesscare.com/single-post/2016/09/03/Are-You-Sick-and-Tired-of-Being-Sick-and-Tired
Depression is a life-impacting condition that all too frequently is misdiagnosed or disregarded. Unfortunately, it is shockingly prevalent in our society, impacting over 19 million Americans annually and over 350 million people worldwide. To treat depression, we must better understand it and its various contributors.

No matter their age, race, gender, or social standing, anyone can experience depression. Too many people who have depression have had friends, loved ones, and strangers tell them to simply “get over it” or “it will pass, just give it time.” Unfortunately, depression cannot simply be willed away.

Symptoms of depression manifest themselves in numerous ways, including discomfort, mental fatigue, and emotional abnormalities. If you or anyone you know is experiencing some or all of the following symptoms, it is critical that you seek appropriate medical assistance.
- Feeling sad or lonely
- Lost interest in hobbies
- Irritability
- Fatigue
- Loss of appetite
- Helplessness
- Reduced libido and disinterest in sex
- Difficulty concentrating
- Insomnia
- Pain including muscle aches, joint pain, and weakness
- Thoughts of death or suicide

Those who are depressed often experience hopelessness, a disinterest in what was once interesting, a general lack of motivation, and many other debilitating symptoms. Associated difficulties of depression can make daily activities including work, school, or even social interaction challenging. This condition has many possible causal factors, including hormonal imbalance and malfunction.

Causes of Depression

Depression is a multifactorial condition that can develop from a variety of difficult-to-diagnose conditions.
The following factors can contribute to depression:
- Low thyroid in the brain
- Malfunctioning mitochondria
- Genetics
- Gender
- Age
- Stress level
- Various prescription medications
- Chronic conditions

In addition to the elements above, hormonal dysfunction, imbalances, and deficiency are all major factors in developing depression. Hormones are powerful tools that are integral to one’s overall bodily function. They also play an important role in mood regulation and mental well-being. Because of their significant impact on the body, one should attempt to keep their hormones optimized. The following hormone groups are critical to one’s health and, if properly cared for, can help alleviate depression.

Thyroid Hormones

Thyroid dysfunction is closely associated with depression. Both hypothyroidism (underactive thyroid) and hyperthyroidism (overactive thyroid) can result in insomnia, mood swings, fatigue, anxiety, and depression. Unfortunately, these conditions frequently go unnoticed and therefore untreated. This is likely due to ineffective testing standards: most physicians rely solely on TSH levels to gauge thyroid function, which provides an incomplete image of one’s thyroid health. Accurate measurement of thyroid function can only be achieved through testing multiple aspects of the thyroid, including Free T4, Free T3, Reverse T3, and thyroid antibodies.

Hypothyroid patients who experience depression frequently have poor T4 to T3 conversion. As the storage form of thyroid hormone, T4 must be converted into T3, the active form, to effectively impact the body. Optimizing this conversion may assist in resolving both hypothyroidism and depression. Recent studies have found that T3 may increase the efficacy of antidepressants and even improve depression symptoms on its own.

Sex Hormones

The primary sex hormones are progesterone, estrogen, and testosterone.
Deficiency, excess, or imbalance of these hormones can bring severe symptoms and psychological difficulties. Although testosterone and estrogen are often referred to as the male and female sex hormones respectively, a deficiency or imbalance of either can contribute to depression in both sexes. Testosterone deficiency is closely correlated with depression, irritability, and a poor sense of well-being. An overabundance of estrogen, also known as estrogen dominance, can cause men to experience mood swings, depression, and further hormone imbalances. Women more frequently experience estrogen dominance due to hormonal fluctuations during menstruation. Increased estrogen levels are a primary contributor to severe PMS symptoms, including depression.

Oxytocin

When we experience sensations of intimacy and closeness, particularly with partners, oxytocin is likely the cause. Decreased levels of this hormone can cause one to experience depressive episodes and shifts in mood. Studies have found that new mothers with reduced oxytocin levels are more prone to experience postpartum depression after childbirth. Supplementation and treatment with oxytocin has been shown to promote self-esteem, optimism, confidence, and improved social interaction. Further studies have presented data showing that oxytocin helps relieve pain that contributes to depression. This hormone is often administered through injection or nasal spray and may prove to be an effective method of treating depression, anxiety, and post-traumatic stress disorder (PTSD).

Adrenal Hormones

The adrenals are the primary system involved in the body’s stress response. The three hormones most important to the adrenals are adrenaline, cortisol, and the precursor hormone DHEA. Imbalance or deficiency of these hormones can result in anxiety, insomnia, and depression. Disruption of these hormones can be caused by a condition known as adrenal fatigue.
As one experiences increasing levels of stress, or chronic stress, the body continues to produce and release stress hormones like cortisol. Increased levels of cortisol can cause one to feel jittery yet fatigued. Furthermore, elevated cortisol levels effectively drain the adrenal glands, meaning that they are unable to properly recover. Eventually the adrenals become incapable of maintaining adequate levels of adrenal hormones, resulting in anxiety, insomnia, difficulty handling stress, and depression.

Dealing with Depression

Depression has many possible contributors and causes, with hormone dysfunction being a significant one. It is critical that depression is respected as a true medical condition deserving of appropriate care. Frequently, depression is recognized by those experiencing it through the many infamous symptoms associated with it. Even if a person correctly identifies their condition, it is important that they seek out the assistance of a physician to rule out the presence of any underlying conditions such as thyroid dysfunction or hormonal imbalances. Common conditions such as thyroid disorders, adrenal fatigue, chronic infections, diabetes, obesity, and sleep disorders can wreak havoc on hormone function and have significant influence over the occurrence of depression.
https://www.holtorfmed.com/decoding-depression-is-it-your-hormones/
For a patient with suspected but unproved adrenal insufficiency, dexamethasone is best used to correct the glucocorticoid deficiency, because it allows one to proceed immediately to a cosyntropin stimulation test to confirm the diagnosis. If a cosyntropin stimulation test is not planned, give stress doses of hydrocortisone (50-75 mg/m2 or 1-2 mg/kg) intravenously as an initial dose, followed by 50-75 mg/m2/d intravenously in 4 divided doses. Hydrocortisone may be given intramuscularly if no intravenous access is available, but it works less quickly. Comparable stress doses are 10-15 mg/m2 of methylprednisolone and 1-1.5 mg/m2 of dexamethasone, intravenously or intramuscularly. Methylprednisolone and dexamethasone have negligible mineralocorticoid effects; therefore, if the patient is hypovolemic, hyponatremic, or hyperkalemic, large doses of hydrocortisone (even double or triple the stress doses mentioned above) are preferred.

At present, no parenteral form of mineralocorticoid is available in the United States. If the patient has good GI function, fludrocortisone (0.1-0.2 mg orally) may be given to replace aldosterone deficiency.

In hypotensive patients, normal saline (ie, 0.9% NaCl) must be administered by rapid intravenous infusion over the first hour, followed by a continuous infusion. A reasonable amount to restore intravascular volume is 450 mL/m2 or 20 mL/kg of normal saline intravenously over the first hour, followed by 3200 mL/m2/d or 200 mL/kg/100 kcal of estimated resting energy expenditure as normal saline or 0.45% NaCl in subsequent hours. Dextrose must also be provided: if the patient is hypoglycemic, 2-4 mL/kg of D10W corrects it, and D5W should then be given to prevent hypoglycemia from recurring (or from occurring, if the patient is not hypoglycemic). Potassium is generally not needed in the acute situation, especially because patients with adrenal hypoplasia are often hyperkalemic.
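The acute-management figures above are purely formulaic (fixed multipliers per kg of body weight or per m2 of body surface area), which a short sketch can make concrete. The function name and the example patient below are hypothetical; the multipliers are the ranges quoted in the text, and this is illustrative arithmetic only, not clinical guidance.

```python
# Illustrative arithmetic only -- not clinical guidance. The multipliers
# (20 mL/kg or 450 mL/m2 saline, 2-4 mL/kg D10W, 1-2 mg/kg or 50-75 mg/m2
# hydrocortisone) are the ranges quoted in the text above.

def initial_orders(weight_kg: float, bsa_m2: float) -> dict:
    """Sketch the first-hour calculations described in the text."""
    return {
        # normal saline over the first hour: 20 mL/kg or 450 mL/m2
        "saline_bolus_ml_by_weight": 20 * weight_kg,
        "saline_bolus_ml_by_bsa": 450 * bsa_m2,
        # D10W to correct hypoglycemia: 2-4 mL/kg (low and high ends)
        "d10w_ml_range": (2 * weight_kg, 4 * weight_kg),
        # initial stress dose of hydrocortisone: 1-2 mg/kg or 50-75 mg/m2
        "hydrocortisone_mg_by_weight": (1 * weight_kg, 2 * weight_kg),
        "hydrocortisone_mg_by_bsa": (50 * bsa_m2, 75 * bsa_m2),
    }

# Example: a hypothetical 20 kg child with a BSA of about 0.8 m2
orders = initial_orders(20, 0.8)
print(orders["saline_bolus_ml_by_weight"])  # 400 (mL over the first hour)
```

Note that the per-weight and per-BSA routes give similar but not identical answers (400 mL vs 360 mL here), which is why the text offers both as rough equivalents.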
In growing children with adrenal insufficiency, chronic glucocorticoid replacement must be balanced to prevent symptoms of adrenal insufficiency while still allowing the child to grow at a normal rate and avoiding symptoms of glucocorticoid excess. The dose must be tailored to each patient but generally runs in the range of 7-20 mg/m2/d of hydrocortisone orally in 2-3 divided doses. Hydrocortisone is available as tablets of 5 mg, 10 mg, and 20 mg. Hydrocortisone is recommended in the pediatric population because of its lower potency, which permits easier titration of appropriate doses. In large patients, prednisone or even dexamethasone may be substituted. The estimated equivalency is 1 mg prednisone = 4 mg hydrocortisone and 1 mg dexamethasone = 50 mg hydrocortisone, but this varies from patient to patient.

Patients with congenital adrenal hypoplasia also have mineralocorticoid deficiency and therefore must be provided with fludrocortisone (0.1-0.2 mg/d). Provide infants with NaCl (2-5 g/d PO) to counteract salt wasting.

The dose of glucocorticoid is adjusted clinically (absence of symptoms of glucocorticoid deficiency or excess, and normal growth). In the author's experience, plasma adrenocorticotropic hormone (ACTH) concentrations are of little help in adjusting doses of glucocorticoid in patients with primary adrenal insufficiency. Symptoms of salt craving, blood pressure, plasma renin activity, and electrolytes are helpful in adjusting the dose of fludrocortisone: salt craving and an elevated plasma renin activity suggest the need for a larger dose, whereas elevated blood pressure or suppressed plasma renin activity suggests the need for a lower dose.

One of the important physiological responses to stress is an increase in cortisol production mediated by ACTH. Patients with adrenal insufficiency, of whatever etiology, are unable to mount this response and must be provided with stress doses of glucocorticoids.
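The equivalency ratios quoted above (1 mg prednisone = 4 mg hydrocortisone, 1 mg dexamethasone = 50 mg hydrocortisone) can be sketched as a simple conversion table. This is a hedged illustration of the text's stated ratios only; as the text itself notes, the true equivalency varies from patient to patient, and the function name is hypothetical.

```python
# Equivalencies as stated in the text above; they vary between patients,
# so treat this as illustrative arithmetic, not a dosing tool.
HYDROCORTISONE_EQUIV_MG = {
    "hydrocortisone": 1.0,
    "prednisone": 4.0,
    "dexamethasone": 50.0,
}

def to_hydrocortisone_mg(drug: str, dose_mg: float) -> float:
    """Convert a glucocorticoid dose to its hydrocortisone equivalent."""
    return dose_mg * HYDROCORTISONE_EQUIV_MG[drug]

# Example: 5 mg of prednisone corresponds to roughly 20 mg of hydrocortisone
print(to_hydrocortisone_mg("prednisone", 5.0))  # 20.0
```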
In patients with minor illness (fever < 38°C), administer at least double the usual dose of hydrocortisone. In patients with more severe illness (fever > 38°C), administer triple the dose of glucocorticoids. If the patient is vomiting or listless, give parenteral glucocorticoids (hydrocortisone 50-75 mg/m2 intramuscularly or intravenously, or the equivalent of methylprednisolone or dexamethasone). Because hydrocortisone succinate has a short duration of action, the dose must be repeated every 6-8 hours until the patient is well. Cortisone acetate and hydrocortisone acetate both have a longer duration of action (up to 24 h) but are often difficult to obtain in the United States.

All patients with adrenal insufficiency must have injectable glucocorticoid available, and the caretaker must be instructed in its use and importance. Hydrocortisone suppositories may be tried in patients or families who cannot administer injectable glucocorticoids; however, absorption is less predictable.

No contraindications to glucocorticoid or mineralocorticoid replacement are recognized when it is needed, and few adverse drug-to-drug interactions occur. Patients on physiologic replacement doses of glucocorticoids may receive live virus immunizations.

Fludrocortisone is responsible for the replacement of aldosterone deficiency. It is essential in maintaining electrolyte equilibrium and intravascular volume; mineralocorticoid deficiency results in hyponatremia, hyperkalemia, and hypotension. It is the only available mineralocorticoid and comes only as 0.1 mg PO tablets. If the patient is unable to tolerate PO medication, mineralocorticoid activity can be achieved with high-dose intravenous hydrocortisone.

Glucocorticoids are used to replace insufficient cortisol production resulting from adrenal hypoplasia. This is necessary in unstressed children to maintain appetite and weight. It is especially important in individuals who are stressed or ill because cortisol secretion is an important stress response.
In this setting, glucocorticoids are important in maintaining cardiovascular stability. Hydrocortisone is preferable to other glucocorticoids (ie, prednisone, dexamethasone) for long-term glucocorticoid replacement in children because its lower potency and shorter half-life make growth inhibition less likely as a complication, provided the dose is correct. Hydrocortisone is available in tablets of 5 mg, 10 mg, and 20 mg.
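The sick-day rules described above reduce to a simple decision rule: at least double the usual dose for minor illness, triple it for fever above 38°C, and switch to parenteral dosing if the patient is vomiting or listless. A minimal sketch, assuming hypothetical function names; illustrative only, not clinical guidance.

```python
# Sketch of the stress-dosing rules from the text. The 38°C threshold and
# the x2/x3 multipliers come from the text; the functions are hypothetical.

def stress_dose_multiplier(fever_c: float) -> int:
    """Factor to apply to the usual oral glucocorticoid dose during illness."""
    # minor illness (fever < 38°C): at least double; more severe: triple
    return 2 if fever_c < 38.0 else 3

def needs_parenteral(vomiting: bool, listless: bool) -> bool:
    """Parenteral dosing is indicated when oral medication may not be absorbed."""
    return vomiting or listless

print(stress_dose_multiplier(37.5))  # 2
print(stress_dose_multiplier(39.0))  # 3
```

Because hydrocortisone succinate is short-acting, the text notes the parenteral dose would then be repeated every 6-8 hours until the patient is well.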
https://emedicine.medscape.com/article/918967-medication
Addison's disease is an uncommon and potentially fatal disorder of the adrenal glands. Doctors may also refer to this endocrine disorder as adrenal insufficiency or hypoadrenalism. People with Addison's disease can live full lives, but often require medicine to manage symptoms, as there is currently no cure.

The adrenal glands are two small, triangular organs situated above the kidneys that produce the hormones that keep the body functioning correctly. They are part of the endocrine system and influence almost all the organs and tissues in the human body. The hormones these glands produce help the immune system, play a role in the stress response, maintain normal blood pressure, and balance salt and water throughout the body. Two of these important hormones are cortisol and aldosterone.

Addison's disease occurs when the adrenal glands are harmed or injured, often when the immune system attacks the adrenal cortex. This is usually a result of an autoimmune disease, although it can also be caused by tuberculosis, genetic defects, surgical removal of the adrenal glands, or cancer or infection of, or bleeding into, the glands. Any of these factors can affect the glands' ability to produce hormones. Such causes result in a diagnosis of primary adrenal insufficiency.

Hormones produced in the pituitary gland prompt production of hormones in the adrenal glands. Damage to the pituitary gland can prevent the adrenal glands from working properly, as they are not receiving the correct messages telling them how to function. This is secondary adrenal insufficiency; pituitary inflammation, tumors, or surgery can cause it. A reversible type of secondary adrenal insufficiency can result from sudden withdrawal from corticosteroids, prescribed for conditions such as asthma and arthritis.

The symptoms of Addison's disease develop slowly and are often mistaken for other conditions in the early stages, such as the flu or depression.
This can result in an individual ignoring their symptoms and putting off treatment. People with Addison's disease may experience symptoms such as fatigue, muscle weakness, loss of appetite, weight loss, low blood pressure, and darkening of the skin.

Diagnostic tests for Addison's include determining the amount of sodium, cortisol, potassium, and adrenocorticotropic hormone (ACTH) in the blood, through blood tests and other methods. CT and MRI scans can assess the adrenal glands. If the doctor suspects secondary adrenal insufficiency, they may also test the body's response to insulin. Most of the time, an endocrinologist performs these diagnostic tests.

Lifelong medication is the most common treatment for Addison's disease: corticosteroid replacement therapy requires tablets taken two or three times each day. Unfortunately, constant use of corticosteroids can cause side effects such as osteoporosis and reduced immunity, so doctors take great care to prescribe the minimum required dose and monitor their patients closely. People with Addison's disease must take their prescribed medication and attend regular appointments with an endocrinologist. Wearing a medical alert bracelet can inform medical staff of one's condition in the event of an emergency. Some people with Addison's also carry an emergency kit to be prepared should a crisis occur.

People with Addison's disease are at risk of acute adrenal failure, or Addisonian crisis, a medical emergency caused by severe adrenal insufficiency. It is potentially fatal, and individuals experiencing the event require hospitalization and an injection of hydrocortisone. Symptoms include sudden severe weakness, confusion, pain in the abdomen, lower back, or legs, vomiting, very low blood pressure, and loss of consciousness.

Addison's is rare, affecting around one in 100,000 people in the United States and approximately 40 to 60 people per million worldwide. Determining an exact number is challenging, as some people are not accurately diagnosed. The condition seems to affect both sexes equally and can develop at any age, though the majority of diagnoses are in people between the ages of 30 and 50.
The first person to describe Addison's disease was a British doctor, Thomas Addison, in 1855. He called it "melasma suprarenale" after noting it in a number of patients with tuberculosis. The disease was initially fatal due to a lack of hormone-replacement drugs, but between the 1930s and 1950s, cortisone became available as a medical treatment to replace the missing cortisol.
https://facty.com/ailments/kidney/what-is-addisons-disease/
Addison's disease, or adrenocortical insufficiency, results when adrenal cortex function is inadequate to meet the patient's need for cortical hormones. Autoimmune or idiopathic atrophy of the adrenal glands is responsible for 80% of cases (Rakel & Bope, 2001). Other causes include surgical removal of both adrenal glands or infection of the adrenal glands. Tuberculosis and histoplasmosis are the most common infections that destroy adrenal gland tissue. Although autoimmune destruction has replaced tuberculosis as the principal cause of Addison's disease, tuberculosis should be considered in the diagnostic workup because of its increasing incidence. Inadequate secretion of ACTH from the pituitary gland also results in adrenal insufficiency because of decreased stimulation of the adrenal cortex.

Therapeutic use of corticosteroids is the most common cause of adrenocortical insufficiency (Coursin & Wood, 2002). The symptoms of adrenocortical insufficiency may also result from the sudden cessation of exogenous adrenocortical hormonal therapy, which suppresses the body's normal response to stress and interferes with normal feedback mechanisms. Treatment with daily administration of corticosteroids for 2 to 4 weeks may suppress function of the adrenal cortex; therefore, adrenal insufficiency should be considered in any patient who has been treated with corticosteroids.

Addison's disease is characterized by muscle weakness, anorexia, gastrointestinal symptoms, fatigue, emaciation, dark pigmentation of the skin (notably the knuckles, knees, and elbows) and mucous membranes, hypotension, low blood glucose levels, low serum sodium levels, and high serum potassium levels. Mental status changes such as depression, emotional lability, apathy, and confusion are present in 60% to 80% of patients. In severe cases, the disturbance of sodium and potassium metabolism may be marked by depletion of sodium and water and severe, chronic dehydration.
With disease progression and acute hypotension, the patient develops addisonian crisis, which is characterized by cyanosis and the classic signs of circulatory shock: pallor, apprehension, rapid and weak pulse, rapid respirations, and low blood pressure. In addition, the patient may complain of headache, nausea, abdominal pain, and diarrhea and show signs of confusion and restlessness. Even slight overexertion, exposure to cold, acute infections, or a decrease in salt intake may lead to circulatory collapse, shock, and death if untreated. The stress of surgery or dehydration resulting from preparation for diagnostic tests or surgery may precipitate an addisonian or hypotensive crisis.

Although the clinical manifestations presented appear specific, the onset of Addison's disease usually occurs with nonspecific symptoms. The diagnosis is confirmed by laboratory test results. Laboratory findings include decreased blood glucose (hypoglycemia) and sodium (hyponatremia) levels, an increased serum potassium (hyperkalemia) level, and an increased white blood cell count (leukocytosis). The diagnosis is confirmed by low levels of adrenocortical hormones in the blood or urine and decreased serum cortisol levels. If the adrenal cortex is destroyed, baseline values are low, and ACTH administration fails to cause the normal rise in plasma cortisol and urinary 17-hydroxycorticosteroids. If the adrenal gland is normal but not stimulated properly by the pituitary, a normal response to repeated doses of exogenous ACTH is seen, but no response follows the administration of metyrapone, which stimulates endogenous ACTH.

Immediate treatment is directed toward combating circulatory shock: restoring blood circulation, administering fluids and corticosteroids, monitoring vital signs, and placing the patient in a recumbent position with the legs elevated. Hydrocortisone (Solu-Cortef) is administered intravenously, followed with 5% dextrose in normal saline.
Vasopressor amines may be required if hypotension persists. Antibiotics may be administered if infection has precipitated adrenal crisis in a patient with chronic adrenal insufficiency. Additionally, the patient is assessed closely to identify other factors, stressors, or illnesses that led to the acute episode. Oral intake may be initiated as soon as tolerated. Gradually, intravenous fluids are decreased when oral fluid intake is adequate to prevent hypovolemia. If the adrenal gland does not regain function, the patient needs lifelong replacement of corticosteroids and mineralocorticoids to prevent recurrence of adrenal insufficiency. The patient will require additional supplementary therapy with glucocorticoids during stressful procedures or significant illnesses to prevent addisonian crisis (Coursin & Wood, 2002). Additionally, the patient may need to supplement dietary intake with added salt during times of gastrointestinal losses of fluids through vomiting and diarrhea.

The health history and examination focus on the presence of symptoms of fluid imbalance and on the patient's level of stress. To detect inadequate fluid volume, the nurse monitors the blood pressure and pulse rate as the patient moves from a lying to a standing position. The nurse assesses the skin color and turgor for changes related to chronic adrenal insufficiency and hypovolemia. Other key assessments include checking for weight changes, muscle weakness, and fatigue and investigating any illness or stress that may have precipitated the acute crisis.

The patient at risk is monitored for signs and symptoms indicative of addisonian crisis. These symptoms are often the manifestations of shock: hypotension; rapid, weak pulse; rapid respiratory rate; pallor; and extreme weakness. The patient with addisonian crisis is at risk for circulatory collapse and shock; therefore, physical and psychological stressors must be avoided.
These include exposure to cold, overexertion, infection, and emotional distress. The patient with addisonian crisis requires immediate treatment with intravenous administration of fluid, glucose, and electrolytes, especially sodium; replacement of missing steroid hormones; and vasopressors. During acute addisonian crisis, the patient must avoid exertion; therefore, the nurse anticipates the patient's needs and takes measures to meet them. Careful monitoring of symptoms, vital signs, weight, and fluid and electrolyte status is essential to track the patient's progress and return to a precrisis state. To reduce the risk of future episodes of addisonian crisis, efforts are made to identify and reduce the factors that may have led to the crisis.

To provide information about fluid balance and the adequacy of hormone replacement, the nurse assesses the patient's skin turgor, mucous membranes, and weight while instructing the patient to report increased thirst, which may indicate impending fluid imbalance. Lying, sitting, and standing blood pressures also provide information about fluid status. A decrease in systolic pressure (20 mm Hg or more) may indicate depletion of fluid volume, especially if accompanied by symptoms. The nurse encourages the patient to consume foods and fluids that will assist in restoring and maintaining fluid and electrolyte balance; along with the dietitian, the nurse assists the patient to select foods high in sodium during gastrointestinal disturbances and very hot weather.

The nurse instructs the patient and family to administer hormone replacement as prescribed and to modify the dosage during illness and other stressful occasions. Written and verbal instructions are provided about the administration of mineralocorticoid (Florinef) or corticosteroid (prednisone) as prescribed.
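The orthostatic check described above (a drop in systolic pressure of 20 mm Hg or more between lying and standing suggesting possible fluid-volume depletion) can be sketched as a simple comparison. This is an illustrative toy only, not a clinical tool; the function name and threshold parameter are invented for the example:

```python
def suggests_volume_depletion(lying_systolic_mmhg: float,
                              standing_systolic_mmhg: float,
                              threshold_mmhg: float = 20.0) -> bool:
    """Return True when the lying-to-standing systolic drop meets the
    20 mm Hg threshold the text associates with possible fluid-volume
    depletion. Illustrative sketch of the rule, not medical software."""
    drop = lying_systolic_mmhg - standing_systolic_mmhg
    return drop >= threshold_mmhg

# A drop from 130 to 105 mmHg (25 mmHg) meets the threshold;
# a drop from 120 to 112 mmHg (8 mmHg) does not.
print(suggests_volume_depletion(130, 105))  # True
print(suggests_volume_depletion(120, 112))  # False
```

Note that, as the text stresses, the reading is suggestive only "if accompanied by symptoms"; a single number is never diagnostic on its own.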
Until the patient's condition is stabilized, the nurse takes precautions to avoid unnecessary activity and stress that could precipitate another hypotensive episode. Efforts are made to detect signs of infection or the presence of other stressors. Even minor events or stressors may be excessive in patients with adrenal insufficiency. During the acute crisis, the nurse maintains a quiet, nonstressful environment and performs all activities (eg, bathing, turning) for the patient. Explaining all procedures to the patient and family will reduce their anxiety. Explaining the rationale for minimizing stress during the acute crisis assists the patient to increase activity gradually.

Because of the need for lifelong replacement of adrenal cortex hormones to prevent addisonian crises, the patient and family members receive explicit verbal and written instructions about the rationale for replacement therapy and proper dosage. Additionally, they are instructed about how to modify the medication dosage and increase salt intake in times of illness, very hot weather, and other stressful situations. The patient also learns how to modify diet and fluid intake to help maintain fluid and electrolyte balance.

The patient and family are frequently prescribed preloaded, single-injection syringes of corticosteroid for use in emergencies. Careful instructions about how and when to use the injection are also provided. It is important to instruct the patient to inform other health care providers, such as dentists, about the use of corticosteroids, to wear a medical alert bracelet, and to carry information at all times about the need for corticosteroids. If the patient with Addison's disease requires surgery, careful administration of fluids and corticosteroids is necessary before, during, and after surgery to prevent addisonian crisis. The patient and family need to know the signs of excessive or insufficient hormone replacement.
The development of edema or weight gain may signify too high a dose of hormone; postural hypotension (decrease in systolic blood pressure, lightheadedness, dizziness on standing) and weight loss frequently signify too low a dose (Chart 42-9).

Although most patients can return to their job and family responsibilities soon after hospital discharge, others cannot do so because of concurrent illnesses or incomplete recovery from the episode of adrenal insufficiency. In these circumstances, a referral for home care enables the home care nurse to assess the patient's recovery, monitor hormone replacement, and evaluate stress in the home. The nurse assesses the patient's and family's knowledge about medication therapy and dietary modifications. A home visit also allows the nurse to assess the patient's plans for follow-up visits to the clinic or physician's office. The nurse reminds the patient and family about the importance of participating in health promotion activities and health screening.
https://www.brainkart.com/article/Adrenocortical-Insufficiency-(Addison---s-Disease)---Management-of-Patients-With-Adrenal-Disorders_32188/
What Is Stress?

The condition of stress has two components: physical, involving direct material or bodily challenge, and psychological, involving how individuals perceive circumstances in their lives. These components can be examined in three ways.

One approach focuses on the environment: stress is seen as a stimulus, as when we have a demanding job or experience severe pain from arthritis or a death in the family. Physically or psychologically challenging events or circumstances are called stressors.

The second approach treats stress as a response, focusing on people's reactions to stressors. We see an example of this approach when people use the word stress to refer to their state of tension. Our responses can be psychological, such as your thought patterns and emotions when you "feel nervous," and physiological, as when your heart pounds, your mouth goes dry, and you perspire. The psychological and physiological response to a stressor is called strain.

The third approach describes stress as a process that includes stressors and strains, but adds an important dimension: the relationship between the person and the environment. This process involves continuous interactions and adjustments, called transactions, with the person and environment each affecting and being affected by the other. According to this view, stress is not just a stimulus or a response, but rather a process in which the person is an active agent who can influence the impact of a stressor through behavioral, cognitive, and emotional strategies.

People differ in the amount of strain they experience from the same stressor. One person who is stuck in traffic and late for an important appointment keeps looking at his watch, honking his horn, and getting angrier by the minute; another person in the same circumstances stays calm, turns on the radio, and listens to music.
We will define stress as the circumstance in which transactions lead a person to perceive a discrepancy between the physical or psychological demands of a situation and his or her biological, psychological, or social resources.

Resources: Stress taxes the person's biopsychosocial resources for coping with difficult events or circumstances. These resources are limited, as we saw when Vicki had depleted her ability to cope with her problems, became ill, and sought counseling.

Demands: The phrase "demands of a situation" refers to the amount of our resources the stressor appears to require.

Discrepancy: When there is a poor fit, or a mismatch, between the demands of the situation and the resources of the person, a discrepancy exists.

Transactions: In our transactions with the environment, we assess demands, resources, and discrepancies between them. An important point to keep in mind is that a demand, resource, or discrepancy may be either real or just believed to exist. As an example, suppose you had to take an exam and wanted to do well, but worried greatly that you would not. If you had procrastinated and did not prepare for the test, the discrepancy you see between the demands and your resources might be real. But if you had previously done well on similar exams, prepared thoroughly for this one, and scored well on a pretest in a study guide, yet still thought you would not do well, the discrepancy you see would not reflect the true state of affairs. Stress often results from inaccurate perceptions of discrepancies between environmental demands and actual resources. Stress is in the eye of the beholder.

In appraising a potentially stressful situation, we assess two things: (1) the meaning of the situation for our well-being and (2) the resources available for meeting the demand. These assessments are called primary and secondary appraisal, respectively. In primary appraisal, we may judge a situation in one of three ways. It is irrelevant, as you might decide if you had had similar symptoms of pain and nausea before that lasted only a short while and were not followed by illness.
It is good (called "benign-positive"), which might be your appraisal if you wanted very much to skip work or have a college exam postponed. Or it is stressful, as you might judge if you feared the symptoms were signs of a serious illness, such as botulism (a life-threatening type of food poisoning).

Circumstances we appraise as stressful receive further appraisal for three implications: harm-loss, threat, and challenge. Harm-loss refers to the amount of damage that has already occurred, as when someone is incapacitated and in pain following a serious injury. Sometimes people who experience a relatively minor stressor think of it as a "disaster," thereby exaggerating its personal impact and increasing their feelings of stress. Threat involves the expectation of future harm, for example, when hospitalized patients contemplate their medical bills, difficult rehabilitation, and loss of income. Stress appraisals seem to depend heavily on harm-loss and threat. Challenge is the opportunity to achieve growth, mastery, or profit by using more than routine resources to meet a demand. For instance, a worker might view an offer of a higher-level job as demanding, but see it as an opportunity to expand her skills, demonstrate her ability, and make more money. Many people are happiest when they face challenging but satisfying activities.

Secondary appraisals of our coping resources can range widely, for example:

- I can't do it. I know I'll fail.
- I'll try, but my chances are slim.
- I can do it if Ginny will help.
- If this method fails, I can try a few others.
- I can do it if I work hard.
- No problem. I can do it.

When we judge our resources as sufficient to meet the demands, we may experience little or no stress; but when we appraise demands as greater than our resources, we may feel a great deal of stress.
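The transactional idea above, that strain grows with the perceived gap between demands and resources, can be expressed as a toy function. This is a sketch for illustration only: the numeric scale, the cut-off values, and the labels are invented, not part of the model described in the text:

```python
def appraise(demands: float, resources: float) -> str:
    """Toy sketch of the transactional view of stress: the felt stress
    depends on the perceived discrepancy (demands minus resources).
    The scale and thresholds here are arbitrary illustrations."""
    discrepancy = demands - resources
    if discrepancy <= 0:
        # Resources meet or exceed demands: little or no stress.
        return "little or no stress"
    elif discrepancy < 3:
        return "moderate stress"
    return "high stress"

# The inputs are *perceived* demands and resources, so two people
# facing the same objective situation can get different results:
print(appraise(demands=5, resources=7))  # little or no stress
print(appraise(demands=5, resources=3))  # moderate stress
print(appraise(demands=8, resources=2))  # high stress
```

The key design point, mirrored from the text, is that the function takes perceptions rather than objective measurements: stress is in the eye of the beholder.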
Anyone who has experienced a very frightening event, such as a near accident or other emergency, knows that there are physiological reactions to stress: almost immediately, our heart begins to beat more rapidly and more forcefully, and the skeletal muscles of our arms and legs may tremble. The body is aroused and motivated to defend itself, and the sympathetic nervous system and the endocrine system cause this arousal to happen. After the emergency passes, the arousal subsides.

The physiological portion of the response to a stressor or strain is called reactivity, which researchers measure by comparison against a baseline, or "resting," level of arousal. Genetic factors influence people's degree of reactivity to stressors. People who are under chronic stress often show heightened reactivity when a stressor occurs, and their arousal may take more time to return to baseline levels.

The distinguished physiologist Walter Cannon (1929) provided a basic description of how the body reacts to emergencies. He was interested in the physiological reaction of people and animals to perceived danger. This reaction has been called the fight-or-flight response because it prepares the organism to attack the threat or to flee. In the fight-or-flight response, the perception of danger causes the sympathetic nervous system to stimulate many organs, such as the heart, directly, and to stimulate the adrenal glands of the endocrine system, which secrete epinephrine, arousing the body still further. Cannon proposed that this arousal could have positive or negative effects: the fight-or-flight response is adaptive because it mobilizes the organism to respond quickly to danger, but this high arousal can be harmful to health if it is prolonged.

Alarm reaction. The first stage of the general adaptation syndrome (GAS) is like the fight-or-flight response to an emergency; its function is to mobilize the body's resources.
This fast-acting arousal results from the sympathetic nervous system, which activates many organs through direct nerve connections, including the adrenal glands; when stimulated, these release epinephrine and norepinephrine into the bloodstream, producing further activation. Somewhat less quickly, the hypothalamic–pituitary–adrenal (HPA) axis of the stress response is activated; this component of the stress response was Selye's novel and main emphasis. Briefly, the hypothalamus triggers the pituitary gland to secrete ACTH, which causes the adrenal gland to release cortisol into the bloodstream, further enhancing the body's mobilization.

Stage of resistance. If a strong stressor continues, the physiological reaction enters the stage of resistance. Here, the initial reactions of the sympathetic nervous system become less pronounced and important, and HPA activation predominates. In this stage, the body tries to adapt to the stressor. Physiological arousal remains higher than normal, and the body replenishes the hormones the adrenal glands released. Despite this continuous physiological arousal, the organism may show few outward signs of stress. But the ability to resist new stressors may become impaired. According to Selye, this impairment may eventually make the individual vulnerable to the health problems he called diseases of adaptation. These health problems include ulcers, high blood pressure, asthma, and illnesses that result from impaired immune function.

Stage of exhaustion. Prolonged physiological arousal produced by severe long-term or repeated stress is costly. It can weaken the immune system and deplete the body's energy reserves until resistance is very limited. At this point, the stage of exhaustion begins. If the stress continues, disease and damage to internal organs are likely, and death may occur.

The cumulative wear and tear of stress on the body, its allostatic load, depends on several factors. Amount of exposure.
This is obviously key: when we encounter more frequent, intense, or prolonged stressors, we are likely to respond with a greater total amount of physiological activation.

Magnitude of reactivity. In response to any particular stressor, such as taking a major academic exam, some individuals will show large increases in blood pressure or stress hormones while others show much smaller changes.

Rate of recovery. Once the encounter with a stressor is over, physiological responses return to normal quickly for some people, but stay elevated for a longer time for others. Continuing to think about a stressor after it is over, revisiting it mentally, or worrying about it recurring in the future can delay physiological recovery and add to the accumulated toll through prolonged physiological activation.

Resource restoration. The resources used in physiological strain are replenished by various activities, and sleep may be the most important of them. Sleep deprivation can be a source of stress and contributes to allostatic load directly. What's more, poor sleep quality or reduced amounts of sleep predict the development of serious health problems, such as heart disease.

Stress often arises from conflict between incompatible goals, which takes several forms.

Approach/approach, or double approach: The choice involves two appealing goals that are incompatible. For example, individuals trying to lose weight to improve their health or appearance experience frequent conflicts when delicious, fattening foods are available. Although people generally resolve an approach/approach conflict fairly easily, the more important the decision is to them, the greater the stress it is likely to produce.

Avoidance/avoidance, or double avoidance: The choice is between two undesirable situations. For example, patients with serious illnesses may be faced with a choice between two treatments that will control or cure the disease but have very undesirable side effects.
People in avoidance/avoidance conflicts usually try to postpone or escape from the decision; when this is not possible, they often vacillate between the two alternatives, changing their minds repeatedly, or get someone else to make the decision for them. People generally find avoidance/avoidance conflicts difficult to resolve and very stressful.

Approach/avoidance: A single goal or situation has both attractive and unattractive features. This type of conflict can be stressful and difficult to resolve. Consider, for instance, individuals who smoke cigarettes and want to quit. They may be torn between wanting to improve their health and wanting to avoid the weight gain and cravings they believe will occur.

Coping is the process by which people try to manage the perceived discrepancy between the demands and resources they appraise in a stressful situation. This definition indicates that coping efforts can be quite varied and do not necessarily lead to a solution of the problem. According to Richard Lazarus and his colleagues, coping can serve two main functions: it can alter the problem causing the stress, or it can regulate the emotional response to the problem.

Emotion-focused coping is aimed at controlling the emotional response to the stressful situation. People can regulate their emotional responses through behavioral and cognitive approaches. Examples of behavioral approaches include using alcohol or drugs, seeking emotional social support from friends or relatives, and engaging in activities, such as sports or watching TV, that distract attention from the problem. Cognitive approaches involve how people think about the stressful situation. In one cognitive approach, people redefine the situation to put a good face on it, such as by noting that things could be worse, making comparisons with individuals who are less well off, or seeing something good growing out of the problem.
People who want to redefine a stressful situation can generally find a way to do it, since there is almost always some aspect of one's life that can be viewed positively. Other emotion-focused cognitive processes include strategies Freud called "defense mechanisms," which involve distorting memory or reality in some way. For instance, when something is too painful to face, the person may deny that it exists; this defense mechanism is called denial. In medical situations, individuals who are diagnosed with terminal diseases often use this strategy and refuse to believe they are really ill. This is one way in which people cope by using avoidance strategies. But strategies that promote avoidance of the problem are helpful mainly in the short run, such as during an early stage of a prolonged stress experience.

People tend to use emotion-focused approaches when they believe they can do little to change the stressful conditions. An example is when a loved one dies: in this situation, people often seek emotional support and distract themselves with funeral arrangements and chores at home or at work. Other examples can be seen in situations in which individuals believe their resources are not and cannot be adequate to meet the demands of the stressor.

Emotion-focused coping methods matter for health because they sometimes interfere with getting medical treatment or involve unhealthful behaviors, such as using cigarettes, alcohol, and drugs to reduce tension. People often use these substances in their efforts toward emotion-focused coping.

Problem-focused coping is aimed at reducing the demands of a stressful situation or expanding the resources to deal with it.
Everyday life provides many examples of problem-focused coping, including quitting a stressful job, negotiating an extension for paying some bills, devising a new schedule for studying (and sticking to it), choosing a different career to pursue, seeking medical or psychological treatment, and learning new skills. People tend to use problem-focused approaches when they believe their resources or the demands of the situation are changeable. For example, caregivers of terminally ill patients use problem-focused coping more in the months prior to the death than during bereavement.

Can problem-focused and emotion-focused coping be used together? Yes, and they often are. For instance, a study had patients with painful arthritis keep track of their daily use of problem- and emotion-focused coping. Most often, they used the two types of coping together; but when they used only one type, three-quarters of the time it was problem-focused coping.

The term stress management refers to any program of behavioral and cognitive techniques that is designed to reduce psychological and physical reactions to stress. Sometimes people use pharmacological approaches under medical supervision to reduce emotions, such as anxiety, that accompany stress. Because drugs are not well suited to long-term control of stress and emotions, using drugs for stress should be a temporary measure. For instance, they might be used during an acute crisis, such as in the week or two following the death of a loved one, or while the patient learns new psychological methods for coping.

One behavioral technique people can learn to control their feelings of tension is called progressive muscle relaxation (or just progressive relaxation), in which they focus their attention on specific muscle groups while alternately tightening and relaxing these muscles.
https://en.monerkhabor.com/featured/2017/08/19/what-is-stress/
College is the initial gateway to stress-related sensations; it is, indeed, a pathway to adulthood. According to the National College Health Assessment (NCHA), only about 1.6% of undergraduates report feeling no stress whatsoever in the last 12 months. The other 98.4% aren't that lucky. In fact, more than 45% of college students stated they experience more than average stress, and 87% of students reported feeling overwhelmed at least once in the previous year due to the volume of their college assignments. It is safe to say that the strain of the fast-paced contemporary world has left its mark.

Although short-term stress can help with exceptional written and/or verbal academic performance (when needed), long-term stress is a looming threat to our country's bright future. The ability to manage stress is key to personal well-being and academic success. This is the college students' guide to major stressors and coping mechanisms. We're glad you're here.

Defining Stress

So, how do we define stress, and how does it define us? From college relationships to academic achievement, the emotional apparatus is overwhelmed by all the stimuli young adulthood brings. According to the Anxiety and Depression Association of America, nearly 80% of college students experience stress daily. New responsibilities, a new environment, expanding social circles, and a newly formed pattern of time distribution often lead to heightened anxiety levels. It is in our nature to resist novelty, after all.

Stress is an involuntary, natural reaction to an individual's emotions. Both negative experiences (exams, break-ups) and positive ones (parties, academic success, being in love) are known to cause stress. Learning to cope with an influx of new positive or negative experiences will lead to a balanced academic life.

Categorizing Stress Levels

We're not here to talk about the adrenal gland today, but the subject sort of imposes itself due to the very nature of our topic.
When you’re young, you don’t really cognize or elaborate on the benevolently given input; life is ahead of you, and you have all the time in the world to think about hypothetical consequences, if and when they occur. The adrenal glands produce hormones that regulate our immune system, metabolism, blood pressure, and ability to respond to stress. According to Mayo Clinic reports, expected effects of stress include restlessness, irritability, and depression; upset stomach, insomnia, headaches, and exhaustion are common companions as well. Any hormonal disharmony derives from one of these three types of stress:

- Acute stress: the most common form, resulting from accumulated day-to-day stressors such as running late to class, sleeping in, and poor grades. Luckily, this mild form leaves little to no lasting mark on your physical and mental health.
- Episodic acute stress: if you disregard the initial signals, acute stress will turn into a spree. Much like Netflix, the stress will broadcast episodes, followed by a myriad of symptoms including headaches, gastrointestinal issues, heartburn, possible panic attacks, and muscle tension.
- Chronic stress: giving in after long-term strain. Students struggling to pass an exam (or to score high) may experience chronic stress, followed by changes in appetite, energy, sleep, social behavior, and emotional responses in interpersonal relationships.

The Pain of Growing Up

Our college students’ guide to major stressors and coping mechanisms declares homesickness a natural response to gaining independence. Growth equals unforeseen discomfort. The process of learning how to take care of yourself will most likely induce feelings of loneliness and sadness. Being away from your pillars of support (your family, childhood or high school friends, or your long-distance romantic relationship) is a buffet of potential sorrows.
Not only will reflecting on and articulating your struggles improve existing relationships, but it will also make room for social growth. The “betrayal syndrome” is strong once you leave home. Understanding the importance of your “empirical family” will ease the transition and make you feel less alone. The world is your oyster, and the people you meet might be your pearl.

Stress Leads to Addiction

Behavioral symptoms in students struggling with stress include erratic sleep patterns, binge-eating or loss of appetite, and alcohol or drug use. Many individuals lead a seemingly ordinary college life despite their substance abuse; the reality is that it’s tough to recognize when someone is struggling with addiction. The best way to cope with this particular “side effect” of stress is to seek help and support. If you are the one trying to help a friend with addiction, the best you can do is offer support and understanding. Be aware, though, that helping a friend with addiction issues can also be a significant strain on the bystander, as it forces the individual not only to stay lucid but also to step prematurely into a nurturing, grown-up role.

Financial Struggles

Around 70% of college students report financial stress. Many young individuals work while in college to afford tuition, meal plans, textbooks, and other general expenses. Financial strain takes a toll on even the most resilient students, caught between over-engagement, making ends meet, and, to put it in modern terms, FOMO. For students forced to work part-time while balancing an academic budget, the risk of dropping out of college climbs steadily. The education you receive can make or break your future career. Speaking to your financial aid office to see whether you’re a candidate for loans or grants could alleviate the stress.

Post-Graduation Anticipation

“What’s next?” is the haunting question of all graduates.
Post-graduation stress can leave you physically, mentally, and emotionally drained if you let yourself sink into it. Finding healthy coping mechanisms is critical. Facing your rumination about the future with a trusted advisor is the best way to debunk the doom-and-gloom premise of leaving your soon-to-be past behind. Billions have done it. So can you.

Final Thoughts

Our final tip from this college students’ guide to major stressors and coping mechanisms: prioritize self-care as your most critical college course, and it will help you navigate the hallways of stress and anxiety. Proper rest, staying active, and having a healthy stress outlet will bring the puzzle together. You can do this!
https://www.collegebasics.com/blog/the-college-students-guide-to-major-stressors-and-coping-mechanisms/
Addison’s disease, also known as adrenal insufficiency, is an extremely rare disease that occurs when the body cannot produce enough of certain hormones. In Addison’s disease, which is seen in about one out of every 100,000 people, the glucocorticoid (cortisol) and mineralocorticoid (aldosterone) hormones in the blood are reduced because the adrenal glands, located just above the kidneys, do not secrete enough of them.

What causes Addison’s disease?

There are two types of Addison’s disease: primary adrenal insufficiency and secondary adrenal insufficiency. Approximately 70% of primary adrenal insufficiency is due to an autoimmune process. Other causes, such as adrenal gland damage, tuberculosis, various bacterial, viral and fungal infections, adrenal gland bleeding, and cancer metastasis to the adrenal glands, can also produce primary adrenal insufficiency. Secondary adrenal insufficiency occurs due to a decrease in the production of the pituitary hormone ACTH (adrenocorticotropic hormone): when ACTH is deficient because of a pituitary tumor or another cause, cortisol production is not stimulated. Aldosterone production is usually not affected in secondary adrenal insufficiency.

What are the symptoms of Addison’s disease?

The symptoms of Addison’s disease vary according to which hormone is deficient, so it helps to know the functions of these hormones. Cortisol is secreted by the adrenal glands in response to stress; its most important task is to help the body respond to stress. It also helps the body regulate the use of protein, carbohydrate and fat, maintains blood pressure and cardiovascular function, and controls inflammation. Aldosterone is a steroid hormone secreted from the outer part of the adrenal glands (the cortex); it acts on the kidneys to remove potassium and reabsorb sodium, thereby regulating the body’s electrolyte balance.
When aldosterone levels are severely reduced, the kidneys cannot keep salt and water levels in balance, causing dehydration and low blood pressure. Addison’s disease symptoms usually develop slowly over a period of several months. The disease progresses so slowly that some of its symptoms are ignored until a stress such as illness or injury arises and the symptoms become more pronounced. The main symptoms of Addison’s disease are:

- Extreme fatigue
- Weight loss and severe loss of appetite
- Low blood sugar (hypoglycemia), especially when fasting
- Darkening of the oral mucosa and skin, especially in surgical scars, the nipples and the genital areas (hyperpigmentation)
- Low blood pressure, and fainting caused by it
- Increased craving for salt
- Nausea, diarrhea, or vomiting
- Abdominal pain
- Pain in the muscles or joints
- Irritability
- Depression or other behavioral changes
- Decreased sweating
- Decreased armpit and genital hair growth, especially in women

How is Addison’s disease diagnosed?

For the diagnosis of Addison’s disease, the specialist first takes the patient’s history and examines the clinical findings. In case of doubt, various laboratory tests are performed to determine whether the patient has Addison’s disease and to distinguish between primary and secondary adrenal insufficiency. Tests that evaluate the patient’s electrolyte balance, blood sugar level and kidney function are also needed to determine the cause of the disease and to guide treatment. In some cases, additional tests may be ordered, such as the insulin-induced hypoglycemia test, the low-dose ACTH stimulation test, the long-term ACTH stimulation test, or the glucagon stimulation test. Radiological scans such as CT (computed tomography) or MRI (magnetic resonance imaging) may also be used to examine the size and shape of the adrenal glands and the pituitary.

How is Addison’s disease treated?
Because adrenal insufficiency leaves the body short of functional hormones, doctors usually prescribe hormone replacement to treat Addison’s disease. Cortisol is replaced with hydrocortisone tablets, a steroid hormone taken once or twice daily. If needed, aldosterone is replaced with fludrocortisone acetate, a synthetic steroid taken orally once a day. These doses must be increased at times of stress, infection, surgery or injury. Hormone therapy usually gives successful results, and when treatment succeeds, people with Addison’s disease can lead a fairly normal life. However, they are advised to always wear a medical alert bracelet, carry an emergency ID card, and keep a small supply of medication at work or school.

What is the Addison’s disease diet like?

Fatigue is one of the most common symptoms of Addison’s disease, but under no circumstances should stimulants, energy drinks, soda, or coffee be used. Because of their high caffeine content, these beverages overstimulate the adrenal glands, and the stimulants and excessive sugar they contain damage the glands further. The same warnings apply to cigarettes and tobacco products. Processed foods containing refined carbohydrates and sugar should be avoided as much as possible. If you have diabetes along with Addison’s disease, these foods can disrupt your insulin balance more than normal, and they worsen the symptoms of Addison’s disease, especially when blood sugar is low. There has been a lot of controversy over whether salt is beneficial or harmful for Addison’s patients. The truth is that proper nutrition requires salt, or sodium. Salt and sodium are of particular importance, since low blood sugar is one of the main symptoms of this disease, and getting enough sodium helps keep blood sugar at a steady level. However, you should take care to get this need from high-quality sources.
Examples of such sources are Himalayan salt and sea salt. Do not ignore a craving for salt: it may reflect a real need. If you sweat a lot, add a little more salt to your food and drink more fluids, especially water. Excessive stress can cause serious damage to the body by aggravating the disease, so during stressful times take care to consume more foods containing vitamin C. Antidepressants can help strengthen your immune system and help your body adapt better to stress, preventing further damage to your adrenal glands; however, antidepressants should only be used on the recommendation of a specialist psychiatrist. Vitamin B intake stimulates the production of certain hormones and of neurotransmitters, the messengers that carry impulses through our nervous system. You can increase your consumption of free-range eggs, shellfish, sardines and salmon to get more B vitamins. Zinc is important not only for a well-functioning immune system but also for the production of hormones that help fight stress; you can get zinc from seafood, nuts, beans, spinach, and mushrooms. Magnesium calms the nervous system. Avocados, black-eyed peas, bananas, yogurt and spinach are among the rich sources of magnesium.
https://www.healthmedic24.com/what-is-addisons-disease/
Journal: Critical Care, Issue 4/2010

Competing interests: The authors declare that they have no competing interests.

Abstract

The hypothalamic-pituitary-adrenal (HPA) axis response in sepsis remains to be elucidated. Apart from corticotropin-releasing hormone, adrenocorticotropic hormone, and cortisol, many other neuroendocrine factors participate in the regulation of the HPA stress response. The HPA response to acute and chronic illness exhibits a biphasic profile, and tissue corticosteroid resistance may also play an important role. All of these add to the complexity of the concept of ‘relative adrenal insufficiency’ and may account for the difficulty of clinical diagnosis and for the conflicting results of corticosteroid replacement therapy in severe sepsis/septic shock. The study by Lesur and colleagues expands our understanding of the mechanism, and further study of the HPA stress response is warranted.
https://www.springermedizin.de/sepsis-related-stress-response-known-knowns-known-unknowns-and-u/9732178
Here Are The Most Frequently Asked Adrenal Fatigue Questions

Here is a list of the most commonly asked questions about adrenal fatigue. I have tried to answer them all on one page for your convenience. Please contact me if you have a question that is not on this page; I’ll gladly answer it.

What is Adrenal Fatigue?

Adrenal Fatigue is a syndrome (a related group of signs and symptoms) that results when the adrenal glands function below the necessary level, usually because of intense, prolonged or repeated stress. As its name suggests, the chief symptom is fatigue unrelieved by sleep. However, Adrenal Fatigue is not a readily identifiable entity like measles or allergies. Its severity can range from a general sense of tiredness, without obvious signs of physical illness, to difficulty getting out of bed for more than a few hours a day. With each increment of reduction in adrenal function, every organ and system in the body is more profoundly affected. Changes occur in carbohydrate, protein and fat metabolism; fluid and electrolyte balance; immune, cardiovascular and nervous system function; and even in libido. Numerous other alterations take place at the biochemical and cellular levels. This syndrome has been known by many other names throughout the past century, such as non-Addison’s hypoadrenia, sub-clinical hypoadrenia, neurasthenia, adrenal neurasthenia, adrenal apathy and adrenal fatigue. Although it affects millions of people in New Zealand, Australia and around the world, conventional medicine does not yet recognise it as a distinct, treatable syndrome.

Who gets Adrenal Fatigue?

Anyone from birth to old age and from any race or culture can suffer from Adrenal Fatigue. People vary greatly in their ability to respond to and withstand stress. An illness, a life crisis or a continuing difficult situation can drain the adrenal resources of even the healthiest person. However, there are certain factors that increase susceptibility to Adrenal Fatigue.
These include poor diet; substance abuse; too little sleep and rest; too many social, emotional or physical pressures; serious or repeated injury; chronic illness; repeated infections such as bronchitis or pneumonia; allergies; exposure to a toxic environment; and a mother with Adrenal Fatigue during gestation and birth. Unfortunately, many of these factors are common in modern life.

What causes Adrenal Fatigue?

The adrenal glands mobilise the body’s response to every kind of physical, emotional and psychological stress through hormones that regulate energy production and storage, heart rate, muscle tone, immune function and other processes that deal with stress. Adrenal Fatigue is produced when the output of regulatory adrenal hormones is diminished through over-stimulation of the adrenals by severe, chronic or repeated stress, or because the adrenals have been weakened by poor nutrition, congenital factors or other causes. In Adrenal Fatigue the adrenal glands function, but not well enough to adequately meet the demands of stress and maintain normal, healthy homeostasis. The causes of Adrenal Fatigue usually stem from one of four common sources that overwhelm the glands:
1) Disease states such as severe or recurrent pneumonia, bronchitis or flu, cancer, AIDS, auto-immune and other illnesses.
2) Physical stress such as surgery, poor nutrition, addiction, injury, and exhaustion.
3) Emotional/psychological stress from relationships, work or other unavoidable life situations.
4) Continual and/or severe environmental stress from toxic chemicals and pollutants in the air, water, clothing or food.

What are the symptoms of Adrenal Fatigue?

Some of the most common symptoms of Adrenal Fatigue are regularly experiencing:
1) Early morning fatigue and difficulty getting up in the morning, even after a full night’s sleep
2) Tiredness, especially in the early morning and mid-afternoon
3) Symptoms of hypoglycemia
4) Feeling run-down or overwhelmed
5) Difficulty bouncing back from stress or illness
6) Cravings for salty and/or sweet snacks
7) Feeling best after 6 PM

Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. James L. Wilson contains a complete profile of symptoms and includes a comprehensive questionnaire and detailed descriptions of tests for Adrenal Fatigue.

Is Adrenal Fatigue common?

Yes, Adrenal Fatigue is a very common disorder, estimated to affect many millions of people worldwide at some point in their lives. The problem is that it is not recognised as a syndrome in its own right (just as chronic fatigue was not for many years, with many doctors still skeptical of its existence), leaving the patient wandering from doctor to doctor, receiving only symptomatic treatment for insomnia, depression, anxiety and a host of other complaints.

Where are the adrenal glands?

The adrenal glands are two small glands, about the size of large grapes, which sit over the kidneys. They are located towards the back of the body, near the bottom of the ribs on each side of the spine.

How do doctors diagnose Adrenal Fatigue?

Most medical doctors are not aware of Adrenal Fatigue. They only recognise Addison’s disease, the most extreme form of low adrenal function. Astute doctors who are familiar with the varying degrees of decreased adrenal function usually test the adrenal hormone levels in the saliva; this is an accurate and useful indicator of Adrenal Fatigue. There are other common lab tests that can be used more indirectly to detect Adrenal Fatigue, but the majority of medical doctors do not know how to interpret them for indications of Adrenal Fatigue. The Adrenal Fatigue Questionnaire on page 61 of Dr. Wilson’s book is the most widely used questionnaire to help doctors who are aware of low adrenal function make their diagnosis.

Are there laboratory tests that detect Adrenal Fatigue?

Yes.
The most accurate and valuable test for detecting Adrenal Fatigue is a saliva hormone test for cortisol. This is a simple and relatively inexpensive test that has recently become available from a few labs, such as Medlab in New Zealand. A kit can be obtained through your health-care professional and the test completed privately at home by simply producing saliva and spitting into the test tubes four times throughout a 24-hour day. Nutrisearch, the exclusive agents for Dr. Wilson’s products and protocols in NZ, regularly hold workshops to train physicians in the correct interpretation of salivary hormone testing. There are some other lab tests, but they need special interpretation by practitioners to recognise and treat Adrenal Fatigue.

Can people recover from Adrenal Fatigue?

Although Adrenal Fatigue may last only a short while, especially if it was caused by one transient stressful event, it can be debilitating and last for many years, even a lifetime, without proper treatment. However, with proper treatment, most people can fully recover from Adrenal Fatigue.

Can children have Adrenal Fatigue?

Yes, children are susceptible to the same causative factors for Adrenal Fatigue as adults. Children whose mothers had Adrenal Fatigue during their gestation and/or birth are especially vulnerable to lowered adrenal function. These children are often more sickly, have less ability to handle stressful situations, and take longer to recover from illnesses. However, they too can greatly benefit from proper adrenal support and healthy lifestyle choices. We often see adolescents and young adults in the clinic with post-viral fatigue syndromes along with Adrenal Fatigue, who benefit tremendously from Dr. Wilson’s Adrenal Fatigue Program.

Is age a factor in Adrenal Fatigue?
People can suffer from Adrenal Fatigue at any age, but both the very young and the very old are more vulnerable to stress and therefore to Adrenal Fatigue.

How often do bouts of Adrenal Fatigue occur?

Frequency varies with each person. Some people have only one episode of Adrenal Fatigue during their lifetime, some have several, and others experience chronic Adrenal Fatigue from which they never recover. Whether the Adrenal Fatigue is infrequent or chronic, proper adrenal support will make all the difference.

Can Adrenal Fatigue become chronic?

Yes, in some people the adrenal glands do not return to normal levels of function without help, either because the stress was too great or too prolonged, or because their general health is poor. However, when Adrenal Fatigue becomes chronic, it is almost always because of factors that can be changed through modifications in lifestyle and proper adrenal support.

How likely is it that Adrenal Fatigue will get worse or result in Addison’s disease?

Proper adrenal support, as described in Dr. Wilson’s book Adrenal Fatigue: The 21st Century Stress Syndrome, combined with his Adrenal Recovery Program, using the special adrenal dietary supplements he created for Future Formulations, is very effective and greatly decreases the likelihood that Adrenal Fatigue will worsen or progress to the extreme of Addison’s. Approximately 70% of Addison’s disease cases are actually an auto-immune disease. The rest (about 30%) are called idiopathic (of no known cause) but can be precipitated by events in people’s lives that severely impair adrenal function. Within this 30%, each factor that protects the adrenals (healthy lifestyle, food choices, exercise, attitudes, stress management and supplemental adrenal support) has a tremendous impact on whether Adrenal Fatigue progresses towards recovery or collapse.

What keeps the adrenal glands healthy?
The guidelines for keeping the adrenal glands healthy are very similar to the overall principles of good health. A moderate lifestyle with good-quality food, regular exercise and adequate rest, combined with a healthy mental attitude to the stresses of life, goes a long way towards keeping the adrenal glands strong and resilient. However, because modern life is so stressful, certain nutritional supplements specially designed for adrenal support are also important, both for maintaining healthy adrenal glands and for helping depleted adrenal glands to recover. The nutritional supplements designed by Dr. Wilson, author of Adrenal Fatigue: The 21st Century Stress Syndrome, are available in NZ from The Naturpaths website.

What can someone do to prevent Adrenal Fatigue?

Read Part 3, “Helping Yourself Back to Health”, in the book Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson, and faithfully follow the instructions. During any illness, dramatically increase intake of vitamin C, bioflavonoids, magnesium and pantothenic acid or, better yet, use the Future Formulations dietary supplements designed by Dr. Wilson for adrenal support. After an illness, do not try to hit the floor running; instead, take an extra day off work in order to rejuvenate. If there is lingering tiredness after an illness, emotional shock or other event that produces Adrenal Fatigue, sleep in late, be especially conscious of eating high-quality foods, and avoid caffeine and alcohol. In addition, saunas can be great for detoxifying and unwinding, thus lessening the stress load on the adrenals.

Is there a genetic predisposition towards Adrenal Fatigue?

It is not known whether there is an actual genetic predisposition for Adrenal Fatigue. However, if one or both parents suffer from Adrenal Fatigue, either chronically or at the time of conception, and if the mother has Adrenal Fatigue during gestation, there is a greater than 50% chance that their children will also suffer from Adrenal Fatigue.
This may be seen in a child with a weak constitution, early allergies, a propensity towards lung infections, a decreased ability to handle stress and longer recovery times after illnesses. Although these children will never have exceptionally strong adrenal glands, much can be done to help them recover through proper adrenal support and healthy lifestyle choices, as described in Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson and at his Adrenal Fatigue website, www.adrenalfatigue.org.

Is Adrenal Fatigue related to other health conditions?

The processes that take place in any chronic disease, from arthritis to cancer, place demands on your adrenals. Therefore, as a general rule, if morning fatigue is a symptom of the chronic disease, the adrenals are likely fatigued to some degree. Also, any time a medical treatment includes the use of corticosteroids, diminished adrenal function is most likely present. All corticosteroids are designed to imitate the actions of cortisol, a hormone secreted by the adrenals, so the need for them arises primarily when the adrenals are not providing the required amounts of cortisol.

Is Adrenal Fatigue common in someone with cancer who is going through chemotherapy?

The extreme fatigue of cancer and other chronic illnesses is often the result of decreased adrenal function. Chronic illness and toxic treatments like chemotherapy are major stressors that the adrenals must respond to. In addition, because of the side effects of chemotherapy, and sometimes of the cancer itself, nutrient consumption and absorption are often decreased, further impairing adrenal function. It is very important to provide adrenal support during this time.

Does Adrenal Fatigue increase susceptibility to infections?

Adrenal Fatigue often goes hand in hand with decreased immune function, which makes someone more prone to illnesses.
There is an especially strong association between Adrenal Fatigue and respiratory infections such as bronchitis and pneumonia.

Does Adrenal Fatigue affect the thyroid gland?

Approximately 80% of people suffering from Adrenal Fatigue also suffer some form of decreased thyroid function. People shown to be low in thyroid function but unresponsive to thyroid therapy are most likely suffering from Adrenal Fatigue as well. For these people to get well, the adrenals must be supported in addition to the thyroid.

Is Adrenal Fatigue related to fibromyalgia?

Most people with fibromyalgia have a form of Adrenal Fatigue, and sometimes the Adrenal Fatigue precedes the fibromyalgia. Many studies show that people with fibromyalgia also have reduced levels of the adrenal hormone cortisol. Proper adrenal support improves adrenal function, including the production of cortisol, and the resulting higher cortisol levels reduce the signs and symptoms of fibromyalgia. Fibromyalgia and hypothyroidism are also linked.

Is Adrenal Fatigue linked to clinical depression?

It can be; a mild depression is often one of the symptoms of Adrenal Fatigue. A saliva test for adrenal hormones will determine whether the adrenals are involved when depression occurs. If the test indicates adrenal involvement, proper adrenal support will help eliminate the depression.

Is Adrenal Fatigue related to chronic fatigue syndrome?

Adrenal Fatigue is a common, but usually unrecognised, component of chronic fatigue syndrome (CFS). The adrenals can become overloaded by the lingering effects of the infectious agents that originally led to the CFS and by the stress of the illness itself. With new diagnostic procedures available for detecting the specific infectious agents responsible, there have been encouraging results using a combination treatment that eliminates the specific pathogens while strengthening the adrenals.

Is Adrenal Fatigue a factor in people with HIV or Hepatitis C?
Adrenal Fatigue is a common factor in people with Hepatitis C and HIV. Unfortunately, one of the treatments for Hepatitis C is the administration of corticosteroid drugs, which suppresses both the adrenals and the immune system, thus speeding the patient’s decline. A relationship has been demonstrated between the survival of HIV-infected patients and their levels of the adrenal hormone cortisol. In both Hepatitis C and HIV, adequate adrenal support can be of significant benefit.

Does Adrenal Fatigue cause or increase allergies?

It has long been observed that people suffering from Adrenal Fatigue have greater allergic responses or become allergic to things that previously did not bother them. This is because cortisol, the major adrenal hormone, is the most powerful anti-inflammatory substance in the body. When the adrenals fatigue, cortisol levels drop, making it more likely that the body will have allergic (inflammatory) reactions and that these reactions will be more severe. It is therefore essential for allergic individuals to receive proper adrenal support regardless of what other allergy treatment they try.

Can Adrenal Fatigue affect my sex life?

Yes, a common complaint from both men and women suffering from Adrenal Fatigue is decreased sex drive. This is because the sex hormones are manufactured by the adrenal glands as well as by the sex organs themselves. Low adrenal function can lead to low sexual performance and/or low desire. If the diminished libido is the result of Adrenal Fatigue, proper adrenal support that leads to adrenal recovery will usually restore sexual desire and performance as well.

Can Adrenal Fatigue affect a woman’s menstrual cycles?

Adrenal Fatigue can negatively affect many aspects of a woman’s hormone cycles, including menstrual flow, PMS, perimenopause and menopause.

Does pregnancy set off Adrenal Fatigue?
No, pregnancy usually decreases Adrenal Fatigue, because the fetus produces a greater amount of natural adrenal hormones than is present in the non-pregnant female. However, if the pregnancy is very stressful, it can lead to or increase Adrenal Fatigue.

Are there any pre-surgery precautions that will protect the adrenal glands?

Yes (see Chapter 15 of Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson for details). Eat only high-quality foods, especially good-quality proteins and lots of dark green vegetables. Also use self-hypnosis, visualisation and/or relaxation methods to remain mentally and emotionally calm and positive throughout the procedure, and to heal more quickly afterwards. These measures will help protect the adrenal glands from the stresses of surgery. As far in advance of the surgery as possible, begin taking the Future Formulations Adrenal Fatigue products designed by Dr. Wilson.

Are prescription drugs necessary to treat Adrenal Fatigue?

No, most cases of Adrenal Fatigue can be remedied without prescription drugs. The treatments described in Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson, combined with the Future Formulations dietary supplements Dr. Wilson created for Adrenal Fatigue, are natural, relatively inexpensive and very effective. They have been used by many aware physicians, and by those with Adrenal Fatigue themselves, for recovery from Adrenal Fatigue.

If a doctor says there is no such illness as Adrenal Fatigue, what options do patients have?

Unfortunately, this is the view of many conventional doctors, but they are not as well informed as they believe. Adrenal Fatigue was first diagnosed over 100 years ago and has been successfully treated for decades. However, for various reasons the medical community has ignored the existence of Adrenal Fatigue syndrome over the past 40 years.
The best thing to do is learn and do as much as possible to alleviate the Adrenal Fatigue by using the book Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson, combined with the dietary supplements Dr. Wilson created for Adrenal Fatigue. It may also help to switch to a doctor who is familiar with Adrenal Fatigue syndrome, or to give the uninformed doctor a copy of the book. Hopefully, within 10 years many more physicians will know how to recognise and treat Adrenal Fatigue.

Does smoking increase susceptibility to Adrenal Fatigue?
Yes, smoking is a chronic stress on the body that makes it more difficult for the adrenals to function. Smoking by itself does not lead directly to Adrenal Fatigue unless the adrenals are already weak; however, smoking is one of the body burdens that accelerate Adrenal Fatigue and prevent complete recovery from occurring.

Does diet have anything to do with Adrenal Fatigue?
Yes, diet plays a critical role in Adrenal Fatigue. The phrase "garbage in, garbage out" aptly describes the relationship between poor diet and Adrenal Fatigue. A nutritionally inadequate diet that is high in sugar, caffeine and junk food places daily stress on the body that the adrenal glands have to respond to and, at the same time, deprives the adrenals of the nutrients they need to function. This alone can lead to Adrenal Fatigue, or make the body more vulnerable to Adrenal Fatigue when any additional stress is added. Similarly, good nutrition helps protect and sustain adrenal function during stress. You may find the article Eating For Fatigue useful. When Adrenal Fatigue is already present, a healthy diet that supports adrenal function, combined with the Future Formulations adrenal supplements created by Dr. Wilson, author of Adrenal Fatigue: The 21st Century Stress Syndrome, can lead to recovery.

Are athletes or very fit people as susceptible as others to Adrenal Fatigue?
Athletes and very fit people can suffer from Adrenal Fatigue under certain circumstances. If they push themselves too hard, skip meals, take drugs (e.g. steroidal drugs), or have a lifestyle that is otherwise not conducive to their health, they can lead themselves into Adrenal Fatigue the same as anyone else. Relentlessly pushing themselves, as some athletes do, is also a significant risk factor. Additional factors, such as severe injuries, illnesses and emotional stresses, can debilitate the adrenal glands of anyone, including athletes. Just because someone is an athlete does not necessarily mean they are in excellent health. The better someone's overall health, the less they will experience Adrenal Fatigue.

Is someone who cannot exercise because they are disabled at greater risk for Adrenal Fatigue?
Not necessarily; there are a variety of factors in addition to exercise that influence adrenal resiliency. It all depends on how many things are stacked in favour of health. The chapters in Part 3 of the book Adrenal Fatigue: The 21st Century Stress Syndrome by Dr. Wilson provide easy-to-follow information about how anyone, including those with disabilities, can strengthen their adrenal glands.

Are New Zealanders and Australians more prone to Adrenal Fatigue than people from other nations?
Despite a relative abundance of resources, New Zealanders and Australians have increased their likelihood of suffering from Adrenal Fatigue because of their hectic lifestyle, poor food choices, lack of exercise, and drug, alcohol and caffeine consumption. People of less wealthy nations may be subject to other factors that are individually worse than those New Zealanders and Australians experience, but their overall lifestyle, less processed diets and better family or social structures help counterbalance these.

Does Adrenal Fatigue cause ankle swelling after a long day of standing?
There are many causes of ankle swelling, but one of them is Adrenal Fatigue.
Your ankle swelling is more likely related to Adrenal Fatigue if you have many other signs of it.

Does anyone get through life without Adrenal Fatigue or Adrenal Fatigue problems?
Many people go through life with only a temporary decrease in adrenal function after an infection, the death of a loved one, the loss of a job or other severe stress, because their adrenals are able to bounce back and recover. However, these are usually people who are born with good constitutions and who look after their health as well.

What is the difference between Adrenal Fatigue and hypoadrenia?
Hypoadrenia, as the term is used in medicine, refers to adrenal failure or the extremely low adrenal function called Addison's disease. Although hypoadrenia actually occurs on a spectrum ranging from almost normal to Addison's, only the most extreme low end is recognised and called hypoadrenia in medicine. The less severe forms of hypoadrenia are now referred to as Adrenal Fatigue.
https://ericbakker.com/adrenal-fatigue-faqs/
What is Stress?
Stress can mean different things to different people. Stress is defined as a conflict between the demands placed on us and our ability to cope. The way we cope with these demands will depend on many things. For example, the way we think, our personality and our previous life experiences can affect our stress levels. We live in a world where we hear about stress all the time; we know of the health issues it can cause, the depression, heart disease and suicides that have resulted from stress. Stress is not a fad, nor a diminishing of one's coping abilities or strategies. It is a danger to us, and ultimately it needs to be dealt with when it presents itself.

Good and Bad
As with everything in the world, there is good and bad, and therefore there is positive and negative stress. A positive stress influence can help compel us into action; it can result in a new awareness and an exciting new outlook on a situation, event or life itself.

Triggers
Many things can trigger this stress, including change. Changes can be positive or negative, as well as real or perceived. These changes may be recurring, short-term or long-term. Stress is the "wear and tear" we experience as we adjust to our continually changing environment, which creates both positive and negative physical and emotional effects.

How we cope
People differ dramatically in the type of events they interpret as stressful and the way in which they respond to such stress. For example, driving a car can be very stressful for some people, while for others it is simply relaxing. The ability to tolerate stress is linked to our individual personality, our relationships, energy levels and emotional maturity. For instance, if we are introverted, we are generally more comfortable with fewer stimuli than if we are more extroverted. If we are in an unhappy relationship, it consumes energy, and our energy levels become depleted and drained; the result is that our resistance to stress is compromised.
Also, when we are recovering from illness or simply tired at the end of the day, our ways of dealing with the world around us are less robust.

Freeze-Fight-Flight
The human "freeze, fight, flight" response is written into our DNA; it's part of our blueprint. It is a primitive design that allows the body to adapt quickly to its environment in order to survive. During a freeze-fight-flight episode, the breathing rate speeds up, and the nostrils and air passages in the lungs open wider to get more air in quickly. Next, the heartbeat speeds up and blood pressure rises, and sweating increases to help cool the body and blood. Nutrients are then concentrated in the muscles to provide us with extra strength.

Stress: Help or Hindrance?
Stress is a natural instinct that exists as a bodily reaction to difficult situations as they occur. Low-level stress can, in some people, actually assist in the achievement of professional goals; however, as stress levels increase, so can the negative impact on productivity and motivation in the work environment.

Cortisol and Adrenaline
Cortisol is released by the adrenal glands. The stress hormones epinephrine (also known as adrenaline) and norepinephrine (also known as noradrenaline) are also produced. These hormones help you think and move fast in an emergency; in the right situation, they can save your life. They do not linger in the body, and dissipate as quickly as they were created. Cortisol, on the other hand, streams through your system all day long, and that is what makes it so dangerous. The hormones that are released and the physiological changes that occur are designed to be a "spurt", not present long-term. Stress prolongs these changes because the body does not know that the threat is not real; as we live in stress, we condition the body to believe it is in a permanent state of threat.
Our Well-being
Physical and mental well-being is compromised by this permanent state of stress and the presence of cortisol in the system throughout the day. The result may be psychological conditions such as emotional disorders, irritability leading to anger, and a sense of rejection moving us into depression, as well as physical conditions such as immune response disorders, chronic muscle tension and increased blood pressure. These problems can eventually lead to serious life-threatening illnesses such as heart attacks, kidney disease and cancer.

The Mind
Neuroscientists have discovered how chronic stress and cortisol can damage the brain. Stress triggers long-term changes in brain structure and function. Young people who are exposed to chronic stress early in life are more prone later in life to mental problems such as depression, anxiety and mood disorders, as well as learning difficulties.

Changes In The Brain Structure
It has long been established that stress-related illnesses, such as post-traumatic stress disorder (PTSD), trigger changes in brain structure, including differences in the size and connectivity of the amygdala. Our brains are mouldable through the plastic nature of their structure; chronic stress can impair the neural pathways, connectivity and fluidity of this plasticity, making our brain structure rigid and less pliable. The "stress hormone" cortisol affects the neural pathways between the hippocampus and amygdala in a way that creates a vicious cycle within the brain, leaving it predisposed to a constant state of freeze-fight-flight.

Stress Reaction
Stress at an extreme level can manifest in bodily reactions that have the ability to debilitate on both a personal and professional level.
Typical physiological and psychological disorders brought on by stress include:

Physical Health Issues
Documented physical health issues related to stress are listed below:
- Weight gain
- Diabetes
- Diarrhoea
- Nausea
- Indigestion
- Sphincter of Oddi dysfunction (SOD)
- Irritable Bowel Syndrome (IBS)
- Constipation
- Colds and sinus infections
- Yeast infections
- Bladder infections
- Fibromyalgia
- Arthritis
- High blood pressure (hypertension)
- Cardiovascular disease
- Hyperventilation
- Asthma
- Headaches
- Migraines

Mental Health Issues
Documented mental health issues related to stress are listed below:
http://www.thepositivemind.es/stress-management-individual/
NEWS IN CONTEXT

Should Scientists Create a Synthetic Human Genome?
On June 2, a group of leading scientists announced their intention to build an entire human genome from scratch. While the fabricated genome would be inserted into cells in a laboratory dish, the research initiative has generated controversy because it "could theoretically allow the creation of babies without biological parents." Shortly after the announcement, Hastings Center scholars Josephine Johnston and Gregory Kaebnick participated in two different panel discussions at the World Science Festival in New York in which the ethical and societal implications were debated.

What is it?
Constructing a human genome is one of the main goals of Human Genome Project-Write, a 10-year international project that the scientists want to launch this year. It would engineer, or "write," an entire genome (all the heritable traits) of humans, as well as of other animals and plants that are significant for agriculture or public health, or for furthering the understanding of human biology. Potential applications include growing human organs for transplantation and genetically engineering immunity to viruses and resistance to cancer. The primary goal is to reduce the costs of genomic engineering and testing "by over 1000-fold within 10 years" and to "push current conceptual and technical limits by orders of magnitude and deliver important scientific advances." The scientists have raised $250,000 and are trying to raise $100 million this year from public, private, philanthropic, industry, and academic sources around the world.

News in Context
Scientists involved with the project say that they do not intend to create synthetic babies or engineer the human race, but they recognize that the project raises ethical and social concerns. Their announcement calls for public discourse to ensure responsible innovation, including equitable distribution of benefits globally.
"Before I could say that the project makes sense, I would want to know a lot more about what it was likely to achieve and how much it was likely to cost," says Kaebnick, who is an investigator on the Hastings Center's project on gene editing and has led projects on synthetic biology. Even if the initiative would not create human babies, it still leads eventually to questions about the prospects of enhancing human traits on a larger scale than has been possible so far. And that prospect demands public deliberation. "The challenge is to figure out whether the public discussion can be structured in such a way that it has real bite: that it really makes a difference in how we think about the human enhancement questions."

"The call for funding for a project to 'write' genomes raises questions about the adequacy of the United States' existing governance mechanisms to deal with the implications of this project, were it to succeed in its goals," says Johnston, a principal investigator on The Hastings Center's gene editing project and the PI on a project on next-generation prenatal testing. "While the scientists involved may not intend to generate synthetic genomes for reproductive use, others might seek to do so. We should ensure that our regulatory mechanisms are equipped to deal with these possibilities."

Published on: June 8, 2016
https://www.thehastingscenter.org/news-in-context/should-scientists-create-a-synthetic-human-genome/
Surgeons sometimes operate on the developing fetuses in utero of pregnant women as a medical intervention to treat a number of congenital abnormalities, operations that raise ethical issues. A. William Liley performed the first successful fetal surgery, a blood transfusion, in New Zealand in 1963 to counteract the effects of hemolytic anemia, or Rh disease.
Format: Articles. Subject: Ethics.

Fetal Surgery
Fetal surgeries are a range of medical interventions performed in utero on the developing fetus of a pregnant woman to treat a number of congenital abnormalities. The first documented fetal surgical procedure occurred in 1963 in Auckland, New Zealand, when A. William Liley treated fetal hemolytic anemia, or Rh disease, with a blood transfusion.
Format: Articles. Subject: Disorders, Ethics, Reproduction.

US Endocrine Disruptor Screening Program
In 1996, the US Congress mandated that the US Environmental Protection Agency (EPA) create and regulate the Endocrine Disruptor Screening Program. The program tests industrial and agricultural chemicals for hormonal impacts in humans and in wildlife that may disrupt organisms' endocrine systems. The endocrine system regulates the release of small amounts of chemical substances called hormones to keep the body functioning normally.
Format: Articles.

Social Implications of Non-Invasive Blood Tests to Determine the Sex of Fetuses
By 2011, researchers in the US had established that non-invasive blood tests can accurately determine the gender of a human fetus as early as seven weeks after fertilization. Experts predicted that this ability may encourage the use of prenatal sex screening tests by women interested to know the gender of their fetuses. As more people begin to use non-invasive blood tests that accurately determine the sex of the fetus at seven weeks, many ethical questions arise pertaining to regulation, the consequences of gender-imbalanced societies, and altered meanings of the parent-child relationship.
Format: Articles. Subject: Reproduction, Ethics, Legal.

"Ethical Issues in Human Stem Cell Research: Executive Summary" (1999), by the US National Bioethics Advisory Commission
Ethical Issues in Human Stem Cell Research: Executive Summary was published in September 1999 by the US National Bioethics Advisory Commission in response to a national debate about whether or not the US federal government should fund embryonic stem cell research. It recommended policy to US President William Clinton's administration, advocating federal funding for research on stem cells that came from embryos left over from in vitro fertilization (IVF) fertility treatments.
Format: Articles.

Eugenical Sterilization in the United States (1922), by Harry H. Laughlin
Eugenical Sterilization in the United States is a 1922 book in which author Harry H. Laughlin argues for the necessity of compulsory sterilization in the United States based on the principles of eugenics. The eugenics movement of the early twentieth century in the US focused on altering the genetic makeup of the US population by regulating immigration and sterilization, and by discouraging interracial procreation, then called miscegenation.
Format: Articles. Subject: Outreach, Legal, Ethics, Publications.

Free Hospital for Women Scrapbook by Harvard University Library
This scrapbook is part of the Harvard University Library's collection on "Working Women, 1800-1930," which is itself part of the Open Collections Program. The print version is located at the Francis A. Countway Library of Medicine. It contains information about the hospital, including articles from newspapers, magazines, and other publications; photographs of the hospital, employees, and special events; lecture announcements; letters and other forms of correspondence; ration cards; tickets; forms; certificates; posters; programs; and playbills.
Format: Articles. Subject: Organizations, Ethics, Reproduction.

Abortion
Abortion is the removal of the embryo or fetus from the womb before birth can occur, either naturally or by induced labor. Prenatal development occurs in three stages: the zygote, or fertilized egg; the embryo, from post-conception to eight weeks; and the fetus, from eight weeks after conception until the baby is born. After abortion, the infant does not and cannot live. Spontaneous abortion is the loss of the infant naturally or accidentally, without the will of the mother; it is more commonly referred to as miscarriage.
Format: Articles. Subject: Processes, Ethics, Reproduction.

"Alternative Sources of Human Pluripotent Stem Cells" (2005), by Leon Kass and the President's Council on Bioethics
Human pluripotent stem cells are valued for their potential to form numerous specialized cells and for their longevity. In the US, where a portion of the population is opposed to the destruction of human embryos to obtain stem cells, what avenues are open to scientists for obtaining pluripotent cells that do not offend the moral sensibilities of a significant number of citizens?
Format: Articles. Subject: Publications, Ethics.

Quickening
Quickening, the point at which a pregnant woman can first feel the movements of the growing embryo or fetus, has long been considered a pivotal moment in pregnancy. Over time, this experience has been used in a variety of contexts, ranging from representing the point of ensoulment to determining whether an abortion was legal to indicating the gender of the unborn baby; philosophy, theology, and law all address the idea of quickening in detail. Beginning with Aristotle, quickening divided the developmental stages of embryo and fetus.
Format: Articles. Subject: Processes, Ethics, Reproduction.

Stem Cell Tourism
When James Thomson of the University of Wisconsin announced in 1998 that he had derived and cultured human embryonic stem cells (hESCs), Americans widely believed, and accepted, that stem cells would one day be the basis of a multitude of regenerative medical techniques. Researchers promised that they would soon be able to cure a variety of diseases and injuries such as cancer, diabetes, Parkinson's, spinal cord injuries, severe burns, and many others. But it wasn't until January 2009 that the Food and Drug Administration approved the first human clinical trials using hESCs.
Format: Articles.

The Report of the Committee of Inquiry into Human Fertilisation and Embryology (1984), by Mary Warnock and the Committee of Inquiry into Human Fertilisation and Embryology
The Report of the Committee of Inquiry into Human Fertilisation and Embryology, commonly called the Warnock Report after the chair of the committee, Mary Warnock, is the 1984 publication of a UK governmental inquiry into the social impacts of infertility treatment and embryological research. The birth of Louise Brown in 1978 in Oldham, UK, sparked debate about reproductive and embryological technologies. Brown was conceived through in vitro fertilization (IVF), a process of fertilization that occurs outside of the body.
Format: Articles. Subject: Publications, Legal, Ethics.

Assisted Human Reproduction Act (2004)
The Assisted Human Reproduction Act (AHR Act) is a piece of federal legislation passed by the Parliament of Canada. The Act came into force on 29 March 2004. Many sections of the Act were struck down following a 2010 Supreme Court of Canada ruling on its constitutionality. The AHR Act sets a legislative and regulatory framework for the use of reproductive technologies such as in vitro fertilization and related services including surrogacy and gamete donation. The Act also regulates research in Canada involving in vitro embryos.
https://embryo.asu.edu/search?text=fetal%20alcohol%20spectrum%20disorders&amp%3Bamp%3Bf%5B0%5D=dc_subject_embryo%3A5&amp%3Bamp%3Bf%5B1%5D=dc_subject_embryo%3A310&amp%3Bf%5B0%5D=dc_description_type%3A200&f%5B0%5D=dc_subject_embryo%3A52
Theory & Social Science

Agamben, Giorgio. The Open: Man and Animal. Trans. Kevin Attell. Stanford: Stanford University Press, 2004.
In this work, contemporary Italian philosopher Giorgio Agamben examines the question of human-animal distinctions. He is interested in looking at how Western thought has privileged humans as a superior type of animal, and how this Western construction has far-reaching implications.

Allen, Colin and Wendell Wallach. "Moral Machines: Contradiction in Terms or Abdication of Human Responsibility?" Robot Ethics: The Ethical and Social Implications of Robotics. Ed. Patrick Lin, Keith Abney and George A. Bekey. Cambridge, MA: MIT Press, 2012.
In this work, Allen and Wallach ask whether robots and computer systems (AI) can make moral decisions, and what the implications of such decisions are.

Badmington, Neil. Alien Chic: Posthumanism and the Other Within. London and New York: Routledge, 2004.
Badmington seeks to explore how posthumanism functions in our current age, where the boundaries of the human and the non-human are problematized. In order to do this, he traces a cultural history of the alien from the 1950s onward to map how our attitudes have shifted from fear to affection.

—, ed. Posthumanism. Basingstoke and New York: Palgrave, 2000.
This book asks the question "what is posthumanism and why does it matter?". It seeks to give an introduction to the concept of posthumanism by looking at several topics, such as the humanist notion of humanity's natural supremacy, humanity's relationship to technology, its relationship to politics, and what all of this could mean for us.

Benjamin, Walter. "The Work of Art in the Age of Mechanical Reproduction." Illuminations. Ed. and Trans. Hannah Arendt. London: Fontana, 1968.
In this essay, Benjamin argues that the mechanical reproduction of art lessens its value, or "aura".
In this age of mass production and mechanical reproduction (Benjamin was writing during the Nazi regime), it is important to have a theory of art that envisions a revolutionary political praxis, because this particular type of artistic production has become firmly grounded in political action.

Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke University Press, 2010.
In this work, Bennett argues that we need to shift our focus in political theory away from being human-centric in order to encompass the active participation of nonhuman forces in events. Agency does not belong only to humans, and this is important to recognize so that we can begin to formulate a more inclusive political theory of ecology.

Bergson, Henri. Creative Evolution. Trans. Arthur Mitchell. New York: Henry Holt and Company, 1911.
Written at the beginning of the 20th century, Creative Evolution is Bergson's response to Darwin's theory of evolution, wherein he suggests orthogenesis, the biological hypothesis that organisms have an innate tendency to evolve in a definite direction towards some goal (teleology) due to some internal mechanism or driving force, as an alternative.

Bogost, Ian. Alien Phenomenology, or What It's Like to Be a Thing. Minneapolis: University of Minnesota Press, 2012.
In this work, Bogost seeks to develop a metaphysics that explores the interactions, connections, and experiences of all things. He does this by beginning with an "object-oriented" ontology where things are privileged as the focus of being; by "being" he refers to a way of thinking in which all things possess the same level of existence. In this way, he seeks to move away from the Western notion of the human as the focus of philosophical thinking and reason, and instead to examine the relationality and beingness of these different things.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press, 2014.
Bostrom examines the event of machines gaining superintelligence and what this would spell for humanity. He argues that if machine intelligence is able to surpass that of its human counterparts, then humans will inevitably be replaced as the dominant species on Earth. Due to the exponential rate at which this new superintelligence would be able to improve itself and develop, a Terminator- or Matrix-esque end would be in store for humanity.

Brynjolfsson, Erik and Andrew McAfee. The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies. New York: WW Norton, 2014.
This work examines how our lives and work are being altered by digital technologies. Examples such as Google's self-driving cars or IBM's Watson showcase how we can already develop technology that can mimic and even surpass human equivalents. This advancing technology will herald a new age of prosperity, interconnection and availability of information, but most of all, social change. The book treats this question of uncertainty in an optimistic way by proposing strategies by which we can keep up with, and prosper from, the changes already taking place.

Calo, Ryan M. "Robots and Privacy." Robot Ethics: The Ethical and Social Implications of Robotics. Ed. Patrick Lin, Keith Abney and George A. Bekey. Cambridge, MA: MIT Press, 2012.
In this article, Calo explores the different ways in which our increasing concerns about privacy in this technological age can be addressed. Calo specifically looks at the ways in which robots add to our anxiety regarding privacy, because robots are perfectly equipped with their superior technology and processing power to monitor people at all times. These concerns are then examined in regard to the current state of privacy law, and the many instances in which robotics could easily circumvent it.

Campbell, Timothy C. Improper Life: Technology and Biopolitics from Heidegger to Agamben.
Minneapolis: University of Minnesota Press, 2011.
In this book, Campbell poses the question, "Has biopolitics actually become thanatopolitics?" By this, he is asking if biopolitics has become obsessed with the study of death. The origin of this in modern thought can be traced back to Heidegger, in whose work, specifically his critique of technology, a "crypto-thanatopolitics" can be found. In order to correct this, Campbell suggests a new theorization of biopolitics that, instead of Heidegger, begins with Foucault, Freud, and Deleuze.

Christian, Brian. The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. New York: Doubleday, 2011.
In this work, Christian is interested in exploring what it means to be human, and how our interactions with each other and with computers inform this notion. He looks at the nature of human interactions, the meaning of language, and the questions that arise when faced with machines that possess a far greater processing ability than we do.

Clark, Andy. Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press, 1997.
In this work, Clark is interested in examining foundational questions relating to how the brain, body, and world are all interconnected. He brings this back to an analysis of the emerging sciences of robotics and AI through an interrogation of the tools and techniques that will be needed to make sense of them in our current age.

—. Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. New York: Oxford University Press, 2003.
Moving away from the traditional Western narrative that the cyborg is something to be feared, Clark argues that we should not fear cyborgs because we are already cyborgs. This has been accomplished through our ability to incorporate tools so fully into our existence, more so than other species, which is also what sets humans apart.
This work examines the different ways that technologies have been thus incorporated into our lives, and the different adaptations these incorporations have effected within humanity.

Clarke, Arthur C. Profiles of the Future: An Inquiry into the Limits of the Possible (1962). London: Phoenix, 2000.
Based on a series of essays penned by Clarke between 1959 and 1961, this work is interested not so much in speculated achievements as in "ultimate possibilities". Clarke expounds upon a number of grand ideas concerning the future of humanity and lists various examples of spectacular futurity, not with the strict purpose of saying "this is what will be", but rather "this is what could be".

Davis-Floyd, R. and J. Dumit, eds. Cyborg Babies: From Techno-Sex to Techno-Tots. New York: Routledge, 1998.
This work is interested in exploring the ways in which children in this age of technological ubiquity are rendered as cyborgs precisely by this technoculture. More specifically, it raises questions about reproduction, how this process is influenced by technological processes, and what this then means for humanity.

Deitch, Jeffrey, ed. Post Human. New York: Distributed Art Publishers, 1992.
This work looks at how we as a species are developing into a posthuman state through various technological means such as genetic engineering or body alterations. Contemporary images from a wide variety of artists are utilized in order to explore this emergent posthuman state and the implications therein.

DeLanda, Manuel. War in the Age of Intelligent Machines. New York: Zone Books, 1991.
In this work, DeLanda examines the relationship of technology and weapons, and how advances in computing, AI, surveillance, and robotics have made for increasingly efficient and deadly weapons in warfare.
However, he takes his analysis further by looking at the historical shift this advancement heralds, one that is, for him, indicative of a paradigm shift in humans' relationship to machines and information. Dennett, Daniel. Consciousness Explained. New York: Little, Brown & Company, 1991. This monumental work challenges hitherto accepted theories of consciousness by arguing for a new model. Dennett draws inspiration for this new model from fields such as AI and robotics, medicine and neuroscience, and psychology. Derrida, Jacques. The Animal That Therefore I Am. Trans. David Wills. New York: Fordham University Press, 2008. Based upon a lecture given by Derrida at the Cerisy conference in 1997, this work poses a series of challenging questions concerning the nature of human ontology, animal ethics, and the differences (and similarities) between humans and animals. —. The Gift of Death & Literature in Secret. Trans. David Wills. Chicago: University of Chicago Press, 2008. In this work, Derrida critically considers religion and questions surrounding it, relating to the limits of rational thought and the ethics of accepting death in its different forms, i.e. murder, suicide, execution. Part of his exegesis focuses on questions drawn from the Bible, the sacrifice of Isaac and the flood in Genesis, to consider divine sovereignty and its implications. —. Dissemination. Trans. Barbara Johnson. Chicago: University of Chicago Press, 1983. This work is primarily concerned with exploring the relations between literature, philosophy, and language in a Western context. —. Writing and Difference. Trans. Alan Bass. London: Routledge, 1978. Descartes, René. A Discourse on Method: Meditations and Principles. Trans. John Veitch. London: J.M. Dent & Sons, 1912. Dobrin, Sidney, ed. Ecology, Writing Theory, and New Media: Writing Ecology. New York: Routledge, 2011. — and Sean Morey, eds. Ecosee: Image, Rhetoric, Nature.
Albany, NY: SUNY Press, 2009. Donath, Judith. The Social Machine: Designs for Living Online. Cambridge, MA: MIT Press, 2014. Dunn, T.P. and R.D. Erlich, eds. The Mechanical God: Machines in Science Fiction. Westport, CT: Greenwood, 1982. Dyer-Witheford, N. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press, 2015. Feenberg, A. Transforming Technology: A Critical Theory Revisited. Oxford: Oxford University Press, 2002. Foucault, Michel. The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books, 1973. Franchi, Stefano, and Güven Güzeldere, eds. Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs. Cambridge, MA: MIT Press, 2005. Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus & Giroux, 2002. Gray, C.H., ed. The Cyborg Handbook. New York: Routledge, 1995. Grebowicz, Margret and Helen Merrick. Beyond the Cyborg: Adventures with Donna Haraway. New York: Columbia UP, 2013. Haraway, Donna. Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991. —. "From Cyborgs to Companion Species: People, Dogs and Technoculture." Sept. 16, 2003. Lecture presented by The Doreen B. Townsend Center for the Humanities. —. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. New York: Routledge, 1997. —. When Species Meet. Minneapolis: University of Minnesota Press, 2008. Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: University of Chicago Press, 1999. —. "Afterword: The Human in the Posthuman." Posthumanism 53 (2003): 134-137. —. My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press, 2005. —. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press, 2012. —. Writing Machines. Cambridge, MA: MIT Press, 2002. Halberstam, Judith and Ira Livingston, eds.
Posthuman Bodies. Bloomington: Indiana UP, 1995. Heidegger, Martin. "The Question Concerning Technology." Basic Writings. Ed. David Krell. New York: HarperCollins Publishers, 1993. Hudson, Laura. "The Political Animal: Species-Being and Bare Life." Mediations 23.2 (2008): 88-117. Hughes, James. Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. Westview Press, 2004. Kakoudaki, Despina. Anatomy of a Robot: Literature, Cinema, and the Cultural Work of Artificial People. Rutgers University Press, 2014. —. "Studying Robots, Between Science and the Humanities." The International Journal of the Humanities 5.8 (Dec. 2007): 165-182. Kang, Minsoo. Sublime Dreams of Living Machines: The Automaton in the European Imagination. Cambridge, MA: Harvard University Press, 2011. Kelly, Kevin. Out of Control: The New Biology of Machines. London: Fourth Estate, 1994. Lanier, Jaron. You Are Not a Gadget. New York: Alfred A. Knopf, 2010. —. Who Owns the Future? New York: Simon & Schuster, 2013. —. "The Myth of AI." Edge.org, 14 Nov. 2014. Latour, Bruno. We Have Never Been Modern. Trans. Catherine Porter. Cambridge, MA: Harvard UP, 1991. —. Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard UP, 1999. —. Politics of Nature: How to Bring the Sciences into Democracy. Trans. Catherine Porter. Cambridge, MA: Harvard UP, 2004. Latour, Bruno and S. Woolgar. Laboratory Life: The Social Construction of Scientific Facts. London: Sage Publications, 1979. Leist, Anton and Peter Singer, eds. J. M. Coetzee and Ethics: Philosophical Perspectives on Literature. New York: Columbia UP, 2010. Lévy, Pierre. Becoming Virtual: Reality in the Digital Age. Trans. Robert Bononno. New York and London: Plenum, 1998. Levy, Steven. Artificial Life. London: Jonathan Cape, 1992. Lin, Patrick, K. Abney and G.A. Bekey, eds. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press, 2012. Lyotard, Jean-François.
The Postmodern Condition: A Report on Knowledge. Trans. Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press, 1984. Mason, Paul. PostCapitalism: A Guide to Our Future. New York: Farrar, Straus and Giroux, 2015. Mazis, Glen A. Humans, Animals, Machines: Blurring Boundaries. Albany: State University of New York Press, 2008. McHugh, Susan. Animal Stories: Narrating Across Species Lines. Minneapolis: University of Minnesota Press, 2011. Menzel, Peter and Faith D'Aluisio. Robo sapiens: Evolution of a New Species. Cambridge, MA: MIT Press, 2000. Milburn, Colin. "Nanotechnology in the Age of Posthuman Engineering: Science Fiction as Science." Configurations 10 (2002): 261-295. Mindell, David. Our Robots, Ourselves: Robotics and the Myths of Autonomy. New York: Viking, 2015. Pettman, Dominic. Human Error: Species-Being and Media Machines. Minneapolis: University of Minnesota Press, 2011. —. Look at the Bunny: Totem, Taboo, Technology. Hants, UK: Zero Books, 2013. Richardson, Kathleen. An Anthropology of Robots and AI: Annihilation Anxiety and Machines. New York and London: Routledge, 2015. Riskin, Jessica, ed. Genesis Redux: Essays in the History and Philosophy of Artificial Life. Chicago: University of Chicago Press, 2007. Sharkey, Noel. "Killing Made Easy: From Joysticks to Politics." Robot Ethics: The Ethical and Social Implications of Robotics. Ed. Patrick Lin, Keith Abney, and George A. Bekey. Cambridge, MA: MIT Press, 2012. Singer, Peter. "Reflections." The Lives of Animals. Princeton, NJ: Princeton UP, 1999. Singer, Peter and Agata Sagan. "When Robots Have Feelings." The Guardian, 14 December 2009. Shukin, Nicole. Animal Capital: Rendering Life in Biopolitical Times. Minneapolis: University of Minnesota Press, 2009. Spivak, Gayatri. An Aesthetic Education in the Era of Globalization. Cambridge, MA: Harvard UP, 2012.
Somerville, Margaret. The Ethical Canary: Science, Society and the Human Spirit. Toronto and New York: Penguin Group, 2003. Smith, Wesley J. A Rat is a Pig is a Dog is a Boy: The Human Cost of the Animal Rights Movement. New York: Encounter Books, 2010. Squier, Susan Merrill. Liminal Lives: Imagining the Human at the Frontiers of Biomedicine. Durham, NC: Duke UP, 2004. Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf, 2017. Thacker, Eugene. After Life. Chicago: University of Chicago Press, 2010. —. The Global Genome: Biotechnology, Politics, and Culture. Cambridge, MA: MIT Press, 2005. —. Biomedia. Minneapolis: University of Minnesota Press, 2004. — and Natalie Jeremijenko. Creative Biotechnology: A User's Guide. Newcastle, UK: Locus+ Publishing, 2004. Truitt, E.R. Medieval Robots: Mechanism, Magic, Nature, and Art. Philadelphia: University of Pennsylvania Press, 2015. Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin, 2015. —. Alone Together: Why We Expect More From Technology and Less From Each Other. New York: Basic Books, 2011. —. The Second Self: Computers and the Human Spirit. Cambridge, MA: MIT Press, 2005. Voskuhl, Adelheid. Androids in the Enlightenment: Mechanics, Artisans, and Cultures of the Self. Chicago: University of Chicago Press, 2013. Waldby, Catherine. The Visible Human Project: Informatic Bodies and Posthuman Medicine. London: Routledge, 2000. Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right From Wrong. Oxford and New York: Oxford University Press, 2009. Wiener, Norbert. Cybernetics: or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1965. —. The Human Use of Human Beings: Cybernetics and Society. New York: Da Capo Press, 1988. Westlake, Stian, ed.
Our Work Here Is Done: Visions of a Robot Economy. London: Nesta, 2014. Online publication. Wolfe, Cary. What Is Posthumanism? Minneapolis: University of Minnesota Press, 2010. —. Animal Rites: American Culture, the Discourse of Species, and Posthumanist Theory. Chicago: University of Chicago Press, 2003. Wood, Gaby. Edison's Eve: A Magical History of the Quest for Mechanical Life. New York: Anchor, 2003. Wosk, Julie. My Fair Ladies: Female Robots, Androids and Other Artificial Eves. New Brunswick, NJ: Rutgers University Press, 2015. Yaszek, Lisa. The Self Wired: Technology and Subjectivity in Contemporary Narrative. New York: Routledge, 2002.
https://socialrobotfutures.com/resources/theory-social-science/
A central issue in biotechnology patenting is that some of these materials are naturally occurring substances or organisms, and the mere discovery of such biological material does not qualify as an invention. The complexity therefore lies in proving that the biotechnological product or process is novel rather than a natural biological substance, and the inventor must further show that the invention is the first in the world to attain a specific purpose.

INTRODUCTION

Biotechnology is the application of molecular and cellular biology to create or modify products and processes. It is the scientific discipline that focuses on manipulating the genetic material of living beings, or biologically active material, to improve the quality of human life, health, and other organisms. It involves harnessing bio-molecular and cellular processes, commonly using DNA techniques and analysis of genetic makeup. Modern biotechnology is significant in fields such as food, medicine, energy, and the environment. It also helps in developing technologies to combat rare and life-threatening diseases and to assist industrial processes.[2] While the criteria of patentability apply to inventions in all fields, the application of patent law to biotechnological inventions faces considerable opposition due to unique features that do not exist in other fields of technology.

[1] 4th year, BBA-LLB, 2018-23 (Hons), School of Law, Bennett University.
[2] Patent Expert Issues: Biotechnology (https://www.wipo.int/patents/en/topics/biotechnology.html)
The controversy surrounding these biotechnological patents is whether the essential preconditions for patentability exist or whether they merely qualify as discoveries. Another problem is that some of these biological materials are capable of reproduction, meaning that material patented today can morph and mutate into something different; the question, then, is what the scope of the patent would cover, i.e., only the original state in which the invention was made, or also future generations of the biological organism, including any mutations.

1. HISTORY OF BIOTECHNOLOGICAL PATENTS

The earliest patents in the field of biotechnology come from Europe, where a patent was claimed for a yeast-like material used in baking and making mashed potatoes. In 1873, the microbiologist Louis Pasteur patented a process of beer fermentation, acetic fermentation, and improved yeast-making.[3] Recombinant DNA (rDNA) technology, in which DNA molecules from multiple sources or organisms are combined, enhanced the understanding of the molecular and genetic expression of life forms. Following the first rDNA insertion into a host in 1973, scientists concluded that cellular processes held high potential for developing new products and processes usable in many industrial sectors. The development of rDNA technology first raised the issue of patenting a biotechnological invention.[4] The US Supreme Court decided this issue in Diamond v. Chakrabarty[5] in 1980. Ananda Chakrabarty, a microbiologist at General Electric's research facility in New York, had developed a genetically engineered bacterium that could break down components of crude oil, a capability not possessed by any naturally occurring bacterium, which made it potentially of significant value for clearing oil spills.
Chakrabarty filed a patent application containing process claims for producing the bacterium and product claims for the bacterium itself. The patent examiner allowed the process claims but rejected the product claims on the ground that living organisms are not patentable. After multiple appeals, the case reached the Supreme Court, which held that a live, human-made microorganism can constitute patentable subject matter as a composition of matter. The Court drew a distinction between a human-made bacterium and one occurring naturally: because this was a human-made, genetically engineered microbe created with the unique function of dissolving oil components,[6] it could be subject to patent law. In Europe, in the similar Rote Taube[7] decision, a patent application was denied on the ground of the difficulty of reproducing the invention, but the court affirmed that a process of animal breeding based on selection and cross-selection was patentable subject matter. The German Federal Patent Court (Bundespatentgericht) decision in Antamanid[8] drew a clear distinction between invention and discovery: naturally occurring substances can be patented if they have been isolated through a technical intervention by human beings, such that nature is incapable of accomplishing it by itself. The law nevertheless explicitly restricts the patenting of human body parts, and of processes modifying the genetic identity of human beings or cloning them. In 1973, the European Patent Office led the development of the European Patent Convention (EPC), based on the domestic laws of various European nations.

[3] The complications around patenting Biotechnology (https://www.labiotech.eu/in-depth/biotechnology-patents-intellectual-property/)
[4] First Recombinant DNA, https://www.genome.gov/25520302/online-education-kit-1972-first-recombinant-dna
[5] 447 U.S. 303 (1980)
In 1998, an EU Directive[9] was adopted to extend legal protection to biotechnological inventions, distinguishing between what is patentable and what is not. The directive met with opposition, however: the Netherlands sought its annulment, and Germany and France did not implement it until 2004.

2. INTERNATIONAL LANDSCAPE OF BIOTECHNOLOGY PATENTING

According to Article 27 of TRIPS, patents shall be available for inventions, whether products or processes, in all fields of technology, provided the invention involves an inventive step, is novel, and is capable of industrial application. In the United States, the novelty requirement means the inventor may not patent something already in the public domain, i.e., something previously patented or published elsewhere. In Europe, the requirement is that the invention has not been made available or presented to the public. In the case of isolated gene sequences whose function is unknown, even where there is structural similarity between the claimed gene and an existing gene sequence, the inventor can acquire a patent if a new function arising from the sequence is described. Article 3 of the EC Directive states that inventions are patentable if they concern a product consisting of or containing biological material, or a process by which biological material is produced, processed, or used.

[6] Patenting Of Microorganisms and Cells (https://www.princeton.edu/~ota/disk1/1989/8924/892406.PDF)
[7] BGH, Beschluss vom 27.03.1969 – X ZB 15/67
[8] BPatG, Beschluss vom 28.07.1977 – 16 W (pat) 64/75 "Naturstoffe."
[9] Directive 98/44/EC of 6 July 1998 on the legal protection of biotechnological inventions (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A31998L0044)
This essentially implies that biological material which is isolated from its natural environment or produced by means of a technical process can be patented even if it previously occurred in nature. Patent law does not seek to protect living organisms as such; rather, protection attaches to a substance, even a naturally occurring one, once it is isolated to yield a specific element obtainable only through technical processes such as identification, purification, engineering, and classification, or through reproduction outside the human body, techniques that can only be performed by human beings and cannot be achieved by nature itself. The Directive's treatment of patentable and non-patentable inventions or discoveries excludes the human body, at its various stages of formation and development, and the simple discovery of one of its elements, including the sequencing or partial sequencing of a gene. It further provides that an element isolated from the human body, or otherwise produced by a technical process, may be patentable even if its structure is identical to that of a natural element; the industrial application of the invented sequence or partial sequence must be disclosed in the patent application.

ESTs (Expressed Sequence Tags) and their patentability

Article 5 of the EC Directive raises the question of the patentability of ESTs. ESTs are short stretches of DNA nucleotides that form part of the structural component of DNA and carry information for cellular functions. They act as genetic markers or tags that help pick out genes from the DNA, and they aid in the search for and identification of genes involved in hereditary diseases and cancers, such as in Alzheimer's and cancer research.
Many researchers argue that ESTs constitute a discovery and thus cannot be patented; others counter that their preparation involves technical processes that do not occur in nature, so that ESTs may be inventions capable of industrial application. The EC has clarified that ESTs must meet the utility requirement: without a specification of their function, they will not be patented.[10] The patentability of ESTs has significant economic and social implications, which the law needs to settle accurately, holding them to clearer standards for granting patents.[11]

Ethical dilemmas concerning biotechnology patenting

The ethical concerns revolve around the exclusion from patentability of plant and animal varieties and of essentially biological processes for producing plants or animals. Modification of the human germ line, use of human embryos for industrial or commercial purposes, and the genetic modification of animals that causes suffering without any greater medical incentive or benefit are likewise restricted under the patent regime. Under Article 53 of the EPC (European Patent Convention), inventions contrary to ordre public are excluded. This term has a distinctive meaning in Europe: it denotes public order and security, and is expansive enough to cover maintaining order across the whole of society. Morality is an uneasy fit within patent law, which revolves around economics, competition, production, and business; the subjective and contextual character of morality makes it difficult to determine, and it can vary from the domestic laws of one country to another.
The dilemma arises from the consequences and probable harms of negligent, irresponsible, or monopolistic behavior in pursuing an invention, which can have widespread societal implications[12] if no barriers, restrictions, or ethical safeguards govern research activity. From the perspective of many scholars, however, the patent system is an improper instrument for determining questions of public order and morality or for scrutinizing the ethical practices involved in an invention: a patent granted on an invention cannot address the consequences of how the invention may be exploited. In their view, questions of ethics and morality should be dealt with by a different system, and countries have their own bioethics committees, such as the French Bioethics Committee. Hence inventions that are patented should nevertheless not, upon commercial use, cause public order issues or prove detrimental to the public order of society.[13] The requirement of ethics or morality exists not to hinder invention and scientific advancement but to promote them in a more sustainable and responsible manner. An impartial authority is therefore usually needed to discuss in detail any ethical issues surrounding inventions.

[10] Denver Law Review, Volume 68, Issue 2, Symposium on Intellectual Property Law; Lorance L. Greenlee, Biotechnology Patent Law: Perspective of the First Seven Years, Prospective on the Next Seventeen Years (1991) (https://digitalcommons.du.edu/dlr/vol68/iss2/)
[11] Fisher of Genes: Patentability of Expressed Sequence Tags; Volume 29, Number 3, Article 3.
[12] Soini, S., Aymé, S., Matthijs, G. et al., Patenting and licensing in genetic testing: ethical, legal, and social issues, Eur J Hum Genet (2008). https://doi.org/10.1038/ejhg.2008.37
For instance, Article 7 of the EC Directive entrusts the European Group on Ethics in Science and New Technologies with evaluating questions concerning patents on biotechnological inventions and the ethical principles surrounding them.[14] The issue that remains, however, is that there is no standard framework for raising or evaluating ethical concerns. Legal standards are to be supplemented by an ethical standard, but ethical standards are subjective, relative, and not easily ascertainable, which causes several problems in framing and deciding questions of morality or ethics, owing to the lack of uniformity and the reliance on ambiguous moral principles.[15]

Patenting of plant and animal varieties

Positions on patenting plants vary between America and Europe. America does not restrict the patenting of biological material, which is protected under the Plant Variety Protection Act (PVPA) or as utility patents. The improvement of plant varieties is protected in various jurisdictions through the International Convention for the Protection of New Varieties of Plants, 1991 (UPOV), and member countries of the TRIPS agreement are obligated under Article 27.3(b) to protect plant varieties either through the patent system, through a system specifically created for that purpose (a sui generis system), or through a combination of both.[16] Before the Doha Declaration, the review of Article 27.3(b) of TRIPS was extensively discussed and several concerns were raised, such as whether the TRIPS provisions extend to patenting animals or plants, along with ethical and moral issues and the extent to which life forms should be given patent protection at all. It also raised questions about the commercial utilization of traditional knowledge by countries other than those from which it originates. This led to an examination of the relationship between the Convention on Biological Diversity (CBD) and the TRIPS agreement, and of the protection of traditional knowledge. There have been informal discussions with the WTO Director-General on these subjects, and different countries have submitted proposals: India and other countries such as Brazil, Cuba, Colombia, Peru, and Thailand, backed by some of the African groups, suggested amending the TRIPS agreement to oblige patent applicants to disclose the country of origin of any genetic resources and traditional knowledge used in an invention. This further includes the concept of prior informed consent, adopted from the CBD, which allows equitable benefit-sharing.[17] The Global South has voiced several problems with the patenting of plant materials and varieties, as such patenting has serious implications for accessibility and availability and may hinder research and breeding: where there is a patent on a plant type, breeding and research upon it become difficult.

[13] See Tade Matthias Spranger, Ethical Aspects of Patenting Human Genotypes According to EC Biotechnology Directive, 31 INT'L REV. INDUS. PROP. COPYRIGHT L. 373, 376 (2000).
[14] https://www.wipo.int/wipo_magazine/en/2006/04/article_0003.html (Bioethics and Patent Law: The case law of Myriad)
[15] Minnesota Intellectual Property Review, Volume 3, Issue 2, 2002; Europe's Biotech Patent Landscape – Tade Matthias Spranger
[16] Michael Blakeney, Patenting of plant varieties and plant breeding methods, Journal of Experimental Botany, Volume 63, Issue 3, February 2012 (https://doi.org/10.1093/jxb/err368)
Further, where patents are granted on processes for producing a plant, then by extension of Article 28.1(b) of the TRIPS agreement, protection also covers the product directly obtained by those processes. This is of particular importance because plant-related patents then cover not only the processes but also the produce of the plants, such as food. Patent laws in some countries, such as China and Vietnam, follow the European Patent Convention in excluding plant varieties, rather than plants, from patentability. This exclusion is narrower than an exclusion of "plants," since transgenic plants and their parts and components, such as seeds, cells, and genetic material, can still be patented. Some countries' patent laws therefore specifically exclude plant varieties, while others, as in the case of Brazil, have a wider ambit that excludes plants as well as plant varieties from being patented.[18] In India, the Guidelines for Examination of Biotechnology Applications for Patent provide that pure hybrid seeds and plants produced under the specified processes constitute a biological process and hence are not patentable under section 3(j) of the Patents Act, 1970.

3. BIOTECHNOLOGY PATENTING: AN INDIAN PERSPECTIVE

India is one of the most favorable places for biotechnology and one of the major players in the biotechnology industry in the Asia-Pacific. The rapid growth of biotechnology in India is associated with rising demand in healthcare and pharmaceuticals and with government investment in research and development.

[17] TRIPS: Reviews, Article 27.3(b), and related issues, https://www.wto.org/english/tratop_e/trips_e/art27_3b_background_e.htm
[18] Correa, CM, Correa, JI, De Jonge, B., The status of patenting plants in the Global South, J World Intellect Prop (2020). https://doi.org/10.1111/jwip.12143
It is therefore important to assess the patent protection regime for biotechnology inventions in India. The patent system in India is governed by the Patents Act, 1970, which grants a patent to any invention that is not specifically excluded under section 3 of the Act and that passes the tests of novelty, inventive step, and industrial application. The criteria for granting patents on biotechnological inventions are of particular importance, as such inventions can be specifically excluded from patentable subject matter in India. Under section 3(c), living or non-living things that occur naturally are not patentable; DNA, RNA, and sequences isolated from living organisms are therefore not patentable, and naturally occurring microorganisms are mere discoveries rather than inventions. Genetically modified microorganisms and vaccines, by contrast, can be patented, as they require a significant amount of human intervention. Until 2002, patents were not granted for inventions involving living matter of natural or artificial origin, biological matter, or substances derived from such material. This position changed substantially with the Calcutta High Court's decision in Dimminaco AG v. Controller of Patents and Designs,[19] which concluded that a new and useful process constitutes an invention even if the result contains a living organism, provided it can be utilized as a commercially viable entity; the process leading to its production or manufacture would then qualify as an invention. In that case, the Controller of Patents had refused to grant a patent for a vaccine against bursitis infection in poultry, on the ground that the product contained a living organism.
This subsequently led to an amendment of the patent laws, the Patents (Amendment) Act, 2002, by which biochemical, biotechnological, and microbiological processes were brought within the patent protection regime. Section 3(b) bars inventions from being granted patents if they contradict public morality. Such inventions are regarded with a very high degree of caution, as they create space for misuse and commercial exploitation of plant or animal life, since biotechnology deals with tailoring and engineering living matter in a manner favorable to monetary gain. Examples include the cloning of human beings or animals, the genetic modification of animals that results in harm and suffering without any medical benefit, the preparation of seeds or genetic materials that can have a detrimental impact on the environment, and the utilization of human embryonic cells for commercial purposes. All of these raise considerable apprehension and are usually limited or restricted on grounds of public morality and ethics and of their adverse impact on the environment, people, and biodiversity. Further, section 3(j) restricts the patenting of plant and seed varieties: those that use conventional biological processes for fertilization and breeding cannot be patented. Genetically modified seeds or plants therefore cannot be patented, but processes for genetic modification can be. The Monsanto case[20] discussed this in detail: a patent claim based on a transgenic plant was rejected by the IPO, though the IPAB agreed with the applicant on the ground that the plant cell was the result of a process initiated by human intervention.

[19] Dimminaco A.G. v. Controller of Patents and Designs, (2002) I.P.L.R. 255 (Cal)
Although genetically modified seeds or plants cannot be patented, the processes involving such genetic modification can nevertheless be patented.21 Further, methods of treatment cannot be patented under Section 3(i) of the Act, which covers surgical, therapeutic, and diagnostic methods.22 There are additional formal and procedural requirements that must be satisfied for biotechnological patent applications. India joined the Budapest Treaty on the International Recognition of the Deposit of Microorganisms in 2001, and Section 10 of the Act was amended so that a patent applicant must deposit the biological material involved in the invention, and mention this in the patent application, if the material cannot be properly or accurately disclosed to the public and cannot be adequately described in accordance with the provisions of the patent laws. The material is deposited with an international depository under the Budapest Treaty; the depositories within India are located in Pune and Chandigarh.23 As a detailed discussion of ESTs has already been carried out, it should be noted that the Indian framework requires the patent application to disclose the sequence listing of any genes, nucleotide sequences, peptides, and amino acid sequences under Rule 9(1) of the Patents Rules, 2003, and this sequence listing is filed in electronic form. This helps the patent examiner to search a wide range of patent applications based on the sequences submitted by various applicants.

20 Monsanto Technology LLC v Controller of Patents and Design (2407/DELNP/2006)
21 Guidelines for Examination of Biotechnology Applications for Patent, March 2013; Office of the Controller General of Patents, Designs & Trademarks (available at https://ipindia.gov.in/writereaddata/Portal/IPOGuidelinesManuals/1_38_1_4-biotech-guidelines.pdf)
22 Patenting in Biotechnology – The Indian Scenario (https://www.iam-media.com/patenting-biotechnology-indian-scenario)
ESTs are allowed to be patented only if, in addition to satisfying the other requirements, they are useful and capable of industrial application. An EST that merely acts as a gene marker would not support a patent application, since it is incapable of industrial application. The patentability of ESTs therefore depends upon their feasibility and substantial usefulness in diagnosing and identifying a specific disease.

CONCLUSION

Biotechnology patenting is an evolving field that interacts with various other fields and industries, which contribute to its growth and to its impact on human lives and the environment. The contentious issue in biotechnology patenting is that the approach countries take in granting ownership and rights over such patents must be weighed against the environmental implications and the societal and ethical concerns they pose. Several developing countries have taken a stricter view, preventing such patents from being granted on the grounds of public morality and the commercial exploitation of traditional knowledge. An abundance of precedent has clarified the position on the patenting of living organisms, subject to the patentability requirements. While countries are rapidly moving to include a wide array of subject matter within biotechnological inventions, it is necessary to take a balanced approach that does not have a detrimental effect upon society and the environment, while still incentivizing and increasing opportunities for research and development. The legal framework for biotechnology patenting differs across geographical regions and blocs, and accounts for such differences owing to the practicalities of the political and economic climate within each region. However, most countries follow a general trend or practice when it comes to patenting living organisms modified by human intervention. The future of biotechnology will inevitably change or realign the patent systems of the world.
This will depend upon the research and investments made in the field, and might lead to a more uniform and aligned patent regime, both domestically and internationally.

23 India: Biotechnology Patent and Moral Related Issues (https://www.mondaq.com/india/patent/758110/biotechnology-patent-and-related-moral-issues)
https://ijalr.in/volume-2/issue-2/biotechonology-patenting-by-bhavana-j/
One of the fields continuously raising ethical questions is the fast-developing area of biotechnology. New possibilities constantly arise and lead to ethical questions. For a Lutheran ethic the challenge is twofold: it is both a challenge to determine the key ethical questions in general and a question of whether there is a specific Lutheran answer to these questions. This twofold challenge is also present when we deal with one of the more recent questions related to biotechnological research, namely the question of what ethical stance we should take concerning stem cell research.

Introduction

In recent years the question of the status of, and subsequent possible research on, stem cells has attracted considerable attention. This is due to both moral and political reasons. Politically the question has split the US and the European Council. In September 2006 President George Bush vetoed a bill on embryonic stem cell research which would have loosened the restrictions on federal funding. The arguments against the bill were primarily ethical, as he argued that the bill would cross a moral boundary: accepting embryonic stem cell research would not sufficiently take into account the moral status of the human embryo. The decision of the American president went against the Senate, which had passed the bill. A similar division could be seen in the European Council. An intense discussion along the same lines as in the US was conducted among the member states of the European Union. As in the US, the discussion focused on whether this research should receive funding, and again the crucial question was the ethical issue. Countries such as Germany, Austria, Poland, Lithuania, Luxembourg, Slovakia, and Malta - with Germany leading the opposition - opposed EU funding of research which could further the killing of human embryos.
However, in contrast to the US, it was decided in the EU that research on stem cells should receive funding - but with the significant adjustment that no funding will be given to research which destroys human embryos. The ethical discussion has focused on the status of the human embryo. This is due to the methods of obtaining stem cells, one of which implies the destruction of the human embryo. Therefore, the central issue in the debate has been the status of the human embryo. In the present article, this question will be discussed from a Lutheran perspective. However, before we turn to that, a few facts on stem cell research need to be outlined. The prospect of stem cell research is that these cells could in the future be used in various cell-based therapies and thereby further the treatment of diseases such as Parkinson's and Alzheimer's disease, spinal cord injury, stroke, burns, heart disease, diabetes, osteoarthritis, and rheumatoid arthritis.

The Debate on Lutheran Bioethics

When we turn to the question of whether there is a Lutheran ethical position, we are immediately confronted with the twofold challenge outlined at the beginning of this paper. Let us first turn to the first question - what are the key ethical questions? In the ethical debate on stem cells the focus is on the source of the desired stem cells. There does not seem to be any significant discussion of the potential benefits of stem cells in curing the diseases mentioned. The ethical concern lies with the two methods of obtaining stem cells. The method of obtaining stem cells from adults is less controversial. But the second method, where stem cells are taken from human embryos, is much more controversial. The ethical problem with this method is that it implies the destruction of human embryos. Therefore, the crucial ethical question is what moral status one ascribes to the human embryo.
If one regards the human embryo as having a dignity which raises a moral demand of care and respect, this method of obtaining stem cells implies such fundamental moral problems that it probably must be rejected on ethical grounds. However, if one sees the human embryo merely as something which could develop into a human being worthy of care and respect, the door seems to be at least slightly open to obtaining stem cells from human embryos. Having seen the key ethical question, we can now turn to the second challenge - i.e. the question of a specific Lutheran position. The question of the specific nature of a Lutheran bioethics has been dealt with in various articles in recent years. It has been argued that there are Lutheran approaches to bioethics, but not one specific approach which can claim to be the Lutheran bioethic. Further, the argument has been made that Lutheran bioethics rests on a practice of neighbour love in a secular context, where it to some extent corresponds with natural law ethics. This also entails that there is no Lutheran doctrine on the status of the human embryo. Lastly, it has been argued that a Lutheran bioethic implies an honouring of the bios, i.e. the body, which entails a renewed understanding of the distorted relation between generations in new reproductive technologies. I will not enter into a discussion of each of these approaches to Lutheran bioethics, but only point to the simple fact that there is evidently not one single approach which can be called Lutheran. Rather, various approaches all seem to qualify for this label. Obviously, there are approaches which would not be Lutheran. But in the present context it suffices to answer affirmatively - yes, there is such a thing as a Lutheran bioethic.

Dietrich Bonhoeffer's Contribution to Bioethics

The affirmative answer is, however, not confining. Rather, it opens up a variety of approaches which could all claim to be Lutheran.
My own approach in this debate leads me back to the Lutheran theologian and pastor Dietrich Bonhoeffer. Bonhoeffer is not well known for his contributions to bioethics. But this does not mean that we look in vain when reading his ethics. In his posthumously published work Ethics, Bonhoeffer actually deals quite extensively with issues which we today would call bioethical. In the section on "Natural Life" Bonhoeffer discusses notions such as the right to bodily life, self-murder, reproduction and developing life, and the freedom of bodily life. Bonhoeffer's views on these issues provide a helpful source when taking a stance on, for example, stem cell research. Bonhoeffer recognizes that the notion of the natural has fallen into disrepute in Protestant ethics. This has also led to confusion and a lack of guidance on many crucial questions, not to say a static proclamation of divine grace. Therefore, it is important for Bonhoeffer that the notion of the natural is retrieved. In Bonhoeffer the natural is defined christologically. This means that the natural is that which is directed toward the coming of Christ, in contrast to the unnatural, which closes itself off from the coming of Christ. Therefore, the natural is confirmed in Christ, and only through Christ's becoming human do we have the right to call people to natural life. "Natural life may not be understood simply as a preliminary stage toward the life with Christ; instead, it receives its confirmation only through Christ. Christ has entered into natural life. Only by Christ's becoming human does natural life become the penultimate that is directed toward the ultimate. Only through Christ's becoming human do we have the right to call people to natural life and to live it ourselves." This notion also leads to a protective understanding of natural life. The unnatural is that which distorts life and is contrary to life. This also implies for Bonhoeffer an argument for a right to bodily life.
Bodily life intrinsically bears the right to its preservation, just as bodily life is understood as an end in itself. This right to natural life also implies the protection of bodily life from arbitrary killing, understood here as every conscious killing of innocent life. With regard to the human embryo, Bonhoeffer makes a close link between his understanding of marriage and the subsequent development of human life. For Bonhoeffer, marriage is given with the creation of the first human beings and is, as such, rooted in the very beginnings of humanity. Closely related to marriage is the acknowledgement of the right of the life that will come into being within this marriage. This developing life has a right to life as an expression of God's will to create a human being, and a deliberate ending of this life is simply murder, according to Bonhoeffer. To kill the fruit in the mother's womb is to injure the right to life that God has bestowed on the developing life. Discussing the question of whether a human being is already present obscures the simple fact that, in any case, God wills to create a human being and that the life of this developing human being has been deliberately taken. And this is nothing but murder. Bonhoeffer is very explicit in his views on abortion. He may be more explicit than most of us feel comfortable with. I would be hesitant about using a word like murder, as it is deeply laden with emotional connotations. But the main idea is quite reasonable, as I see it. I see no reason why one should not maintain that the human embryo holds a dignity as a developing human life from the moment of fertilization. I agree with the positions arguing that there is no other equally decisive break in the process of development of the human embryo.

Stem Cell Research - Is There a Lutheran Ethical Standpoint?
Returning to the issue of stem cell research and the question of whether there is a Lutheran ethical standpoint, I will answer with three points:
1. Yes, there is such a thing as a Lutheran ethical standpoint. There is a widespread consensus among Lutheran ethicists that it does make sense to speak of a Lutheran ethical position.
2. Yes, it is possible to point to traits within the Lutheran tradition that could serve as a source of guidance when dealing with specific questions such as stem cell research. The challenge, however, is that a variety of approaches all qualify as Lutheran and seem to point in different directions. There is not one specific position which can claim to be Lutheran at the expense of other positions, even if there are positions that would not qualify as Lutheran.
3. The variety of Lutheran approaches to bioethics provides a fruitful basis for a lively dialogue on difficult issues.
My own approach is inspired by Bonhoeffer and his notion of natural life. Based on this source of inspiration, I would not defend obtaining stem cells from human embryos. This applies both to the "old" method, where the human embryo is destroyed, and to the "new" method, where the impact on the development of the human embryo remains uncertain. I do think, however, that there are good reasons for, and no significant ethical reasons against, doing further research on stem cells taken from adults using the existing methods. One of the significant challenges for Lutheran ethics in the years to come is to continue this discussion of the distinctive contribution to bioethics from a Lutheran perspective. This debate will not only be fruitful from a Lutheran perspective; it will also continue to hold importance for a wider audience, as the fruitful disagreement among Lutherans continuously furthers the dialogue and ethical insights.
https://elca.org/JLE/Articles/566
Sunday’s federal vote will include an issue that has divided political parties, and may give rise to some lively debate at home. It concerns whether to allow diagnostic screening of an embryo prior to its implantation in the womb. Pre-implantation diagnostics, which analyse the cell structure and genetic makeup of an embryo, will only be available to infertile couples and to those who suffer from hereditary illness. Presently around 80,000 children are born in Switzerland each year, and around 2,000 of these are the result of fertility treatment. If enacted, this extension of the law will allow doctors to fertilise more embryos and to select which ones to re-implant - or, more specifically, which ones to abandon or freeze. Scientists see this as progress, libertarians think it will bring Switzerland’s laws into line with the rest of Europe, and politicians cannot decide whether this is a pragmatic debate or an ethical one. Bear in mind that in a normal pregnancy there is a natural selection process within the womb whereby imperfect embryos tend not to make it to full term, so the issue here is mainly about how far we allow fertility specialists to intervene in this selection process by only inserting embryos with a fighting chance of making it to full term. The parliamentary library has released its documents of arguments for and against the vote. The argument in favour of medical intervention is built on three axes, which are that the new laws will:
- Optimise the process of assisted fertility, and so alleviate the additional suffering of couples who rely on medical intervention to have children.
- Improve the chances of pregnancy for infertile couples by choosing the best embryo to work with, thanks to the diagnostics.
- Reduce unnecessary risks for mother and child by avoiding multiple pregnancies (potential twins or more), since only one embryo will be inserted.
In addition, they note that this development is a fair expression of sound medical progress, and it will bring Swiss law into line with 27 of the 28 states of Europe. This will mean that Swiss couples no longer need to travel abroad to seek treatment. And since it can help identify possible hereditary illnesses (which are sometimes the cause of infertility) this diagnosis can avoid the real ethical dilemma of whether to terminate a pregnancy, or to potentially visit the challenges of hereditary illness on the next generation. The arguments against further medical intervention are largely based on the conviction that it is discriminatory and that it will erode basic human rights, which are enshrined in the Swiss constitution. They also maintain that Switzerland shouldn’t be required to bring its laws into alignment with the rest of Europe. The various opposition concerns, which don’t necessarily weave into a thread, are that: - The attempt to harvest large numbers of embryos may subject the future mother to excessive levels of artificial hormonal treatment. - The ability to harvest additional embryos will lead to an excessive number of fertilised embryos that are frozen and never put to use. - Diagnostic tests will discriminate between potential lives, rendering some useful and others useless. - The judgement that imperfect embryos are not worthy of life will further compromise the perceived dignity of handicapped children. - The screening process may deprive parents of a child that they would willingly have accepted even with a significant risk of handicap or illness. - This procedure will be insufficient to guarantee a healthy child, since not all illnesses can be detected with diagnostics, and many illnesses only arise during the course of pregnancy. - The additional costs will commercialise human life by requiring hopeful parents to pay even more for their fertility treatment, none of which is covered by medical aid. 
- This procedure falls outside the legitimate bounds of medicine, which is intended to prevent or treat illness, rather than to eradicate those lives that may carry such illnesses.
None of these arguments was sufficient to convince the Swiss Parliament or the Federal Council, both of which recommend voting in favour of the amendment. The new initiative gives rise to a legitimate ethical concern, since pre-implantation diagnostics may gradually be extended to more cases and more couples, putting us on a slippery slope towards genetic engineering. Some fear that it could even lead to eugenics, a social philosophy whereby mankind uses science to improve the quality of our species by discouraging reproduction by those with genetic defects and encouraging reproduction by those with desirable traits. For clarity, it’s important to note that, in its present form, the new legislation is only there to help infertile couples or to protect those who are clearly susceptible to a hereditary illness. It would not allow couples to choose the gender of their child or to favour embryos that promise particular physical advantages on account of their genetic signature. If this legislation carries, it will offer a far greater chance of success for infertile couples. It’s easy to underestimate the emotional and physical hardship that a fertility cycle entails. Not all procedures will lead to pregnancy, and according to the Federal Office of Health, 21.7% of those pregnancies are spontaneously terminated. One Lausanne couple underwent numerous unsuccessful assisted fertility attempts in Paris, London, Geneva and Cape Town before finally enjoying the success of conceiving a healthy and happy child five years ago. It seems unfair to deprive other hopeful parents of any advantage that may improve the chance of that pleasure.
https://lenews.ch/2015/06/11/swiss-vote-on-genetic-selection-of-embryos-playing-god-or-playing-fair/
In recent years, the well-established field of human anthropology has been put under scrutiny by the new data offered by science and technology. Scientific intervention in human life through organ transplants, euthanasia, genetic engineering, experiments connected to the genetic code and the genome, and various other biotechnologies has called ethical beliefs into question and created ethical dilemmas. These scientific inventions influence our views on birth and death and on the construction of the body and its technical reproducibility, and have problematized the concept of the human persona. The purpose of bioethics, the science of life, is to find new values and norms which will be valid for a multicultural society. Bioethics is today a well-respected topic of research that has brought together philosophers and experts to discuss the limits of science and medicine. The aim of this book is to merge the two fields of bioethics and law (or biolaw) through the literary text, taking into consideration the transformations that the concept of persona has undergone. The new meaning of the term ‘persona’ represents in fact the final point of a long-standing quest for man's sense of his own being and human dignity, and of his capacity to live in social interrelations. The volume presents a wide range of perspectives, comprising methodological approaches and legal and literary aspects. Setting out the implications of the postmodern condition for medical ethics, Troubled Bodies challenges the contemporary paradigms of medical ethics and reconceptualizes the nature of the field. Drawing on recent developments in philosophy, philosophy of science, and feminist theory, this volume seeks to expand familiar ethical reflections on medicine to incorporate new ways of thinking about the body and the dilemmas raised by recent developments in medical techniques.
These essays examine the ways in which the consideration of ethical questions is shaped by the structures of knowledge and communication at work in clinical practice, by current assumptions regarding the concept of the body, and by the social and political implications of both. Representing various perspectives including medicine, nursing, philosophy, and sociology, these essays look anew at issues of abortion, reproductive technologies, the doctor-patient relationship, the social construction of illness, the cultural assumptions and consequences of medicine, and the theoretical presuppositions underlying modern psychiatry. Diverging from the tenets of mainstream bioethics, Troubled Bodies suggests that, rather than searching for the correct "coherent perspective" from which to draw ethical principles, we must apprehend the complexity and diversity of the discursive systems within which we dwell. With its focus on the offshore randomized control trials of a Pre-Exposure Prophylactic pill (PrEP) for preventing HIV infection, the volume develops a sustained analysis of the complex, virtual and topological dimensions of the expectations, ethics and evidence that surround the innovation of PrEP. This book examines one of the most pressing cultural concerns that surfaced in the last decade - the question of the place and significance of the animal. This collection of essays represents the outcome of various conversations regarding the animal studies and shows multidisciplinarity at its very best, namely, a rigorous approach within one discipline in conversation with others around a common theme. The contributors discuss the most relevant disciplines regarding this conversation, namely: philosophy, anthropology, religious studies, theology, history of religions, archaeology and cultural studies. 
The first section, Thinking about Animals, explores philosophical, anthropological and religious perspectives, raising general questions about the human perception of animals and its crucial cultural significance. The second section explores the intriguing topic of the way animals have been used historically as religious symbols and in religious rituals. The third section re-examines some Christian theological and biblical approaches to animals in the light of current concerns. The final section extends the implications of traditional views about other animals to more specific ethical theories and practices. Facts101 is your complete guide to Orientation to the Counseling Profession, Advocacy, Ethics, and Essential Professional Foundations. In this book, you will learn topics such as Ethical and Legal Issues in Counseling, Theories of Counseling, The Counseling Process, and Counseling Microskills plus much more. With key features such as key terms, people and places, Facts101 gives you all the information you need to prepare for your next exam. Our practice tests are specific to the textbook and we have designed tools to make the most of your limited study time. Critical Interventions in the Ethics of Healthcare argues that traditional modes of bioethics are proving incommensurable with burgeoning biotechnologies and consequently, emerging subjectivities. Drawn from diverse disciplines, this volume works toward a new mode of discourse in bioethics, offering a critique of the current norms and constraints under which Western healthcare operates. The contributions imagine new, less paternalistic, terms by which bioethics might proceed - terms that do not resort to exclusively Western models of liberal humanism or to the logic of neoliberal economies. It is argued that in this way, we can begin to develop an ethical vocabulary that does justice to the challenges of our age. 
Bringing together theorists, practitioners and clinicians to present a wide variety of related disciplinary concerns and perspectives on bioethics, this volume challenges the underlying assumptions that continue to hold sway in the ethics of medicine and health sciences. This collection explores how the dominant risk agenda is being embedded across welfare policy and practice contexts in order to redefine social problems and those who experience them. Identities of 'risky' or 'safe', 'responsible' or 'irresponsible' are being increasingly applied, not only to everyday life but also to professional practice.
http://fodreport.net/full/moral-ordering-and-the-social-construction-of-bioethics/
CRISPR and other methods of gene editing have captured the public imagination, spurring countless lectures, articles, and think pieces about how this technology can shape humanity. Many of these conversations are concerned primarily with the seemingly boundless “potential” of human gene editing to both treat diseases in existing patients and alter the genes of future children and generations. The ethical implications of using technology to permanently alter the human genome are evident and often mentioned. But many discussions downplay the serious societal and ethical implications of human gene editing when they fail to assess it within the context of existing assisted reproductive technologies (ARTs) and the fertility industry. This brief extends public and policy discussions that contextualize human gene editing as an ART with scientific limitations and grave and irreversible social and political consequences. This brief argues that the implications of human germline editing should be understood in the context of ARTs and the for-profit fertility industry—one that reproduces and exacerbates the health and social disparities created by already existing reproductive technologies. There are important synergies that need to be considered. ARTs allow people to have children. Yet, germline editing would let them control what kinds of children to have. This entanglement suggests that the debate concerning the ethical implications of using CRISPR in reproduction must be situated within existing conversations regarding ARTs. Viewing CRISPR through this lens allows us to critique the goals of germline editing and to better understand how this new technology might not only exacerbate existing social and ethical dilemmas around ARTs, but also create entirely new challenges.
https://belonging.berkeley.edu/engineering-for-perfection
A rare conscience vote was afforded MPs in Federal Parliament yesterday on the controversial Mitochondrial Donation Law Reform (Maeve’s Law), and the bill passed 92 votes to 29. The bill will legalise radical genetic manipulation and, perforce, amend the Prohibition of Human Cloning for Reproduction Act 2002 and the Research Involving Human Embryos Regulations 2017. The Australian Christian Lobby's National Director of Politics, Wendy Francis, said, “It begs the question, is it ever ethical to advance scientific knowledge by means of experimentation on unwilling human subjects? "For sound historical reasons this has always been a scientific ‘no go’ area. Even for those who would argue that the case differs materially when the human subjects in question are embryos created for no other purpose, the ethical weakness of this position is abundantly clear. “It is sadly ironic that if this bill had been in place, Maeve (the precious much-loved child for whom this bill is named) would not have been allowed to live.” Eminent ethicists have expressed significant concerns regarding the experimental nature of the process and its long-term consequences. Dr Megan Best, Director of Ethicentre Ltd and Associate Professor of Bioethics, Institute for Ethics and Society, The University of Notre Dame Australia, said, “Mitochondrial donation involves altering the human germline, that is, altering genetic material that is inherited by the next generation. Allowing mitochondrial transfer to proceed in Australia defies the international call for a moratorium on human germline manipulation.” Future children born of this process (mtDNA transfer) would have two biological mothers and one biological father. Wendy Francis continued, “Creating an embryo with three biological parents crosses a new frontier in human experimentation.
The physiological and ethical implications of having three biological parents should not be dismissed lightly.” The Australian Christian Lobby urges Federal MPs to make significant amendments to this legislation at the Senate stage.
https://www.acl.org.au/mr_wf_controversial_mauves_law_passes_in_house_of_representatives
Welcome to the 11th Congress of the European Society for Agricultural and Food Ethics, Uppsala, Sweden, September 11-14, 2013! The congress theme is The Ethics of Consumption: The Citizen, The Market, and The Law. The deadline for submissions has passed, but you are still most welcome to participate in the conference, and we are still looking for chairs for some sessions. Please go to the registration page for more information, contact details, or to sign up.

Confirmed Keynote Speakers
- Philip Cafaro, Colorado State University, USA
- Dorothea Kleine, University of London, UK
- Mara Miele, Cardiff University, UK
- Soraj Hongladarom, Chulalongkorn University, Bangkok, Thailand

EurSafe 2013 is a forum for discussion of ethical issues at the intersection of the social, economic and legal aspects of the consumption of food and agricultural products. The congress has three main sub-themes connected to the overall issue of ethical consumption; however, general contributions to agricultural and food ethics are also welcome. While arguably remarkably efficient, the present system of agriculture and food production involves a number of negative consequences for human health, the environment, and animal welfare. Great challenges lie ahead as we face population growth and climate change. It is frequently argued that one of the keys to meeting those challenges lies in changing consumption patterns, for instance by reducing meat consumption, switching to organic or fair trade products, boycotting or 'buycotting' certain products, or consuming less overall. There is considerable disagreement regarding how to bring this about, whose responsibility it is, and even whether it is desirable. Is it a question of political initiatives, of the virtues and vices of individual consumers in the developed world, or of something else?

The Citizen: To an increasing degree, individuals' actions and choice of lifestyle have been put into focus rather than political and collective solutions.
This raises questions such as: What roles and responsibilities regarding food consumption are related to being a citizen and a consumer respectively? Is there any significant difference between a 'food consumer' and a 'food citizen'? Do we need to contextualize our expectations of the ethical consumer/citizen (e.g. with respect to culture, tradition, religion), or would we rather opt for a universal 'food citizen' codex?

The Market: Ethical consumption proceeds in the midst of economic realities such as free trade and its barriers, agricultural subsidies, consumer expectations and preferences, labelling, 'glocalness' and so-called organic alternatives. Given the two main trends in food marketing (globalisation on the one hand, and the drive for localisation and regional or traditional food markets on the other), the issue of ethical consumption becomes closely related to understanding the content and impacts of the tension between a variety of interests and ethical aspects. What is the role of retailers, producers, transport chains and waste processing in this tension? How are we to create efficient communication built on trust at the junction of economic factors, politics and human action as regards food consumption? What is the contribution of schemes like CSR (Corporate Social Responsibility), certification systems and fair trade to a dialogue between these actors?

The Law: In the light of an increasing striving for ethical consumption, the role, limits and possibilities of legislation are important to discuss. Traditionally, legislation sets a minimum level, partly out of respect for cultural differences. However, globalization has promoted a deregulation of the food market, and therefore both national and international institutions find it difficult to develop intervention tools that can reorient the food market. In the case of the European Union, food legislation no longer sets a minimum level as it once did.
Several factors have driven a harmonization at maximum levels that aims at free trade and has significant collateral consequences. Is the role of legislation rather to drive a change in consumer and market behaviour? If so, how is this to be balanced with freedom of choice, but also with global food security, animal welfare and climate change mitigation? To what extent can legislation mirror 'the' public view as it changes over time? How should we value public participation in the development of food policies and legislation? Who defines what constitutes 'good food legislation' and its scope: for whom, and based on what values? If you have any questions, please contact the conference secretariat. The website will be updated gradually with further details: www.slu.se/eursafe2013 Kind regards,
https://www.slu.se/en/Collaborative-Centres-and-Projects/ethics/eursafe-2013/eursafe-2013/
The Case against Perfection: Ethics in the Age of Genetic Engineering, hereafter referred to as The Case against Perfection, written by Michael J. Sandel, builds on a short essay featured in The Atlantic Monthly magazine in 2004. Three years later, Sandel transformed his article into a book, keeping the same title but expanding upon his personal critique of genetic engineering. The purpose of Sandel's book is to articulate the sources of what he considers to be widespread public unease related to genetic engineering that changes the course of natural development.
Format: Articles Subject: Publications, Ethics

Ethics of Fetal Surgery
Surgeons sometimes operate on the developing fetuses in utero of pregnant women as a medical intervention to treat a number of congenital abnormalities, operations that have ethical aspects. A. William Liley performed the first successful fetal surgery, a blood transfusion, in New Zealand in 1963 to counteract the effects of hemolytic anemia, or Rh disease.
Format: Articles Subject: Ethics

Katharine McCormick (1876-1967)
Katharine Dexter McCormick, who contributed the majority of funding for the development of the oral contraceptive pill, was born to Josephine and Wirt Dexter on 27 August 1875 in Dexter, Michigan. After growing up in Chicago, Illinois, she attended the Massachusetts Institute of Technology (MIT), where she graduated in 1904 with a BS in biology. That same year, she married Stanley McCormick, the son of Cyrus McCormick, inventor and manufacturer of the mechanized reaper.
Format: Articles Subject: People, Ethics, Reproduction

Breast Augmentation Techniques
Breast augmentation involves the use of implants or fat tissue to increase patient breast size. As of 2019, breast augmentation is the most popular surgical cosmetic procedure in the United States, with annual patient numbers increasing by 41 percent since the year 2000.
Since the first documented breast augmentation by surgeon Vincenz Czerny in 1895, and later the invention of the silicone breast implant in 1963, surgeons have developed the procedure into its own specialized field of surgery, creating various operating techniques for different results.
Format: Articles Subject: Technologies, Processes, Reproduction, Ethics

Ethics of Designer Babies
A designer baby is a baby genetically engineered in vitro for specially selected traits, which can vary from lowered disease-risk to gender selection. Before the advent of genetic engineering and in vitro fertilization (IVF), designer babies were primarily a science fiction concept. However, the rapid advancement of technology before and after the turn of the twenty-first century makes designer babies an increasingly real possibility.
Format: Articles Subject: Ethics, Reproduction

In re Agent Orange Product Liability Litigation (1979-1984)
In the legal case In re Agent Orange Product Liability Litigation of the early 1980s, US military veterans of the Vietnam War sued the US chemical companies that had produced the herbicide Agent Orange, and those companies settled with US veterans out of court. Agent Orange contains dioxin, a chemical later shown to disrupt the hormone system of the body and to cause cancer. As veterans returned to the US from Vietnam, scientists further confirmed that exposure to Agent Orange caused a variety of cancers in veterans and developmental problems in the veterans' children.
Format: Articles

Adolescent Family Life Act (1981)
The 1981 Adolescent Family Life Act, or AFLA, is a US federal law that provides federal funding to public and nonprofit private organizations to counsel adolescents to abstain from sex until marriage. AFLA was included under the Omnibus Reconciliation Act of 1981, which the US Congress signed into law that same year.
Through the AFLA, the US Department of Health and Human Services, or HHS, funded a variety of sex education programs for adolescents to address the social and economic ramifications associated with pregnancy and childbirth among unmarried adolescents.
Format: Articles Subject: Legal, Outreach, Ethics, Reproduction

Social Implications of Non-Invasive Blood Tests to Determine the Sex of Fetuses
By 2011, researchers in the US had established that non-invasive blood tests can accurately determine the gender of a human fetus as early as seven weeks after fertilization. Experts predicted that this ability may encourage the use of prenatal sex screening tests by women interested in knowing the gender of their fetuses. As more people begin to use non-invasive blood tests that accurately determine the sex of the fetus at seven weeks, many ethical questions arise pertaining to regulation, the consequences of gender-imbalanced societies, and altered meanings of the parent-child relationship.
Format: Articles Subject: Reproduction, Ethics, Legal

Ricardo Hector Asch (1947- )
Ricardo Hector Asch was born 26 October 1947 in Buenos Aires, Argentina, to a lawyer and French professor, Bertha, and a doctor and professor of surgery, Miguel. Asch's middle-class family lived among the largest Jewish community in Latin America, where a majority of males were professionals. After his graduation from National College No. 3 Mariano Moreno in Buenos Aires, Asch worked as a teaching assistant in human reproduction and embryology at the University of Buenos Aires School of Medicine where he received his medical degree in 1971.
Format: Articles

Marie Charlotte Carmichael Stopes (1880-1958)
Marie Charlotte Carmichael Stopes was born in Edinburgh, Scotland, on 15 October 1880 to Charlotte Carmichael Stopes, a suffragist, and Henry Stopes, an archaeologist and anthropologist.
A paleobotanist best known for her social activism in the area of sexuality, Stopes was a pioneer in the fight to gain sexual equality for women. Her activism took many forms including writing books and pamphlets, giving public appearances, serving on panels, and, most famously, co-founding the first birth control clinic in the United Kingdom.
Format: Articles Subject: People, Ethics, Reproduction

Bowen v. Kendrick (1988)
On 29 June 1988, in Bowen v. Kendrick, the US Supreme Court ruled in a five-to-four decision that the 1981 Adolescent Family Life Act, or AFLA, was constitutional. Under AFLA, the US government could distribute federal funding for abstinence-only sexual education programs, oftentimes given to groups with religious affiliations. As a federal taxpayer, Chan Kendrick challenged the constitutionality of AFLA, claiming it violated the separation of church and state.
Format: Articles

China's One-Child Policy
In September 1979, China's Fifth National People's Congress passed a policy that encouraged one-child families. Following this decision from the Chinese Communist Party (CCP), campaigns were initiated to implement the One-Child Policy nationwide. This initiative constituted the most massive governmental attempt to control human fertility and reproduction in human history. These campaigns prioritized reproductive technologies for contraception, abortion, and sterilization in gynecological and obstetric medicine, while downplaying technologies related to fertility treatment.
Format: Articles Subject: Ethics, Legal, Reproduction

Medical Vibrators for Treatment of Female Hysteria
During the late 1800s through the early 1900s, physicians administered pelvic massages involving clitoral stimulation by early electronic vibrators as treatments for what was called female hysteria.
Until the early 1900s, physicians used female hysteria as a diagnosis for women who reported a wide range of complaints and symptoms unexplainable by any other diagnosis at the time. According to historian Rachel Maines, physicians provided pelvic massages for thousands of years to female patients without it being considered erotic or sexually stimulating.
https://embryo.asu.edu/search?text=The%20Cell%20in%20Development%20and%20Inheritance&amp%3Bamp%3Bf%5B0%5D=dc_subject_embryo%3A143&amp%3Bamp%3Bpage=2&amp%3Bf%5B0%5D=dc_subject_embryo%3A6922&f%5B0%5D=dc_description_type%3A35&f%5B1%5D=dc_subject_embryo%3A52&page=1
131. Here I would recall the balanced position of Saint John Paul II, who stressed the benefits of scientific and technological progress as evidence of “the nobility of the human vocation to participate responsibly in God’s creative action”, while also noting that “we cannot interfere in one area of the ecosystem without paying due attention to the consequences of such interference in other areas”.109 He made it clear that the Church values the benefits which result “from the study and applications of molecular biology, supplemented by other disciplines such as genetics, and its technological application in agriculture and industry”.110 But he also pointed out that this should not lead to “indiscriminate genetic manipulation”111 which ignores the negative effects of such interventions. Human creativity cannot be suppressed. If an artist cannot be stopped from using his or her creativity, neither should those who possess particular gifts for the advancement of science and technology be prevented from using their God-given talents for the service of others. We need constantly to rethink the goals, effects, overall context and ethical limits of this human activity, which is a form of power involving considerable risks. 132. This, then, is the correct framework for any reflection concerning human intervention on plants and animals, which at present includes genetic manipulation by biotechnology for the sake of exploiting the potential present in material reality. The respect owed by faith to reason calls for close attention to what the biological sciences, through research uninfluenced by economic interests, can teach us about biological structures, their possibilities and their mutations. Any legitimate intervention will act on nature only in order “to favour its development in its own line, that of creation, as intended by God”.112 133. 
It is difficult to make a general judgement about genetic modification (GM), whether vegetable or animal, medical or agricultural, since these vary greatly among themselves and call for specific considerations. The risks involved are not always due to the techniques used, but rather to their improper or excessive application. Genetic mutations, in fact, have often been, and continue to be, caused by nature itself. Nor are mutations caused by human intervention a modern phenomenon. The domestication of animals, the crossbreeding of species and other older and universally accepted practices can be mentioned as examples. We need but recall that scientific developments in GM cereals began with the observation of natural bacteria which spontaneously modified plant genomes. In nature, however, this process is slow and cannot be compared to the fast pace induced by contemporary technological advances, even when the latter build upon several centuries of scientific progress. 134. Although no conclusive proof exists that GM cereals may be harmful to human beings, and in some regions their use has brought about economic growth which has helped to resolve problems, there remain a number of significant difficulties which should not be underestimated. In many places, following the introduction of these crops, productive land is concentrated in the hands of a few owners due to “the progressive disappearance of small producers, who, as a consequence of the loss of the exploited lands, are obliged to withdraw from direct production”.113 The most vulnerable of these become temporary labourers, and many rural workers end up moving to poverty-stricken urban areas. The expansion of these crops has the effect of destroying the complex network of ecosystems, diminishing the diversity of production and affecting regional economies, now and in the future. In various countries, we see an expansion of oligopolies for the production of cereals and other products needed for their cultivation. 
This dependency would be aggravated were the production of infertile seeds to be considered; the effect would be to force farmers to purchase them from larger producers. 135. Certainly, these issues require constant attention and a concern for their ethical implications. A broad, responsible scientific and social debate needs to take place, one capable of considering all the available information and of calling things by their name. It sometimes happens that complete information is not put on the table; a selection is made on the basis of particular interests, be they politico-economic or ideological. This makes it difficult to reach a balanced and prudent judgement on different questions, one which takes into account all the pertinent variables. Discussions are needed in which all those directly or indirectly affected (farmers, consumers, civil authorities, scientists, seed producers, people living near fumigated fields, and others) can make known their problems and concerns, and have access to adequate and reliable information in order to make decisions for the common good, present and future. This is a complex environmental issue; it calls for a comprehensive approach which would require, at the very least, greater efforts to finance various lines of independent, interdisciplinary research capable of shedding new light on the problem. 136. On the other hand, it is troubling that, when some ecological movements defend the integrity of the environment, rightly demanding that certain limits be imposed on scientific research, they sometimes fail to apply those same principles to human life. There is a tendency to justify transgressing all boundaries when experimentation is carried out on living human embryos. We forget that the inalienable worth of a human being transcends his or her degree of development. In the same way, when technology disregards the great ethical principles, it ends up considering any practice whatsoever as licit. 
As we have seen in this chapter, a technology severed from ethics will not easily be able to limit its own power.

81 John Paul II, Address to Scientists and Representatives of the United Nations University, Hiroshima (25 February 1981), 3: AAS 73 (1981), 422.
82 Benedict XVI, Encyclical Letter Caritas in Veritate (29 June 2009), 69: AAS 101 (2009), 702.
83 Romano Guardini, Das Ende der Neuzeit, 9th ed., Würzburg, 1965, 87 (English: The End of the Modern World, Wilmington, 1998, 82).
84 Ibid.
85 Ibid., 87-88 (The End of the Modern World, 83).
86 Pontifical Council for Justice and Peace, Compendium of the Social Doctrine of the Church, 462.
87 Romano Guardini, Das Ende der Neuzeit, 63-64 (The End of the Modern World, 56).
88 Ibid., 64 (The End of the Modern World, 56).
89 Cf. Benedict XVI, Encyclical Letter Caritas in Veritate (29 June 2009), 35: AAS 101 (2009), 671.
90 Ibid., 22: p. 657.
91 Apostolic Exhortation Evangelii Gaudium (24 November 2013), 231: AAS 105 (2013), 1114.
92 Romano Guardini, Das Ende der Neuzeit, 63 (The End of the Modern World, 55).
93 John Paul II, Encyclical Letter Centesimus Annus (1 May 1991), 38: AAS 83 (1991), 841.
94 Cf. Love for Creation. An Asian Response to the Ecological Crisis, Declaration of the Colloquium sponsored by the Federation of Asian Bishops’ Conferences (Tagatay, 31 January-5 February 1993), 3.3.2.
95 John Paul II, Encyclical Letter Centesimus Annus (1 May 1991), 37: AAS 83 (1991), 840.
96 Benedict XVI, Message for the 2010 World Day of Peace, 2: AAS 102 (2010), 41.
97 Id., Encyclical Letter Caritas in Veritate (29 June 2009), 28: AAS 101 (2009), 663.
98 Cf. Vincent of Lerins, Commonitorium Primum, ch. 23: PL 50, 688: “Ut annis scilicet consolidetur, dilatetur tempore, sublimetur aetate”.
99 No. 80: AAS 105 (2013), 1053.
100 Second Vatican Ecumenical Council, Pastoral Constitution on the Church in the Modern World Gaudium et Spes, 63.
101 Cf. John Paul II, Encyclical Letter Centesimus Annus (1 May 1991), 37: AAS 83 (1991), 840.
102 Paul VI, Encyclical Letter Populorum Progressio (26 March 1967), 34: AAS 59 (1967), 274.
103 Benedict XVI, Encyclical Letter Caritas in Veritate (29 June 2009), 32: AAS 101 (2009), 666.
104 Ibid.
105 Ibid.
106 Catechism of the Catholic Church, 2417.
107 Ibid., 2418.
108 Ibid., 2415.
109 Message for the 1990 World Day of Peace, 6: AAS 82 (1990), 150.
110 Address to the Pontifical Academy of Sciences (3 October 1981), 3: Insegnamenti 4/2 (1981), 333.
111 Message for the 1990 World Day of Peace, 7: AAS 82 (1990), 151.
112 John Paul II, Address to the 35th General Assembly of the World Medical Association (29 October 1983), 6: AAS 76 (1984), 394.
113 Episcopal Commission for Pastoral Concerns in Argentina, Una tierra para todos (June 2005), 19.
https://www.cssr.org.au/justice_matters/dsp-default.cfm?loadref=648
Employing a discourse analytical approach, this book focuses on the under-researched strategy of humour to illustrate how discursive performances of leadership are influenced by gender and workplace culture. Our findings illustrate that in addition to considering the socio-cultural context, workplace culture and the norms of communities of practice, the specific interactional context is also of crucial importance for an understanding of how leadership and gender are performed. Far from being a superfluous discursive strategy employed to distract from the transactional aspects of workplace talk, humour performs a range of important functions in a workplace context. In this context, leaders and managers are inevitably significant and influential participants, with a crucial impact on workplace culture. Drawing on authentic discourse data, the author focuses on humour - a particularly versatile discursive strategy. Drawing on a corpus of about 100 emails collected in an academic setting, we explore how humour is used in workplace emails. It will be of particular interest to students of professional and workplace communication, intercultural communication and intercultural pragmatics. It illustrates how these areas of interest are interlinked with each other by analysing several examples of authentic interaction. This discourse strategy not only constitutes a prime means for identity construction but also assists the leaders in achieving their various workplace objectives. To explore the leaders' effect on the culture of their department, this investigation of leadership change examines ways in which the leaders manage regular workplace meetings (communication with a predominantly transactional orientation) and how they contribute to workplace humour (more relationally oriented behaviour).
The Reader will encourage you to positively problematize the field and reflect on current debates and issues. This crucial role of culture is particularly apparent in a workplace setting: norms regarding appropriate ways of integrating the competing discourses of power and politeness at work are strongly influenced by wider cultural expectations. This seems to be especially true for email, which in many workplaces is the preferred medium for communicating transactional as well as relational topics. She illustrates that an analysis of leadership discourse may offer interesting new insights into the complexities of leadership performance. It will also serve the reflective practitioner as a personal reference when occupying or aspiring towards leadership roles in schools, colleges and other educational organisations. In spite of the increasing globalisation of the work domain and the mobilization of the workforce (Wong et al.). The negotiation of deontic authority happens in decision-making phases of these consultations, and through laughter the clients affirm their right and sufficient knowledge to make a decision. This chapter examines (im)politeness in workplace interaction. Although the book's main approach to professional communication is an applied linguistics one, it also draws on insights from a range of other disciplines.
In , Gail Fairhurst, who is known worldwide for her discursive leadership theory (see Fairhurst, 2007), writes, "For those who feel passionately that a psychological lens is not the only way to view leadership — and that an equally viable lens positions leadership as relationally constructed in communication and through discourse, this is the book for you." Discursive leadership: In conversation with leadership psychology. Far from being a superfluous strategy that distracts from business, humour performs a myriad of important functions in the workplace context. A comparison of the ways in which the leaders use teasing humour indicates substantial pragmatic differences in their choice of teasing style. Incisive analyses of leadership and other stereotypes are a focus in this book, but certainly not the only gems that readers will soon discover. International Encyclopedia of Linguistics, Vol. The book is divided into eight chapters, each dealing with a specific area of professional communication, such as genres of professional communication, identities in the workplace, and key issues of gender, leadership and culture. (Hearn and Parkin, 1988; Sinclair, 1998.) This paper addresses this gap by exploring some of the ways through which professionals are required to construct and negotiate their various identities in increasingly multicultural contexts where notions of culture may become particularly salient. Although gender is an important issue in many Asian countries where women often face serious discriminatory practices, this topic is notoriously under-researched from a socio-linguistic perspective. Exploring Professional Communication provides an accessible overview of the vast field of communication in professional contexts from an applied linguistics perspective. We aim to address this issue by conducting an in-depth case study of leadership and gender in Hong Kong.
While both leaders claim teamwork as an important cultural value for their teams, their respective instantiations of teamwork are rather different. Koller 'Yes Then I Will Tell You Maybe a Little about the Procedure'- Constructing Professional Identity Where There is Not Yet a Profession: The Case of Business Coaching E-M. This book will be an essential resource for providers and students of postgraduate level courses in educational leadership and management, as well as those involved in undertaking professional development programmes. Aimed at both youth work students studying for their professional qualification and practising managers, Critical Issues in Youth Work Management encourages critical thinking about what management in youth work is and what it can be. And due to its versatile nature, it is particularly suitable for expressing and responding to the complexities of leadership performance. Developing distributed leadership: Leadership emergence in a sporting context Nick Wilson. Part one deconstructs leadership, providing a critical review and analysis of the key debates within leadership; part two reconstructs leadership, revealing the three dominant discourses of the Controller, Therapist and Messiah, and Eco-leadership discourse. The chapter then illustrates the analysis of (im)politeness in workplace discourse by focusing on transitions at different levels, first involving people moving from country to country e. This article aims to explore narratives as sites for identity construction by employing the concept of positioning to analyse some of the discursive processes through which identity construction is accomplished in institutional contexts.
The analysis also demonstrates that multiple femininities extend beyond normative expectations, such as enacting relational practice (Fletcher 1999), to embrace more contestive and parodic instantiations of femininity in workplace talk. Results indicate that the most salient lexical items refer to actors, strategic actions and technologies. This paper explores the discursive processes of legitimizing leadership claims in the context of the nuclear proliferation crisis. Drawing on a corpus of more than 80 hours of authentic workplace discourse and follow-up interviews conducted with professionals, we explore how expatriates who work in Hong Kong with a team of local Chinese construct, negotiate and combine aspects of their professional and cultural identities in their workplace discourse.
http://aimtheory.com/leadership-discourse-at-work-schnurr-stephanie-dr.html
In the context of neoliberal multiculturalism, indigenous activists face a fundamental dilemma. While they organize as indigenous peoples to negotiate and demand from states new terms of citizenship, activists recognize that new forms of accommodation for such demands exist within state institutions. However, indigenous organizations also discover that certain demands exceed these new spaces of participation. I argue that territorial autonomy is one such demand because it challenges the existing power imbalances between indigenous peoples and the state. Not surprisingly, territorial autonomy is a common feature of many emerging forms of indigenous activism in contemporary Latin America. Based on new understandings of “indigenous territories” and “autonomy,” indigenous collective action uses the language of territorial autonomy to challenge the framework and functioning of neoliberal multiculturalism at the local level. By studying neoliberal multiculturalism as a form of government over indigenous populations at a local level, this study engages with broader perspectives that address state formation as a cultural process that involves the formation and control of citizens’ subjectivities through specific forms of citizenship. This approach to indigenous activism allows me to examine the complexity of ongoing political negotiations between indigenous subjects and the neoliberal state. Compliance with the neoliberal parameters of citizenship continues to be sought by post-Washington Consensus states; however, demands for territorial autonomy and the practice of land reoccupations remind us that indigenous activism offers a legitimate alternative form of politics. This is a politics aimed at taking back what has been lost, or is perceived as lost, by a group via collective action. In this study, I call this form of politics “redemptive.” In exploring redemptive politics, my study privileges the local level of indigenous activism.
Through a study of the Mapuche, the indigenous peoples of southern Argentina, I argue that the local level is a fundamental space to address the exchanges, negotiations, and conflict between indigenous peoples and the state, especially in cases where they constitute a minority of the national population. To understand the meaning and impact of new kinds of Mapuche activism and new forms of indigenous collective identity, this dissertation addresses three dimensions of indigenous politics: the configuration of indigenous collective identities and their translation into political organizations; the configuration and consolidation of such identities as the result of ongoing resistance, negotiations, and accommodation with the state; and the conflicts around demands for territorial autonomy that often result in the criminalization and rejection of indigenous demands by the state because they exceed the limits of indigenous citizenship under neoliberal multiculturalism. All three dimensions are studied privileging the local level, which this study argues is fundamental to address in the contexts in which indigenous peoples are considered a minority of the national population. Thus, I claim that the study of indigenous politics must privilege the ways in which new forms of activism negotiate and enter into conflict with the states against the background of neoliberal multiculturalism, a cultural project of governing indigenous subjects that is compatible with the expansion of global capitalism and the reach of modern state institutions. This thesis relies on a field study of contemporary indigenous mobilization in Argentina through which the Mapuche have become politically organized. 
Through an analysis of the ways in which Mapuche activists organize in a particular locality, the province of Neuquén in southern Argentina, this dissertation contributes to the theoretical understanding of collective identity formation and indigenous activism in contexts where indigenous peoples are a minority of the national population. Building on interdisciplinary contributions on state formation, citizenship, and collective identity formation, I argue that in the context of minority indigenous mobilization, territorial struggles and the importance of the local political level are crucial for understanding how collective identities are configured and how indigenous activists engage with the state in interesting ways to advance their claims. In this study, I look at the formation of collective identities through processes of contestation, struggle and conflict, as well as negotiation and accommodation with the institutions, discourses, and practices of the state and the forms of citizenship it sustains. Accordingly, this study of contemporary Mapuche activism advances our understanding of how indigenous collective identities are formed as the result of ongoing interactions between indigenous activists and the state. Recommended Citation Savino, Lucas, "The quest for territorial autonomy: Mapuche political identities under neoliberal multiculturalism in Argentina" (2013). Electronic Thesis and Dissertation Repository. 1717.
https://ir.lib.uwo.ca/etd/1717/
With so much of the global population living on the move, away from their homelands, and in diasporic communities, death and mourning practices are inevitably impacted. Transnational Death brings together eleven cutting-edge articles from the emerging field of transnational death studies. By highlighting European, Asian, North American, and Middle Eastern perspectives, the collection provides timely and fresh analysis and reflection on people’s changing experiences with death in the context of migration over time. Beginning with a thematic assessment of the field of transnational death studies, readers then have the opportunity to delve into case studies that examine experiences with death and mourning at a distance from the viewpoints of Family, Community, and Commemoration. The chapters highlight complicated issues confronting migrants, their families, and communities, including: negotiations of burial preferences and challenges of corpse repatriation; the financial costs of providing end-of-life care, travel at times of death, and arranging culturally appropriate funerals and religious services; as well as the emotional and sociocultural weight of mourning and commemoration from afar. Overall, Transnational Death provides new insights on identity and belonging, community reciprocity, transnational communication, and spaces of mourning and commemoration.

In this study, I examine the life narrative of a female factory labourer, Elsa Koskinen (née Kiikkala, born in 1927). I analyze her account of her experiences related to work, class and gender because I seek to gain a better understanding of how changes in these aspects of life influenced the ways in which she saw her own worth at the time of the interviews and how she constructed her subjectivity.
Elsa’s life touches upon many of the core aspects of 20th-century social change: changes in women’s roles, the entrance of middle-class women into working life, women’s increasing participation in the public sphere, feminist movements, upward social mobility, the expansion of the middle class, the growth of welfare and the appearance of new technologies. What kind of trajectory did Elsa take in her life? What are the key narratives of her life? How does her narrative negotiate the shifting cultural ideals of the 20th century? A life story, a retrospective evaluation of a life lived, is one means of constructing continuity and dealing with the changes that have affected one’s life, identity and subjectivity. In narrating one’s life, the narrator produces many different versions of her/himself in relation to other people and to the world. These dialogic selves and their relations to others may manifest internal contradictions. Contradictions may also occur in relation to other narratives and normative discourses. Both of these levels, subjective meaning making and the negotiation of social ideals and collective norms, are embedded in life narratives. My interest in this study is in the ways in which gender and class intersect with paid labour in the life of an ordinary female factory worker. I approach gender, class and work from both an experiential and a relational perspective, considering the power of social relationships and subject formations that shape individual life at the micro-level. In her narratives Elsa discusses ambivalence related to gendered ideals, social class, and especially the phenomenon of social climbing, as well as technological advance. I approach Elsa’s life and narratives ethnographically. The research material was acquired in a long-standing interview process and the analysis is based on reflexivity of the dialogic knowledge production and contextualization of Elsa’s experiences.
In other words, I analyze Elsa’s narratives in their situational but also socio-cultural and historical contexts. Specific episodes in one’s life and other significant events constitute smaller narrative entities, which I call micro-narratives. The analysis of micro-narratives, key dialogues and cultural ideals embedded in the interview dialogues offers perspectives on experiences of social change and the narrator’s sense of self. This book is part of the Studia Fennica Ethnologica series.

Identities in Practice draws a nuanced picture of how the experience of migration affects the process through which Sikhs in Finland and California negotiate their identities. What makes this study innovative with regard to the larger context of migration studies is the contrast it provides between experiences at two Sikh migration destinations. By using an ethnographic approach, Hirvi reveals how practices carried out in relation to work, dress, the life-cycle, as well as religious and cultural sites, constitute important moments in which Sikhs engage in the often transnational art of negotiating identities. Laura Hirvi’s rich ethnographic account brings to the fore how the construction of identities is a creative process that is conditioned and infiltrated by questions of power. Identities in Practice will appeal to scholars who are interested in the study of cultures, identities, migration, religion, and transnationalism.

In the mid-19th century, letters to newspapers in Finland began to condemn a practice known as home thievery, in which farm mistresses pilfered goods from their farms to sell behind the farm master’s back. Why did farm mistresses engage in home thievery, and why were writers so harsh in their disapproval of it? Why did many men in their letters nonetheless sympathize with women’s pilfering? What opinions did farm daughters express?
This book explores theoretical concepts of agency and power applied to the 19th-century context and takes a closer look at the family patriarch, resistance to patriarchal power by farm mistresses and their daughters, and the identities of those Finnish men who already in the 1850s and 1860s sought to defend the rights of rural farm women.

Rural spaces are connected with different cultural, economic, social and political codes and meanings. In this book these meanings are analysed through gender. The articles concretely show the process of producing gender and the ways in which accepted gender-based behaviour has been constructed at different times and in different groups. Discussion of gendered spaces leads to wider questions such as power relations and displacement in society. The changing rural processes are analysed on the micro level, and the focus is set on how these changes affect people’s everyday lives. Answers are sought to questions such as: How are individuals responding to these changes? What are their strategies, solutions and tactics? How have they experienced the change process?

The West has always been a resource for the Finns. Scholars, artists and other professionals have sought contacts from Europe throughout the centuries. The Finnish experience in Western Europe and the New World is a story of migrant laborers, expatriates and specialists working abroad. But you don’t have to be born in Finland to be a Finn. The experiences of second-generation Finnish immigrants and their descendants open up new possibilities for understanding the relationship between Finland and the West. The Finnish passage westward has not always crossed national borders. Karelian evacuees headed west, as did young people from the Finnish countryside when opportunities to make a living in agriculture and forestry diminished in the post-war era. The legacy of these migrants is still visible in the suburbs of Finnish cities today.
https://oa.finlit.fi/site/books/series/studia-fennica-ethnologica/
Course Coordinators at CEMUS explore ways to challenge traditional academia and create innovative, insightful and inclusive education. Here at CEMUS, Course Coordinators have jointly created and participated in a Course Coordinator Series that aims to share knowledge, experience, and best practice in the planning and preparation phases of autumn courses. Building upon a model of active participation, the Course Coordinator Series comprises multiple workshops that engage staff in dialogue on questions of education for sustainable development. We explore our roles as coordinators, asking questions about how the role affects the way education is designed and conducted. We aim to provoke reflection upon how we can build upon the unique model of Active Student Participation here at CEMUS, facilitating students in becoming resources for each other’s learning. We seek to break the expert/student barriers within education by valuing all knowledge in the classroom, and in doing so empowering students to co-create learning. Fitting with the collaborative environment central to the CEMUS model, coordinators collectively explore various themes to help develop challenging and interdisciplinary courses that grapple with the complexities of sustainability and the modern world. CEMUS runs four Course Assemblies that facilitate brainstorming around central pillars of course design: 1) Course themes and content; 2) Literature; 3) Examination; and 4) Pedagogy and didactics. Alongside these assemblies, coordinators choose to participate in 3 of the 9 elective sessions that have been developed as part of the Course Coordinator Series.
These are:
- Active Student Participation at CEMUS – ways of involving, engaging and enraging students, May 14
- Gender, Intersectionality and Norm-critical Education, May 8
- Sustainability and Climate Change Education – starting points and ways forward
- The Course Coordinator Role and Mission – navigating balmy and choppy waters at CEMUS
- Multi-Inter-Trans-Disciplinary Methods and Processes at CEMUS – transcending boundaries and breaking down walls
- Reading, Writing, Revolution – honest education, reckless talk and challenging students
- Deep Adaptation and Education for Survival – between hope and a hard place
- Extinction Education – the myth of human supremacy (Jensen) and the love of nature
- Fact-Based Education and Calculating The End Of The World – carbon budgets, footprints of giants and the changing nature of facts
- CEMUS and Sustainability Myth Busters

Active Student Participation at CEMUS – ways of involving, engaging and enraging students, May 14

The Course Coordinator Series continued this week with a session centred on Active Student Participation (ASP), led by our very own Director of Studies Alexis Engström. Alexis also works with the Division for Quality Enhancement, Academic Teaching and Learning. Along with another of our colleagues, Sanna Barrineau, they have published ‘an Active Student Participation companion’ to inspire co-creative ways of learning. The book is available here. The purpose and style of education, along with pedagogical techniques, were core themes of the elective session. Coordinators worked together to explore three central questions:
- What is the purpose of education at CEMUS?
- What are the characteristics of a good teacher and a good student?
- Why is Active Student Participation a useful pedagogical and educational method?

The session started by looking at different discourses of education and the potential challenges they pose for education.
For example, the idea of students as consumers who pay tuition fees in return for qualifications and skillsets makes assumptions that both teaching and learning are evidence-based, and are subject to quality measurement and quality assurance. These assumptions have implications for who can be an educator and what kinds of knowledge are legitimate within higher-level education. However, in the context of sustainability challenges that are complex, multi-scalar and multi-dimensional, we explored why education ought to be conceptualised and practiced in different ways, and how we can value the contribution of different knowledge. Course Coordinators highlighted that in a fast-changing, digital world, the way people understand and relate to modern-day issues is also evolving. Despite this, academic institutions are still educating students for the industrial era, with skills and knowledge that are not equipped to grasp these complexities. Rather than being interdisciplinary, the need for accreditation produces education around silo disciplines, which are poorly equipped to grapple with the complex challenges that transcend academic boundaries. Active Student Participation (ASP) is a radically different way of teaching and learning based upon reciprocal and collaborative practices. ASP refers to the process of learning, and emphasises the role of peer-to-peer learning and students as the co-creators, co-planners, and co-implementers of education. ASP stresses the importance of students as resources in each other’s learning, where different knowledge, skills and worldviews become assets in the educational space. Rather than simply reproducing knowledge, ASP emphasises that the purpose of education is to transcend and critique social norms, values and behaviour by utilising and interacting with different epistemologies, methodologies, theories and worldviews.
In relation to the purpose of education, Course Coordinators then brainstormed what the characteristics of a good teacher could be and discussed the role of Course Coordinators in facilitating Active Student Participation. The role of an educator immediately moved beyond that of an ‘expert’, to someone who is willing to engage, facilitate and explore alongside students. Course Coordinators reflected on their own multiple roles as observers, learners, peers, planners, designers, administrators, critical friends…and the list goes on. Some qualities raised as important in participatory higher-level education were: the ability to ask the question ‘why’ and to be reflexive; not taking ‘experts’ and information at face value; the ability to create a safe environment where students can raise difficult questions; facilitating curiosity; treating students as individuals with different needs and learning styles; and recognising and utilising diversity. Talking about the roles of educators and students highlights the difference between an institution that ‘listens’ to students and an institution that values students as ‘change agents’ and ‘active collaborators’ in education. In addition to rethinking the roles of educators, ASP further prompts us to rethink the characteristics of ‘good’ students. Importantly, Course Coordinators discussed what is ‘good’ in relation to education. What is success and failure? How do we measure success? By problematizing the fundamental notion of ‘good’ students, we immediately began to think about the purpose of education and the context of learning, which may be entirely different for different individuals. ASP as a pedagogical technique emphasises the importance of agency in the learning process and is therefore a fruitful approach to the interdisciplinary and collaborative education needed to address complex sustainability challenges.
Gender and Norm-Critical Education, May 8

Warren Kunce joined CEMUS course coordinators this spring to discuss Gender and Norm-Critical Education. Warren has been an important voice at Uppsala University in addressing biases and discrimination inherent in the practices of academia. Writing an ‘Introduction to Gender Identity and Gender Expression’, Warren seeks to dispel the misinformation around gender identity to help create fair, inclusive and safe educational spaces. You can find a link to the handbook here and more information about Warren’s work on Gender Diversity at this address https://www.genderdiversity.se/ When asked about the importance of gender and norm-critical education, Warren says: “Norm critical approaches to gender diversity and other facets of higher education are vital because norms affect who gets to create knowledge, which knowledge is passed on and what types of knowledge are considered legitimate and important.” A ‘norm’ is something that is considered normal behaviour within a particular social group. Norms are constructed through culturally held beliefs and practices over time; they shape our everyday behaviour and condition us to think, talk and act in certain ways. Importantly, those who abide by these norms have access to greater privilege and power within society, which is no less true in educational spaces. Stereotypes of what it is to be a ‘man’ and a ‘woman’ in society are examples of norms that give access to privilege and power. Norms that determine acceptable ways for men and women to dress, to speak, and to interact hugely affect biases against those who do not conform to these standards. However, as Warren’s workshop highlighted, gender identity is more than a binary choice of man and woman, and is not determined by anatomy. The way people identify with gender is unique to one’s self.
The way people express their gender, through the way they dress, their mannerisms, their names and pronouns, their facial and body hair, etc., should not lead to exclusion or discrimination. The simple example of using the pronouns “he” and “she” in educational material highlights how individuals who do not conform to these identities are excluded, and as educators we should understand how this affects whose knowledge is legitimised in the classroom. In education there exists a myth of objectivity, a view that we are able to leave our identity, with its perceptions, assumptions and beliefs, at the door. However, Warren’s workshop highlighted that approaches to education often neglect the influence of identity and the way it determines our approach to the classroom; the way it affects who we interact with, and how we interact with both course content and different people. Understanding what norms are present is an important first step in being norm-critical. Norm criticism refers to methods and theories that are used to shift attention from those who do not comply with certain norms to the actual norm that is taken for granted. Norm-criticism is not merely concerned with representing the ‘under-represented’ or ‘alternative’ views in academia, but rather with asking why the ‘norm’ is the norm, and similarly, why the ‘alternative’ is considered alternative. To be norm-critical is about questioning what knowledge is legitimate and why. Rather than positioning the ‘other’ as pieces of a puzzle that must be forcefully fit into certain constructs of ‘normal’, we must strive to re-imagine education in ways that accommodate and constructively utilise diversity. As course coordinators and pedagogues, we must be aware of how we frame problems and what stereotypes we may (unwittingly) promote. Similarly, we must engage students in doing the same in the classroom.
In the workshop run by Warren, course coordinators used personal experience to think of times when their assumptions may have affected their thoughts and actions. The session got Course Coordinators to ask tough questions about their worldviews and confront everyday biases that may permeate their approaches to the classroom. For example, when we use collective identities such as ‘we’ and ‘society’, are we aware of who this includes and excludes? Is the absence of the ‘other’ problematic for how we approach certain issues? Thinking in this way allows us to understand what generalisations are made and what values these generalisations are promoting or undermining. The workshop provided Course Coordinators with food for thought on how education and academia can be challenged and re-conceptualised in ways that utilise the best of everyone’s knowledge and abilities, and how this can help us at CEMUS create interdisciplinary education that contributes to creating a better world.
http://www.cemus.uu.se/broadening-horizons-in-cemus-education/
Gender roles are biologically evolutionary, capable of being socially constructed

Gender roles evolve with time. This occurs both through biological evolution and in tandem with society’s structure.

The Argument

Biological evolution dictates that males and females have psychological differences due to the survival techniques that our ancestors had to use. Men are more aggressive and competitive as they have had to hunt and fight over females, and females are less promiscuous and more selective over males. These psychological differences explain the power imbalance of men and women, where men are more dominant and females submissive. Even if there were biological and evolutionary predispositions of male and female characteristics, this does not mean that these must be followed as the natural course of things. Evolution is about survival, and not about what is right or wrong. As societal needs progress, the evolutionary needs of the past do not need to remain the case. They can be altered through challenging the norms according to the needs of society and survival.

Counter arguments

The allotment of gender into just two categories of male or female is not accurate. There are many variations in gender, which can be biological or behavioural: for example, those born with differences in genitalia, or with hormonal imbalances, such as of testosterone. Gender is therefore not binary, as the biological evolution of gender theory suggests. Evidence of transgender individuals has existed in archaeological finds dating back to ancient civilisations. This suggests that gender is not binary, and that the approach holding that biological and psychological male and female characteristics developed as a result of evolutionary survival needs does not reflect the diversity within each of the male and female identities, nor account for those individuals who would have identified as transgender.
https://www.parlia.com/a/gender-roles-are-evolutionary
** This session has been split into two sessions, and you must attend both. The two times are:
- Wednesday, May 19th from 9:00 am - 12:30 pm
- Thursday, May 20th from 9:00 am - 12:30 pm

The Critical Cultural Competency workshop is built on the premise that US society has a dominant set of norms, values and ways of life. This dominant culture establishes the rules and laws by which all people and their ways of life are measured, resulting in a society with unjust power dynamics based on socially constructed identities such as race, class, gender, sexuality, and ability. The workshop will examine the ways that US society values and advantages some groups while devaluing and disadvantaging other groups based on socially defined categories. The workshop will explore how people within institutions can begin to create new institutional culture by rooting their structure and organizational lifeways in a set of anti-oppressive, transformational values. This Critical Cultural Competency workshop is designed for participants who want to understand the ways that societal inequity is embedded in institutional life as well as in individuals through socialization processes that shape thoughts and behaviors. Participants will examine unconscious bias and learn about the value shift required to start creating more equitable provision of services, programs, and institutional culture.

About the Facilitators: Since 1986, Crossroads Antiracism Organizing & Training has been providing strategic organizing, workshop facilitation, and consulting to institutions, sectors and communities striving to dismantle racism. This includes analyzing internal policies and procedures that maintain white power and privilege, and helping to create intervention strategies to dismantle oppressive systems. A key strategy for institutional organizing is creating internal antiracism teams once there is sufficient shared understanding and analysis within an organization.
Through this work Crossroads also strives to create and strengthen structures of accountability to People and Communities of Color and other socially oppressed groups, both within the institution and in the wider community.

Learning Objectives: Participants will be introduced to racism as a systemic and structural problem that:
- Shapes individual attitudes and actions in ways that pull people into complicity with dominant societal norms.
- Impacts institutional norms and behavior in a way that inhibits the ability of institutions to fully and appropriately serve all constituents.
- Creates institutional mono-culture that makes it difficult for People of Color, immigrants and refugees to access and receive services in culturally sensitive and appropriate ways.

Suggested Career Path: All levels – anyone interested in learning about how systemic racism shows up in our communities.
https://www.cnm.org/civicrm/?page=CiviCRM&q=civicrm%2Fevent%2Finfo&reset=1&is_active=1&id=3856
Grandparenting in the 21st century is at the heart of profound family and societal changes. It is of increasing social and economic significance yet many dimensions of grandparenting are still poorly understood. Contemporary Grandparenting is the first book to take a sociological approach to grandparenting across diverse country contexts and combines new theorising with up-to-date empirical findings to document the changing nature of grandparenting across global contexts. In this highly original book, leading contributors analyse how grandparenting differs according to the nature of the welfare state and the cultural context, how family breakdown influences grandparenting, and explore men's changing roles as grandfathers. Grandparents today face conflicting norms and expectations about their roles, but act with agency to forge new identities within the context of societal and cultural constraints. Contemporary Grandparenting illuminates key issues relevant to students and researchers from sociology and social policy, including in the fields of family, childhood, ageing and gender studies. "Will the 21st century be the 'grandparents' century'? We may believe so from reading this collection of contributions by leading scholars from all over the world, showing how grandparents are becoming a 'pivot generation' within families and within society. One of the great qualities of this book is its demonstration of a phenomenon which still remains underestimated." Claudine Attias-Donfut, Associate Senior Researcher, Edgar Morin Centre, Paris (CNRS/EHESS) (National Centre for Scientific Research/School for Advanced Studies in the Social Sciences) "This insightful and penetrating analysis shows how modern grandparenthood shapes and is shaped by the changing social and economic contexts of family relationships. The skilful integration of contributions from around the globe is a unique strength." 
Anne Martin-Matthews, Department of Sociology, University of British Columbia

Sara Arber is Professor of Sociology, and Co-Director, Centre for Research on Ageing and Gender (CRAG), University of Surrey, UK. She received the British Society of Gerontology Outstanding Achievement Award in 2011. Virpi Timonen is Associate Professor and founding Director of the Social Policy and Ageing Research Centre at the School of Social Work and Social Policy in Trinity College Dublin, Ireland.

Contents: Introduction: A new look at grandparenting ~ Virpi Timonen and Sara Arber; Section One: Grandparenting responding to economic and family transformations; Transformations in the role of grandparents across welfare states ~ Katharina Herlofson and Gunhild O Hagestad; The well-being of grandparents caring for grandchildren in rural China and the United States ~ Lindsey Baker and Merril Silverstein; Grandmothers juggling work and grandchildren in the United States ~ Madonna Harrington Meyer; Solidarity, ambivalence and multigenerational co-residence in Hong Kong ~ Lisanne SF Ko; Grandparenting in the context of care for grandchildren by foreign domestic workers ~ Shirley Hsiao-Li Sun; Section Two: Grandparenting identities and agency; Being there yet not interfering: The paradoxes of grandparenting ~ Vanessa May, Jennifer Mason & Lynda Clarke; Grandparental agency after adult children's divorce ~ Virpi Timonen & Martha Doyle; Grandfathering: The construction of new identities and masculinities ~ Anna Tarrant; Understanding adolescent grandchildren's influence on their grandparents ~ Alice Delerue Matos and Rita Borges Neves; Social contact between grandparents and older grandchildren: A three generation perspective ~ Katharina Mahne & Oliver Huxhold; Grandparenting in the twenty-first century: New directions ~ Sara Arber & Virpi Timonen.
https://policy.bristoluniversitypress.co.uk/contemporary-grandparenting
Given that effective security and justice sector institutions are fundamental to sustainable peace, Security Sector Reform (SSR) – the reform or (re)construction of security and justice sector institutions – remains central to peacebuilding endeavors. A central principle of SSR is that security and justice sector institutions be both responsive and representative if they are to be effective and instill public confidence and trust. Gender-responsive SSR seeks to develop institutions that address the security needs of women and men as well as people of diverse gender identities, with the aim of achieving more equal representation of these groups. Despite policy guidance recognizing the need for more gender-responsive SSR, in practice women and their security needs continue to be marginalized in SSR efforts and security institutions.

When Risk Can Justify Inaction

This gap between policy and practice is often the result of arguments that unwelcome risks would arise from promoting a gender-responsive approach to SSR. These risks can legitimize inaction. They include risks to individuals, security sector institutions, and peacebuilding efforts and encompass security, programmatic, fiduciary, and reputational risks. They form three broad categories:
- Physical harm: if the principle of gender equality is not valued within society, efforts to recruit women to the security sector can expose these women to harm;
- Compromising operational effectiveness: arguments about risks to operational effectiveness in security institutions are based on assumptions that the skillset and aptitude of women can disrupt male bonding processes and institutional capacity;
- Destabilizing power relations: in societies lacking a commitment to gender equality, efforts to promote gender equality can result in accusations of challenging traditional patriarchal power relations, which can lead to backlashes against women’s increased empowerment.
These risks also create complementary risks to the SSR program and, in consequence, to any implementing organization or donor, which are often keen to avoid reputational and fiduciary risks. This cycle further inhibits efforts to promote gender-responsive SSR. As the OECD (2016, 15) has stated, institutional desires to avoid such risks “are a major barrier in scaling up and delivering more effective and transformative programs in fragile, at-risk and crisis-affected contexts.”

Risks and Tokenism

These arguments about risk tend to focus on recruitment of women to security sector institutions rather than on the activities involved in building comprehensive gender-responsive SSR. This is in part because gender-responsive SSR is often reduced to the tokenistic recruitment of women into security sector institutions, ignoring that comprehensive gender-responsive SSR moves far beyond tokenistic recruitment and can help avoid some of the risks that often justify inaction. Comprehensive gender-responsive SSR includes:
- promoting meaningful and influential representation of women, men, and people of diverse gender identities;
- attending to institutional, structural, and cultural barriers to women’s recruitment, retention, and promotion;
- ensuring security sector institutions are responsive to the security needs of women, men, and people of diverse gender identities;
- taking into account the gender implications of security policies, procedures, and practices;
- attending to gender bias within security sector institutions and the way in which gender norms and expectations might cause harm.

Risk Analysis and Risk Management

These arguments about risk also tend to lead to inaction without a comprehensive risk analysis to determine the likelihood and magnitude of the potential risks, how risks can be mitigated and managed, and what risks may result from inaction.
Given that risk avoidance can undermine program effectiveness, a comprehensive risk analysis would reveal how gender-responsiveness can increase operational effectiveness.

The Political Act of Risk Selection and Risk Aversion

Where an evaluation of risks leads actors to avoid taking action, it is necessary to ask who decides what constitutes a risk and which risks are worth taking. It is necessary to recognize that these decisions are normative and political and tend to reflect and reinforce cultural norms and dominant power relations, including gendered power relations. Risks that arise as a result of hegemonic masculinities are therefore more likely to be regarded as acceptable or unavoidable (armed conflict, for instance), while those risks which appear to counter masculine norms are more likely to generate concern (e.g. those associated with advancing the principle of gender equality). This is especially the case among those individuals who may benefit from or align themselves with the traditional values enshrined within the patriarchal social order. Institutions reflect and reproduce gender power relations and gendered inequalities. This is especially true of security sector institutions, which often serve to protect and promote hegemonic masculine norms. This understanding helps explain the gap between gender-responsive SSR policy and practice and the tendency for the language of risk to trump the language of inclusion and equality. It further helps explain how informal rules, norms, and practices – such as gendered assumptions about the vulnerability of women and their capabilities, the appropriateness of women’s place and behavior, and normative assumptions about risk – can undermine formal rules regarding gender equality, responsiveness, and inclusion. Assumptions about risk are clearly informed by a “gendered logic of appropriateness” (see Chappell 2014 and other feminist institutionalist scholars).
Risk taking and risk aversion are structured by informal rules that both reflect and reinforce gender power relations, thereby sustaining and justifying gender inequalities.

Missed Transformational Opportunities

When arguments about risk justify inaction on gender-responsiveness, opportunities are missed to advance a more effective, responsive, and accountable security sector that promotes the transformational change that can lead to sustainable and equitable peace. Comprehensive gender-responsive SSR has the potential to consolidate efforts to build a sustainable peace. This peace emerges through the renegotiation of gendered power relations and the distribution of resources, including access to security, justice, and power. Moreover, the long-term risks to gender equality, women’s security, and broader societal stability that arise from failing to enact a gender-responsive approach to SSR outweigh the risks of implementation. Unfortunately, the dominant patriarchal focus on preventing a recurrence of conflict in peacebuilding lends itself to focusing on short-term and immediate risks, rather than longer-term risks.

Conclusion

Women’s marginalization from SSR and security sector institutions occurs despite policy and a professed commitment to the principle of gender equality. This paradox of women’s continued marginalization stems from an attachment to gendered norms, which situate the woman as in need of protection but without the requisite security expertise to determine how best to respond to that need. A woman’s agency is consequently surrendered to others who often agree that women (notably early recruits in the security sector) should be marginalized from the security sector for their own good, as well as for the benefit of the institutions themselves (protecting their operational effectiveness) and wider society (protecting peacebuilding processes from unnecessary destabilization).
It can be seen, therefore, that informal rules, practices and norms, imbued with gender biases, undermine efforts to promote gender-responsive SSR, even where formal rules and expectations exist. These informal rules, practices, and norms are especially likely to dilute or subvert reform efforts which challenge the gendered status quo (see Mackay and Murtagh 2019). Consequently, comprehensive gender-responsive SSR, which has the potential to lead to transformational change and avoid some of the practical risks discussed, is more likely to be framed as risky and potentially destabilizing as it is more likely to disrupt the gendered status quo. The language of “risk”, “stability,” and “appropriateness” is central to this process of undermining reform processes and to the continued marginalization of women within and through SSR programs. For the full article, see: Gordon, E., McHugh, C. and Townsley, J. (2020) ‘Gender-Responsive Security Sector Reform and Transformational Opportunities’, Global Security Studies. https://doi.org/10.1093/jogss/ogaa028.
https://www.wiisglobal.org/post-conflict-gender-responsive-security-sector-reform-risk-versus-opportunities/
The dynamic processes of knowledge production in archaeology and elsewhere in the humanities and social sciences are increasingly viewed within the context of negotiation, cooperation and exchange, as the collaborative effort of groups, clusters and communities of scholars. Shifting focus from the individual scholar to the wider social contexts of her work, this volume investigates the importance of informal networks and conversation in the creation of knowledge about the past, and takes a closer look at the dynamic interaction and exchange that takes place between individuals, groups and clusters of scholars in the wider social settings of scientific work. Various aspects of and mechanisms at work behind the interaction and exchange that takes place between the individual scholar and her community, and the creative processes that such encounters trigger, are critically examined in eleven chapters which draw on a wide spectrum of examples from Europe and North America: from early modern antiquarians to archaeological societies and practitioners at work during the formative years of the modern archaeological disciplines and more recent examples from the twentieth century. The individual chapters engage with theoretical approaches to scientific creativity, knowledge production and interaction such as sociology and geographies of science, and actor-network theory (ANT) in their examination of individual–collective interplay. The book caters to readers both from within and outside the archaeological disciplines; primarily intended for researchers, teachers and students in archaeology, anthropology, classics and the history of science, it will also be of interest to the general reader.

This book takes a holistic approach to understanding cemetery development, and in its simplest reading it offers a new way to explore horizontal stratigraphy which depends on the local context and the layout of the cemetery.
Mortuary archaeologists know that approaches to horizontal stratigraphy are problematic (Ucko, 1969; Parker Pearson, 1999). The same is true of using objects to describe gender, social hierarchy or social status, and yet these approaches reluctantly dominate the contemporary interpretive narrative (Gowland and Knüsel, 2006; Šmejda and Turek, 2004). Approaches to gender tend to be described in cultural terms defined by the difference between biological sex and the social construction of gender (see, for example, Sofaer, 2006). Past approaches to gender can be embodied in cultural universality, but should not be seen as passive categories, for example ‘housewife’, ‘warrior’, ‘slave’ (Lucy, 1997: 164). Our own contemporary social context, however, does not support the use of these narratives because our experience of society is pluralistic and institutions like family or household influence the expectations and expressions of gender identity (Reay, 1998). Modern Australian, Welsh, Scottish, Irish, English or American societies all have subtly, and not so subtly, different approaches to the body, family, marriage, childbirth, social class, gender and age or education, based on wider cultural contexts like history, religion or law. Most importantly, there is not in fact a single approach to these ideas in any of the places described. Indeed, your own attitude to family, for example, might depend on your past, your background and, importantly, the regional or class context of your upbringing. In this case, then, there are in fact multiple societal attitudes towards gender or the family, just as people’s experience of family varies widely. This book uses a comprehensive exploration of the early Anglo-Saxon mortuary context to drill down into the local history and development of cemetery sites to explore the role of family and household and their impact on localised expressions of gender, life course and wealth.
This exploration is a case study in mortuary archaeology which proposes a way of looking at the visual aesthetics of mortuary space, to understand local leitmotifs as part of the expression of community history. Different agents working from different experiences within a unique and complex mortuary landscape created each funeral and, as a result, no two burials and no two cemeteries were the same. What this means is that any two persons’ experiences were not the same. This book shows that each site contained a number of different attitudes towards the body, the display of gender, the use of the past or the use of objects in mortuary display. As a result, the attitudes of a funerary party, and the way they valued the location of a grave and its relationship to those graves around it might be a better indicator of social rank/identity than the number of objects within it. The past then is complex, dynamic and pluralistic, and this can be seen most obviously in the way that people negotiated the expression of mortuary identities within the public sphere. Many mortuary sites were intended to be visited: they were places to tell stories, places to build relationships and places to create or share identities (Price, 2010; Williams, 2002a). Uniquely, the approach outlined in this book places kinship, family and household in the foreground because it is these relational contexts that are at the heart of Anglo-Saxon society as seen in the poems and stories which reproduced it. The institutions of family determined and/or reproduced localised or personal attitudes towards gender, age, status and identity; and so an understanding of family and relational archaeology is essential: it is the keystone in the construction of a social approach that encapsulates the complexity of the lived past.
https://www.manchesteropenhive.com/view/9781526153845/9781526153845.00009.xml
This is the second blog from our 2018 Microgrant winners. Sebastian Cordoba, PhD student at De Montfort University, was awarded £440 for research into understanding the psychological, linguistic and social experiences of people who don't identify as men or women in the UK. Read his story below. I am soon starting the third – and last – year of my PhD in social psychology at De Montfort University. My research focusses on the linguistic, social, and psychological experiences of those living in the UK who do not identify solely as men or women: non-binary and/or genderqueer people. I am particularly interested in the multiple ways non-binary people construct their gender identity linguistically. For instance, some non-binary people might adopt gender-neutral language for themselves: pronouns (e.g., they/them), labels (e.g., agender, bigender, gender-fluid, etc.), titles (e.g., Mx), etc. By exploring these linguistic patterns and the ways in which non-binary people negotiate social interactions, my research aims to contribute to the knowledge base of non-binary gender identities. While non-binary gender identities have gained more cultural awareness and representation in mainstream culture, my research has revealed that there is indeed much to be learned about this ‘invisible’ population, one that has long existed but is just now making its way into the mainstream consciousness. There is a clear lack of societal understanding of gender plurality. I argue that this cultural unintelligibility – especially in the linguistic sense – is one of the main reasons gender minorities experience distress. It is therefore one of my research goals to contribute to the public awareness of gender diversity. The ongoing analysis for my research has revealed a great deal about non-binary people’s unique language patterns while providing a deeper context of their daily challenges. 
For instance, my interviews have revealed that using a distinct type of language not only serves as a tool for non-binary people to differentiate their gender, but also as a marker of social identity and group membership – one that allows their gender identity to be recognised and more widely validated. Additionally, my interviews have brought to light the discrimination and misgendering that non-binary people face, affecting among other things their overall wellbeing, access to health care, employment opportunities, and education. I am incredibly grateful for Gradconsult’s microgrant (£440), which has helped me cover some of the costs of my research. Since I am self-funding my PhD living expenses, these funds have been tremendously helpful. I have interviewed 22 non-binary people living in the UK, so I used this money to pay for my participants’ time and contribution to my work. I am also very thankful for the fact that Gradconsult is committed to aiding researchers who, like myself, come from minority backgrounds and are doing research on often ignored or ‘invisible’ populations/subjects. While visiting the Gradconsult office, I felt welcomed and appreciated for my work. The other recipients – all women – also discussed their important research ideas and outputs, which mostly pertained to similarly marginalised or ‘invisible’ populations such as mothers in prison, farming, and human trafficking, to name a few. I strongly recommend that early career researchers apply for a microgrant at Gradconsult, given their commitment to assisting researchers who might otherwise not have access to financial support at the early stages of their career.
http://blog.gradconsult.co.uk/gender-diversity-research-making-the-invisible-invincible
Entanglements of Teenage Motherhood Identities: A Critical Ethnography within a Community-Based Organization
LoBello, Jana (2017)
Persistent link to this item: https://hdl.handle.net/11299/188935
Issue Date: 2017-05
Type: Thesis or Dissertation

Abstract
The social construction of adolescence as a distinct developmental stage is based on a hierarchy of age, race, social class, and gender that affords some individuals with the privileges of full participation in the United States yet positions others as subordinate within the progress of the nation (Lesko, 2012). The organization of school as an institution relies on the assumption that development occurs in linear stages where grade levels and labels such as elementary, middle, and high school predict certain characteristics found within each context. Oftentimes, teenage mothers are positioned as those subordinate or deficit within these formal systems of education as they do not “fit” into these traditional labeling practices. Negative labels such as “stupid slut”, “teen rebel, teen mom”, “the girl nobody loved” and “dropouts” show evidence of this deficit mindset (Kelly, 2000). The impact of such labels manifests itself in perceptions of disengagement within formal school settings (Kalil, 2002; Kalil & Ziol-Guest, 2008) and the policing of aged, racial, social classed, and gendered bodies (Jones, 2007). The purpose of this critical, ethnographic study is to deeply explore the experiences of teenage mothers participating in a community-based organization (CBO) as potential opportunities to take up issues of age, race, gender, sexuality, motherhood, and social class within their ongoing identity construction and schooling experiences.
This study takes a critical perspective on the social construction of adolescence in order to contribute to scholarly work that attends to how teenage mothers are socially, politically, and educationally positioned within Western schooling and society. By focusing on hybridity and the intersectionality of identities, this research pays attention to the ways in which discriminatory educational practices have been both disrupted and maintained when conceptualizing what it means to educate and involve teenage mothers and their children within existing systems. Findings show that the chronological passing of time as well as the physical representation of the pregnant female figure is reflected within women’s stories as one form of oppression and/or agentic negotiation. Additionally, participants engaged mixed perceptions around whether and how local and alternative high schools provide space for the hybridity and intersectionality of teenage mothering identities within embodied “fitting in” or “pushed out” discourses. These perceptions seek to complicate traditional practices and identities of student, athlete, and parent within formalized educational spaces. Also, Real Moms both provides opportunities for authentic senses of caring (Noddings, 2005) and has limitations in “protecting” participants from the risks of being vulnerable within relationship and storytelling. This study will extend the literature by looking at the ways in which teenage mothers are both disrupting and reinscribing discourses of chronological developmental stage theories (Lesko, 2002; Lesko, 2012) by attending to the multitude of social factors that influence the cultural construction of adolescence and adolescents (Vagle, 2012).
Additionally, this work looks at how schools are sites for the perpetuation of social contracts that implicitly exclude or push out specific student identities, such as race, social class, and teenage motherhood, that do not adhere or assimilate to existing normalized practices (Milner, 2015; Noguera, 2003). For example, the quarantining of teenage mothers into all-female alternative schools or their limited participation within local schools attempts to de-sexualize female students against discourses of desire (Fine, 1993). In thinking about authentic, caring relationships (Noddings, 2005), this study also complicates the notion of creative, narrative expression as an automatic form of empowerment, as opportunities for vulnerable storytelling stir up both damaging stereotypes (Edell, 2013) and self-interpretations of empowerment (Kelly, 1997). By contextualizing the lived experiences of the female teenage mothers and mentors within this community-based organization, this study thoughtfully and reflexively attends to the existing discourses of teenage motherhood.

Keywords: Community-Based Organizations; Critical Ethnography; Identities; Intersectionality; Mentoring; Teenage Mothers

Description: University of Minnesota Ph.D. dissertation, May 2017. Major: Education, Curriculum and Instruction. Advisor: Bic Ngo. 1 computer file (PDF); vii, 239 pages.

Suggested Citation: LoBello, Jana (2017). Entanglements of Teenage Motherhood Identities: A Critical Ethnography within a Community-Based Organization. Retrieved from the University of Minnesota Digital Conservancy, https://hdl.handle.net/11299/188935.
https://conservancy.umn.edu/handle/11299/188935
The Stella Prize celebrates Australian women’s writing and is an organisation that champions cultural change. As we celebrate Australian literature and work with writers and readers to question gender disparities and challenge stereotypes, we recognise and support women and non-binary writers — in their diverse and holistic expressions of their gender identities. We do not believe gender identity is reducible to the body. As traditional binaries around gender continue to be challenged both politically and socially, we recognise that what it means to be a woman is not static. Stella advocates for a nuanced conversation around gender inequality, particularly the relationship that language and power have in creating and perpetuating it. We know that rigid gender norms reinforce inequality and limit us as individuals and a community. Regarding eligibility for the Stella Prize, we welcome authors who identify with Stella’s mission to celebrate Australian women’s writing in ways that reconcile with their understanding of their own gender identity. This includes trans women, non-binary and cis women writers. We do not require any statement beyond an author’s self-identification and interpret entry to the prize as confirmation of that identification. In our work with young people through our Schools Program, we aim to be welcoming to all genders. This includes girls, boys, non-binary and gender-diverse teens; cis, trans and other identities across the spectrum. We have been inspired and challenged by the young people who have contributed their depth of experience of what it means to feel limited by gender and their understanding of how language can be used to liberate and empower. Each year, we compile an annual Stella Count. Traditionally, we have identified authors’ genders on the basis of published information, both within authors’ own work and the related media coverage. This data is then anonymised and aggregated to ensure that individuals cannot be identified.
However, if an author believes they have been incorrectly considered in a particular gender category, or have subsequently changed their gender identity, Stella will make every effort to correct that data. We want to continue a conversation with the community around how we achieve gender equality. We are proud of our foundations in the women’s movement and do not want to invalidate gender diverse identities by assuming all experiences of gender inequality to be the same. A range of individuals and organisations have contributed to our ongoing understanding of gender; we have compiled a set of resources that have been influential to our thinking. We welcome your feedback and invite you to join this conversation.
https://thestellaprize.com.au/prize/guidelines-submission/statement-on-gender/
I’ve been asked by friends and colleagues, on more than one occasion, to share some of my reflections on the deep, theoretical explorations I engage through books, music, film, and more. Specifically, these folks have asked me to help illuminate for them and for others how those explorations make their way into my pedagogical practice with diverse youth in the Bay Area and Central Valley of California. My work with youth generally centers around anti-violence, social justice, and healthy relationships. Before jumping right into my process of moving from theory to pedagogy, I want to share a bit about how I’ve been taught to approach reading. In particular, I want to share how I read texts that are difficult, dense, opaque, or philosophical. What many might not know is that our modern university system is descended from Christian seminary colleges, where aspiring theologians and spiritual practitioners would engage in deep reading of the Bible. While many spiritual traditions – including sects of Christianity – have held space for diverse interpretations of religious texts, the puritanical forms of Christianity that took root in the United States, and that form many of our social institutions, focus on a “one truth” reading of the Bible. Consequently, this form of reading is the one that has taken root in U.S. universities: one author, one text, one interpretation, one truth. We know that even though the Bible doesn’t have one author (or even one version or translation), we have been taught to see the word of G-d as definitive and singular. This is not how I’ve been taught to read. My critical reading skills were forged through the intellectual traditions of radical Jewish thought and feminist Bengali scholarship, two traditions that foreground spirited debate, hospitality toward difference, interpretation-as-agency, and a respect for complex legacies of social thought and activism. 
It is my responsibility as a reader not to approach every text like the Bible, or to see it as an Oedipal father to overcome by only pointing to its lack, its gaps, its irrelevance due to its emergence in a prior historical moment. I have to put in the work to keep texts alive and dynamic by putting them to work in the real world. A myth we have been told is how quickly and immensely the world changes across decades; what I’ve come to discover is that deep, underlying social norms are very slow to change, even as how we experience our social worlds is constantly reinterpreted and in flux. Given this reality, texts from ten, twenty, thirty, even forty or more years ago continue to have relevance to our present moment because they help me to chart out debates, foreclosures, and other possibilities that we have long since abandoned, but that help to animate my imagination with curiosity and creativity. What might things look like otherwise? For me, reading is daring to dream, with wakefulness of the particularities of our present moment. I am always searching for openings, rather than simply identifying (or creating) foreclosures through careless, reactive, or facile interpretation of texts. I approach pedagogy in a similar way that I do reading – through openness, curiosity, and acute attentiveness to the group, the context, and the goals we set together. I am obsessively student-centered, but I approach this in a way that is somewhat different than others do. I believe that centering student learning doesn’t always mean students make decisions or lead the discussion. Sometimes there are important interventions that need to be made, and sometimes they just need to listen to me talk to them. I like to think of it as storytelling, an art form that is unfortunately dying out of current pedagogical approaches in mainstream (including liberal and progressive) cultures.
Before diving into this kind of teaching, I take time to “learn from below,” a strategy elaborated by Gayatri C. Spivak, wherein the educator becomes intimately aware of the struggles and lived realities of their students in order to craft meaningful learning experiences that are relevant to students’ lives. Even when I lecture (or story-tell, as I like to think of it), I have taken the time to ensure that it will land with each and every one of my youth. With this approach, I’m able to give them a lot to discuss, reflect upon, and question. I have also learned that students don’t operate very well with conceptual voids. We can problematize the world all we want, but it can be incredibly challenging for them to unlearn if I’m not also sharing new ways to make sense of all that they see; however, I don’t want to simply gloss over the importance of unlearning. This process is not as simple as “don’t do this/don’t think that” – it involves what historian, philosopher, and human rights activist Michel Foucault called genealogy, which is the union of erudite and local knowledges. These forms of knowledge come together when rigorously trained scholar-activists dig through the messy and layered archives of historical information to learn how our current cultural ideas and practices came to be, and they also highly value the lived experiences of those who are harmed by those oppressive ideas and practices in order to see how domination and resistance operate in complex and particular ways. I approach youth development work from an explicitly genealogical focus, on a number of fronts – deconstructing binaries of adult/child, queer/straight, white/not-white, trans/cis, disabled/abled – in order to help usher youth into an understanding of the historical and legal production of identity categories, without dismissing the political usefulness of these categories. 
This is recognizing what Spivak calls “strategic essentialism,” wherein individuals acknowledge the problematics of identity but find it useful to organize around assumptions of at least some shared experience. Why am I approaching youth development work in this way, when ostensibly I’m focusing on healthy relationships education at a domestic violence non-profit? I do this because issues of identity are often incredible sources of tension in relationships with queer and trans people. How “out” are the partners involved? Is it politically important to the individuals involved to engage in experimental and radical forms of relationships, including types of non-monogamy? Are people falling in love or dating interracially? Queer communities are often much more mixed-class than many other communities as well. The forms of difference that are concentrated in cultural settings with queer and trans people are immense, and how these differences shape queerness and trans-ness can often bring folks into conflict with one another. Getting to the root of violence in relationships often involves helping the parties involved differently consider one another’s life experiences, including traumas connected to membership in a social identity group. Before sharing with you exactly how I take rich, theoretical texts and translate them into pedagogical tools, I want to note a few problematics in how I’m presenting this work. I want to put an accent over the hesitation with which I demarcate this dualism of theory/pedagogy, or the “theoretical” and “practical” realms. For me, the two are intimately woven together, and when I’m reading, I’m taking time to both dive into complex works as an opportunity to be imaginative and curious, and at other times I’m reading with the explicit intent of looking for ideas I can bring to my young leaders. 
I also want to say that theoretical texts can’t just be read in isolation, but they require ongoing engagement and embedded-ness in community reflection. Interpreting theoretical texts is not about an individual’s intelligence or capacity-in-isolation; this kind of work is always connected to long-standing legacies of critical social thought, which often foregrounds the importance of cooperation, relationality, and holding tensions across difference. Having given a number of caveats, you may be wondering how any of this comes to life in the work I do with youth. How is Michel Foucault, a French scholar-activist who was most active during the 1960s-1980s, relevant to queer and trans youth in Oakland, California in the 2010s? First of all, he dedicated his life to activism and ongoing learning. He was deeply involved in multiple social movements across the course of his life, which ended much too early from AIDS-related complications in 1984. Michel might disagree with my labeling him as such, but I perceive him to have been a queer man who disliked being put in a box. There is much in this European man’s life that resonates with the aims of my youth, particularly the way his existence and thought challenged (and continue to challenge) many underlying assumptions of “identity.” Firstly, Foucault challenged the idea that identities are universal across time and place. Working with queer and trans youth from immigrant, diasporic, and low-income communities, they are intimately aware that dominant representations of queer and trans people often do not reflect their experiences. Secondly, Foucault questioned our confessional impulses that portray identity as stable, knowable, and important to decipher and share with others. The young people I work with are excited to explore, question, grow, and change, bringing their queerness or trans-ness with them through these explorations, inevitably reshaping those identities along the way. 
And thirdly, Foucault questioned the idea that identity is the single best tool for political organizing. Youth today are often feeling suffocated by the popularized and reductive interpretations of critical social thought and identity politics that have proliferated across the internet. Youth want to learn how to build alliances across difference in meaningful and sustainable ways, and don’t want to feel pressured to reduce diverse experiences to hierarchies of oppression. They see adults doing it, and they want more; they want better. I’m trying to do better for them, and for all of us. Foucault helps us challenge these three oppressive ideas (identities are universal, identities are stable/knowable, and identity is the only basis for political resistance). Why is this important? It’s important because concepts of sexuality and gender were crafted in Euro-American clinical settings that pathologized the queering of sexuality and gender. These youth come from communities that have been pathologized for these and other reasons, and significant tensions exist between queer and trans youth and their families or communities, who often say that queer and trans identities are a “white thing” – which I take to mean many possible things. It could be a refusal of individualistic identity, assumptions of distance from family, concerns about social rejection, denying clinical narratives of illness, and much more. These concerns need to be taken seriously, and queer and trans youth from diverse communities need thought-tools to help address and reduce acute tensions across generations. To respond to the call-to-action that I take from critical theory, I’ve created a few exercises to begin to loosen the grip that rigid framings of identity have on youth’s self-understandings and how they learn to police themselves and others: 1) an intersectional gender history timeline, 2) an identity map, and 3) an identity Q&A gallery walk.
The first exercise explores the ways that law and social norms in the United States were often crafted to target diverse forms of gender and sexuality and immigrant groups simultaneously, so that looking at the production of queer and trans identities requires looking at the project of whiteness and American-ness over the past couple hundred years. Youth learn ten moments in history where gender, sexuality, race, and nationality were being crafted simultaneously, and that in the U.S. it’s impossible to address homophobia and transphobia without also addressing xenophobia. Historically, what has been defined as “queer” has also been seen as “other,” and what is portrayed as “other” has also been described as “queer” or not properly heterosexual. This helps youth understand that our dominant ideas of queer and trans identities are culturally and historically particular. The second activity allows youth to explore their multiple identities and how they shape one another. We give each young person a giant poster paper and ask them to draw themselves in the center. Around that center drawing, we have them list identities that feel important to them. While encouraging them to think about race, place of birth, gender, sexuality, and more, we also give them space to think through non-politicized identities such as introversion/extroversion, affiliation with musical or athletic groups (formal or informal), and other activities (reader? artist? cook? storyteller? spiritual practitioner?). After they’ve listed these identities, we ask them to explore the people, places, and ideas that have impacted how they relate to those identities so that we aren’t perpetually forcing them to obsess over individualistic self-discovery, but to embed those coming-of-age processes within a cultural, community, or family legacy that predates them, and will continue after them. 
Youth then share their posters with the rest of the group, and it is during this activity that we have the opportunity to learn a lot about one another as individuals, as well as the worlds that we bring with us into that space of learning. The third activity, the identity Q&A gallery walk, provides youth with the opportunity to respond directly to complex questions about identity. Each of the following questions is put on a large sheet of poster paper and stuck up on the wall: How do you define “identity”? Why do you think we focus on visible identities? How can our identities make us feel safe? How can our identities make us feel unsafe? Have any of your identities stayed the same over time? Have any of your identities changed over time? Why do you think identities might change? Does everyone with the same identity relate to it in the same way? Why or why not? Youth are given post-its to write their responses on and then stick up on the larger sheets of poster paper. After everyone has responded to all the different questions, we go around and review youth’s responses. They often respond to our questions with questions, and some respond with open and frank statements that push the conversation in fruitful – if sometimes uncomfortable – directions. It’s my job as the facilitator to ensure the safety of participants, and with a solid set of group agreements we craft together on Day One, this isn’t as hard as some might think. If folks are curious to learn more about the specifics of facilitating these activities, please feel free to comment here or message me elsewhere. I’m always happy to share materials.
Because I’m employed full-time and don’t need additional sources of income, I have the luxury of not having to worry about intellectual property rights and am happy to see these tools make their way into the lives of more young people who are yearning for alternative, nuanced, and critical ways to think through questions of identity, self-determination, and belonging.
https://outsidein.education/tag/trans/
It is well known that the frequency of sounds can have a powerful effect on the mind and body. In fact, it's a form of self-healing we all practice instinctively, whether it's by listening to our favorite song to put us in a good mood, shutting a window to exclude the noise of traffic outside, or whistling or humming while we work. Our bodies, minds, and spirits are tuned for sound. But what is that sound tuned to? If you're like most people, you know that do re mi are the notes in an octave, but have you thought much about how those notes are defined and standardized? It turns out to be a fascinating story, with big implications. Throughout history, all over the world, people have made music. Every culture has ancient songs, dances, and instruments that are deeply embedded in their society and traditions. For almost all of human history, songs and instruments were made locally, for their own audiences, and for sharing with each other. However, in the 1600s in Europe, composers and musicians began to create music that was meant to be shared more widely. The same pieces of music were meant to be played in different cities, by different musicians, with different singers. While staff notation for writing music had been invented in the 11th century, instruments were tuned by ear. While everyone could agree, for example, that the note intended to be played was "C", there wasn't common agreement about what "C" was, exactly. In 1711, the tuning fork was invented, and it brought consistency to tuning among instruments and groups of instruments, but individual tuning forks themselves varied widely in pitch. This variation was frustrating for composers, who intended for their music to sound a certain way. It was also frustrating for singers, who, as stringed instruments were tuned higher and higher, had to strain their voices to keep up. A standard was needed.
In 1834, a self-taught musicologist named Johann Heinrich Scheibler invented a device for accurately measuring pitch, and, based on the science of his time, recommended a pitch standard of 440Hz. In other words, now that we had the technology to measure it, the A above middle C should always vibrate at 440Hz, everywhere in the world. This standard was adopted in Germany, although the French used a pitch standard of 435Hz. In 1926, America adopted the 440Hz standard, and began manufacturing instruments accordingly. In 1955, the International Organization for Standardization named 440Hz as the pitch standard, and it has been so ever since (at least, in Western music). The adoption of 440Hz has never been without controversy. The relationship between music and mathematics has been understood since ancient times, and many scientists, musicians, and philosophers prefer a tuning that is more mathematically harmonious. When a string is halved and plucked, it produces a pitch that vibrates at twice the frequency and sounds an octave higher, producing a series of mathematical ratios. Scientific pitch, for example, is favored because all octaves of C become an exact round number easily expressed in both binary and decimal systems, so scientific pitch advocates for A to be set at 430.54Hz. However, a much older standard dates back to the time of Pythagoras, and advocates for a frequency ratio based on 3:2. Pythagorean tuning is easiest to tune by ear, has pleasing proportions, and puts A at 432Hz. To this day, the Schiller Institute recommends 432Hz for its consistency with the Pythagorean 27:16 ratio. The pitch of 432Hz is deeply significant, in ways that we are just beginning to understand. Music pitched to 432Hz seems to correlate with many features of our natural world, and to trigger emotional, physical, and mental responses in people who hear it.
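The arithmetic behind these competing standards is easy to check. Here is a minimal Python sketch (my own illustration, not from the original article) that derives the equal-tempered A from scientific pitch's C = 256Hz, derives the Pythagorean 27:16 A, and measures the gap between 432Hz and 440Hz in cents, the unit tuners use (100 cents per equal-tempered semitone):

```python
import math

C = 256.0  # scientific pitch: every octave of C is a power of two (Hz)

# Equal temperament: A sits 9 semitones above C, each semitone a factor of 2^(1/12)
a_equal = C * 2 ** (9 / 12)

# Pythagorean tuning: A is the 27:16 ratio above C
a_pythagorean = C * 27 / 16

# Deviation of 432 Hz from the 440 Hz standard, in cents
cents = 1200 * math.log2(432 / 440)

print(f"Equal-tempered A from C=256: {a_equal:.2f} Hz")      # ~430.54
print(f"Pythagorean A from C=256:    {a_pythagorean:.1f} Hz")  # 432.0
print(f"432 Hz relative to 440 Hz:   {cents:.1f} cents")       # ~ -31.8
```

So the 432Hz A sits a bit under a third of an equal-tempered semitone below the modern standard.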
Music pitched at 432Hz is said to have a number of striking features. The difference between 440Hz and 432Hz may be small, so small that it is barely noticeable. And yet it seems that our hearts, minds, and bodies notice it very much indeed.
https://www.phidle.com/blogs/sacred-geometry/432hz-uncovering-the-link-between-music-mathematics-nature-and-you
A person who has the ability to tell if a given single pitch is slightly out of tune, and even to sing or name any tone out of the blue without a reference note, is said to have perfect pitch. (The naming is learned; the perception of pitch color differences is instant and automatic after it is learned.) I’ve seen and have performed with many amazing musicians, but there are two people who stand out in my mind as having something extra. They both sang with an effortless exactness of pitch which I could hear, but could not duplicate, even with my years of experience. Playing a few gigs with Jason Mraz freaked me out. When I first heard Bobby McFerrin as a kid at a jazz festival, I knew there was something unique about him, but perhaps it was just his ability to mimic instruments? Years later, when I heard Jason Mraz raw, live, and unprocessed by a recording studio, I knew what it was about Bobby McFerrin which had fascinated me as a kid: perfect pitch. This is something subtle, something you can’t get from listening to records, especially these days when most studios have pitch correction. At one point after a show Jason came out to smoke and stand with us in the front of the True Love Coffee Shop in Sacramento, CA. He then sang a few bars of one of my songs he liked a cappella. This kid singing MY song sounded better than me. (And I’m no slouch!) It wasn’t just the type of voice he had, it was the exactness of the pitches he sang from memory. It was like meeting Mozart or something. Mraz heard with such exactness. I was amazed. (Oh, and Toca Rivera was there harmonizing too for a minute which was so awesome.) Anyway, I’m now taking some time away from performing as a musician to see if it is possible to acquire perfect pitch as an adult. From what I’ve heard, Jason grew up singing in church choirs and I figure that’s where he tuned his ear and ability to harmonize.
I grew up around music too, but we always just tuned our guitars to themselves, never to a tuning fork or a tuner. As a result, I have great relative pitch, but I can’t sing an “A” note out of the blue. I once drove a girlfriend crazy by playing an “A” reference tone over and over for a week and trying to sing it out of the blue. I failed. This is NOT the way to obtain perfect pitch. Don’t try to memorize a tone this way. Now I’m trying some new exercises which I’ll share here. There is hope. In the last few weeks I’ve started to see progress. You have to go slow and learn to hear in a new way. Each note has a unique “color” or character about it, so I’m told.

EXERCISE 1. Perfect Pitch Training For Guitarists. Learn to sing the open strings. Learn the strings as a new song. Sing the tune and say the notes. Sing it to yourself during the day. Sing low to high E A D G B E, and then high to low E B G D A E. Get to the point where you can do this when given either the high or low E string as a reference.

EXERCISE 1a. Wait 5 or 6 seconds. Blindly play any note on the guitar (use the wrong hand and locate a string without knowing by touch which string it is) and, with that as a reference, sing the other strings using the above melody which you have learned.

EXERCISE 1b. Play any open string on the guitar blindly (see above) and name that note. Vary the amount of silent time between tests. Spend a few days working on this until you can name the strings correctly 30 times in a row.
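If you want reference frequencies to check these exercises against (with a tone generator app, say), the open strings in standard tuning can be derived from A4 = 440Hz by counting equal-tempered semitones. The semitone offsets below are standard, but treat the script itself as my own illustration rather than part of the exercises:

```python
A4 = 440.0  # standard concert pitch (Hz)

# Semitone offsets of the open strings from A4 (negative = below A4)
open_strings = {"E2": -29, "A2": -24, "D3": -19, "G3": -14, "B3": -10, "E4": -5}

for name, semitones in open_strings.items():
    freq = A4 * 2 ** (semitones / 12)  # each semitone is a factor of 2^(1/12)
    print(f"{name}: {freq:.2f} Hz")
```

The open A string lands at exactly 110Hz (two octaves below A4), which is why it is the traditional reference string.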
http://xenophilia.com/the-quest-for-perfect-pitch-can-it-be-learned/
‘I Have Perfect Pitch’

I have perfect pitch. It’s the rare ability that allows me to put a note to every sound that I hear, and I mean every sound. For example, I will hear a bird chirp and think to myself, “Oh, that’s G sharp!” My perfect pitch is also a great conversation starter, but this fact typically only impresses musicians. Those who aren’t familiar with the concept usually give me one of two responses: “You can sing really well, right?” or “Oh, I love that movie!” Essentially, few people know what I’m talking about. So I went to my former “Introduction to Musical Literature” (Music 10) professor, Carol A. Hess, to talk about perfect pitch. Though she’s well aware of the phenomenon, she doesn’t have it herself. “The ear is fundamental in music,” Hess emphasized. “If you have a good one, you’re that much further ahead.” The response I get the most when I let people know that I have perfect pitch is a question about whether I’m a good singer. The answer to that is subjective. If you judge me solely off of my atrocious range, then I’m a horrible singer. Despite this, I’ll always stay in tune and hit all of the right notes when I croak my way through a song. People also ask how I “got” perfect pitch, to which I say that it’s something I’ve always had. Most researchers believe that perfect pitch is an inherent ability and that it cannot be learned. Hess utilized many musical memorization techniques throughout her career, but never developed full-fledged perfect pitch. “When someone plays random notes at the piano, I’m usually a half or whole step too high or too low,” she said. Fortunately, there are other ways to develop a better sense of pitch. With interval training and musical dictation exercises, “relative pitch” can be learned. Musicians without perfect pitch use relative pitch to distinguish the intervals between pitches and identify different chords. “Start by listening to simple melodies and writing them down.
That way you can figure out what the relationships between the notes are,” Hess suggested. Now, you might be thinking, “Perfect pitch seems like such a gift. You’re so lucky to have it!” But you may be surprised to learn that there are, indeed, downsides to this unique ability. My brain processes every sound I hear, from pop tunes to everyday sounds like car horns. I start to feel like I am constantly immersed in notes, which isn’t always a good thing. As a result, I come across as overly sensitive in this regard, but sometimes I wish people could hear from my point of view. Still, I wouldn’t want to give up this ability. Perfect pitch has reinforced my appreciation and understanding of music, and I truly couldn’t imagine life without it.
https://magazine.ucdavis.edu/i-have-perfect-pitch/
...How do we tune our guitars, why they are so hard to keep in tune, and what does it mean to be in tune? I went to see the acoustic super-group Goat Rodeo, consisting of Edgar Meyer (bass & piano), Stuart Duncan (fiddle), Chris Thile (mandolin, fiddle & guitar) and Yo Yo Ma (cello). They are all astounding musicians, and their repertoire for this group is very complex instrumental “chamber music,” mostly things they created for their group. When Thile played the guitar on one song, it was quite out of tune. It was the only thing in the entire evening that wasn’t in tune, and it jumped out and got me thinking about why the guitar always seems to be the instrument that's least in tune. Thile seems to be a reincarnation of Mozart, so I couldn’t blame him for not knowing better. I also know there was nothing wrong with the guitar, since they made a point to tell us that it was an Olson guitar (one of the most respected modern luthiers) that was borrowed from James Taylor. All of us have likely seen a guitar player wrestling with being out of tune, though I suspect that few of us know the surprising number of non-obvious reasons why this happens. Tuning is actually a very complex subject, and not just a matter of the musician being incompetent, drunk, careless or tired. Since I have never seen a thorough and detailed discussion of why the subject is so complicated, I’m going to offer one here, and I apologize up front if it's more than you wanted to know. Now that electronic tuners are so inexpensive and accurate, it can be argued that there is no longer any need for anyone to learn anything about how to get in tune, or to even bother to understand the surprising complexity that underlies the seemingly routine act of tuning. I started playing guitar before there were electronic tuners, and the first thing to learn was how to get in tune. Fifty years later I'm still learning how. 
Tuning is one of those disciplines where science and art meet head-on, and neither is supreme. Only a few people are trusted to tune the pianos of the great pianists, and though electronic equipment has drastically improved, the methods of tuning used at the highest levels of music require human judgement and decisions to be made that are beyond the realm of just measurement and science. You could tune a piano to a stroboscope, but few good pianists would enjoy playing it. The whole idea of what it means to be “in tune” is foggier and more imprecise than bystanders or beginners might imagine. Varying cultures of the world also have different ideas of what it means to be in tune, and the history of musical pitch is itself an extremely interesting and convoluted subject. Personal taste is even a factor. A certain old bluesman’s guitar always sounds out of tune to me on his recordings, but he reportedly liked it that way and would “fix” it when someone tried to “tune” it when he wasn’t looking. Maybe if we understand what underlies the seemingly simple act of tuning we might feel better, or blame ourselves less, next time we are struggling to get in tune. One set of issues in guitar tuning involves the physical things like the strings, the instrument and the environment. A second set of concerns involves human issues of hearing and judgement, and there is a third and very mysterious set of issues that involve the music itself. I have never seen a guitar instruction resource anywhere that even started to go down this slippery slope, and they all just say "tune your guitar" as if it were a simple and straightforward thing. It can be the simple act it seems to be, though bubbling just below the surface are a startling number of mysterious issues.
Not only is the tuning of musical instruments scientifically complex, but social and political questions appear immediately about who is in charge of tuning and who notices or fixes it if something is determined to be out of tune. There aren’t any “tuning police,” and I have told students and audiences for years that the worse ear you have, the more music you can enjoy. This will take a while, so settle into a comfortable chair... There are two types of being "in tune:"
• The strings of a guitar can be in "relative tune," so that the notes generated on the various strings are consonant with each other.
• Whether or not those notes match up to the "absolute tuning" standard that civilization has decreed we should tune to is another matter.
If you are playing alone, it doesn't matter that much if your A string is not tuned to 110 Hertz exactly, as long as you are in relative tune with yourself. If you are in a band, it does matter. Instruments and strings are manufactured for the tension of standard pitch, and tuning to a very different pitch can cause strings or instruments to underperform or to break. If you are singing with your guitar, then your vocal range becomes part of the situation, and singing songs by other artists might cause you to keep your guitar tuned to standard pitch, or perhaps not to. For about 30 years I always slack-tuned my guitar down 1 fret to Eb because I liked the extra resonance, I liked singing in Eb more than in E, and I always performed solo, so it didn't matter if I was playing in uncommon keys like Ab or C#. It did make it hard to jam along with other people, since I had to retune or put a capo on the first fret, which is disorienting.

“STANDARD” PITCH

Electronic tuners may tell you that a perfect A is 440 Hz (cycles per second), but they do not tell you that this number was essentially invented out of thin air at the International Standards Association meeting in London in 1939 after a lot of bickering and compromise.
The system based on A = 440 is usually called “Standard Pitch” or “Concert Pitch.” It evolved to solve real problems, and is a triumph of civilization over barbarism. The choice of 440 cycles does not seem to be connected with any scientific, religious or other phenomenon, and except for the fact that other people use it, there is no real reason why you have to tune to standard pitch if you are not performing with anyone else. Over the centuries, from town to town and country to country, the pitches of instruments have varied all over the place, with the pitch of A ranging from over 500 cycles down to below 400. This especially challenged traveling singers, who had to sing along with whatever pipe organs, pianos and horns they encountered. For most of the human time on earth, there was no way to establish pitch, though new research and theories at megalithic sites like Stonehenge indicate that the stones they used were an unusual kind that were musically resonant. They may have been tuned to pitches and struck like massive stone bells. Do a web search for something like “Stonehenge musical stones” and you’ll find all sorts of tantalizing information, ideas and theories. Scientists now think that the best cave paintings are in the locations in the caves where the acoustics are the best, which makes total sense. Of course primitive people paid very close attention to sound. The Aurignacian (and possibly Neanderthal) flutes they dug up have holes drilled in very precise places, and show that Paleolithic humans knew about music very long ago. In 1834 scientists first developed the ability to measure musical pitches and assign numerical frequencies to them. The tuning fork was invented earlier, by Englishman John Shore in 1711. Its two tines cleverly cancel out overtones and generate a pure pitch. Tuning forks were hard to obtain and very valuable, and were kept like precious jewelry by their owners.
Museums have a number of tuning forks that belonged to celebrities of the past, and it is interesting to measure what they are calibrated to. Mozart’s tuning fork reportedly was A = 421.6 cycles, and Handel’s quite close at 422.5. The first attempt to standardize pitch was in 1859, when the French government decreed that a musical A be 435 cycles at 15 degrees centigrade (59 Fahrenheit), reportedly to solve problems with military bands. In 1889 conventioneers decided on 435.4 cycles for A, then changed their minds and kicked it up to 440 Hz several decades later, where it has remained ever since. Some “early music” groups tune to A = 415 cycles, since apparently a lot of music was written centuries ago for this pitch range. (https://www.piano-tuners.org/history/pitch.html has more than you’ll ever need to know about the history of pitch.)

REFERENCE NOTES

Violin players still typically tune one string to a reference note, and then use their ears to tune the other 3 strings to that one. There are a number of systems of guitar tuning that players used to always learn, which are strategies for tuning 5 strings to a reference note.

Tuning forks

Tuning forks were a huge breakthrough when they were invented, and they marked the beginning of establishing pitch standards. Most of those made currently generate the note A = 440 cycles, though I used to have a larger guitar fork that gave the pitch for the high E string of the guitar. They work best when you place the ball end on the bridge of your instrument, though I always liked to hold them in my teeth, so I had both hands free to tune. Get them ringing before you put them in your teeth to save on dental bills.

Pitch pipes

Guitars were commonly sold with a "pitch pipe" years ago, which sounded a lot like a goose call to me, but they were a little better than just having a tuning fork to generate one note.
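To put the historical pitches quoted above in modern terms, it helps to convert them to cents relative to A = 440Hz (1200 cents per octave, 100 per equal-tempered semitone). A quick Python sketch of my own, using the figures reported in the text:

```python
import math

def cents_from_440(freq_hz):
    """Deviation of a pitch from A = 440 Hz, in cents (negative = flat)."""
    return 1200 * math.log2(freq_hz / 440)

historical = {
    "Mozart's fork": 421.6,
    "Handel's fork": 422.5,
    "French 1859 standard": 435.0,
    "Early-music A": 415.0,
}

for name, freq in historical.items():
    print(f"{name}: {freq} Hz = {cents_from_440(freq):+.0f} cents")
```

Mozart's fork comes out roughly three-quarters of a semitone flat of modern pitch, and the early-music A = 415 is almost exactly a full semitone flat, which is why Baroque ensembles tuned to 415 sound a half step lower than modern ones.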
I can't find my old one that had 6 little pipes, one for each of the guitar strings. They helped train your ear as you compared notes, and they at least could get you close to where you needed to be. If you were wildly out of tune they were often more helpful than an electronic tuner, which can sometimes send you off toward notes that are 12 frets too high or low from where you want to be.

Electronic tuners

Most guitar players now use an electronic tuner or smartphone app for tuning. I recommend getting a "chromatic tuner," which is capable of tuning to any note. Cheaper tuners and apps that are sometimes called "guitar tuners" just help you tune the 6 open strings, and don't help much if you are in a non-standard tuning, or using a capo. Most tuners are chromatic now, but not all. Many tuners just show a green light for in tune and a red light for out of tune, though more commonly now you'll also find a yellow light for "almost there." We often need more information than that. Learn to observe the songs or situations where the music sounds in or out of tune, and use the tuner to measure what your artistic self likes or dislikes, so you can re-create it whenever you want by using the tuner as a measuring tool. If you consistently don’t like what your tuner tells you, you might try a different brand. I like the TU-12 tuner by Boss. I’ve used it to tune on most of my recordings for over 35 years, and I don’t ever listen back to the recordings now and think they are out of tune.

Problems with electronic tuners

You might as well know that you don’t always get perfect or consistent results even when you tune an instrument with an electronic tuner. Not all electronic tuners are identical, and one brand of tuner might say you are in tune while another might disagree slightly. This may be a matter of how “fussy” and precise the tuner is, and one tuner may give something a passing grade while another might flunk it.
It may also have to do with how the tuner's microphone "hears" (or doesn't hear) the various overtones of the string, or how its algorithms process them. I prefer tuners that have a needle that shows you what is going on, and do not just have the colored lights. With a particular instrument, two tuners might both give readings that a good musician’s ear would disagree slightly with. Some brands of tuners have indicator needles or lights that bounce around a lot, get confused by the overtones from bass strings, or even change their mind as a note decays, leaving you to do some guessing. We learn how to use our tuners, and not just do everything they say. Sometimes we learn that it can be best to tune an open string, and at other times we might get better results using a fretted pitch or a harmonic. Even if you don't have a good ear, you can use an electronic tuner to observe and measure what is going on, and find your way, though you will need a good quality chromatic tuner to do it.

Phone apps

These are convenient, and you're likely to have your tuner with you, but you can't use them effectively on stage, they only work by listening with the microphone, and a phone is a pretty valuable thing to be waving around constantly at a music festival, party or bonfire. A $20 electronic tuner might make more sense to use than a $700 phone. Many of us musicians like the Cleartone app that costs around $5. It has a sensitivity preference and seems to work better on "less sensitive," so the "needle" doesn't bounce around as much, though I still prefer the Boss TU-12. Many guitars now have tuners built into them, and many of those are now pretty good, though they run off a 9 volt battery that powers the built-in guitar pickup system, and that battery can be dead when you need to get in tune. The best ones not only give accurate readings, but also turn off after a minute or so and stop using up the battery when you stop using them.
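Under the hood, what a chromatic tuner's needle reports can be sketched in a few lines: find the nearest equal-tempered note to the detected frequency, then express the leftover error in cents. This toy Python version is my own illustration, not any manufacturer's algorithm; real tuners add pitch detection and smoothing on top, which are the hard parts:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Return (note name with octave, cents sharp/flat) for a detected frequency."""
    semis = 12 * math.log2(freq_hz / a4)  # fractional semitones from A4
    nearest = round(semis)
    cents = 100 * (semis - nearest)       # leftover error, roughly -50..+50 cents
    midi = 69 + nearest                   # A4 is MIDI note 69
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents

print(nearest_note(446.0))  # A4, a bit over 20 cents sharp
print(nearest_note(82.0))   # E2, slightly flat
```

This also shows why a confused tuner can "send you off" toward the wrong octave: the same note name repeats every 12 semitones, and the tuner only reports the nearest candidate it detects.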
The so-called "Snark" tuners are very popular. They clip onto the headstock, so you don't need a pickup built into your guitar or a cord to plug into them, and they are unaffected by wind noise or conversation around you. Tuners that rely on a microphone to listen to the notes need quiet to work right. I have a very nice tuner built into a Planet Waves capo. It's a good capo and a good tuner, though it is a bit large, and expensive.

Other sources

• If there is a piano nearby, chances are it is at least close to proper pitch.
• You can of course tune to another guitar. A great trick I learned long ago is to have the other player play minor chords for you to tune to. Instead of the other player sounding their E string for you to match, have them strum an E minor chord. For some reason it works much better than if they play an E major chord or just the single string note. Then have them play Am, Gm, Dm and Bm chords so you can tune the other 4 strings.
• We have a foghorn here on the coast of Maine that is a lovely A, but not on sunny days.
• My old Chevy van had a horn that sounded a near-perfect A.
• Dial tone on a phone is a perfect F, which is a little hard to use since no common stringed instruments have an F string.

STRINGS

My best guess is that the strings on Thile's guitar had been recently changed and had stretched since they were tuned. Instrument strings are also not perfect things, and though most new strings behave properly once they have stretched a little, there is always a possibility that a string might be defective or damaged. A string can become faulty or "sour" and fail to produce correct pitches, even though there may be nothing visibly wrong with it. Old strings are generally done stretching, and the problems they cause may give the instrument poor tone, or certain overtones may not be in tune, but I doubt that old strings are what caused Thile’s problems.
If you find yourself unable to get your instrument in tune, replacing strings is the first thing to do before you look for deeper causes. Strings may also be improperly attached at either end, which can allow them to slip unpredictably. Strings that have low tension on them can sound permanently unmusical and out of tune. This is a common issue with smaller-scale children’s instruments, since in order to tune the shorter strings to standard pitch they need to have significantly less tension, and your natural inclination is to put thinner strings on an instrument for a child. Tuning smaller guitars 2 or 3 frets sharp or using thicker strings can solve this problem, but these fixes in turn cause other problems. (The guitar is no longer tuned to standard pitch, and the thicker or tighter strings are harder to press down, which is not ideal for children. I have written extensively on children's guitar issues.)

THE INSTRUMENT

It is possible for an instrument to not be capable of producing correct musical pitches over its fingerboard. The positions of the nut, saddle and frets collectively determine the intonation. Luthiers develop a formula for where they place the parts of the fingerboard that establish proper intonation, and manufacturers generally work out these tolerances to the point that you needn’t worry. Modern guitars are generally made more precisely than instruments of the past, and computer and laser-controlled cutters now often put the parts in place to an accuracy of ten-thousandths of an inch. Even inexpensive instruments now generally play in pretty good tune, and are often better than relatively expensive instruments of 50 or more years ago. Since errors are proportionately larger, shorter-scale instruments such as guitars for children should be made more precisely than full-size guitars, but usually are not. Don’t expect a $99 children’s guitar to play in perfect tune, but don't be surprised if it does.
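The low-tension problem on short-scale children's guitars mentioned above follows from Mersenne's law: for an ideal string, tension T = μ(2Lf)², where μ is the string's mass per unit length, L the vibrating length, and f the pitch. At a fixed pitch and gauge, tension scales with the square of the scale length, so a shorter guitar is noticeably slacker. A rough Python sketch with illustrative numbers; the μ value and scale lengths are assumed, typical figures, not measurements from the text:

```python
def string_tension(mu_kg_per_m, scale_m, freq_hz):
    """Mersenne's law: tension (newtons) of an ideal vibrating string."""
    return mu_kg_per_m * (2 * scale_m * freq_hz) ** 2

MU = 0.0004   # kg/m, assumed typical plain-steel high-E string
E4 = 329.63   # Hz, high E at standard pitch

full_size = string_tension(MU, 0.648, E4)      # ~25.5 inch scale (assumed)
three_quarter = string_tension(MU, 0.578, E4)  # ~22.75 inch scale (assumed)

print(f"Full-size:    {full_size:.1f} N")
print(f"3/4 scale:    {three_quarter:.1f} N")
print(f"Tension drop: {100 * (1 - three_quarter / full_size):.0f}%")
```

With these assumed numbers the shorter scale carries about 20% less tension at the same pitch, which is why small guitars can sound flabby unless they are strung heavier or tuned a few frets sharp, exactly the trade-off described above.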
Guitars can have or develop improper intonation, which means that they do not produce correct pitches over the fretboard. Open strings might be in tune but a fretted string might not be. A particular string may be "off," or notes higher up the fingerboard may be progressively out of tune. A common problem with guitars is caused by string tension pulling the neck slightly upward. This causes the strings to sit higher above the fingerboard than they should. In my experience the string height (known as the "action") of my acoustic guitars goes up in the summer and down in the winter, presumably due mostly to humidity. If the action is too high, it can cause the strings to stretch a little (go sharp) as they are pressed down, especially on the higher frets where they are highest above the fingerboard. The thicker the strings you use, the more stretching will occur in this situation. Most guitars made in the last 40 years have a threaded "truss rod" inside the neck that can be adjusted to increase or decrease the amount of "pulling" that the neck does when the strings are tightened. Optimizing this is part of what is called "setting up" a guitar. If you put on a different gauge of strings than the instrument was designed for, it can slightly aggravate tuning issues. The ends of the strings rest in grooves cut in the saddle and the nut, and thinner strings might sit too deeply in the nut grooves, while thicker ones might ride slightly higher above the fretboard. Nut slots that are too narrow for the strings can also pinch a string and cause it to stick, and perhaps jump up or down unpredictably when you operate the tuning pegs. A tiny amount of dry lubricant like graphite powder can remedy this. It's a simple but delicate job to widen the notch cut in the nut, though without the right tools and knowledge you can cause other problems. A strap attached to the headstock can pull on the neck and put an instrument out of tune.
Most professionals attach both ends of the strap to the body, not to the neck or headstock as Woody Guthrie did. The gears and mechanisms of the tuning machines can also be flawed, loose or worn. They might slip a little at times and cause chronic random out-of-tuneness. Tighten all the screws and look for obvious problems. It's not a big deal to replace a tuning machine, though it hurts the resale value of vintage guitars if they don't have original parts. Frets can be worn down by a lot of use. This isn't uncommon, and can cause a string to be stopped at an imperfect place, though these errors are quite small. Fret wear usually just causes strings to buzz or rattle a little. The neck angle can be wrong. This can easily be caused by string tension, and can cause various intonation problems. It may mean that to prevent rattling or buzzing of the strings, the action has to be higher than it optimally should be. This causes sharping when the strings are pressed down, since they stretch a little, especially toward the middle of the string where it is highest above the fretboard. Correcting the neck angle of a guitar is called a "neck reset" and can be a quite expensive repair on some brands and models. A number of leading brands of guitar (pioneered by Taylor Guitars) make the necks easy to remove and adjust to solve this chronic guitar problem.

ENVIRONMENT

Changes in humidity or temperature, such as heating or cooling systems in a building, or sun and clouds alternating outdoors, can make it hard to keep a stringed instrument in tune, as can moving the instrument in and out of cars and buildings that have different temperatures and humidities. They are the other likely culprit in putting Chris Thile's guitar out of tune. His guitar likely went out of tune because it contains a larger quantity of wood than a mandolin or fiddle, making it more susceptible to environmental changes.
That doesn’t explain why Yo Yo Ma’s cello sounded fine, but with a fretless instrument a good musician can constantly make pitch corrections that a guitarist can't. Changes in the environment can put you out of tune. Your instrument might sit for weeks at home where it stabilizes, having absorbed a certain amount of moisture. When you take it outside, in a car, and in and out of a heated or cooled building, it can take quite some time to settle down. When sunlight, stage lighting, air currents and various thermal and humidity changes impact an instrument, the hardwood, softwood, bone, metal, seashell and plastic parts of the instrument, tuners and strings themselves will expand, contract and absorb moisture differently, causing all manner of tuning problems. When I play a concert, I try to let my instruments sit as long as possible in the place where they are going to be played before I tune them. Often I will get to a gig, and notice that my autoharp (with 36 strings) is out of tune. If I tune it right away, it will almost certainly go out of tune again by showtime, and I am usually better off if I wait until right before show time to tune it, after it has adjusted somewhat. I prefer not to tune in the backstage or "green room" because it usually has a different climate than the stage. Playing in sunlight or in front of a heat source like a fire can wreak havoc with tuning. The front of the instrument (or even of the strings) may be in the sun or heat with the back in the shade, or even worse, there may be intermittent lighting or heating that causes non-stop fluctuations, such as what happens as clouds pass by. If I tune my autoharp for an outdoor gig and then the sun falls directly on it an hour later toward the end of the show, I’m in trouble, and I even have to plan for the sunlight conditions for the near future. Hot and wet environments can cause wood to swell and expand, while cool and dry environments cause wood to shrink. 
Because the different materials that make up our instruments and strings respond differently to environmental factors, the results can be unpredictable, and it is not a simple matter of knowing that a particular thing happens on a particular kind of weather day. Theaters are sometimes cold when the audience arrives but warm up quickly, since each audience member generates about 100 watts of heat. People also constantly exhale moisture, so a concert room may get significantly warmer and moister over the course of a 2 or 3-hour concert. Again, this is not a recipe for perfect tuning.

MOVING AIR

Try to play guitar and sing in front of a window fan that's running on high speed. It's awful: the sound is chopped up, as if run through a phase shifter or tremolo pedal. The same thing happens to a smaller degree when even slight air movements from heating and cooling systems are present. Performing outdoors on a windy day can be very problematic as the sound is blown around, and I have had awful experiences trying to hear properly when even a small fan is blowing on me. It's quite possible that forced-air heating systems that circulate air around a room can not only put your instrument out of tune, but also slightly interfere with your hearing. Some states even have laws requiring a certain number of cubic feet per minute of air to be circulated through a concert hall, and this can help or hinder performers who are trying to stay in tune.

CAPOS

Capos also usually throw guitars out of tune a little, especially by sharping the thicker strings. Unnecessarily over-tightening a capo can push you further out of tune, and possibly prevent you from making smooth tuning adjustments once it is in place. I use a lot of partial capos, and they wreak havoc on tuning, since you are mixing open strings together with strings clamped by the capo. Capo first, then tune.

60-CYCLE HUM

Our electric grid uses alternating current that cycles 60 times a second.
(This was an idea pioneered by Nikola Tesla, in opposition to Thomas Edison, who favored direct current.) Wires carrying the weaker signals from instruments or microphones need to be shielded to prevent interference from the much stronger electric current passing through nearby wires and electronic devices. A common symptom of this interference is a "60-cycle hum" that can permeate your sound system when you are performing and make it very hard for you to tune by ear. The faint hum of electrical motors and equipment can make musicians crazy, especially when they are trying to tune an electric guitar near an ice machine with a pinball machine and a neon sign nearby. 60 cycles is not a note in the modern pitch system (B is technically 61.7354 cycles and Bb is 58.2705), whereas a 55-cycle electrical grid (a perfect A) would spare thousands of musicians from drinking too much at their gigs to drown out the awful clashing of music and electrical current. I have been told that parts of Tokyo were once 55 cycles, and it appears that worldwide only on the island of Guam is the electric current still running at 55 cycles, so musicians there can tune to the ice machines and neon signs. If your guitar pickup is not working right or not shielded well, or if you are in an electrically "hostile" environment, surrounded by neon or fluorescent lights, it is also not uncommon to discover a 60-cycle hum mixed with the signal coming out of your instrument cable. This will compromise your situation if you plug into an electronic tuner, and may prevent you from getting an accurate reading. If you use a battery-powered clip-on style tuner on your instrument headstock you should be OK, since it works on vibrations in the instrument's body and not on an electrical signal in a wire. Instrument pickups that rely on magnets are generally more susceptible to interference problems than the types that use piezo-electric materials.
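The pitch arithmetic above is easy to check. A minimal sketch (the note names and the A4 = 440 Hz reference are the standard equal-tempered conventions, not anything specific to this article):

```python
import math

def nearest_note(freq, a4=440.0):
    """Return the nearest equal-tempered note name and the error in cents."""
    names = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
    # Semitones above (or below) A4, then round to the nearest tempered pitch
    semis = 12 * math.log2(freq / a4)
    nearest = round(semis)
    cents_off = 100 * (semis - nearest)
    note = names[(9 + nearest) % 12]  # A sits at index 9 in the names list
    return note, cents_off

print(nearest_note(60))  # 60 Hz mains hum: between Bb and B, about 49 cents flat of B
print(nearest_note(55))  # 55 Hz: exactly an A, 0 cents off
```

This confirms the numbers in the text: 60 Hz lands almost exactly halfway between Bb and B, which is why the hum clashes with everything, while 55 Hz is a true A.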
The actual physics of tuning is a surprisingly convoluted subject, though a bystander might think tuning a few strings to specified pitches is a routine and boring thing. It's worth knowing about the subtler points involved, largely so you won't always blame yourself or your instruments when you can't seem to get in tune. If you are going to be a musician you should know about them, and they are not part of any music curriculum I am aware of. Buckle your seatbelt. The whole idea of what it means to be "in tune" is much foggier and more imprecise than bystanders might imagine. Measuring and standardizing musical pitch are things we take for granted, as if they were set in stone, and the more you learn about the subject the less certain you may feel.

OVERTONES

All vibrating objects emit a series of overtones, generally fainter to our ears than the "fundamental," and the way those overtones are present or not present is what makes various instruments sound different even though they may be playing the same pitch. What seems to be a simple musical note is almost never that simple, unless it comes from a tuning fork or an electronically generated sine wave. If you vibrate a string or blow across a Coke bottle and produce almost any musical note on any instrument, nature automatically provides the Pythagorean "harmonic series" of overtones, consisting of other notes with frequencies at integer multiples of 2x, 3x, 4x etc. that are mixed together with the so-called "fundamental" tone. Many of the overtones are very consonant with the fundamental tone. Many people have some idea that this has to do with those mysterious things that music and math have in common. The reason the same pitched note sounds different when played on different instruments depends largely on how much of the various overtones are present in each note, though the "attack" of how the note is struck or created is also a vital component of what gives each instrument its signature sound.
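As a quick sketch of that harmonic series, here are the first few overtones of a guitar's low E string (82.41 Hz is the standard equal-tempered value; the interval labels follow the description above):

```python
fundamental = 82.41  # low E on a guitar, in Hz (standard equal-tempered value)

# Nature supplies overtones at integer multiples of the fundamental:
# 2x = octave, 3x = octave + fifth, 4x = two octaves,
# 5x = two octaves + major third, 6x = two octaves + fifth
series = {n: round(fundamental * n, 2) for n in range(1, 7)}
print(series)  # {1: 82.41, 2: 164.82, 3: 247.23, 4: 329.64, 5: 412.05, 6: 494.46}
```

Notice that the 4x overtone (329.64 Hz) is almost exactly the guitar's high E string: the overtones of the low strings land right in the range of the notes you are fretting.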
When "integer notes" occur in nature, acoustics people refer to them as Pythagorean pitches, because they are generated from the harmonic series of overtones so beloved by Pythagoras and other ancient Greeks. It's quite startling how much the people of antiquity seemed to understand these issues, but they were also predicting eclipses and accurately calculating the diameter of the Earth in quite ancient times. Some old cultures got very excited and even religious about the relationship between integers and music.

THE OCTAVE

The first overtone is the "octave," which essentially gives us a way to listen to the integer 2. Shorten a vibrating string to half its length and the pitch rises an octave; double its length and the pitch drops an octave. Each time you change the vibrating object by a factor of 2, the pitch changes by another octave. Reducing the volume of air in a Coke bottle by half when you blow across the opening raises the pitch an octave, and doubling the air volume lowers it an octave. Since A has been defined as 440 cycles per second, the other octaves of A occur at 220, 110, 55 and 27.5, and also at 880, 1760 etc. They all produce different octaves of what we call the musical note A. The octave is present in all musical systems on earth and is the fundamental building block of all music. A guitar spans about 4 octaves in pitch, and a piano close to 8.

THE FIFTH

If you shorten a string by 1/3 of its length, leaving 2/3 vibrating, you get a new note that is not the octave. It is generally called a musical "fifth," an interval Westerners know as do-re-mi-fa-SOL: the 5th note of the do-re-mi "major scale." (Shortening a string all the way to 1/3 of its length generates the 2nd overtone, a pitch that is an octave plus a fifth higher.) This is the next most important building block, and I believe it also appears in the music of all cultures.
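The string-length arithmetic above can be sketched in a few lines (lengths and frequencies are inversely proportional, holding tension constant):

```python
a4 = 440.0  # the reference A, in Hz

# Halving or doubling string length moves the pitch by octaves:
octaves_down = [a4 / 2**k for k in range(1, 5)]  # 220, 110, 55, 27.5
octaves_up = [a4 * 2**k for k in range(1, 3)]    # 880, 1760

# Leaving 2/3 of the string vibrating raises the pitch a fifth (3/2 ratio);
# leaving only 1/3 vibrating triples the frequency: an octave plus a fifth.
fifth = a4 * 3 / 2          # 660.0, the E a fifth above A4
octave_plus_fifth = a4 * 3  # 1320.0

print(octaves_down, octaves_up, fifth, octave_plus_fifth)
```

The factor-of-2 family gives every octave of A, and the factor-of-3 family gives the fifths; keeping those two families straight is what the next section is wrestling with.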
When this note occurs in nature, it is a Pythagorean interval, because it is embedded in the note you hear coming from the instrument, generated from the harmonic series of overtones. Shortening a string to 1/4, 1/5 or 1/6 of its length all make harmonious overtones. 1/7 is the first dissonant overtone, and this has contributed to the eternally shadowy reputation of the number 7. The next dissonant overtones are 11 and 13, which also have their own reputations.

TEMPERING

Unfortunately, things get messy really fast when you try to build a musical system with just integers and fractions. If you have a string tuned to a C and then shorten it by 1/3, it makes what we call a G note, a 5th above. If you then make another note a 5th above that, it is a D, and you can then make an A and an E (called a "circle of fifths"), and you can generate a whole series of notes this way. The bad news is that any series of notes generated by the integer 3 will never yield a note that is commensurate with the powers of 2 that generate the octaves: no power of 3 is ever equal to a power of 2. So if you started with an A at 440 cycles and started making a series of 5ths, you would keep getting new notes that were not octaves (multiples of 2) of your starting point. You'd get notes generated by multiples of 3, and you would never again land on an A, an octave (power of 2) multiple of 440. What does this mean? It means that a musical scale based on pure Pythagorean 5ths spirals off into oblivion without ever returning to its starting point, and you can't build much of a consistent musical landscape with only Pythagorean integer-generated pitches. Many things will sound great, but some things will sound really sour. The ancient Greeks even knew this, and developed their own tempering systems thousands of years ago to adjust the sacred integer-based musical pitches, solve the problems that arise, and make the music work better.
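That "spiral of fifths" can be put in numbers. Stacking twelve pure 3:2 fifths ought to come back to the starting note seven octaves up, but it overshoots by a small amount that theorists call the Pythagorean comma (a quick sketch):

```python
import math

# Twelve pure 3:2 fifths "should" equal seven octaves,
# but powers of 3 and powers of 2 never coincide:
twelve_fifths = (3 / 2) ** 12   # 129.746...
seven_octaves = 2 ** 7          # 128
comma = twelve_fifths / seven_octaves
print(comma)                    # ~1.01364, the "Pythagorean comma"
print(1200 * math.log2(comma))  # ~23.46 cents, about a quarter of a fret
```

Every tempering system described below is, one way or another, a scheme for hiding that quarter-fret of error somewhere the ear will tolerate it.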
The notes we tune to now are not those created by the hallowed integers that so intrigued the ancient Greeks. Temperings are systems of tuning that are intended to minimize the errors inherent in musical pitches. Guitar students had to learn to tune before there were cheap electronic tuners everywhere, and techniques were developed to help us get the instrument in tune by tuning one string to a tuning fork and then tuning the other 5 to it with various recipes. We have to learn to use a "tempered" musical system that is not based on integers, and repress our primal selves, who seem to want those mystical Pythagorean pitches. The human ear apparently likes Pythagorean intervals, and they sound "sweet" to us. If you bore holes in a flute using the Pythagorean math, some but not all melodies sound good. But if you start trying to form chords or play in different keys, things start to sound out of tune, and so-called "wolf tones" appear. Over the centuries, various "tempering" systems have evolved, including "just" and "meantone" tempering, which allow various compromises between these opposing forces. One of J.S. Bach's most important contributions to music was to celebrate and endorse a new system of tempering called "well-tempered," in which these little errors were better divided over the 12 notes of the Western scale. Bach wrote many works for the "well-tempered clavier," which was a new way of tuning keyboard instruments. He wrote music that changed keys repeatedly and never sounded greatly out of tune, which such music would do if played on an untempered instrument. His enthusiastic adoption of this system signaled the beginning of the modern era of musical tuning. "Early Music" groups actually make a point of tuning their instruments to differently tempered systems when playing music that was written during the "pre-well-tempered" era.
They feel that the composers chose their notes partly because of the subtle nuances of the tempering, and that playing music composed before well-tempering on well-tempered pianos does not sound the way it should. In an equal-tempered system, all notes are "equally and slightly" out of tune from their "natural" form. The octave is divided into 12 equal pieces, with adjacent notes standing in the ratio of the 12th root of two! So much for the integer beauty of the ancient Greeks, and welcome to listening to irrational numbers. (Be careful reading explanations of the differences between well-tempering and equal tempering, since they read like physics books and can overwhelm you with their complexity. It's probably more than any of us need to know.) What nobody ever tells troubadours is that the frets on the guitar are placed according to irrational numbers, not integers. The ratio of the vibrating string length at each guitar fret to that at the previous one is the very irrational 12th root of 2, though many luthiers develop and adopt their own systems of fret and bridge placement that are not purely mathematical and may deviate slightly from this. The confusing but interesting Buzz Feiten Tuning System that some people swear by is a recent attempt to address the fundamental out-of-tune-ness of guitars by slightly altering the placement of tuning, nut, saddle and even fret shape. Making a guitar with tempered frets is also a possibility, such as this one here (http://www.truetemperament.com/) that looks like a bad photograph but is actually somebody's solution to the guitar tuning problem. This is essentially what violin and cello players are doing all the time on their fretless fingerboards. Seeing a little more depth in the struggle to tune a guitar? Our innate inclination when tuning notes to each other, the way we often do on a guitar or a fiddle, is to play them together and see if they sound "in tune" with each other.
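Backing up to the fret math for a moment, the 12th-root-of-2 spacing is easy to sketch. Assuming a 25.5-inch scale length (a common steel-string figure, used here only as an example) and ignoring the compensation a real luthier would add:

```python
scale = 25.5              # scale length in inches (a common steel-string value)
semitone = 2 ** (1 / 12)  # the irrational ratio between adjacent frets

# Distance of each fret from the nut: the remaining vibrating length
# shrinks by a factor of 2^(1/12) per fret.
frets = [scale - scale / semitone**n for n in range(1, 13)]
print([round(f, 3) for f in frets])
# The 12th fret lands at half the scale length: the octave.
print(round(frets[11], 6))  # 12.75
```

Note that no two fret spacings are the same, and none of them is a nice fraction of the scale length: the whole fingerboard is built out of irrational numbers.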
The adjacent strings on a guitar are generally a musical 4th apart, and unfortunately almost no one can hear that interval accurately. We hear octaves and unisons best; the fifth less well but still reliably, which is what violin and cello players tune by. Almost no one can be trusted to properly tune two strings to each other by ear if they are not an octave or a fifth apart in pitch. Most guitarists know that guitars are often out of tune, and play a "test" chord when they first pick up a guitar. The choice of that chord can make a difference. If you are the type who tests with a G chord, and the guitar was tuned by someone who plays a D or an E to test the guitar, there will be disagreements. Some people even think that if you are a player with a good ear who likes the A chord, you'll always be a little unhappy unless you alter your guitar set-up. (My personal guess is that Buzz Feiten likes the A chord.) So if a beginner tunes the open guitar strings by ear, or tunes a G chord of a guitar till it sounds good, they will likely tune the B string (the musical 3rd of the chord) to a Pythagorean interval. Then when they play an E chord, the B string will sound very wrong. (Try it!) If you must tune by ear, try tuning the 2nd string so it sounds good in an A or D chord, and then it will be in between the pitches you would like when you tune to either a G or an E chord. I'm even thinking here that it's possible that the brands of guitars that have come to be associated with certain styles of music may have something to do with the intonation preferences of manufacturers that ended up slightly favoring certain musical keys. The fact that so many young musicians seem to be writing songs in G makes me wonder if there is a trend in guitar-making involved. Imperfect is the new perfect. We actually have to become "civilized" and learn to hear and accept the tempered notes.
The "correct" thing to do is to average out the error, and make it so that all pitches are "equally and slightly" out of tune. The truth is, our whole musical system is designed to be a little out of tune, and electronic tuners are manufactured to give us those notes. Studies have shown that singers and violin players, who can adjust their pitches in ways that guitar or piano players cannot, will usually choose to "sweeten" any intervals they can, so the civilizing process and acceptance of the tempering is only on the surface, and our inner primitive selves still yearn to hear those integers. This is true even among trained and disciplined musicians who are supposedly fully acclimated to the well-tempered world. If you think about it, all the instruments in an orchestra are capable of adjusting their pitch so that the musicians can temper their harmony to sound right. Pianos and guitars, which play fixed pitches, are not normal parts of orchestras, and can inject intonation problems into the ensemble when they play with other instruments or each other. (I once spent several hours trying to get my autoharp in tune with a piano. I wish I had filmed it, because I may never try it again.) As we start to play music and learn to hear differences between notes, our improving ability to listen runs headlong into this tempering business, and we can easily get confused right when we are learning to hear musical pitches better. As beginners, we might just play away and not give it a thought. There are a number of older recordings (I won't mention any names) I used to enjoy, but now that my ear has improved I am quite aware of instruments or singers being out of tune. Maybe my fond memories of campfire guitar are rooted in the fact that I had a less critical ear when I was a teenager. Now I dread the thought of trying to get a guitar to stay in tune at a campfire.
As we start to hear better, and realize we are out of tune, our newly acquired sense of "in-tuneness" lets us down, because our natural tendency to tune to untempered notes clashes with the rigid metal frets that chop these notes up into irrational 12th roots of two when we press them down to the fingerboard. We simultaneously have to learn to hear, and then to shun our instincts and embrace slightly out-of-tune notes. It is very hard, and all musicians go through it. The only thing that makes it easier is the electronic tuners that are almost universal now: they provide a standard and a benchmark that gets the job done without us having to know or deal with how messy everything really is. Once your ear is really trained, you will likely start noticing that electronic tuners are not always to be treated as omniscient or totally authoritarian. Like wild horses who are tamed, those of us who wish to embrace the Western musical world, with its scales, keys & chords, must as a necessary consequence abandon our primitive desire to hear perfect 5ths and perfect 3rds. (The difference between a tempered and an untempered 5th is small, about 1/50 of a fret, but the difference between a tempered and untempered 3rd is about 1/7 of a fret. Even beginners can hear that.) When you tune an instrument like a banjo, dulcimer or dobro that is commonly tuned to an open chord, or an autoharp that only plays in 1 or 2 keys, these issues assert themselves strongly. Tune your guitar to an open chord tuning, like a D chord (D-A-D-F#-A-D), and you get more trouble, since your ear REALLY wants to flat the 3rd of the chord (the 3rd string) closer to its Pythagorean resting place, but the frets relentlessly give out their irrational numbers and tempered pitches.
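Those fractions-of-a-fret figures are easy to verify. In the standard cents measure, one fret (a semitone) is 100 cents; a sketch:

```python
import math

def cents(ratio):
    """Size of an interval in cents (one fret/semitone = 100 cents)."""
    return 1200 * math.log2(ratio)

pure_fifth = cents(3 / 2)  # ~701.96 cents vs a tempered 700
pure_third = cents(5 / 4)  # ~386.31 cents vs a tempered 400
print(pure_fifth - 700)    # ~1.96 cents: roughly 1/50 of a fret
print(400 - pure_third)    # ~13.69 cents: roughly 1/7 of a fret
```

This is why the out-of-tune B string jumps out in a G or E chord: the major 3rd is the interval where the tempered and "natural" pitches disagree the most, by an audible 1/7 of a fret.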
Since banjos are usually tuned to an open G chord, the banjo world has recently dealt with this in an interesting way: most 5-string banjos now make the 2nd (B) string (the musical 3rd of the G chord, G-D-G-B-D, the instrument is usually tuned to) a slightly different length than the other strings. This is done with a "compensated" saddle and sometimes even a staggered nut. In the last 30 years or so, most fretted-instrument saddles have been "compensated," which means that some of the strings (commonly the 2nd and 3rd strings on guitar; the 2nd string as shown here) are slightly different in length than the others. The places where the strings cross the saddle do not form a straight line. If done correctly, this can make a huge difference in the instrument's intonation, and may be the largest reason why older guitars sound out of tune compared to modern ones. Guitar saddles are now made thicker than they were 50 years ago for this reason, and a good luthier can fine-tune the length of the strings in ways that can't be done with a thin saddle. It is a common alteration to older guitars to widen the saddle slot in the bridge (if there is room) and install a wider, compensated saddle.

MORE PROBLEMS WITH OVERTONES

Tuning gets yet another level messier when you consider more deeply the overtones an instrument is producing. This isn't fantasy, and it causes genuine though subtle problems. Since the natural Pythagorean harmonic overtones of a lower-pitched vibrating string are always there, they may overlap and clash with either higher-pitched open strings or the fretted, tempered pitches produced on the fretboard.
Any note generates a harmonic series of integer overtones, but lower-pitched notes are more problematic, since they generate overtones that are easier to hear than those made by higher-pitched strings, and those overtones land in the same audible pitch range as other notes you are playing on the instrument. When you play a low note on an instrument with a wide pitch range like a piano, its natural overtones clash with the tempered notes you are using in the higher registers. The guitar has enough pitch range for this to be a real problem. If you tune your guitar down low and plug it in on stage, you aggravate these issues by greatly amplifying the overtones from the bass strings. A surprisingly complex and fuzzy thing that some call "inharmonicity" and piano tuners often call "octave stretching" enters in. The more resonant and rich an instrument is, the more pronounced the overtones will be, and the more this accentuates the octave-stretching issue. The solution involves an artistic decision, based on the tastes of the person who is tuning and the persons who are going to play the instrument, to override the math and electronics and change things a little to make everything sound more musical. There is a complex world of things like "wide fifths" and "narrow fifths" and "French temperament ordinaire," which piano tuners have developed to reconcile the resonances of their instruments and the realities of the mathematics with our ears and artistic sensibilities. Bass players who borrow a tuner from a guitar player may face the "octave stretching" problem, because the untempered overtones from the bass (whose open strings are tuned to the tempered electronic tuner) can clash slightly with the tempered open strings and Pythagorean overtones of the guitar. Skilled players will sometimes make slight adjustments to their tuning in these situations if they hear something they don't like.
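One standard textbook model of this inharmonicity raises the nth partial of a stiff string above its ideal integer multiple by a factor of sqrt(1 + B·n²), where B is a small stiffness coefficient that depends on the string. A toy sketch with a made-up B value (not a measured one), just to show the shape of the effect:

```python
import math

def overtone_hz(n, f1, B=0.0004):
    """nth partial of a stiff string; B is a hypothetical stiffness coefficient."""
    return n * f1 * math.sqrt(1 + B * n * n)

f1 = 110.0  # an open A string
for n in (2, 4, 8):
    ideal = n * f1
    actual = overtone_hz(n, f1)
    sharp_by = 1200 * math.log2(actual / ideal)  # cents above the pure multiple
    print(n, round(actual, 2), f"+{sharp_by:.1f} cents")
```

The higher the partial, the sharper it runs, which is why a piano tuner ends up stretching the octaves: the upper notes are tuned to agree with the sharpened overtones of the lower ones, not with the pure mathematics.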
Some tuners are designed for bass players, and some have a switch to select guitar or bass to address this issue. Our ears can sometimes hear the slight discrepancies, but we don't all hear them the same. Beginners often don't hear these kinds of things at all, and then as their ears get better, they may start to notice that things are out of tune. What I do is spend a long time listening, playing and tuning, and I use a very accurate electronic tuner as a measuring device, not as a rigid guideline. Some of us will use an electronic tuner as a measuring tool only, so we can carefully and repeatedly tune certain things a little "out of tune" in order to sound more "in tune." Sometimes we'll do this just for a particular song, tuning or instrument. As an experienced artist you are allowed to override a tuner, and you can use it to purposely tune the B string a couple of cents flat for certain songs, for example, especially songs that involve an open tuning or a partial capo. A few great players like Steve Vai and Eddie Van Halen have posted comments about tuning, and mention that they sometimes make small adjustments to certain strings and ignore the advice of their electronic tuners, sometimes just for certain songs.

COMBINATION (GHOST) TONES

You may think I have gone off the deep end when I venture into this one, so I'll skim it, and I'll give you some searchable words so you can do your own internet research, which of course will leave you more confused than when you started. There are terms like "resultant tones," "combination tones," "multiphonics" and "subjective tones" that will scramble your brain a bit, since they involve auditory phenomena that are both imaginary and real, and might actually be dark demons confusing us when we are innocently trying to tune our guitars so we can sing a song.
And there may be days when we hear them better than other days. The sounds themselves are quite interesting, as is the history of when and how people discovered them. Now that we have wave-generating machines, oscilloscopes and all sorts of tools to analyze and generate sounds, we can only marvel at the things people figured out and were mystified by long ago, when they had very few tools. Play any note and, unless it comes from a tuning fork, it automatically generates the "harmonic series" of integer overtones. There are also little-known, hard-to-hear but quite real "sum tones" and "difference tones" that show up when two or more notes are played at the same time, and you can never discount the possibility that they are entering into our tuning considerations. Sum tones are higher-pitched, and are the sum of the frequencies of the two notes being played, but the difference tones, at the difference of the two frequencies, cause more trouble. They were first discovered a few centuries ago, and show up quite clearly when you play a high, loud double-stop (two notes at once) way up the neck on a violin, or especially when two flutes are playing together in higher registers. You'll hear the two notes being played, but there is also a much lower note, sometimes quite dissonant and often not musically related to the two notes that generate it. I have heard an arrangement of Jingle Bells where two flutes play strange, seemingly un-melodic notes, with the "ghost tones" eerily sounding the melody of the song. These difference tones don't actually show up on an oscilloscope; they are an auditory illusion created in the listener's ears and brain. They have nothing to do with us having two ears, or the other theories people have come up with. They don't actually exist in the physical world. Yet we sometimes hear them loud and clear, though they do sound like they are being created in our head, because they are.
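The arithmetic of these ghost tones is simple, even if the perception isn't. A sketch using two equal-tempered flute notes (A5 and E6, chosen here as an illustration). Notice also that the difference tones between consecutive harmonics of a single note all equal its fundamental, which is one way to see why the ear can supply a fundamental that isn't physically there:

```python
from functools import reduce
from math import gcd

# Two high flute notes (standard equal-tempered frequencies):
f1, f2 = 880.00, 1318.51  # A5 and E6
print(f2 + f1)  # sum tone, ~2198.5 Hz
print(f2 - f1)  # difference tone, ~438.5 Hz: a low A that neither flute is playing

# Harmonics of a 110 Hz string with the fundamental removed still share
# a common spacing of 110 Hz, the pitch the brain reports hearing:
harmonics = [220, 330, 440, 550]
print(reduce(gcd, harmonics))  # 110
```

The ~438.5 Hz difference tone is within 2 Hz of a true A (440), which is why a perfect fifth played by two flutes can conjure a phantom note an octave and a fifth below.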
The real "kicker" of difference tones is illustrated by an experiment that involves what is called the "missing fundamental." Imagine 10 pitch-generating machines that are collectively sounding 10 octaves of an A note all together. This would mean 55 Hz, 110, 220, 440, 880, 1760 and so on. Supposedly switching off any one of them does not affect the sound, since the others are creating sum and difference tones with each other. I read that all you need is any 3 consecutive octaves sounding, and what we humans hear is the same. (In this era of "fake news" it's hard not to be skeptical and to want to hear that with our own ears. Maybe there is a web site that does it for us somewhere.) A telephone line cannot transmit sounds lower than about 300 Hz, yet male voices sound fine even when they are pitched below that. Supposedly the low notes on our guitars and even the low string on a violin are "phantom fundamentals," and when we measure the sound with electronic measuring tools we find that a guitar string really does not put much energy into the fundamental, but we hear it as a low note. This is downright creepy, and the more you read about missing fundamentals the creepier it all gets, since it seems to say that much of what we hear is created in our brains. Here are some links you can explore, so you know I am not making this up. This kind of thing could be interfering with our ability to hear and our ability to tune. I tried to warn you that tuning is complicated, though if you made it this far I do salute your bravery and persistence.

THE POLITICS & PSYCHOLOGY OF TUNING

There are still more issues overshadowing the tuning issue, which involve emotions, psychology and even politics. Any time there are humans involved, human concerns come into play, and science and physics do not rule the roost. Different cultures of the world have different ideas of what it means to be in tune. The history of musical pitch itself is an extremely interesting and complex subject.
Human taste is even a factor. We have emotions, and a host of small but real physical and psychological things affect our abilities to decide on what we think is in tune or not in tune. Players from one part of the world or from a remote rural area may tune differently than those from a different country or an intellectual or urban culture. If you are playing strictly primitive, pentatonic music, you might need to make slightly different choices about tuning than someone who is playing a lot of extended jazz chords. Tuning is political also. If you are in a band or disagree with someone with more authority than you, then politics enter immediately. There may be deep and lasting disagreements that are not just a matter of one person being right and another person wrong. A well-known bluegrass band that I knew personally essentially broke up because the guitar player and the dobro player (who typically tune their instruments G-B-D-G-B-D, which is 1-3-5-1-3-5) could not agree on how the dobro player should tune his B strings. He wanted them flatted (“sweetened” is the word usually used for this) a little, and the guitar player wanted a more “equal-tempered” tuning. They couldn’t agree and their mutual frustration was a factor when they quit playing together. Piano tuners argue endlessly among themselves, and there is no widespread agreement on either how to tune a piano or what methods to use to achieve a desired system of tuning.

US

One of the biggest variables in the whole tuning situation is actually us, the humans. There seems to be yet another set of issues underlying the notion of what it means to be “in tune.” When I first started to play guitar, tuning wasn’t an issue, and I played happily as a 14-year-old kid, presumably wildly out of tune but not knowing or caring.
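The dobro disagreement has real numbers behind it. A short Python sketch (illustrative only, using the standard cents formula) shows why a "sweetened" B sits flatter than what an equal-tempered tuner recommends: the pure major third from the harmonic series, a 5:4 frequency ratio, is about 14 cents flatter than the equal-tempered third:

```python
# Compare a pure (just) major third to the equal-tempered one.
# A cent is 1/100 of an equal-tempered half step: cents = 1200 * log2(ratio).

from math import log2

def ratio_to_cents(ratio: float) -> float:
    """Convert a frequency ratio to its size in cents."""
    return 1200 * log2(ratio)

just_third = ratio_to_cents(5 / 4)  # pure 5:4 third from the harmonic series
equal_third = 400.0                 # four equal-tempered half steps
print(round(just_third, 1))                # 386.3
print(round(equal_third - just_third, 1))  # 13.7 cents -- the "sweetening"
```

Roughly 14 cents is well within what trained ears notice on a sustained chord, which is why neither player could simply ignore the other's preference.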
I have also had untold numbers of somewhat mysterious experiences that I could superstitiously attribute to “good tuning days” or “bad tuning days,” or maybe blame on some hex put on me by a witch doctor. I suspect that my musician friends would agree with me on this one. There are often times when everything sounds awful and out of tune, and I put the guitar down in frustration, thinking the strings are shot or that I need a different guitar. Then I’ll pick it up the next day in the same room, with the same strings, and play happily for hours and never touch the tuning pegs. We may listen to the same recording on different days and feel differently about how “in tune” it sounds, which means that the only possible cause for the discrepancy is us, the gloriously flawed humans who might actually have a better or fussier ear one day than another. That kind of knowledge is tricky to extract from a science experiment, since the enemy of all science is the flawed observer. Our ears are not precise scientific instruments, and our brains that process what our ears tell us do not behave entirely like machines. The term “psycho-acoustics” is often used to describe this shadowy world. Our emotional state, and probably our chemical and hormonal levels, also affect either our ability to hear or our ability to interpret what we hear, and any discussion of tuning that fails to mention this is being unreasonably shallow. There are days when we feel more religious or more energetic, and possibly there are some days when we have more precise musical ears than others. There is no certainty that our ears behave the same at all times. Our ear canals and sinus cavities are always doing different things as allergens and moist and dry air interact with them, and it is quite possible (and likely) that our whole hearing mechanism is a much less reliable system than we might at first imagine. We hear differently when we are tired than when rested.
When I work in the recording studio, I try not to work for more than about 4 hours, and other colleagues of mine, both musicians and sound engineers, have agreed that our “ears get tired.” Don’t count on those dramatic overnight marathons in the studio, because you can make big mistakes in listening or tuning if you’ve done it all day long. When we compare two pitches, we tend to hear the lower note as being correct, and want to always adjust the higher-pitched of the two notes. If it is the lower of the two notes that is incorrect, it’s harder to spot. This is why we usually tune starting at the bass end, and why, when a folksinger puts on a capo that pushes the bass strings a little sharp, it is easy to make the error of tuning the higher-pitched strings to the now-incorrect lower strings. Musicians jamming at a party or festival who don’t have electronic tuners tend to drift upward in pitch over time, and I think this is the explanation. The time delay between two notes we are comparing is vital also. If they are simultaneous it’s hard to compare them, and if there is too much time between them it’s also hard in a different way. We need to learn for ourselves how to play the lower of the two notes, leave a short pause, then play the upper note, then mentally vote on what to do to adjust the second note to match the first. If you’ve ever listened to a good musician tuning, this is what they usually do, because it best suits the way our ears work and don’t work. I think the answer is to gain as much experience as you can, be open-minded, always be as careful as you can, and make sure you use an electronic tuner or phone app that is accurate to within “one cent,” which means 1/100th of a half step (the pitch distance of one fret).
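To get a feel for how small "one cent" accuracy really is, here is a minimal sketch (B3 at 246.94 Hz assumes the standard A440 reference):

```python
# A cent is 1/100 of an equal-tempered half step, a ratio of 2**(1/1200).
# How far, in Hz, does tuning a B3 string "two cents flat" move it?

B3 = 246.94  # Hz, the B below middle C in standard A440 tuning

def shift_by_cents(freq: float, cents: float) -> float:
    """Shift a frequency up (positive) or down (negative) by some cents."""
    return freq * 2 ** (cents / 1200)

flat_b = shift_by_cents(B3, -2)
print(round(B3 - flat_b, 2))  # 0.29 Hz -- a tiny, barely audible shift
```

A shift of well under a third of a hertz is invisible on most tuner displays, which is exactly why a one-cent-accurate tuner is needed for this kind of deliberate "sweetening."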
The fundamental out-of-tuneness of guitars is actually a pretty good argument for playing a lousy guitar with old dead clunky strings (few overtones) in an open tuning with a slide, like the old bluesmen did, since no acoustician or oscilloscope on earth is going to call the shots in that pure and funky little world of guitar tuning where 3rds and 7ths of chords bend and slide all over the place, and the guitar and vocal notes swoop around. If you made it this far, reward yourself and go play your guitar happily. Do your best to get in tune, and maybe even try not to think too much about all the "dark knowledge" you've read here. This is another posting where I'm trying to raise issues, questions and awareness in the world of modern troubadours... You deserve a reward or a door prize for making it to the end. Please check back to look for new posts as I get them done. I plan to cover a wide range of issues and topics. I don't have a way for you to comment here, but I welcome your emails with your reactions. Feel free to cheer me on, or to disagree... Chordally yours,
http://harveyreid.com/blogs/tuning_guitars.html
Hey! That's out of tune! In tune or out of tune – people with no formal musical training versus professional musicians. Frankfurt/Liège: Not everyone has the ability to sing. But is everyone capable of hearing that a song is out of tune? "Pop Idol", "The Voice of Germany"… there are many music casting shows and talent contests based on viewers' voting. However, the television viewership is not made up of professional musicians, but of laypersons. Are they really capable of judging non-professional singers? A new study by Pauline Larrouy-Maestri, a researcher at the Max Planck Institute for Empirical Aesthetics, and her colleagues shows that laypeople do have the ability to evaluate their peers. Laypersons are very good at judging the pitch accuracy of untrained singers in familiar songs. A layperson is defined as a person having no formal training in music, unlike professional musicians. However, laypersons are sensitive to the musical structures of their respective culture and can detect melodic errors from an early age. "Melodic errors are, for example, violations of melodic contour, deviation of interval size and changes in tonality," says Larrouy-Maestri. Is this "sensitivity" for melodic errors enough to make an expert? In order to answer this question, Larrouy-Maestri and her colleagues presented 166 different versions of the song "Happy Birthday" to musical laypersons – all versions performed by peers. Each listener was asked to rate each sample twice, on a scale of its perceived accuracy. Afterwards, the judgments were compared to experts' judgments from a previous study (Larrouy-Maestri, Lévêque, Schön, Giovanni, & Morsomme, 2013). The results show that there is a high overlap in the definition of pitch accuracy between laypersons and experts when hearing familiar melodies.
Even though the definition is not exactly the same for laypersons and professional listeners, the laypersons were particularly consistent, i.e., keeping the same definition and strategy from one time to the next, and clearly using musical criteria in evaluating the pitch accuracy of untrained singers. Layperson listeners are thus trustworthy judges of pitch accuracy in familiar melodies. Still: the next time you hear "Happy Birthday" and find yourself thinking, "Now, that sounds out of tune," remember: "Happy Birthday" has to come from the heart – no matter how it sounds. Original publication: Larrouy-Maestri, P., Magis, D., Grabenhorst, M., & Morsomme, D. (2015). Layman versus Professional Musician: Who Makes the Better Judge? PLOS ONE, 10(8): e0135394. doi:10.1371/journal.pone.0135394
https://www.aesthetics.mpg.de/en/research/former-departments/department-of-neuroscience/news/news-neurowissenschaften-detail/article/hey-thats-out-of-tune.html
Jazz musicians may have to be 'in the mood' to find their groove. In contrast, most bluegrass musicians just love to play. On a recent perfect evening at Burnsville Lake the Glenville State College Bluegrass Band performed from a boat moored at the docks. GSC Bluegrass Band members pictured, clockwise from left: Josh Chapman, Luke Shamblin, Mary Sue Bailey, Patrick Thompson, David O'Dell, GSC chemistry professor; and Buddy Griffin. Music drifted lazily to listeners on a mild, windless night. Occasionally, between numbers, you could hear water gently lapping against the sides of nearby boats. Burnsville Dock owners Dave and Judy Waldron typically sponsor concerts twice a year. Josh Chapman, left, paces the tune on bass as Buddy Griffin plays fiddle. Fishing and bluegrass music are favorites at the lake. "They've been catching super nice catfish and musky," Dave Waldron reported. In August a Vienna man caught a 50-inch musky, while another fisherman pulled out a catfish topping 50 pounds. Bill Boggess, of Vienna, WV, recently pulled a 50-inch musky from Burnsville Lake. According to Waldron, from around late September until mid-November, conditions should become ideal for musky, crappie and bass. Although Burnsville Docks finishes its season during the third week in October, the lake remains open to the public year round. These West Virginia anglers show off their catch following a recent night of catfishing. Glenville State College is one of only two colleges nationwide offering a 4-year music degree specializing in bluegrass music. Braxton County native Buddy Griffin heads the program.
http://www.hurherald.com/obits.php?id=30714
The first thing musicians must do before they can play together is "tune". For musicians in the standard Western music tradition, this means agreeing on exactly what pitch (what frequency) is an "A", what is a "B flat" and so on. Other cultures not only have different note names and different scales, they may even have different notes - different pitches - based on a different tuning system. In fact, the modern Western tuning system, which is called equal temperament, replaced (relatively recently) other tuning systems that were once popular in Europe. All tuning systems are based on the physics of sound. But they all are also affected by the history of their music traditions, as well as by the tuning peculiarities of the instruments used in those traditions. The systems discussed here are Pythagorean, mean-tone, just intonation, well temperaments, equal temperament, and wide tuning. To understand all of the discussion below, you must be comfortable with both the musical concept of interval and the physics concept of frequency. If you wish to follow the whole thing but are a little hazy on the relationship between pitch and frequency, the following may be helpful: Pitch; Acoustics for Music Theory; Harmonic Series I: Timbre and Octaves; and Octaves and the Major-Minor Tonal System. If you do not know what intervals are (for example, major thirds and perfect fourths), please see Interval and Harmonic Series II: Harmonics, Intervals and Instruments. If you need to review the mathematical concepts, please see Musical Intervals, Frequency, and Ratio and Powers, Roots, and Equal Temperament. Meanwhile, here is a reasonably nontechnical summary of the information below: Modern Western music uses the equal temperament tuning system. In this system, an octave (say, from C to C) is divided into twelve equally-spaced notes. "Equally-spaced" to a musician basically means that each of these notes is one half step from the next, and that all half steps sound like the same size pitch change.
(To a scientist or engineer, "equally-spaced" means that the ratio of the frequencies of the two notes in any half step is always the same.) This tuning system is very convenient for some instruments, such as the piano, and also makes it very easy to change key without retuning instruments. But a careful hearing of the music, or a look at the physics of the sound waves involved, reveals that equal-temperament pitches are not based on the harmonics physically produced by any musical sound. The "equal" ratios of its half steps are the twelfth root of two, rather than reflecting the simpler ratios produced by the sounds themselves, and the important intervals that build harmonies can sound slightly out of tune. This often leads to some "tweaking" of the tuning in real performances, away from equal temperament. It also leads many other music traditions to prefer tunings other than equal temperament, particularly tunings in which some of the important intervals are based on the pure, simple-ratio intervals of physics. In order to feature these favored intervals, a tuning tradition may do one or more of the following: use scales in which the notes are not equally spaced; avoid any notes or intervals which don't work with a particular tuning; change the tuning of some notes when the key or mode changes.
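The equal-temperament arithmetic described above is easy to verify in a few lines of Python (A4 = 440 Hz is the common modern reference; the note names in the comments follow from that choice):

```python
# Equal temperament: twelve half steps per octave, each with the same
# frequency ratio 2**(1/12), so a note n half steps from A4 is 440 * 2**(n/12).

A4 = 440.0
SEMITONE = 2 ** (1 / 12)  # ~1.0595, the twelfth root of two

def equal_tempered(n: int) -> float:
    """Frequency n half steps above (negative: below) A4."""
    return A4 * SEMITONE ** n

print(round(equal_tempered(-12), 1))  # 220.0 -- octaves stay a pure 2:1
print(round(equal_tempered(3), 2))    # 523.25 -- C5, three half steps up

# The equal-tempered fifth (7 half steps) only approximates the pure 3:2
# ratio from the harmonic series, which is the "slightly out of tune"
# compromise the text describes:
print(round(SEMITONE ** 7, 4))  # 1.4983, slightly narrow of 1.5
```

This is the "tweaking" trade-off in numbers: octaves come out exact, but every other interval lands near, not on, its simple-ratio counterpart.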
https://www.jobilize.com/course/section/introduction-tuning-systems-by-openstax?qcr=www.quizover.com
You should spend about 20 minutes on Questions 27- 40, which are based on Reading Passage 293 below. Is perfect pitch a rare talent possessed solely by the likes of Beethoven? Kathryn Brown discusses this much sought-after musical ability. The uncanny, if sometimes distracting, ability to name a solitary note out of the blue, without any other notes for reference, is a prized musical talent - and a scientific mystery. Musicians with perfect pitch - or, as many researchers prefer to call it, absolute pitch - can often play pieces by ear, and many can transcribe music brilliantly. That’s because they perceive the position of a note in the musical stave - its pitch - as clearly as the fact that they heard it. Hearing and naming the pitch go hand in hand. By contrast, most musicians follow not the notes, but the relationship between them. They may easily recognise two notes as being a certain number of tones apart, but could name the higher note as an E only if they are told the lower one is a C, for example. This is relative pitch. Useful, but much less mysterious. For centuries, absolute pitch has been thought of as the preserve of the musical elite. Some estimates suggest that maybe fewer than 1 in 2,000 people possess it. But a growing number of studies, from speech experiments to brain scans, are now suggesting that a knack for absolute pitch may be far more common, and more varied, than previously thought. ‘Absolute pitch is not an all or nothing feature,’ says Marvin, a music theorist at the University of Rochester in New York state. Some researchers even claim that we could all develop the skill, regardless of our musical talent. And their work may finally settle a decades-old debate about whether absolute pitch depends on melodious genes - or early music lessons. Music psychologist Diana Deutsch at the University of California in San Diego is the leading voice. 
Last month at the Acoustical Society of America meeting in Columbus, Ohio, Deutsch reported a study that suggests we all have the potential to acquire absolute pitch - and that speakers of tone languages use it every day. A third of the world’s population - chiefly people in Asia and Africa - speak tone languages, in which a word’s meaning can vary depending on the pitch a speaker uses. Deutsch and her colleagues asked seven native Vietnamese speakers and 15 native Mandarin speakers to read out lists of words on different days. The chosen words spanned a range of pitches, to force the speakers to raise and lower their voices considerably. By recording these recited lists and taking the average pitch for each whole word, the researchers compared the pitches used by each person to say each word on different days. Both groups showed strikingly consistent pitch for any given word - often less than a quarter-tone difference between days. ‘The similarity,’ Deutsch says, ‘is mind-boggling.’ It’s also, she says, a real example of absolute pitch. As babies, the speakers learnt to associate certain pitches with meaningful words - just as a musician labels one tone A and another B - and they demonstrate this precise use of pitch regardless of whether or not they have had any musical training, she adds. Deutsch isn’t the only researcher turning up everyday evidence of absolute pitch. At least three other experiments have found that people can launch into familiar songs at or very near the correct pitches. Some researchers have nicknamed this ability ‘absolute memory’, and they say it pops up on other senses, too. Given studies like these, the real mystery is why we don’t all have absolute pitch, says cognitive psychologist Daniel Levitin of McGill University in Montreal. Over the past decade, researchers have confirmed that absolute pitch often runs in families. 
Nelson Freimer of the University of California in San Francisco, for example, is just completing a study that he says strongly suggests the right genes help create this brand of musical genius. Freimer gave tone tests to people with absolute pitch and to their relatives. He also tested several hundred other people who had taken early music lessons. He found that relatives of people with absolute pitch were far more likely to develop the skill than people who simply had the music lessons. ‘There is clearly a familial aggregation of absolute pitch,’ Freimer says. Freimer says some children are probably genetically predisposed toward absolute pitch - and this innate inclination blossoms during childhood music lessons. Indeed, many researchers now point to this harmony of nature and nurture to explain why musicians with absolute pitch show different levels of the talent. Indeed, researchers are finding more and more evidence suggesting music lessons are critical to the development of absolute pitch. In a survey of 2,700 students in American music conservatories and college programmes, New York University geneticist Peter Gregersen and his colleagues found that a whopping 32 per cent of the Asian students reported having absolute pitch, compared with just 7 per cent of non-Asian students. While that might suggest a genetic tendency towards absolute pitch in the Asian population, Gregersen says that the type and timing of music lessons probably explains much of the difference. For one thing, those with absolute pitch started lessons, on average, when they were five years old, while those without absolute pitch started around the age of eight. Moreover, adds Gregersen, the type of music lessons favoured in Asia, and by many of the Asian families in his study, such as the Suzuki method, often focus on playing by ear and learning the names of musical notes, while those more commonly used in the US tend to emphasise learning scales in a relative pitch way.
In Japanese pre-school music programmes, he says, children often have to listen to notes played on a piano and hold up a coloured flag to signal the pitch. ‘There’s a distinct cultural difference,’ he says. Complete the notes below using words from the box. Write your answers in boxes 28-35 on your answer sheet. Research is being conducted into the mysterious musical 28 ........................... some people possess known as perfect pitch. Musicians with this talent are able to name and sing a 29 ........................... without reference to another and it is this that separates them from the majority who have only 30 ........................... pitch. The research aims to find out whether this skill is the product of genetic inheritance or early exposure to 31 ........................... or, as some researchers believe, a combination of both. One research team sought a link between perfect pitch and 32 ........................... languages in order to explain the high number of Asian speakers with perfect pitch. Speakers of Vietnamese and Mandarin were asked to recite 33 ........................... on different occasions and the results were then compared in terms of 34 ........................... . A separate study found that the approach to teaching music in many Asian 35 ........................... emphasised playing by ear whereas the US method was based on the relative pitch approach. Reading Passage 293 contains a number of opinions provided by five different scientists. Match each opinion (Questions 36-40) with one of the scientists (A-E). Write your answers in boxes 36-40 on your answer sheet. You may use any of the people A-E more than once. 36. Absolute pitch is not a clear-cut issue. 37. Anyone can learn how to acquire perfect pitch. 38. It’s actually surprising that not everyone has absolute pitch. 39. The perfect pitch ability is genetic. 40. The important thing is the age at which music lessons are started.
https://www.ielts-mentor.com/reading-sample/academic-reading/2829-striking-the-right-note
TRACK OF THE WEEK DAY & DATE: Number One on the Billboard Hot Soul Singles chart in the week ending Saturday, May 29, 1982. SONGWRITERS: Reggie Andrews, Leon “Ndugu” Chancler PRODUCER: Reggie Andrews BACKSTORY: Leo’s Casino in Cleveland was a music venue with strong links to any number of top Motown artists who performed there during the 1960s. Bobby Harris, the singer and saxman who led the Dazz Band, continued the tradition, gigging at Leo’s behind Lou Rawls in the 1970s, then forming the first group of musicians through which he gained experience, an audience and, eventually, chart success. “We’d been playing an intricate jazz/fusion kind of music,” he said of his pre-Dazz days. “But to expand our audience, we started playing Top 40 songs and developed our live set into more of an entertaining show, as opposed to just musicians standing on a stage.” Harris’ combo came to Motown’s attention in 1980, signing up and spending almost six months rehearsing their first album for the company, Invitation To Love, before hitting the 24-track Recording Connection studio in Cleveland to put the results on tape. Record buyers accepted the “invitation” on a modest scale, giving the Dazz Band a couple of minor R&B chart entries before “Let It Whip,” from their third album, blew the doors off. “We wanted to create a song that they could make a dance out of,” said Leon “Ndugu” Chancler in The Billboard Book of Number One Rhythm & Blues Hits. “A song that was different in that it would be something that no one had ever really talked about on a record.” He added, “What kind of lyric content could we talk about that could end up being a dance? And Reggie came up with the idea of a whip.” Andrews was known for his work with Patrice Rushen, but in 1980, he was an A&R assistant at Motown. Chancler, his “Let It Whip” co-writer, had previously produced George Duke. Together, the pair sparkled – and the Dazz Band liked what they heard. 
“I think part of it was we knew all the guys, and we knew what they could do and how they would do it.” With the help of drum machines. “At that time, we were using the Roland 808,” explained Chancler. “Reggie had the 808 and I had the Linn Drum, and we used a mini-Moog bass.” The combination of technology and live musicians paid off: “Let It Whip” cracked the top of the Billboard R&B best-seller lists for five weeks in mid-1982 – and, better still, reached the Top 5 of the pop charts. There was one more celebration to come, when the Dazz Band shared – with Earth, Wind & Fire, no less – the 1982 Grammy Award for Best R&B Vocal Performance by a group. The voters danced to that tune, whips or no whips. REMAKES: This is yet another Motown hit with a lifespan across the decades. After its popularity during the ’80s, “Let It Whip” returned in the ’90s via versions by British-based jazz-pop band Matt Bianco and Australian pop group CDB. In 2004, former Motown hitmakers Boyz II Men selected the song for their album Throwback. But “Let It Whip” gained its greatest 21st century exposure in Pitch Perfect, the 2012 movie comedy about an all-female a cappella group, the Barden Bellas, and their male rivals, the Treblemakers. With a name like that, no wonder the boys chose “Let It Whip.” Pitch Perfect corralled at least $100 million in worldwide box office; the soundtrack album was a Top 10 success in various countries. FOOTNOTE: What’s in a name? The Dazz (for “danceable jazz”) Band have had quite a few, even as their line-up of musicians has evolved. Leader Bobby Harris was first in a combo known as Black Heat. Then he joined Bell Telephunk, which opened in his Cleveland hometown for the Crusaders and Billy Cobham. This aggregation became Kinsman Dazz, which secured a deal with 20th Century Records in 1978. After a couple of well-regarded albums, Harris reshaped the line-up and, with eight players, took on the Dazz Band identity.
At which point, they arrived at Motown’s door and let it rip.
https://classic.motown.com/story/dazz-band-let-whip/