Modern mobile devices have evolved into small computers that can render multimedia streaming content anywhere and anytime. These devices extend users' viewing time and provide more business opportunities for service providers. Mobile devices, however, make a challenging platform for providing high-quality multimedia services. The goal of this thesis is to identify these challenges from various aspects and to propose efficient, systematic solutions to them. In particular, we study mobile video broadcast networks in which a base station concurrently transmits multiple video streams over a shared air medium to many mobile devices. We propose algorithms to optimize various quality-of-service metrics, including streaming quality, bandwidth efficiency, energy saving, and channel switching delay. We analyze the proposed algorithms analytically, and we evaluate them using numerical methods and simulations. In addition, we implement the algorithms in a real testbed to show their practicality and efficiency. Our analytical, simulation, and experimental results indicate that the proposed algorithms can: (i) maximize the energy saving of mobile devices, (ii) maximize the bandwidth efficiency of the wireless network, (iii) minimize channel switching delays on mobile devices, and (iv) efficiently support heterogeneous mobile devices. Last, we give network operators guidelines on choosing solutions suitable for their mobile broadcast networks, allowing them to provide millions of mobile users with much better viewing experiences, attract more subscribers, and thus increase revenue.

Network Systems Lab at SFU
The Network Systems Lab at Simon Fraser University is led by Dr. Mohamed Hefeeda and is affiliated with the Network Modeling Group at Simon Fraser University. We are interested in the broad areas of computer networking and multimedia systems. We develop algorithms and protocols to enhance the performance of networks, especially the Internet, and to efficiently distribute multimedia content (e.g., video and audio objects) to large-scale user communities. Our current research interests include multimedia networking, peer-to-peer (P2P) systems, wireless sensor networks, network security, and high-performance computing. Brief descriptions are given below.

In multimedia networking, we are focusing on distributed streaming in dynamic environments and for heterogeneous clients. Our goal is to analyze and understand scalable coding techniques, and to design optimization and streaming algorithms that make the best possible use of them in real multimedia systems. This will yield better quality for users and more efficient utilization of network and server resources. We are also designing algorithms to optimize streaming quality for wireless and mobile clients.

In P2P systems, we are exploring the applicability of the P2P paradigm to building cost-effective content distribution systems. Problems such as sender selection, adaptive object replication, and content caching are being studied. We are also developing models to analyze the new characteristics of P2P traffic and the impact of these characteristics on cache replacement policies and object replication strategies. Furthermore, we are devising analytic models to study the dynamics of P2P system capacity and the impact of various parameters on it.
In network security, we are exploring network monitoring techniques to detect and thwart intrusion and denial-of-service attacks in their early stages by observing the unusual traffic patterns injected by such attacks. We are also studying the security of multimedia streaming systems that employ multi-layer and fine-grain scalable video streams.

In high performance computing, we are exploring opportunities to utilize new architectures such as GPUs, multi-core processors, and distributed clusters (cloud computing) to efficiently solve research problems related to multimedia content analysis, large-scale data analysis, and machine learning.
http://records.sigmm.org/?phd-thesis-summary=cheng-hsin-hsu
This book deals with the various types of revolutionary history and the numerous schools of historical thought concerned with the French Revolution. The survey of writings presents a cross-section of historians of the Revolution from the early nineteenth century right up to the present day. From liberals to conservatives and from Marxists to revisionists, it focuses on those individuals who are generally perceived to be the 'major' or 'pre-eminent' figures within revolutionary historiography. A 'history of the histories', this book will be an ideal starting point for students seeking to better understand the French Revolution and its history.

Between the Devil and the Deep Blue Sea?
This book examines the development of Jewish positions on the relationship between church and state in France from the French Revolution until the 1905 law of separation. It is a comprehensive study of the complex interplay among all segments of the Jewish population and the community's attempt to come to terms with its social and religious status in the nineteenth century. It addresses how French Jews understood the constitutional right of religious freedom in a state that supported Judaism while, at the same time, in its Concordat with the Catholic Church, it officially recognized Catholicism as the religion of the great majority of French citizens. Conversely, it examines how they responded to the attempts by the republican majority during the Third Republic to radically secularize the public sphere and separate church from state. The volume considers the extent to which the positions expressed by the representatives of French Jewry on church-state policies were pragmatic and the extent to which they were ideological, and compares Jewish attitudes toward the relationship between church and state with those of other religious groups in France.

This bestselling history, published between 1833 and 1842, interpreted the French Revolution as a warning about the dangers of democracy.

"The newest edition of the successful Rewriting History series, this fascinating book studies all aspects of the French Revolution, from its origins, through its development, right up to the consequences of this major historical event." --Publisher description.

Discusses the significance of the French Revolution in English literary and cultural history, particularly in the works of Edmund Burke, William Blake, William Wordsworth, and Thomas Carlyle.

The Oxford Handbook of the French Revolution brings together a sweeping range of expert and innovative contributions to offer engaging and thought-provoking insights into the history and historiography of this epochal event. Each chapter presents the foremost summations of academic thinking on key topics, along with stimulating and provocative interpretations and suggestions for future research directions. Placing core dimensions of the history of the French Revolution in their transnational and global contexts, the contributors demonstrate that revolutionary times demand close analysis of sometimes tiny groups of key political actors - whether the king and his ministers or the besieged leaders of the Jacobin republic - and attention to the deeply local politics of both rural and urban populations. Identities of class, gender, and ethnicity are interrogated, but so too are conceptions and practices linked to citizenship, community, order, security, and freedom: each in its way just as central to revolutionary experiences, and equally amenable to critical analysis and reflection.
This volume covers the structural and political contexts that build up to give new views on the classic question of the 'origins of revolution'; the different dimensions of personal and social experience that illuminate the political moment of 1789 itself; the goals and dilemmas of the period of constitutional monarchy; the processes of destabilisation and ongoing conflict that ended that experiment; the key issues surrounding the emergence and experience of 'terror'; and the short- and long-term legacies, for both good and ill, of the revolutionary trauma - for France, and for global politics.

This concise yet rich introduction to the French Revolution explores the origins, development, and eventual decline of a movement that defines France to this day. Through an accessible chronological narrative, Sylvia Neely explains the complex events, conflicting groups, and rapid changes that characterized this critical period in French history. She traces the fundamental transformations in government and society that forced the French to come up with new ways of thinking about their place in the world, ultimately leading to liberalism, conservatism, terrorism, and modern nationalism. Throughout, the author focuses on the essential political events that propelled the Revolution, at the same time deftly interweaving the intellectual, social, diplomatic, military, and cultural history of the time. Neely explains how the difficult choices made by the royal government and the revolutionaries alike not only brought on the collapse of the Old Regime but moved the nation into increasingly radical policies, to the Terror, and finally to the rise of Napoleon Bonaparte. Written with clarity and nuance, this work offers a deeply knowledgeable understanding of the political possibilities available at any given moment in the course of the Revolution, placing them in a broad social context. All readers interested in France and revolutionary history will find this an engaging and rewarding read.

Contesting the French Revolution provides an insightful overview of one of history's most significant events, as well as examining the most significant historiographical debates about this period. It:
- Explores the causes, events, and consequences of the French Revolution
- Offers a stimulating analysis of the most controversial debates: Were the events of 1789 a social revolution or a political accident? Did they mark the rise of industrial capitalism or the birth of modern democracy? Was Napoleon Bonaparte an heir to the ideals of 1789 or a betrayer of the Revolution?
- Shows how historical interpretation of the French Revolution has been influenced by the changing political and social currents of the last 200 years – from the Russian Revolution to the fall of the Berlin Wall – and how historical study has shifted from a political focus to social and cultural approaches in more recent years

In the last generation the classic Marxist interpretation of the French Revolution has been challenged by the so-called revisionist school. The Marxist view that the Revolution was a bourgeois and capitalist revolution has been questioned by Anglo-Saxon revisionists such as Alfred Cobban and William Doyle, as well as by a French school of criticism headed by François Furet. Today revisionism is the dominant interpretation of the Revolution, both in the academic world and among the educated public.
Against this conception, this book reasserts the view that the Revolution - the capital event of the modern age - was indeed a capitalist and bourgeois revolution. Based on an analysis of the latest historical scholarship as well as on knowledge of Marxist theories of the transition from feudalism to capitalism, the work confutes the main arguments and contentions of the revisionist school while laying out a narrative of the causes and unfolding of the Revolution from the eighteenth century to the Napoleonic Age.

Combines historical and theoretical analysis, setting political thought in the context of various frameworks of the modern world. From the impact of the French and American revolutions, through reaction and constitutional consolidation, this book traces the contrasting criteria invoked to justify particular forms of political order from 1789.

A landmark work of French Revolution scholarship, now in its 20th-anniversary edition.

Edmund Burke's Reflections on the Revolution in France is one of the major texts in the western intellectual tradition. This book describes Burke's political and intellectual world, stressing the importance of the idea of 'property' in Burke's thought. It then focuses more closely on Burke's personal and political situation in the late 1780s to explain how the Reflections came to be written. The central part of the study discusses the meaning and interpretation of the work. In the last part of the book the author surveys the pamphlet controversy which the Reflections generated, paying particular attention to the most famous of the replies, Tom Paine's Rights of Man. It also examines the subsequent reputation of the Reflections from the 1790s to the modern day, noting how often Burke has fascinated even writers who have disliked his politics.

Political decisions are never taken in a vacuum but are shaped both by current events and by historical context. In other words, the accumulated memory of long-term developments and patterns can greatly (and sometimes subconsciously) influence subsequent policy choices. Working forward from the later seventeenth century, this book explores the 'deep history' of the changing and competing understandings within the Tory party of the role Britain has aspired to play on a world stage. Conservatism has long been one of the major British political tendencies, committed to the defence of established institutions, with a strong sense of the 'national interest', and embracing both 'liberal' and 'authoritarian' views of empire. The Tory party has, moreover, at several times been deeply divided, if not convulsed, by different perspectives on Britain's international orientation and different positions on foreign and imperial policy. The underlying Tory beliefs upon which views of Britain's global role were built were often not stated but assumed; as a result, they tend to be obscured from historical view. This book seeks to recover and reconsider those beliefs, and to understand how the Tory party has sought to navigate its way through the difficult pathways of foreign and imperial politics, and why this determination outlasted Britain's rapid decolonisation and was apparently remarkably little affected by it. With a supporting cast from Pitt to Disraeli and Churchill to Thatcher, the book provides a fascinating insight into the influence of history over politics.
Moreover, it argues that there has been an inherent politicisation of the concept of national interests, such that strategic culture and foreign policy cannot be understood other than in terms of a historically distorted political debate.

From David Brion Davis's The Problem of Slavery in the Age of Revolution to Paul Gilroy's The Black Atlantic, some of the most influential conceptualizations of the Atlantic World have taken as their material basis the movements of individuals and transnational organizations working to advocate the abolition of slavery. This unique, interdisciplinary collection of essays provides diverse new approaches to examining the abolitionist Atlantic. With contributions from an international roster of historians, literary scholars, and specialists in the history of art, this book provides case studies in the connections between abolitionism and material spatial practice in literature, theory, history, and memory. The volume covers a wide range of topics and themes, including the circum-Atlantic itineraries of abolitionist artists and activists; precise locations such as Paris and Chatham, Ontario, where abolitionists congregated to speculate over the future of, and hatch emigration plans to, sites in Africa, Latin America, and the Caribbean; and the reimagining of abolitionist places in twentieth- and twenty-first-century literature and public art. This book was originally published as a special issue of Atlantic Studies.

This Companion takes stock of the trajectory, achievements, shortcomings, and prospects of Marxist political economy. It reflects the contributors' shared commitment to bringing the methods, theories, and concepts of Marx himself to bear across a wide range of topics and perspectives, and it provides a testimony to the continuing purpose and vitality of Marxist political economy. As a whole, this volume analyzes Marxist political economy in three areas: the critique of mainstream economics in all of its versions; the critical presence of Marxist political economy within, and its influence upon, each of the social science disciplines; and, cutting across these, the analysis of specific topics that straddle disciplinary boundaries. Some of the contributions offer an exposition of basic concepts, accessible to the general reader, laying out Marx's own contribution, its significance, and subsequent positions and debates with and within Marxist political economy. The authors offer assessments of historical developments to and within capitalism, and of its current character and prospects. Other chapters adopt a mirror-image approach, pinpointing the conditions of contemporary capitalism as a way of interrogating the continuing salience of Marxist analysis. This volume will inform and inspire a new generation of students and scholars to become familiar with Marxist political economy from an enlightened and unprejudiced position, and to use their knowledge as both a resource and a gateway to future study.

Due to its height, the density and thickness of its crown canopy, its fluffy forest floor, large root system, and horizontal distribution, forest is the most distinctive type of vegetation on Earth. In the U.S., forests occupy about 30 percent of the total territory. Yet this 30 percent of the land area produces about 60 percent of total surface runoff, making forests the major water resource of the country.
Any human activity in forested areas will inevitably disturb forest floors and destroy forest canopies, consequently affecting the quantity, quality, and timing of water resources. Thoroughly updated and expanded, Forest Hydrology: An Introduction to Water and Forests, Third Edition discusses the concepts, principles, and processes of forest and forest-activity impacts on the occurrence, distribution, and circulation of water and the aquatic environment. The book:
- Brings water resources and forest-water relations into a single, comprehensive textbook
- Focuses on the concepts, processes, and general principles of forest hydrology
- Covers the functions, properties, and science of water; water distribution; and forests and precipitation, vaporization, stream flow, and stream sediment
- Discusses watershed management planning and practical applications of forest hydrology in resource management

In a single textbook, Forest Hydrology: An Introduction to Water and Forests, Third Edition comprehensively covers water and water resources issues, forest characteristics relevant to the environment, forest impacts on the hydrological cycle, watershed research, watershed management planning, and hydrologic measurements. With the addition of new chapters, new issues, and appendices, this new edition is a valuable resource for upper-level undergraduates in forest hydrology courses as well as professionals involved in water resources management and decision-making in forested watersheds.
http://eglenin.net/online/the-debate-on-the-french-revolution-issues-in-historiography-issues-in-historiography/
The term global emergency medicine has emerged in the past several years, much in line with the distinctions drawn between the terms global and international health. More and more people refer to what was previously called international emergency medicine as global emergency medicine, because it focuses not only on the global practice of emergency medicine but also on efforts to promote the growth of emergency care as a branch of medicine throughout the world. For the purposes of this book, we will be using the term "global emergency medicine," as it is most inclusive, with the understanding that this term may include international emergency medicine.

What is Global Emergency Medicine?
Global emergency medicine (GEM) encompasses a diverse array of initiatives, settings, approaches, and objectives that center on health care system capacity building and the delivery of healthcare (specifically acute care) worldwide. GEM activities and competencies can be categorized into three major areas: emergency medicine development, delivery of acute care in resource-limited settings, and disaster and humanitarian response. GEM research has also become a specialty in itself, due to unique challenges inherent to global research, and it intersects with the three major areas. Below are brief overviews of each category. There is, of course, a lot of overlap between parts of these areas, but for the sake of clarity we have created distinct categories.

Emergency Medicine Development
As countries improve their economies and healthcare systems, their burden of disease shifts from infectious disease, sanitation, and nutrition to trauma (particularly motor vehicle injuries), heart disease, and cancer. These new burdens require a new type of system and provider that specializes in the delivery of this care. Emergency medicine development (EMD) focuses on the development of emergency medicine globally, at both the health-systems level and the individual training level. It seeks to strengthen public health systems and emergency medical systems into ones that are organized, integrated, equitable, and accessible to anyone needing acute care. Endeavors include, but are not limited to, establishing pre-hospital medical and trauma care systems, creating culturally appropriate emergency departments within hospitals, and training providers to staff those departments. EMD also encompasses EM specialty development (including advocacy for EM at the national level), the development of collective national organizations that unite, inform, and educate their members, and the initiation and expansion of residency/graduate-level training programs. In addition, EMD aims to improve and advance emergency medicine education and training through the propagation of structured training programs (e.g., Advanced Cardiac Life Support, Advanced Trauma Life Support, and other resuscitation programs). Other strategies include specific education about topics relevant to a region (e.g., infectious diseases, sanitation, and injuries), tele-simulation, and public/community education. The goal of an EMD program is collaboration and local capacity building, which are essential for the sustainability and longevity of EM in any region.

Acute/Emergency Care in Resource-Limited Settings
This aspect of GEM focuses on the actual delivery of care: the diagnosis, management, and prevention of diseases in low- and middle-income countries to improve the overall health of the population.
The inherent uncertainty and challenges of medicine multiply in resource-limited settings due to limited infrastructure, staff, and diagnostic and therapeutic resources. This area focuses on optimizing the use of the resources available, examining the efficacy of treatment regimens for diseases seen primarily in these settings (e.g., rehydration methods for diarrheal illnesses), and improving bedside skills (e.g., diagnostic algorithms, physical exam skills). Traditional areas of interest include vulnerable populations, maternal and child mortality, and infectious diseases (e.g., diarrheal illnesses, pneumonia, TB, malaria, and HIV). Given that injury and diseases of old age now exceed communicable diseases as leading causes of death in many resource-limited countries, there has also been a surge in stroke, heart disease, and trauma-related illness in these settings. Research has therefore expanded to injury prevention and heart disease/stroke prevention. In addition to the delivery of care, this area of GEM seeks to address the inherent challenges of conducting research in low-resource settings.

Disaster and Humanitarian Response
Disaster and humanitarian response (DHR) focuses on the care of those affected by natural disasters, armed conflict, disease epidemics, mass migrations, political/economic instability, and other potentially reversible situations. DHR encompasses not only disaster response, mitigation, and assessment, but also prevention, preparedness, and rebuilding. Care for the populations involved includes attention to the short-term problems of food, water, sanitation, healthcare, shelter, and safety in the acute phase of a disaster, as well as attention to the long-term problems of rebuilding damaged health and social infrastructure and addressing the psychological and emotional distress of affected people. In recent years, humanitarianism has seen rapid expansion and undergone professionalization, with the development of its own standards, ethics, training, and research. This advancement is based upon two important principles. The first is that the limited resources available must be allocated to provide the greatest benefit to the greatest number of people. The second is that disease prevention and health promotion (e.g., nutrition, sanitation, communicable disease control) should be emphasized over complex medical care. By utilizing these strategies, morbidity and mortality rates have been reduced during complex humanitarian disasters. A large area of focus in DHR is developing skilled communication and coordination of resource allocation, rather than improving the actual delivery of healthcare. In addition, improving the training of aid workers has increased their effectiveness, allowing them to deliver aid in the complex political and social climates in which they must operate. A seminal publication in the field came out of the Sphere Project, entitled "Humanitarian Charter and Minimum Standards in Disaster Response," which outlines the core principles and minimum standards for humanitarian programs during emergencies.

Global Emergency Medicine Research
Conducting research in culturally distinct and often resource-limited settings presents challenges that are unique to GEM. GEM research has therefore become a specialty in itself and has grown rapidly in recent years. Some of the unique challenges to research in GEM are:
1. Lack of adequate funding
2. Lack of appropriate resources
3. Lack of healthcare infrastructure
4. Cultural/societal differences between patients, local researchers, and foreign researchers
5. Feasibility of integrating advancements into the present healthcare system
6. Historical studies in which the ethical standard of a sponsoring country was not applied to research in the host country

These challenges have led to the formation of specific fundamentals of GEM research. In addition, the growing body of "grey literature" produced by governmental agencies and nongovernmental organizations reflects the widespread interest in developing and enhancing emergency medicine systems in different countries. The four fundamentals of GEM research are:
1. Capacity building - utilizing local members of a region in a project
2. Healthcare improvement domains - helping develop standards of care by changing structure, process, or outcome
3. Implementation - considering the efficacy, feasibility, and cost-effectiveness of proposed interventions
4. Methodology - addressing specific challenges of design and data collection, and pushing the utilization of epidemiological data

As mentioned above, the focus of GEM research has changed recently and now includes domains such as trauma, injury, preventive care, and chronic disease, in addition to infectious disease and malnutrition. For example, cardiovascular diseases are now the number one cause of death globally. In the United States, the development of observation units and intensive care units has reduced rates of missed diagnoses of myocardial infarction. However, these interventions might not be translatable globally due to cultural and economic differences. Thus, research is needed to find the best management plans for this ever-growing cohort of patients in a variety of care environments.

Final words
Global Emergency Medicine is an ever-growing field that is becoming more specialized each year. Skills and competencies for those interested in a career in GEM extend beyond the EM competencies necessary to provide clinical care in a variety of settings. As described above, there are many different aspects of GEM. In the following pages, we hope to provide you with the fundamentals needed to explore GEM as a specialty or as part of your career as a whole. There are many different ways to become involved as a student, a resident, or a practicing physician. Thank you for reading, and welcome to the world of Global Emergency Medicine.

References
1. Arnold JL. International emergency medicine and the recent development of emergency medicine worldwide. Annals of Emergency Medicine 1999; 33(1):97-103.
2. Sistenich V. International emergency medicine: how to train for it. Emergency Medicine Australasia 2012; 24(4):435-441.
3. Arnold LK, Razzak J. Research agendas in global emergency medicine. Emergency Medicine Clinics of North America 2005; 23:231-257.
4. Becker T, Jacquet G, March R, et al. Global emergency medicine: a review of the literature from 2013. Academic Emergency Medicine 2014; 21(7):810-817.
5. Brennan R, Nandy R. Complex humanitarian emergencies: a major global health challenge. Emergency Medicine 2001; 13:147-156.
6. Cardiovascular diseases. World Health Organization website. http://www.who.int/mediacentre/factsheets/fs317/en/. Accessed June 20, 2015.
7. GEMLR Procedures Manual 2013. Available at: http://www.gemlr.org/uploads/1/4/6/4/14642448/2012_ gemlr_procedures_manual_abridged.doc. Accessed June 20, 2015.
8. Hogan D, Burstein J. Disaster Medicine. 2nd ed. Philadelphia: Lippincott Williams & Wilkins; 2007.
9. Hsia R, Razzak J, Tsai A, et al. Placing emergency care on the global agenda. Annals of Emergency Medicine 2010; 56(2):142-149.
10. Humanitarian Charter and Minimum Standards in Disaster Response. Available at: http://www.sphereproject.org/resources/download-publications/?search=1&keywords=&language=English&category=22. Accessed June 20, 2015.
11. Jacquet G, Foran M, Bartels S, et al. Global emergency medicine: a review of the literature from 2012. Academic Emergency Medicine 2013; 20(8):835-843.
12. Schroeder E, Jacquet G, Becker TK, et al. Global emergency medicine: a review of the literature from 2011. Academic Emergency Medicine 2012; 19(10):1196-1203.
13. Smith J, Haile-Mariam T. Priorities in global emergency medicine development. Emergency Medicine Clinics of North America 2005; 23:11-29.
14. Sphere Project. Available at: http://www.sphereproject.org/. Accessed June 20, 2015.
https://emra.org/books/nuts-and-bolts-of-global-emergency-medicine/chapter-2-global-emergency-medicine/
Teachers rarely get opportunities for reflection and collaboration with others outside their grade-level or departmental team. The term "community of practice" (CoP) was first introduced by Etienne Wenger, an education practitioner and scholar who described CoPs as "groups of people who share a passion for something that they know how to do and who interact regularly to learn how to do it better." Virtual CoPs provide an opportunity for educators to connect around similar topics, passions, and areas of expertise. There are communities of practice for many subjects, but ambassadors in our Participate Learning programs benefit from communities designed specifically for global educators and dual language educators. At Participate Learning, we see three main benefits of using a community of practice.

Encourages information and knowledge sharing
Teachers are often looking to exchange ideas and stay fresh by gaining new perspectives from others. Virtual communities of practice serve as a meeting place that can be joined at any time, from anywhere in the world. Facilitated discussions pose thought-provoking questions to generate open conversations between members. The act of sharing what works well and what doesn't can bring opportunities for growth and innovation in the classroom, empowering educators to try new things. Ambassadors in our dual language and global leaders programs benefit from the knowledge sharing that happens in each CoP. It provides a meaningful extension to the in-person professional development that happens at the beginning of each school year. Resource pages and discussions are spaces for educators to share helpful tools and best practices. For example, teachers within our middle school Spanish program, Conexiones, have access to specific curriculum resources and collaborate regularly in discussions about ways to improve student outcomes. "Communities of practice give me a feeling of belonging. I love connecting with my colleagues. I have learned so much. The ideas and resources shared are super helpful." – Ava-Gaye Blackford

Allows instant feedback and collaboration
Within communities of practice, there are also time-bound learning experiences developed to foster learning and connection. Time-bound learning experiences, or learning that occurs in quick bursts and utilizes blended-learning approaches, help participants achieve their specific learning objectives. These experiences encourage engagement and a sense of belonging within a group of individuals with a shared interest and passion. As adult learners, we are looking for ways to ask questions, to learn, and to share our expertise. Participating in a discussion about favorite technology tools or go-to global lesson plans provides an opportunity to troubleshoot and collaborate with others. Connecting educators with diverse learning experiences can create a lasting impact on teacher practice and student learning. This direct and authentic engagement between adult learners, when combined with opportunities to solve real problems, can increase leadership and confidence.

Connects educators from around the world
Many of our ambassadors are adjusting to the American education system for the first time. As part of their orientation week, teachers get professional development time to learn and share information that serves as a foundation in their classrooms. After their initial orientation training ends, they are eager to support each other through the challenges of the first few months.
Facilitated discussions among passionate educators can create a lasting impact on teacher practice and student learning. Joining a community of practice can also be motivating for new ambassadors because it allows them to learn from the experiences of others. One of our ambassadors recently commented, “The Designing Cultural Activities experience, I believe, is the way forward in a time where networking and shared experiences is critical to enhance and build a diverse global community.” Our ambassadors are part of a global community of passionate educators who are constantly looking for ways to improve their practice. The exchange of ideas within a supportive discussion space fosters creativity and growth, and helps to form lasting bonds between teachers. Why wait for an annual conference or professional development day to have these opportunities? Virtual discussions and collaborative learning can fit into busy teacher schedules, and can happen much more frequently within a community of practice. We believe that fostering connection and learning for our ambassadors is the foundation for uniting the world through global education. Find out more about our ambassadors and the competitive advantage of our programs here.
https://participatelearning.kleystaging.com/blog/communities-of-practice/
Species interactions restrict or promote population growth, structure communities, and contribute to the evolution of diverse taxa. I seek to understand how multiple species interactions are maintained and how human-altered species interactions influence evolution, and to explore factors that contribute to variation in species interactions. In Chapter 1, I examine how plants interact with multiple guilds of mutualists, many of which are costly interactions. The evolution of traits used to attract different mutualist guilds may be constrained by ecological or genetic mechanisms. I asked whether two sets of plant traits that mediate interactions with two guilds of mutualists, pollinators and ant bodyguards, were positively or negatively correlated across 36 species of Gossypium (cotton). Traits to attract pollinators were positively correlated with traits to attract ant bodyguards. Rather than interaction with one mutualist guild limiting interactions with another, traits have evolved to increase attraction of multiple mutualist guilds simultaneously. In Chapters 2 and 3, motivated by the fact that agriculture covers nearly 50% of the global vegetated land surface, I explore the consequences of changes in plant mutualist and antagonist guilds in agriculture for selection on plant traits. I first explore how agriculture alters the abundance and community structure of mutualist pollinators and antagonist seed predators of wild Helianthus annuus texanus. Mutualists were more abundant near crops, whereas antagonists were more abundant far from crops, near natural habitat. In addition, mutualist pollinator communities were more diverse near sunflower crops. Plant mutualists and antagonists thus respond differently to agriculture. Next, I explore how these changes in the abundance and community structure of mutualists and antagonists influenced natural selection on H. a. texanus floral traits. Natural selection on heritable floral traits differed near versus far from crop sunflowers, and overall selection was more heterogeneous near crop sunflowers. Furthermore, mutualist pollinators and antagonist seed predators mediated these differences in selection. Finally, in Chapter 4, I ask whether variation in interaction outcomes differs across types of species interactions, and I examine the relative importance of factors that create context-dependency in species interactions. Using a meta-analysis of 353 papers, we found that mutualisms were more likely than competition to change the sign of the interaction outcome when compared across contexts, and predation was the least likely to change sign. Overall, species identity caused the greatest variation in interaction outcomes: whom you interact with is more important for context-dependency than where or when the interaction occurs. Additionally, the most important factors driving context-dependency differed significantly among species interaction types. Altogether, my work makes progress in understanding how species maintain interactions with multiple guilds of mutualists, how agriculture alters species interactions and subsequent natural selection, and the variation in species interaction outcomes and its causes.

Chamberlain, Scott A. "Variation in Species Interactions and Their Evolutionary Consequences." Diss., Rice University, 2013. https://hdl.handle.net/1911/71135.
https://scholarship.rice.edu/handle/1911/71135
In general, exercise-induced asthma (EIA) is diagnosed clinically and may not need any further laboratory studies, imaging studies, or other tests and procedures. Laboratory evaluation is reserved for equivocal cases, for treatment failures, and for narrowing the differential diagnosis when that seems reasonable. Testing may then be appropriate to differentiate EIA from cardiac conditions, vocal cord and upper airway obstructive conditions, allergic conditions, and psychiatric conditions when these are strongly considered in the differential diagnosis. Imaging studies are often not indicated in the evaluation of routine EIA, but they may be useful for evaluating other possibilities in the differential diagnosis.

Allergy and Infection Evaluation
A complete blood cell count and differential can help in assessing the likelihood of infection, by analysis of the patient's white blood cells, and of allergy, by evaluation of the eosinophil count. Assessing the immunoglobulin E (IgE) level helps in determining the likelihood of allergic disease. If the diagnosis is uncertain, a nasal swab for the presence of eosinophils is helpful in identifying the role of allergic rhinitis. Skin allergen testing or a radioallergosorbent test (RAST) can be used to help identify specific allergens for patient avoidance or immunotherapy, if indicated. Either method has been used extensively in atopic workups. In young children, RAST testing may be preferable owing to its relative ease of administration, but it is a less specific test, so skin testing may be preferred in general. An erythrocyte sedimentation rate (ESR) or C-reactive protein (CRP) level may help in the evaluation of inflammatory and infectious conditions. Sputum analysis and culture can be used to help identify the presence of infection and treatment options for strains of resistant organisms.

Thyroid Function Evaluation
Thyrotropin levels can be used to evaluate for thyroid dysfunction when it appears likely that anxiety is mimicking the symptoms of asthma.

Radiography
Chest radiography is used to evaluate for signs of chronic lung disease (eg, hyperexpansion, scarring, fibrosis, hilar adenopathy), for congestive heart failure and/or valvular heart disease (eg, chamber enlargement, pulmonary edema, vascular or valvular calcification), and for a foreign body. Lateral neck radiographs with soft-tissue penetration can also evaluate the upper airway for a foreign body or obstruction. Go to Imaging in Asthma for complete information on this topic.

Echocardiography
Echocardiography may be used to evaluate for cardiac valvular abnormalities, global contractile function, dysrhythmia, cardiomegaly, or other heart disease that may manifest during exercise.

Laryngoscopy
Laryngoscopy can be performed to evaluate for a foreign body or other obstruction in the upper airway. Postexercise laryngoscopy can be used to evaluate for vocal cord dysfunction, a condition often mistaken for EIA. Vocal cord dysfunction manifests as stridor with exercise due to paradoxical contraction of the vocal cords with inspiration; it can be evaluated via laryngoscopy after an exercise challenge.

Challenge Tests
Various challenge tests exist that can be used to formalize the diagnosis of EIA.
A formal diagnosis is often not critical clinically, but in recent years, the US Olympic Committee (USOC) has required a positive challenge test to be documented for an athlete to qualify for the use of controlled substances that help ameliorate the symptoms of EIA. This requirement has resulted in new studies that have been used to validate some of these assessment tools, whether they are field challenges, treadmill testing, or newer techniques such as eucapnic voluntary hyperventilation (EVH). [6, 5, 7] At present, the USOC requires EIA to be diagnosed via EVH in order for preventive and treatment-related medications to be used in competition.

Treadmill exercise challenges with preexercise and postexercise pulmonary function testing
This type of testing formalizes an aerobic challenge and provides an objective measure of the degree of bronchospasm that results from exercise. The results can help the physician clarify the diagnosis and reinforce the treatment plan; they can also be used to evaluate the success of treatment. Before the exercise challenge, the patient's baseline pulmonary function levels should be obtained (preferably forced expiratory volume in 1 second [FEV1], forced vital capacity [FVC], or FEV1/FVC; or, less ideally, peak expiratory flow rate [PEFR]). The exercise challenge involves exercising the athlete on a treadmill until his or her heart rate reaches 70-85% of the maximum predicted heart rate. This is maintained for 6-10 minutes, at which time the exercise is stopped. Pulmonary function levels are measured every 2-10 minutes for 15-30 minutes and then compared with the baseline measurements. A drop from baseline of 10% or more on any postexercise measurement indicates EIA. Severity of disease can be classified as follows (a sketch of this grading logic appears at the end of this section):
- Mild - decrease of 10-20% from baseline
- Moderate - decrease of 20-40% from baseline
- Severe - decrease of greater than 40% from baseline

Informal exercise challenge
An informal exercise challenge can be substituted for the above procedure, but without monitoring the heart rate, the level of work is not reliable.

Pulmonary function testing
Pulmonary function testing can be used to evaluate baseline pulmonary function or allergic asthma and to categorize pulmonary function as obstructive or restrictive disease.

Bronchoprovocation testing
Bronchoprovocation testing, as used with general asthma (methacholine, histamine, or cold air challenges), can be used to assess asthma. However, positive results indicate asthma in general, not specifically EIA. A study of 46 children with exercise-induced asthma-like symptoms reported that a combination of the methacholine test followed by the mannitol test gives the highest yield for identifying bronchial hyper-responsiveness (BHR) in children for the diagnosis of exercise-induced asthma or bronchospasm. The combination of the methacholine and mannitol tests detected BHR in all of the children in whom BHR was found (93.5% of all the children), compared with exercise challenge testing, which detected BHR in 23.9%; bronchodilator testing, which detected BHR in 21.7%; mannitol testing, which detected BHR in 80%; and methacholine testing, which detected BHR in 91%. [19, 20]

Eucapnic voluntary hyperventilation
Eucapnic voluntary hyperventilation (EVH) is a technique believed to be more sensitive and more accurate for diagnosing EIA.
[6, 7] Furthermore, EVH can be applied in a laboratory setting and altered to mimic the environmental conditions of the sport in question. Go to Peak Flow Rate Measurement for complete information on this topic.
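To make the treadmill-challenge arithmetic above concrete, here is a minimal Python sketch of the target heart-rate window and the severity grading. It is an illustration only: the function names are hypothetical, and the common "220 minus age" estimate of maximum predicted heart rate is an assumption, since the article does not specify which formula to use.

```python
def target_heart_rate_range(age_years: int) -> tuple[float, float]:
    """Target window for the treadmill challenge: 70-85% of maximum
    predicted heart rate. The 220 - age estimate is an assumption;
    the article does not give a formula."""
    max_hr = 220 - age_years
    return 0.70 * max_hr, 0.85 * max_hr


def classify_eia_severity(baseline_fev1: float,
                          postexercise_fev1: list[float]) -> str:
    """Grade EIA from the largest percentage drop in FEV1 across the
    serial postexercise measurements (taken every 2-10 minutes)."""
    worst_drop = max(
        (baseline_fev1 - value) / baseline_fev1 * 100.0
        for value in postexercise_fev1
    )
    if worst_drop < 10:
        return "no EIA (drop < 10% from baseline)"
    if worst_drop <= 20:
        return "mild (10-20% decrease)"
    if worst_drop <= 40:
        return "moderate (20-40% decrease)"
    return "severe (> 40% decrease)"


if __name__ == "__main__":
    low, high = target_heart_rate_range(age_years=20)  # -> 140-170 bpm
    print(f"target HR window: {low:.0f}-{high:.0f} bpm")
    print(classify_eia_severity(4.0, [3.9, 3.4, 3.6]))  # 15% drop -> mild
```

For example, a baseline FEV1 of 4.0 L with a worst postexercise value of 3.4 L is a 15% drop, which this sketch grades as mild.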
https://emedicine.medscape.com/article/1938228-workup
Come Prepared. Do Your Best. Make Smart Decisions. Show Respect.

Chuckey-Doak Middle School

Black Knight Tradition
With 14+ years of academic excellence, CDMS strives to build upon its achievements to continue to be one of the top middle schools in the state of Tennessee.

The Knights Oath
"As a Knight in the Chuckey-Doak Middle School family, I pledge to be honest with myself, with others and in my work; to respect myself, my peers, adults and school property; and to be responsible for my actions."

Come Prepared
Do Your Best
Make Smart Decisions
Show Respect

Beliefs
At Chuckey-Doak Middle School we believe that students...
• should be able to function successfully at all levels of society and understand local, national, and global civic responsibilities, demonstrate active citizenship, and develop the interpersonal skills needed to function in society, including accepting responsibility for their behavior.
• should be actively engaged in purposeful learning that prepares them for their academic future.
• have diverse strengths, skills, interests, backgrounds, and potential.
• should respect and value the diversity of people within the community.
• should read deeply to independently gather, assess, and interpret information from a variety of sources, and read avidly for enjoyment and lifelong learning.
• should develop strengths, skills, and interests to understand their potential contribution to society.

At Chuckey-Doak Middle School we believe that teachers...
• should foster the emotional, social, and academic growth of students.
• should demonstrate a positive attitude by caring about and supporting students and each other.
• should communicate clearly with students, parents, and staff.
• should be lifelong learners.
• should respect and recognize the diversity of the student population within the classroom.
• should present rigorous instruction in a dynamic and purposeful manner.

At Chuckey-Doak Middle School we believe that the school...
• should provide various programs to enhance higher levels of learning, including use of current technology.
• should be a safe, inclusive, and supportive environment.
• should provide support for successful transitions.
• should include stakeholders in decision-making and encourage stakeholder involvement.
• should assist in the development of positive character and integrity.
https://cdms.greenek12.org/
The picture represents a Jewish orchestra with spectators sitting beside it. In the center, on a long bench, sit (from left to right) two violinists and a drummer; a third violinist stands to their left. Spectators sit and stand to the left and right of them. A large chandelier emanating black rays hangs in the center.

Name/Title: Akselrod, Before the Wedding
Date: 1921
Origin: Belarus | Minskaia vobl. | Minsk
Copyright: Yachilevich Family
https://cja.huji.ac.il/?mode=set&id=28699
The United Nations Framework Convention on Climate Change (UNFCCC) and the scientific community recognize that climate change is a threat to agriculture, and that all aspects of food systems, such as crop and animal production, food processing, fish stocks, and trade, are vulnerable. Food system activities are directly dependent upon and inherently interconnected with climate and weather. The effects of climate change are already being felt globally, and Canada is not immune to their impacts on its agricultural sector with respect to producing safe, high-quality food and maintaining a constant supply. According to the Government of Ontario's Poverty Reduction Strategy Office (2017), Ontarians benefit from one of the world's best food systems and enjoy the low costs associated with high-quality, safe, and nutritious foods. Economically, the agricultural and food sector contributes $49 billion to Canada's gross domestic product (GDP), $15.3 billion of it in Ontario, and supports more than 790,000 jobs in the province. While Ontarians generally enjoy quality food, 12% of Ontarians are affected by food insecurity (GoO, 2017). Food insecurity in Canada and across the globe will soon be a major challenge as a result of climate change, because it will affect the availability, access, quality, and safety of food systems through increased demand and decreased production.

This essay will discuss the diverse ways in which climate change affects food systems, because agricultural processes are inherently nature-based. The author acknowledges that the impacts of climate change will present food producers with both opportunities and risks; however, for this short essay, the impacts of climate change on agricultural crop yields, the loss of crops due to an increase in pests and diseases, and the influence of extreme events on global trade, food transport, and food prices will be discussed, covering availability, access, utilization, and stability, which are the basics of food systems in the built environment.

Research has indicated that agricultural crop production will be affected by climate change due to changes in atmospheric carbon dioxide concentrations, temperature, and precipitation, because of variations in photosynthesis, respiration rates, water use efficiency, and soil C and N biochemical transformations (Wang et al., 2014; Long et al., 2015). Research has also shown that the seasons are getting longer and warmer in Canada; this will affect water availability, as it will alter the intensity and frequency of droughts and precipitation, and there will be an increase in the intensity of storms, which will affect farm production and the supply of food. Climate change will have both positive and negative impacts on food systems by increasing or decreasing crop production, storage, processing, distribution, and exchange of food. According to the IPCC, as global temperatures increase, crop production will be negatively affected. Iizumi et al. (2018) investigated the effects of temperature increase on crop yields between 1981 and 2010 and found that yields of maize, soybean, and wheat decreased globally by 4.1, 4.5, and 1.8%, respectively. These crops are staples for many countries, especially in Africa; a decrease in food staples will have a drastic effect on the livelihoods of farmers, and it will cause an increase in food prices and affect food availability.
Ketiem et al. (2017) indicated that yields of staple foods such as mangoes, maize, wheat, corn, and fruit crops have decreased on the continent. The effects of climate change have also affected India's crop production, as wheat yields decreased by 5.2% between 1981 and 2009 (Gupta et al., 2017); however, the opposite was recorded elsewhere in Asia with wheat production. Like Gupta et al. (2017), Tao et al. (2014) looked at wheat yields in different climatic zones over the same period and found that the northern region of China experienced increased wheat production while southern China was negatively impacted. For temperate countries like Canada, an increase in temperature may increase local food production with the use of adaptation methods such as greenhouses, cold-resistant fodder, longer growing seasons, and hotter summers, which will increase the capacity to grow food (Roussin et al., 2015). Canada has benefited from the increase in global temperatures through increased productivity: some provinces are experiencing a longer growing season for soybeans and corn, shifting production into Saskatchewan; in British Columbia the benefits were longer periods for grazing and livestock operations; in Prince George, the production of canola; and in Quebec's Montérégie region, a longer growing season increased the production of soybean, corn, maple syrup, and forage (Warren et al., 2014). While some sectors have benefited from the increase in temperatures, Canada and other countries have also been affected by the negative effects of climate change mentioned above, namely reduced water availability, pests and diseases, extreme weather events, droughts, invasive species, and a reduction in crop yields. Canada will benefit from opportunities to increase agricultural exports to other countries (Warren et al., 2014), whereas some countries will become more dependent on food imports because of a decreased capacity to grow food and supply the increasing demand for food and other agricultural products; examples include countries in the Caribbean such as the Turks and Caicos, Bermuda, and the Eastern Caribbean islands, which are affected by pests and diseases and by extreme weather events such as extended dry seasons, decreased water availability, flooding, and hurricanes.

Warmer weather can cause an increase in pests, weeds, invasive species, and diseases, as pests found in the south may move northwards. Changes in temperature and humidity may increase insect-borne diseases as insects' temperature limits move poleward, and regions long believed to be climatically protected from certain pests may find themselves open to infestation and contagion (Mozell, 2014). An increase in pests and diseases will also affect global food supply, food quality, and costs, as farmers will have to spend more money and time fighting pests and diseases. Deutsch et al. indicated that insects consume about 10 percent of the globe's food and forecast that insects will consume 15-20 percent more crops by the end of the century. Deutsch et al. stated that if the global temperature were to increase by 1.5 degrees Celsius, the world would lose 48 million tons of wheat, rice, and corn to insects, as pests are predicted to increase in size and their metabolic rates would also increase.
Farmers will have to use more fertilizers and pesticides to increase crop yields and control pests; this will cause environmental and health damage, and it will also decrease the global food supply, which will drive up prices for staple foods. The effects of climate change will increase inequality within food systems, as they will affect food access, choice and availability for lower- and middle-income families. Extreme weather events such as droughts, flooding and hurricanes will have a direct impact on agricultural production, disrupting global trade and the transportation of food and increasing international prices for grains and other food commodities traded globally. The increased costs associated with food transportation due to extreme weather events, together with the costs of implementing climate change adaptation mechanisms to reduce the effects of climate change on crop production, storage, processing and transportation, will cause retail food prices to increase and will reduce the purchasing power of middle- and low-income families. This will have a spin-off effect on the public health system, as it will lead to less healthy diets, malnutrition and increased diet-related mortality as families are forced to change what they eat; it will have a direct impact on the health and well-being of the most vulnerable in communities, and more people will be forced to depend on government subsidies or food banks (Warren et al., 2014).

Climate change will affect all aspects of food systems (Mbow et al., 2019), especially food stability and access for Small Island Developing States and for countries whose food security and stability are already being affected by extreme climatic events. It is important that governments develop adaptation measures and policies to turn climate risks into opportunities and benefits by decreasing their vulnerability to the effects of climate change and ensuring that the most vulnerable in their country can afford healthy, high-quality foods. This can be done through funding climate-risk research and development, compensating farmers for crop failure due to extreme weather events, creating a disaster contingency fund or weather insurance program, subsidizing transportation for imported food, and reducing import taxes on staple foods. These measures will help minimize the impact of climate change on food systems by ensuring that the costs of producing, processing, transporting, storing and distributing food are not passed on to the most vulnerable in society. They also ensure that families will have access to healthy, high-quality and sufficient food to maintain a nutritious diet.
https://envrexperts.com/free-essays/essay-about-effect-climate-change-canada
Volcanoes are vents in the Earth's crust that periodically expel lava, gas, rock and ashes. Some types of volcanoes explode quite violently, and many of these look like hills or mountains with steep slopes. These slopes may be covered in vegetation and barely recognizable as volcanoes, depending on the dates of their last eruptions. There are three types of volcanoes that erupt violently and also possess steep slopes.

Distinguishing Features and Mechanisms

Whether a volcano explodes with violent force depends on the consistency of the magma, or molten rock, inside of it. Volcanoes that contain thin, runny magma -- like those that made the Hawaiian chain of islands -- don't typically produce violent explosions, while those with thick, viscous magma do. This is because thinner magma allows potentially explosive gases to easily escape into the atmosphere, while thicker magma prevents these gases from escaping. The denser type of magma often contains silica, which acts as a thickening agent. Eventually, the gases build up and exert so much pressure on the volcano that it bursts open in a violent eruption. Once it has erupted, magma is called lava. Many of the world's most violently exploding and steep-sloped volcanoes are located near subduction zones. Subduction zones are tectonic plate boundaries at which oceanic plates slide underneath continental plates. Examples include the coastal U.S. Pacific Northwest and southern Alaska, which contain numerous violent, steep-sided volcanoes, such as the infamous Mount St. Helens.

Composite Volcanoes

Approximately 60 percent of the volcanoes on Earth are composite volcanoes. Also known as stratovolcanoes, these steep-sided symmetrical mountains can rise to heights of 8,000 to 10,000 feet (2,438 to 3,048 meters). Some of the world's most majestic mountains are composite volcanoes, including Washington's Mount Rainier and Mount St. Helens, Oregon's Mount Hood, Japan's Mount Fuji and Italy's Mount Etna. Each of these volcanoes contains a conduit system that extends deep below the Earth's crust and culminates in a magma-containing reservoir. Stratovolcanoes generally experience long periods of dormancy between eruptions, but when they do erupt, they usually do so with great ferocity, spewing lava and ash high into the air and sometimes causing avalanches, landslides and mudflows.

Cinder Cones

Cinder cones are simple, easily recognizable volcanoes. Made from loose, granular cinders, they are circular or oval in shape and contain bowl-shaped craters at their summits. They don't attain the soaring heights of composite volcanoes, generally rising no more than 1,000 feet (304 meters) above the surrounding landscape, and they don't emit the enormous volume of material that stratovolcanoes do. However, they feature very steep slopes and forceful explosions in which gas-charged lava blows out violently. Cinder cone volcanoes are relatively common in western North America. Examples include Paricutin in Mexico and the unnamed volcano on Wizard Island in Oregon's Crater Lake.

Lava Domes

Lava dome volcanoes typically develop out of composite volcanoes, when small, thick, bulbous pools of lava collect around a volcano's vent following an eruption. Lava domes can grow quickly, becoming noticeably larger over a period of mere months. They often form steep-sided mounds, some of which may be so steep that they appear as obelisks. Lassen Peak in California and Mont Pelee on the island of Martinique are lava dome volcanoes.
Also, lava domes may be contained within other types of volcanoes, such as the Novarupta Dome, which is located inside Alaska's Katmai volcano, and several unnamed domes within Mount St. Helens' crater.
https://sciencing.com/types-volcanoes-violent-steep-slopes-6773.html
Equanimity (Upeksha Bhavana)

This month I will be presenting the final installment of my introduction to the four immeasurable states of mind, the four divine abodes, which are loving-kindness, compassion, sympathetic joy, and equanimity. We will conclude with a discussion of equanimity. Equanimity is the final term of the four divine abodes, and could also be called the finale. Each of the four divine abodes can be considered a different mode of loving-kindness, the wish that all beings may be well and happy. When one regards beings that are afflicted with suffering, loving-kindness becomes compassion. When one regards beings that are enjoying the rewards of meritorious action, loving-kindness becomes sympathetic joy. Equanimity, however, is the feeling of even-mindedness in the face of both suffering and joy. It is the ability to be equal-minded in all circumstances and towards both friend and foe. It is the ability to regard all beings with loving-kindness without any trace of partiality or bias. This is loving-kindness developed to the point where all boundaries are transcended. It is also the divine abode where loving-kindness becomes more than just a feeling of well-wishing: it becomes the way to an unshakeable peace and serenity.

The Lotus Sutra stresses joy and compassion, but equanimity is also unmistakably present. In chapter two the Buddha announces that the one great purpose for which the buddhas appear in the world is to open the gate to the insight of the buddhas for all beings, to show the insight of all buddhas to all beings, to cause all living beings to obtain the insight of the buddhas, and to cause all beings to enter the way to the insight of all buddhas. (see The Lotus Sutra, p. 32) In this declaration no being is left out. Furthermore, the Buddha declares that he teaches only bodhisattvas and that in fact the One Buddha Vehicle is for all beings and not just some. This is the impartiality of the Buddha. Chapter 16 ends with a powerful reiteration of the Buddha's constant care for all beings without any partiality or bias:

I know who is practicing the Way and who is not.
Therefore, I expound various teachings
To all living beings
According to their capacities.
I am always thinking
'How shall I cause all living beings
To enter into the unsurpassed Way
And quickly become Buddha?'
(The Lotus Sutra, p. 249)

Other noteworthy chapters that stress the equal regard that the Buddha and the bodhisattvas have for all beings are chapter 14, Peaceful Practices, and chapter 20, Bodhisattva Never Despise. However, the value of equanimity and impartial regard for all beings can be found throughout the Lotus Sutra.

Equanimity also describes the attitude we should cultivate towards the vicissitudes of life. Prior to the Lotus Sutra, the Buddha used the formula of the eight winds to describe the different circumstances, both good and bad, that can sway the minds of deluded beings and cause them to lose their equanimity. Nichiren himself referred to the eight winds in his writings: "Worthy persons deserve to be called so because they are not carried away by the eight winds: prosperity, decline, disgrace, honor, praise, censure, suffering, and pleasure. They are neither elated by prosperity nor grieved by decline. The heavenly gods will surely protect one who is unbending before the eight winds." (WND, p. 794) In a letter attributed to Nichiren, it is made clear that equanimity and faith in Namu Myoho Renge Kyo go hand in hand: "Suffer what there is to suffer, enjoy what there is to enjoy.
Regard both suffering and joy as facts of life, and continue chanting Namu Myoho Renge Kyo, no matter what happens. How could this be anything other than the boundless joy of the Dharma? Strengthen your power of faith more than ever." (WND, p. 681)

The Lotus Sutra itself teaches that all things are insubstantial, empty, and exist only by virtue of causes and conditions. When viewed in this way, we can see that both good and bad circumstances are impermanent and conditioned, and therefore, from the ultimate perspective, nothing to lose our equanimity over. In fact, because things have no fixed nature or substance, they can even be viewed as peaceful to the core, because they bring no permanent disturbance or mark. Taking this view, chapter 2 of the Lotus Sutra states: "All things are from the outset in the state of tranquil extinction." (The Lotus Sutra, p. 39) Peace and serenity are the results of such a view, and action based on serenity and an impartial love and kindness for all beings is what the divine abode of equanimity is all about. It should never be mistaken for indifference or aloofness. It is nothing less than the mind that motivates the Buddha to embrace all beings in all circumstances with innumerable expedients in order to lead them all to the One Buddha Vehicle, so that they too can experience this peace and serenity for themselves.

So how can we cultivate equanimity? Just as with loving-kindness, compassion, and sympathetic joy, we can cultivate it through a series of traditional exercises that can be combined with Shodaigyo meditation so as to bring them within the context of the Odaimoku.

1. Take a few moments to just sit with yourself and breathe. Maybe do a cycle of ten breaths or more, counting the breaths if necessary. Non-judgmentally take notice of your physical and mental state. Then begin to cultivate equanimity for yourself by considering both the good and bad things that you have experienced and how all these things are passing manifestations of causes and conditions that are not fixed and have no substance. You may even want to repeat to yourself, "No matter what conditions arise, may I dwell forever in the limitless realm of the ever-present peaceful heart and serene mind." Do this for a few minutes at least.

2. Now take a few moments to extend equanimity to a stranger, or to someone about whom one does not have any particularly strong feelings one way or another. Unlike with the other three divine abodes, we begin with the neutral person here because it is easier to get in touch with the feeling of impartiality and non-bias. Consider how this person also experiences good and bad things due to causes and conditions. Extend the thought of equanimity to them by repeating to yourself, "May this person dwell forever in the limitless realm of the ever-present peaceful heart and serene mind."

3. Now take a few minutes to extend equanimity to a person who is a benefactor or friend, but preferably not someone we have or would like to have an intimate relationship with, as this would generate strong feelings of attachment. Wish that they may dwell forever in the limitless realm of the ever-present peaceful heart and serene mind.

4. Now imagine someone that one has a problem liking or getting along with, and extend to them the wish that they may dwell forever in the limitless realm of the ever-present peaceful heart and serene mind.
If we can maintain this wish with as much conviction and strength for those we have difficulty with as we had for the friend or benefactor, then we will know that we are truly beginning to develop a mind of equanimity which is impartial and unbiased.

5. Now spend some time extending equanimity simultaneously to oneself, a neutral person, a friend or benefactor, and the person who is hard to get along with. This is the true test of equanimity, impartiality, and non-bias. It can be extremely difficult to do, as it takes a universal perspective and not the perspective of our own sentiment or self-interest.

6. Finally, one should spend some time imagining that all beings in all directions may dwell forever in the limitless realm of the ever-present peaceful heart and serene mind, thereby extending the feelings generated in the previous exercises. This part is more abstract, but its point is to enable us to cultivate, or at least imagine, a truly universal equanimity that looks with equal favor and loving-kindness upon all beings.
http://fraughtwithperil.com/ryuei/2010/06/16/equanimity-upeksha-bhavana/
Biological control of plant diseases and plant pathogens is of great significance in forestry and agriculture. There is great incentive to discover biologically active natural products from higher plants that are better than synthetic agrochemicals and are much safer, from a health and environmental ...

- Evapotranspiration: Principles and Applications for Water Management. This book covers topics on the basic models, assessments, and techniques to calculate evapotranspiration (ET) for practical applications in agriculture, forestry, and urban science. This simple and thorough guide provides the information and techniques necessary to develop, manage, interpret, and ...

- Justus Ludewig von Uslar, and the First Book on Allelopathy. Allelopathy is a fascinating and perplexing topic that concerns the chemical interactions of plants. It has profound implications in agriculture and forestry where species are grown artificially in mixture, with no evolutionary history of co-existence. The topic of allelopathy is widely credited as commencing in ...

- Symbiosis of Plants and Microbes. Symbiotic associations are of great importance in agriculture and forestry, especially in plant nutrition and plant cultivation. This book provides an up-to-date and lucid introduction to the subject. The emphasis is on describing the variety of symbiotic relationships and their agricultural and ...

- Climate Change: Significance for Agriculture and Forestry. Societies throughout the world depend on food, fiber and forest products. Continuity and security of agricultural and forest production are therefore of paramount importance. Predicted changes in climate could be expected to alter, perhaps significantly, the levels and relative agricultural and forestry ...

- North American Agroforestry: An Integrated Science and Practice. Because of the environmental consequences of past agricultural and forestry practices that focused exclusively on the economic bottom line, the public now demands greater accountability and the application of more ecologically and socially friendly management approaches. Introductory chapters focus on the ...

- Modern Trends in Applied Terrestrial Ecology. , zoology, ecology, vegetation science, agriculture, forestry and population biology to name a few. ...

- Aphid Ecology. Aphids are the most important of the sap-sucking insects; they are also major pests of agriculture, horticulture and forestry. This book covers the evolution of aphids and their development in relation to specific plants. Optimization is used to explain how modes of feeding and reproduction have ...

- Management of Mycorrhizas in Agriculture, Horticulture and Forestry. This book is the most up-to-date and comprehensive review of our knowledge of the management of mycorrhizas in agriculture, horticulture and forestry. It contains twenty-four reviews written by leading international scientists from eight countries. The reviews consider the ecology, biology and ...

- Nitrogen Fixation in Agriculture, Forestry, Ecology, and the Environment. This book is the self-contained fourth volume of a seven-volume comprehensive series on nitrogen fixation. The outstanding aspect of this book is the integration of basic and applied work on biological nitrogen fixation in the fields of agriculture, forestry, and ecology in general. Nowadays, the ...

- Mycorrhizae: Sustainable Agriculture and Forestry. Mycorrhizal fungi are microbial engines which improve plant vigor and soil quality.
They play a crucial role in plant nutrient uptake, water relations, ecosystem establishment, plant diversity, and the productivity of plants. Scientific research involves multidisciplinary approaches to understand the adaptation of mycorrhizae to the rhizosphere, mechanism of root colonization, effect on plant ...

- Allelopathy in Sustainable Agriculture and Forestry. Simply put, allelopathy refers to an ecological phenomenon of plant-plant interference through release of organic chemicals (allelochemicals) in the environment. These chemicals can be directly and continuously released by the donor plants in their immediate environment as volatiles in the air or root exudates in soil, or they can be the microbial degradation products of plant residues. The ...

- Trace Elements in the Rhizosphere. The first book devoted to the complex interactions between trace elements, soils, plants, and microorganisms in the rhizosphere, Trace Elements in the Rhizosphere brings together the experimental, investigative, and modeling branches of rhizosphere research. Written by an international team of authors, it provides a comprehensive overview of the mechanisms and fate of trace elements in the ...

- Boron in Soils and Plants: Reviews. Boron in Soils and Plants: Reviews is the most up-to-date and comprehensive review of our knowledge of boron in soils, plants and animals. This volume coincides with a period of significant progress in boron research. It covers recent advances in the identification of the physical and chemical role of B in the cell wall, the characterisation of the genetic basis for differences in B ...

- Novel Aspects of the Biology of Chrysomelidae. agricultural pests. The Colorado potato beetle, the cereal beetle, flea beetle and the corn root worms ...

- Applied Population Biology. An increasing variety of biological problems involving resource management, conservation and environmental quality have been dealt with using the principles of population biology (defined to include population dynamics, genetics and certain aspects of community ecology). There appears to be a mixed record of successes and failures and almost no critical synthesis or reviews that have ...

- Microbial Root Endophytes. Plant roots may not only be colonized by mycorrhizal fungi, but also by a myriad of bacterial and fungal root endophytes that are usually not considered by the investigators of classic symbioses. This is the first book dedicated to the interactions of non-mycorrhizal microbial endophytes with plant roots. The phenotypes of these interactions can be extremely plastic, depending on environmental ...

- Concepts in Mycorrhizal Research. Mycorrhiza will be the focus of research and study for the coming decade. Successful survival and maintenance of plant cover is mostly dependent on mycorrhization. During the last decade about ten books have appeared on various aspects of mycorrhiza, including two on methodology. The present book has been compiled to give a complete and comprehensive description of the topic to the students ...

- Agrometeorology: Principles and Applications of Climate Studies in Agriculture. Agrometeorology: Principles and Applications of Climate Studies in Agriculture is a much-needed reference resource on the practice of merging the science of meteorology with the service of agriculture. Written in a concise, straightforward style, the book presents examples of clinical applications (methods, ...
- Towards the Rational Use of High Salinity Tolerant Plants The symposium on high salinity tolerant plants, held at the University of Al Ain in December 1990, dealt primarily with plants tolerating salinity levels exceeding that of ocean water and which at the same time are promising for utilization in agriculture or forestry. The papers of the ...
https://www.agriculture-xprt.com/books/keyword-forestry-and-agriculture-138963
Company: Allegheny Health Network

Job Description:

GENERAL OVERVIEW: Collaborates with the trauma program medical director (TMD) and trauma program director/manager to ensure the trauma center provides quality patient care.

ESSENTIAL RESPONSIBILITIES:
- Manages the data flow process for performance improvement. Supervises and guides performance improvement in accordance with the Pennsylvania Trauma Systems Foundation (PTSF) Standards for Accreditation. Manages the Performance Improvement plan in conjunction with the TMD and trauma program director/manager. Responsible for the aggregation, monitoring and reporting of regulatory-mandated and/or program-specific quality metrics. Reviews adverse events, monitors for trends, and develops and plans practice changes in collaboration with the TMD and trauma program director/manager. Maintains the peer review process for the trauma program. (70%)
- Supports the trauma program in maintaining PTSF Accreditation, including the application and formal site survey. (10%)
- Assists with the daily flow of the trauma program. (15%)
- Acts as a representative for the trauma program. (5%)
- Performs other duties as assigned or required.

QUALIFICATIONS:

Minimum
- Bachelor's Degree in Nursing
- RN License in the State of Pennsylvania
- Basic Life Support (BLS)
- 5 years of nursing practice
- Experience with the PTSF Standards for Accreditation or comparable experience with regulatory agencies

Preferred
- Master's degree in Nursing or related field
- ACLS and CEN, CFRN, CCRN or TCRN
- Previous performance improvement experience
- Advanced computer skills
- Excellent knowledge of Epic
- Trauma nursing experience in either an Emergency Department or Intensive Care Unit

Disclaimer: The job description has been designed to indicate the general nature and essential duties and responsibilities of work performed by employees within this job title. It may not contain a comprehensive inventory of all duties, responsibilities, and qualifications required of employees to do this job.

Compliance Requirement: This job adheres to the ethical and legal standards and behavioral expectations as set forth in the code of business conduct and company policies. As a component of job responsibilities, employees may have access to covered information, cardholder data, or other confidential customer information that must be protected at all times. In connection with this, all employees must comply with both the Health Insurance Portability and Accountability Act of 1996 (HIPAA) as described in the Notice of Privacy Practices and Privacy Policies and Procedures, as well as all data security guidelines established within the Company's Handbook of Privacy Policies and Practices and Information Security Policy. Furthermore, it is every employee's responsibility to comply with the company's Code of Business Conduct. This includes, but is not limited to, adherence to applicable federal and state laws, rules, and regulations as well as company policies and training requirements.

Highmark Health and its affiliates prohibit discrimination against qualified individuals based on their status as protected veterans or individuals with disabilities, and prohibit discrimination against all individuals based on their race, color, age, religion, sex, national origin, sexual orientation/gender identity or any other category protected by applicable federal, state or local law.
Highmark Health and its affiliates take affirmative action to employ and advance in employment individuals without regard to race, color, age, religion, sex, national origin, sexual orientation/gender identity, protected veteran status or disability. EEO is The Law. Equal Opportunity Employer: Minorities/Women/Protected Veterans/Disabled/Sexual Orientation/Gender Identity ( https://www.eeoc.gov/sites/default/files/migrated_files/employers/poster_screen_reader_optimized.pdf ) We endeavor to make this site accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact the number below.
https://careers.highmarkhealth.org/explore-jobs/job/j205546-rn-trauma-performance-improvement-coordinator/
We have been awarded $999,519 for "Long distance connectivity for superconducting quantum-bits" within the MBIE Smart Idea scheme!

The goal of this project is to develop the technologies needed to efficiently, coherently and reversibly convert individual microwave photons into individual optical photons. The motivation for this work is communicating and performing calculations using inherently quantum mechanical states. There has been spectacular progress towards this end using devices based on superconducting qubits. Superconducting qubits are small resonant circuits made of superconductors. One of their key advantages is that they are manufactured in a similar way to today's computer chips, with the same benefits of robustness and ease of manufacture at reasonable cost. One of the downsides is that they have to be cooled to only a fraction of a degree above absolute zero. These extreme operating temperatures mean that it is very hard to send quantum signals to and from the computer, and currently there are no technologies that work. This lack of a quantum interface means superconducting qubit computers cannot be connected together to form a quantum network, severely limiting their application to super-secure quantum cryptography and to powerful distributed quantum computation. In this project we will improve the efficiency of the conversion of microwave photons to optical photons.
http://quantumchaos.de/post/mbie/
The ECO-SEE consortium members have started their communication and dissemination activities, actively promoting the project at relevant conferences and events all over Europe. Representatives from consortium member Acciona attended the IndustryTech 2014 exhibition and conference in Athens in April 2014. The event brought together an interesting audience from nanotechnology, biotechnology, advanced materials and new production technologies. It offered opportunities for developing valuable research and industry collaborations, and showcased cutting-edge research, the latest innovations and rising companies from all around Europe. ECO-SEE was promoted through individual meetings and networking with interested stakeholders, and flyers were handed out at the company booth. Another partner, Claytec from Germany, attended the 2014 Hannover Messe and presented ECO-SEE on April 14, during a conference session entitled "From Eco-Innovation to System Innovation: ECO-INNOVERA - Boosting eco-innovation in research and dissemination". The presentation focused on natural materials and systems for green buildings. Partners are happy to engage with interested stakeholders and discuss project activities, so don't hesitate to contact us for further information!
http://eco-see.eu/news-events/9-news/29-eco-see-dissemination-activities.html
The potential availability of a current-year tax deduction for contributions made to a traditional IRA is perhaps its most appealing feature for many taxpayers. After all, getting a tax deduction for your saving activities is like an instant return on investment. For example, if you make a $5,000 contribution that nets you a full $5,000 deduction on your tax return, and your effective tax rate is 25%, then you're effectively paying just $3,750 for that $5,000 deposit. But there are situations where that deduction may not be available. In many situations, making a contribution to a non-deductible self-directed IRA will still be in your best long-term financial interests.

- When You're Not Eligible to Make Contributions to a Deductible IRA. Not everyone is eligible to make a tax-deductible contribution to an IRA. For example, if you're covered by a retirement plan at work (such as a 401(k)), then you are only eligible to make fully tax-deductible contributions if your modified adjusted gross income is $61,000 or less and you file as a single taxpayer. (A partial deduction is available if your income is between $61,000 and $71,000.) Couples who file jointly must have a modified AGI of $90,000 or less (with a partial deduction available for incomes between $90,000 and $118,000). Despite not being able to get the deduction, the funds you deposit into a self-directed IRA will grow on a tax-advantaged basis. In the case of a Roth self-directed IRA, you'll never pay taxes on distributions that you take from your account once you reach age 59½.

- When the Value of the Contribution Deduction Is Low. Even if you are eligible to make a deductible contribution to a traditional self-directed IRA, it's important to determine exactly what the value of the deduction will be, and weigh that against the value you could receive in the longer term by making a contribution to a Roth account. For example, taxpayers with relatively low modified AGIs (which may be due to having a large number of other deductions available, and/or a lower income) might not find an IRA deduction to be particularly valuable. Keep in mind that the IRA contribution deduction is a deduction, not a credit, so the lower your modified AGI, the lower the effective value of that deduction will be. For example, if a $5,000 contribution only yields a tax bill that's a few hundred dollars lower, then you should probably consider making that year's contribution to a Roth self-directed IRA instead, in order to receive tax-free distributions once you hit retirement.

- When You Have Other Sources of Retirement Income. The rules on required minimum distributions are not given enough attention by many retirement savers. These rules require that, once you reach age 70½, you must begin taking distributions each year from your traditional IRA. This means that not only will your account balance not be able to grow as much, you'll also be hit with a tax bill on those distributions. In contrast, a Roth self-directed IRA is not subject to those rules. Therefore, if you anticipate having a large nest egg and/or various other sources of income during retirement, your best long-term financial bet may be to prioritize non-deductible contributions to your Roth account rather than trying to make deductible contributions to a traditional account.
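To make the arithmetic above concrete, here is a minimal sketch (in Python) of the effective out-of-pocket cost of a deductible contribution, using the single-filer phase-out range quoted above. The straight-line proration is a simplification of my own; the actual IRS worksheets apply rounding rules this sketch ignores.

```python
# Illustrative only: effective cost of a deductible IRA contribution,
# using the single-filer phase-out range quoted in the post ($61k-$71k).
# The linear proration below is a simplification, not the IRS worksheet.

def deductible_fraction(magi, lower=61_000, upper=71_000):
    """Fraction of the contribution that is deductible at this MAGI."""
    if magi <= lower:
        return 1.0
    if magi >= upper:
        return 0.0
    return (upper - magi) / (upper - lower)

def effective_cost(contribution, magi, tax_rate):
    """Out-of-pocket cost after the current-year tax saving."""
    deduction = contribution * deductible_fraction(magi)
    return contribution - deduction * tax_rate

# The example from the post: full deduction at a 25% effective rate.
print(effective_cost(5_000, 50_000, 0.25))   # 3750.0
# Mid-phase-out: only half the contribution is deductible.
print(effective_cost(5_000, 66_000, 0.25))   # 4375.0
```

The second call shows why a partial deduction can tip the decision towards a Roth: at a MAGI of $66,000 only half the $5,000 is deductible, so the instant "return" shrinks accordingly.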
https://www.questtrustcompany.com/tag/self-directed-ira-administrator/
Have you ever wondered why a big company's CEO can cash in on millions of dollars in bonuses and still remain untouchable by the Federal Government? Have you ever wondered why the media always talks about ethical issues, yet nothing ever happens to those people? Recently, I was watching a documentary on how big companies are moving their IPs (intellectual properties) to Ireland, and paying only 15% tax there instead of paying 35% tax in the United States. Then I asked myself a question: Is this legal? The answer came out to be "YES". They are not doing anything wrong legally, so the government in the United States can't do anything about this. This raised another question in my mind: Is it ethical? And surprisingly, the answer also came out to be "YES". Here is why.

Basics of business: Have you ever seen the definition of a business? If you have, then you know what I am talking about. In defining a business, ethics don't enter the picture at all. The sole purpose of a business is to increase the value for its stakeholders. So can you blame those businesses who are taking advantage of the lower tax policies in Ireland to increase their net income? It might be morally wrong for those businesses to show all of their profit in Ireland while they get 50-70% of their profit from the United States, but you can't do anything about that. As more and more countries loosen their tax policies to attract foreign businesses, there will always be some companies that want to move there to increase their net profit by paying lower taxes.

Definition of ethics: In my opinion, the definition of ethics is very subjective. I don't think you can have clearly defined ethical standards globally. Whenever you try to define ethics, it doesn't remain ethics anymore; it becomes a law or a rule. For example: if you think it's unethical for people to trade on the basis of insider information, and you want to change that, then you might want to change the law that punishes those people. Unless you put that law into practice, you will always find immoral people who will use their insider knowledge to make huge bucks for themselves. In the end, ethics shrinks down to the morals and personal beliefs of the particular individual who is running that business, because there is nothing clearly defined in the books that will prevent this person from taking unethical decisions.

Subjective nature: As I mentioned earlier, ethics basically shrinks down to the morals and beliefs of the person who is handling that business. Thus, ethics tends to be very subjective in nature. And there are various other factors that affect the core definition of ethics. For example: it would be considered unethical for an employer to hire a kid who is only 15 years old. But in some countries, governments encourage companies to hire younger people so that they can support their families while getting invaluable professional training for their future. Thus, you can't exactly define what is ethical and what is not, when your business is global and you yourself can't define what is considered ethical.
https://bhavingandhi.com/2012/02/27/why-cant-your-business-have-any-ethical-standards/
The Senior SAP MM business analyst will develop and support the procure-to-pay (purchasing and inventory) business processes. This involves optimizing existing business processes and building out solutions for new business initiatives. The analyst will also support the Ariba set-up and deployment for supply chain collaboration.

Locations: CA, IL, AZ, TX

Principal Duties:
- Leverage business and technical expertise to address technology architecture, blueprinting, data analysis, business modeling, technical design, application development, integration, and enablement. Lead the research and evaluation of emerging technologies to support changing business needs.
- Define and develop best practices for process excellence and functional configurations.
- Work with business leaders to define end-state procure-to-pay business processes. This involves working with business departments and process owners to define functional requirements.
- Analyze business processes and provide solutions to optimize productivity and ensure data accuracy.
- Provide day-to-day system/process support to users in coordination with the corporate support team.
- Build process documentation, functional specifications, test scripts and training documentation.
- Maintain system configuration as defined by project scope within SAP and other business applications.
- Monitor existing processes to ensure system accuracy and integrity.
- Manage change and train users on new systems and processes.
- Manage and oversee the design and development of customized reports, utilizing ABAP technical resources and other tools (e.g., report writer) within SAP.
- Design and develop interfaces and conversion programs to/from existing legacy systems.
- Work directly with project stakeholders throughout the entire lifecycle of multiple projects to define, design and implement robust technology solutions.
- Perform a project manager role to roll out applications/business processes, ensuring the goals and objectives of the project are accomplished on time and within budget.
- Manage outsourced resources to achieve project goals.
- Prepare project status reports for management and business users.
- Lead the system testing effort necessary as part of project deliverables.
- Partner with peers across the IT organization to solve complex business problems, including taking a lead role on high-level solution options and design.

Essential Skills:
- Detail-oriented, with excellent organizational, time and stress management skills
- Excellent analytical and problem-solving skills
- Ability to handle multiple projects simultaneously and independently
- Excellent interpersonal skills
- Excellent communication (verbal, written) and customer-facing skills
- Willingness to expand knowledge and skills to other technologies and solutions

Required Experience:
- Proven expertise as an SAP MM analyst, with 7+ years of relevant experience including full life cycle implementations
- Strong understanding of SAP SD, WM, Transportation and FI integration points
- Thorough understanding of a wide range of business processes, with a focus on procure-to-pay processes
- 7+ years of experience in Information Technology, specifically with SAP ERP implementations
- 7+ years of experience in one of the SAP sub-domain areas of CRM, Sales & Distribution, Manufacturing, Supply Chain, or Warehouse Management.
- Solid hands-on experience with configuration, deployment, and testing of SAP-based enterprise software applications
- 7+ years of extensive hands-on experience architecting business solutions using SAP technologies
- Experience with non-SAP ERP solutions and cloud-based solutions is a plus
- Good understanding of WM and Transportation processes, warehouse optimization, transportation and freight is desired
- Strong project management capabilities

Education:
- Bachelor's Degree in Computer Science, Information Systems, Industrial Engineering, Operations Management or Engineering.
https://www.dice.com/job-detail/bb913b8d-4a14-4083-86f2-8fa59f13fa68
Q: If $u_n$ (a sequence of harmonic functions) converges weakly to $u$ in $L^2(\Omega)$, then $\Delta u = 0$ in $\Omega$.

If a sequence of harmonic functions $u_n \rightharpoonup u$ (converges weakly) in $L^2(\Omega)$, then $\Delta u = 0$ in $\Omega$. Recall that a sequence of functions $f_n$ defined on an open set $\Omega$ is said to converge weakly in $L^2(\Omega)$ to a function $f$ if: $$\int f_n(x)\,g(x)\,dx \to \int f(x)\,g(x)\,dx \qquad \forall g \in L^2 (\Omega).$$ My first thought is just to pass to the limit using the mean value property, since if the mean value property holds then $u$ is harmonic. However, I don't think that works with 'weak convergence' as defined above.

A: Since $C^\infty_0(\Omega) \subset L^2(\Omega)$, weak convergence gives $$u_n \rightharpoonup u\implies \int_\Omega u_n(x)\,\phi(x)\,dx \to \int_\Omega u(x)\,\phi(x)\,dx \qquad \forall \phi \in C^\infty_0 (\Omega).$$ In particular, $$\phi \in C^\infty_0(\Omega) \implies \Delta \phi \in C^\infty_0(\Omega),$$ so we may also test against $\Delta\phi$: $$\int_\Omega u_n(x)\,\Delta\phi(x)\,dx \to \int_\Omega u(x)\,\Delta\phi(x)\,dx \qquad \forall \phi \in C^\infty_0 (\Omega).$$ On the other hand, integrating by parts twice (equivalently, taking derivatives in the sense of distributions) and using $\Delta u_n = 0$, we get $$\int_\Omega u_n(x)\,\Delta\phi(x)\,dx = \int_\Omega \Delta u_n(x)\,\phi(x)\,dx = 0 \qquad \forall n.$$ Hence the limit of the left-hand side is zero as well: $$\int_\Omega u(x)\,\Delta\phi(x)\,dx = 0 \qquad \forall \phi \in C^\infty_0 (\Omega),$$ which is exactly the statement that $\Delta u = 0$ in the sense of distributions on $\Omega$.
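One detail worth making explicit: the argument above shows $\Delta u = 0$ only in the distributional sense, while $u$ itself is a priori just an $L^2$ function. Weyl's lemma, a standard regularity result, closes the gap: $$u \in L^1_{\mathrm{loc}}(\Omega) \;\text{ and }\; \int_\Omega u\,\Delta\phi\,dx = 0 \;\;\forall \phi \in C^\infty_0(\Omega) \quad\Longrightarrow\quad u \text{ agrees a.e. with a smooth harmonic function on } \Omega.$$ So the weak limit is genuinely harmonic (after modification on a null set), not merely a distributional solution.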
Governing transport in the algorithmic age

Transport policy needs to ready itself for the age of algorithms, and policy makers must become algorithmically literate. This is the key message of a new report by the International Transport Forum, presented today (23 May) at the global summit of transport ministers in Leipzig, Germany.

Automated decision-making is becoming more and more prevalent. Choices that used to be made by humans are instead entrusted to algorithms and based on Artificial Intelligence (AI). Transport is one of the areas where algorithms play an increasing role, for instance in automated driving or new mobility services.

Algorithms can be hugely beneficial. They are able to solve formerly intractable problems or improve our ability to accomplish previously time-consuming tasks. They also raise unique legal, regulatory and ethical challenges. Algorithmic decisions may result in unintended and harmful behaviour - for instance where the wrong objective is specified, or if the training data for machine learning is biased or corrupted. When algorithms fail, people can get hurt and material damage can result. Where these risks can propagate across systems, the harm can multiply.

Privacy risks also exist. Algorithms are data-processing technologies, yet data anonymisation is rarely robust enough to stand up against serious data-discovery attacks. Vulnerabilities grow as adversarial algorithms get better at extracting data. Physical and moral hazards emerge particularly when AI systems start to drift into areas of human decision-making in ways that remain inscrutable to human cognisance.

Algorithmic systems are highly opaque and difficult to explain to regulators, or to those affected by their decisions. Code is often created in environments that are not open to scrutiny, such as private companies. It uses machine languages that are not widely understood. The operation of several types of AI algorithms may not even be explainable by their designers. This lack of insight into AI processes challenges traditional forms of public governance. Transport policy, its institutions and regulatory approaches have been designed for human decisions. They are bound by legal and analogue logic that is now challenged by systems which function with machine logic. Public authorities therefore have to evaluate whether their institutions and working methods are adapted to this development. If not, they will have to begin to reshape themselves for a more algorithmic world. This will require new skill sets, notably code literacy.

The report "Governing Transport in the Algorithmic Age" makes a number of further, specific recommendations. Among other things, it suggests that public authorities:
- Convert analogue regulations into machine-readable code - for example, authorities could encode permissible uses of street and curb space, as Los Angeles' open-source Mobility Data Specification (MDS) does (a toy illustration follows after this list).
- Use algorithmic systems to regulate more dynamically and efficiently - AI may create new ways of regulating with a lighter touch.
- Compare the performance of algorithms with that of humans - is the balance of risks and benefits tilted towards one or the other?
- Establish robust regulatory frameworks that ensure accountability for decisions taken by algorithms - ensure that algorithmic systems are built so they can be trusted.
- Establish clear guidelines and regulatory action to assess the impact of algorithmic decision-making - such as Canada's "Directive on Automated Decision-Making", a model approach.
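As a sketch of what "regulation as machine-readable code" can mean in practice, the Python snippet below encodes a curb rule in a form software can evaluate. The schema and all names are hypothetical, invented for this illustration; it is not the actual MDS specification, only a toy in the same spirit.

```python
# Hypothetical sketch of a machine-readable curb regulation.
# NOT the actual Mobility Data Specification schema -- just an
# illustration of encoding a rule so that software can evaluate it.
from datetime import time

CURB_RULE = {
    "street": "Example Ave, block 400",          # hypothetical location
    "activity": "passenger_loading",
    "allowed_days": {"mon", "tue", "wed", "thu", "fri"},
    "allowed_window": (time(7, 0), time(19, 0)),  # 07:00-19:00
    "max_dwell_minutes": 10,
}

def is_permitted(rule, day, t, dwell_minutes):
    """Evaluate a curb rule for a given day, time, and dwell duration."""
    start, end = rule["allowed_window"]
    return (day in rule["allowed_days"]
            and start <= t <= end
            and dwell_minutes <= rule["max_dwell_minutes"])

print(is_permitted(CURB_RULE, "mon", time(8, 30), 5))   # True
print(is_permitted(CURB_RULE, "sun", time(8, 30), 5))   # False
```

Once a rule lives in a structure like this, the dynamic, lighter-touch regulation the report describes becomes a matter of updating data rather than repainting signs.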
The work for this report was carried out in the context of a project initiated and funded by the International Transport Forum's Corporate Partnership Board (CPB). The CPB is the ITF's platform for engaging with the private sector and enriching global transport policy discussion with a business perspective. The findings are those of the involved parties; they do not necessarily reflect the views of ITF member countries. The CPB companies involved in this project are: Abertis, ExxonMobil, Kapsch TrafficCom, Latvian Railways, NXP, PTV Group, RATP Group, Renault Nissan Mitsubishi Alliance, Robert Bosch GmbH, SAS Institute, Siemens, SNCF, Total, Toyota Motor Corporation, Uber, and Valeo.

Download the report: https://www.itf-oecd.org/governing-transport-algorithmic-age

Also just published:
Expanding Innovation Horizons: Learning from Transport Solutions in the Global South https://www.itf-oecd.org/expanding-innovation-horizons-learning-transport-solutions-global-south
New Directions for Data-driven Transport Safety https://www.itf-oecd.org/new-directions-data-driven-transport-safety-0

Media Contact: Michael KLOTH, Head of Communications, M +33 (0)6 15 95 03 27, E [email protected]
https://www.itf-oecd.org/governing-transport-algorithmic-age-0
Description of Idea: In this lesson plan students learn about rhythm, how it relates to music, and how rhythm is felt through the body.

Expectations: It is expected that students gain an understanding of what rhythm is and how rhythm can be created with different parts of the body, and that they can copy a rhythm created by the teacher.

Student Groupings: Each student in the class is expected to participate in the activities of the lesson, so that an understanding of rhythm can be attained.

Teaching Strategies:

Part 1: Focus. The teacher describes what rhythms and beats are. The teacher initiates a discussion by asking the students to describe where they have heard different types of rhythm, then discusses the students' responses and talks about why rhythm is important.

Part 2: Introducing the Art. The teacher discusses with students how rhythm can be felt through the way we use words and language. To demonstrate this idea, the teacher claps a simple rhythmic pattern and has the students repeat the pattern. The teacher selects nursery rhymes and uses the words to create different rhythmic patterns. The teacher has the students say the nursery rhymes line by line and clap the rhythm of the rhyme as they say it. The teacher then introduces other ways that rhythm can be felt, such as through the feet by stomping, dancing, etc.

Assessment Strategies:
*Can the student repeat the rhythmic patterns of the nursery rhyme?
*Can the student express an individual interpretation of the rhythm with clapping and stomping?
*Can the student identify that storytelling can have rhythmic patterns?

Adaptations: For ESL students who are at the intermediate level of English, this assignment can be adapted so that the students bring in music from their cultures and try to create rhythms from it. This gives the students a personal connection to the lesson and can make it easier to understand the concept of rhythm with music that is familiar to them.
Community development projects of BREADS aim to develop skills within communities that empower them to overcome social, economic and environmental challenges. These multifaceted projects have tried to create an effective and sustainable impact on the living conditions, health and economic status of disadvantaged communities, strengthening their livelihood capabilities by establishing systems that foster participation and self-reliance. BREADS successfully implemented eight community development projects in 2016-17, reaching out to less privileged communities from more than 100 villages and 6 urban slums, with a focus on women, youth and children. These projects emphasized the building of capacities and skills in the community, thereby strengthening people's involvement in the process of policy making and implementation. The integration of advocacy, networking, linkages and convergence with government policies and projects has assured the continuance of the development actions initiated by the projects, with the high-level involvement of various stakeholders such as the direct beneficiaries, civil society, policy makers and duty bearers.
https://breadsbangalore.org/community-development
In this post we discuss how Bela can be used to prototype immersive interactive audio scenes, the main topic of the Soundstack 2018 Bela Mini workshop led by Becky Stewart.

Virtual and augmented reality is finally maturing as a technology, and with this new artistic medium comes the need for new tools and approaches to creating immersive content. Being able to work with low-latency interactive audio is an important piece of this puzzle, particularly when it is spatialised in a virtual environment. Latency between a player's movements and the response of a virtual environment is a well-known problem in VR and AR: delays between action and reaction can degrade feelings of presence and immersion in a virtual world. It is for these reasons that Bela is particularly well positioned for prototyping spatial audio interactions. By utilising a combination of low-latency head-tracking and binaural spatialisation it is possible to create extremely responsive interactive audio scenes that can be rapidly designed, explored and refined.

Soundstack 2018 took place on 5-7 October 2018 and consisted of three days of workshops and masterclasses on the art and technology of spatial audio, bringing together spatial audio researchers, sound designers and content creators. The future of spatial audio was the overarching topic for the three days, from ambisonics to object-based audio, interactivity to synthesis. This was the second edition of Soundstack: for a summary of last year's event see our blog post. This year the workshops and masterclasses covered a broad range of topics: spatial aesthetics led by Call & Response, the SPAT real-time spatial audio processor led by Thibaut Carpentier from IRCAM, and integrating interactive sound assets in Unity using Pure Data and the Heavy open-source compiler (which will be familiar to many of you) led by Christian Heinrichs.

Becky Stewart's workshop on the Saturday morning concentrated on using Bela as a tool for prototyping interactive binaural audio scenes. When working on virtual or augmented reality applications it is becoming increasingly important to be able to quickly prototype, deploy, test and redesign the spatial sound elements you are working with. Becky demonstrated a workflow that uses Pure Data and Bela in combination with a head-tracking sensor. Pure Data is a widely-used computer music language that is high-level enough for composers and musicians to get creative with. The workshop used a 9-DOF absolute orientation sensor for head-tracking, attached to the band of a pair of headphones. The sensor consists of an accelerometer, magnetometer and gyroscope on a single board. On the Adafruit breakout board used during the workshop there is also a high-speed ARM Cortex-M0 based processor which gathers all the sensor data and performs the sensor fusion required to produce meaningful data about the absolute orientation of the board (or whatever it is attached to). The result is an interactive binaural scene that you can explore by moving your head: the absolute orientation sensor keeps track of the position of your head, making it possible to fix sound sources in the room so they always seem to be propagating from the same point. All the material from the workshop is online, so if you are interested in looking through the slides or trying out the example projects you can find it all here.
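To make the head-tracking idea concrete, here is a minimal sketch of the core coordinate transform: keeping a source fixed in the room by subtracting the listener's head yaw (as reported by an absolute orientation sensor) from the source's world azimuth before binaural rendering. The workshop itself used Pure Data on Bela; this Python version, with illustrative names of my own, only shows the arithmetic.

```python
# Minimal sketch: head-yaw compensation for a room-fixed sound source.
# The yaw value would come from an absolute orientation sensor such as
# the fused output of a 9-DOF IMU; names here are illustrative.

def relative_azimuth(source_azimuth_deg, head_yaw_deg):
    """Azimuth of the source in head coordinates, wrapped to [-180, 180)."""
    rel = source_azimuth_deg - head_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0

# Source fixed at 30 degrees in the room; the listener turns their head.
for yaw in (0.0, 30.0, 90.0, 210.0):
    print(yaw, relative_azimuth(30.0, yaw))
# 0.0 -> 30.0 | 30.0 -> 0.0 | 90.0 -> -60.0 | 210.0 -> -180.0
```

The resulting head-relative angle is what you would use to select or interpolate HRTFs in the binaural renderer, so the source keeps "staying put" in the room as the head moves.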
Bela is an open-source platform for ultra-low latency audio and sensor processing. Find out more on our website, buy Bela at our shop, follow us on twitter or join our community and discuss this post on our forum.
https://blog.bela.io/2018/10/12/bela-AR-VR-binaural-spatial-audio/
An extreme exoplanet is more sizzling than scientists thought

In 2016, scientists discovered a gaseous exoplanet called WASP-76b, known for vaporizing iron in its atmosphere. This extreme exoplanet, which lies about 640 light-years from Earth, is 1.8 times the size of Jupiter in our Solar System. In a high-resolution analysis of data obtained from the Gemini North telescope, scientists identified a rare trio of spectral lines in the exoplanet's atmosphere. These lines are caused by ionized calcium in the planet's upper atmosphere.

Dr. Ernst de Mooij from the School of Mathematics and Physics at Queen's University Belfast was involved in analyzing the data. He comments: "This detection of ionized calcium is the first result from the ExoGemS survey and shows the impact of the extreme conditions on the atmospheres of WASP-76b."

First author and University of Toronto doctoral student Emily Deibert explains: "We see so much calcium, it's a powerful feature. This spectral signature of ionized calcium could indicate that the exoplanet has very strong upper atmosphere winds. Or the atmospheric temperature on the exoplanet is much higher than we thought."

This hot Jupiter orbits its star every 1.8 Earth days and is 30 times closer to its star than the Earth is to the Sun. The study, which offers insights into the planet's upper atmosphere, suggests that the upper atmosphere is either hotter than expected or that strong winds are present.

Co-author Dr. Ray Jayawardhana, the Harold Tanner Dean of the College of Arts and Sciences and a professor of astronomy at Cornell University, explains: "As we do 'remote sensing' of dozens of exoplanets, spanning a range of masses and temperatures, we will develop a complete picture of the true diversity of alien worlds – from those hot enough to harbor iron rain to others with more moderate climates, from those heftier than Jupiter to others not much bigger than the Earth."

"It's remarkable that with today's telescopes and instruments, we can already learn so much about the atmospheres – their constituents, physical properties, presence of clouds, and even large-scale wind patterns – of planets that are orbiting stars hundreds of light-years away."

Dr. De Mooij says: "These observations are not only revealing more details of exoplanet atmospheres now but are also paving the way for investigating ever-smaller planets with the next generation of telescopes, such as the Extremely Large Telescope."
Q: Under what conditions does correlation imply causation? We all know the mantra "correlation does not imply causation", which is drummed into all first-year statistics students. There are some nice examples here to illustrate the idea. But sometimes correlation does imply causation. The following example is taken from this Wikipedia page: For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation. Are there other situations where correlation implies causation?

A: Correlation is not sufficient for causation. One can get around the Wikipedia example by imagining that those twins always cheated in their tests by having a device that gives them the answers. The twin that goes to the amusement park loses the device, hence the low grade. A good way to get this stuff straight is to think of the structure of the Bayesian network that may be generating the measured quantities, as done by Pearl in his book Causality. His basic point is to look for hidden variables. If there is a hidden variable that happens not to vary in the measured sample, then the correlation would not imply causation. Expose all hidden variables and you have causation.

A: I'll just add some additional comments about causality as viewed from an epidemiological perspective. Most of these arguments are taken from Practical Psychiatric Epidemiology, by Prince et al. (2003). Causation, or causal interpretation, is by far the most difficult aspect of epidemiological research. Cohort and cross-sectional studies might both lead to confounding effects, for example. Quoting S. Menard (Longitudinal Research, Sage University Paper 76, 1991), H.B. Asher in Causal Modeling (Sage, 1976) initially proposed the following set of criteria to be fulfilled:

1. The phenomena or variables in question must covary, as indicated for example by differences between experimental and control groups or by a nonzero correlation between the two variables.
2. The relationship must not be attributable to any other variable or set of variables, i.e., it must not be spurious, but must persist even when other variables are controlled, as indicated for example by successful randomization in an experimental design (no difference between experimental and control groups prior to treatment) or by a nonzero partial correlation between two variables with other variables held constant.
3. The supposed cause must precede or be simultaneous with the supposed effect in time, as indicated by the change in the cause occurring no later than the associated change in the effect.

While the first two criteria can easily be checked using a cross-sectional or time-ordered cross-sectional study, the latter can only be assessed with longitudinal data, except for biological or genetic characteristics for which temporal order can be assumed without longitudinal data. Of course, the situation becomes more complex in the case of a non-recursive causal relationship. I also like the following illustration (Chapter 13 in the aforementioned reference), which summarizes the approach promulgated by Hill (1965), comprising 9 different criteria related to causal effects, as also cited by @James.
The original article was indeed entitled "The environment and disease: association or causation?" (PDF version). Finally, Chapter 2 of Rothman's most famous book, Modern Epidemiology (1998, Lippincott Williams & Wilkins, 2nd Edition), offers a very complete discussion of causation and causal inference, from both a statistical and a philosophical perspective. I'd like to add that the following references (roughly taken from an online course in epidemiology) are also very interesting:

- Swaen, G and van Amelsvoort, L (2009). A weight of evidence approach to causal inference. Journal of Clinical Epidemiology, 62, 270-277.
- Botti, C, Comba, P, Forastiere, F, and Settimi, L (1996). Causal inference in environmental epidemiology: the role of implicit values. The Science of the Total Environment, 184, 97-101.
- Weed, DL (2002). Environmental epidemiology: basics and proof of cause-effect. Toxicology, 181-182, 399-403.
- Franco, EL, Correa, P, Santella, RM, Wu, X, Goodman, SN, and Petersen, GM (2004). Role and limitations of epidemiology in establishing a causal association. Seminars in Cancer Biology, 14, 413-426.

Finally, this review offers a larger perspective on causal modeling: Causal inference in statistics: An overview (J. Pearl, Statistics Surveys, 3, 2009).

A: At the heart of your question is the question "when is a relationship causal?" It doesn't just need to be correlation implying (or not) causation. A good book on this topic is Mostly Harmless Econometrics by Joshua Angrist and Jörn-Steffen Pischke. They start from the experimental ideal, where we are able to randomise the "treatment" under study in some fashion, and then move on to alternative methods for generating this randomisation in order to draw causal inferences. This begins with the study of so-called natural experiments. One of the first examples of a natural experiment being used to identify causal relationships is Angrist's 1989 paper "Lifetime Earnings and the Vietnam Era Draft Lottery." This paper attempts to estimate the effect of military service on lifetime earnings. A key problem with estimating any causal effect is that certain types of people may be more likely to enlist, which may bias any measurement of the relationship. Angrist uses the natural experiment created by the Vietnam draft lottery to effectively "randomly assign" the treatment "military service" to a group of men. So when do we have causality? Under experimental conditions. When do we get close? Under natural experiments. There are also other techniques that get us close to "causality", i.e. they are much better than simply using statistical control. They include regression discontinuity, difference-in-differences, etc.
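To make the hidden-variable point concrete, here is a minimal simulation sketch in Python. All names and effect sizes are invented for illustration: a hidden common cause z drives both x and y, so x and y come out strongly correlated even though neither causes the other, and the correlation disappears once z is accounted for.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause (e.g., the answer-giving "device" above).
z = rng.normal(size=n)

# Neither x nor y causes the other; both are driven by z plus noise.
x = 2.0 * z + rng.normal(size=n)   # illustrative effect sizes
y = -1.5 * z + rng.normal(size=n)

# A strong correlation appears despite no causal link from x to y.
print(np.corrcoef(x, y)[0, 1])          # markedly negative

# Removing z's contribution (possible here only because we simulated
# it with known coefficients) makes the correlation vanish, as
# Asher's second criterion demands.
x_res = x - 2.0 * z
y_res = y + 1.5 * z
print(np.corrcoef(x_res, y_res)[0, 1])  # approximately 0

In real data, of course, z is not observed with known coefficients; one would estimate a partial correlation instead, and the hard part is knowing that z exists at all.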
What is the right defensive set-up for the King-Bishop-Rook trio in the diagrammed position? It's Black's move in the diagram. 1. What are the right squares for the king? 2. Should I push my h-pawn to h6? (I will probably have to play e6 to protect f5, so Bg5-f6 is a threat.) 3. Do I put my rook on the g-file? In the actual game I did, but that ended up smothering my king in the corner and for a long time to come gave my opponent tactical chances. Is there a general guide for how to defend such weakened structures? Like how to cover for a missing h-pawn, g-pawn, or f-pawn?

A: Yes, the king should go to h8. You don't want to sit in the pin, and you have to deal with the Bh6 threat. White doesn't have enough forces to target the h7 pawn, and his knight is going to take a while to assist any kingside activity. Yes, Rg8. Your rook becomes much more active on the g-file and can possibly deploy to g6 or g4, etc. No, you don't need a pawn on h6, and yes, you need to defend f5 with e6 at some point. Bg5-f6 is nothing if your king is on h8 and your rook is on g8.

A: You forgot to say who has the move in the given position. This is always important. For example, here if White has the move, he can attack with 1.Bh6, and he will be up a piece. So we can assume that Black has the move. In that case he had better play 1...Kh8 to avoid White's attack with 2.Bh6. This move also frees Black's bishop, so White should probably defend the pawn with 2.d4, and Black will probably need to play 2...Rg8. This already answers a couple of your questions (the king needs to go to h8, and the rook will go to g8).

A: Due to the pin on the 1st rank, Black wins a piece with a won game. Although the endgame is difficult, it's better than passively hiding. Since we're close to an endgame, the right place for the king is in the center. If more pieces were on the board, I would go with the plan of Kh8, e6, h6, Kh7. This would limit the white dark-squared bishop. The rook on the g-file doesn't accomplish anything, as no other piece can assist in any attack. The only active place would be on b1 via b8, after you protect against a knight fork on a6. A general rule of thumb when under attack is to return material to weaken the attacking force. One that I use is to return material to simplify into a won endgame. That is, if I have a queen and pawn versus a rook, I exchange the queen for the rook to get a won pawn endgame.

A: Most books are written for the fun part of chess: the attack. However, by assessing the needs of the position, you can find defensive moves. Here the most important element is to stop Bh6 and mate next. Kh8 must be played; f6 looks too artificial and just kills your pawn structure. The next step would be to both protect f5 and fix the weakness on e5 (the weakness is not the pawn but that it blocks the bishop), but White's first move is most likely d4 to protect e5, which makes a new threat on a6; it was protected earlier by a queen fork. (BTW, we'd like to keep the pin on e5 until we can fix it with e6, so protecting the a-pawn would be done by a5, even though Qa5 seems to win a pawn.) Finally, h6 and Kh7, if you can manage these moves, would restrict White's bishop further. This seems like a long post, but I barely scratched the surface.
https://chess.stackexchange.com/questions/21952/what-is-the-right-defensive-set-up-for-the-king-bishop-rook-trio-in-the-diagramm
Abortion has always been a hot topic in the United States, a subject prone to criticism, especially among the religious sectors in America. In most scenarios, the anti-abortionist arguments are invalid; for instance, Marquis's paper "Why Abortion Is Immoral" is misleading, as it is based on the assumption that abortion is only allowed when it is supposed that a mother will have a mentally defective fetus (Marquis 182). Considering the number of medical reasons proposed by doctors, the practice of abortion should remain legal in the U.S., since it allows a mother to choose her destiny and prevents the bearing of unwanted children in society. The main goal of the paper is to prove that though abortion is considered to be a morally wrong act, it is sometimes justified based on the prevailing circumstances. In the paper, I intend to present the strong points supporting abortion and criticize the weak ones proposed by some anti-abortionists, with a view to justifying the need for legal abortion in the United States.

Presenting Arguments

Mifepristone, also known as RU-486, is a medication that blocks the action of the progesterone hormone, which sustains a woman's pregnancy. The drug has been used to terminate pregnancies since 1988, especially in China and France, and since the 1990s in the United Kingdom. Since then, given its safety, it has been licensed in over 37 countries, including the U.S. For that reason, the pill has been used by millions of women all over the world. Mifepristone, together with misoprostol, can be used for termination up to 63 days of pregnancy and has been used to terminate pregnancies in the United States with a success rate of 95-98%. Based on the success rate of RU-486 and its legalization in the U.S., abortion is currently safe enough. Most requests for abortions in the U.S. result from unwanted or unplanned pregnancies. Such women may be victims of rape or forced sexual intercourse or, even worse, engaged in careless sex resulting in an unplanned pregnancy. How would one deal with such a situation? Should she give birth to her child if she does not want it? Every born child should be wanted in their family. Most children born from unwanted pregnancies suffer from being rejected by their parents, which makes them grow up with low self-esteem. In the paper "Why Abortion Is Immoral," Marquis supports his arguments by claiming that since a normal fetus has a "future-like-ours," it has a right to live. Such an assumption can be misleading, since having a "future-like-ours" is not a sufficient condition to justify a right to life (Marquis 186). Children are gifts from God and, as such, should not be subjected to the emotional torture arising from being rejected by the mother. Therefore, the purpose of abortion is to give women the right to decide whether they want to have children or not. In some situations, abortion is justified when the mother's health is at risk as a result of the pregnancy. In such situations, it would not be morally right to let the child's mother die with a view to saving the child. Consequently, the medical practitioner is allowed to conduct an abortion that can help save the mother's life. This has been a common scenario in most countries all over the world, thus justifying the need for abortion. Due to the rising cost of living all over the world, and in the United States in particular, raising an unplanned child as a single parent is a challenging task.
Most unplanned pregnancies cause financial strain to the mothers of such children, leaving them with the burden of taking care of the child alone. At the same time, it might interfere with their normal way of life; e.g., some young women will be forced to leave school to take care of the child. In such situations, abortion should be allowed to ensure family planning, which will shelter such families from the costs arising from unplanned pregnancies. Some pregnancies arise from rape, where victims are subjected to forced sex. Such victims suffer psychological trauma and stress resulting from such an unfortunate event. When the victims become pregnant, such pregnancies always remind them of the ugly experience they underwent, and the children are rejected even before their birth. Children who are born from such pregnancies will suffer from rejection, which will subject them to emotional torture. In such scenarios, it would be justified to have the abortion so as to ensure that every child is wanted and planned. An abortion will also free the mother from the trauma she might undergo as a result of such an unfortunate experience.

Critical Evaluation

In his paper "Why Abortion Is Immoral," Marquis argues against abortion. Marquis is right when he presumes that it is wrong to kill, but his claim that killing is wrong because it deprives the victim of his or her future is an unfounded hypothesis. Human beings, due to their nature, should be treated differently from the rest of the creatures. The basic principle states that harming living organisms requires justification; however, it is also quite clear that various kinds of harm can be inflicted on different living beings depending on the nature of their consciousness. Therefore, the assumption that moral weight should form the entire basis of the anti-abortion campaign in the United States is not the right hypothesis. Consider contraception, for example: even if it were considered morally wrong, that alone would not be a sufficient reason not to use it, given its dire need in the world. In addition to that, Marquis's argument that the fetus has a "future-like-ours" should be considered an unfounded premise, since it is based on the assumption that it already has a "future-like-ours." It is important to note that fetuses are very different from adults, since philosophical investigation of personal identity over time has shown that the biological and physiological complexities in the connections between later and earlier stages of a person's development are evident. Such significant differences invalidate the assumption that fetuses have personal features that are similar in some ways to those of adults (Paske 369).

Objection/Response

Though abortion at times is necessary, it should be conducted by a qualified medical doctor and in very safe conditions to ensure the mother's safety. Over the years, a main cause for abortion has been to save the life of the expectant mother. As such, it makes sense that, to achieve this objective, abortion should be done by a highly qualified medical practitioner and in very safe conditions, so as to ensure that its primary objective is achieved. As a result, while keeping abortion legal in the United States, the government should ensure that all the safety procedures are observed, which will ensure the safety of the mother.
Concerning the moral aspect, abortion is not justified, since it ends the life of the child, denying the right to life that every living being should be guaranteed. From this standpoint, there is not enough reason to justify the act of ending one life to save another. As such, the religious and civil groups leading anti-abortion campaigns in the United States are justified to some extent. Nevertheless, young women should not treat the case for legal abortion in the United States as a license to engage in careless sex that leads to abortions. Young men and women should be responsible enough not to engage in unprotected sex when they are not financially prepared to raise a child. Therefore, my stand advocating for legal abortion in the United States should not be taken as an excuse to engage in unprotected sex that can lead to an unwanted pregnancy.

Conclusion

Abortion should remain legal in the U.S., as it is considered to be medically safe. Though every child has a right to live, the child's mother should give birth to him or her willingly. Furthermore, a healthy economy is one that is able to feed its population well and cater for their needs. To achieve these goals in the U.S., planned deliveries are inevitable. Mothers should be given a chance to give birth when they are ready for the upcoming responsibilities. Though the anti-abortionist arguments cannot be ignored, the reasons for abortion are strong enough to warrant keeping it legal in the United States.
https://essays-writers.com/essays/research/pro-abortion.html
Sound change includes any processes of language change that affect pronunciation (phonetic change) or sound system structures (phonological change). Sound change can consist of the replacement of one speech sound (or, more generally, one phonetic feature) by another, the complete loss of the affected sound, or even the introduction of a new sound in a place where there previously was none. Sound changes can be environmentally conditioned, meaning that the change in question only occurs in a defined sound environment, whereas in other environments the same speech sound is not affected by the change. The term "sound change" refers to diachronic changes, or changes in a language's underlying sound system over time; "alternation", on the other hand, refers to surface changes that happen synchronically and do not change the language's underlying system (for example, the -s in the English plural can be pronounced differently depending on what sound it follows; this is a form of alternation, rather than sound change). However, since "sound change" can refer to the historical introduction of an alternation (such as post-vocalic /k/ in Tuscan, once [k] but now [h]), the label is inherently imprecise and often must be clarified as referring to phonetic change or restructuring. Sound change is usually assumed to be regular, which means that it is expected to apply mechanically whenever its structural conditions are met, irrespective of any non-phonological factors (such as the meaning of the words affected). On the other hand, sound changes can sometimes be sporadic, affecting only one particular word or a few words, without any seeming regularity. For regular sound changes, the term sound law is sometimes still used. This term was introduced by the Neogrammarian school in the 19th century and is commonly applied to some historically important sound changes, such as Grimm's law. While real-world sound changes often admit exceptions (for a variety of known reasons, and sometimes without any known reason), the expectation of their regularity or "exceptionlessness" is of great heuristic value, since it allows historical linguists to define the notion of regular correspondence (see: comparative method). Each sound change is limited in space and time. This means it functions within a specified area (within certain dialects) and during a specified period of time. For these (and other) reasons, some scholars avoid using the term "sound law" (reasoning that a law should not have spatial and temporal limitations), replacing the term with phonetic rule. Sound change which affects the phonological system, in the number or distribution of its phonemes, is covered more fully at phonological change.
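Because a regular sound change is expected to apply mechanically whenever its structural conditions are met, it can be modelled as a context-sensitive rewrite rule. The following short Python sketch is purely illustrative (the rule, the notation and the word list are invented, not taken from any attested language history): it applies an environmentally conditioned change, voicing of /t/ to /d/ between vowels, exceptionlessly across a toy lexicon.

import re

VOWELS = "aeiou"

def intervocalic_voicing(word: str) -> str:
    """Apply the toy sound law t > d / V_V (only between vowels)."""
    # The lookbehind/lookahead encode the conditioning environment;
    # /t/ outside that environment is untouched.
    return re.sub(rf"(?<=[{VOWELS}])t(?=[{VOWELS}])", "d", word)

# A regular change applies to every word that meets the condition,
# irrespective of meaning -- the Neogrammarian assumption.
lexicon = ["matar", "tila", "atta", "muta", "tret"]
print([intervocalic_voicing(w) for w in lexicon])
# -> ['madar', 'tila', 'atta', 'muda', 'tret']

A sporadic change, by contrast, would amount to editing individual lexical entries rather than applying one rule across the whole lexicon.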
http://www.liquisearch.com/sound_laws
Innovation and commercialization is an essential activity and process that helps a business deal with many challenges and opportunities. Innovation refers to the new techniques or services offered by an organization to maintain its position in the long-term market. Innovation and commercialization is taught by distinguished practicing innovators. The present study is based on innovation; for that purpose, the Healthy Drink company will be examined in order to meet the needs of the organization. Further, this assignment will cover the importance of innovation and its comparison with invention. It presents the different aspects of innovation, such as the 4Ps of innovation, and also explains the innovation funnel. If a company wants to achieve dynamic stability and sustainability, it should follow an organizational business plan.

LO 1

P1 Explaining the importance of the innovation process and the comparison between innovation and invention.

Innovation: Innovation is the process of generating new products and services in the market in order to create better opportunities and growth (Ke, W. and et.al., 2016). Innovation gives value to goods and services in the purest sense. A product is best introduced to the market when something new is created and taken up by customers to fulfil their demand. Whatever is happening in the market has to be considered when introducing or developing a new product. The process of generating creative ideas and turning them into products that meet customer demand is what is covered under innovative ideas (Yu, Y. and et.al., 2016).

Importance of innovation: Innovation helps a business succeed. Nowadays there is more competition, so a business, however small, needs this knowledge to succeed. An important part of building a business is to foster creativity. Innovation serves as an important, general, human-centered and technological business management process within global industry ecosystems.

Differences between invention and innovation: Invention is defined as an idea for a product or process that has never been used before (Rasmussen, T. E. and Eliason, J. L., 2017). Innovation is the transformation of an idea into reality. Invention always relates to a new product. Invention is an essential process for retaining potential customers with the company over long-term goals. Moreover, without invention, innovation cannot happen (Yin, W. J., Shi, T. and Yan, Y., 2014).

P2 Explaining the role of organisational vision, mission, leadership, culture and teamwork, and how they shape innovation and commercialisation.

Vision: "Take nutrition today to keep our health well tomorrow." Healthy Drink shares honesty and a healthy-body environment with customers. Apart from that, it also encourages living a healthy and protected life. The vision of the company is to live healthy, energetic, fulfilling lives. Creativity and innovation are the lifeblood of society and business, especially in today's competitive global economy. Healthy Drink products are scientifically developed to provide the good-quality, necessary nutrition many of us are missing in our modern lives (Wang, C. and et.al., 2017).

Leadership: Quantitative analysis of large datasets examines the relationship between organizational culture, leadership behaviors, and innovativeness. The most valuable asset of a company is its human capital, and healthy companies start with healthy leadership.
When an executive is not getting adequate sleep, or has a hormone imbalance or nutritional deficiencies, the decisions they make will be less than optimal. When the very people in charge of the health of the company are avoiding their own health, it's a gamble with unfavorable odds (Song, Z., and et.al., 2016). Everyone needs to start somewhere, so here's a list of the top five things you can do today to become a healthier leader:
1. Take control of your sleep
2. Hydrate effectively
3. Have an executive physical exam
4. Eat clean
5. Exercise effectively

Teamwork: To succeed in life, a team is very necessary to help us (Song, Z., and et.al., 2016). This idea is seen within the greater framework of a team, which is a group of interdependent individuals who work together towards a common goal. Behind every good company, a great role is played by the team. Teamwork at the workplace has been demonstrated to increase productivity, improve performance, expedite idea generation, distribute workload and establish a culture in which each employee feels a sense of belonging and empowerment (Wang, C. and et.al., 2017). As Daniel Green once noted, teamwork "is the fuel that allows common people to attain uncommon results".

M1 Different sources of innovation will help the company make new products and services for Healthy Drink. Sources of innovation include technology, ideas, new thoughts, new concepts and methods of processing, as well as process needs and industry and market structure. These sources of innovation help to address new challenges and generate innovative ideas.

LO 2

P3 Explaining the 4Ps of innovation

Innovation can take many forms, but they can be reduced to four directions of change:
- Product innovation: the system through which a company can offer better or new products and services to customers.
- Process innovation: defines the way in which products are created and passed to the end consumers.
- Position innovation: the process under which a product's place is decided in the market, i.e. where the product is placed or positioned among others.
- Paradigm innovation: the overall changes in the model of the company.

'Product Innovation': Suggestions that make a product better than another product are known as incremental change. Another form of recommended design is making your product completely different or new (Chun, D., and et.al., 2015). Additionally, current products can be developed to meet customers' needs. Healthy Drink is one of the leading drinks brands in the world. This can be understood from its product strategy in its marketing mix, in terms of the width of the product mix.

'Process Innovation': It includes the manufacturing process and the distribution process. The product is currently only available at chemists' shops and grocery stores. Process innovation can also be used to boost presence in large retail, with bright and attractive product displays. Examples include improved fixed-line telephone services, an extended range of stockbroking services, and improved auction house operations (Ge, J. and et.al., 2017).

'Position Innovation': It does not concern the preparation process of your services or product. At this step we think, whenever we start the process, about such things as activities, progress and hiring people, and how all of this can be enhanced to increase your profit.
Which are the latest trends to carry out those activities? Traditional higher education courses are supported by online processes (Ge, J. and et.al., 2017). Banking services are offered to target key segments: students, retired people, etc.

'Paradigm Innovation': Use innovative ideas to do business. The changes must be new, but bear in mind that not all changes are positive. Let your imagination take you to all the possible changes that can leverage your profit. That includes changes in your business model, like, for example, moving your published products from print to online platforms (Aryal, P. and et.al., 2016).

P4 Explaining the process and development of frugal innovation, which provides a better environment in the company, and how it is used in an organisation.

Frugal innovation discovers new business models, reconfigures value chains, and redesigns products to serve users who suffer most from challenges and competition. Basically, frugal innovation helps provide better solutions, which helps the company regain opportunity and growth. Frugal innovation is the process of developing the company's functional activities towards greater customer satisfaction. Healthy Drink needs to develop frugal innovation in order to sustain customer attraction towards the company's sales. It will also support growth and opportunity in a more productive manner. Frugal development aims to bring new products or services that fulfil basic needs and organizational objectives (Walt, D.R., 2014). Given the entrance of new competitors into the market, Healthy Drink needs to take on new products and services in order to gain better opportunity and growth. Besides, frugal innovation provides an opportunity through which the company can fulfil its growth objectives and meet the needs of the company structure. Frugal innovation addresses conditions such as slower growth and cutting funds to reduce debt, new technological platforms, the fastest-growing markets, and market constraints. Frugal development stabilizes the market position of the company, which creates better opportunity and growth; apart from that, it also helps to recover opportunity from the market in order to achieve better marketing growth. The frugal development process creates better opportunities for the company to grab (Kolychev, V. D. and Prokhorov, I.V., 2015).

M2 Adopting the innovation funnel in the organization is influential for targeting market share and helps the company better target the market. The innovation funnel is used to describe the steps that take place in developing a process or product. The innovation funnel includes three steps: the first involves focusing on the wide mouth of the funnel, the second on narrowing segments, and the third on the narrow end of the funnel (Ge, J. and et.al., 2017).

M3 Frugal innovation plays a very important role through which Healthy Drink can adopt or regain its value in the market. It also helps to protect the market formation process, which provides better opportunity and growth. The frugal innovation process exists to bring innovation into the organizational context.

LO 3

P5 Explaining the importance of the commercial funnel and the process of new product development.

The commercial funnel gives a new face or image to the company in the market; it helps to gain customer attraction and better quality of services. It helps to generate new advances and better opportunity and growth (Wang, C. and et.al., 2017).
Apart from that, it will also help to protect opportunity and growth. The commercial funnel also leads sales of the product, which creates better sustainability in the long-term market. The overall process also helps the company reach new market levels, generating new opportunity and market growth. Apart from that, there are several steps in the new product development approach. New product development is the process of making good, task-oriented goods, which supports growth-oriented performance. The application of new product development introduces new goal-oriented processes and provides better opportunity and growth, in order to reach new market-oriented goals and a target market position (Kolychev, V. D. and Prokhorov, I.V., 2015). Along with that, new product development delivers more responsive and goal-oriented work, and a more promising and challenging growth process, which helps the company reach a new, better performance level.

P6 Produce a business case for an organisation, including the overall process to access funding.

A business innovation case is the overall process of implementing new products and services in order to build new advancement and growth; this is the business case for innovation, for us at Inventium (Aryal, P. and et.al., 2016). Basically, it relates to the difference between invention and innovation. It gives a new, better opportunity to take the business to a new level. Besides, it also helps to bring new and better innovation services for Healthy Drink. It is a creative process for target market growth, which enhances services and target market growth. Introducing a new business case will help the business:
- improve productivity
- reduce costs
- build the value of Healthy Drink
- increase company turnover

M4 Innovation helps to create better quality and services for the customer. Healthy Drink can build better opportunity, growth and business image in the market. It helps to enhance opportunity and growth for the company's development and to reach target market goals (Kolychev, V. D. and Prokhorov, I.V., 2015).

LO 4

P7 Explaining different tools that Mr Green can use to develop and protect intellectual property.

Trademark: A UK-based company was successful in registering a trademark. A recognizable mark is a trademark: a design or look which identifies the products or services of a particular source and distinguishes them from those of others, although trademarks used to identify services are usually called service marks (Ge, J. and et.al., 2017). The trademark owner must be an individual, business organization or other legal entity. The trademark may be located on a package, a label, a voucher, or on the product itself. For the sake of corporate identity, trademarks are often displayed on company buildings. A trademark serves to exclusively identify a product or service with a specific company, and is a recognition of that company's ownership of the brand.

Copyright: Copyright is a legal right that grants the creator of an original work exclusive rights to its use and distribution. This is usually only for a limited time. Copyright law has limitations and exceptions to these exclusive rights, including fair use.
A major limitation on copyright is that copyright protects only the original expression of ideas, and not the underlying ideas themselves (Yin, W. J., Shi, T. and Yan, Y., 2014). Copyrights are also territorial rights, which means that they do not extend beyond the territory of a specific jurisdiction.

Patent: Small businesses sometimes create inventions that can be protected from violation by others (Ge, J. and et.al., 2017). A patent is a tool to protect and enlarge the overall business activities. Drinks are usually protected under utility patents; under this category, the drink must be useful. Patents protect the company's intangible assets in order to safeguard the environment for growth. A small business does not require any long patent process; it only needs to register. Along with that, the patent also protects the environment for growth.

M5 Present supported evidence-based evaluation of these different tools.

The proper use of trademark, copyright and patent processes will bring a suitable rise in the operational capability of the organization. This will support the organization and Mr. Green in safeguarding their intellectual property, and thus will help suitably improve the efficiency and effectiveness of the firm, aiding the management of the activities and growth initiatives followed by the firm. Besides this, it will help in maintaining the quality and effectiveness of operations in a much better way (Yu and et.al., 2016).

CONCLUSION

Thus, it can be concluded that innovation has a very deep impact on the operation of the organization. The report laid emphasis on the innovation process and why it is important for the growth and development of the company. Factors impacting leadership and teamwork, and the importance of frugal innovation, were discussed with respect to the NPD process and the building of an innovative business case. Other than this, measures to safeguard intellectual property were analyzed.

REFERENCES

- Ke, W. and et.al., 2016. Employing lead thiocyanate additive to reduce the hysteresis and boost the fill factor of planar perovskite solar cells. Advanced Materials, 28(26), pp.5214-5221.
- Yu, Y. and et.al., 2016. Improving the performance of formamidinium and cesium lead triiodide perovskite solar cells using lead thiocyanate additives. ChemSusChem, 9(23), pp.3288-3297.
- Rasmussen, T. E. and Eliason, J. L., 2017. Military-civilian partnership in device innovation: Development, commercialization and application of resuscitative endovascular balloon occlusion of the aorta. Journal of Trauma and Acute Care Surgery, 83(4), pp.732-735.
- Yin, W. J., Shi, T. and Yan, Y., 2014. Unique properties of halide perovskites as possible origins of the superior solar cell performance. Advanced Materials, 26(27), pp.4653-4658.
- Song, Z., Abate, A., Watthage, S.C., Liyanage, G.K., Phillips, A.B., Steiner, U., Graetzel, M. and Heben, M.J., 2016. Perovskite solar cell stability in humid air: Partially reversible phase transitions in the PbI2-CH3NH3I-H2O system. Advanced Energy Materials, 6(19).
- Wang, C. and et.al., 2017. Understanding and eliminating hysteresis for highly efficient planar perovskite solar cells. Advanced Energy Materials, 7(17).
- Chun, D., and et.al., 2015. Labor union effects on innovation and commercialization productivity: An integrated propensity score matching and two-stage data envelopment analysis. Sustainability, 7(5),
pp.5120-5138.
- Ge, J. and et.al., 2017. Oxygenated CdS buffer layers enabling high open-circuit voltages in earth-abundant Cu2BaSnS4 thin-film solar cells. Advanced Energy Materials, 7(6).
- Aryal, P. and et.al., 2016. Parameterized complex dielectric functions of CuIn1-xGaxSe2: applications in optical characterization of compositional non-uniformities and depth profiles in materials and solar cells. Progress in Photovoltaics: Research and Applications, 24(9), pp.1200-1213.
https://www.instantassignmenthelp.com/free-samples/innovation-and-commercialisation/unit-3-business-and-business-environment-btec-level-5-hnd-diploma-business-regent-college-higher-education
Absolute risk measures the size of a risk in a person or group of people. This could be the risk of developing a disease over a certain period, or it could be a measure of the effect of a treatment – for example, how much the risk is reduced by treatment in a person or group. There are different ways of expressing absolute risk. For example, someone with a 1 in 10 risk of developing a certain disease has "a 10% risk" or "a 0.1 risk", depending on whether percentages or decimals are used. Absolute risk doesn't compare changes in risk between groups – for example, risk changes in a treated group compared to risk changes in an untreated group. That's the function of relative risk. A before and after study measures particular characteristics of a population or group of individuals at the end of an event or intervention, and compares them with those characteristics before the event or intervention. The study gauges the effects of the event or intervention. Blinding is not telling someone what treatment a person has received or, in some cases, the outcome of their treatment. This is to avoid them being influenced by this knowledge. The person who's blinded could be either the person being treated or the researcher assessing the effect of the treatment (single blind), or both of these people (double blind). A case-control study is an epidemiological study that's often used to identify risk factors for a medical condition. This type of study compares a group of patients who have that condition with a group of patients that don't, and looks back in time to see how the characteristics of the 2 groups differ. Case crossover studies look at the effects of factors thought to increase the risk of a particular outcome in the short term. For example, this type of study might be used to look at the effects of changes in air pollution levels on the short-term risk of asthma attacks. Individuals who have had the outcome of interest are identified and act as their own control. The presence or absence of the risk factor is assessed for the period immediately before the individual experienced the outcome. This is compared with the presence or absence of the risk factor when the individual didn't experience the outcome (control period). If there's a link between the risk factor and the outcome, it would be expected to have been present in the period just before the outcome more often than in the control period. A case series is a descriptive study of a group of people, who usually receive the same treatment or have the same disease. This type of study can describe characteristics or outcomes in a particular group of people, but can't determine how they compare with people who are treated differently or who don't have the condition. In a cluster randomised controlled trial, people are randomised in groups (clusters) rather than individually. Examples of clusters that could be used include schools, neighbourhoods or GP surgeries. This study identifies a group of people and follows them over a period of time to see how their exposures affect their outcomes. This type of study is normally used to look at the effect of suspected risk factors that can't be controlled experimentally – for example, the effect of smoking on lung cancer. A confidence interval (CI) expresses the precision of an estimate and is often presented alongside the results of a study (usually the 95% confidence interval). The CI shows the range within which we're confident that the true result from a population will lie 95% of the time. 
The narrower the interval, the more precise the estimate. There's bound to be some uncertainty in estimates because studies are conducted on samples and not entire populations. By convention, 95% certainty is considered high enough for researchers to draw conclusions that can be generalised from samples to populations. If we're comparing 2 groups using relative measures, such as relative risks or odds ratios, and see that the 95% CI includes the value of one in its range, we can say there's no difference between the groups. This confidence interval tells us that, at least some of the time, the ratio of effects between the groups is one. Similarly, if an absolute measure of effect, such as a difference in means between groups, has a 95% CI that includes 0 in its range, we can conclude there's no difference between the groups. A confounder can distort the true relationship between two (or more) characteristics. When it isn't taken into account, false conclusions can be drawn about associations. An example is to conclude that if people who carry a lighter are more likely to develop lung cancer, it's because carrying a lighter causes lung cancer. In fact, smoking is a confounder here. People who carry a lighter are more likely to be smokers, and smokers are more likely to develop lung cancer. This is an epidemiological study that describes characteristics of a population. It's "cross-sectional" because data is collected at one point in time and the relationships between characteristics are considered. Importantly, because this study doesn't look at time trends, it can't establish what causes what. A diagnostic study tests a new diagnostic method to see if it's as good as the "gold standard" method of diagnosing a disease. The diagnostic method may be used when people are suspected of having a disease because of signs and symptoms, or to try to detect a disease before any symptoms have developed (a screening method). In ecological studies, the unit of observation is the population or community. Common types of ecological study are geographical comparisons, time trend analysis, or studies of migration. An experiment is any study in which the conditions are under the direct control of the researcher. This usually involves giving a group of people an intervention that wouldn't have occurred naturally. Experiments are often used to test the effects of a treatment in people, and usually involve comparison with a group who don't get the treatment. Gene expression is a term used to describe the influence the "information" contained in genes can have on a cellular level – in most cases, in terms of the way specific proteins are created. This study looks across the entire genetic sequence (genome) to identify variations in this sequence that are more common in people with a particular characteristic or condition and may be involved in producing that characteristic or condition. A measure of the relative probability of an event in 2 groups over time. It's similar to a relative risk, but takes into account the fact that once people have certain types of event, such as death, they're no longer at risk of that event. A hazard ratio of 1 indicates that the relative probability of the event in the 2 groups over time is the same. A hazard ratio of more than or less than 1 indicates that the relative probability of the event over time is greater in one of the two groups. 
If the confidence interval around a hazard ratio doesn't include 1, the difference between the groups is considered to be statistically significant. Intention-to-treat (ITT) analysis is the preferable way to look at the results of randomised controlled trials (RCTs). In ITT analysis, people are analysed in the treatment groups to which they were assigned at the start of the RCT, regardless of whether they drop out of the trial, don't attend follow-up, or switch treatment groups. If follow-up data isn't available for a participant in one of the treatment groups, the person would normally be assumed to have had no response to treatment and to have outcomes no different from what they were at the start of the trial. This helps make sure RCTs don't show that a particular treatment being tested is more effective than it actually is. For example, if 50 people were allocated to the treatment group of an RCT, perhaps 10 might drop out because they got no benefit. If all 50 were analysed by ITT analysis, with 10 assumed to have had no benefit, this gives a more reliable indication of the effect of the treatment than just analysing the remaining 40 people who stayed on treatment because they felt they were getting the benefit. This is a hierarchical categorisation (ranking) of different types of clinical evidence. It's partly based on the type of study involved, and ranks evidence according to its ability to avoid various biases in medical research. Several ranking schemes exist that are specific to the question posed in the research. Studies with the highest ranking are those that provide the best evidence that a result is true. The expert opinions of respected authorities – based on clinical experience, descriptive studies, physiology, bench research or first principles – are often thought of as the lowest level evidence. Although there are different systems, some of which take into account other aspects of quality including the directness of the research, the levels are designed to guide users of clinical research information as to which studies are likely to be the most valid. A Likert scale is a commonly used rating scale that measures attitudes or feelings on a continuous linear scale, usually from a minimum "strongly agree" response to a maximum "strongly disagree" response, or similar. Likert scales can be 5-point, 6-point, 10-point, etc., depending on the number of response options available. A narrative review discusses and summarises the literature on a particular topic, without generating any pooled summary figures through meta-analysis. This type of review usually gives a comprehensive overview of a topic, rather than addressing a specific question, such as how effective a treatment is for a particular condition. Narrative reviews don't often report on how the search for literature was carried out or how it was decided which studies were relevant to include. Therefore, they're not classified as systematic reviews. This is one of a set of measures used to show the accuracy of a diagnostic test (see sensitivity, specificity and positive predictive value). The negative predictive value (NPV) of a test is a measure of how accurate a negative result on that test is at identifying that a person doesn't have a disease. The NPV is the proportion of people with a negative test result who truly don't have the disease.
For example, if a test has an NPV of 75%, this means that 75% of the people who test negative are truly disease-free, while 25% who test negative have the disease (false negatives). The NPV for a test varies depending on how common the disease is in the population being tested. An NPV is usually lower (false negatives are more common) when disease prevalence is higher. A nested case-control study is a special type of case-control study in which "cases" of a disease are drawn from the same cohort (population of people) as the controls to whom they're compared. These studies are sometimes called case-control studies nested in a cohort or case-cohort studies. The collection of data on the cases and controls is defined before the study begins. Compared with a simple case-control study, the nested case-control study can reduce recall bias (where a participant remembers a past event inaccurately) and temporal ambiguity (where it's unclear whether a hypothesised cause preceded an outcome). It can be less expensive and time-consuming than a cohort study. Incidence and prevalence rates of a disease can sometimes be estimated from a nested case-control cohort study, whereas they can't from a simple case-control study, as the total number of exposed people (the denominator) and the follow-up time aren't usually known. In this type of study, participants aren't randomly allocated to receiving (or not receiving) an intervention. An odds ratio is one of several ways to summarise the association between an exposure and an outcome, such as a disease. Another commonly used approach is to calculate relative risks. Odds ratios compare the odds of the outcome in an exposed group with the odds of the same outcome in an unexposed group. Odds tell us how likely it is an event will occur, compared with the likelihood that the event won't happen. Odds of 1:3 that an event occurs, such as a horse winning a race, means the horse will win once and lose 3 times (over 4 races). Odds ratios are a way of comparing events across groups who are exposed and those who aren't. Open access means that a study or article is available free of charge, usually online. To access full articles in most medical journals you usually have to pay a subscription or make a one-off payment (these types of articles are often referred to as paywalled content). Some fully open access journals are funded by non-profit organisations. Others meet their running costs by charging individual authors a fee for publication. Occasionally, a paywalled journal will release individual articles on an open access basis (often those with important public health implications). Open label means that investigators and participants in a randomised controlled trial are aware of what treatment is being given and received (the study isn't blinded). Peer review involves giving a scientific paper to one or more experts in that field of research to ask whether they think it's of good enough quality to be published in a scientific journal. Studies that aren't of sufficient quality won't be published if their faults aren't corrected. Journals that use peer review are considered to be of better quality than those that don't. Per-protocol analysis, sometimes called on-treatment analysis, is one way to analyse the results of randomised controlled trials (RCTs). It analyses the outcomes of only the participants who receive a trial treatment exactly as planned, and excludes participants who don't.
This approach can exclude participants who drop out of the trial for important reasons (for example, because the treatment isn't working for them or they experience side effects). Excluding these people from the analysis can bias the results, making the treatment look better than it would be in a real-world situation where some people may not follow the treatment plan perfectly. Per-protocol analysis can give a good estimate of the best possible outcome of treatment in those who take it as intended. Intention-to-treat (ITT) analysis is the alternative, and generally preferable, way to look at the results of RCTs because it gives a better idea of the real-world effects of treatment. Person years describes the accumulated amount of time that all the people in the study were being followed up. So, if 5 people were followed up for 10 years each, this would be equivalent to 50 person years of follow-up. Sometimes the rate of an event in a study is given per person year rather than as a simple proportion of people affected, to take into account the fact that different people in the study may have been followed up for different lengths of time. Phase I trials are the early phases of drug testing in humans. These are usually quite small studies that primarily test the drug's safety and suitability for use in humans, rather than its effectiveness. They often involve between 20 and 100 healthy volunteers, although they sometimes involve people who have the condition the drug is aimed at treating. To test the drug's safe dosage range, very small doses are given initially and are gradually increased until the levels suitable for use in humans are found. These studies also test how the drug behaves in the body, examining how it's absorbed, where it's distributed, how it leaves the body, and how long it takes to do this. During this phase of testing (phase II trials), a drug's effectiveness in treating the targeted disease in humans is examined for the first time and more is learnt about appropriate dosage levels. This stage usually involves 200 to 400 volunteers who have the disease or condition the drug is designed to treat. The drug's effectiveness is examined, and more safety testing and monitoring of its side effects are carried out. In this phase of human testing of treatments (phase III trials), the effectiveness and safety of the drug undergoes a rigorous examination in a large, carefully controlled trial to see how well it works and how safe it is. The drug is tested in a much larger sample of people with the disease or condition than before, with some trials including thousands of volunteers. Participants are followed up for longer than in previous phases, sometimes over several years. These controlled tests usually compare the new drug's effectiveness with either existing drugs or a placebo. These trials are designed to give the drug as unbiased a test as possible to ensure that the results accurately represent its benefits and risks. The large number of participants and the extended period of follow-up give a more reliable indication of whether the drug will work, and allow rarer or longer-term side effects to be identified. This is one of a set of measures used to show how accurate a diagnostic test is (see sensitivity, specificity and negative predictive value). The positive predictive value (PPV) of a test is how well the test identifies people who have a disease. The PPV is the proportion of people with a positive test result who truly have the disease.
For example, if a test has a PPV of 99%, this means 99% of the people who test positive will have the disease, while 1% of those who test positive won't (false positives). The PPV of a test varies depending on how common the disease is in the population being tested. A test's PPV tends to be higher in populations where the disease is more common and lower in populations where the disease is less common. These are in vitro (for example, in cell cultures) and in vivo laboratory animal tests on drugs in development carried out to ensure they're safe and effective before they go on to be tested in humans (clinical studies). Prevalence describes how common a particular characteristic (for example, a disease) is in a specific group of people or population at a particular time. Prevalence is usually assessed using a cross-sectional study. This study identifies a group of people and follows them over a period of time to see how their exposures affect their outcomes. A prospective observational study is normally used to look at the effect of suspected risk factors that can't be controlled experimentally, such as the effect of smoking on lung cancer. A prospective study asks a specific study question (usually about how a particular exposure affects an outcome), recruits appropriate participants, and looks at the exposures and outcomes of interest in these people over the following months or years. Publication bias arises because researchers and editors tend to handle positive experimental results differently from negative or inconclusive results. It's especially important to detect publication bias in studies that pool the results of several trials. Qualitative research uses individual in-depth interviews, focus groups or questionnaires to collect, analyse and interpret data on what people do and say. It reports on the meanings, concepts, definitions, characteristics, metaphors, symbols and descriptions of things. It's more subjective than quantitative research, and is often exploratory and open-ended. The interviews and focus groups involve relatively small numbers of people. Quantitative research uses statistical methods to count and measure outcomes from a study. The outcomes are usually objective and predetermined. A large number of participants are usually involved to ensure the results are statistically significant. This is a study where people are randomly allocated to receive (or not receive) a particular intervention (this could be 2 different treatments or 1 treatment and a placebo). This is the best type of study design to determine whether a treatment is effective. This is a study in which people receive all of the treatments and controls being tested in a random order. This means that people receive one treatment, the effect of which is measured, and then "cross over" into the other treatment group, where the effect of the second treatment (or control) is measured. Recall bias is when a person's recall of their exposure to a suspected disease risk factor could be influenced by the knowledge that they're now suffering from that particular disease. For example, someone who's suffered a heart attack may recall having a highly stressed job. The stress they now report experiencing may be subtly different from the stress they would have reported at the time, before they developed the disease. Relative risk compares a risk in 2 different groups of people. 
All sorts of groups are compared to others in medical research to see if belonging to a particular group increases or decreases the risk of developing certain diseases. This measure of risk is often expressed as a percentage increase or decrease, for example, "a 20% increase in risk" of treatment A compared with treatment B. If the relative risk is 300%, it may also be expressed as "a 3-fold increase".

The human genome is the entire sequence of genetic information contained within our DNA. This sequence is made up of strings of molecules called nucleotides, which are the building blocks of DNA. There are four nucleotides, called A, C, T and G. All humans share a very high level of similarity in their DNA sequence, particularly within genes, where the sequence of nucleotides contains the instructions for making the proteins that the cell and organism need. However, there are points in the DNA where different people have a different nucleotide; these are called single nucleotide polymorphisms (SNPs, pronounced "snips"). Most SNPs do not affect a person’s health or characteristics, as they do not lie in parts of DNA that encode proteins. However, they are useful to researchers, as SNPs that are more common in people who have a specific condition than in those without it indicate that the regions of DNA surrounding these SNPs are likely to contain genes that are contributing to these diseases.

The standard deviation is a statistical term that measures how much individual scores of a given group vary from the average (mean) score of the whole group. Another way of saying this is that it measures the spread of the individual results around the average of all the results.

A water maze test comprises a pool of water, with a single platform (sometimes more than one platform) placed just below the surface of the water. Usually the platform and the pool are white, making the platform difficult to see. Mice are placed in the pool and swim around until they find the platform. Researchers usually time how long their test mice take to find the platform, but they may also film the mice to examine their searching pattern or technique. This can be an important indicator of their behavioural functions. Usually, mice are tested over and over again to see if they learn where the platform is. If the mice fail to find the platform after a certain time, they are usually removed to prevent them from drowning.
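To make the glossary's point about PPV and prevalence concrete, here is a minimal Python sketch (an illustration added alongside the glossary, not from the NHS source; the 99% sensitivity and 95% specificity figures are invented for the example):

# Illustrative only: how positive predictive value (PPV) varies with prevalence.
# The sensitivity and specificity figures below are hypothetical.
def ppv(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prev in (0.001, 0.01, 0.1):  # disease affects 0.1%, 1%, 10% of those tested
    print(f"prevalence {prev:.1%}: PPV = {ppv(0.99, 0.95, prev):.1%}")

Running this shows the same test's PPV climbing from roughly 2% at 0.1% prevalence to about 69% at 10% prevalence, which is exactly the dependence on disease frequency described above.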
https://www.nhs.uk/news/health-news-glossary/
Afrobarometer Round 3 [electronic resource]: The Quality of Democracy and Governance in Lesotho, 2005 / David Hall, Clement Leduka, Michael Bratton, E. Gyimah-Boadi, Robert Mattes.
Edition: 2009-05-19
Publication: Ann Arbor, Mich.: Inter-university Consortium for Political and Social Research [distributor], 2009.
Series: ICPSR (Series) 22203. ICPSR 22203
Format/Description: Datafile; 1 online resource.
Summary: The Afrobarometer project was designed to assess attitudes toward democracy, governance, economic reform, quality of life, and civil society in several Sub-Saharan African nations, and to track the evolution of such attitudes in those nations over time. This particular survey was concerned with the attitudes and opinions of the citizens of Lesotho. Respondents in a face-to-face interview were asked to rate Lesotho's Prime Minister Pakalitha Mosisili and his administration's overall performance, to state the most important issues facing the nation, and to evaluate the effectiveness of certain continental and international institutions. Opinions were gathered on the role of the government in improving the economy, whether corruption existed in local and national government, whether government officials were responsive to problems of the general population, and whether local government officials, the police, the courts, the overall criminal justice system, the media, the National Electoral Commission, and the government broadcasting service could be trusted. Respondents were polled on their knowledge of the government, including the identification of government officials, their level of personal involvement in political, governmental, and community affairs, their participation in national elections, the inclusiveness of the government, and the identification of causes of conflict and resources which may aid in the resolution of conflict. Economic questions addressed the past, present, and future of the country's and the respondent's economic condition, and whether great income disparities are fair. Societal questions were asked of respondents concerning the meaning of being "poor" and "rich", monetary support systems, personal responsibility for success or failure, characteristics used in self-identification, methods for securing food, water, schooling, medical services, news and information, the ease of obtaining assistance for certain services, and whether problems existed with school and the local public clinic or hospital. Background variables include age, gender, ethnicity, education, religious affiliation and participation, political party affiliation, language spoken most at home, whether the respondent was the head of household, current and past employment status, whether a close friend or relative had died from AIDS, language used in interview, and type of physical disability, if any. In addition, demographic information pertaining to the interviewer is provided, as well as their response to the interview and observations of the respondent's attitude during the interview and of the interview environment. Cf.: http://doi.org/10.3886/ICPSR22203.v1
Notes: Title from ICPSR DDI metadata of 2015-01-05.
Contributor: Hall, David; Leduka, Clement; Bratton, Michael; Gyimah-Boadi, Emmanuel; Mattes, Robert; Inter-university Consortium for Political and Social Research.
Access Restriction: Restricted for use by site license.
Online:
https://franklin.library.upenn.edu/catalog/FRANKLIN_9968998863503681
Q: The change in magnitude of centripetal acceleration

When an object (e.g. a racecar) moves around in circles with constant tangential velocity, a constant centripetal acceleration is present. What happens to the centripetal acceleration when the racecar starts at rest and then increases its speed? I know that the tangential velocity increases due to the tangential acceleration, but what about the centripetal acceleration? Since centripetal acceleration is the tangential velocity squared divided by the radius, and the tangential velocity is increasing from rest, the centripetal acceleration must then be increasing as well. How do you calculate the values of the centripetal acceleration if it is changing? There doesn't seem to be a formula for it. And since the centripetal acceleration is changing, is there a term for its rate of change?

A: As you have stated, the centripetal acceleration is given by: $$a_c=\frac{v^2}{r}$$ where $v$ is the magnitude of the velocity (technically it is the magnitude of the tangential velocity, but I will assume we stay on a circle of radius $r$). Therefore, if the velocity is a function of time $v=v(t)$, then the centripetal acceleration will be $$a_c(t)=\frac{v(t)^2}{r}$$ What determines $v(t)$ is the tangential acceleration $a_T$ according to $$v(t)=v(0)+\int_0^t a_T(t')\ \text d t'$$ (Note this follows from $a_T=\frac{\text d v}{\text d t}$; it is not derived from the above equations.) What determines these acceleration components is, of course, the centripetal and tangential components of the net force, but if you know what the tangential force is, then you can determine what centripetal force is required to keep the object moving in a circle of radius $r$ using the equations above.
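As a concrete worked example (added here for illustration, not part of the original answer): assume the tangential acceleration is a constant $a_T$ and the object starts from rest, so $v(0)=0$. Then $$v(t)=a_T t \qquad \text{and} \qquad a_c(t)=\frac{(a_T t)^2}{r}=\frac{a_T^2 t^2}{r}$$ For instance, with $a_T = 2\ \text{m/s}^2$ and $r = 50\ \text{m}$, after $t = 5\ \text{s}$ the speed is $v = 10\ \text{m/s}$ and the centripetal acceleration is $a_c = 10^2/50 = 2\ \text{m/s}^2$; it grows quadratically with time for as long as the tangential acceleration is maintained.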
The ESA Euclid mission aims to understand why the expansion of the Universe is accelerating and to pin down the source responsible for the acceleration. It will uncover the very nature of dark energy and gravitation by measuring with exquisite accuracy the expansion rate of the Universe and the growth rate of structure formation. To achieve its objectives Euclid will observe the distribution of dark matter in the Universe and its evolution over the last ten billion years by measuring the shapes of weakly distorted distant galaxies lensed by foreground cosmic structures. The shapes of lensed galaxies will be measured using the Euclid wide field imaging instrument VIS. In parallel, Euclid will analyse the properties of baryon acoustic oscillations and redshift space distortion and the distribution of clusters of galaxies by measuring the redshifts of galaxies with the NISP photometer and spectrometer instrument. The Euclid mission will observe one third of the sky (15,000 deg²) to collect data on several billion galaxies spread over the last ten billion years. In parallel to the space mission, the Euclid survey also comprises ground-based photometric and spectroscopic observations that will be used jointly with the Euclid satellite data to obtain photometric redshifts of billions of sources. Altogether the Euclid data set will be an exceptional gold mine not only for cosmology and fundamental physics but also for all fields of astrophysics. The presentation will describe the main scientific objectives and expected performances of the Euclid mission. The most recent forecasts and constraints on dark energy, gravity and dark matter will be presented, as well as the expectations for the physics of inflation or neutrinos and the other domains of astronomy that will benefit from the Euclid mission database.
Publication: 42nd COSPAR Scientific Assembly
Pub Date: July 2018
Bibcode:
https://ui.adsabs.harvard.edu/abs/2018cosp...42E2252M
Senior Care Innovation Scholarship Finalist Katherine Kitchen

A Place for Mom is proud to announce the commencement of their annual $1,000 scholarship for advancement in the field of gerontology. This is a general scholarship which will award the selected applicants with a financial donation. We have narrowed down the finalists, who include Katherine Kitchen. Congratulations to Katherine Kitchen, Senior Care Innovation Scholarship Finalist! Read Katherine’s essay below and vote for her if you think she deserves to be one of the 5 recipients of the $1,000 scholarship awards.

Katherine’s Essay

Caring Together: Collaboration between Clergy and Psychologists

I was a little nervous the first time I met 93-year-old Mrs. Berkley. I remember clenching my palms together as I trailed behind the nurse who swiftly navigated through busy nursing home halls. My grandmother died after spending her last few days in a nursing home, and I vividly remembered visiting her there and observing other residents, some of whom seemed depressed and lonely. It was my first day of a high school-based work experience program. Upon discovering that Mrs. Berkley, who previously read on a daily basis, lost the majority of her eyesight after suffering a stroke, I offered to read to her. Over time our friendship deepened, and I continued visiting with and reading to Mrs. Berkley after my work experience program ended.

My relationship with Mrs. Berkley sparked my desire to study psychology and become a mental health care professional specializing in geriatric mental health care. Although Mrs. Berkley has since died, the memories of our friendship remain, as does my desire to help improve the mental health and well-being of our aging population. I am particularly interested in helping improve the lives of older adults residing in long-term care facilities, who have higher rates of major depression and subsyndromal depressive symptoms than those residing in the general community (e.g., Blazer, 2003; Unutzer, Katon, Sullivan, & Miranda, 1999). Improving mental health care for rural, older adults is crucial because this population is in double jeopardy—they are underserved because of their age and where they reside.

With a severe shortage of mental health professionals trained in geriatrics in the current U.S. workforce, our current health care system is ill-equipped to face the impending crisis in geriatric mental health care. The efficacious pharmacological and psychological/psychosocial treatments that are available fail to reach many older adults in need (Olfson et al., 2003). Additionally, there are many well-documented barriers to receiving mental health services in rural communities, including mental health stigma, geographic isolation, and severe shortages of mental health care practitioners. It is critical that we address treatment access challenges for rural, older adults, because the proportion of older adults in rural communities is rapidly growing (Ham, Goins, & Brown, 2003), which can be expected to result in a concomitant increase in the need for mental health services. We must seize the opportunity to develop, implement, and disseminate innovative ways of reaching underserved older adults both in the community and in long-term care facilities before the rural mental health care system accumulates additional stress. Clergy members have the potential to help improve mental health treatment access and decrease the service need gap in rural communities.
For over three decades, the role of clergy members as front-line mental health workers has been recognized (Weaver, 1995). The clergy are represented over a wide geographic region and have the unique chance to notice changes in behavior among community members, especially among older adults with whom they have frequent contact (Pickard & Guo, 2008). Older adults, in particular, prefer to receive help for mental health issues from clergy members more frequently than from mental health practitioners (Pickard & Tang, 2009). Alarmingly, few studies have aimed to understand clergy members’ role in counseling and referring older adults with mental health problems. Even less is known about whether rural clergy are adequately prepared to recognize and respond to geriatric depression, which is the most prevalent mental health problem among older adults (U.S. Department of Health & Human Services, 1999). In fact, to my knowledge, only one small study has been conducted to examine knowledge and perceptions of geriatric depression among the clergy (i.e., Stansbury, 2011).

The research project which I am currently conducting for my doctoral dissertation will examine predictors of referral intentions to mental health professionals based on multiple levels of personal and environmental influence among clergy in two predominantly rural states. This survey will be sent to a large group of clergy members, and will also include questions related to clergy’s training and informational needs related to geriatric health and mental health, as well as barriers to collaborating with mental health professionals in the community. It is expected that improved understanding of individual and environmental factors influencing referral intentions will lead to the development of intervention strategies incorporating clergy members to improve mental health treatment access for rural, older adults.

Developing strategies to incorporate clergy into treatment access interventions will be a significant contribution to senior care, because clergy who are trained to recognize depression among older adults have the potential to serve as valuable mental health access points for both community-dwelling older adults and older adults residing in long-term care facilities. If older adults with depression are adequately identified and referred to health care providers, it can be expected that the financial burden associated with untreated depression will be reduced in rural communities and long-term care facilities. Additionally, it is anticipated that clergy members trained in evidence-based behavioral therapies could help reduce the mental health provider burden by supplementing existing rural mental health services. Finally, it is expected that what is learned from this project will contribute to a broader understanding of how clergy members and mental health providers can collaborate more effectively to improve the mental health of rural, older adults.

View other Senior Care Innovation Scholarship Finalists. Don’t hesitate to congratulate and vote for Katherine in the comment form below if you think her essay is one of the most compelling of all the finalists. Keep in mind we are awarding 5 of the finalists with $1,000 which they can use toward their studies.
https://www.aplaceformom.com/blog/katherine-kitchen/
Newly discovered dinosaurs fill in evolutionary gap spanning 70 million years

Two fossils unearthed in northwestern China are missing links in the evolution of an unusual lineage, according to scientists.

By KATIE WILLIS

Two newly discovered dinosaurs may be missing links in an unusual lineage of predators that lived between 160 million and 90 million years ago, new research suggests. The two species, Xiyunykus and Bannykus, were theropods—a group of bipedal, largely carnivorous dinosaurs. Some theropods eventually gave rise to birds, while another branch, the alvarezsauroids, evolved into strange-looking insectivores with short arms and hands with an enlarged finger for digging into nests. But until now, little was understood about how this change happened because of the 70-million-year evolutionary gap separating the insect-eating alvarezsauroids from the earliest known member of the group, Haplocheirus.

“The significance of Xiyunykus and Bannykus is that they fall within that gap and shed light on patterns of evolution within Alvarezsauroidea,” explained Corwin Sullivan, a University of Alberta paleontologist who participated in the international study. “These specimens greatly improve the scientific community's understanding of the early stages of alvarezsauroid evolution and give us a better idea of what early alvarezsauroids were like.”

Sullivan noted the new specimens reveal clues about how the creatures’ diet shifted from meat to insects. “The forelimbs show some adaptations for digging, which would later become more exaggerated, and some features of their skulls also resemble those of insectivorous alvarezsauroids. The hindlimbs are less modified, suggesting the arms and head of alvarezsauroids underwent significant change before the legs did.

“There's still a lot to learn about the early evolution of alvarezsauroids,” added Sullivan, who is also curator of the Philip J. Currie Dinosaur Museum. “Xiyunykus and Bannykus are currently represented by one incomplete specimen apiece. Those specimens provide a good deal of intriguing information, but we'll need many more fossils before we can be confident that we have a clear understanding of how alvarezsauroids, to put it bluntly, got so weird.”

The paper, “Two Early Cretaceous Fossils Document Transitional Stages in Alvarezsaurian Dinosaur Evolution,” was published in Current Biology.
https://www.folio.ca/newly-discovered-dinosaurs-fill-in-evolutionary-gap-spanning-70-million-years/
Originally published in Habitat Magazine, April 2017 What happens in a community when homeowners do not follow restrictive covenants? The court reviewed just that question in Bluff Point Townhouse Owners Association v. Kapsokefalos. Lisa Kapsokefalos and her husband own a townhouse in the Bluff Point community in Plattsburgh, New York. The homeowners pay membership dues to Bluff Point, which provides services for the benefit of residents. There was a long history of litigation between the parties, with the Kapsokefaloses refusing to pay monthly dues. There were two prior actions against the couple as a result of their failure to comply with certain restrictive covenants and pay dues. The second of these two actions ended with a decision and order issued on January 6, 2014, awarding the association judgment for the relief requested in the complaint. Specifically, the Kapsokefaloses were directed to pay the monthly dues outstanding from August 2007 to December 2013. The defendants complied, but eventually, after another personal dispute with the board, they stopped paying monthly dues and eventually owed $2,900 for those dues outstanding from January 2014 to June 2016. In June 2016, Kapsokefalos painted a sign on the garage door of her townhouse that declared: "Property Rights Matter!!!" According to a neighbor, the sign was written in large letters, and appeared to have been spray-painted to give the appearance of graffiti. In addition, Kapsokefalos painted with red paint the trim around the garage, her front door, and the second-story windows of her townhouse. But red violated the color scheme previously approved by Bluff Point. Bluff Point sought a preliminary injunction as a result of the painting of the garage sign and the trim to non-conforming colors. It also demanded that the Kapsokefaloses be required to cut back or trim overgrown vegetation in the front and rear of the unit. The decision discussed here deals primarily with Bluff Point's request for a preliminary injunction. An injunction is an equitable remedy, and because it is interim – i.e., requested at the beginning of the action rather than at the end – there is a heightened burden on the one demanding the relief. Thus, the court explained that, in order to obtain a preliminary injunction, Bluff Point would have to show that it is likely to succeed on the merits of its claims for permanent injunctive relief; that there will be irreparable harm to Bluff Point if the injunction is not granted; and that the equities of the situation balance in Bluff Point's favor. The court first noted that, because of prior litigation between the parties, Bluff Point would probably succeed on the merits of its claims, meeting the first element for a preliminary injunction. However, the court explained, whether Bluff Point established irreparable harm and a balancing of the equities differed on each item it sought to enjoin. The court took each of the issues separately. It concluded that the painting of a graffiti-like sign on the garage door was unsightly and could affect surrounding property values. The Kapsokefaloses, however, submitted proof that Lisa had painted over the sign. The court stated that, given her history, "it is not inconceivable that [she] might repeat the conduct." It thus ordered that during the litigation, Kapsokefalos would not be permitted to paint or letter the exterior of the premises. As to the trim paint, the court acknowledged that the color was non-conforming. 
However, the court looked specifically to the "irreparable harm" prong of the preliminary injunction test. It concluded that the paint color did not, in and of itself, rise to the level of harm that would warrant the grant of a preliminary injunction. Presumably, the court did not believe that the non-conforming color would affect property values or otherwise negatively affect other members of the community. Although it did not order that Lisa Kapsokefalos repaint, the court did direct that she could not further paint the exterior of the townhouse with a color not approved by Bluff Point.

The last issue addressed by the court was the vegetation in the front and back of the Kapsokefaloses' house. The photographs submitted did not, in the court's view, depict a condition that was so unsightly or dangerous as to establish the irreparable harm required to allow the court to grant a preliminary injunction. This was, in part, because there can be no finding of irreparable harm if money damages are available to resolve the matter. Accordingly, the court noted, to the extent Bluff Point had the right to maintain the lawns, it could trim the vegetation and then seek monetary damages separately.

The Takeaway

It appears from the decision reported here, and from other cases concerning these parties both at the lower court and the appellate level, that the two sides continue to litigate in part because of a personal incident. Where there are personal disputes in an association setting, it is important that the parties try to keep them in perspective. Homeowners should not take action merely to flout the rules, nor should a board implement rules solely directed at an owner as a result of a personal animus. Condominiums, cooperatives, and homeowners associations require people to live together and comply with the rules, which are presumably implemented for the benefit of all owners. While the motion discussed here was a "preliminary injunction," we suspect that, if the matter proceeds to conclusion, Bluff Point would probably receive the injunctive relief it seeks, assuming the rules were promulgated in accordance with Bluff Point's governing documents. This is because it has long been the law that when one buys into a community such as a cooperative, condominium, or homeowners association, one submits to the governance of that community.

Attorneys:
For Plaintiff: Niles & Bracy
For Defendant: The Clements Firm

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
https://www.mondaq.com/unitedstates/real-estate/582692/the-battle-at-bluff-point
Researchers from the University of Sydney had to get creative to see how a toad lungworm alters its host's behavior. Photo Credit: Patt Finnerty

Parasites are nature's master puppeteers. Jewel wasps can make cockroaches into docile, edible nannies for their young with just a sting, for example. Some nematodes convince the insects they infect to commit watery suicide because their larvae are aquatic. It's even thought that Toxoplasma gondii, a parasite that usually infects rats and cats, can alter our brains when we accidentally host them instead, subtly altering our personalities and maybe even making us more likely to commit suicide. So perhaps it's not surprising that scientists recently discovered lungworms alter the behavior of their cane toad hosts to ensure things are most comfortable for them. But what is surprising, or at least a little unnerving, is what they actually do: the worms make their hosts poop differently.

The question of whether the parasites were manipulating toads arose after Patt Finnerty noticed that infected toads acted a little differently in other lab trials he was conducting. Further investigation revealed significant differences in behavior between infected and uninfected toads, particularly when it comes to their bowel movements. The findings are published in an amusingly titled new study.

The miniature manipulators. Photo Credit: Greg Brown

The parasite in question is the lungworm Rhabdias pseudosphaerocephala, a nematode that primarily infects cane toads (Rhinella marina). It came to the attention of scientists because those toads—normally native to the Americas—have become invasive pests in places like Australia after they were introduced in 1935. And with the invaders came their parasites, which have since found their way into other amphibians and are evolving rapidly at the invasion front, shifting their size and the timing of the different stages of their complex lifecycles.

That lifecycle involves a two-generation process with both parasitic and free-living stages. As the name implies, the parasitic lungworms live in their hosts' lungs as adults, feeding on blood (1). The eggs they lay there are essentially coughed up and swallowed, and hatch in the toad's digestive tract (2). As larvae, they hang out in feces, which they consume (3), and a little after the toad defecates, they molt to become free-living adult worms (4), which find one another and mate. The female worm gets the short end of the parenting stick at this point, as she retains her fertilized eggs until after they hatch (5). The offspring develop inside her until they kill her as they burst free (6). They then wait in the soil (7) until the opportunity presents itself to burrow into the next unfortunate toad that stops by (8), starting the cycle anew.

The lungworm's lifecycle. Figure Credit: Crystal Kelehear

Scientists have known that these lungworms harm their host toads—which isn't surprising, since each unlucky amphibian can have nearly 300 of them inside their teeny lungs—but whether the parasites truly manipulate their hosts in any way was less clear. While some parasites make their puppeteering obvious, it's not always easy to distinguish between true manipulations and general bodily reactions to infections, which can include immunological responses like fevers or other "sickness behaviors" that occur because of an infection but don't necessarily have an adaptive purpose. Still, that's exactly what the University of Sydney researchers set out to do.
They were especially interested in behaviors related to hydration, because the worms need a certain level of moisture to survive, especially while they're in the soil. So the team captured nearly 50 wild toads and kept them in captivity. Some were naturally infected, while others were not; about half of each group received a dewormer, such that they ended up with four treatment groups of 11-13 toads: those that were infected and dewormed, those that were infected and not dewormed, those that weren't infected but got the dewormer anyway, and those that weren't infected and didn't receive the meds (the last two served mostly as controls). Then, over 4 months, they subjected the toads to a battery of tests including temperature and hydration trials to see if the parasite affected its host. They also conducted field studies where they captured toads, fitted them with radio transmitters so they could find them again, determined if they were infected, and then treated some of them, just like with the captive animals. They then fed them different colors of a non-toxic, UV fluorescent dye which allowed them to see where the animals defecated, and tracked where they went.

The infected lab toads tended to prefer warmer areas, which seemed to benefit the parasite, as the toads' feces contained 27% more larvae when the toads were kept warm. The infected toads also spent more time in water, no matter what the temperature of their enclosure was.

Paper figure showing how infected toads spent way more time in water than those that were dewormed. Figure 1 from Finnerty et al. 2018.

The researchers also noticed that infected toads defecated more often, and seemingly aimed for their water containers rather than the dry newspaper floor of their cages. When they weighed these poops out, the average wet mass of the infected animals' feces was higher than that of the dewormed ones, but the dry mass was the same—infected poops were just about 15% moister. And that was especially intriguing, because when they put the lungworm larvae through the gauntlet, they found that over 15 times as many survived after three days if the soil was moist rather than dry.

The same story emerged from the field data. Infected wild toads tended to poop closer to bodies of water, and on moister soils. And when the weather was dry, the infected toads stuck much closer to water.

"We found that toads with lungworms behaved much differently than uninfected toads in several regards," explained Greg Brown, a research fellow at the University of Sydney and co-author on the paper. "Most notably, infected toads tended to stay closer to water and poo in moister areas. These are the conditions that increase survival of the larval worms in the poo and increase their likelihood of encountering a new host."

It's possible that these behavioral changes are the result of an immunological assault, but the researchers think this is unlikely. "The fact that infected individuals act differently than uninfected ones isn't that surprising," Brown continued, "but the nature of the differences seemed to consistently be in directions that should favor parasite fitness. That’s why it appears to be manipulation rather than just general sickness." And if so, then the results open a, well, can of worms, so to speak, because they are evidence that parasitic manipulation of hosts may be more subtle than scientists thought—and more common.
"Lots of parasite larvae are transmitted to the environment through feces, so maybe parasite ability to manipulate host pooing is widespread," said Brown. "It seems like a logical way for the parasite to increase the likelihood of its offspring surviving and infecting another host." A cane toad, perhaps contemplating where its next bowel movement should be deposited. Photo Credit: Greg Brown But if the worms are really manipulating their hosts, then more questions arise—like, how are the worms causing these behavioral changes? The researchers couldn't say for sure, but they're betting the worms tweak circulating cellular signals like neurotransmitters or hormones to get the toads to deposit moist, frequent poops near water. Learning exactly how they exert their control could help scientists better understand the toads' physiology, or maybe even point to novel ways of getting rid of them. Controlling an invasive species by controlling its bowels—now wouldn't that be something.
https://www.discovermagazine.com/planet-earth/with-parasites-nothing-is-sacred-study-finds-lungworms-alter-how-their-host-toads-poop
A Mala is a beaded necklace consisting of a tassel, a 'guru' bead, and a fixed number of round beads that is used in meditation practice or worn as an accessory. Malas are beautiful strands of 108 beads, with a slightly larger guru bead, that have been used in meditation for millennia by Hindus. In Sanskrit, Mala literally means "garland". A Mala is traditionally held in the right hand, and the thumb is slid across each bead to count each time a mantra is recited, for a total of 108 times. It can be used in meditation to recite things you are thankful for, grateful for, and much more. This Mala is handmade and has 108 x 8mm amazonite, chrysocolla, malachite and green spot stone gemstone beads, with a larger jade guru gemstone bead and a silky tassel. Each bead of the Mala is hand knotted in place. Please note design or colour may vary slightly due to computer screen differences or changes in stock. If you would like a mala made for you in the gemstones of your choice, please contact me to discuss.
https://essentialwellnessandlifestyle.com/products/copy-of-mala-obsidian-and-rhodochrosite
BACKGROUND INFORMATION

Technical Field

The disclosed embodiments relate to web applications.

Background Information

Cellular telephones execute ever more complex application programs. Examples of complex application programs include video messaging programs, mobile television viewing programs, and three-dimensional multi-user video game programs. A contemporary user of a cellular telephone often does not just use the cellular telephone to engage in wireless telephone conversations. Rather, the user uses the cellular telephone as an input/output device to interact with and access services and data provided by and on other remote computers. In one example, the cellular telephone of each of a plurality of users executes a copy of a video game application program. The cellular telephones communicate with each other either directly or through a central computer such that the users can all play the same multi-user video game in a common virtual environment.

Executing such a complex application program on a cellular telephone may, however, consume a large proportion of the resources of the cellular telephone. Examples of cellular telephone resources include battery capacity, memory capacity, and processing power. Executing the complex application program may take up a lot of the available battery capacity. Playing the video game may, in fact, use so much battery energy that there is inadequate battery energy left over for the cellular telephone to communicate as a cellular telephone. Alternatively, playing the video game may consume battery energy quickly without the user recognizing that the battery has become so discharged that it cannot power the cellular telephone for a cellular telephone call of ordinary duration.

Not only can a complex application use a large amount of the available battery energy, but the complex application program may also use a large proportion of the available random access memory (RAM) of the cellular telephone. If the cellular telephone is being used to play the multi-user video game, then so much of the available memory may be used by the video game application that the cellular telephone may not be able to invoke another application program at the same time. Executing the complex application program may also require and use a large proportion of the available processing power of the central processing unit (CPU) of the cellular telephone. If the cellular telephone is being used to play the multi-user video game, then so much of the processing power of the cellular telephone may be used that it may not be possible to execute another application program with a desired processing speed or responsiveness.

Some of these problems can be addressed by executing the complex application program on a remote computer and using the cellular telephone as an input/output device to interact with the application. Browser software executing on the cellular telephone that is used to interact with the remote computer uses a smaller amount of resources than the complex application program would were the complex application program executed on the cellular telephone. The usage of resources in the cellular telephone is therefore reduced.
The application program that executes on the remote computer is sometimes called a "web-browser application", a "web application" or a "Webapp" because the browser executing on the cellular telephone is used to communicate across the World Wide Web with the application running on the remote computer. Internet access to such applications executing on a cluster of computers (sometimes referred to as a "server farm") may be provided for a fee for use by cellular telephone users as Webapps. In one example, the multi-user video game application program is executing on a computer in such a server farm. Rather than consuming large amounts of cellular telephone resources executing the complex application program on the cellular telephone, the user only executes the browser on the cellular telephone and interacts with the complex application program that is executing on the computer in the server farm.

It is not, however, always desirable to execute such a complex application program on a remote computer. There may be cost issues, or communication latency or reliability issues, or other issues that favor execution of the complex application on the cellular telephone in a particular circumstance. Where the resources of the cellular telephone are stretched thin due to usage of such application programs, there may be only a small amount of spare resources available on the cellular telephone. If, for example, a higher priority application is then to be used, it may not be possible to invoke the higher priority application program if the total amount of resource usage would exceed the total available amount of resources on the cellular telephone. The situation could also involve a resource being used so heavily that when an incoming cellular telephone call is to be received, the cellular telephone does not have adequate resources to receive the call. Managing the resources and deciding which application programs to offload as Webapps and which application programs not to offload or not invoke in a given circumstance can be cumbersome and difficult.

Attention is drawn to US 6 141 759 A, which describes a system and method for distributing, monitoring and managing information requests on a computer network including one or more client computer systems, a first server computer system, and one or more secondary server computer systems. Information requests from the client computer systems to the first server computer system are intercepted and examined by a request broker software system implemented on the first server computer system. The request broker software system examines information regarding the capabilities and resources available on the first server computer system and the secondary server computer systems to determine whether to process the information request locally on the first server computer system or to process the information request remotely on one of the secondary server computer systems. The request broker software system will off-load or distribute the information requests to the secondary server computer systems so as to load-balance the information requests among the secondary server computer systems. The request broker software system will also monitor the processing of information requests and initiate recovery actions in the event a fault or error occurs during the processing of the request.
If the information request is to be processed remotely on one of the secondary server computer systems, the request broker software system establishes an authenticated communication channel with the selected secondary server computer system to transmit the information request to the selected server computer system. The secondary server computer system processes the information request and sends the results back to the request broker software system on the first server computer system. The request broker software then sends the results of the information request that was processed either locally or remotely back to the client computer system that originated the information request.

Further attention is drawn to US 2004/199918 A1, which describes backfill scheduling techniques used to schedule execution of applications, either on a local computing unit or a remote unit. In determining whether a particular application is to be scheduled to execute on a local unit or a remote unit, the data associated with that application is considered. As examples, an amount of data to be moved, availability of communication channels, and/or availability of remote data storage resources are considered.

SUMMARY

In accordance with the present invention a method and corresponding processor-readable medium, and a mobile communication device, as set forth in the independent claims, respectively, are provided. Preferred embodiments of the invention are described in the dependent claims.

A utility program executing on a mobile communication device (for example, a cellular telephone) decides whether to launch a first instance of an application program locally on the mobile communication device or to launch a second instance of the application program remotely as a web application (hereinafter "Webapp"). The decision is based at least in part on an estimate of how much of a resource the first instance would consume were it to be launched and executed on the mobile communication device. Examples of resources include battery capacity or battery energy usage, memory capacity or memory usage, and processing power capacity or usage. In one example, if the total amount of a particular resource consumed by the currently executing applications and the first instance of the application program would exceed a threshold amount, then the utility program uses a browser program on the mobile communication device to launch the second instance of the application program remotely as a Webapp; otherwise the utility program causes the first instance of the application program to be launched locally. The utility program interacts with the operating system of the mobile communication device to cause the decided upon type of launching.

In some embodiments, the first and second instances are identical programs. In one embodiment, the first instance is a simplified version of the application that is customized and adapted for execution on a device having limited resources. The second instance, on the other hand, is appropriate for execution on the remote computer that does not have the resource constraints of a mobile communication device.

The utility program has a graphical user interface (GUI) whereby a user of the mobile communication device can configure and customize utility program operation. The GUI is, for example, usable to change the conditions under which the decision is made to launch an application remotely as a Webapp. The user can use the GUI to disable offloading of a particular application.
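As a rough illustration of the threshold rule described in this summary, here is a minimal Python sketch (my own, not from the patent; the function name, the per-application usage split, and the "CPU" threshold value are invented, and a real implementation would run inside the device's utility program and query the operating system for live usage figures):

# Hypothetical sketch of the launch decision described above.
# Usage numbers are in the patent's normalized "usage units".
THRESHOLDS = {"BATT": 15, "MEM": 15, "CPU": 15}  # BATT/MEM values appear later
                                                 # in the text; CPU is assumed.

def decide_launch(running, estimated, offload_disabled=False):
    """Return 'local' or 'webapp' for the application about to be launched.

    running   -- list of per-application usage dicts for executing programs
    estimated -- estimated usage dict for the application to be launched
    """
    if offload_disabled:          # the user disabled offloading via the GUI
        return "local"
    for resource, limit in THRESHOLDS.items():
        total = sum(app[resource] for app in running) + estimated[resource]
        if total > limit:         # any one resource over threshold -> Webapp
            return "webapp"
    return "local"

# Worked example matching the totals used later in the text (10, 10, 6 units
# already in use; the split across EMAIL/WP/BROWSER+SYSTEM is hypothetical):
running = [
    {"BATT": 3, "MEM": 2, "CPU": 1},   # EMAIL
    {"BATT": 4, "MEM": 5, "CPU": 3},   # WP
    {"BATT": 3, "MEM": 3, "CPU": 2},   # BROWSER + SYSTEM
]
game_estimate = {"BATT": 6, "MEM": 6, "CPU": 6}
print(decide_launch(running, game_estimate))  # -> 'webapp' (BATT and MEM reach 16 > 15)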
In a specific example, the GUI causes a resource usage table to be displayed on the display of the mobile communication device in response to an appropriate prompt by the user. The table lists all the application programs that are executing on the mobile communication device. In addition, the table lists the application program that is to be launched. For each application program listed, the table includes a usage value for each type of resource. In the case of the application program to be launched, the usage values are estimated usage values. The estimated usage values are usages that would occur were a first instance of the application program to be executed locally on the mobile communication device.

Based at least in part on the resource usage values of the currently executing application programs and the estimated resource usage values of the application program to be launched, the utility program determines whether the application program to be launched should be executed on the mobile communication device or should not be executed on the mobile communication device. If the determination is that the application program should not be executed on the mobile communication device, then the utility program uses the browser program on the mobile communication device to launch a second instance of the application program remotely as a Webapp. In this scenario, the second instance of the application program is not split such that some of the application program is executing on the mobile communication device and such that another part of the application program is executing remotely. No part of the second instance of the application program is executing on the mobile communication device. If, however, the determination is that the application program should be executed on the mobile communication device, then the utility causes the first instance of the application program to be launched locally on the mobile communication device. The first instance of the application program is not split between the mobile communication device and the remote computer. No part of the first instance of the application program is executing on the remote computer.

In some embodiments, the utility program can terminate execution of the second instance of the application program on the remote computer under some resource usage conditions. The first instance of the application program is then launched on the mobile communication device in the state that the second instance was in when it was terminated. Execution of the application program therefore migrates back from the remote computer under the control of the utility program.

The foregoing is a summary and thus contains, by necessity, simplifications, generalizations and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and does not purport to be limiting. Other aspects, inventive features, and advantages of the devices and/or processes described herein, as defined solely by the claims, will become apparent in the nonlimiting detailed description set forth herein.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a view of the front of a mobile communication device that executes a novel utility program in accordance with one novel aspect.

Figure 2 is a simplified flowchart that illustrates an operation of the novel utility program.

Figure 3 is a diagram that illustrates the software executing in the mobile communication device of Figure 1 and on a remote computer.
Figure 4 is an illustration of the mobile communication device of Figure 3, the remote computer of Figure 3, and the communication path between the two.

Figure 5 illustrates a sequence of communications that occurs between the mobile communication device of Figure 4 and the remote computer of Figure 4.

DETAILED DESCRIPTION

Figure 1 is a view of the front of a mobile communication device 100 in accordance with one novel aspect. Mobile communication device 100 in this example is a cellular telephone that has an antenna 101, a built-in microphone 102, a speaker 103, a display 104 and a QWERTY keypad 105. The electronics within mobile communication device 100 includes, among other parts, RF transceiver circuitry, a digital baseband processor integrated circuit, a rechargeable battery or batteries, and a power management integrated circuit. This circuitry is not illustrated in Figure 1 because Figure 1 is a plan view and the circuitry is contained within the housing of mobile communication device 100. The baseband processor integrated circuit includes an amount of semiconductor memory, and a digital processor. The semiconductor memory is a processor-readable medium that stores programs of processor-executable instructions that are executable on the processor. The processor accesses the memory and executes an operating system program of processor-executable instructions out of the memory. The processor is also able to execute application layer programs. In the example of Figure 1, there are multiple such application layer programs executing on the processor including: a word processing program designated "WP", an email and calendar and contact manager program designated "EMAIL", a three-dimensional multi-user video game designated "GAME", a web browser program designated "BROWSER", and a novel utility program.

Execution of the utility program provides a graphical user interface (GUI) whereby a user of mobile communication device 100 can view output of the utility program and can interact with and configure the utility program. In one operational mode of the utility program, the GUI causes a resource usage table 106 to be displayed on display 104.

Resource usage table 106 includes in the left-most column a list of various programs that are executing on the cellular telephone. The "SYSTEM" entry refers to the operating system and the novel utility application program. Although the utility program is an application layer program, it is tightly coupled to the operating system and is therefore listed as part of the operating system. For each of the listed programs, resource usage table 106 includes a numerical value that is indicative of an amount of a resource that is being used or is otherwise allocated to or reserved by the program. In the example of Figure 1, there are three different resources listed: battery power usage (listed on the display as "BATT"), memory usage (listed as "MEM"), and processing power usage (listed as "CPU"). The email program designated "EMAIL", for example, is indicated to be using three units of the power usage resource "BATT", two units of the memory usage resource "MEM", and one unit of the processing power resource "CPU". The bottom row of resource usage table 106 sets forth a threshold value for each of the three resources. The units of the usage values and threshold values in resource usage table 106 are normalized with respect to each other. In one example, the threshold values are fixed values that cannot be changed by the user.
In another example, the GUI allows a user of mobile communication device 100 manually to change the threshold values within predetermined ranges.

The right-most column of resource usage table 106 includes an indication of whether each of the programs has been offloaded to execute as a Webapp on a remote computer. In the example of Figure 1, the video game application "GAME" is executing on a remote computer, and the browser of cellular telephone 100 and the cellular telephone hardware itself are being used to interact with the remotely executing game program on the remote computer. The fact that the application program "GAME" has been offloaded is designated by the check mark in the right-most column. The GUI allows the user manually to disable offloading of a selected one of the listed programs. In the example of Figure 1, the user has disabled offloading of the email program designated "EMAIL". The "offload disabled" state is indicated in resource usage table 106 by the cross in the right-most column.

In ordinary operation of mobile communication device 100, the utility program does not cause resource usage table 106 to be displayed. Rather, resource usage table 106 is displayed in response to appropriate user prompts to the utility program. In this way, the user can enter a prompt using the GUI thereby causing table 106 to appear, can then interact with the utility using the GUI and the table, and can then enter an appropriate prompt that causes table 106 to be displayed no longer. The utility program, however, continues to execute in the background even though table 106 is not being displayed.

Figure 2 is a simplified flowchart that illustrates a method in accordance with one novel aspect. The method is carried out using mobile communication device 100 of Figure 1. Initially, the processor is executing the operating system program and the novel utility application program as well as a few other application programs. Each program consumes an amount of each of the three resources "BATT", "MEM" and "CPU". In the initial state before step 201, the "WP", "EMAIL" and "BROWSER" applications listed in Figure 1 are executing. The "GAME" application is, however, not executing. The display 104 therefore appears as it appears in Figure 1, except that the row having the "GAME" entry is not present. Note that the total power usage (resource "BATT") is ten usage units. The total memory usage (resource "MEM") is also ten usage units. The total processing power usage (resource "CPU") is six usage units. All three usage totals are therefore below their respective threshold values.

The user then takes an action to invoke the multi-user video game application program designated "GAME". The user may, for example, select an icon of the video game GAME that appears on display 104. The operating system detects this condition and issues a request to the utility program to invoke the GAME application. The utility program receives the request to invoke (step 201).

Next (step 202), the utility program estimates the amount of each resource that would be consumed by the application layer program if the application layer program were to be executed. In one example, the processing power that would be consumed by the new application program is determined from historical empirical data. If, for example, every time the GAME program was executed previously six usage units of CPU processing power were consumed on average, then this average usage value of six is stored and used as the estimated CPU usage value.
The amount of processing power actually being used by the other programs that are already executing can be determined in any one of many suitable ways. For example, the utility program or the operating system may periodically interrupt the CPU to check CPU activity. In a crude example, a low priority task is periodically issued to the CPU and data is collected as to whether the operating system allowed the task to be executed. The operating system may output CPU usage data that is usable by the utility program. This data may be collected and output in the way that the task manager of a conventional Windows operating system collects and outputs CPU usage data.

The estimated amount of power consumption (resource "BATT") may similarly be determined from historical empirical data. The power management integrated circuit (PMIC) within the cellular telephone may monitor battery voltage at discrete times. Changes in the measured voltage indicate the amount of energy consumption. By analyzing battery voltage when selected individual application programs are being executed, or when selected subgroups of individual application programs are being executed, information on the battery energy consumption of each individual application program is collected.

The estimated amount of memory usage (resource "MEM") may similarly be determined from historical empirical data. The amount of memory allocated to each application layer program is generally known to the operating system. These usage values are supplied to the utility program. If every time the GAME application program was executed it consumed six usage units, then the estimate is that this next time the GAME program is executed it will likely consume six usage units as well. The actual memory usages of the various executing application programs are output by the operating system and are used by the utility program as set forth above. Accordingly, at this point in the method of Figure 2 all the usage values set forth in resource usage table 106 of Figure 1 are known to the utility program.

Next (step 203), for each resource, the estimated usage of the application program to be invoked is summed with the resource usage values of the other programs that are executing. In the present example, the sum for the "BATT" resource is sixteen units. The sum for the "MEM" resource is also sixteen units. The sum for the processing power resource "CPU" is twelve units.

Next (step 204), a decision is made as to whether the application program to be launched should be launched and executed on the mobile communication device or should be launched and executed on a remote computer. This decision is based at least in part on: 1) the estimated resource usage values for the application to be launched, and 2) the amount of resources consumed by the other programs currently executing on the mobile communication device 100. In the example of the utility program of Figure 2, if the sum of any of the three resources as determined in step 203 exceeds a corresponding predetermined threshold value for that resource, then it is determined that the application program to be launched should not be executed on the mobile communication device 100, but rather an instance of the application program should be executed on a remote computer as a Webapp.

If the sum as determined in step 203 for each of the three resources were below its corresponding threshold value, then processing would proceed to step 205.
The utility program would communicate with the operating system and would cause the application program (the GAME program in this case) to be launched on the mobile communication device. Thereafter (step 206), the application program would execute in normal fashion on mobile communication device 100. In the present example, however, the sum for the "BATT" resource is sixteen and the corresponding threshold value for the "BATT" resource is fifteen. Also, the sum for the "MEM" resource is sixteen and the corresponding threshold value for the "MEM" resource is fifteen. The decision of step 204 is therefore not to launch the GAME application program on the mobile communication device, but rather to invoke a client interface (step 207). The client interface is interface software that is integrated into the web browser program of the mobile communication device. When the client interface is integrated into the browser in this fashion, the browser is usable to communicate with a corresponding host interface on a remote computer. The client interface captures user inputs such as keypad key press information and communicates them across the internet to the host interface. The host interface in turn supplies the user input to the application program executing on the remote computer, so that, to the application program executing on the remote computer, the user input appears to have been generated locally in normal fashion. Data output from the application program, such as display data, passes in the opposite direction through the host interface, across the internet, and to the browser and client interface. The browser displays the display data on display 104 of mobile communication device 100 in a similar way to the way that the display data would ordinarily have been displayed on a display local to the remote computer. This client interface and the associated host interface software are conventional Webapp software. An example of this interface software is available from Citrix Systems Inc. of Fort Lauderdale, Florida. Once the client interface has been invoked (step 207), the utility program acts through the client interface and host interface and causes the GAME application program to be launched (step 208) on the remote computer. Thereafter, the GAME application program executes on the remote computer (step 209). At no time did the GAME application program ever execute on the mobile communication device 100. The GAME application program was not split such that some of the application executed on the mobile communication device and another part of the GAME application program executed on the remote server. An instance of the GAME application program is executed on the remote computer as a Webapp, with the browser on the mobile communication device and the mobile communication device hardware being used to interact with the Webapp. In this example, the instances of the application program on the mobile communication device and on the remote computer differ in that the instance on the mobile communication device executes on a different processor and has reduced functionality, so that it can execute with adequate speed on the mobile communication device, which has limited resources as compared to the remote computer.
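The division of labor between the client interface and the host interface can be pictured as a simple relay loop. This sketch is a generic stand-in for illustration only; it is not the Citrix software named above, and every name in it is an assumption.

```python
import json
import socket

def run_client_interface(host, port, next_key_press, render):
    """Forward local key presses to the host interface and render the display
    data streamed back by the remotely executing application (illustrative)."""
    with socket.create_connection((host, port)) as conn:
        stream = conn.makefile("rw")
        while True:
            key = next_key_press()   # e.g. polled from the keypad driver
            if key is None:
                break                # user closed the Webapp session
            stream.write(json.dumps({"type": "key", "code": key}) + "\n")
            stream.flush()
            reply = json.loads(stream.readline())  # display data from the host
            render(reply["display"])               # draw on display 104
```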
Figure 3 is a diagram that illustrates the software executing in mobile communication device 100 and remote computer 107. The software executing on mobile communication device 100 includes the operating system 108 having a BREW (Binary Runtime Environment for Wireless) type application programming interface 109. The horizontal dashed line 110 represents the interface between the operating system 108 and application layer programs 111-113 that execute on the operating system. Application program 111 is the first instance of the application program that is to be launched. Application program 112 is the novel utility program. Utility program 112 is illustrated here as an application layer program because it provides the GUI, outputs display data, and receives user inputs. The utility program 112 may, however, also be considered to be a part of the operating system 108. Application program 113 is the web browser into which the client interface functionality 114 is incorporated. Software executing on the remote computer 107 includes an operating system 115, the second instance 116 of the application program to be launched, and host interface functionality 117. Remote computer 107 is a computer that is a part of a server farm of computer resources. The second instance 116 of the application program is usable as a Webapp by a user of mobile communication device 100. Dashed arrow 118 represents information flow from mobile communication device 100 to the remote computer 107. This information flow involves user entry data and input information, such as key press information indicating which keys of keypad 105 the user pressed. This information passes from mobile communication device 100, across a wireless link (for example, a CDMA wireless link), and through other networks and the internet, to the server farm and remote computer 107. Dashed arrow 119 represents information flow from remote computer 107 to mobile communication device 100. This information flow involves display data that is output by the second instance 116 of the application program that is executing on remote computer 107. Rather than this display data being displayed on a monitor or screen or other display of remote computer 107, the display data is communicated across the internet and the wireless link to mobile communication device 100. The browser 113 on mobile communication device 100 renders the information such that it is displayed on display 104 of mobile communication device 100. Figure 4 is an illustration of mobile communication device 100, remote computer 107, and the communication between the two. The blocks in mobile communication device 100 labeled "A", "U", "B" and "OS" designate the first instance of the application program to be launched, the utility program, the browser having the client interface, and the operating system, respectively. These programs are stored in semiconductor memory and are executed by the processor of mobile communication device 100 as explained above. The block 120 labeled "PMIC" is a power management integrated circuit. Power management integrated circuit 120 is coupled to rechargeable batteries 121. Power management integrated circuit 120 monitors battery voltage and provides battery information to the processor. The battery information may be battery voltage in some embodiments. The battery information may in other embodiments be an indication of the amount of energy stored in the battery or the rate of battery power consumption or current consumption. Remote computer 107 is one of several computers in server farm 122.
The block 123 labeled "A" within remote computer 107 designates the second instance of the application program to be launched. This second instance of the application program is the program that can be executed remotely on remote computer 107 as a Webapp. Communication between mobile communication device 100 and remote computer 107 passes through a wireless link 124, a cellular telephone network 125 (in this case, a CDMA network), and a wide area network (WAN) 126. WAN 126, CDMA network 125 and the wireless link 124 can all be considered to be a part of the internet 127. Arrow 128 represents user entry information passing from mobile communication device 100 to the Webapp executing on remote computer 107. Arrow 129 represents display data that is output by the Webapp executing on remote computer 107 and that passes from remote computer 107 to mobile communication device 100. Figure 5 illustrates a sequence of communications that occurs between mobile communication device 100 and remote computer 107 in one scenario. Time in the illustration extends from top to bottom. The upper arrow 130 represents an application execution request. This is the communication that causes the second instance of the application program to be launched on the remote computer. In some examples, the application execution request includes the telephone number of the mobile communication device 100. The next arrow 131 represents an acknowledge communication whereby remote computer 107 acknowledges that the second instance of the application program has been launched. Arrow 131 is illustrated as a relatively darker and heavier arrow to indicate that the communication from remote computer 107 to mobile communication device 100 is a relatively high-bandwidth communication involving more information transfer than the information flow in the opposite direction from mobile communication device 100 to remote computer 107. The following pairs of arrows 132 represent application streaming. User input is communicated to the Webapp executing on remote computer 107, which in turn results in new display data that is output from the Webapp and is communicated back to mobile communication device 100 for display. The last pair of arrows 133 and 134 represents the utility program in mobile communication device 100 terminating the Webapp execution on remote computer 107. In one example, in addition to terminating execution and sending the acknowledgment 134, the remote computer 107 sends additional status information on the state of execution of the Webapp to mobile communication device 100. This status information is then usable on mobile communication device 100 to resume execution of the application at the state the application was in when it was terminated. From the user's perspective, the application continues to execute as if the location of execution of the application program never changed. The GUI is usable to change the conditions under which the utility program will terminate Webapp execution.
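The Figure 5 exchange can be summarized as a small message vocabulary. The type names below are assumptions made for illustration; the text specifies only the order of the messages and that the execution request may carry the device's telephone number.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class MsgType(Enum):
    EXEC_REQUEST = auto()    # arrow 130: ask that the second instance be launched
    EXEC_ACK = auto()        # arrow 131: remote computer confirms the launch
    USER_INPUT = auto()      # arrows 132: key presses toward the Webapp
    DISPLAY_DATA = auto()    # arrows 132: display data back to the device
    TERMINATE = auto()       # arrow 133: utility program ends Webapp execution
    TERMINATE_ACK = auto()   # arrow 134: ack, optionally carrying execution state

@dataclass
class Message:
    type: MsgType
    payload: dict = field(default_factory=dict)

# An execution request identifying the device, as described for billing:
request = Message(MsgType.EXEC_REQUEST, {"app": "GAME", "phone_number": "..."})
```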
In one embodiment, an application execution request includes the telephone number of a requesting mobile communication device. The operator of the server farm maintains information on the telephone numbers of mobile communication device users and also maintains associated billing information on the users. When an application execution request is received at the server farm, the telephone number in the incoming request is used to identify associated billing information and to bill the user of the mobile communication device for use of the provided Webapp application program. Alternatively, identification information other than a telephone number is embedded in the application execution request, and this other identification information is used to facilitate the operator's billing of the user for use of the Webapp application program. In some embodiments, there is a second application program executing on the remote computer. This second application program causes the host interface to include advertising information in the communication back to the client interface along with the Webapp display data. The client interface and browser executing on the mobile communication device receive the communication and cause the advertisement to be rendered on the display of the mobile communication device along with the Webapp display data. In another example, the second application program causes advertising information to be loaded and stored into the mobile communication device when there is communication between the remote computer and the mobile communication device. When the first instance of the application program is invoked locally on the mobile communication device, the stored advertising information is automatically displayed along with the display data of the application program. This display occurs even if, at the time of invocation, there is no communication between the mobile communication device and the remote computer at the server farm. Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Examples of a mobile communication device include: a cellular telephone, a personal digital assistant (PDA), a laptop computer, a tablet personal computer, a smart phone, or any mobile device that executes a web browser. Accordingly, various modifications, adaptations, and combinations of the various features of the described specific embodiments can be practiced without departing from the scope of the claims that are set forth below.
Protocol is recruiting for an Education Technology Facilitator and e-Learning Content Developer on a full-time permanent basis within the Isleworth area. Job Purpose: To work as part of the e-Learning Team to create, develop and promote e-Learning, and facilitate the use of relevant digital resources for students and staff. Contribute to the development of e-Learning and digital literacy resources and activities for independent learning. Salary: Circa £28,758 to £30,539 Key responsibilities: 1. Work collaboratively with the Education Technology Facilitator and lead on the research and development of e-Learning tools & resources. 2. Pro-actively support staff and lead on the creation of e-Learning materials, including video content and the use of e-Learning technologies for teaching and learning. 3. Liaise with the curriculum teams to support the embedding of digital literacy skills across the College. 4. Work with departments such as IT and MIS and with external partners to oversee the administration of the College’s VLE platform and its integration with other systems, applications and services, including managing user accounts, courses and structure. 5. Train curriculum staff and support them in the use of the College’s recording, tracking and monitoring tools. 6. Plan, organise and deliver training and guidance for staff in the innovative use of e-Learning tools, including self-access training content for staff. 7. Pro-actively contribute to the implementation of the College’s E-Learning Strategy. 8. Facilitate independent learning sessions when required. 9. Carry out induction sessions for new staff and students where appropriate, and support the running of digital literacy workshops, offering advice and guidance on resources and study. 10. Conduct customer service duties through front-of-house support (including managing student behaviour, tidying, shelving and service desk cover) for students and staff, including a minimum of one evening duty per week. About Protocol Protocol are the specialist full-service recruiter dedicated to education, training and skills. People are at the heart of everything we do. We place people first. We’re more than a recruitment agency – we pride ourselves on our ongoing support and aftercare delivered by our expert team, and all our candidates benefit from free access to our exclusive online CPD portal, Learning Zone. Whatever your career goals, we’ve got the right role for you, with a wide range of temporary and permanent positions available, including lecturing and training jobs. The legal bit… Protocol National ltd trading as Protocol are acting as an employment agency for this vacancy. As a result of the volume of applications we are currently receiving, we regret that we may be unable to respond with individual feedback. If we have not contacted you within two weeks of your application being received then regretfully your application will not be taken forward on this occasion.
https://www.protocol.co.uk/job-search/230738-education-technology-facilitator-and-e-learning-content-developer
Why Muscles Contract and Relax Because DMD is caused by a mutation in the gene that codes for dystrophin, it was thought that introducing healthy myoblasts into patients could be an effective treatment. Myoblasts are the embryonic cells responsible for muscle development and, ideally, they would carry healthy genes that could produce the dystrophin needed for normal muscle contraction. This approach has been largely unsuccessful in humans. A more recent approach was to increase the production of utrophin in muscle, a dystrophin-like protein that could potentially play the role of dystrophin and prevent cell damage. Passive stretching. This type of muscle activity occurs when your muscle is passively elongated. For example, bend over to touch your toes. There is no extra weight that your thigh muscle needs to hold or lift by exerting strength, but it still stretches from the movement. Although the muscle performs a negative amount of mechanical work (the work is done on the muscle), chemical energy (originally released by the oxidation of fat or glucose and temporarily stored in ATP) is still consumed, although less than would be consumed during a concentric contraction of the same force. For example, you use more energy when you climb a flight of stairs than when you go down the same flight. The force-velocity relationship relates the speed at which a muscle changes its length (usually regulated by external forces, such as load or other muscles) to the amount of force it generates. The force decreases hyperbolically relative to the isometric force as the shortening rate increases, eventually reaching zero at maximum velocity. The opposite is true when the muscle is stretched: the force increases above the isometric maximum until an absolute maximum is finally reached. This intrinsic property of active muscle tissue plays a role in the active damping of joints that are operated by simultaneously active opposing muscles. In such cases, the force-velocity profile amplifies the force generated by the lengthening muscle at the expense of the shortening muscle. This favoring of the muscle that restores joint balance effectively increases the damping of the joint. In addition, the strength of the damping increases with muscle force. The motor system can thus actively control joint damping via the simultaneous contraction (co-contraction) of opposing muscle groups. The mechanism of muscle contraction has eluded scientists for years and requires further research and updating. The sliding filament theory was developed independently by Andrew F. Huxley and Rolf Niedergerke, and by Hugh Huxley and Jean Hanson. Their results were published as two consecutive papers in the May 22, 1954 issue of Nature under the common theme “Structural Changes in Muscles During Contraction.” How would muscle contractions be affected if the ATP in a muscle fiber were completely depleted? The contractile activity of smooth muscle cells can be tonic (sustained) or phasic (transient) and is affected by multiple inputs such as spontaneous electrical activity, neuronal and hormonal inputs, local changes in chemical composition, and stretching. This contrasts with the contractile activity of skeletal muscle cells, which relies on a single neuronal input. Some types of smooth muscle cells are able to spontaneously generate their own action potentials, which usually occur after a pacemaker potential or slow wave potential. These action potentials are generated by the influx of extracellular Ca2+, not Na+.
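For reference, the hyperbolic force-velocity behaviour described a few paragraphs above is classically captured by Hill's equation; this is standard muscle physiology rather than something specific to this article:

```latex
(F + a)(v + b) = (F_0 + a)\,b
```

Here F is the force produced at shortening velocity v, F_0 is the isometric force, and a and b are empirically fitted constants. Setting F = 0 gives the maximum shortening velocity v_max = b F_0 / a, and F approaches F_0 as v approaches zero, matching the hyperbolic decline described above.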
As in skeletal muscle, cytosolic Ca2+ ions are needed for the cross-bridge cycle in smooth muscle cells. Concentric and eccentric muscle contractions. These two types of contractions often go hand in hand. A concentric muscle contraction will help you lift something heavy. We often speak of positive work. Passive stretching. This type of muscle activity is useful for gently lengthening your muscles. You can passively stretch your muscles by extending them as far as they can physically go. This lengthens your muscles in a way that activates them without the effort otherwise required. Excitation-contraction coupling is the process by which a muscle action potential in the muscle fiber causes the myofibrils to contract. In skeletal muscle, excitation-contraction coupling relies on direct coupling between key proteins: the sarcoplasmic reticulum (SR) calcium release channel (identified as the ryanodine receptor 1, RYR1) and voltage-gated L-type calcium channels (identified as dihydropyridine receptors, DHPRs). DHPRs are located on the sarcolemma (which includes the surface sarcolemma and transverse tubules), while RyRs are located across the SR membrane. The close arrangement of a transverse tubule and two SR regions containing RyRs is described as a triad and is primarily the place where excitation-contraction coupling takes place. Excitation-contraction coupling occurs when depolarization of the skeletal muscle cell leads to a muscle action potential that spreads across the cell surface and into the T-tubule network of the muscle fiber, thereby depolarizing the inner part of the muscle fiber. Depolarization of the internal parts activates dihydropyridine receptors in the terminal cisternae, which are located near ryanodine receptors in the adjacent sarcoplasmic reticulum. Activated dihydropyridine receptors physically interact with ryanodine receptors to activate them via foot processes (involving conformational changes that activate the ryanodine receptors allosterically). When the ryanodine receptors open, Ca2+ is released from the sarcoplasmic reticulum into the local junctional space and diffuses into the bulk cytoplasm to cause a calcium spark. Note that the sarcoplasmic reticulum has a high calcium buffering capacity, which is partly due to a calcium-binding protein called calsequestrin. The almost synchronous activation of thousands of calcium sparks by the action potential causes a cell-wide increase in calcium, the calcium transient. The Ca2+ released into the cytosol binds to troponin C on the actin filaments to allow cross-bridge cycling, which generates force and, in certain situations, movement. The sarco/endoplasmic reticulum calcium ATPase (SERCA) actively pumps Ca2+ back into the sarcoplasmic reticulum. When Ca2+ falls back to resting levels, force decreases and relaxation occurs. During a concentric contraction, a muscle is stimulated to contract according to the sliding filament theory. This happens along the entire length of the muscle, generating force at the origin and insertion, shortening the muscle and changing the angle of the joint. At the elbow, for example, a concentric contraction of the biceps would cause the arm to bend at the elbow as the hand moves from the leg toward the shoulder (a biceps curl). A concentric contraction of the triceps would change the angle of the joint in the opposite direction, straightening the arm and moving the hand toward the leg.
(2) Chemical reactions cause the reorganization of muscle fibers in such a way that the muscle shortens – this is contraction. In an eccentric contraction, the tension generated while isometric is not enough to overcome the external load on the muscle, and the muscle fibers lengthen as they contract. Instead of working to pull a joint in the direction of the muscle contraction, the muscle acts to slow down the joint at the end of a movement or otherwise control the repositioning of a load. This can happen involuntarily (for example, when trying to move a weight too heavy for the muscle to lift) or voluntarily (for example, when the muscle “smooths out” a movement or resists gravity, such as during downhill walking). In the short term, strength training involving both eccentric and concentric contractions appears to increase muscle strength more than training with concentric contractions alone. However, exercise-induced muscle damage is also greater during lengthening contractions. A multi-step molecular process in muscle fibers begins when acetylcholine binds to receptors in the membrane of muscle fibers. Proteins in muscle fibers are organized into long chains that can interact with each other and reorganize to shorten and relax. When acetylcholine reaches the receptors on the membranes of muscle fibers, the membrane channels open, and the process of contraction of a relaxed muscle fiber begins: when a muscle is at rest, the concentration of calcium in the sarcoplasm (the cytoplasm of a muscle cell) is very low and prevents the cross-bridges from attaching to actin.
https://canadawood.org/why-muscles-contract-and-relax/
We conducted a heating, cooling and lighting systems review and energy consumption assessment for the Diocese of Durham. Energy consumption, lighting and controls analysis Detailed energy audit Developed program to reduce energy use Challenge - Identify opportunities to reduce energy consumption. - Improve internal comfort conditions for the occupants of Cuthbert House. - Increase control of the building’s main systems. Solution - Analysis of real gas and electricity consumption data against industry benchmarks. - Review of building systems including heating, domestic hot water, ventilation, air conditioning and lighting. - Analysis of building controls and metering arrangement. Results - Detailed recommendations to improve occupants’ comfort, optimise heating, cooling and hot water control, and reduce overall building energy consumption.
https://www.sustain.co.uk/case-studies/diocese-durham/
CROSS REFERENCE TO RELATED APPLICATION TECHNICAL FIELD BACKGROUND OF THE INVENTION SUMMARY OF THE INVENTION DETAILED DESCRIPTION This application claims priority to U.S. Provisional Patent Application Ser. Nos. 62/482,724, filed Apr. 7, 2017 entitled LOW COST, MULTIFUNCTION EXERCISE PLATFORM; 62/484,986 filed Apr. 13, 2017 entitled LOW COST, MULTIFUNCTION EXERCISE PLATFORM; and 62/598,370 filed Dec. 13, 2017 entitled IMPROVED LOW COST, MULTIFUNCTION EXERCISE PLATFORM, the contents of which are incorporated herein by reference. The present invention relates to exercise equipment used to strengthen and tone the body of the user. There is a variety of types of fitness equipment and apparatus on the market, and two primary price segments for fitness equipment relevant to this product. Basic single-purpose products include those known as the Perfect Pushup®, ThighMaster®, GoFit Ab Wheel®, P90X Chin Up Bar®, and punching bags, each of which costs between $10 and $115. The multi-purpose equipment segment starts at about $90 and ranges up to thousands of dollars, and includes products such as the Weider Bungee Bench® and Total Gym 1100®. The cost difference is based on, among other things, quality, functionality, and material. The equipment in the lower cost segment provides limited functionality, enables a limited set of exercises, and offers a limited range of use. However, these lower cost products are more easily movable and storable. The products in the higher cost segment provide better quality, greater functionality, and seat and resistance adjustability, but these higher priced products otherwise have limited mobility and require more set up time and more training. What is desired is an inexpensive exercise platform configurable to allow the user to engage in a variety of exercises easily, safely and comfortably. An exercise platform has a base member being a firm, resilient structure or a semi-flexible structure, having in some aspects an edge member or a surrounding frame. The shape of the base member varies with design considerations, one shape being a generally planar, inverted pear shape and another being rectangular with side extensions. The base member is, or has positioned thereon, a cushioned top member, such as a foam or soft yoga mat type cushion. In an embodiment, the base member has around its periphery a pattern of anchor points or apertures to which resistance bands are attached, coupled or threaded. Resistance bands accessible around the periphery enable the user to adopt positions that enhance comfort, broaden functionality, prevent injury, and maximize the benefit to each muscle group. In an aspect, a first end of each of the resistance bands has a handle that is located, and can be stored, within the base member. The handles are configured for receiving a hand or foot of the user. In another embodiment, the base member is a stand-alone cushioned member, such as a foam or soft yoga mat type cushion, which has around its periphery a hollow frame that encases a single resistance band, accessible through a plurality of apertures where the band can be accessed by the user. To those skilled in the art to which this invention relates, many changes in construction and widely differing embodiments and applications of the invention will suggest themselves without departing from the scope of the invention as defined herein. The disclosures and the descriptions herein are purely illustrative and are not intended to be in any sense limiting.
While the making and using of the disclosed preferred embodiments of the invention is discussed in detail below, it should be appreciated that the invention provides many applicable inventive concepts which can be embodied in a wide variety of contexts. Some features of the preferred embodiments shown and discussed may be simplified or exaggerated for illustrating the principles of the invention. In a first preferred embodiment, and referring to FIGS. 1 to 3, the invention comprises an exercise platform comprising a base member having a cushioned member integrated with the base frame, the base frame having a continuous side wall, the base member side wall having a perimeter substantially dimensioned as an inverted pear shape when viewed from above, the base of the pear being proximate the top of the base member, and a plurality of resistance bands, each resistance band having a first end and a second end, the second end of each resistance band being loosely coupled to the base member or coupled to an anchor (as seen in FIG. 2) which is then attached to the base member, and the first end of each resistance band being configured to receive a handle for a hand or foot. The anchor is positioned on the base member with the inner faces of the “U” in direct contact with the base member; specifically, one inner side face of the anchor is in contact with the bottom surface of the base member, the bottom inner face of the anchor is in contact with the base member side wall, and the other inner side face of the anchor is in contact with the top surface of the base member. In this arrangement, the outer face of the latter anchor side is in contact with the bottom surface of the cushioned member. Alternately, as seen in FIG. 1, the anchor is positioned on the base member with one inner face of the “U” in direct contact with the base member and the other inner face in contact with the cushioned member. More specifically, one inner side face of the anchor is in contact with the bottom surface of the base member, the bottom inner face of the anchor is in contact with the base member side wall, and the other inner side face of the anchor is over the cushioned member. In this arrangement, the outer face of the latter anchor side is exposed. As noted below, the handles can be of a split design allowing them to receive the resistance tubing axially placed through the longitudinal center thereof. FIGS. 2 and 3, among others, are illustrations of a “ball and socket” type anchor—a “U” shaped part, seen in FIG. 2. This attachment comprises an anchor that receives the ball end of the band, as seen in FIG. 3, the band having a wider end due to the ball implanted within. The ball end is first inserted into a larger circular opening on the bottom of the three sides of the “U” shaped anchor, then moved by the user up the narrow channel to the smaller circular opening, where the tube expands so as to retain the ball in place during use. The invention further comprises an exercise platform comprising a base frame for receiving a cushioned member therewithin, the base frame forming a perimeter in a generally planar, inverted pear shape, with the base of the pear being proximate the top of the base frame, a cushioned member within the base frame, and a plurality of resistance bands each having a handle, the resistance bands loosely coupled to the base frame.
In this further aspect, the cushioned member is comprised of a flexible yet resilient material. The base frame is comprised of one selected from the group consisting of aluminum, metal, wood, synthetic wood, high-impact plastic, polyvinyl chloride (PVC), low-density or high-density polyethylene (LDPE, HDPE), acrylonitrile butadiene styrene (ABS), and polycarbonate/acrylonitrile butadiene styrene (PC/ABS), and the cushioned member is comprised of one selected from the group consisting of a sticky mat, foam mat, yoga mat material, urethane-foam cushion covered with vinyl, thermoplastic elastomer, fabric or vinyl covered polyurethane (PU), and rubber material. In an aspect, the base frame has spaced around the sidewall periphery thereof a plurality of apertures in communication with a plurality of vias in the cushioned member, each via being a path within the cushioned member for receiving a resistance band, wherein each resistance band has a first end and a second end, the second end of each resistance band being coupled to at least one other resistance band second end, and the first end configured to receive a handle for a hand or foot. The vias through the cushioned member can be either closed hollow tubes through which the resistance bands are threaded or open furrows. In a further aspect, the hollow tunnels or open furrows have at least one node of convergence approximately at the lateral and longitudinal center of the cushioned member, thus forming a mesh network of hollow tubes for receiving the resistance bands. Alternatively, the hollow tunnels or open furrows have two nodes of convergence approximately aligned on the lateral center of the cushioned member and longitudinally spaced down from the top of the cushioned member by about ⅓ and ⅔ of the longitudinal length of the cushioned member, thus forming a plurality of mesh networks of hollow tubes for receiving the resistance bands. The resistance bands and handles are similar for all of the embodiments and aspects thereof. A further base frame embodiment has two optional detachable handles coupled to the platform using resistance bands. A multiple-length option allows different lengths and tensions for each resistance band. This use of different lengths and tensions expands the range of resistance, and hence difficulty, with the shorter band (of the same tension level) providing more tension. Further, it allows a user to better size the resistance band by providing selectable lengths that best suit the range of motion required to perform each exercise. Another grip option has a single resistance band with multiple grips affixed to the band, to enable the user to select the proper length required for an exercise without having to switch out bands of different lengths. A further embodiment has a bar that is used to connect two resistance bands to perform exercises requiring a range of motion up and down the center axis of the mat or requiring both arms or legs working together. Both bands, one from the left side and one from the adjacent right side, are secured to the bar while already attached to the mat anchors at the ends of the resistance bands. Resistance bands may attach to the bar using (1) resistance bands with a ball and socket configuration on both ends so that the distal end secures into the bar, as seen in FIGS. 2 and 3; or (2) a “clip” connector on one end of the resistance band that attaches to a carabiner or similar coupler.
Now referring to FIGS. 4 through 9, various aspects of a preferred second embodiment of the invention are seen. In a first aspect, as illustrated in FIG. 4, a resistance band is threaded around the periphery of a base member, with extensions from the surrounding resistance band being exposed via apertures around the periphery of the base member, whereby the handles can be secured to a portion of the resistance band proximate a respective aperture during use. Referring to FIGS. 8 and 9, a user lies or kneels on the mat and places his or her hands or feet in a respective handle, then extends against said resistance band. Alternatively, the user can perform exercises from a standing position while positioned on the base member. In an aspect of the second preferred embodiment, there are a plurality of resistance bands, each having a first end (proximate end, proximate to the base member (base frame)) and a second end (distal end, distal to the base member (base frame)), the first end having a handle for receiving a hand and being exposed through apertures around the periphery of the base member, the second ends being threaded through internal vias in the base member and being coupled together at a node of convergence within the base member. Alternatively, the first end may be referred to as the distal end and the second end referred to as the proximate end. There can be one or more nodes of convergence within the base member. As seen in FIG. 10, a third embodiment of the present invention comprises an exercise platform comprising a base member 1000 having a top surface and a bottom surface and a plurality of sides, the base member sides forming a perimeter in a generally rectangular shape, an embodiment further having side extensions 1002. A cushioned top member 1003 is positioned on and coupled to the top surface of the base member, the base member having sides forming a generally rectangular shape, the cushioned top member sides having a perimeter less than that of the base member sides such that the sides of the top member are positioned within the sides of the base member. A plurality of base member apertures 1005 are formed within the base member top surface proximate the perimeter thereof, the sidewalls of the base member apertures further having formed therein a pair of opposed side apertures 1006. Each pair of opposed side wall apertures 1006 is configured to receive a cylindrical metal bar 1004 axially within such side wall apertures 1006, each cylindrical metal bar 1004 being positioned longitudinally within its respective base member aperture 1005 to serve as an anchor point. The base member 1000 is comprised of a resilient material, such as one selected from the group consisting of wood, synthetic wood, high-impact plastic, polyvinyl chloride (PVC), low-density or high-density polyethylene (LDPE, HDPE), acrylonitrile butadiene styrene (ABS), and polycarbonate/acrylonitrile butadiene styrene (PC/ABS). The top member 1003 is comprised of a cushioned material, such as one selected from the group consisting of a sticky mat, yoga mat material, urethane-foam cushion covered with vinyl, thermoplastic elastomer, fabric or vinyl covered polyurethane (PU), and rubber material. The top member is temporarily or permanently coupled to the base member. These members can be coupled to each other using, among other things, Velcro® brand fasteners, adhesive glue, bolts, staples and/or hook and loop fasteners.
The anchor points of the exercise platform comprise, in a third embodiment illustrated in FIG. 10, a plurality, such as five (5), base member apertures 1005 proximate the left side of the base member and a plurality, such as five (5), base member apertures 1005 proximate the right side of the base member. Alternatively, in a fourth embodiment illustrated in FIG. 11, the anchor points comprise a plurality of base member apertures 1104 proximate the left side of the base member and an equal number of base member apertures 1104 proximate the right side of the base member. Alternatively, the anchor points comprise a plurality of base member apertures proximate the left side of the base member, an equal number of base member apertures proximate the right side of the base member, and at least two anchor points proximate the bottom of the base member. Each resistance band is adapted to be inserted and locked into a respective base member aperture. As seen in FIG. 11, the fourth embodiment of the present invention comprises a base member 1100 having a top surface and a bottom surface and a plurality of sides, the base member sides forming a perimeter in a generally rectangular shape, an embodiment further having side extensions 1102. A cushioned top member 1103 is positioned on and coupled to the top surface of the base member 1100. Capable of being attached to the anchor points are resistance bands, each having a coupler or coupling means, such as, but not limited to, a clip, carabiner, spring loaded hook, shackle, metal loop with a spring-loaded gate, bolt, or knot (all such couplers or coupling means referred to herein, without limiting the coupler or coupling means, as a “clip”) at one end and a handle on the other end. The clips and handles can be permanently or temporarily coupled to the resistance bands. The resistance bands are elastic, stretchable but resilient cords such as, but not limited to, bungee cords, latex or stretch tubing. FIG. 12 is an illustration of an aspect of a base frame skeleton 1200 on which a mat or platform can be attached, integrated or incorporated. The base frame skeleton 1200 can be made of cylindrical or flat aluminum, alloy, steel or other metal, or of wood, synthetic wood, high-impact plastic, polyvinyl chloride (PVC), low-density or high-density polyethylene (LDPE, HDPE), acrylonitrile butadiene styrene (ABS), or polycarbonate/acrylonitrile butadiene styrene (PC/ABS). This aspect serves to fortify the base frame of the invention and can be integrated in a molding process that encases the mat or platform over the base frame skeleton during the injection molding process, or alternatively, by having the two sides of a two-sided mold affixed together directly above and below the base frame skeleton. FIG. 13 is an illustration of a base frame mat 1300 having resistance band anchors 1301 that are movably coupled to a track 1302 around the perimeter of the base frame mat 1300. Once unlocked, the anchors 1301 can be moved and, when located in a desired position, locked into place. The invention can be extended to include a plurality of anchor points at the lower side of the base member. The exercise platform base member of the invention is capable of withstanding at least 150 pounds of pressure without breaking, fracturing or splitting. Another embodiment of the invention is an exercise apparatus comprising a board that can withstand at least 150 to 200 pounds of pressure.
The embodiment has thereon a cushioned top member, the cushioned top member being a soft yoga mat type cushion, and a pattern of anchor points positioned proximate the periphery of the board to secure resistance bands to the board. Such embodiment includes a plurality of resistance bands with clips and handles, the clips operable to be coupled to the anchor points. The multiple anchor points enable the user to adopt positions that expand functionality, enhance comfort, prevent injury, and maximize the benefit to each muscle group. The invention further comprises a low cost ($50 to $120) floor-based exercise platform that enables a user to perform yoga and over thirty (30) other exercises, sized and dimensioned so that it can be stored underneath a couch or bed. The invention is distinguishable from conventional exercise apparatus due to, among other things, the scope of exercises that can be performed with the invention owing to its multi-point resistance band connection or anchor points. The exercise platform can be adjusted to the size of the user, enhancing user safety and comfort. At one end of the resistance bands are clips operable to clip into the anchor points. To the other end can be coupled hand or foot handles. In operation, the user clips a resistance band to anchor points on the left and right sides. The user then clips a handle to the end of each band not anchored to the board. The user then sits, lies or stands on the mat (depending on the specific exercise) and performs the desired exercise. Novel aspects of the invention include a multi-point grid of anchor points, allowing the user to adjust the angles, distances and resistance of each exercise. In an embodiment, each anchor point comprises a bar made of a resilient material, such as metal, hard plastic, steel or polycarbonate (all such bars referred to herein, without limiting the composition thereof, as a “metal bar”), that is coupled or countersunk into each hole in the grid. The anchor points can be numbered or otherwise have applied thereto reference symbols to allow one to develop a program of exercises based on the correlation of the numbers or reference symbols to the arrangement of the resistance bands. An embodiment of the invention further comprises cutouts on one or both sides of the board to serve as handles for doing isolated exercises (left/right side only, or sit-ups, for example) and for carrying the board. More specifically, the third preferred embodiment of the exercise platform has a base member with a top surface and a bottom surface and a plurality of sides, the base member sides forming a perimeter in a generally rectangular shape with side extensions or wings. In an aspect of any of the preferred embodiments, the edge member has spaced around the sidewall periphery thereof a plurality of apertures in communication with at least one via, the via being a hollow tunnel around the interior of the edge member, the hollow tunnel for receiving a looped resistance band. Each aperture is dimensioned for receiving a resistance band extension from the looped resistance band, the distal end of each resistance band extension being coupled to the looped resistance band and the near end of the resistance band extension configured to receive a handle for a hand or foot. The length of each resistance band is dimensioned as appropriate for the size of the base member.
The invention thus is the base platform alone, as well as in combination with a looped resistance band, alone or having a plurality of resistance band extensions from the looped resistance band. The distal end of each resistance band extension is coupled to the looped resistance band and the near end of the resistance band extension has a handle for a user's hand or foot. The edge member has recesses for receiving the resistance band extension handles, making storage easy. In an aspect of the preferred embodiments, there are 10 apertures spaced along the periphery: 2 apertures proximate the top side of the edge member, 3 along the right side of the edge member, 2 along the bottom side of the edge member and 3 along the left side of the edge member. The resistance band extensions and respective handles are adapted to be stored within hollow spaces in the base member proximate their respective apertures. In a further aspect of the preferred embodiments, the edge member has spaced around the sidewall periphery a plurality of apertures in communication with at least one via, the via being a hollow tunnel around the interior of the base member. The hollow tunnel receives a looped resistance band, and from each aperture a resistance band extension extends from the looped resistance band. The distal end of each resistance band extension is coupled to the looped resistance band and the near end of the resistance band extension is configured to receive a handle for a hand or foot. Such handle can be permanently attached or clipped to the extension using any variety of coupling means. In any preferred embodiment, the base member is a semi-rigid but flexible material and the edge member is a semi-rigid plastic material. The base member can be comprised of a mat foam material having a thickness of between 1 inch and 4 inches, preferably 2 inches. Alternatively, the base member can be a layered structure having a foam mat material on top and a more rigid material underneath. Again, the foam can have a thickness of between ½ inch and 4 inches, preferably about 2 inches. The edge member does not need to be rigid; it can flex and stretch in the same direction as the resistance bands while exercising. The resistance bands are made of an elastic material, such as but not limited to elastic stretchable but resilient cords such as bungee, latex or stretch tubing. In each of the embodiments, the resistance band extensions have permanently or temporarily attached to the first end thereof a handle configured to receive either a user hand or foot.
Referring to FIGS. 5 and 6, the handles 404 are of a split detachable design allowing them to open and shut to receive the resistance tubing 401 being axially placed through the center thereof, then closed for use. The vias through the base member are, in an aspect, closed hollow tubes through which the resistance bands are threaded. Each of the hollow tunnels has at least one node of convergence approximately at the lateral and longitudinal center of the base member, thus forming a mesh network of hollow tubes for receiving the resistance bands. In such aspect, the second end of each resistance band is coupled together with the other resistance bands at the node of convergence within the base member. In a further aspect, the hollow tunnels have two nodes of convergence approximately aligned on the lateral center of the base member and longitudinally spaced down from the top of the base member by about ⅓ and ⅔ of the longitudinal length of the base member, thus forming a plurality of mesh networks of hollow tubes for receiving the resistance bands. The second end of each resistance band is coupled together with a plurality of other resistance band second ends at the node of convergence within the base member. In a further aspect of the preferred embodiments, the vias through the base member are open furrows in which the resistance bands are placed. Similar to the closed tunnels, the open furrows have at least one node of convergence approximately at the lateral and longitudinal center of the base member, thus forming a mesh network of open furrows for receiving the resistance bands, or have two nodes of convergence approximately aligned on the lateral center of the base member and longitudinally spaced down from the top of the base member by about ⅓ and ⅔ of the longitudinal length of the base member, thus forming a plurality of mesh networks of open furrows for receiving the resistance bands. In these aspects, the base member is a flexible but resilient material comprised of one selected from the group consisting of a sticky mat, foam mat, yoga mat material, urethane-foam cushion covered with vinyl, thermoplastic elastomer, fabric or vinyl covered polyurethane (PU), and rubber material. The edge member is comprised of one selected from the group consisting of wood, synthetic wood, high-impact plastic, polyvinyl chloride (PVC), low-density or high-density polyethylene (LDPE, HDPE), acrylonitrile butadiene styrene (ABS), and polycarbonate/acrylonitrile butadiene styrene (PC/ABS). The exercise platform invention, regardless of arrangement, also includes in combination therewith a software program stored on a computer readable medium, the software program providing exercises and routines facilitated by the functions of the exercise platform. With any of the embodiments of the invention, various sport-specific exercise programs can be accessed via a cloud-based member portal and can be stored easily in a local application. The invention includes the exercise platform in combination with a software exercise application that correlates specific exercises and routines to the functions facilitated by the exercise platform.
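One way such a software program might correlate routines to the numbered anchor points can be sketched as a small data structure. Every name and the sample routine below are hypothetical; the specification claims only that the software provides exercises and routines keyed to the platform's functions.

```python
from dataclasses import dataclass

@dataclass
class Exercise:
    name: str
    anchor_points: tuple   # numbered anchor points used, per the grid labeling
    band_length: str       # e.g. "short" or "long", selecting the band tension

# A hypothetical routine keyed to the numbered anchor-point grid:
ROUTINE = [
    Exercise("bicep curls", anchor_points=(3, 8), band_length="short"),
    Exercise("chest flies", anchor_points=(1, 2), band_length="long"),
    Exercise("leg press",   anchor_points=(9, 10), band_length="long"),
]

for step in ROUTINE:
    print(f"{step.name}: clip {step.band_length} bands to anchors {step.anchor_points}")
```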
Advantages of the invention include, but are not limited to, user comfort, injury prevention, a wide variety of configurations to allow a wide variety of exercises, and the ability to standardize the type and quantity of exercises to be performed by providing various step-by-step exercise programs keyed to the use of specific anchor points. Example exercises that can be performed by the user with the invention and resistance bands include bicep curls, leg presses (front, rear), tricep dips, leg curls (front, rear), rowing, leg extensions (out, in), calf extensions, chest flies, abdominal crunches, twists, shoulder presses (bench, inclined, overhead) and trap pulls. Example exercises that can be performed by the user with the invention without the resistance bands include sit-ups, squats, pushups, yoga, back rolls, stretching and lunges. The invention is targeted to users who desire an inexpensive portable workout platform to tone muscle or lose weight and further desire an apparatus that can facilitate the development of a structured multi-purpose workout. The embodiments shown and described above are only exemplary. Even though numerous characteristics and advantages of the present invention have been set forth in the foregoing description, the disclosure is illustrative only, and changes may be made within the principles of the invention to the full extent indicated by the broad general meaning of the terms used herein. Various alterations, modifications and substitutions can be made to the exercise platform of the disclosed invention without departing in any way from the spirit and scope of the invention. BRIEF DESCRIPTION OF THE DRAWINGS For a better understanding of the present invention, including its features, advantages and embodiments, reference is made to the following detailed description along with the accompanying Figures. FIG. 1 is an illustration of a first preferred embodiment of a base mat with a single resistance band per handle; FIG. 2 is a view of a resistance band anchor; FIG. 3 is a view of the end of a resistance band with a ball lock mechanism; FIG. 4 is an illustration of a second preferred embodiment of a base mat with a single loop of resistance band and attached handles; FIG. 5 is a cutaway side view of the second embodiment of the base mat showing a portion of the resistance band and handle/grip; FIG. 6 is an illustration of the split open design of a handle/grip; FIG. 7 is an illustration of an aperture of a second preferred embodiment of the base mat; FIG. 8 is a top view of the invention with the user thereon; FIG. 9 is a side view of the invention with the user thereon; FIG. 10 is an illustration of a third preferred embodiment of a base mat; FIG. 11 is an illustration of a fourth preferred embodiment of a base mat; FIG. 12 is an illustration of an aspect of the invention wherein the base frame is a skeleton on which a mat is attached; and FIG. 13 is an illustration of an aspect of the invention wherein the base frame includes movable anchors around the periphery thereof.
SANTA CRUZ >> Saturday evening, people were packed into every nook and cranny at the Civic Auditorium to witness a highly vivacious end-of-season performance by the Santa Cruz Symphony. A performance of Beethoven’s “Symphony No. 9” featuring the combined forces of an augmented orchestra and the Cabrillo Symphonic Chorus with four outstanding guest soloists from the Metropolitan Opera was the main work of the evening. The symphony is always a crowd pleaser, and members of the orchestra and chorus had gathered in a flash mob-like event on Pacific Avenue before the performance, giving an advance taste of “Ode to Joy.” But Maestro Daniel Stewart chose to pair this well-known work with a relatively new work by Finnish composer Esa-Pekka Salonen, thereby exposing the audience to something much more challenging than an easy filler piece. Nyx, the goddess of the night, is an elusive character in Greek mythology. Little is known about her, and Salonen uses this very ambiguity to create a work that is capricious yet volatile. It’s tender and intimate one moment and overwhelmed by violent outbursts the next. Stewart captured this intoxicating mix admirably, coaxing the instruments in powerful orchestral gestures. With triple woodwinds, two piccolos, English horn, bass clarinet, contrabassoon and five horns, this was a huge orchestra, and in many sections of the 18-minute piece everyone was playing as loud as possible. Contrasting with these strong contrapuntal lines reminiscent of Richard Strauss were soft passages featuring harp and celeste — shades of Ravel and Debussy’s ‘La Mer.’ An extended clarinet solo showcasing the full range of that instrument was beautifully rendered by Karen Sremac. Keep an eye out for Salonen’s piano concerto, which will be featured by the symphony next season. Exuberance and vitality continued in Beethoven’s Ninth, which followed, on the 192nd anniversary of its first performance. The buoyant, jovial second movement was played with precision, with outstanding work by principal bassoonist Douglas Brown, but problems arose in the expansive, elegiac third movement with some disturbing horn intonation. Guest soloists were Michelle Bradley, soprano; Avery Amereau, mezzo-soprano; Kang Wang, tenor; and Shanyang, bass-baritone. They performed with clarity and precision. Shanyang has a commanding voice that resonated wonderfully even in the Civic, which is not blessed with good acoustics. His opening statement was riveting. The 80-strong Cabrillo Symphonic Chorus under the direction of Cheryl Anderson, having waited so patiently on stage, eventually joined the proceedings with a joyously spirited call to ‘Let thy magic bring together All whom earth-born laws divide.’ The balance was excellent even over the large orchestra on stage, and the audience gave an immediate standing ovation to all the assembled musicians.
This is a dynamic glossary and the author would welcome any e-mail suggestions for additions or amendments. Page updated 23 January, 2019, © Lee Harvey 2012–2019. Multivariate analysis (MVA) core definition Multivariate analysis is a statistical technique that attempts to show how more than two operationalised concepts are interrelated. explanatory context Introduction Multivariate analysis (MVA) uses statistical measures of association, including correlation and regression techniques where appropriate. MVA attempts to identify the factors that affect a specified dependent variable. It may use the patterns of association between factors (variables) to suggest pseudo-causal models. MVA elaborates the measured bivariate association (between X and Y) by taking into account other variables. Thus MVA essentially does two things. First, it acts as a check on the assumed relationship between X and Y as revealed by the simple bivariate correlation. Multivariate analysis is used to establish non-spurious relations through the computation of correlations between X and Y, controlling for other variables that might explain away the observed relationship. If the relationship between X and Y persists when other variables are taken into account (n-way crosstabulation; partial correlation; etc.) then the correlation is regarded as 'non-spurious'; in other words, it is not a correlation that can be explained away by an antecedent or intervening variable. For example, a relationship between income and education might be explained away by age. Second, MVA acts to elaborate the association between X and Y by specifying other interrelated variables. Thus Y is seen to be associated with not a single X but a combination of variously weighted Xs. Nonetheless, this procedure is a statistical analysis that merely shows degrees of association between measured variables. Two major problems arise: first, the relationship between correlation and causality; second, the measurement of variables. Multivariate analysis also takes into account not only the relationship of independent to dependent variables but also the interrelationship of independent variables. When multivariate analysis is undertaken on interval scale data using techniques such as least squares regression analysis then it provides, in theory, an estimate of how much variance in a dependent variable can be attributed to variance in a combined set of independent variables. In so doing, it provides weights for each independent variable, which indicate how much of this 'explained' variance can be attributed to each independent variable holding constant the effects of all the other independent variables. Survey analysis taking account of a third variable, multiple regression and causal path analysis are examples of multivariate analysis. The term is also used for techniques such as factor analysis that seek to simplify description of an array of variables by reducing them to a smaller number of basic factors. Correlation and causality What MVA can do is to reveal and elaborate associations between measured variables, which is not the same as identifying causes. A cause involves a constant conjunction between X (or a combination of Xs) and Y, such that whenever X (Xs) occur Y results. In principle this requires a perfect correlation (R=1) between the identified Xs and Y.
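The 'controlling for other variables' step described above can be made concrete with a short sketch. This is not part of the glossary entry; it is a minimal Python illustration, with entirely hypothetical data, of a partial correlation computed as the correlation of regression residuals, using the entry's own income/education/age example.

import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after controlling for z:
    regress each of x and y on z, then correlate the residuals."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    res_x = x - np.polyval(np.polyfit(z, x, 1), z)  # x with z's linear effect removed
    res_y = y - np.polyval(np.polyfit(z, y, 1), z)  # y with z's linear effect removed
    return np.corrcoef(res_x, res_y)[0, 1]

# Hypothetical data: education and income both rise with age, so their
# bivariate correlation is partly spurious.
rng = np.random.default_rng(0)
age = rng.uniform(20, 65, 200)
education = 0.1 * age + rng.normal(0, 1, 200)
income = 0.5 * age + rng.normal(0, 5, 200)

print(np.corrcoef(education, income)[0, 1])  # sizeable raw association
print(partial_corr(education, income, age))  # near zero once age is controlled

In this constructed example the raw correlation is substantial while the partial correlation is close to zero, which is exactly the pattern the entry describes as a relationship "explained away" by an antecedent variable.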
MVA, of course, is used to indicate causal factors when this condition does not obtain. The crux, then, as to whether MVA reveals causal factors turns on the notion of 'factor'. If factor is seen as indicative of a causal relationship rather than as a cause per se, then MVA can be seen as a means to reveal causal factors. Thus, if several studies show some association between lack of supervision and delinquency, then supervision may be seen as a causal factor, although not, of itself, a sufficient cause. Unless a perfect correlation between a number of factors and delinquency is found, the sufficient conditions will remain undisclosed. MVA merely points to possible causal combinations. This, of course, inhibits any possibility of causal laws; indeed, MVA cannot do other than suggest macro-sociological causal factors and cannot attribute causality. The technique is purely statistical and any causal inference goes beyond MVA itself. Cause implies a time priority: if X causes Y then X precedes Y. The specification of time priority is independent of the statistical procedures. Similarly, any causal attribution is dependent upon a prior selection of relevant variables; MVA can only indicate the association between specified variables. Causal attribution is thus dependent upon a prior selection of theoretically sound variables. At one level MVA reveals causal factors in as much as it provides a basis for elaborating non-spurious correlations, which, if located within a sound theoretical framework in which time priority can be demonstrated, may suggest causal relationships. However, these are mere suggestions; MVA cannot reveal the existence or nature of any causal links in the sense of proving them or providing the basis of causal laws at a theoretical level. MVA is a pragmatic device that may suggest causal factors, on the basis of a falsificationist principle, assuming that it is viable to talk about macro-sociological causes in the social world. Measurement MVA deals with relationships between measured variables. It thus elaborates associations between operationalised concepts. There is a difference between 'revealing causal factors' at this operational level and constructing causal laws at a universal theoretical level. The extent of association between operationalised concepts may be a function of the operationalising process, which is a multi-stage process involving subjective decisions about the following: the dimensions encompassed by a theoretical concept; the selection of indicators of each dimension; and their combination into an index. It has been argued that 'objective' criteria are possible for the construction of an index, and that the 'interchangeability of theoretically sound indicators' obviates the subjectivity of the selection procedure (Lazarsfeld). However, even accepting these caveats, the researcher still makes 'subjective' decisions as to the components of the operationalised concept. MVA, in dealing with measured variables, is thus dealing with these selectively operationalised concepts and can only suggest causal links for these operational constructs. The bridge between theoretical causal attribution and identification of operational causal factors is therefore clearly problematic. Multivariate analysis and falsificationism MVA assumes a nomothetic approach to the social world (i.e. that one may construct the social world on the basis of generalisable cause and effect).
Specifically, the orientation towards this positivistic endeavour adopted by multivariate analysis is a falsificationist one. That is, theoretical statements are not proved. Rather they are framed boldly and in a way in which they are open to empirical disproof. Statements that appear corroborated and are not refuted by empirical evidence are, for the time being, accepted as part of scientific knowledge. MVA adopts this approach in the sense of setting up hypotheses that imply a causal relationship and then opening them up to scrutiny. If a correlation is non-spurious then it is indicative of a causal relationship. The falsificationist approach to the production of scientific knowledge raises problems such as the persistence of clearly refuted statements in science. The most important objection, however, is that falsificationism does not address the problem of the theory-laden nature of observation. An observed result is interpreted only within the theoretical framework under test. It provides no possibility for the reconceptualising of empirical evidence using an alternative 'paradigm'. MVA revelations are, then, at best, non-transcendent. Multivariate analysis using crosstabulations It is possible to analyse the relationship between two variables taking into account other factors when the data is crosstabulated (i.e. of a nominal or ordinal scale). This is done by constructing n-way crosstabulations (i.e. crosstabulations that are more than simple two-way crosstabulations). The approach is to create crosstabulations of X by Y controlling for a third (or any number of other) variables. For a simple example see the entry on spuriousness.
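As a concrete illustration of the n-way crosstabulation approach just described (not part of the original entry; the variable names and data are hypothetical), a minimal pandas sketch:

import pandas as pd

# Hypothetical survey data: X = education, Y = income, Z = age group.
df = pd.DataFrame({
    "education": ["high", "high", "low", "low", "high", "low", "high", "low"],
    "income":    ["high", "high", "low", "low", "high", "high", "low", "low"],
    "age":       ["young", "old", "young", "old", "old", "old", "young", "young"],
})

# Simple two-way crosstabulation of X by Y.
print(pd.crosstab(df["education"], df["income"]))

# n-way crosstabulation: X by Y within each level of the control variable.
# If the X-Y association disappears inside every age group, the original
# bivariate relationship is spurious in the sense used above.
print(pd.crosstab([df["age"], df["education"]], df["income"]))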
http://www.qualityresearchinternational.com/socialresearch/multivariateanalysis.htm
Upon researching good campaign finance practices, it becomes clear that this is a highly contested issue. For some, loose regulations and the free flow of political money are innately undemocratic, as they skew the political process in favour of wealthy individuals and special interest groups; for others, the right to use one’s wealth to endorse a candidate or political issue is an extension of free speech. Elections represent a core component of any democratic state. They provide citizens with the opportunity to exercise their political rights in choosing persons to represent their interests. The development of the Nigerian electoral system can be divided into two main periods: colonial (1900-1959) and post-colonial (1960 to date). Electoral management bodies (EMBs) are saddled with the responsibility of directing some of the most complex operations undertaken by democratic societies – elections. Regardless of the maturity of democratic traditions and the strength of political institutions in a country, the administration of elections is always a challenging mission fraught with risks.
https://electoralhub.iriad.org/publications/research-papers/
Responsibilities:
- Execute all design stages from concept to final hand-off to engineering
- Practice User Centered Design (UCD) in all stages, starting with sketches and wireframes through to high-fidelity mockups and digital prototypes
- Communicate with engineering teams as designs are implemented to ensure products meet usability standards. Analyze, resolve and track defects to closure
- Generate clear ideas, user flows, concepts and designs of creative assets from beginning to end
- Translate client business requirements, user needs and technical requirements into designs that are intuitive, easy to use, and emotionally engaging
- Stay up-to-date with design application changes and industry developments
- Present work with confidence and make self-driven design decisions incorporating best practices
- Leverage available insights like market analysis, customer feedback, site metrics, and usability findings and incorporate them in your design solutions
- Advocate for the customer and user-centered design in a corporate environment
- Communicate high-level design strategies and detailed interaction behaviors
- Conduct design discovery workshops for new engagements and formulate a design strategy to deliver
Must Have:
- 8+ years of UX, IA and/or UI design experience in reputed organisations
- Formal education or training in interaction design from reputed institutes like NIDs, IITs, SID etc.
- Deep hands-on experience in all aspects of UX design, including user research, card-sorting, user journey maps, contextual inquiry, usability testing and interaction design
- Deep eagerness for observing human behavior and synthesizing insights into design
- Expertise in creating a variety of design documentation including (but not limited to) user scenarios,
https://www.tothenew.com/job-description/ux-design-lead
1. Human-wildlife cooperation is a type of mutualism in which a human and a wild, free-living, animal actively coordinate their behaviour to achieve a common beneficial outcome. 2. While other cooperative human-animal interactions involving captive coercion or artificial selection (including domestication) have received extensive attention, we lack integrated insights into the ecology and evolution of human-wildlife cooperative interactions. 3. Here, we review and synthesise the function, mechanism, development, and evolution of human-wildlife cooperation. 4. Active cases involve people cooperating with greater honeyguide birds and with two dolphin species, while historical cases involve wolves and orcas. 5. In all cases, a food source located by the animal is made available to both species by a tool-using human, coordinated with cues or signals. 6. The mechanisms mediating the animal behaviours involved are unclear, but they may resemble those underlying intraspecific cooperation and reduced neophobia. 7. The skills required appear to develop at least partially by social learning in both humans and the animal partners. As a result, distinct behavioural variants have emerged in each type of human-wildlife cooperative interaction in both species, and human-wildlife cooperation is embedded within local human cultures. 8. We propose multiple potential origins for these unique cooperative interactions, and highlight how shifts to other interaction types threaten their persistence. 9. Finally, we identify key questions for future research. We advocate an approach that integrates ecological, evolutionary, and anthropological perspectives to advance our understanding of human-wildlife cooperation. In doing so, we will gain new insights into the diversity of our ancestral, current, and future interactions with the natural world. Sponsorship Natalie Uomini was supported by the Max Planck Society and grant #0271 from the Templeton World Charity Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. Mauricio Cantor was supported by the Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior. Fábio Daura-Jorge was supported by CAPES (#88887.374128/2019-00), CNPq (#308867/2019-0).
https://www.repository.cam.ac.uk/handle/1810/337921
Structure elements comprising “inflatables” are known in the art. See, for example, the AirBeams™ of Vertigo, Inc. at www.vertigo-inc.com. One such element is an arch that is made of a woven fabric exterior and an internal membrane that is pressurized with air. The arch further comprises “cohesionless” particles that are compressed against the fabric exterior by air pressure inflating the internal membrane. This “hydrostatically enabled” arch, when stabilized by suitable guy wires, is able to support an SUV hanging from its center, much more than otherwise possible without the addition of the particles. Tension straps on the top and bottom are used for additional reinforcement to support the heavy loads. This demonstration of the concept has led to plans for further development by the U.S. Army, specifically the Inverse Triaxial Structural Element (ITSE) Project, with a goal of developing a practical demonstration of the use of very high performance tensile fabrics. The approach is to develop and test the concept using existing fabrics, using structural test results to calibrate, validate and develop a finite element model (FEM) of the structure. A validated FEM would then be used with a continuum model to predict enhancement of fabric materials, in particular those employing carbon nanotubes (CNT), and of structures using the CNT fabric. In support of the ITSE Project, the Army developed a test structure for testing the basic concept of “hydrostatic enablement.” The concept of the test structure is illustrated in FIG. 1. Refer to FIG. 1, showing a top view of a test apparatus 10 with the center section 12 further depicted for illustration purposes only. A test device 10 incorporating a reinforced rigid external cylinder 11 incorporates a center 12 comprising a flexible tube filled with cohesion-less particles 14, such as dry sand, the cylinder 11 filled with water 15. The water 15 is pressurized to a pressure represented as σ3 to enable the center column to withstand a load represented as σ1. As the value of σ3 increases to a pre-specified amount, the available loading capacity σ1 also increases to a pre-specified amount as the center column of particles 14 stiffens under the increasing compressive force σ3. This is best seen in FIG. 1B, in which a first “differential” stress-strain curve 17 depicts the relationship between σ3 and σ1 for a “nominal value” of σ3. As σ3 is increased by increasing the water pressure in the cylinder 10, the value of σ1 also increases, as indicated by the differential stress-strain curve 16 and the dashed curve 18 indicating the significant increase in slope of the differential curve 16 with an increase in σ3. This follows the Mohr-Coulomb relation for cohesion-less soils:

τ = (σ − μ) tan(φ) + c   (1)

where: τ = shear strength (stress); σ = normal stress; c = cohesion (intercept of the failure envelope with the τ axis); φ = slope of the failure envelope (angle of internal friction); μ = hydrostatic pressure.

The U.S. Army has investigated using thin wall structures for “hydrostatically enabled” structure elements. Refer to FIG. 2. In FIG. 2A, a “support column” 202 of cohesion-less particles 203, such as dry sand, encased in a flexible membrane 204, such as butyl rubber or the like, is compressed and made more rigid by the use of pressure, σc′, equally impressed over its length. FIG.
2B is a top view of the thin-walled tube 202 showing the opposing force, σc′, inside the thin-walled tube, the relationship to tensile force, T, given by:

σc′ = Td/2t   (2)

where: T = tensile force in the thin-walled cylinder; d = diameter of the thin-walled cylinder; t = thickness of the thin wall; σc′ = hydrostatic pressure applied.

Eqn. (2) may be used to design appropriately sized systems based on the basic theory of the Mohr-Coulomb relation of Eqn. (1) and the pre-specified loads, σ, expected. For example, a designer can specify the thickness, t, and diameter, d, of a thin-wall tube based on how much hydrostatic pressure will need to be applied to support a pre-specified axial load, σ. An alternative depiction of the effect of “stiffening” of cohesion-less particles is shown in FIG. 2C, a stress-strain curve indicating how a low applied hydrostatic pressure, σcL′, exhibits a significantly lower load, σ1′, than a higher applied hydrostatic pressure, σcH′, at the same slope of the failure envelope, φ′. Refer to FIG. 3A, a test configuration 301 for the ITSE. The filled tube 301 comprises an outer membrane 302 of abrasion resistant material, such as woven Kevlar® or the like, an inner bladder 304 of flexible material, such as urethane, butyl rubber or the like, and a “fill” of cohesion-less particles 305, such as dry sand of medium density. A suitable fluid 303, such as air, is employed to inflate the inner bladder 304 and provide the necessary pressure to stiffen the particles 305 into a rigid mass impressed against both the bladder 304 and the outer membrane 302. FIG. 3B is a loading layout of the configuration 301 of FIG. 3A, the configuration 301 emplaced upon supports 306, prior to impressing a load, σ2. Testing demonstrated the viability of the ITSE concept. The filled tubes for the test were about 10.2 cm (four inches) in diameter and about 61 cm (two feet) in length. They had a compliant internal urethane bladder and an external membrane of polyester bias braid, the same material as the air arch that supported an SUV. The internal bladder was inflated to 100 psi, providing axial loading to full mobilization of the shear strength of the particulates, dry sand, or of either membrane. A 3-point bending test was conducted to full mobilization of the shear strength of the soil or of either the internal bladder or external membrane. Test results are shown in the graphs of FIGS. 4 and 5. FIG. 4 shows results for two test units in compression, showing less than about 3.8 cm (1.5 in.) extension for a load in excess of 4,000 lbs and less than about 4.4 cm (1.75 in.) extension for a load of about 5,400 lbs, making the unit able to carry a load about 12 times greater than a tube filled only with dry sand. FIG. 5 shows a linear deflection curve of flexural force (psi) vs. deflection (in.), topping near 1000 psi at a deflection of only about 5.1 cm (two inches). U.S. Pat. No. 6,463,699, Air Beam Construction Using Differential Pressure Chambers, to Bailey, describes a closed tubular cylindrical shell of air impermeable fabric having fixed within the shell an “I-beam envelope” comprising flexible, air impermeable walls sealed to the interior of the shell. The I-beam envelope extends the length of the shell and defines air chambers in communication with an inflation valve. Compressible material is dispersed throughout the interior of the I-beam envelope.
When subjected to compressive forces by pressurization of the air chambers, the material becomes rigid, thus able to support increased loading, albeit horizontal in the normal orientation of I-beams. The filled envelope is either vented to atmosphere or connected to a vacuum source. The above demonstrates the feasibility of hydrostatically enabled structure elements but does not address many of the practical considerations for use of the technology. One such consideration is the use of these structure elements in addressing damage to existing structures to mitigate further catastrophic deterioration, injury or loss of life. Select embodiments of the present invention address this and other practical applications.
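To make the two design relations above concrete, here is a small illustrative Python sketch (not part of the patent) implementing Eqn. (1) and Eqn. (2) exactly as stated in the text; the numerical inputs are hypothetical and simply assume consistent units throughout.

import math

def shear_strength(sigma, mu, phi_deg, c=0.0):
    """Mohr-Coulomb relation of Eqn (1): tau = (sigma - mu) * tan(phi) + c.
    For cohesion-less particles such as dry sand, c is zero."""
    return (sigma - mu) * math.tan(math.radians(phi_deg)) + c

def confinement_pressure(T, d, t):
    """Eqn (2) as stated: sigma_c' = T*d / (2*t) for a thin-walled tube of
    diameter d and wall thickness t carrying tensile force T."""
    return T * d / (2 * t)

# Hypothetical sizing check: confinement from the wall tension, then the
# shear strength it buys, treating that confinement as the effective
# normal stress on the particles (an illustrative simplification).
sigma_c = confinement_pressure(T=500.0, d=0.102, t=0.002)
print(shear_strength(sigma=sigma_c, mu=0.0, phi_deg=35.0))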
The National Institute for Health and Care Excellence has caused a stir with its proposed measures to tackle antibiotic resistance, which include setting up “antimicrobial stewardship teams” to keep a closer watch on local prescribing. The agency has issued draft guidelines to help “health and social care commissioners, providers and prescribers promote and monitor the sensible use of antimicrobials to preserve their future effectiveness”. NICE cited national antibiotic charts, published by NHS Prescription Services, which show that overall antibiotic prescribing in the community in England has been steadily increasing over several years. Mark Baker, director of its Centre for Clinical Practice, said that 41.6 million antibacterial prescriptions were issued in 2013-14 at a cost to the NHS of £192 million. Furthermore, “despite considerable guidance that prescribing rates of antibiotics should be reduced”, he noted that nine out of ten GPs feel pressured to prescribe them; 97% of patients who ask for antibiotics get them. 'Open and transparent culture' Prof Baker added that the draft guidance “recognises that we need to encourage an open and transparent culture that allows health professionals to question antimicrobial prescribing practices of colleagues when these are not in line with local and national guidelines and no reason is documented”. The draft says that stewardship teams “should also be able to work with prescribers to understand the reasons for very high, increasing or very low volumes of antimicrobial prescribing as well as provide feedback and assistance to prescribers who prescribe antimicrobials outside of local guidelines where this is not justified”. Prof Baker added that the guideline also says prescribers should take time to discuss with patients “the benefits and harms of immediate antimicrobial prescribing” and alternatives such as “watchful waiting and/or delayed prescribing”. It also recommends that patients are given advice about who they should contact if they have concerns about infection after discharge from hospital. Richard Anderson, chief executive at UK wound care company Crawford Healthcare, said “NICE is right to express its concern over the rising use of antibiotic treatment” because over-prescribing is “continuing to weaken their impact as a first point of care”. He added that “as serious secondary infections grow in response to antibiotic resistance, topical treatment is becoming more and more appropriate”.
http://www.pharmatimes.com/news/nice_docs_must_keep_eye_on_antibiotic_prescribing_by_peers_971065
1911 Encyclopædia Britannica/Valves VALVES, or Pistons (Fr. pistons, cylindres; Ger. Ventile; Ital. pistoni), in music, mechanical contrivances applied to wind instruments in order to establish a connexion between the main tubing and certain supplementary lengths required for the purpose of lowering the pitch. Various devices have been tried from the days of ancient Greece and Rome to produce this effect, the earliest being the additional tubes (πλάγιαι ὸδοί) inserted into the lateral holes of the aulos and tibia in order to prolong the bore and deepen the pitch of each individual hole; these tubes were stopped by the fingers in the same manner as the holes. This device enabled the performer to change the mode or key in which he was playing, just as did the crooks many centuries later. But the resourcefulness of the ancients did not stop there. The tibiae found at Pompeii (see Aulos) had sliding bands of silver, one covering each lateral hole in the pipe; in the band were holes (sometimes one large and one small, probably for semitone and tone) corresponding with those on the pipe. By turning the band the holes could be closed, as by keys, when not required. By fixing the ὸδοί in the holes of the bands, the bore was lengthened instantly at will, and just as easily shortened again by withdrawing them; this method was more effective than the use of the crooks, and foreshadowed the valves of eighteen centuries later. The crooks, or coils of tubing inserted between the mouthpiece and the main tube in the trumpet and horn, and between the slide and the bell joint in the trombone, formed a step in this direction. Although the same principle underlies all these methods, i.e. the lengthening of the main column of air by the addition of other lengths of tubing, the valve itself constitutes a radical difference, for, the adjustment of crooks demanding time and the use of both hands, they could only be effective for the purposes of changing the key and of rendering a multiplicity of instruments unnecessary. The action of the valve being as instantaneous as that of the key, the instrument to which it was applied was at once placed on a different basis; it became a chromatic instrument capable of the most delicate modulations from key to key. The slide had already accomplished this desirable result, but as its application was limited to instruments of which the greater part of the bore was cylindrical, i.e. the trumpet and trombone, its influence on concerted musical composition could not be far-reaching. In fact it is doubtful whether the chromatic possibilities of the slide were fully realized until the end of the 18th century when, key mechanism having made some advance, it was being applied successfully to the transverse flute and to the clarinet and oboe families. In 1760 Kölbel, a Bohemian horn-player engaged in the St Petersburg Imperial Orchestra, turned his attention to this method of extending the compass of brass instruments. His experiments, followed up by Anton Weidinger of Vienna at the beginning of the 19th century, produced a trumpet with five keys and a complete chromatic compass. Halliday followed with the keyed bugle in 1810. Halary applied the principle of the keyed bugle to the bass horn in 1817, and produced the ophicleide—an ideal chromatic bass as far as technical possibilities were concerned. The horn had become a chromatic instrument through Hampel’s discovery of bouché sounds, but the defects in intonation and timbre still remained.
Such were the conditions prevailing among the wind instruments of the orchestra when the successful application of the valve to brass wind instruments by Heinrich Stölzel of Silesia caused an instantaneous revolution among makers of wind instruments. Further efforts to perfect the key system as applied to the brass wind were abandoned in favour of valves. The short space of two decades witnessed the rise of the Flügelhorns, the tubas, the saxhorns and the cornet-à-pistons; the trombone, French horn and trumpet having led the van. Sound is produced on brass wind instruments by overblowing the members of the harmonic series (see Horn). The harmonic series itself is invariable, whether obtained from a string or a column of air; the structural features of the instrument determine which members of the series it is able to produce. Although the valves of brass wind instruments vary in form and detail according to the makers, the general principles governing their action are the same for all types. The piston placed on some branch of the main tube must be so constructed that on being depressed it closes the natural windways through the main bore and opens others into the additional piston length. The piston, seated on a spring, instantly regains its normal position when the finger is removed. After the actual shape and construction of the valve and its box had been successfully evolved, it was the boring and disposition of the windways which engaged the attention of makers, whose object was to avoid complexity and sharp angles and turns in the tubing. The pitch of all tubes is determined by the length of the column of air set in vibration therein. Any variation in the length of this column of air produces a proportional variation in the pitch of the instrument. When the piston is depressed, therefore, a partition wall is removed and the column of air within the additional length of tubing representing a definite interval is added to the main column, so that the length of the sound wave is proportionally increased whether the column is vibrating as a whole (when it gives the fundamental or first note of the series) or whether it has been induced to divide into equal portions in which sound waves of equal length are simultaneously generated. The numbers under the notes of the harmonic series represent the aliquot parts into which the column of air must divide in order to produce the harmonics. The length of tubing attached to each valve is therefore calculated on the basis of the length of the main column, to give for the first piston a tone, for the second a semitone, for the third a tone and a half, and for the fourth two tones. In order to illustrate the working of the pistons, we will take as an example the bombardon or bass tuba in E♭. Depressing the second piston lowers the pitch of the instrument to D, giving it the harmonic series proper to that key; the third harmonic, which on the open tube would be B♭, now becomes A; the fifth harmonic, which was G, is now F#, and so on. The first piston on being depressed similarly transforms the E♭ bombardon into an instrument in D♭, a tone lower; the third piston, lowering the pitch 1½ tones, changes the key to C. So far the intonation of the notes produced by means of the pistons is as accurate as that of the harmonics. The variations in the length of the column of air correspond to the positions of the slide on the trombone, the first position being that of the instrument with all valves in their normal position.
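The arithmetic behind these valve-slide lengths, and behind the combination defect explained in what follows, can be sketched in a few lines of Python. This is an editorial illustration, not part of the article; the open-tube length is arbitrary and the semitone ratio assumed is the equal-tempered 2^(1/12).

# Lowering a tube of length L0 by n semitones requires a total length of
# L0 * 2**(n/12), so the valve slide must add L0 * (2**(n/12) - 1).
L0 = 100.0  # open-tube length, arbitrary units

def added_length(n_semitones, base=L0):
    return base * (2 ** (n_semitones / 12) - 1)

v1 = added_length(2)  # first piston: a tone
v2 = added_length(1)  # second piston: a semitone
v3 = added_length(3)  # third piston: a tone and a half

# Sixth position (pistons 1 and 3 together) should lower the pitch 2.5
# tones, i.e. 5 semitones, but the fixed slides were sized for the open tube:
print(v1 + v3)           # length the two slides actually supply (~31.2)
print(added_length(5))   # length a true 5-semitone drop requires (~33.5)
# The shortfall is why notes from combined valves sound a little too sharp.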
The use of the three pistons in turn gives the second, third and fourth positions. In order to obtain a complete chromatic compass there must be seven positions or different lengths of tubing available, as on the trombone, each having its proper harmonic series. On valve instruments the three other positions are obtained by means of combinations of pistons; the fifth position consists of a combination of pistons 2 and 3 (½ and 1½ tones), which would transpose our bombardon into the key of B; the sixth position consists of a combined use of pistons 1 and 3, producing a drop in pitch of 2½ tones from E♭ to B♭. In the seventh position all three pistons come into play simultaneously, lowering the pitch three tones. The intonation of the notes obtained in positions 5, 6, 7 is not so faultless as that of notes from the other positions, for the following reason: — On the bombardon in E♭ piston 1 lowers the pitch one tone to D♭; in the sixth position, when pistons 1 and 3 are used simultaneously, the third piston is no longer attached to a bombardon in E♭, on which it would produce the effect of C, but to one in D♭, on which it lowers the pitch to B♭; it is clear, therefore, that the supplementary tubing will not be quite long enough to give the correct intonation, and that the B♭ obtained as the 2nd harmonic in the sixth position will be a little too sharp, a defect which the performer corrects as best he can with his lip. The exact differences in length can be found from the table of ratios given by Victor Mahillon in La Trompette, son histoire, sa théorie, sa construction (Brussels and London, 1907), p. 38. This inherent defect of the valve system was understood and explained a few years after the invention of valves by Gottfried Weber, and the record of the successive endeavours of brass instrument makers to overcome this defect without unduly complicating the mechanism or adding greatly to the weight of the instruments constitutes the history of valve instruments. The accredited inventor and patentee of valves applied to musical instruments was Heinrich Stölzel of Pless in Silesia in 1815. The credit, however, is really due to Blümel, also a Silesian, who sold his rights to Stölzel. The first valves made by Stölzel worked in large square brass boxes and consisted of square blocks of solid brass through which the windways were bored in the same horizontal plane. A trumpet having two valves of this make is preserved in the museum of the Brussels Conservatoire (No. 1310 in catalogue). In 1825 Stölzel had improved upon this primitive valve, making it tubular and calling it Schub-Ventil: its action was lighter and more rapid than that of the original valve. Charles Sax of Brussels took up the manufacture of these valves and applied them to the cornet with two pistons. The scale of instruments with only two pistons had several gaps, and could not be strictly termed chromatic. In order to complete the scale, C. A. Müller of Mainz constructed a trumpet in the early 'thirties which not only had three valves, but also tuning-slides for all three additional lengths of tubing and key crooks, for which corresponding piston lengths could be inserted. This was, therefore, the first attempt at compensation, for which the honour is due to Germany. The early improvements and modifications of Stölzel’s invention may be briefly summed up as follows:— In 1824 John Shaw, of Glossop, invented a system of valves known as transverse spring slides, both ascending and descending, i.e.
respectively having pistons which cut off certain lengths of tubing, thereby raising the pitch, or pistons adding certain lengths, and lowering the pitch thereby. These transverse slides were afterwards improved by Schott in 1830, and became known as the Wiener Ventil, which had an enormous success on the continent of Europe, and were applied to all kinds of brass instruments. In 1827 Blümel invented the rotary valve or cylinder action known as Dreh or cylinder Ventil, a system still in use in Germany and Austria, and preferred to piston systems by many. In 1833 J. G. Moritz (who was associated with Wieprecht, inventor of the batyphone and bass tuba) made the large pistons of generous diameter known as Berliner Pumpen. In 1835 John Shaw patented a variation of the rotary valve, known as patent lever. In 1839 Périnet of Paris invented the most modern form of valve, called by his name, similar to the Schub-Ventil and Berliner Pumpen, but of a diameter between the two. In 1851 and 1852 Dr J. P. Gates made his equilateral valves adopted by Antoine Courtois for his cornets; the same clever acoustician invented a piston with four straight windways, afterwards patented by A. Sax of Paris. Various attempts to improve the windways and get rid of angularities were made by Gustave Besson in 1851, 1854 and 1855, when a system was devised having the same bore throughout the windways. This decided improvement forms the basis of the present system of the same firm. Until now efforts had mainly been directed towards the improvement of the technical construction of valves and windways. The first attempt since Müller’s (which appears to have passed unnoticed in France and England) to remedy by compensation the inherent defect of the valve system when pistons are used in combination was made in 1850, when Adolphe Sax devised a system of six pistons, one for each position, in which it was impossible to use any two pistons in combination: this system was ascending instead of descending. Gustave Besson’s registre in 1856-57 followed, providing a large horizontal piston, which, by connecting other duplicate lengths of tubing of the proper theoretical length, gave eight independent positions. In 1858 G. Besson and Girardin produced the transpositeur, in which two extra pistons when depressed automatically lengthened the slides of the three usual pistons to the required length for combination. In 1859 came the first suggestion for automatic compensation made by Charles Mandel in his book on the Instrumentation of Military Bands, p. 39. It does not appear that he put his suggestion into practice or patented it. In this ingenious system the valves were so constructed that when two or three pistons were used simultaneously the length of tubing thrown open was automatically adjusted to the correct theoretical length required. The same ingenious principle, elaborated and admirably carried out in practice, was patented by D. J. Blaikley in 1878. The working of his device differs from the action of ordinary valves only when the pistons are used in combination. The exact theoretical length is then obtained by bringing into use extra compensating lengths of tubing corresponding to the difference between the piston length for a semitone, a tone and one and a half tones on the open tube and on the tube already lengthened by means of one of the other pistons. 
The value of this invention, enhanced by the advantage of leaving the fingering unaltered, is more especially appreciated on the large brass instruments, in which correction of faulty intonation by means of the lips is more difficult to accomplish satisfactorily than on the smaller instruments. A similar device was patented in France in 1881 by Sudre. Victor Mahillon, who had been for some years at work on similar lines, did not patent his invention till 1886, when his piston régulateur was introduced: this first device was not automatic, and was shortly afterwards improved and patented as the automatic regulating pistons. A later valuable development in the history of valve systems is the enharmonic, invented by Messrs Besson & Co., in which they have perfected and simplified the principle of independent positions tried in the registre of the fifties. In the enharmonic valve system each position has its independent length of tubing theoretically accurate, which comes into play as the valves are depressed, and there is besides a tuning slide for the open notes. Finally, there is an improvement in a different direction to be chronicled, unconnected with compensation, in Rudall Carte & Co.’s system (Klussmann’s patent) of conical bore throughout the open tube and the valve slides, which by means of ingeniously combined joints and slides preserves the tone without loss of air. This system has been applied to all valve instruments, and has been found to produce a remarkable improvement in the timbre. (K. S.)
- ↑ Caecilia (Mainz, 1835), xvii. 89-91.
- ↑ See Captain G. B. Bierey in Allg. musik. Ztg. (Leipzig, 1815), p. 309, and idem for patent 1817, p. 814.
- ↑ Ibid. 1818, p. 531.
- ↑ Gottfried Weber, op. cit. p. 98.
- ↑ Fuller accounts may be derived from Captain C. R. Day, Descriptive Catalogue of Musical Instruments (London, 1891), pp. 182 seq.; Victor Mahillon, Catalogue descriptif, vol. i. 2nd ed. pp. 282 seq.; and from the pages of the Allg. musik. Ztg. (Leipzig) and Caecilia (Mainz).
https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Valves
How did western polyphony develop during this period? … this was not a vital characteristic in the music of Dufay and Josquin … trace the transmission of the new Renaissance style from its beginnings in England early in the 15th century.
The changes from the Middle Ages to the Renaissance were significant … for a discussion of developments in the arts see … during the Renaissance small Italian republics developed into despotisms.
The term "Middle Ages" seems to have been first used during the Renaissance and implied a suspension of … the traditional event marking the beginning of the early Middle Ages is the fall of … western Europe remained.
Another tremendously successful civilization developed in western Europe during the Middle Ages … also important to the survival of western Europe during the early Middle Ages were … these developments allowed.
When did the Middle Ages start? … the choice of empires as a defining characteristic of medieval studies has one other … it was not commonly recognized as a distinct geographical entity during the Middle Ages.
The early characteristic developments in western music during the Middle Ages: in the early ages of western music, plainchant (cantus planus), later to be known as Gregorian chant (codified by the pope, St Gregory the Great) …
How was society in medieval Europe organised? Europe's medieval period (also called the Middle Ages) is commonly regarded as starting in the late … mostly built during the early 13th century … sample chapter three.
Major historical eras, topics range … also known as the early Middle Ages … geography and politics: a medieval geography and politics research paper looks at the different empires that existed during the Middle Ages.
The Scientific Revolution was a series of events that marked the emergence of modern science during the early modern period, when developments … the Middle Ages and the developments in the Scientific Revolution during …
History of early medieval Europe, introduction. Early Middle Ages, ca. 500-1000: western Europe governed by a patchwork of non-urban kingdoms; this transformation began during …
The term was first used by 15th-century scholars to designate the period between their own time and the fall of the western … and social oppression; the Middle Ages are now … during late antiquity and the early …
How Islamic learning transformed western … that characterized Europe during the Middle Ages. How the Arabs Transformed Western Civilization is a 320-page treasure trove of information for the uninitiated …
A rise in illiteracy during the early Middle Ages resulted in the need for art to convey complex … it covered much of western Europe but later succumbed to the pressures of internal civil wars combined with external …
Main characteristics of literary periods. Middle Ages: the literary … the early modern period is a term initially used by historians to refer mainly to the period roughly from 1500 to 1750 in western Europe.
While other developments through the Middle Ages such as … so this article takes a look at medieval writing in the Latin alphabet … the occupation and presence of Roman rule in the early centuries of the medieval period.
Start studying Middle Ages / Renaissance: learn vocabulary … the basic scales of western music during the Middle Ages were the church modes.
A rapid overview of early medieval Christianity from the Council of Chalcedon in 451 to … the former western empire was ruled by a series … Europe remained a Christian continent during the Middle Ages.
Kids learn about art and literature during the Middle Ages and medieval times: paintings … many of the artists from the early Middle Ages are unknown … entertainment and music, the king's court, major events, the Black …
And for some extra zing in the essay: separation of monasticism … to take an active role in the … during the early Middle Ages most education took place in … what is the role of the church in the Middle Ages?
http://mvcourseworkbpne.northviewtech.us/the-early-characteristic-developments-in-western-music-during-the-middle-ages-essay.html
I got to play a bit more ambient guitar this weekend, and have some more fun with my Live effects box/looper setup. One effect I get a lot of mileage out of in this setup is the Grain Delay. I use several of them for a few different purposes, one of which is pitch shifting for generating low bass notes and high harmonized melodies. This is no super clean Eventide-style pitch shifting. It creates a noticeable pulsation in the signal and has an overall quirky sound to it that I like. Pitch is set to the number of semitones you want to transpose the signal. The above setting is for a perfect fifth up. Delay Time is set to 1ms to pitch shift the signal without noticeably delaying it. Feedback can be turned up to add additional shifts. In this case, turning it up will generate additional perfect fifths above the shifted signal. Once you’ve got a sound that you like, try putting a standard delay (like the Filter Delay) after the shifted signal to really space it up a notch…and make sure to try it out on drums and other instruments you might not think of as candidates for pitch shifting.
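For the curious, the core idea behind grain-based pitch shifting (resample short windowed grains, then overlap-add them at the original rate, which is also what produces that characteristic pulsation) can be sketched in a few lines of numpy. This is a rough illustration of the general technique, not Ableton's actual Grain Delay algorithm, and the function name and parameters are my own.

import numpy as np

def grain_pitch_shift(x, semitones, sr=44100, grain_ms=50, overlap=0.5):
    """Naive granular pitch shifter: read short grains at a faster (or
    slower) rate, then overlap-add them at the original rate."""
    ratio = 2 ** (semitones / 12)        # playback-rate change per grain
    glen = int(sr * grain_ms / 1000)     # grain length in samples
    hop = int(glen * (1 - overlap))      # spacing between grain starts
    win = np.hanning(glen)               # fade each grain in and out
    out = np.zeros(len(x))
    for pos in range(0, len(x) - int(glen * ratio) - 1, hop):
        src = pos + np.arange(glen) * ratio           # resampled read points
        grain = np.interp(src, np.arange(len(x)), x)  # one pitch-shifted grain
        end = min(pos + glen, len(out))
        out[pos:end] += (grain * win)[:end - pos]
    return out

# A 440 Hz sine shifted up a perfect fifth (7 semitones, to roughly 659 Hz).
sr = 44100
t = np.arange(sr) / sr
shifted = grain_pitch_shift(np.sin(2 * np.pi * 440 * t), semitones=7, sr=sr)

The very short 1 ms delay setting in the post plays the same role as the small grain spacing here: the grains land almost where they started, so you hear the transposition rather than an echo.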
https://hobo-tech.com/technologies/livetips/grain-delay-as-pitch-shifter/
April 3, 2022 - May 6, 2022 This month’s exhibition, “Saturated Euphoria”, showcases a spectrum of colorful one-of-a-kind artworks that inspire joy and delight through the lens of color. Colors are an integral part of the art world and their impact on the human perception of an artwork is irrefutable. Color has the energy to influence both emotions and cognitive processes. Experience, memories, and cultural differences influence color perception. While the biological capacity for perception is identical all over the world, the meanings and associations thereby evoked can sometimes differ greatly. Emilio Rama's original artworks are characterized by the use of elements and references from pop culture with a critical eye on consumption and entertainment. The graphic composition of his paintings reflects the fragility of our environment by presenting nature as paper figures simulating natural objects. US Navy veteran Randy Morales fuses nostalgia and graphic expressionism within his street-pop artworks. The choice of subjects within his artworks is strongly influenced by his experience growing up in the 90s. Carrying on the theme of feeling like an outsider much of his life, Morales focuses on characters that he would consider less “obvious” to the masses but more emotionally evocative to those who recognize them. Darwin Estacio Martinez’s work uses a universally understood visual language to convey unfinished stories, like still images taken from a movie. This lack of context is also what allows the viewer to connect with the work and apply the narratives that resonate most with them as individuals. While Lindsey McCord’s figures tend to have a neutral expression, the bold colors and patterns of her work express the joy and celebration of fashion. Since she made her artworks available in 2020, she has been steadily amassing a following for her works as well as collectors across the world. Jason DeMeo focuses on exploring a concept that he has coined Synthesism™. It stems from the definition of the word synthesis: “the combining of often diverse elements into a coherent whole.” This theme of blending, combining, and remixing in many disciplines, especially music, architecture, and even culture, leading to the growth of community, is captured in his work. Clara Berta builds her colorful artwork from several layers of texture paste, mixed media, and acrylic paint; she works and re-works her canvases, layering textures to give added dimension. Her dynamic and highly textural abstract works have been exhibited across the United States and collected worldwide.
https://www.artspacewarehouse.com/en/news_detail-122
Today we will talk about another way to replace the color of an object, namely the Replace Color command (Image – Adjustments – Replace Color). Consider the dialog box. At the top is the Selection area, with the color-range controls. Using the eyedropper on the picture, you sample the color that needs to be changed. The eyedropper with a plus adds to the selection; the eyedropper with a minus subtracts from it. The Fuzziness slider determines the boundaries within which shades of the color will be changed. At the maximum setting, almost all shades of the selected color in the image fall under the replacement. At the minimum setting, only those pixels that exactly match the sample are included. CS4 has an additional option, Localized Color Clusters, designed to identify areas of the same color. Using it allows you to select the color in the image more accurately. In the lower part of the Replace Color dialog is the Replacement area, with Hue/Saturation controls, with which, as the names suggest, the replacement color and shade are chosen. Consider replacing a color in a specific example. Step 1. Open the image in Photoshop. Create a duplicate of the main layer right away. Step 2. Go to Image – Adjustments – Replace Color. I want to change the color of the shirt. Step 3. If you have CS4, tick the Localized Color Clusters box. I increase Fuzziness to the maximum value. With the eyedropper I mark the area on the image. Next, I choose the color to change to: I click on the colored box labelled Result and select the shade I need. It can be seen that in the shadow areas the color is not sufficiently selected. I choose the “+” eyedropper and click on the folds of the T-shirt. The T-shirt is now completely recolored, but unneeded areas, for example the face, have been colored too. I choose the “−” eyedropper and click on the face. As you can see in the image, the lips and ears remain colored; this defect can be corrected with an eraser. This method works best on contrasting images, and on images where few colors are related to the color being changed. And finally, another tip: if the image contains several areas of the same color and you need to change only one of them, then before using the Replace Color command you should select the area requiring the color replacement. Any selection tool can help you with this.
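For readers who prefer to script this kind of edit, here is a rough programmatic analogue of the Replace Color workflow using Pillow and numpy. It approximates the general idea only (select pixels near a sampled color within a fuzziness threshold, then rewrite their hue while keeping saturation and lightness); it is not Photoshop's actual algorithm, and the function and parameter names are my own.

from PIL import Image
import numpy as np

def replace_color(img, sample_rgb, new_hue_deg, fuzziness=60):
    """Select pixels whose color is within `fuzziness` of the sampled
    color (an analogue of the Fuzziness slider), then rewrite their hue."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.int16)
    hsv = np.asarray(img.convert("HSV"), dtype=np.uint8).copy()
    # Euclidean RGB distance to the sampled color, per pixel.
    dist = np.linalg.norm(rgb - np.asarray(sample_rgb, dtype=np.int16), axis=-1)
    mask = dist < fuzziness
    hsv[..., 0][mask] = int(new_hue_deg / 360 * 255)  # Pillow hue is 0-255
    return Image.fromarray(hsv, "HSV").convert("RGB")

# Hypothetical usage: recolor a red shirt (sampled at RGB 200,30,40) to blue.
# result = replace_color(Image.open("shirt.jpg"), (200, 30, 40), 230)

As with the dialog itself, a distance threshold works best on contrasting images; where related colors abound, you would first restrict the mask to a selected region.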
https://photobecket.com/work-basics/replace-color.html
The 1840 Census listed the town or district and county of residence, name of head of household, number of free white males and females in age categories: 0 to 5, 5 to 10, 10 to 15, 15 to 20, 20 to 30, 30 to 40, 40 to 50, 50 to 60, 60 to 70, 70 to 80, 80 to 90, 90 to 100, over 100; the name of a slave owner and the number of slaves owned by that person; the number of male and female slaves by age categories; the number of foreigners (not naturalized) in a household; and the number of deaf, dumb and blind persons within a household. Additionally, the 1840 census asked for the first time the ages of revolutionary war pensioners, as well as the number of persons attending school. The official enumeration day was 1 June 1840.
“Connect to Innovate” is a new white paper on the opportunities and challenges for transformation in the UK water sector, leveraging emerging technologies and data insights.
Three organisational capabilities industry leaders will need to help navigate the business challenges caused by COVID-19.
Annually, CGI leaders around the world meet face-to-face with business and IT executives to gather their perspectives on the trends affecting their enterprises, including business and IT priorities, IT spending, budgets and investment plans. In 2020, we conducted in-person interviews...
Grasping the opportunity: can water companies harness Ofwat’s £200 million innovation fund to transform the sector? Read the exclusive research report by Utility Week in collaboration with CGI.
The energy system must deal with new market dynamics, such as the rise of renewable energy, new and changing regulatory requirements, and evolving customer demands and expectations. A thought leadership paper from CGI and New Power.
Britain’s electricity system is in transition, but flexibility still largely comes from dispatchable thermal generation. As that is replaced by inflexible, intermittent renewables, alternate sources of flexibility must be established.
The introduction of smart technology could be a real game-changer for water companies and water customers alike. Smart technology presents unprecedented opportunities: to deliver efficiencies in the operation of networks; to improve customer service levels; and to reduce water bills...
‘Embracing Flexibility’ identifies greater differences in how the sector perceives what has become known as the ‘smart, flexible energy system’, its challenges and benefits. A new white paper exploring the lessons for the future of the energy sector.
https://www.cgi.com/uk/en-gb/mediacentre/utilities/white-papers
Brain damage is said to occur when an injury causes the destruction of human brain cells. In babies, it is commonly caused by a lack of oxygen to the brain. It is pertinent to point out, however, that a baby's delicate brain might come to harm during pregnancy itself, during the delivery process, or due to manhandling of the newborn. Engaging in any of the activities below might damage your newborn's delicate brain, and so they should be avoided.
Limiting Air Supply
Not Treating Extreme Jaundice in Good Time
There is a common misconception among parents that jaundice in a baby is a minor health issue. This is not true, as extreme jaundice, when left untreated, may put the infant at risk of developing 'kernicterus', a type of brain damage that develops due to severe cases of jaundice.
Child Abuse Leading to Head Trauma
While this may not be reported in Nigeria as frequently and in as much detail as in western countries, which take child abuse cases seriously, the fact remains that there are people in Nigerian society who do hurt babies, either knowingly or unknowingly. Assault on a child as a result of child abuse leading to an inflicted traumatic brain injury is commonly known as 'shaken baby syndrome'.
Shaken Baby Syndrome: This can happen in the following ways;
The structure of infants makes them vulnerable to particular risks and injuries from any of the actions listed above. The majority of children who experience shaken baby syndrome are not up to one year old, although it may occur in children from age 0 to 5 years. Head trauma is the leading cause of death in child abuse cases in the United States. The average age of victims is said to be between 3 and 8 months. While parents nourish and engage their children in activities that boost their intelligence, it is also important that they be careful in handling them physically, because the brain and other organs of the baby are still delicate.
http://motherhoodinstyle.net/2018/01/30/activities-that-can-damage-your-babys-delicate-brain/
God is love. God lives in me. Therefore love lives in me.
God is love. I live in love. Therefore God lives in me.
God lives in me. I live in the world. Therefore God lives in the world.
The evangelists Paul and John both used this kind of reasoning when arriving at teachable conclusions in their letters. The one that inspired the above examples is 1 John 4:16-17, "God is love. Whoever lives in love lives in God, and God in him. In this way, love is made complete among us so that we will have confidence on the day of judgment, because in this world we are like him." God is complete love. Therefore complete love is in the world because we are like him. Most days I think the amount of love in me varies on a sliding scale with complete on one end and none on the other. Or, as you see in surveys today, "Please rate on a scale of 1 - 10 where 10 is complete love and 1 is no love. . . ." And what does this rating indicate about what we really believe? Do we believe we only have a little bit of love in us, so we must only have a little bit of God in us? Can God be portioned out like that? It's a ridiculous thought. If we have God in us, it would seem that we have the complete God in us and therefore complete love in us. (God is not incomplete, not imperfect in any way.) We, however, may be portioning out God's love, incompletely using it, not allowing God's complete love to work in the world. And the world is the poorer for it. For Reflection: Where am I on that scale of 1 - 10? Can I let God's love shine through me more completely today? Let us pray. You, Lord, are complete love. You are in me and I am in you. Help me to let your love shine through more completely today.
Alice: I started this website and blog on May 1, 2012. I am a Catholic who has been in ministry for many years. I first developed what I would call a close relationship with Jesus in the early 1970s. Ever since then I have been praying with people for healing and other needs. It is because I have seen so many of these prayers answered that I am so bold as to offer to pray for you individually through this website and phone line.
http://www.mannaprayerministries.com/manna-blog/on-a-scale-of-1-10
Sustainable Agriculture in Print
1 edition of Sustainable Agriculture in Print found in the catalog. Published 1996 by The Library in Beltsville, Md. Written in English.
Edition Notes
Statement: compiled by AFSIC staff and volunteer, Alternative Farming Systems Information Center, National Agricultural Library, Agricultural Research Service, U.S. Dept. of Agriculture.
Series: Special reference briefs series, no. SRB 96-04 (NAL-SRB 96-04).
Contributions: Alternative Farming Systems Information Center (U.S.)
LC Classifications: Z5074.S92 S96 1996
Pagination: iv, 27 p.
Number of Pages: 27
Open Library: OL605170M
LC Control Number: 96199894
OCLC/WorldCat: 35324168
Currently out of print; it has been revised as an ebook. See our Sustainable Agriculture ebook, which can be ordered through Landlinks Press. Author: John Mason. Edition: 2nd. Format: Softcover. This book is about foreseeing and understanding problems and addressing them before it is too late.
For sustainable agricultural finance, loans and fees predominate. Some traditional tools, such as government and corporate subsidies, are not as relevant for sustainable agriculture, since by definition the goal is to have closed loops, operating without regular outside inputs.
The Building a Sustainable Business publication was conceived by a planning team for the Minnesota Institute for Sustainable Agriculture (MISA) to address the evolving business planning needs of beginning and experienced rural entrepreneurs. If you already have an older copy of the book, we suggest you just print off the new version.
NIFA "promotes sustainable agriculture through national program leadership and funding for research and extension," "offers competitive grants programs and a professional development program," and "collaborates with other federal agencies." The site includes the legal definition of sustainable agriculture and a list of common farm and ranch practices.
∙ WTO rules and disciplines provide ample policy space for pursuing sustainable agricultural policies, but they must be improved to avoid protectionist misuse.
∙ The sustainability challenges in least developed countries (LDCs) are high non-tariff trade barriers and lack of investment in agriculture.
Sustainable Agriculture in Print: Current Books, prepared annually by the Alternative Farming Systems Information Center (AFSIC), first appeared in response to requests for assistance in identifying the rapidly growing body of literature on sustainable or alternative agriculture.
Sustainable Agriculture in Print: Current Books, by Jane Potter Gates; Alternative Farming Systems Information Center, Agricultural Research Service, U.S. Dept. of Agriculture, National Agricultural Library.
Sustainable Agriculture in Print: Current Books. Recent Acquisitions of the National Agricultural Library, December. Addendum to Special Reference Briefs Series no. SRB. Compiled by staff of the Alternative Farming Systems Information Center, National Agricultural Library.
Prepared by the Alternative Farming Systems Information Center (AFSIC) staff and volunteers, this annotated bibliography provides a list of 85 recently published books pertaining to sustainable agriculture. AFSIC focuses on alternative farming systems (e.g., sustainable, low-input, regenerative, biodynamic, and organic) that maintain agricultural productivity and profitability while protecting natural resources.
Sustainable Agriculture: Advances in Plant Metabolome and Microbiome focuses on the advancement of basic and applied research related to plant-microbe interaction and its implementation in progressive agricultural sustainability. The book also highlights the developing area of bioinformatics tools for the interpretation of the metabolome, and the integration of statistical and bioinformatics tools.
Sustainable agriculture is an alternative for solving fundamental and applied issues related to food production in an ecological way. While conventional agriculture is driven almost solely by productivity and profit, sustainable agriculture integrates biological, chemical, physical, and ecological considerations.
This volume is a ready reference on sustainable agriculture and reinforces the understanding needed to develop environmentally sustainable and profitable food production systems. It describes the ecological sustainability of farming systems and presents innovations for improving efficiency in the use of resources.
A Guide to Developing a Business Plan for Farms and Rural Businesses. Bringing the business planning process alive, Building a Sustainable Business: A Guide to Developing a Business Plan for Farms and Rural Businesses helps today's alternative and sustainable agriculture entrepreneurs transform farm-grown inspiration into profitable enterprises.
Best Sellers in Sustainable Agriculture: The Backyard Homestead: Produce all the food you need on just a quarter acre.
Sustainable agriculture in print: current books. Responsibility: compiled by AFSIC staff and volunteer, Alternative Farming Systems Information Center, National Agricultural Library, Agricultural Research Service, U.S. Dept. of Agriculture.
Sustainable Food and Agriculture: An Integrated Approach is the first book to look at the imminent threats to sustainable food security through a cross-sectoral lens. The world faces food supply challenges posed by the declining growth rate of agricultural productivity, the accelerated deterioration of the quantity and quality of natural resources that underpin agricultural production, and climate change.
Genre/Form: Bibliography. Additional Physical Format: Online version: Sustainable agriculture in print. Beltsville, Md.: National Agricultural Library.
Examines the problems caused by the technological revolution in farming practices and explains the long-term benefits of sustainable farming systems such as permaculture, biodynamics, organic farming, agroforestry, conservation tillage, and integrated hydroculture.
Sustainable agricultural development and the conservation and sustainable use of biodiversity for food and agriculture. In fact, the First Session of the FAO Conference identified the need for fishery conservation measures, as food shortages in Europe and elsewhere after the Second World War had stimulated demand.
This book deals with a rapidly growing field aiming at producing food and energy in a sustainable way for humans and their children. It is a discipline that addresses current issues: climate change, increasing food and fuel prices, poor-nation starvation, rich-nation obesity, water pollution, soil erosion, fertility loss, pest control, and biodiversity depletion.
A unique look at how the adoption of sustainable farming methods is being pursued throughout the world. This comprehensive book provides clear insight into research and education needs and the many points of view that come to bear on the issue of sustainability. Essential for agricultural leaders in research, education, conservation, and policy making, and anyone else interested in creating a sustainable agriculture.
The result is An Agricultural Testament. It is considered the bible of organic agriculture, and it inspired the Soil Association in the UK. It is a bridge between East and West, and shows that no matter where you are, farming in nature’s ways is the only sustainable foundation for food security.
Sustainable Agriculture and the Environment in the Humid Tropics provides critically needed direction for developing strategies that both mitigate land degradation, deforestation, and biological resource losses, and help the economic status of tropical countries through promotion of sustainable agricultural practices.
The SOFA report presents evidence on the impact of climate change on agriculture and food systems, today and tomorrow. The report assesses the options to make agriculture and food systems resilient to climate change impacts, while minimizing environmental impacts. It shows that making agriculture and food systems sustainable is both possible and necessary.
See below for a selection of the latest books from the Sustainable agriculture category. Presented with a red border are the Sustainable agriculture books that have been lovingly read and reviewed by the experts at Lovereading. With expert reading recommendations made by people with a passion for books, and some unique features, Lovereading will help you find great Sustainable agriculture books.
Sustainable Agriculture: From Common Principles to Common Practice, edited by Fritz J. Häni, László Pintér and Hans R. Herren. Proceedings and outputs of the first Symposium of the International Forum on Assessing Sustainability in Agriculture (INFASA), Bern, Switzerland: A Dialogue on Sustainable Agriculture.
Sustainable Agriculture and Climate Change, by Suren(dra) Nath Kulshreshtha and Elaine E. Wheaton. Published: February. This book is a printed edition of the Special Issue Sustainable Agriculture and Climate Change that was published in Sustainability.
Sustainable agriculture: using best farming practices to grow the most food and fiber on the land for long-term economic, social and environmental success. Did you know?
(Ag Facts) Sustainable agriculture is critical in the global effort to eradicate hunger and poverty. Reducing food waste positively impacts sustainability.
https://mikunesicy.thebindyagency.com/sustainable-agriculture-in-print-book-2812mu.php
Plate Tectonics: A Scientific Revolution Unfolds covers the development of the Theory of Plate Tectonics and discusses the characteristics of this theory. The chapter opens with a discussion of Alfred Wegener’s hypothesis of continental drift, its supporting evidence, and its major criticisms. The chapter then discusses the development of the Plate Tectonic Theory and the motions and characteristics of transform, divergent and convergent boundaries. The chapter then discusses modern evidence that confirms the theory, including ocean drilling, mantle plumes, paleomagnetism, polar wandering, magnetic reversals, and seafloor spreading. The chapter ends with a discussion of how plate motion is measured and an overview of the two hypothesized mechanisms of plate motion through movements of the mantle.
CHAPTER OUTLINE
1. From Continental Drift to Plate Tectonics a. Early geology viewed the oceans and continents as very old features with fixed geographic positions b. But researchers realized that Earth’s continents are not static; instead, they gradually migrate across the globe i. Create great mountain chains where they collide ii. Create ocean basins where they split apart c. Scientific Revolution i. Reversal in scientific thought results in a very different model of processes on Earth that act to deform the crust and create major structural features such as mountains, continents, and oceans ii. Began in the 20th century with continental drift—the idea that continents were capable of movement iii. As more advanced, modern instruments came along, scientists evolved from the ideas of continental drift to the theory of plate tectonics 2. Continental Drift: An Idea Before Its Time a. Challenged the long-held assumption that the continents and ocean basins had fixed geographic positions b. Set forth by Alfred Wegener in his 1915 book, The Origin of Continents and Oceans c. Suggested that a single supercontinent (Pangea) consisting of all Earth’s landmasses once existed d. Further hypothesized that about 200 million years ago, this supercontinent began to fragment into smaller landmasses that then “drifted” to their present positions over millions of years e. Evidence i. Similarity between the coastlines on opposite sides of the Atlantic Ocean led to the hypothesis that they were once joined 1. A very precise fit when the continental shelf boundary is considered the edge of the continent ii. Identical fossil organisms had been discovered in rocks from both South America and Africa (Mesosaurus and Glossopteris) 1. Some type of land connection was needed to explain the existence of similar Mesozoic-age life forms on widely separated landmasses—no evidence of this 2. Wegener asserted that South America and Africa must have been joined during that period of Earth history iii. Rocks found in a particular region on one continent closely match in age and type those found in adjacent positions on the once adjoining continent iv. Evidence of a glacial period that dated to the late Paleozoic in southern Africa, South America, Australia, and India (near the equator) 1. A global cooling event was rejected by Wegener because during the same span of geologic time, large tropical swamps existed in several locations in the Northern Hemisphere 2. Can be explained by southern continents that were joined together and located near the South Pole 3. The Great Debate a. Main objections to Wegener’s hypothesis stemmed from his inability to identify a credible mechanism for continental drift i.
Proposed that gravitational forces of the Moon and Sun that produce Earth’s tides were also capable of gradually moving the continents across the globe ii. Also incorrectly suggested that the larger and sturdier continents broke through thinner oceanic crust, much like icebreakers cut through ice b. Most of the scientific community, particularly in North America, either categorically rejected continental drift or treated it with considerable skepticism 4. The Theory of Plate Tectonics a. New technology post-WWII gave science evidence to support some of Wegener’s ideas, and many new ideas i. The discovery of a global oceanic ridge system that winds through all of the major oceans ii. Studies conducted in the western Pacific demonstrated that earthquakes were occurring at great depths beneath deep-ocean trenches iii. Dredging of the seafloor did not bring up any oceanic crust that was older than 180 million years iv. Sediment accumulations in the deep-ocean basins were found to be thin, not the thousands of meters that were predicted b. Led to the Theory of Plate Tectonics i. The crust and the uppermost, and therefore coolest, part of the mantle constitute Earth’s strong outer layer, known as the lithosphere 1. Lithosphere varies in thickness depending on whether it is oceanic lithosphere or continental lithosphere a. Oceanic lithosphere is thickest (about 100 km) in deep-ocean basins, but thinner along the ridge system b. Continental lithosphere averages 150 km thick, and may extend to 200 km beneath stable continental interiors 2. The composition of both the oceanic and continental crusts affects their respective densities a. Oceanic crust is composed of rocks having a mafic (basaltic) composition = higher density b. Continental crust is composed largely of felsic (granitic) rocks = lower density ii. The asthenosphere (asthenos = weak, sphere = a ball) is a hotter, weaker region in the mantle that lies below the lithosphere 1. Temperature and pressure put rocks very near their melting temperature, which causes rocks in the asthenosphere to respond to forces by flowing 2. The relatively cool and rigid lithosphere tends to respond to forces acting on it by bending or breaking, but not flowing 3. Earth’s rigid outer shell is effectively detached from the asthenosphere, which allows these layers to move independently c. The lithosphere is broken into about two dozen segments of irregular size and shape called plates that are in constant motion with respect to one another i. Seven major plates: North American, South American, Pacific, African, Eurasian, Australian-Indian, and Antarctic plates ii. Intermediate-sized plates: Caribbean, Nazca, Philippine, Arabian, Cocos, Scotia, and Juan de Fuca plates iii. None of the plates are defined entirely by the margins of a single continent or ocean basin d. Plates move as somewhat rigid units relative to all other plates i. Most major interactions among them (and, therefore, most deformation) occur along their boundaries ii. Plates are bounded by three distinct types of boundaries, which are differentiated by the type of movement they exhibit 1. Divergent plate boundaries (constructive margins)—where two plates move apart, resulting in upwelling of hot material from the mantle to create new seafloor 2. Convergent plate boundaries (destructive margins)—where two plates move together, resulting in oceanic lithosphere descending beneath an overriding plate, eventually to be reabsorbed into the mantle, or possibly in the collision of two continental blocks to create a mountain belt 3.
Transform plate boundaries (conservative margins)—where two plates grind past each other without the production or destruction of lithosphere iii. Divergent and convergent plate boundaries each account for about 40 percent of all plate boundaries iv. Transform faults account for the remaining 20 percent. 5. Divergent Plate Boundaries and Seafloor Spreading a. Characteristics: i. Most divergent plate boundaries are located along the crests of oceanic ridges ii. Constructive plate margins—this is where new ocean floor is generated iii. Two adjacent plates move away from each other, producing long, narrow fractures in the ocean crust iv. Hot rock from the mantle below migrates upward to fill the voids left as the crust is being ripped apart v. Molten material gradually cools to produce new slivers of seafloor b. Oceanic Ridges and Seafloor Spreading i. Ridges: elevated areas of the seafloor characterized by high heat flow and volcanism 1. Including the Mid-Atlantic Ridge, East Pacific Rise, and Mid-Indian Ridge 2. 2–3 km high, 1000–4000 km wide 3. Along the crest of some ridge segments is a deep canyon-like structure called a rift valley ii. Movement at ridges is called seafloor spreading 1. Typical rates of spreading average around 5 centimeters (2 inches) per year a. Slower along the Mid-Atlantic Ridge; higher along the East Pacific Rise 2. Generated all of Earth’s ocean basins within the past 200 million years iii. Creation of ridges at areas of seafloor spreading 1. Newly created oceanic lithosphere is hot, making it less dense than cooler rocks found away from the ridge axis a. New lithosphere forms and is slowly yet continually displaced away from the zone of upwelling b. Begins to cool and contract, thereby increasing in density (thermal contraction) c. It takes about 80 million years for the temperature of oceanic lithosphere to stabilize and contraction to cease 2. As the plate moves away from the ridge, cooling of the underlying asthenosphere causes it to become increasingly more rigid a. Oceanic lithosphere is generated by cooling of the asthenosphere from the top down b. The thickness of oceanic lithosphere is age-dependent; that is, the older (cooler) it is, the greater its thickness c. Oceanic lithosphere that exceeds 80 million years in age is about 100 kilometers thick: approximately its maximum thickness c. Continental Rifting i. Within a continent, divergent boundaries can cause the landmass to split into two or more smaller segments separated by an ocean basin 1. Begins when plate motions produce opposing (tensional) forces that pull and stretch the lithosphere 2. Promotes mantle upwelling and broad upwarping of the overlying lithosphere as it is stretched and thinned 3. Lithosphere is thinned, while the brittle crustal rocks break into large blocks 4. The broken crustal fragments sink, generating an elongated depression called a continental rift 5. A modern example of an active continental rift is the East African Rift 6. Convergent Plate Boundaries and Subduction a. Total Earth surface area remains constant over time; this means that a balance is maintained between production and destruction of lithosphere i. A balance is maintained because older, denser portions of oceanic lithosphere descend into the mantle at a rate equal to seafloor production b. Convergent plate boundaries are where two plates move toward each other and the leading edge of one is bent downward, as it slides beneath the other c.
Also called subduction zones, because they are sites where lithosphere is descending (being subducted) into the mantle i. Subduction occurs because the density of the descending lithospheric plate is greater than the density of the underlying asthenosphere ii. Old oceanic lithosphere is about 2 percent more dense than the underlying asthenosphere, which causes it to subduct iii. Continental lithosphere is less dense and resists subduction d. Deep-ocean trenches are the surface manifestations produced as oceanic lithosphere descends into the mantle i. Large linear depressions that are remarkably long and deep ii. Example: the Peru–Chile trench along the west coast of South America e. The angle at which oceanic lithosphere subducts depends largely on its age and, therefore, its density i. When seafloor spreading occurs near a subduction zone, the subducting lithosphere is young and buoyant, which results in a low angle of descent ii. Older, very dense slabs of oceanic lithosphere typically plunge into the mantle at angles approaching 90 degrees f. Types of convergence: i. Oceanic–Continental Convergence: oceanic crust converges with continental crust 1. The buoyant continental block remains “floating”; the denser oceanic slab sinks into the mantle 2. When a descending oceanic slab reaches a depth of about 100 kilometers (60 miles), melting is triggered within the wedge of hot asthenosphere that lies above it a. Water contained in the descending plate acts as a flux: “wet” rock in a high-pressure environment melts at substantially lower temperatures than does “dry” rock of the same composition b. Partial melting: the wedge of mantle rock is sufficiently hot that the introduction of water from the slab below leads to some melting 3. Being less dense than the surrounding mantle, this hot mobile material gradually rises toward the surface 4. Examples include the Andes of South America and the Cascade Range of North America ii. Oceanic–Oceanic Convergence: oceanic crust converges with oceanic crust 1. One slab descends beneath the other, initiating volcanic activity by the same mechanism that operates at all subduction zones 2. Volcanoes grow up from the ocean floor, rather than upon a continental platform 3. Will eventually build a chain of volcanic structures large enough to emerge as islands = volcanic island arc 4. Examples include the Aleutian, Mariana, and Tonga islands iii. Continental–Continental Convergence: continental crust converges with continental crust 1. The buoyancy of continental material inhibits it from being subducted 2. Causes a collision between two converging continental fragments 3. Folds and deforms the accumulation of sediments and sedimentary rocks along the continental margins 4. The result is the formation of a new mountain belt composed of deformed sedimentary and metamorphic rocks that often contain slivers of oceanic crust 5. An example is the Himalayas, created by the collision of the Indian and Asian continental landmasses 7. Transform Plate Boundaries a. Where plates slide horizontally past one another without the production or destruction of lithosphere b. Most transform faults are found on the ocean floor where they offset segments of the oceanic ridge system c. Transform faults are part of prominent linear breaks in the seafloor known as fracture zones i. Include both the active transform faults as well as their inactive extensions into the plate interior ii. Active transform faults lie only between the two offset ridge segments and are generally defined by weak, shallow earthquakes iii.
Trend of these fracture zones roughly parallels the direction of plate motion at the time of their formation d. Transform faults also transport oceanic crust created at ridge crests to a site of destruction e. Most transform fault boundaries are located within the ocean basins; however, a few cut through continental crust i. An example is the San Andreas Fault of North America—the Pacific plate is moving toward the northwest, past the North American plate 8. Testing the Plate Tectonics Model a. Ocean Drilling i. The Deep Sea Drilling Project (1968–1983) sampled the seafloor to determine its age ii. Showed that the sediments increased in age with increasing distance from the ridge 1. Supported the seafloor-spreading hypothesis: the youngest crust would be found at the ridge axis (where it is produced), and the oldest crust would be found adjacent to the continents iii. Thickness of ocean-floor sediments provided additional verification of seafloor spreading 1. Sediments are almost entirely absent on the ridge crest, and sediment thickness increases with increasing distance from the ridge iv. Reinforced the idea that the ocean basins are geologically young because no seafloor with an age in excess of 180 million years was found b. Mantle Plumes and Hot Spots i. Mapping the volcanic islands and seamounts (submarine volcanoes) from the Hawaiian Islands to the Midway Islands revealed several linear chains of volcanic structures ii. Radiometric dating of this linear structure showed that the volcanoes increase in age with increasing distance from the “big island” of Hawaii 1. The youngest volcanic island in the chain (Hawaii) rose from the ocean floor less than one million years ago, Midway Island is 27 million years old, and Detroit Seamount, near the Aleutian trench, is about 80 million years old iii. A cylindrically shaped upwelling of hot rock, called a mantle plume, is located beneath the island of Hawaii 1. As the hot, rocky plume ascends through the mantle, the confining pressure drops, which triggers partial melting 2. The surface manifestation of this activity is a hot spot, an area of volcanism, high heat flow, and crustal uplifting that is a few hundred kilometers across 3. As the Pacific plate moved over a hot spot, a chain of volcanic structures known as a hot-spot track was built iv. Supports the idea that plates move over the asthenosphere, which means that the age of each volcano indicates how much time has elapsed since it was situated over the mantle plume c. Paleomagnetism i. Rocks that formed thousands or millions of years ago contain a “record” of the direction of the magnetic poles at the time of their formation 1. Earth’s magnetic field has a north and south magnetic pole that today roughly align with the geographic poles 2. Some naturally occurring minerals are magnetic and are influenced by Earth’s magnetic field (e.g., magnetite) 3. As lava cools, these iron-rich grains become magnetized and align themselves in the direction of the existing magnetic lines of force 4. They act like a compass needle because they “point” toward the position of the magnetic poles at the time of their formation ii. Apparent Polar Wandering 1. The magnetic alignment of iron-rich minerals in lava flows of different ages indicates that the position of the paleomagnetic poles has changed through time a. The magnetic North Pole has gradually wandered from a location near Hawaii northeastward to its present location over the Arctic Ocean b.
Evidence that either the magnetic North Pole had migrated, an idea known as polar wandering, or that the poles remained in place and the continents had drifted beneath them 2. If the magnetic poles remain stationary, their apparent movement is produced by continental drift. a. Studies of paleomagnetism show that the positions of the magnetic poles correspond closely to the positions of the geographic poles b. When North America and Europe are moved back to their predrift positions, their apparent wandering paths coincide c. Evidence that North America and Europe were once joined and moved relative to the poles as part of the same continent iii. Magnetic Reversals and Seafloor Spreading 1. Over periods of hundreds of thousands of years, Earth’s magnetic field periodically reverses polarity a. Lava solidifying during a period of reverse polarity will be magnetized with the polarity opposite that of volcanic rocks being formed today i. Normal polarity—rocks with same polarity as present magnetic field ii. Reverse polarity—rocks with opposite polarity of present magnetic field b. Magnetic time scale established by radiometric dating techniques on magnetic polarity of hundreds of lava flows 2. Magnetic surveys of the ocean showed alternating stripes of high- and low-intensity magnetism that represent the polarity of the magnetism of Earth a. Magma along a mid-ocean ridge “records” the current polarity of Earth b. As the two slabs move away from the ridge, they build a pattern of normal and reverse magnetic stripes 3. Magnetic stripes exhibit a remarkable degree of symmetry in relation to the ridge axis, thus supporting seafloor spreading 9. How Is Plate Motion Measured? a. Geologic Evidence i. An average rate of plate motion can be calculated from the radiometric age of an oceanic crust sample and its distance from the ridge axis where it was generated ii. Combine age data with paleomagnetism data to get maps of age of the seafloor iii. Show us that the rate of seafloor spreading in the Pacific basin must be more than three times greater than in the Atlantic iv. Fracture zones are inactive extensions of transform faults, and therefore preserve a record of past directions of plate motion b. Measuring Plate Motion From Space i. Data from GPS (Global Positioning System) establish the rate of movement of plates using repeated measurements over many years ii. GPS devices have also been useful in establishing small-scale crustal movements such as those that occur along faults in regions known to be tectonically active c. How Does Plate Motion Affect Plate Boundaries? i. Because of plate motion, the size and shape of individual plates are constantly changing ii. Another consequence of plate motion is that boundaries also migrate iii. Plate boundaries can also be created or destroyed in response to changes in the forces acting on the lithosphere 10. What Drives Plate Motions? a. Some type of convection, where hot mantle rocks rise and cold, dense oceanic lithosphere sinks is the ultimate driver of plate tectonics b. Forces that drive plate motion i. Slab pull: subduction of cold, dense slabs of oceanic lithosphere is a major driving force of plate motion ii. Ridge push: gravity-driven mechanism results from the elevated position of the oceanic ridge, which causes slabs of lithosphere to “slide” down the flanks of the ridge iii. Ridge push appears to contribute far less to plate motions than slab pull iv. Mantle drag 1. 
Enhances plate motion when flow in the asthenosphere is moving at a velocity that exceeds that of the plate 2. Resists plate motion when the asthenosphere is moving more slowly than the plate, or in the opposite direction c. Models of Plate-Mantle Convection i. Convective flow is the underlying driving force for plate movement ii. Mantle convection and plate tectonics are part of the same system iii. Convective flow in the mantle is a major mechanism for transporting heat away from Earth’s interior iv. Two models: 1. Whole-Mantle Convection (Plume Model) a. Cold oceanic lithosphere sinks to great depths and stirs the entire mantle b. Suggests that the ultimate burial ground for subducting slabs is the core-mantle boundary c. Downward flow is balanced by buoyantly rising mantle plumes that transport hot material toward the surface d. Two kinds of plumes: narrow tubes and giant upwellings 2. Layer Cake Model a. Mantle has two zones of convection—a thin, dynamic layer in the upper mantle and a thick, larger, sluggish one located below b. Downward convective flow is driven by the subduction of cold, dense oceanic lithosphere c. These subducting slabs penetrate to depths of no more than 1000 kilometers (620 miles) d. The lower mantle is sluggish and does not provide material to support volcanism at the surface e. Very little mixing between these two layers is thought to occur.
LEARNING OBJECTIVES/FOCUS ON CONCEPTS
Each statement represents the primary learning objective for the corresponding major heading within the chapter. After completing the chapter, students should be able to:
2.1 Discuss the view that most geologists held prior to the 1960s regarding the geographic positions of the ocean basins and continents.
2.2 List and explain the evidence presented by Wegener to support his continental drift hypothesis.
2.3 Discuss the two main objections to the continental drift hypothesis.
2.4 List the major differences between Earth’s lithosphere and its asthenosphere, and explain the importance of each in the plate tectonic theory.
2.5 Sketch and describe the movement along a divergent plate boundary that results in the formation of new oceanic lithosphere.
2.6 Compare and contrast the three types of convergent plate boundaries and name a location where each type can be found.
2.7 Describe the relative motion along a transform fault boundary and be able to locate several examples on a plate boundary map.
2.8 List the evidence used to support the plate tectonics theory and briefly describe each.
2.9 Describe two methods researchers employ to measure relative plate motion.
2.10 Summarize what is meant by plate-mantle convection and explain two of the primary driving forces for plate motion.
TEACHING STRATEGIES
Muddiest Point: In the last 5 minutes of class, have students jot down the points that were most confusing from the day’s lecture, and what questions they still have. Or provide a “self-guided” muddiest point exercise, using the Clicker PowerPoints and website questions for this chapter. Review the answers, and cover the unclear topics in a podcast to the class or at the beginning of the next lecture. The following are fundamental ideas from this chapter that students have the most difficulty grasping and activities to help address these misconceptions and guide learning.
A. Movement of Plates
• Students have many misconceptions about plate motion.
These may include: only continents move, oceans are stationary, plate movement is imperceptible on a human timeframe, the size of Earth is gradually increasing over time because of seafloor spreading, plate tectonics started with the breakup of Pangea, and tectonic plates drift in oceans of melted magma just below the surface of Earth. As you discuss plate tectonics, integrate imagery, graphics, and animations to help students visualize the processes involved (see Teacher Resources in the following section).
• Isostasy Animation http://www.geo.cornell.edu/hawaii/220/PRI/isostasy.html
i. This interactive animation allows students to visualize how continental and oceanic crust “float” on the mantle. In the menu along the bottom, enter a liquid density of 3.3 g/cm3, the average density of the asthenosphere—this will stay the same. Then, enter the thickness and density of oceanic crust (5 kilometers thick, density of 3.0 g/cm3). Record the height of the block above the liquid—you will have to subtract the block height from the block root value. Do the same for continental crust (50 kilometers thick, density of 2.7 g/cm3).
ii. Then, ask students: Which sits higher above the liquid surface? Which sits lower? Why? Use this as a lead-in to tectonics—if plates can move up and down (buoyancy) in the asthenosphere, might they also move back and forth? Why? This is plate tectonics—plates moving laterally across the asthenosphere.
• Hot Spot Model Activity
i. (Supplies: metal pan, spray bottle of water, about 1 cup of sugar, a candle or tealight, lighter/matches). Spray a disposable metal pan with water, then add a thin layer of sugar. Have one student hold the lit candle stationary beneath the pan of sugar. Have another student slowly move the pan in one direction over the candle. Students should see “islands” of molten sugar form on the surface as the pan (plate) moves over the candle (hotspot).
ii. (Supplies: blank overhead and overhead pens) One student is the “hotspot” (pen), another is the “plate” (overhead). Ask the “plate” student to move the “plate” to the NW (like the Pacific plate) while the “hotspot” student holds the pen stationary on the overhead. The result is a linear chain created on the moving plate.
• Tracking Tectonic Plates Activity http://serc.carleton.edu/NAGTWorkshops/intro/activities/28504.html
• Subduction Zone Earthquake Activity http://serc.carleton.edu/introgeo/demonstrations/examples/subduction_zone_earthquakes.html
• Nannofossils Reveal Seafloor Spreading Truth Activity http://www.oceanleadership.org/wp-content/uploads/2009/08/Nannofossils.pdf
• You Try It: Plate Tectonics http://www.pbs.org/wgbh/aso/tryit/tectonics/shockwave.html
• Sea-Floor Spreading Activity http://oceanexplorer.noaa.gov/edu/learning/player/lesson02/l2la2.htm
B. Characteristics of Plates and Boundaries
• Students have difficulty understanding relationships between geologic processes and plate boundaries until they can clearly visualize and analyze their relationships.
• Discovering Plate Boundaries Activity http://plateboundary.rice.edu/intro.html
• A similar activity on plate boundaries using Google Earth: http://serc.carleton.edu/NAGTWorkshops/structure/SGT2012/activities/63925.html
• NOAA Mid-Ocean Ridge Activity http://www.montereyinstitute.org/noaa/lesson02/l2la1.htm
• NOAA Earthquakes and Plates Activity http://www.montereyinstitute.org/noaa/lesson01/l1la2.htm
C. Paleomagnetism
• The ideas of paleomagnetism are often difficult for students to grasp. Again, visualizations are key here.
• Paleomagnetism Assignment http://www.lcps.org/cms/lib4/VA01000195/Centricity/Domain/685/Paleomagnetism%20Activity.pdf
• Magnetic Reversals Activity https://www.msu.edu/~tuckeys1/highschool/earth_science/magnetic_reversals.pdf
• A Model of Seafloor Spreading Activity http://www.ucmp.berkeley.edu/fosrec/Metzger3.html or http://www.geosociety.org/educate/LessonPlans/SeaFloorSpreading.pdf
TEACHER RESOURCES
Web Resources
• This Dynamic Earth http://pubs.usgs.gov/gip/dynamic/dynamic.html
• Teaching Plate Tectonics With Illustrations http://geology.com/nsta/
• Continents on the Move www.pbs.org/wgbh/nova/ice/continents/
• GPS—Measuring Plate Motions http://www.iris.edu/hq/files/programs/education_and_outreach/aotm/14/1.GPS_Background.pdf
Animations and Interactive Maps
• This Dynamic Planet Interactive Map http://nhbarcims.si.edu/ThisDynamicPlanet/index.html
• Plate Tectonics Animations http://www.ucmp.berkeley.edu/geology/tectonics.html
• Exploring Our Interactive Planet Interactive Mapping Tool http://www.dpc.ucar.edu/VoyagerJr/intro.html
• Plate Motion Simulations http://sepuplhs.org/middle/iaes/students/simulations/sepup_plate_motion.html
• Imagery, Maps, Movies, and References on Plate Tectonics http://www.ig.utexas.edu/research/projects/plates/
Maps and Imagery
• USGS Real-Time Earthquake Map. Use this real-time map to make connections between plate boundaries and the locations of earthquakes on Earth. http://earthquake.usgs.gov/earthquakes/map/
• Global Volcanism Map. Use this map to make connections between plate boundaries and the locations of volcanoes on Earth. http://www.volcano.si.edu/world/find_regions.cfm
• Plate Tectonics Articles, Theory, Plate Diagrams, Maps, and Teaching Ideas http://geology.com/plate-tectonics/
• Imagery, Maps, Movies, and References on Plate Tectonics http://www.ig.utexas.edu/research/projects/plates/
• Plate Tectonic Movement Visualizations http://serc.carleton.edu/NAGTWorkshops/geophysics/visualizations/PTMovements.html
• GPS Time Series Map of Plate Motions http://sideshow.jpl.nasa.gov/post/series.html
ANSWERS TO QUESTIONS IN THE CHAPTER: CONCEPT CHECKS
2.1 FROM CONTINENTAL DRIFT TO PLATE TECTONICS
1. Prior to the 1960s, most geologists thought the oceans and continental landmasses were in fixed geographic positions, and had been for most of geologic time.
2. North American geologists were most opposed to the continental drift hypothesis because much of the evidence for this idea came from areas unfamiliar to North American geologists (Africa, South America, and Australia).
2.2 CONTINENTAL DRIFT: AN IDEA BEFORE ITS TIME
1. The first line of evidence that the continents were once connected was the jigsaw puzzle-like fit of the coastlines of South America and Africa.
2. The discovery of the fossil remains of Mesosaurus in both South America and Africa, but nowhere else, supports the continental drift hypothesis because this was a small aquatic freshwater reptile that would not have been capable of crossing the Atlantic Ocean. Further, had the Mesosaurus actually been able to make that trip, the fossil remains of the species would be much more widely distributed on each continent.
3. The prevailing view, in the early 20th century, of how land animals migrated over vast ocean expanses included rafting, transoceanic land bridges, and island stepping-stones. These scientists looked for evidence of such features on the seafloor to refute hypotheses of continental drift.
4.
Wegener accounts for the existence of glaciers in the southern landmasses at a time when areas in North America, Europe, and Asia supported lush tropical swamps by suggesting that the southern continents were joined together and located near the South Pole to provide the conditions necessary for large glaciations. At the same time, the Northern continents were located nearer the equator, an area conducive to the formation of great tropical swamps. 2.3 THE GREAT DEBATE 1. The two aspects of continental drift most objectionable to Earth scientists were (1) his inability to provide a credible mechanism for continental drift and (2) his incorrect suggestion that larger and sturdier continents could break through thinner oceanic crust. 2.4 THE THEORY OF PLATE TECTONICS 1. Following WWII, oceanographers were able to produce much better pictures of the seafloor through advances in the technology of marine tools. From these studies, oceanographers discovered the large oceanic ridge system winding through all of Earth’s major oceans. 2. The lithosphere consists of the uppermost mantle and overlying crust, and is a strong, rigid layer. The lithosphere contains the plates. The asthenosphere is a weaker region of the upper mantle; this is an area where pressures and temperatures are high enough that the rocks are near their melting points and capable of flowing. 3. The seven major lithospheric plates include: the North American, South American, Pacific, African, Eurasian, Australian-Indian, and Antarctic plates. 4. The three types of plate boundaries are convergent, divergent and transform. At convergent boundaries, plates move towards one another. At divergent boundaries, plates move away from one another. And at transform boundaries, plates slide past one another. 2.5 DIVERGENT PLATE BOUNDARIES AND SEAFLOOR SPREADING 1. At divergent boundaries, two plates move away from one another. These boundaries are the location of new oceanic crust, as hot rock from the mantle migrates upward to fill the void of the diverging plates. Divergent boundaries are also called constructive plate margins due to this creation of new rock. 2. The average rate of seafloor spreading in modern oceans is about 5 cm (2 inches) per year. The Mid-Atlantic Ridge spreads much slower than average, at a rate of 2 cm (0.7 inches) per year and the East Pacific Rise spreads much more quickly than average, at a rate of 15 cm (6 inches) per year. 3. The oceanic ridge system is characterized by an elevated ridge created by hot, newly formed oceanic crust (hot rock is less dense than cool rock). At the axis of the ridge, a rift valley develops—a deep, canyon-like structure representing the active area of spreading. Away from the ridge, rock is cooler (and thus denser) and sits topographically lower than the ridge itself. This cool rock is thicker as the underlying asthenosphere is cooler and more rigid. As the rock moves away from the ridge, it also slowly accumulates sediment from the deep ocean basin. 4. Continental rifting occurs where a continental landmass is split into segments, in a similar manner to mid-ocean ridge divergence. This occurs in areas where plate motions create opposing forces on the lithosphere, pulling continental rock apart. In this process, the lithosphere is thinned and crustal rocks break into large blocks, creating a central downdropped rift valley. This thinning and stretching also promotes mantle upwelling and broad areas of upwarped lithosphere on either side of the divergence.
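The numbers quoted in this chapter are easy to sanity-check computationally. The short Python sketch below is not part of the original instructor's manual; it is our own illustration (the function names are hypothetical) of two calculations the chapter describes: the average plate speed from a crust sample's radiometric age and its distance from the ridge axis (section 9a), and the isostatic height of a floating crustal block using the densities given in the isostasy animation exercise.

```python
# Illustrative companion calculations for the chapter (not from the manual).

def spreading_rate_cm_per_yr(distance_km: float, age_myr: float) -> float:
    """Average plate speed: distance from the ridge axis divided by the
    radiometric age of the crust sample (section 9a)."""
    return (distance_km * 1e5) / (age_myr * 1e6)  # km -> cm, Myr -> yr

def height_above_mantle_km(thickness_km: float, rho_crust: float,
                           rho_mantle: float = 3.3) -> float:
    """Archimedes/isostasy: the fraction of a floating block standing above
    the 'liquid' asthenosphere is 1 - (block density / liquid density)."""
    return thickness_km * (1.0 - rho_crust / rho_mantle)

# Crust sampled 400 km from the ridge with an age of 8 Myr reproduces the
# chapter's average rate of 5 centimeters (2 inches) per year.
print(spreading_rate_cm_per_yr(400.0, 8.0))   # 5.0

# Continental crust (50 km thick, 2.7 g/cm^3) rides roughly 9 km above the
# mantle 'liquid'; oceanic crust (5 km thick, 3.0 g/cm^3) only ~0.45 km.
print(height_above_mantle_km(50.0, 2.7))      # ~9.1
print(height_above_mantle_km(5.0, 3.0))       # ~0.45
```

The contrast in buoyant heights is exactly the lead-in the activity aims for: dense, thin oceanic lithosphere sits low and subducts readily, while thick felsic continental lithosphere rides high and resists subduction.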
https://ebookon.com/product/solution-manual-for-earth-an-introduction-to-physical-geology-plus-masteringgeology-with-etext-access-card-package-11-e-11th-edition-by-edward-j-tarbuck-emeritus-illinois-central-colle/
Do you want to build base fitness for good-weather activities like tennis or hiking? Or do you want a fun workout that challenges you, but allows you to go at your own pace? This circuit class alternates strength and cardio conditioning exercises to give you a full-body workout designed to tone your body, build your endurance and clear your mind! The exercises change from class to class to keep you challenged mentally and physically. The warm-up and cool-down for each class will consist of mobility, flexibility and balance moves. All levels are welcome, from beginners to experienced, since everyone works at their own intensity, and modifications are provided. Training in this supportive, motivating group environment is not only fun, but effective in helping you reach your fitness goals! You WILL see results in this class! Participants must be 18 years or older to attend. Please bring water, a mat, and appropriately sized dumbbells for the exercises. To register, click the link; then, on the Milford Community Use website, select Adult/Spring/Fitness/Total Body Strength & Conditioning from the left-side menu. Deadline to Register: Monday, April 25th at 3:00pm.
https://www.jeangilliswellness.com/events/total-body-strength-conditioning/
Kwanzaa (or Kwanza) is an annual holiday celebrating African American culture and heritage. Also considered a pan-African holiday, it is celebrated largely in the United States but can be observed by anyone in the world who would like to participate. Along with being a time to reflect on cultural roots and look forward to a new year, it is also a harvest celebration. Kwanzaa gets its name from a Swahili phrase, “matunda ya kwanza,” meaning “first fruits.” This name connects current celebrations of family and community to the ancestral roots the holiday is grounded in: the communal effort needed to have a successful harvest. It is seven days long, always beginning on December 26th and ending on January 1st, and each day is focused on a different theme or value. Kwanzaa is a relatively new holiday compared to other celebrations that occur around this time of year. It was created in 1966 by Dr. Maulana Karenga, a professor in the United States, as a non-religious, cultural festival. He wanted to create a way to bring the African American community together, to reflect on their heritage and celebrate values of family and social unity. Everyone who celebrates Kwanzaa may have different traditions to mark the occasion, although it usually entails a large feast, gift-giving, and time spent with family. One common tradition of Kwanzaa is the lighting of the Kinara, which is a candle holder with seven branches. It is often compared to the Hanukkah tradition of lighting the menorah, but they should not be confused with each other as they have very different meanings behind them. One black candle is placed in the middle with three red on one side and three green on the other, and each colour is symbolic. The red candles represent the struggles they face, the black one symbolizes African people, and the green candles represent the future. Each night of Kwanzaa, a candle is lit on the Kinara and a discussion about the principle guiding that particular day often follows. The black candle is lit first, and in the days following, red and green candles are lit on alternating days, beginning with red. On the last day of Kwanzaa, the final candle is lit, and later on they are all extinguished to show that the holiday has come to an end. Each of the seven days of Kwanzaa is dedicated to a different principle, or ‘Nguzo Saba.’ These include unity, self-determination, collective work and responsibility, cooperative economics, purpose, creativity, and faith. There are also seven symbols that represent these principles, such as the Kinara, and the Mazao, the foods that represent the traditional harvest celebrations that Kwanzaa was based on. December 31st is the day of feasting in Kwanzaa. This traditional meal is usually shared among many, as Kwanzaa is a celebration of community and culture. The food being served can vary, as anyone celebrating may have different traditions they follow in their own family or community. The table is sometimes set with foods that are symbolic to the holiday. This includes things like bananas, squash, corn and other foods that represent the harvest. The Kinara, gifts to be exchanged, and a chalice are also often included. Kwanzaa is a time to reflect on the past, look forward to the future, and think about ancestral roots while celebrating others. Happy Kwanzaa! This blog is a collaboration with International Connections, a blog run by the IESC here at Western. Check out their blog and give some of their posts a read!
https://uwo.ca/se/thrive/blog/2020/kwanzaa_a_celebration_of_heritage.html
A practical approach to dealing with code injection vulnerabilities is to assess the risk level (commonly known as risk analysis or assessment) arising from the presence of code-level vulnerabilities and their potential impact, and then to generate further actions to reduce the risk level, such as performing penetration testing and deploying IDS (Schaffer, 2012; May et al., 2004). Thus, risk analysis improves application security for all related stakeholders. This work is motivated by the observation that traditional risk assessment approaches proposed in the literature cannot estimate the overall risk arising from the diverse severity levels of a given vulnerability (W3af, 2015; Sqlifuzzer, 2015; Shar & Tan, 2012; SQL-inject-me, 2015; CVSS Scoring System, 2015). Various vulnerability types and their corresponding attack payload types are not accounted for in most of these assessment approaches. Moreover, most existing frameworks are quantitative, so risk assessment model parameters need to be known, which may not be a practical assumption in the real world. For example, the precise value of assets or resources may not be estimated accurately, the likelihood of vulnerability occurrence may not be computable due to the lack of sufficient historical data, and the severity level due to vulnerability exploitation may not be estimated. Another drawback of existing approaches is that they treat all types of vulnerabilities, and all severity levels, as equal. Thus, a suitable framework is needed to assess the risk of an application due to source code level vulnerabilities, along with the possibility of qualitative alternatives to quantitative values for specifying the magnitude of vulnerability and severity levels based on attack payloads. To address these drawbacks, we propose a Fuzzy Logic-based System (FLS) framework to assess the risk due to code injection vulnerabilities present in an application. Fuzzy logic is a suitable computing technique for cases where quantitative values are not available. It operates on subjective (linguistic) variables and provides a rule-based approach to combining them (Mamdani, 1974). We define a set of code-level metrics to establish the linguistic terms relating the subjective magnitude of a vulnerability to the corresponding impact of its actual exploitation. We also apply nested FLSs to combine diverse types of risk into a single value that is useful in practice. The proposed framework gives professionals the flexibility to specify the rules that combine the source of vulnerability and attack payload types to identify the severity level based on their knowledge. We evaluate the proposed approach with three real-world web applications implemented in PHP and reported to be vulnerable. The evaluation results indicate that the vulnerable versions of the applications are riskier than the vulnerability-free versions for SQL Injection (SQLI), Cross-Site Scripting (XSS), Remote File Inclusion (RFI), and Web session vulnerabilities. The results are more meaningful for software professionals performing quality assurance than those of traditional black-box scanner tools, which cannot assess an application's overall risk level.
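The abstract does not reproduce the authors' actual membership functions or rule base, so the sketch below is only a minimal illustration of the general Mamdani-style machinery it refers to: linguistic inputs, triangular membership functions, min-style rule firing, and a centroid-like defuzzification. All terms, rules, and weights here are invented for illustration.

```python
# Minimal Mamdani-style fuzzy risk sketch (hypothetical terms and rules,
# not the FLS proposed in the chapter).

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms on a 0-10 scale for both inputs.
LOW, MED, HIGH = (-5, 0, 5), (0, 5, 10), (5, 10, 15)

def risk(magnitude: float, severity: float) -> float:
    """Fire each rule with min (fuzzy AND) of the two input memberships,
    then defuzzify as a weighted average of the output terms' peaks."""
    rules = [
        (tri(magnitude, *LOW),  tri(severity, *LOW),   0.0),  # -> low risk
        (tri(magnitude, *MED),  tri(severity, *MED),   5.0),  # -> medium
        (tri(magnitude, *HIGH), tri(severity, *HIGH), 10.0),  # -> high
        (tri(magnitude, *LOW),  tri(severity, *HIGH),  5.0),  # mixed cases
        (tri(magnitude, *HIGH), tri(severity, *LOW),   5.0),
    ]
    fired = [(min(m, s), out) for m, s, out in rules]
    total = sum(w for w, _ in fired)
    return sum(w * out for w, out in fired) / total if total else 0.0

# A vulnerability of high subjective magnitude hit by a damaging payload
# type scores near the top of the scale; a mild one scores low.
print(round(risk(7.5, 8.0), 2))  # ~7.78
print(round(risk(2.0, 3.0), 2))  # ~2.5
```

A nested arrangement, as the abstract describes, would feed per-vulnerability-type outputs like this one (SQLI, XSS, RFI, Web session) into a second FLS that produces the single application-level risk value.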
https://www.igi-global.com/chapter/fuzzy-rule-based-vulnerability-assessment-framework-for-web-applications/188234
Which of the following foods does not support bacterial growth? If you were asked this question, how would you answer it? Most people would choose cooked food, reasoning that the heat used in cooking kills the bacteria, so there is no chance for bacteria to grow in cooked food. However, this opinion can be wrong if we are talking about vegetables.
The Cooked Vegetable
A raw vegetable has natural protection against bacteria that comes from its original condition. It can still become rotten, but compared to a cooked one, the raw vegetable stays fresh longer. On the other hand, a vegetable that has gone through the cooking process loses its firmness and freshness. The structure of the vegetable on the micro scale also becomes weak because of the heat. This condition creates a good environment for bacteria to grow, and the residual warmth also encourages bacterial multiplication. Moreover, it can stimulate fungus to grow, which becomes another problem besides the bacteria. Therefore, if you compare a raw vegetable and a cooked one in terms of how long each lasts before it starts to rot, the raw one lasts longer. So, if you plan to keep a vegetable for quite a long time, you shouldn't cook it. Conversely, once you have cooked a vegetable, you should eat it all and not leave leftovers sitting out, because they will become a place for bacteria to grow.
The Other Foods
The vegetable is one of the easiest examples for understanding which foods do not support bacterial growth. Other ingredients, however, such as dairy products, meat, and poultry, rot more easily if you don't process them. Why the difference? There are at least five elements that cause those ingredients to become an ideal place for bacteria to grow:
Water
Water supports the bacteria's activity in breaking down and consuming the food ingredient in order to grow. However, the water that bacteria use is the unbound water, known in scientific terms as water availability or water activity. It is different from the moisture level of the food, which all foods have. There is a specific method to measure water activity, and if a food has a water activity above 0.95, it becomes an excellent place for bacteria to grow.
Oxygen
Bacteria also need oxygen to live. Most food bacteria are classified as aerobes, the type that needs oxygen. However, some microbes in food can also live without oxygen. Therefore, depending on the oxygen level around the food, growth can be accelerated or slowed down.
Nutrients
Bacteria also need nutrients to grow; they use them to produce the energy they need to live. These nutrients come from substances found in the food itself: carbohydrates, sugar, protein, starch, and other nutrients become the food of the bacteria. For that reason, foods rich in nutrients, such as dairy products, are more suitable places for bacteria to grow.
Temperature
Temperature plays an essential role in supporting bacterial growth. In scientific terms, there is a range of temperatures known as the danger zone. If a food or ingredient stays within this danger zone for a long time, bacteria can grow rapidly. The danger zone is the temperature range between 41 degrees Fahrenheit (5 degrees Celsius) and 135 degrees Fahrenheit (57 degrees Celsius).
Food should therefore be cooked above that range, or kept cold below it, to prevent bacteria from growing.

pH. The pH level of a food or ingredient also affects bacterial growth. Bacteria mostly grow fastest near a neutral pH of 7. That does not mean low-pH food is safe from spoilage, though: when a food's pH is below about 4.5, fungus or molds can easily infect it instead.

How to Prevent Bacterial Growth in Food

By now you should understand which foods do not support bacterial growth, as well as which foods make the best breeding grounds for bacteria. In short, food with higher protein content carries a higher risk of rapid bacterial growth. That doesn't mean you can't prevent, or at least slow, that growth. Here are some tips to keep your food much longer:

- Put it in the refrigerator – the cool temperature keeps the food out of the danger zone, so bacterial growth slows almost to a stop.
- Preserve it – preservation techniques reduce the speed of bacterial growth. Meat and other protein-based ingredients can be marinated or salted into jerky that lasts longer. Vegetables keep best fresh in an airtight container, which limits the oxygen that bacteria need to grow.
- Keep it clean – store different types of ingredients in separate containers. Mixing them provides more nutrients for bacteria to grow on, and airtight containers help food keep much longer.

Conclusion

Now you understand which foods do not support bacterial growth and how to process and store food accordingly. If you do it correctly, you don't have to worry about this problem, and your food will last longer. In the middle of a pandemic, when we can't move freely, the preservation techniques mentioned above are especially useful. So, try them now and enjoy your food for much longer!
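To make the article's thresholds concrete, here is a small illustrative sketch in Python (the article itself contains no code; the function names and sample values are our own, and this is a toy model, not food-safety advice) that combines the danger-zone temperatures and the 0.95 water-activity cutoff described above:

# Toy risk check based on the thresholds described in the article:
# the "danger zone" of 41-135 degrees Fahrenheit and the 0.95
# water-activity cutoff. Illustrative only, not food-safety advice.

DANGER_LOW_F = 41.0    # 5 degrees Celsius
DANGER_HIGH_F = 135.0  # 57 degrees Celsius

def in_danger_zone(temp_f: float) -> bool:
    """True if the holding temperature favors rapid bacterial growth."""
    return DANGER_LOW_F < temp_f < DANGER_HIGH_F

def growth_risk(temp_f: float, water_activity: float) -> str:
    """Combine temperature with the water-activity threshold above."""
    if in_danger_zone(temp_f) and water_activity > 0.95:
        return "high"
    if in_danger_zone(temp_f) or water_activity > 0.95:
        return "moderate"
    return "low"

print(growth_risk(70.0, 0.97))  # room-temperature stew -> high
print(growth_risk(38.0, 0.97))  # refrigerated stew -> moderate
print(growth_risk(70.0, 0.60))  # jerky on the counter -> low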
https://www.howtowhatwhy.com/which-of-the-following-food-does-not-support-bacterial-growth/
The use of covert recordings in family proceedings has recently been highlighted after the President of the Family Division, Sir James Munby, warned that this has become a topic of "growing significance". It seems that desperate parties are going to increasingly greater lengths to produce evidence that could sway a case in their favour, where a judge's decision would usually come down to the word of one party against the other (albeit guided by the recommendations of a Cafcass officer in children cases). The President has invited the Family Justice Council to consider the issue of covert recordings following a recent case in which a father had made recordings of conversations with a social worker, Cafcass officer and solicitor over a number of years (Re B (A Child) EWCA Civ 1579).

Too tempting in today's society?

Recording equipment is easily available in the age of smartphones and new technology. Purchasing equipment such as recording software and tracking devices is easy and cheap. Voicemail messages can be saved and telephone calls can easily be recorded.

When can recordings be used?

At present, there is very little guidance on the use of covert recordings in the family court, despite this becoming an increasingly prevalent issue given the easy-to-access technology. In the absence of clear guidance from the court, it is often difficult to advise clients in this position as to how such evidence will be treated. Recordings can be used as evidence in family proceedings if permission is given. The court can decide to exclude a recording, but if so it must give clear reasons for its decision. Such recordings should not be shown to third parties (eg Cafcass or a social worker) until permission has been given by the court. In cases where this is an issue, the judge will have to be satisfied that the evidence is unedited, relevant to the proceedings, and that the voices are definitely those of the people claimed.

A word of warning

An extreme example was highlighted in a case last year, in which a father had gone to the lengths of sewing recording devices into his child's clothing (M v F (Covert Recording of Children) EWFC 29). The decision for the court was whether the child should remain living with her father and his partner, or whether she should live with her mother instead. The father and his partner were desperate to find out what the child was saying to the various professionals involved and used selected excerpts of the recordings within the proceedings. The court admitted the recordings as evidence in the case, and noted the relevance of the way the recordings were made to an assessment of the father's parenting. Jackson J felt it was unreal to exclude the recordings, knowing they existed, and that there was a risk that excluding the content of the recordings while relying on the fact that they had been taken would lead to unbalanced evidence.
The judge said concealing a recording device on a child to gather evidence for proceedings "is almost always likely to be wrong", and that the actions of the father and his partner were detrimental for the following reasons:
- Damaging the relationships between the adults in the child's life;
- Showing the father's inability to trust professionals;
- Creating a secret that may later be discovered by the child, which would then affect her relationships with her father and his partner;
- Putting the father's standing in the community at risk;
- Resulting in huge wasted time on the father's part (setting up and transcribing the recordings); and
- Significantly increasing the costs of the proceedings.
The judge warned that anyone considering doing something similar should "first think carefully about the consequences". The judge in that particular case did not go so far as to address any potential criminal aspects of the steps taken by the father and his partner (eg a breach of the Data Protection Act 1998 or the Human Rights Act), or comment on what the position would be in relation to recordings of adults, but it serves as a warning that the court will not look too kindly on covert recordings being submitted as evidence. In every case, it is important to consider the appropriate evidence to be used and its benefits, in light of the circumstances of the case and the desired outcome. Advising in relation to recordings (of both children and adults) is becoming increasingly common in our family cases. For further advice about the right approach to take in your circumstances, please get in touch. This article first appeared on http://www.tltsolicitors.com on 14 November 2017.
https://sarahegreen.com/2017/11/16/deceit-or-desperation-covert-recordings-and-the-family-court/
1.4 Million US Jobs Vulnerable To Disruption From Technology

The global economy faces a reskilling crisis, with 1.4 million jobs in the US alone vulnerable to disruption from technology and other factors by 2026, according to a new report, Towards a Reskilling Revolution: A Future of Jobs for All, published by the World Economic Forum. The report is an analysis of nearly 1,000 job types across the US economy, encompassing 96% of employment in the country. Its aim is to assess the scale of the reskilling task required to protect workforces from an expected wave of automation brought on by the Fourth Industrial Revolution. Drawing on this data for the US economy, the report finds that 57% of the jobs expected to be disrupted belong to women. If called on today to move to another job with skills that match their own, 16% of workers would have no opportunities to transition and another 25% would have only one to three matches. At the other end of the spectrum, 2% of workers have more than 50 options. This group makes up a very small, fortunate minority: on average, workers would have 10 transition options today. The positive finding of the report is the huge opportunity it identifies for reskilling to lift wages and increase social mobility. With reskilling, for example, the average worker in the US economy would have 48 viable job transitions – nearly as many as the 2% with the most options today. Among those transitions, 24 jobs would lead to higher wages.

The case for a reskilling revolution

The research, published in collaboration with The Boston Consulting Group, finds that coordinated reskilling that aims to maintain or grow wages has very high returns for workers at risk of displacement – and for businesses and the economy. At-risk workers who retrain for an average of two years could receive an average annual salary increase of $15,000 – and businesses would be able to find talent for jobs that might otherwise remain unfilled. With this approach, up to 95% of at-risk workers would find new work in new, higher-income jobs. Without such coordinated upskilling efforts, the report finds, one in four at-risk workers would lose on average $8,600 of their annual income even if they succeed in moving to a new job. However, this reskilling revolution requires that 70% of affected workers retrain in a new job "family" or career, highlighting the need for retraining initiatives that combine reskilling programmes with income support and job-matching schemes to fully support those undergoing the transition. "The only limiting factor on a world of opportunities for people is the willingness of leaders to make investments in re-skilling that will bridge workers onto new jobs. This report shows that this investment has very high returns for businesses as well as economies – and ensures that workers find a purpose in their lives," said Klaus Schwab, Founder and Executive Chairman, World Economic Forum.

A future of jobs for all

The report also describes what reskilling would need to look like. The people who will do best in the transitions underway are those who have "hybrid" skills: transferable skills like collaboration and critical thinking, as well as deeper expertise in specific areas. Both highly specialized and highly generalist roles will need significant reskilling.
The report lays out 15 job pathways to demonstrate the precise range of options that reskilling can present for professions as diverse as assembly-line workers, secretaries, cashiers, customer service representatives, truck drivers, radio and TV announcers, fast-food chefs, mining machine operators and computer programmers. However, bringing these viable and desirable job transitions to fruition requires concerted effort by businesses, policy-makers and various stakeholders to think differently about workforce planning and to invest in reskilling that will bridge workers to new jobs. "Work provides people with meaning, identity and opportunity. We need to break out of the current paralysis and recognize that skills are the 'great redistributor'. Equipping people with the skills they need to make job transitions is the fuel needed for growth – and to secure stable livelihoods for people in the midst of technological change," said Saadia Zahidi, Head of Education, Gender and Work System Initiative and Member of the Executive Committee, World Economic Forum.

A gendered impact

Of the 1.4 million jobs expected by the US Bureau of Labor Statistics to be disrupted between now and 2026, the majority – 57% – belong to women. This is a worrying development at a time when the workplace gender gap is already widening and when women are under-represented in the areas of the labour market expected to grow most robustly in the coming years. The data show that the current narrative about the most at-risk category is misleading from a gender perspective. For example, there are nearly 164,000 at-risk female secretaries and administrative assistants, while there are just over 90,000 at-risk male assembly-line workers. Without reskilling, at-risk women have on average only 12 job transition options, while at-risk men have 22. With reskilling, women have 49 options, while men have 80; the gap between women's and men's options narrows in relative terms. These transitions also present an opportunity to close the persistent gender wage gap: combined reskilling and job transitions would lead to increased wages for 74% of all currently at-risk women, while the equivalent figure for men is 53%.
Appearance: Clear, colorless, odorless liquid
CAS number: [7664-93-9]
Density and phase: 1.84 g/cm3, liquid
Solubility in water: Fully miscible (exothermic)
Melting point: 10°C (283 K)
Boiling point: 337°C (610 K)
pKa: −3.0, 1.99
Viscosity: 26.7 cP at 20°C
MSDS: External MSDS
EU classification: Corrosive (C), Toxic (T)
R-phrases: R23, R24, R25, R35, R36, R37, R38, R49
S-phrases: S23, S30, S36, S37, S39, S45
Flash point: Non-flammable
RTECS number: WS5600000
Related strong acids: Selenic acid, hydrochloric acid, nitric acid, phosphoric acid
Related compounds: Hydrogen sulfide, sulfurous acid, peroxymonosulfuric acid, sulfur trioxide, oleum
Except where noted otherwise, data are given for materials in their standard state (at 25°C, 100 kPa).

Sulfuric acid (British English: sulphuric acid), H2SO4, is a strong mineral acid. It is soluble in water at all concentrations. It was once known as oil of vitriol, a term coined by the 8th-century alchemist Jabir ibn Hayyan, the chemical's probable discoverer. Sulfuric acid has many applications, and is produced in greater amounts than any other chemical besides water. World production in 2001 was 165 million tonnes, with an approximate value of $8 billion. Principal uses include ore processing, fertilizer manufacturing, oil refining, wastewater processing, and chemical synthesis. Many proteins are made of sulfur-containing amino acids (like cysteine and methionine), which produce sulfuric acid when metabolized by the body.

Physical properties

Forms of sulfuric acid

Although 100% sulfuric acid can be made, it loses SO3 at the boiling point to produce 98.3% acid. The 98% grade is also more stable for storage, making it the usual form of "concentrated" sulfuric acid. Other concentrations of sulfuric acid are used for different purposes. Some common concentrations are:
- 10%, dilute sulfuric acid for laboratory use (pH 1)
- 33.5%, battery acid (used in lead-acid batteries) (pH 0.5)
- 62.18%, chamber or fertilizer acid (pH about 0.4)
- 77.67%, tower or Glover acid (pH about 0.25)
- 98%, concentrated (pH about 0.1)
Since sulfuric acid is a strong acid, a 0.50 M solution of sulfuric acid has a pH close to zero. Different purities are also available. Technical grade H2SO4 is impure and often colored, but is suitable for making fertilizer. Pure grades such as US Pharmacopoeia (USP) grade are used for making pharmaceuticals and dyestuffs. When high concentrations of SO3(g) are added to sulfuric acid, H2S2O7 forms. This is called pyrosulfuric acid, fuming sulfuric acid, oleum or, less commonly, Nordhausen acid. Concentrations of oleum are expressed either as % SO3 (called % oleum) or as % H2SO4 (the amount that would be made if H2O were added); common concentrations are 40% oleum (109% H2SO4) and 65% oleum (114.6% H2SO4). Pure H2S2O7 is in fact a solid, with a melting point of 36°C.

Polarity and conductivity

Anhydrous H2SO4 is a very polar liquid, with a dielectric constant of around 100. This is because it can dissociate by protonating itself, a process known as autoprotolysis, which occurs to a high degree: more than 10 billion times the level seen in water:
- 2 H2SO4 ⇌ H3SO4+ + HSO4−
This allows protons to be highly mobile in H2SO4.
It also makes sulfuric acid an excellent solvent for many reactions. In fact, the equilibrium is more complex than shown above; 100% H2SO4 contains the following species at equilibrium (figures in mmol per kg solvent): HSO4− (15.0), H3SO4+ (11.3), H3O+ (8.0), HS2O7− (4.4), H2S2O7 (3.6), H2O (0.1).

Chemical properties

Reaction with water

The hydration reaction of sulfuric acid is highly exothermic. If water is added to concentrated sulfuric acid, it can boil and spit dangerously. One should always add the acid to the water rather than the water to the acid. This can be remembered through mnemonics such as "Always do things as you oughta, add the acid to the water. If you think your life's too placid, add the water to the acid", "A.A.: Add Acid", or "Drop acid, not water." Note that part of this problem is due to the relative densities of the two liquids: water is less dense than sulfuric acid and will tend to float above the acid. The reaction is best thought of as forming hydronium ions, by:
- H2SO4 + H2O → H3O+ + HSO4−
And then:
- HSO4− + H2O → H3O+ + SO42−
Because the hydration of sulfuric acid is thermodynamically favorable (ΔH = −880 kJ/mol), sulfuric acid is an excellent dehydrating agent, and is used to prepare many dried fruits. The affinity of sulfuric acid for water is sufficiently strong that it will take hydrogen and oxygen atoms out of other compounds; for example, mixing starch, (C6H10O5)n, with concentrated sulfuric acid gives elemental carbon and water, which is absorbed by the sulfuric acid (which becomes slightly diluted): (C6H10O5)n → 6n C + 5n H2O. The effect of this can be seen when concentrated sulfuric acid is spilled on paper: the cellulose reacts to give a burned appearance, the carbon appearing much as soot would in a fire. A more dramatic illustration occurs when sulfuric acid is added to a tablespoon of white sugar in a cup: a tall, rigid column of black porous carbon smelling strongly of caramel emerges from the cup.

Other reactions of sulfuric acid

As an acid, sulfuric acid reacts with most bases to give the corresponding sulfate. For example, copper(II) sulfate, the familiar blue salt of copper used for electroplating and as a fungicide, is prepared by the reaction of copper(II) oxide with sulfuric acid:
- CuO + H2SO4 → CuSO4 + H2O
Sulfuric acid can be used to displace weaker acids from their salts; for example, sodium acetate gives acetic acid:
- H2SO4 + CH3COONa → NaHSO4 + CH3COOH
Likewise, the reaction of sulfuric acid with potassium nitrate can be used to produce nitric acid, along with a precipitate of potassium bisulfate. With nitric acid itself, sulfuric acid acts as both an acid and a dehydrating agent, forming the nitronium ion NO2+, which is important in nitration reactions involving electrophilic aromatic substitution. This type of reaction, where protonation occurs on an oxygen atom, is important in many reactions in organic chemistry, such as Fischer esterification and dehydration of alcohols. Sulfuric acid reacts with most metals in a single displacement reaction to produce hydrogen gas and the metal sulfate. Dilute H2SO4 attacks iron, aluminium, zinc, manganese and nickel, but tin and copper require hot concentrated acid. Lead and tungsten are, however, resistant to sulfuric acid. The reaction with iron, shown below, is typical for most of these metals, but the reaction with tin is unusual in that it produces sulfur dioxide rather than hydrogen:
- Fe(s) + H2SO4(aq) → H2(g) + FeSO4(aq)
- Sn(s) + 2 H2SO4(l) → SnSO4 + 2 H2O + SO2

Environmental aspects

Sulfuric acid is a constituent of acid rain, being formed by atmospheric oxidation of sulfur dioxide in the presence of water, i.e. by oxidation of sulfurous acid. Sulfur dioxide is the main product when the sulfur in sulfur-containing fuels such as coal or oil is burned. Sulfuric acid is formed naturally by the oxidation of sulfide minerals, such as iron sulfide. The resulting water can be highly acidic and is called Acid Rock Drainage (ARD). The acidic water so formed can dissolve metals present in sulfide ores, resulting in brightly colored and toxic streams. The oxidation of the iron sulfide pyrite by molecular oxygen produces iron(II), or Fe2+:
- 2 FeS2 + 7 O2 + 2 H2O → 2 Fe2+ + 4 SO42− + 4 H+
The Fe2+ can be further oxidized to Fe3+, according to:
- 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O
and the Fe3+ so produced can be precipitated as the hydroxide or hydrous oxide. The equation for the formation of the hydroxide is:
- Fe3+ + 3 H2O → Fe(OH)3 + 3 H+
The iron(III) ion ("ferric iron", in casual nomenclature) can also oxidize pyrite. When iron(III) oxidation of pyrite occurs, the process can become rapid, and pH values below zero have been measured in ARD from this process. ARD can also produce sulfuric acid at a slower rate, so that the Acid Neutralization Capacity (ANC) of the aquifer can neutralize the acid produced. In such cases, the Total Dissolved Solids (TDS) concentration of the water can be increased by the dissolution of minerals from the acid-neutralization reaction.

Extraterrestrial sulfuric acid

Sulfuric acid is produced in the upper atmosphere of Venus by the sun's photochemical action on carbon dioxide, sulfur dioxide, and water vapor. Ultraviolet photons of wavelengths less than 169 nm can photodissociate carbon dioxide into carbon monoxide and atomic oxygen. Atomic oxygen is highly reactive; when it reacts with sulfur dioxide, a trace component of the Venusian atmosphere, the result is sulfur trioxide, which can combine with water vapor, another trace component, to yield sulfuric acid. In the upper, cooler portions of Venus' atmosphere, sulfuric acid can exist as a liquid, and thick sulfuric acid clouds completely obscure the planet's surface from above. The main cloud layer extends from 45–70 km above the planet's surface, with thinner hazes extending as low as 30 km and as high as 90 km above the surface. Infrared spectra from NASA's Galileo mission show distinct absorptions on Europa, a moon of Jupiter, that have been attributed to one or more sulfuric acid hydrates. The interpretation of the spectra is somewhat controversial: some planetary scientists prefer to assign the spectral features to the sulfate ion, perhaps as part of one or more minerals on Europa's surface.

History of sulfuric acid

The discovery of sulfuric acid is credited to the 8th-century alchemist Jabir ibn Hayyan. It was studied later by the 9th-century physician and alchemist Ibn Zakariya al-Razi (Rhases), who obtained the substance by dry distillation of minerals including iron(II) sulfate heptahydrate, FeSO4·7H2O, and copper(II) sulfate pentahydrate, CuSO4·5H2O. When heated, these compounds decompose to iron(II) oxide and copper(II) oxide, respectively, giving off water and sulfur trioxide, which combine to produce a dilute solution of sulfuric acid. This method was popularized in Europe through translations of Arabic and Persian treatises and books by European alchemists, such as the 13th-century German Albertus Magnus.
Sulfuric acid was known to medieval European alchemists as oil of vitriol, spirit of vitriol, or simply vitriol, among other names. The word vitriol derives from the Latin vitreus, 'glass', for the glassy appearance of the sulfate salts, which also carried the name vitriol. Salts called by this name included copper(II) sulfate (blue vitriol, or rarely Roman vitriol), zinc sulfate (white vitriol), iron(II) sulfate (green vitriol), iron(III) sulfate (vitriol of Mars), and cobalt(II) sulfate (red vitriol). Vitriol was widely considered the most important alchemical substance, intended to be used as a philosopher's stone. Highly purified vitriol was used as a medium in which to react substances, largely because the acid does not react with gold, often the final aim of alchemical processes. The importance of vitriol to alchemy is highlighted in the alchemical motto Visita Interiora Terrae Rectificando Invenies Occultum Lapidem ('Visit the interior of the earth and by rectifying (i.e. purifying) you will find the hidden/secret stone'), found in L'Azoth des Philosophes by the 15th-century alchemist Basilius Valentinus; the motto is a backronym. In the 17th century, the German-Dutch chemist Johann Glauber prepared sulfuric acid by burning sulfur together with saltpeter (potassium nitrate, KNO3) in the presence of steam. As the saltpeter decomposes, it oxidizes the sulfur to SO3, which combines with water to produce sulfuric acid. In 1736, Joshua Ward, a London pharmacist, used this method to begin the first large-scale production of sulfuric acid. In 1746 in Birmingham, John Roebuck began producing sulfuric acid this way in lead-lined chambers, which were stronger, less expensive, and could be made larger than the glass containers that had been used previously. This lead chamber process allowed the effective industrialization of sulfuric acid production and, with several refinements, remained the standard method of production for almost two centuries. John Roebuck's sulfuric acid was only about 35–40% sulfuric acid. Later refinements in the lead-chamber process by the French chemist Joseph-Louis Gay-Lussac and the British chemist John Glover improved this to 78%. However, the manufacture of some dyes and other chemical processes required a more concentrated product, and throughout the 18th century this could only be made by dry distilling minerals in a technique similar to the original alchemical processes. Pyrite (iron disulfide, FeS2) was heated in air to yield iron(II) sulfate, FeSO4, which was oxidized by further heating in air to form iron(III) sulfate, Fe2(SO4)3, which, when heated to 480°C, decomposed to iron(III) oxide and sulfur trioxide, which could be passed through water to yield sulfuric acid in any concentration. The expense of this process prevented the large-scale use of concentrated sulfuric acid. In 1831, the British vinegar merchant Peregrine Phillips patented a far more economical process for producing sulfur trioxide and concentrated sulfuric acid, now known as the contact process. Essentially all of the world's supply of sulfuric acid is now produced by this method.

Manufacture

Sulfuric acid is produced from sulfur, oxygen and water via the contact process. In the first step, sulfur is burned to produce sulfur dioxide:
- (1) S(s) + O2(g) → SO2(g)
This is then oxidised to sulfur trioxide using oxygen in the presence of a vanadium(V) oxide catalyst:
- (2) 2 SO2(g) + O2(g) → 2 SO3(g) (in the presence of V2O5)
Finally, the sulfur trioxide is treated with water (usually as 97–98% H2SO4 containing 2–3% water) to produce 98–99% sulfuric acid:
- (3) SO3(g) + H2O(l) → H2SO4(l)
Note that directly dissolving SO3 in water is impractical due to the highly exothermic nature of the reaction: mists are formed instead of a liquid. Alternatively, the SO3 is absorbed into H2SO4 to produce oleum (H2S2O7), which is then diluted to form sulfuric acid:
- (4) H2SO4(l) + SO3(g) → H2S2O7(l)
The oleum is reacted with water to form concentrated H2SO4:
- (5) H2S2O7(l) + H2O(l) → 2 H2SO4(l)
In 1993, American production of sulfuric acid amounted to 36.4 million tonnes. World production in 2001 was 165 million tonnes.

Uses

Sulfuric acid is a very important commodity chemical; indeed, a nation's sulfuric acid production is a good indicator of its industrial strength. The major use (60% of total worldwide) for sulfuric acid is in the "wet method" for the production of phosphoric acid, used for the manufacture of phosphate fertilizers as well as trisodium phosphate for detergents. In this method phosphate rock is used, and more than 100 million tonnes are processed annually. The raw material is shown below as fluorapatite, though the exact composition may vary. It is treated with 93% sulfuric acid to produce calcium sulfate, hydrogen fluoride (HF), and phosphoric acid; the HF is removed as hydrofluoric acid. The overall process can be represented as:
- Ca5F(PO4)3 + 5 H2SO4 + 10 H2O → 5 CaSO4·2H2O + HF + 3 H3PO4
Sulfuric acid is used in large quantities in iron and steel making, principally as the pickling acid used to remove oxidation, rust, and scale from rolled sheet and billets prior to sale into the automobile and white-goods business. The used acid is often recycled in a Spent Acid Regeneration (SAR) plant. These plants combust the spent acid with natural gas, refinery gas, fuel oil, or another suitable fuel source. This combustion process produces gaseous sulfur dioxide (SO2) and sulfur trioxide (SO3), which are then used to manufacture "new" sulfuric acid. Such plants are common additions to metal smelting plants, oil refineries, and other places where sulfuric acid is consumed on a large scale, as operating a SAR plant is much cheaper than purchasing the commodity on the open market. Ammonium sulfate, an important nitrogen fertilizer, is most commonly produced as a by-product from coking plants supplying iron and steel making plants. Reacting the ammonia produced in the thermal decomposition of coal with waste sulfuric acid allows the ammonia to be crystallised out as a salt (often brown because of iron contamination) and sold into the agro-chemicals industry. Another important use for sulfuric acid is the manufacture of aluminium sulfate, also known as papermaker's alum. This can react with small amounts of soap on paper pulp fibres to give gelatinous aluminium carboxylates, which help to coagulate the pulp fibres into a hard paper surface. It is also used for making aluminium hydroxide, which is used at water treatment plants to filter out impurities, as well as to improve the taste of the water. Aluminium sulfate is made by reacting bauxite with sulfuric acid:
- Al2O3 + 3 H2SO4 → Al2(SO4)3 + 3 H2O
Sulfuric acid is used for a variety of other purposes in the chemical industry. For example, it is the usual acid catalyst for the conversion of cyclohexanone oxime to caprolactam, used for making nylon.
It is used for making hydrochloric acid from salt via the Mannheim process. Much H2SO4 is used in petroleum refining, for example as a catalyst for the reaction of isobutane with isobutylene to give isooctane, a compound that raises the octane rating of gasoline (petrol). Sulfuric acid is also important in the manufacture of dyestuffs. A mixture of sulfuric acid and water is sometimes used as the electrolyte in various types of lead-acid battery, where it undergoes a reversible reaction in which lead and lead dioxide are converted to lead(II) sulfate. Sulfuric acid is also the principal ingredient in some drain cleaners, used to clear blockages consisting of paper, rags, and other materials not easily dissolved by caustic solutions. Sulfuric acid is also used as a general dehydrating agent in its concentrated form (see Reaction with water above).

Sulfur-iodine cycle

The sulfur-iodine cycle is a series of thermochemical processes used to obtain hydrogen. It consists of three chemical reactions whose net reactant is water and whose net products are hydrogen and oxygen. The sulfur and iodine compounds are recovered and reused, hence the description of the process as a cycle. The process is endothermic and must occur at high temperatures, so energy in the form of heat has to be supplied. The sulfur-iodine cycle has been proposed as a way to supply hydrogen for a hydrogen-based economy. With an efficiency of around 50% it is more attractive than electrolysis, and it does not require hydrocarbons like current methods of steam reforming. Additionally, the sulfur-iodine cycle has a much lower maximum operating temperature than direct thermolysis of water. The sulfur-iodine cycle is currently being researched as a feasible method of obtaining hydrogen, but the concentrated, corrosive acid at high temperatures poses currently insurmountable safety hazards if the process were built at large scale.

Safety

Laboratory hazards

The corrosive properties of sulfuric acid are accentuated by its highly exothermic reaction with water. Burns from sulfuric acid are therefore potentially more serious than those from comparable strong acids (e.g. hydrochloric acid), as there is additional tissue damage due to dehydration and particularly due to the heat liberated by the reaction with water, i.e. secondary thermal damage. The danger is obviously greater with more concentrated preparations of sulfuric acid, but it should be remembered that even the normal laboratory "dilute" grade (approx. 1 M, 10%) will char paper by dehydration if left in contact for a sufficient length of time. The standard first aid treatment for acid spills on the skin is, as for other corrosive agents, irrigation with large quantities of water; in the case of sulfuric acid it is important that the acid be removed before washing, as a further heat burn could result from the exothermic dilution of the acid. Washing should be continued for a sufficient length of time (at least ten to fifteen minutes) in order to cool the tissue surrounding the acid burn and to prevent secondary damage. Contaminated clothing must be removed immediately and the underlying skin washed thoroughly. Preparation of the diluted acid can also be dangerous due to the heat released in the dilution process. It is essential that the concentrated acid is added to water and not the other way round, to take advantage of the relatively high heat capacity of water.
Addition of water to concentrated sulfuric acid leads at best to the dispersal of a sulfuric acid aerosol, at worst to an explosion. Preparation of solutions greater than 6 M (35%) in concentration is the most dangerous, as the heat produced can be sufficient to boil the diluted acid: efficient mechanical stirring and external cooling (e.g. an ice bath) are essential.

Industrial hazards

Although sulfuric acid is non-flammable, contact with metals in the event of a spillage can lead to the liberation of hydrogen gas. The dispersal of acid aerosols and gaseous sulfur dioxide is an additional hazard of fires involving sulfuric acid. Water should not be used as the extinguishing agent because of the risk of further dispersal of aerosols; carbon dioxide is preferred where possible. Sulfuric acid is not considered toxic beyond its obvious corrosive hazard, and the main occupational risks are skin contact leading to burns (see above) and the inhalation of aerosols. Exposure to aerosols at high concentrations leads to immediate and severe irritation of the eyes, respiratory tract, and mucous membranes; this ceases rapidly after exposure, although there is a risk of subsequent pulmonary edema if tissue damage has been more severe. At lower concentrations, the most commonly reported symptom of chronic exposure to sulfuric acid aerosols is erosion of the teeth, found in virtually all studies; indications of possible chronic damage to the respiratory tract were inconclusive as of 1997. In the United States, the permissible exposure limit (PEL) for sulfuric acid is fixed at 1 mg/m3; limits in other countries are similar. Interestingly, there have been reports of sulfuric acid ingestion leading to vitamin B12 deficiency with subacute combined degeneration. The spinal cord is most often affected in such cases, but the optic nerves may show demyelination, loss of axons, and gliosis.

In popular culture

In fiction

The use of sulfuric acid as a weapon in crimes of assault, known as "vitriol throwing", has at times been sufficiently common (if sensational) to make its way into novels and short stories. Examples include The Adventure of the Illustrious Client, by Arthur Conan Doyle, and The Love of Long Ago, by Guy de Maupassant. An episode of Saturday Night Live hosted by Mel Gibson included a parody Western sketch about "Sheriff Jeff Acid," who carries a flask of acid instead of a six-shooter. The DC Comics villain Two-Face was disfigured as a result of a vitriol attack. The novel Veronika Decides to Die by Paulo Coelho tells of a girl who attempts suicide and ends up with vitriol poisoning; the doctor/therapist in the novel also writes a thesis on curing vitriol poisoning.

In comic rhyme

Sulfuric acid is one of the few compounds whose chemical formula is well known to the general public, thanks to many comic rhymes, such as this one popular in the UK:
- Johnny was a chemist's son, but Johnny is no more.
- What Johnny thought was H2O was H2SO4.
In the U.S., a common variant is:
- Little Johnny took a drink, but he shall drink no more.
- For what he thought was H2O was H2SO4.
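As an aside, the "pH close to zero" figures quoted in the concentration list earlier can be checked numerically. The Python sketch below is not part of the original article (the language choice and function name are ours); it treats the first dissociation as complete and solves the second exactly, using the pKa2 of 1.99 from the data table at the top, and it ignores activity corrections, so it is only an estimate:

# Back-of-envelope pH of dilute sulfuric acid: assume the first
# dissociation is complete and solve the second equilibrium exactly.
import math

def h2so4_pH(c_total: float, pKa2: float = 1.99) -> float:
    """Approximate pH of a c_total mol/L H2SO4 solution."""
    Ka2 = 10.0 ** (-pKa2)
    # After full first dissociation: [H+] = c + x, [SO4 2-] = x,
    # [HSO4-] = c - x, where Ka2 = (c + x) * x / (c - x).
    # Rearranged: x^2 + (c + Ka2) * x - Ka2 * c = 0.
    c = c_total
    x = (-(c + Ka2) + math.sqrt((c + Ka2) ** 2 + 4.0 * Ka2 * c)) / 2.0
    return -math.log10(c + x)

for c in (0.50, 0.10, 0.01):
    print(f"{c:.2f} M H2SO4 -> pH ~ {h2so4_pH(c):.2f}")
# 0.50 M comes out near 0.29, i.e. "close to zero" as stated above.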
https://cs.mcgill.ca/~rwest/wikispeedia/wpcd/wp/s/Sulfuric_acid.htm
The term poly means many, so a polyatomic ion is an ion that contains more than one atom. This makes it different from a monatomic ion, which contains only one atom; examples of monatomic ions include Na+ and Cl−. This article gives the details of polyatomic ions along with examples.

Definition of Polyatomic Ions

Polyatomic ions are covalently bonded groups of atoms that carry a positive or negative charge, caused by the formation of an ionic bond with another ion. Compounds formed from such combinations of ions are polyatomic ionic compounds, but the polyatomic ion within them behaves as a single unit. Polyatomic ions and their ionic compounds take part in chemical reactions such as acid-base, precipitation, and displacement reactions just like monatomic metallic ions. They dissolve easily in water, conduct electricity, and dissociate in solution like other ions. Although they behave externally like monatomic ions, their internal structure is more complicated, because two or more atoms are present in the polyatomic ion. For example:
- NH4+ – ammonium
- CO32− – carbonate
- NO2− – nitrite
- HCO3− – hydrogen carbonate
- NO3− – nitrate
- ClO− – hypochlorite

Structure of Polyatomic Ions

Polyatomic ions can be compared with monatomic ions. Monatomic ions are atoms that have been ionized by gaining or losing electrons. The ion has a net charge because the total number of electrons is not balanced by the total number of protons in the nucleus: compared to the neutral atom, a negatively charged anion has extra electrons, while a positively charged cation has too few.

Polyatomic Ionic Compound: Sulfuric Acid

Many common chemicals are polyatomic compounds containing polyatomic ions. For example, sulfuric acid, H2SO4, contains H+ and the polyatomic sulfate ion, SO42−. The sulfur atom has six electrons in its outer shell, which it shares covalently with four oxygen atoms, each having six electrons in their own outer shells. Completing all four oxygen octets leaves a deficit of two electrons. In sulfuric acid, the sulfate group forms ionic bonds with two hydrogen atoms, which each donate an electron to become hydrogen ions, H+; the sulfate receives those two electrons to become SO42−.

Polyatomic Ion NH4+ or Ammonium

Most polyatomic ions contain oxygen and are negatively charged anions. Ammonium is one of the few positively charged polyatomic ions, and it contains no oxygen. Nitrogen has five electrons in its outermost shell and room for eight. When it shares electrons covalently with four hydrogen atoms, the hydrogens supply four electrons, one more than needed. When ammonium forms an ionic bond with an OH group, that extra electron transfers to complete the outermost shell of the OH, which needs two electrons but has only one from its own hydrogen atom. The electron from the NH4 transfers to the OH, creating an OH− ion and an NH4+ ion.

Examples of Common Polyatomic Ions

Many polyatomic ions have an electrical charge of -1:
- Acetate – C2H3O2−
- Bicarbonate (or hydrogen carbonate) – HCO3−
- Bisulfate (or hydrogen sulfate) – HSO4−
- Hypochlorite – ClO−
- Chlorate – ClO3−
- Chlorite – ClO2−
- Cyanide – CN−
- Hydroxide – OH−
- Nitrate – NO3−
- Nitrite – NO2−
Polyatomic ions with a -2 charge are also common:
- Carbonate – CO32−
- Chromate – CrO42−
- Peroxide – O22−
- Sulfate – SO42−
- Sulfite – SO32−
Other polyatomic ions form with a -3 charge; the borate and phosphate ions are the ones to memorize.
- Borate – BO33−
- Phosphate – PO43−

Solved Question for You

Q. How many protons and electrons are in a hydroxide ion?
Ans: We can calculate the total number of protons in a hydroxide ion by adding the number of protons in one hydrogen atom and one oxygen atom:
Total protons = protons in H + protons in O = 1 proton + 8 protons = 9 protons
In a neutral molecule, the numbers of protons and electrons are equal. Since hydroxide has a net -1 charge, there must be one extra electron compared to the number of protons. Hence, the hydroxide ion has nine protons and ten electrons.
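As a small illustration of how these charges combine (a sketch of our own, not from the article; the dictionary contents and helper function are illustrative), the ions listed above can be assembled into neutral formulas by crossing the charges:

# Combine one cation and one anion into a charge-neutral formula,
# using a few of the polyatomic ions listed above. Illustrative only.
from math import gcd

IONS = {
    "ammonium": ("NH4", +1),
    "hydroxide": ("OH", -1),
    "nitrate": ("NO3", -1),
    "carbonate": ("CO3", -2),
    "sulfate": ("SO4", -2),
    "phosphate": ("PO4", -3),
}

def neutral_formula(cation: str, anion: str) -> str:
    """Cross the charges so the total charge of the compound is zero."""
    (c_formula, c_charge), (a_formula, a_charge) = IONS[cation], IONS[anion]
    n_cation, n_anion = abs(a_charge), abs(c_charge)
    g = gcd(n_cation, n_anion)
    n_cation, n_anion = n_cation // g, n_anion // g

    def fmt(formula: str, n: int) -> str:
        return formula if n == 1 else f"({formula}){n}"

    return fmt(c_formula, n_cation) + fmt(a_formula, n_anion)

print(neutral_formula("ammonium", "phosphate"))  # (NH4)3PO4
print(neutral_formula("ammonium", "sulfate"))    # (NH4)2SO4
print(neutral_formula("ammonium", "hydroxide"))  # NH4OH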
https://www.toppr.com/guides/physics/atomic-and-molecular-structure/polyatomic-ion-definition-and-examples/
Today's fast-paced lifestyle can leave many of us feeling pretty stressed out sometimes. Between work, managing a home, and taking care of a family, we often have little time left over to take care of our own basic needs. Sometimes we may be so busy that we fail to notice that we are actually suffering physically and mentally as a result of stress. It is important to learn to identify stress and how to deal with it. Though not always enjoyable, stress is actually a necessary part of our daily lives. Stress is defined as anything that stimulates you to act, think, or react. Sometimes this stress may be as simple as your stomach growling at you to get some lunch; other times it may be as extreme as a threat that forces you to escape from your home or office. Whatever its source, stress is necessary to push us to accomplish certain tasks. Without stress, our bodies wouldn't react at all, even in times of extreme danger. In order to manage your stress appropriately, it is necessary to understand the difference between good stress and bad stress. Good Stress: Good stress helps us to go about our daily tasks and achieve those hard-to-reach goals. This type of stress, called eustress, helps us to learn new things, adapt to change, and engage in creative thinking. Everyone experiences good stress on a daily basis. Another form of stress that is also good is the stress that enables us to survive in times of distress. This stress makes us aware of danger and enables us to escape when we need to. Bad Stress: Bad forms of stress do not help us to achieve goals or tasks, but instead actually inhibit our ability to function on a daily basis. Bad stress occurs when too much stress begins to build up around us. Once the body feels that there is too much stress, it will begin to break down, causing symptoms like perspiration, anxiety, headaches, and rapid breathing. This kind of stress can take a huge toll on your physical and mental wellbeing. Internal Factors: When stress is created by negative thoughts, worries, or feelings that come from inside you, it is described as being caused by internal factors. Low self-esteem, constant and unsubstantiated worrying, and fear of change can all be sources of major stress. Environmental Factors: All of the things going on around you can contribute to your stress level. Whether it be a messy office, a fight with your boss, or your living conditions at home, these factors are common causes of stress. Fatigue and Overwork: We have all been overworked and overtired at some point in our lives. When we are persistently tired, undernourished, or unhealthy, stress can really begin to add up. If you can learn to reduce the amount of bad stress in your life, you will be able to enjoy life much more. It will increase your energy, alleviate depression, and bring back your zest for life. Here are some great stress management techniques that you can do at home to benefit your health and wellness and provide you with that much-needed stress relief. Though they can take some practice to perfect, once you learn how to perform these techniques, the benefits can be endless. Mindfulness is a form of meditation that encourages you to be aware of your surroundings. Instead of getting caught up in the one thing that is causing you stress, mindfulness teaches you how to look at the whole picture and enjoy life for all its simple pleasures. In mindfulness meditation, you take on the role of observer.
Be aware of all that is around you (sights, smells, and sounds), but don't focus on any one thing. Instead, focus on embracing the environment as it is at that very moment. Mindfulness meditation is an excellent technique that allows you to distract yourself from stressful situations, promoting relaxation and health. Exercise is a tried and tested technique for stress relief. Exercise, especially cardiovascular exercise, helps to moderate your emotions. When you exercise, your body releases endorphins, which are special chemicals that help to numb your pain and boost your mood. It also leaves you feeling ready for a great night's sleep! Try exercising 3 times a week for 30 minutes a day in order to control your stress. Stress can take its toll on the body as well as the mind. Stress causes our muscles to tighten up and become stiff. Progressive muscle relaxation is designed to release this muscle tension and relax the entire body. It also helps to lower your pulse rate, reduce blood pressure, and minimize perspiration. Lie down on the floor, on your bed, or in another comfortable place and breathe in deeply. Begin to tense the muscles in your body one at a time, starting with your feet. Hold each muscle tight for a few seconds, and then relax. Work your way up to your head. By the time you get there, your whole body (and your mind) will be relaxed. Coping with stress can be very difficult at times. Deep breathing, formally known as diaphragmatic breathing, is a very popular stress reduction technique. It's also really easy to do and can be done in any quiet spot. Begin by sitting comfortably in a secluded area. Take in a deep breath through your nose, counting from 1 to 4 as you breathe in. Exhale through your mouth as you count down from 4 to 1. Repeat this breathing 20 or 30 times. Deep breathing is particularly effective at reducing stress because it increases oxygen levels in the body, which has a natural, calming effect. We have all done visualization at some point in our lives, usually in the middle of winter when we imagine we are actually lounging on a warm, sandy beach. Visualization allows us to remove ourselves from reality for a short period of time, providing us with rest and relaxation. To practice visualization, all you need to do is sit or lie down in a quiet spot. Get comfortable and then close your eyes. Visualize a scene or place that is filled with happiness and serenity - it could be a placid lake or it might be your childhood home. Focus on this image and try to imagine that you are actually there. Keep focusing until you can actually feel, see, and hear all the elements of that scene. Visualization eliminates stress by reducing anxiety and calming the entire body. Just having someone to talk to can be a great buffer against getting too stressed out. But what if the people around you are one of your sources of stress? Consider a few sessions with a trained professional. Meeting with any type of counselor or therapist can substantially reduce feelings of stress - often after only one visit. And many therapists are trained in teaching clients relaxation techniques as well as social skills, priority setting, and stress management. Often assumed to just relieve physical discomfort, massage therapy is a great way to relax your mind as well. Stress can induce a number of physical discomforts, including tense muscles and knots in the shoulders and neck. Through different massage techniques, a therapist is able to loosen up those sore muscles, thereby helping to relieve body pain.
However, with this newly relaxed body, your mood also tends to improve, and many people report feeling calmer after a massage. The benefits of massage for your physical and mental health are so great that many insurance programs nowadays will cover the cost of a massage performed by a registered massage therapist.
http://www.epigee.org/mental_health/stressrelief.html
Ian Campbell. Arabic Science Fiction. Palgrave, 2018. Studies in Global Science Fiction. Hardcover, 322 pages, $89.99. ISBN 9783319914329. Editor’s note: Ian Campbell is an editor of SFRA Review. I confirm as editor that he has had no involvement in the preparation of this review for publication. SF scholars who are interested in how SF in Arabic may differ from or critique Anglophone SF may at first wonder why Ian Campbell has such a sustained emphasis on Darko Suvin throughout Arabic Science Fiction. Suvin certainly is a formative figure in genre theory discussions about science fiction, although he is not quite as in vogue in contemporary science fiction studies as he once was. Nonetheless, Campbell sees Suvin’s conception of cognitive estrangement as significant for understanding Arabic SF and for Arabic-language SF scholars. As a result, Campbell’s project is an examination of the manifestations of cognitive estrangement in Arabic Science Fiction (ASF), and one of his central arguments builds off of Suvin directly. Campbell presents his conception of ASF as working off “double estrangement,” which reflects the “total lack of legal protections for freedom of expression in the modern Arabic world” (6-7) and that consequently “Arab writers in all genres, especially the canonical literary fiction to which ASF aspires, have learned to conceal their critique under layers of story in order to provide plausible deniability in the face of scrutiny by the regime” (7). ASF aims toward social criticism in order to be taken seriously as art. The “double” in “double estrangement” deals with the perception of science and technology; that is, ASF “draws attention to the drop-off in scientific and technological innovation in the Arab world since the glory days of Arab/Muslim dominance” (10). ASF stories may critique the state from a post-colonial perspective, but they critique the culture for reliance on mysticism. Campbell presents this concept as a way of signaling that readers may struggle to understand the intended critique of ASF works due to the works’ critiquing multiple vectors of society simultaneously, so that there may not be one central point but several. Likewise, ASF will not tend to have analogues to Golden Age SF works, given the differences in production and audience. The book is divided into eleven chapters. The introduction sets up the considerations of Suvin and “Double Estrangement” that shape the rest of the volume. Chapter 2, “Postcolonial Literature and Arabic SF,” outlines why ASF may be understood as “manifestly a postcolonial literature: it is produced in formerly colonized states, for readers in and from these states” (21) and thus is distinct from many works of postcolonial literature written in English by authors living in diaspora. In chapter 3, “Arabic SF: Definitions and Origins,” Campbell draws from Ada Barbaro’s work to discuss four genres of classical Arabic literature that serve as proto-SF: philosophical works that use voyages to pose arguments, adventure voyages, the utopian tradition, and mirabilia, which is a genre that focuses on real or imaginary places or events that challenge human understanding. Chapter 4, “Criticism and Theory of Arabic SF,” tries to establish a coherent framework for the relatively minimal amount of Arabic SF criticism. 
Partly this involves dealing with the issue of diglossia, the consequence of which is that most ASF, since it is written in the Modern Standard Arabic used for literature, is rendered “the nearly exclusive province of a small class of highly educated people” (79). That is, instead of being built on a pulp background, Arabic SF has as its audience primarily an educated and elite audience. These first four chapters do a great job of setting up the myriad ways in which ASF operates in an entirely different rhetorical and literary situation from commercial western SF. The remaining chapters each focus on case studies. As has been the case throughout Campbell’s study, for several of these works, there is no English translation. This makes Campbell’s study essential for the scholar but somewhat less accessible for a teacher who might be thinking about texts to include in a syllabus. Chapter 5 returns to the central concept of “double estrangement” regarding Egyptian author Nihād Sharīf’s The Conqueror of Time (1972). It is a political allegory that also estranges “Egyptian society as stagnant, figuratively frozen in its obsession with the past” (119) through the novum of cryogenics. Chapter 6 focuses on two novels by Egyptian scholar Mustafā Mahmūd and the exploitation of the peasantry by urban elites. Unfortunately, even Mahmūd’s The Spider (1965), which is regarded as the first ASF novel, is hard to get access to in Western markets. Chapter 7 presents Ṣabrī Mūsā’s The Gentleman from the Spinach Field (1987) as comparable to Aldous Huxley’s Brave New World (1932), in depicting a character trying to escape a dystopian reality and failing to find a sustainable alternative. Chapter 8 discusses Aḥmad ‘Abd al-Salām al-Baqqāli’s The Blue Flood (1976). Campbell argues that al-Baqqāli uses some of the same themes as Mahmūd, but places Western culture as an additional point of view, allowing him to critique reformers “for their inability or refusal to question their patriarchal assumptions” (219). Chapter 9 focuses on Ṭālib ‘Umrān’s Beyond the Veil of Time (1985), which, unlike many of the aforementioned works of science fiction, takes place on a foreign planet. Here Campbell argues that although the novel is superficially trite, it works as particularly effective estrangement for the educated elite readership of ASF, especially their belief that an alternative to despotism can emerge without violence. Chapter 10 focuses on a three-novel series by Kuwaiti author Tība ‘Ahmad Ibrāhīm, characterized by Campbell as the only notable female writer of ASF before the 2000s. Campbell argues that Ibrāhīm’s novels serve to show a transition in ASF, where narratives about the effect of technology, modernity, and colonialism do not need to be “cordoned off from everyday life” (278); that is, ASF is starting to become slightly more direct. For the scholar, Campbell’s study does an excellent job of exploring how works of ASF from a range of different countries (Kuwait, Egypt, Syria) have approached the literary demands and political risks of writing speculative fiction meant to critique the existing regimes and cultural programs. The primary frustration for the reader is likely not to be with Campbell’s analysis, but with the reality that many of these novels will remain largely inaccessible to the west. Nonetheless, scholars who want to understand the specific challenges of the emergence of science fiction in postcolonial settings would do well to explore Campbell’s volume.
https://sfrareview.org/2020/07/10/50-1-rnf5holmes/
We spent 3 weeks touring Tuscany in November/December. It's a magnificent region where you can visit mysterious medieval cities and tiny Etruscan hill towns, all located within driving distance of one another. The food, the wine, and the magnificent views make it hard to ever leave. We took a train from Rome to Florence and headed off to stay for a few days in Montecatini Terme, a spa town located in the heart of northern Tuscany between Florence and Pisa. While we were there we went to one of its famous mineral spas. We also visited several old cities and hill towns including Lucca, Montecarlo di Lucca, and Montecatini Alto. Then we drove south to Siena, stopping at Castellina in Chianti and Montepulciano. We spent 2 nights in Siena, which was magnificent, and then drove to Sorrento where we stayed for five nights. We booked a private tour that took us on a day trip through the Amalfi Coast and Pompeii. We then headed back to Rome to board the Cunard Queen Elizabeth at the ancient port of Civitavecchia. Our first port was Livorno, back in Tuscany, where we boarded an excursion bus that took us on a too-quick tour of Florence and Pisa. Tuscany is an amazing place to visit, and I especially loved exploring the hill towns. We were traveling during winter, so most of the towns were quiet and relaxed with almost no tourists. There's nothing better than sitting on top of a beautiful hill overlooking vast vineyards, sipping a glass of Chianti and nibbling on freshly baked bread dipped in truffle oil. Scroll down to see my photo galleries of the cities and hill towns we visited in Tuscany. Lucca is a city on the Serchio river in Italy's Tuscany region. It's renowned for the well-preserved Renaissance walls encircling its historic city center and its cobblestone streets. Montecarlo di Lucca is a hill town in the Province of Lucca in the Italian region of Tuscany. Montecatini Alto is a hill town located above Montecatini Terme at the eastern end of northern Tuscany between Florence and Pisa. During high season, you can take a funicular up to the old city. Siena, a large and beautiful city in Italy's central Tuscany, is distinguished by its medieval brick buildings. Castellina in Chianti is a hill town near Siena; the city's origins go back to Etruscan times, and many artifacts from that period can be found here. Montepulciano is a medieval and Renaissance hill town located in southern Tuscany. Florence, the capital of Italy's Tuscany region and birthplace of the Renaissance, is home to masterpieces of art and architecture. Pisa is a city of 89,523 residents but is best known for its Square of Miracles, consisting of the Cathedral, the Baptistery, and the Bell Tower, the Leaning Tower of Pisa. Both the Cathedral and Baptistery are beautiful and hold priceless artwork and artifacts. Booking.com – Find a place to stay in Florence. Skyscanner – Find the most affordable flight to Tuscany on a search engine you can trust. Tour Tuscany – Book a tour on Viator to see the amazing sites of Tuscany.
They brought back good memories! Yes, Italy is delicious in many ways besides just the food. Glad to stir up some memories for you. Our son studied in Florence his junior year at UCLA. We had a ball discovering Italy through the eyes of a resident. I loved your photos. The architecture of Tuscany is so magical, isn’t it? I bet that was so much fun! My boyfriend’s daughter studied in Florence with Chapman University as well, and he visited her while she was there, but that was before we lived together. I wish I’d had more time in Florence. Cruise ship excursions are way too quick.
https://www.babyboomster.com/italy-travel-photo-gallery-part-2-tuscany/
A new concept of space that allows for a new way to enjoy and experience a boutique hotel in the heart of Salvador, Bahia. The Cocoon Hotel's philosophy is to combine minimalism and futurism with the marvels of Bahian culture. Futuristic materials merge with traditional cultural elements, creating a fusion that strikes a perfect balance between architecture and nature, modern and traditional, the future and the past.
http://walpax.com.br/hotels_result.php?destination=149&pg=hotel
What is Trichoderma Mycorrhiza? Mycorrhiza is the name given to specialized forms of fungi that grow on plants' roots and spread deep within the soil. The fungal filaments in the soil act as an extension of the root system and are far more effective at absorbing water and nutrients than the roots themselves. Over ninety percent of all plant species in natural areas of the world form relationships with mycorrhizal fungi, and these relationships can enhance plant performance. The fungi increase the absorptive surface area of the roots, significantly improving the plant's ability to gain access to the soil's resources. They can also increase nutrient uptake by releasing powerful enzymes into the soil that dissolve hard-to-capture nutrients, such as iron, organic nitrogen, phosphorus, and other tightly bound soil nutrients. The bad news is that these fungi are very slow to colonize. In areas that have been disrupted by common practices, it may be necessary to reintroduce the fungi in order to dramatically improve plant performance. Trichoderma are colonies of mold fungi which begin with a transparent appearance, slowly turn yellowish or white, and then darken to green or grey as they mature and produce spores. Trichoderma are important because the molds develop a symbiotic relationship with plants, growing on their roots while suppressing competing fungi and providing significant benefits to the plant.
https://plantrevolution.com/blogs/news/what-is-trichoderma-mycorrhiza
The p-value, also referred to as the calculated probability, is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. What counts as 'extreme' depends on the testing procedure. The p-value is often described in terms of the chance of rejecting an H0 that is actually true, but it is not a direct probability of that event.

The statistical significance
In order to interpret the outcome, it is necessary to compare the p-value with alpha, the significance level. There are two possibilities that can be observed:
1. The p-value is less than or equal to alpha. In this case, the null hypothesis is rejected and the result is said to be statistically significant: chance alone is an unlikely explanation for the observed sample.
2. The p-value is greater than alpha. In this case, the null hypothesis is not rejected and the result is not statistically significant: the observed data can plausibly be explained by chance.

Examples to support the statement
Suppose a pizza shop claims that its average delivery time is 30 minutes or less, but you believe it is more than the company says. You therefore plan to conduct a hypothesis test. The null hypothesis (H0) is that the mean delivery time is at most 30 minutes, which you believe is incorrect. The alternative hypothesis (Ha) is that the mean delivery time is more than 30 minutes. You collect a random sample of delivery times and perform the test. If the p-value turns out to be 0.001, it is well below the significance level of 0.05, so you reject the null hypothesis: there is strong evidence that delivery actually takes longer than 30 minutes on average.
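To make the arithmetic concrete, here is a minimal sketch in C that runs the pizza-shop test as a one-sided z-test. The sample mean, standard deviation, and sample size below are invented for illustration, and the sketch assumes the population standard deviation is known (with an estimated standard deviation, a t-test would be the proper choice).

#include <math.h>
#include <stdio.h>

/* One-sided z-test for the pizza-delivery example. H0: the mean
 * delivery time is at most 30 minutes; Ha: it is greater. The
 * sample numbers below are invented purely for illustration, and
 * the population standard deviation is assumed known. */
int main(void) {
    const double mu0   = 30.0;  /* hypothesized mean (minutes)      */
    const double xbar  = 33.5;  /* observed sample mean (invented)  */
    const double sigma = 6.0;   /* assumed known standard deviation */
    const double n     = 40.0;  /* sample size                      */
    const double alpha = 0.05;  /* significance level               */

    /* Standardize the sample mean under H0. */
    double z = (xbar - mu0) / (sigma / sqrt(n));

    /* Upper-tail probability P(Z >= z) via the complementary
     * error function: P(Z >= z) = 0.5 * erfc(z / sqrt(2)). */
    double p = 0.5 * erfc(z / sqrt(2.0));

    printf("z = %.3f, p-value = %.4f\n", z, p);
    if (p <= alpha)
        printf("Reject H0: evidence the mean delivery time exceeds 30 minutes.\n");
    else
        printf("Fail to reject H0: the data are consistent with 30 minutes or less.\n");
    return 0;
}

Compiled with a C compiler and linked against the math library (for example, cc ztest.c -lm), this prints z of about 3.69 and a p-value of about 0.0001, which is below alpha = 0.05, so the null hypothesis is rejected.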
https://myhomeworkhelp.com/how-would-you-explain-the-concept-of-p-value-to-a-five-year-old/
Transforming marginalized communities so that all people have opportunities to thrive.

Our Vision
A world where everyone is empowered to be all that they are created to be, regardless of the zip code in which they happen to live. Alliance seeks to eliminate inequalities in all 254 counties throughout the great state of Texas and beyond. Over the last 20 years: We have worked to strengthen more than 25,000 individual leaders and organizations to transform marginalized communities. We have led or supported 25 collaboratives or collective impact projects, because we believe that by working collaboratively, we can tackle the root causes of economic and health disparities. Collaborating with Cross-Sector Leaders to Enact Change.

Our History
Founded in 2001 as the Faith and Philanthropy Institute with the goal of preparing, training, and equipping faith-based leaders and organizations to effect positive, measurable change in their communities. A shift in strategy in 2014 initiated the rebrand to Alliance for Greater Works. This new name was chosen to reflect our priority of unlocking the power and potential of investors, leaders, and organizations "working together" to transform marginalized communities. By working together, we achieve "greater works." We are champions for those who need championing and defenders for those who need defending. We are passionate leaders dedicated to fulfilling our mission. Join a team of passionate professionals working to tackle the root causes of economic and health disparities.

Organizational Values
Servant Leadership: We value a Biblical view of leadership that seeks the welfare of others, engages the gifts of all, heals divisions, and accomplishes shared goals.
Relational: We seek to build strong, long-term relationships that are grounded in trust and respect, and to foster those same behaviors in others.
Collective Voice: We are committed to diverse people and sectors working together to achieve greater impact and solve social ills.
Excellence: We are committed to quality service and work results.
Community Transformation: We seek the physical, social, and spiritual transformation of people and places, focusing on marginalized communities in Texas.
https://allianceforgreaterworks.org/about/
A new study published in Evolutionary Behavioral Sciences provides further evidence that more intelligent people are more likely to prefer instrumental music. "I first became interested in this topic while working on a project looking into the connection between personality traits and musical preferences. At the time, I was reading evolutionary psychology and became acquainted with Satoshi Kanazawa's Savanna-IQ Interaction Hypothesis," said study author Elena Racevska, a Ph.D. student at Oxford Brookes University. According to the hypothesis, intelligence evolved as a way to deal with new and unusual things, resulting in more intelligent individuals having a greater preference for novel stimuli than less intelligent individuals. "After reading Kanazawa's papers, one of which was on the relationship between intelligence and musical preferences, we decided to further test his hypothesis using a different set of predictors, namely a different form of intelligence test (i.e., a nonverbal measure) and the uses-of-music questionnaire," Racevska explained. "We also measured several variables likely to affect this relationship, such as taking part in extracurricular music education, its type, and its duration." The study of 467 Croatian high school students found that higher scores on the intelligence test were related to a preference for instrumental music, including ambient/chill-out electronica, big band jazz, and classical music. "From the perspective of evolutionary psychology, intelligence can only predict differences in the preference for instrumental music. Individuals with higher intelligence test scores are more likely to prefer predominantly instrumental music styles. However, there are no differences in the preference for predominantly vocal or vocal-instrumental music that can be predicted by intelligence test scores," Racevska told PsyPost. The researchers also found that participants used different genres of music for different reasons. For instance, those who reported using music cognitively, such as finding enjoyment in analyzing compositions or admiring musical technique, tended to be keener on instrumental music. But the study, like all research, has some limitations. "Intelligence is only one of the constructs linked to musical preferences; there are numerous others, such as personality traits, gender, age, level of education, and family income," Racevska said. "Future research should focus on untangling the roles of complexity and novelty in shaping preferences. Complexity of vocalization is favored by many species, which may suggest that it is evolutionarily ancient." "It would also be valuable to conduct a longitudinal study of how musical preferences change during the developmental stages of human life, and how they interact with various social and personal variables, such as societal pressures and peer relationships. A cross-cultural study could examine and control for the influences of culturally specific ways of experiencing music, and other music-related behaviors," Racevska added.

This is a list of some of the world's music genres and their definitions.

African Folk – Music held to be typical of a nation or ethnic group, known to all segments of its society, and preserved usually by oral tradition.
Afro-jazz – Jazz music that has been heavily influenced by African music. The music took elements of marabi, swing, and American jazz and synthesized them into a unique fusion. The first band to achieve this synthesis was the South African band Jazz Maniacs.
Afro-beat – A combination of Yoruba music, jazz, Highlife, and funk rhythms, fused with African percussion and vocal styles, popularized in Africa in the 1970s.
Afro-Pop – Afropop or Afro Pop is a term sometimes used to refer to contemporary African pop music. The term does not refer to a specific style or sound but is used as a general term for African popular music.
Apala – A percussion-based style originally derived from the Yoruba people of Nigeria. It developed in the late 1930s, when it was used to wake worshippers after fasting during the Islamic holy month of Ramadan.
Assiko – A popular dance from the south of Cameroon. The band is usually based on a singer accompanied by a guitar and a percussionist playing the pulsating Assiko rhythm with metal knives and forks on an empty bottle.
Batuque – A music and dance genre from Cape Verde.
https://wmmks.com/more-wise-individuals-are-more-likely-to-enjoy-instrumental-music/
In addition to differentiated lesson planning, Homeroom and Learning Support teachers work together to develop and implement individualized learning strategies for identified students. Withdrawing students for individualized instruction outside the classroom is supplemental and is used to help identified students develop the specific skills required to meet grade-level standards.

ENGLISH LANGUAGE LEARNING
The English Language Learning (ELL) program is designed to provide academic and social language support for non-native English speakers in the Elementary School, from Grade Two to Grade Five. The primary purpose of the program is to ensure that all students become proficient in English and achieve personal academic success. The ELL program addresses individualized needs and learning styles through one-on-one and small-group instruction (pull-out) and in-class (push-in) support, in consultation with the Homeroom teacher.

COUNSELING
The Elementary Developmental Counseling Program promotes and enhances the development of the whole student, with significant focus on the pastoral care component of counseling, so that all students are supported in realizing academic, personal, social, and emotional success during their time in the Elementary School.
https://www.eac.com.br/learning/elementary-school/student-support-and-success
Physicists and computer scientists have long speculated whether the NSA's efforts are more advanced than those of the best civilian labs. Although the full extent of the agency's research remains unknown, the documents provided by Snowden suggest the NSA is no closer to success than others in the scientific community. [Photo: Former NSA contractor Edward Snowden. Credit: Washington Post] "It seems improbable that the NSA could be that far ahead of the open world without anybody knowing it," said Scott Aaronson, an associate professor of electrical engineering and computer science at the Massachusetts Institute of Technology (MIT). The NSA appears to regard itself as running neck and neck with quantum computing labs sponsored by the European Union and the Swiss government, with steady progress but little prospect of an immediate breakthrough. "The geographic scope has narrowed from a global effort to a discrete focus on the European Union (EU) and Switzerland," one NSA document states. Seth Lloyd, professor of quantum mechanical engineering at MIT, said the NSA's focus is not misplaced. "The EU and Switzerland have made significant advances over the last decade and have caught up to the US in quantum computing technology," he said. The NSA declined to comment for this story. The documents, however, indicate that the agency carries out some of its research in large, shielded rooms known as Faraday cages, which are designed to prevent electromagnetic energy from coming in or out. Those, according to one brief description, are required "to keep delicate quantum computing experiments running". The basic principle underlying quantum computing is known as "quantum superposition", the idea that an object simultaneously exists in all states. A classical computer uses binary bits, which are either zeroes or ones. A quantum computer uses quantum bits, or qubits, which are simultaneously zero and one. This seeming impossibility is part of the mystery that lies at the heart of quantum theory, which even theoretical physicists say no one completely understands. "If you think you understand quantum mechanics, you don't understand quantum mechanics," said the late Nobel laureate Richard Feynman, who is widely regarded as a pioneer in quantum computing. Here's how it works, in theory: while a classical computer, however fast, must do one calculation at a time, a quantum computer can sometimes avoid having to make calculations that are unnecessary to solving a problem. That allows it to home in on the correct answer much more quickly and efficiently. Quantum computing is so difficult to attain because of the fragile nature of such computers. In theory, the building blocks of such a computer might include individual atoms, photons or electrons. To maintain the quantum nature of the computer, these particles would need to be carefully isolated from their external environments. "Quantum computers are extremely delicate, so if you don't protect them from their environment, then the computation will be useless," said Daniel Lidar, a professor of electrical engineering and the director of the Centre for Quantum Information Science and Technology at the University of Southern California. A working quantum computer would open the door to easily breaking the strongest encryption tools in use today, including a standard known as RSA, named for the initials of its creators. RSA scrambles communications, making them unreadable to anyone but the intended recipient, without requiring the use of a shared password.
It is commonly used in web browsers to secure financial transactions and in encrypted emails. RSA is used because of the difficulty of factoring the product of two large prime numbers. Breaking the encryption involves finding those two numbers. This cannot be done in a reasonable amount of time on a classical computer. In 2009, computer scientists using classical methods were able to discover the primes within a 768-bit number, but it took almost two years and hundreds of computers to factor it. The scientists estimated it would take 1000 times longer to break a 1024-bit encryption key, which is commonly used for online transactions. A large-scale quantum computer, however, could theoretically break a 1024-bit encryption much faster. Some leading internet companies are moving to 2048-bit keys, but even those are thought to be vulnerable to rapid decryption with a quantum computer. Quantum computers have many applications for today's scientific community, including the creation of artificial intelligence. But the NSA fears the implications for national security. "The application of quantum technologies to encryption algorithms threatens to dramatically impact the US government's ability to both protect its communications and eavesdrop on the communications of foreign governments," according to an internal document provided by Snowden. Experts are not sure how feasible a quantum computer is in the near future. A decade ago, some experts said developing a large quantum computer was likely 10 to 100 years in the future. Five years ago, Lloyd said the goal was at least 10 years away. Last year, Jeff Forshaw, a professor at the University of Manchester, told Britain's Guardian newspaper, "It is probably too soon to speculate on when the first full-scale quantum computer will be built but recent progress indicates that there is every reason to be optimistic." "I don't think we're likely to have the type of quantum computer the NSA wants within at least five years, in the absence of a significant breakthrough maybe much longer," Lloyd said in a recent interview. However, some companies claim to already be producing small quantum computers. A Canadian company, D-Wave Systems, says it has been making quantum computers since 2009. In 2012, it sold a $US10 million version to Google, NASA and the Universities Space Research Association, according to reports. That quantum computer, however, would never be useful for breaking public key encryption such as RSA. "Even if everything they're claiming is correct, that computer, by its design, cannot run Shor's algorithm," said Matthew Green, a research professor at the Johns Hopkins Information Security Institute, referring to the algorithm that could be used to break encryption such as RSA. Experts believe one of the largest hurdles to breaking encryption with a quantum computer is building a computer with enough qubits, which is difficult given the very fragile state of quantum computers. By the end of September, the NSA expected to be able to have some basic building blocks, which it described in a document as "dynamical decoupling and complete quantum control on two semiconductor qubits". "That's a great step, but it's a pretty small step on the road to building a large-scale quantum computer," Lloyd said. A quantum computer capable of breaking cryptography would need hundreds or thousands more qubits than that. 
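To make the factoring bottleneck concrete, here is a small, self-contained sketch in C; the 64-bit modulus below is invented for the demonstration and is only a toy stand-in for the 1024- and 2048-bit moduli discussed above. Trial division recovers the two primes of this toy number in seconds, but the same brute-force search is hopeless at real key sizes, and that gap is exactly what Shor's algorithm on a large quantum computer would erase.

#include <stdio.h>
#include <stdint.h>

/* Toy illustration of the factoring problem behind RSA: recover two
 * primes from their published product by brute-force trial division.
 * The 64-bit modulus below is invented for the demo; real RSA moduli
 * are 1024 bits or more, far beyond any exhaustive search. */
int main(void) {
    uint64_t n = 1000000016000000063ULL; /* = 1000000007 * 1000000009 */

    if (n % 2 == 0) { /* handle even moduli before the odd-only loop */
        printf("2 is a factor\n");
        return 0;
    }
    for (uint64_t p = 3; p * p <= n; p += 2) {
        if (n % p == 0) { /* found the smaller prime factor */
            printf("%llu = %llu * %llu\n",
                   (unsigned long long)n,
                   (unsigned long long)p,
                   (unsigned long long)(n / p));
            return 0;
        }
    }
    printf("%llu is prime\n", (unsigned long long)n);
    return 0;
}

The loop runs on the order of the square root of n, so every two extra bits in the modulus roughly double the running time, which is why 1024-bit RSA is out of classical reach even though this 60-bit toy falls in seconds, and why the 768-bit factorization mentioned above took hundreds of computers almost two years using far more sophisticated methods.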
The budget for the National Intelligence Program, commonly referred to as the "black budget", details the "Penetrating Hard Targets" project and notes that this step "will enable initial scaling towards large systems in related and follow-on efforts". Another project, called "Owning the Net", is using quantum research to support the creation of new quantum-based attacks on encryption such as RSA, documents show. "The irony of quantum computing," Lidar said, "is that if you can imagine someone building a quantum computer that can break encryption a few decades into the future, then you need to be worried right now." Washington Post
A computer program goes through many phases from its development to execution, from the human-readable format (source code) to binary-encoded computer instructions (machine code). In this section, I will explain the different phases of a program during its entire lifespan.

Source code
Source code is a plain text file containing computer instructions written in a human-readable format. It is a simple text file written by programmers. It contains instructions, in a high-level language, describing what the programmer intends the program to perform. Source code is later compiled and translated to object code.

Object code
Object code is a sequence of computer instructions in an intermediate language. It is generated by the compiler during the compilation process. The compiler reads source code written in a high-level language and translates it to an intermediate language. After translation, a file containing instructions encoded in the intermediate language is generated, called object code. Note: the intermediate language may or may not be machine language. Despite being in binary form, object code cannot execute on its own, as it lacks the program's main entry point. Object files are further linked together by a linker to generate the final executable file.

Machine code
Machine code is a set of computer instructions written or translated in machine language. It is the final executable file generated by compiling, assembling, and linking several object files together. It is the only code executed by the CPU. Machine code and object code are both encoded in machine language and may seem similar in nature. However, you can directly execute machine code, whereas object code cannot execute on its own. Machine code is the result of linking several object files together, whereas object code is the result of translating the source code of a single module or program into machine language. Machine code always contains an entry point to the program, while object code does not.
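As a concrete illustration, here is a minimal C program annotated with the commands that carry it through the stages described above; gcc is assumed as the compiler, and the file names are arbitrary.

/* hello.c -- source code: human-readable instructions in C.
 *
 * Stage 1 -- compile/assemble: translate the source into an object file.
 *     gcc -c hello.c -o hello.o
 * hello.o holds binary instructions but cannot run on its own: it has
 * an unresolved reference (printf) and no final entry point.
 *
 * Stage 2 -- link: the linker combines object files and libraries
 * into an executable with an entry point the OS can jump to.
 *     gcc hello.o -o hello
 *
 * Stage 3 -- execute: the CPU runs the resulting machine code.
 *     ./hello
 */
#include <stdio.h>

int main(void) {   /* main() marks the program's entry point */
    printf("Hello, world\n");
    return 0;
}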
https://codeforwin.org/2017/05/life-cycle-computer-program.html
Introduce your project by capturing the reader’s interest in the first paragraph. Discuss the problem background and why you decided to develop your project. What’s wrong with the traditional method?

B. Purpose and Description
Provide a short description of the project being specified and its purpose, including relevant benefits (or beneficiaries). What is your main purpose in doing the project? Who are your target clients, end users, or beneficiaries of the project?

C. Objectives
Detailed statements or elaboration of the project goal, clearly stated and logically presented. Present the sub-objectives in a logical sequence from factual to analytical, along mutually exclusive dimensions (no overlaps), excluding the overview, expected conclusions, implications, and recommendations of the project. Specific objectives should be SMART: Specific, Measurable, Achievable, Realistic, and Time-bounded.

D. Scope and Limitations
Discuss here the boundaries of the study and the parts the researchers do not intend to accomplish (or what the design of the study inherently will not allow). Describe any global limitations or constraints that have a significant impact on the design of the system/software (and describe the associated impact). Describe any items or issues that will limit the options available to the developers. These might include: corporate or regulatory policies; hardware limitations (timing requirements, memory requirements); interfaces to other applications; specific technologies, tools, and databases to be used; parallel operations; language requirements; communications protocols; security considerations; design conventions or programming standards. Limitations that are not readily apparent at the start of the research project may develop or become apparent as the study progresses. In any case, limitations should not be considered alibis or excuses; they are simply factors or conditions that help the reader get a truer sense of what the study results mean and how widely they can be generalized. While all projects have some inherent limitations, you should address only those that may have a significant effect on your particular study.

II. DESIGN AND METHODOLOGY

A. Development Model
This may include the following models: conventional waterfall, incremental, throw-away prototyping, evolutionary prototyping, or any other model most appropriate to the kind of research project being undertaken.

B. Development Approach
This may include either a top-down or a bottom-up approach to development.

C. Schedule and Timeline
This may contain a Gantt chart, activity chart, critical path analysis, and other scheduling techniques that list the activities to be done in order to achieve the objective. Usually it includes the phases and sub-phases of the systems development life cycle.

D. Project Teams and Responsibilities
This should contain the assignments of modules and activities to be done by each team member.

E. Systems Analysis and Design
Systems analysis focuses on the system requirements description; it defines the system's functional requirements and the requirement specification of the proposed system. System design provides the technical specification and construction of the solution for the requirements identified during the systems analysis phase of the research/project. This should include...
https://www.studymode.com/essays/Qwertyyuiop-60882277.html
Malaysia is currently recognized by other countries as one of the best countries at providing an excellent healthcare system. A healthcare system usually refers to the system or program by which health care is made available, namely hospitals. The teamwork component in all sectors is the most crucial element in bringing forth excellent hospital services. Why has teamwork become a vital aspect of excellent service? Because clinical care nowadays has become more complex and specialized, and the number of patients seeking medical treatment has increased significantly. The types of medical attention have also become more complex and challenging for healthcare providers to manage. Working at a hospital is like a team sport. In order to win a soccer match, the striker and all the other players need to be experts, ensuring every position plays its role effectively. Typically, a striker is not expected to have the same skill set as a defender, but they both value and encourage the efforts of teammates who share the same goal of performing their best. The value of teamwork should be acknowledged by all players as they work together towards the team's shared mission to defeat the opponents and become the top team. It is the same in a hospital: doctors and nurses are a team. Doctors need cooperation from the nurses and vice versa. Doctors are responsible for diagnosing patients so that they can devise treatment plans for them. Meanwhile, nurses need the doctor's diagnosis to get comprehensive direction as they devise care plans for the patients. This interdependence makes it essential for both to respect each other and work together in providing the best treatment to patients and avoiding medical errors. "There is no 'I' in 'Team'". Each team member in a healthcare organization has his or her strengths and weaknesses. The values of collaboration and mutual support make the team more valuable in terms of job and communication excellence. Senior staff can teach and share knowledge with new staff. Moreover, nurses can put forward ideas and suggestions to doctors. There are no barriers to discussing and performing job-related tasks together to ensure job descriptions are fulfilled. All staff should set aside the gaps that exist and work together as a team to create a favorable working environment. It is also important to note that teamwork does not only benefit the organization. Collaboration increases job satisfaction and leads to better results. Working in a team gives nurses and other healthcare staff the opportunity to come up with creative ideas and offers a greater sense of belonging. It is awesome, isn't it? As a result, excellent teamwork by all staff will greatly benefit the patients and customers of the hospital. In a nutshell, "alone we can do so little, together we can do so much". The value of teamwork has a powerful impact on healthcare organizations. Collaboration keeps the healthcare system standing strong to provide excellent services to customers and patients. Thus, the mission and vision of the healthcare organization can be achieved.
https://mba.kpjuc.edu.my/mbakpj/teamwork-in-healthcare-system/
Over the past couple of years we have enjoyed the success of several special events. At the beginning of January we have our anniversary celebrations, the last couple of Halloweens we've enjoyed free speed games on the Halloween Hallows map, we've had a couple of Whac-a-Mod competitions, and we've enjoyed a few silly moments such as the Cutesy Club extravaganza and so forth. However, it is time for Conquer Club to up the ante on special events and celebrations and take them to the next level. Therefore, I would like to request applications from individuals interested in joining TeamCC in an official capacity. We are currently looking to fill the newly created position of Special Events Coordinator.

Job Description:
- Develop and coordinate 5 to 6 regularly scheduled "celebrations" per year (ex. Halloween, Anniversary, Whac-a-Mod, etc)
- Develop and coordinate 3 to 4 surprise "celebrations" per year
- Manage a team of unofficial/official staff in their department to make planned "celebrations" happen smoothly

Applicants for this position will display the following traits:
- High level of organizational skills
- Ability to work on their own and self-start on projects
- Ability to coordinate with a team
- Ability to meet deadlines

*This position is a volunteer position, not a staff position, and as such is unpaid.

After reviewing the information above, if you wish to apply to join TeamCC please send a private message directly to Optimus Prime detailing the following information:
===============
Username:
Positions Interested In:
Time I Can Dedicate per Week:
Availability: (weekends only, all day, mornings, evenings, etc)
Comments: (one or two paragraphs in length describing why you think you are a good candidate)
Brainstorming Exercise: (outline at least one "celebration" you would be interested in developing or planning, more than one outline is allowed if you so choose)
===============

Please develop the sitter application layer faster; I am sure more people would volunteer for positions, because I feel most of the top players (meaning they are reliable and at least slightly organized) care more about their account and games than about the TeamCC helper position.

Dako wrote: Please develop the sitter application layer faster; I am sure more people would volunteer for positions, because I feel most of the top players (meaning they are reliable and at least slightly organized) care more about their account and games than about the TeamCC helper position.

Developing an account sitting interface is on the to-do list, but not at the top right now. If someone wants to volunteer and doesn't mind TeamCC only watching their account they are welcome to; if not, they can volunteer sometime in the future when the interface is added or they change their outlook on rank and score.

Dako wrote: Please develop the sitter application layer faster; I am sure more people would volunteer for positions, because I feel most of the top players (meaning they are reliable and at least slightly organized) care more about their account and games than about the TeamCC helper position.

Developing an account sitting interface is on the to-do list, but not at the top right now. If someone wants to volunteer and doesn't mind TeamCC only watching their account they are welcome to; if not, they can volunteer sometime in the future when the interface is added or they change their outlook on rank and score.

It is not about rank and score, per se; it is about someone you trust to look after your account should the need arise.
Teamwork is definitely about trust, with or without an account sitting feature! Look forward to seeing who applies for this position. Hopefully a number of you apply, and have some great ideas and initiative. I think a lot of fun and community enjoyment can come out of more coordinated events and surprises.
A strategic group is defined as a set of firms within an industry pursuing a similar strategy. The strategic group concept emerged with much promise over 40 years ago. Research on strategic groups over time in a broad variety of settings has sought to clarify their theoretical and empirical properties. These research findings are gradually being translated into practical managerial guidance, so that the strategic group concept can be understood, operationalized, and used productively by managers. Two main approaches exist for identifying strategic groups: a ground-up approach, using disaggregated data, and a top-down approach, using cognition. Once identified, managerial insights can be derived from clarifying a strategic group's profile. Firm membership in a group helps to uncover immediate and more distant types of competitors. Group profitability differences reveal the more rewarding and less attractive areas within an industry, as well as identify the lower-return groups from which firm exits are likely to occur. Group dynamics reflect competitive and cooperative behavior within and between groups. Several promising areas exist for future research on strategic groups to improve the understanding and practice of strategy.

Article: Briance Mascarenhas and Megan Mascarenhas

Article: Kathryn Rudie Harrigan

Concerns regarding strategic flexibility arose from companies' need to survive excess capacity and flagging sales in the face of previously unforeseen competitive conditions. Strategic flexibility became an organizational mandate for coping with changing competitive conditions, and managers learned to plan for inevitable restructurings. They learned to reposition assets and capabilities to suit their firms' new strategic aspirations by overcoming barriers to change. Core rigidities flared up in the form of legacy costs, regulatory constraints, political animosity, and social resistance to adjusting firms' strategic postures; managers learned that their firms' past strategic choices could later become barriers to adapting corporate strategy. Managerial insights concerning how to modify firms' resources changed the way in which they were subsequently regarded. Enterprises saw assets lose their relative productivity and value as mastery of specific knowledge became less germane to success. Managers recognized that their firms' capabilities were mismatched to market or value-chain relationships. They struggled to adapt by overcoming barriers to change. Flexibility problems were inevitable. Even if competitive conditions were not impacted by exogenous change forces, sustaining advantage in a steady-state competitive arena became difficult; sustaining advantage in dynamic arenas became nearly impossible. Confronted with the difficulties of changing strategic postures, market orientations, and overall cost competitiveness, managers embraced the need to combat organizational rigidity in all aspects of their firms' operations. Strategic flexibility affected enterprise assets, capabilities, and potential relationships with other parties within firms' value-creating ecosystems; the need for strategic flexibility influenced investment choices made to escape organizational rigidity, capability traps, and other forms of previously unrecognized resource inflexibility. Where entry barriers once protected a firm's strategic posture, flexibility issues arose when the need for endogenous changes occurred.
The temporary protection afforded by imitation barriers slowed an organization's responsiveness to changing its strategy imperatives, making the firm rigid when adaptiveness was needed instead. A firm's own inertia to change sometimes created mobility barriers that had to be overcome when hypercompetitive conditions arose in their traditional market arenas and forced firms to change how they competed. Where exogenous changes drove competitive conditions to become more volatile, attainment of strategic flexibility mandated the need to downsize the scope of a firm's activities, shut down facilities, prune product lines, reduce headcount, and eliminate redundancies (as typically occurred during an organizational turnaround), while simultaneously increasing the scope of external activities performed by an enterprise's value-adding network of suppliers, distributors, value-added resellers, complementors, and alliance partners, among others. Such structural value-chain changes typically exacerbated pressures on the firm's internal organization to search more broadly for value-adding innovations to renew products and processes to keep up with the accelerated pace of industry change. Exploratory processes of self-renewal forced confrontations with mobility or exit barriers that were long tolerated by firms in order to avoid coping with the painful process of their ultimate elimination. The sometimes surprising efforts by firms to avoid inflexibility included changes in the nature of firms' asset investments, value-chain relationships, and human-resource practices. Strategic flexibility concerns often trumped the traditional strengths accorded to resource-based strategies.

Article: Michael Dowling

Ray Noorda, the former CEO of Novell Inc., first coined the term "coopetition" in 1992 to describe a common phenomenon in the computer industry: cooperation between competitors. This phenomenon is inconsistent with classical economic and business theory going as far back as Adam Smith, who viewed the production system as based on a separation between suppliers and buyers. Micro-economists have traditionally viewed the firm as buying raw materials and components from suppliers, producing finished goods, and selling those goods in competition with other firms to a different set of firms or consumers. However, starting in the 1990s, research on forms of cooperative relationships between competitors became very common. The most common types are (a) competing firms engaging in horizontal alliances along the same level of the value chain and (b) vertical cooperation along different levels of the value chain between suppliers and firms in the focal industry or between customers and firms. In the last 25 years, there has been a great increase in research on coopetition. In a systematic literature review conducted in 2014, one researcher found over 130 academic articles in more than 80 academic publications published since 1996. The majority of the research to date has been qualitative, with many case studies conducted. A number of special issues in academic journals have been devoted to the topic in general or to special topics concerning coopetition. The Strategic Management Journal organized a special issue in 2018 on the interplay of competition and cooperation, and a number of workshops have been held on coopetition strategy and innovation.

Article: Nydia MacGregor and Tammy L. Madsen
Regulatory shocks, either by imposing regulations or easing them (deregulation), yield abrupt and fundamental changes to the institutional rules governing competition and, in turn, the opportunity sets available to firms. Formally, a regulatory shock occurs when jurisdictions replace one regulatory system with another. General forms of regulation include economic and social regulation, but recent work offers a more fine-grained classification based on the content of regulations: regulation for competition, regulation of cap and trade, regulation by information, and soft law or experimental governance. These categories shed light on the types of rules and policies that change at the moment of a regulatory shock. As a result, they advance our understanding of the nature, scope, magnitude, and consequences of transformative shifts in rules systems governing industries. In addition to differences in the content of reforms, the assorted forms of regulatory change vary in the extent to which they disrupt an industry's state of equilibrium or semi-equilibrium. These differences contribute to diverse temporal patterns or dynamics, an area ripe for further study. For example, a regulatory shock to an industry may be followed by rapid adjustment and, in turn, a new equilibrium state. Alternatively, the effects of a regulatory shock may be more enduring, contributing to ongoing dynamics and prolonging an industry's convergence to a new equilibrium state. As such, regulatory shocks can either stimulate ongoing heterogeneity or promote coherence within and among industries, sectors, organizational fields, and nation states. It follows that examining the content, scope, and magnitude of regulatory shocks is key to understanding their impact. Since conforming to industry regulation (deregulation) increases economic returns, firms attempt to align their policies and behaviors with the institutional rules governing an industry. Thus, regulatory shocks stimulate the evaluation of strategic choices and, in turn, impact the competitive positions of firms and the composition of industries. Following a shock, at least two generic cohorts of firms emerge: incumbents, which are firms that operated in the industry before the change, and entrants, which start up after the change. To sustain a position, entrants must build capabilities from scratch, whereas incumbents must replace or modify the practices they developed in the prior regulatory era. Not surprisingly, the ensuing competitive dynamics strongly influence the distribution of profits observed in an industry and the duration of firms' profit advantages. Our review highlights some of the prominent areas of research inquiry regarding regulatory shocks, but many areas remain underexplored. Future work may benefit by considering regulatory shocks as embedded in a self-reinforcing system rather than simply an exogenous inflection in an industry's evolutionary trajectory. Opportunities also exist for studying how the interplay of industry actors with actors external to an industry (political, social) affects the temporal and competitive consequences of regulatory shocks.
https://oxfordre.com/business/search?f_0=keyword&q_0=competition
Animals have a remarkable ability to adjust their behavior in response to information from their environment. However, it remains difficult to predict which sources of information will have lasting impacts on behavior, and how prior experiences influence an animal’s response to new information. The overall objective of the current study is to assess whether the mechanisms that embed an experience and cause behavioral change can predict an experience’s persistence, and thus its significance. This study evaluates two hypotheses from behavioral genomics about the types of embedding mechanisms that predict lasting behavioral change. The first proposes that experiences causing lasting changes in brain gene expression predict lasting behavioral outcomes, and further, that epigenetic regulatory mechanisms (e.g., DNA methylation) give rise to these effects. The second suggests that, while environmental cues may induce temporary brain gene expression changes, dynamics at higher organizational levels in the brain and periphery predict persistent behavioral effects. These hypotheses have not been widely examined, so many critical knowledge gaps remain. The current study addresses two of these: 1) most studies have assessed the impact of a single experience, without determining how that experience alters the response to new information; and 2) few studies have directly compared these hypotheses in a single system; such an approach is needed because these hypotheses may not be mutually exclusive. The current study evaluates these hypotheses simultaneously using honey bee defensive aggression. In this system, there is extensive knowledge of how different, single sources of social information regulate behavior, and the duration of these effects; notably, however, it is unknown how multiple sources of information work together to shape aggression. This lays the groundwork to assess whether experiences that result in persistent changes in brain DNA methylation and/or peripheral tissue structure predict the response to new information. Researchers will manipulate social experience and generate individuals with persistent differences in brain DNA methylation, peripheral tissue structure, neither, or both. Researchers will then measure how individuals respond to new information about threats to the hive in terms of behavior, brain genome dynamics (gene expression and DNA methylation patterns), and dynamics at higher levels of organization in the brain and periphery (brain mitochondrial bioenergetics and fat body lipid content). This approach creatively leverages a well-studied system to intricately dissect the mechanisms that regulate brain gene expression and behavior to determine how these mechanisms predict the persistence of experience and the response to new information. Research objectives are integrated with an educational component focused on improving career guidance for students in agricultural STEM. Collaborative teams composed of graduate and undergraduate students and local beekeepers will complete the proposed research objectives. Moreover, these teams, in collaboration with colleagues across the state in academia, government, business, and agricultural non-profits, will participate in a youth pollinator summit aimed at exposing high school students in Kentucky to STEM agriculture research and career opportunities.
By interacting with beekeepers, agriculture professionals, and other students at different academic stages, students will gain knowledge about career opportunities in agricultural STEM, mentoring relationships with agriculture community members, networking connections, new communication and beekeeping skills, and insights into the practical applications of research findings. Participating beekeepers will gain knowledge of the scientific process, honey bee biology, and hive best management practices, and an improved ability to communicate the needs of the beekeeping community to researchers. These educational objectives harness the strength of the Kentucky agriculture community to provide important career guidance for students in STEM. Dr. Rittschof, with her expertise in honey bee behavior and genomics, experience mentoring high school, undergraduate, and graduate students, and research and outreach connections with agricultural leaders throughout the state, is well prepared to complete these objectives. Intellectual Merit. This study explicitly evaluates two often-cited but rarely tested hypotheses to explain how the mechanisms that embed an experience predict the persistence of behavioral change. Findings are relevant to behavioral genomics studies of behavioral plasticity, evolution, neuroscience, and human health, as well as information integration modeling studies in behavioral ecology. Broader Impacts. High school, undergraduate, and graduate students and members of the public will be reached in this study. Objectives will improve public scientific literacy, improve STEM education and enhance STEM workforce diversity, improve partnerships inside and outside of academia, and improve the potential for sustainable agriculture production in the future.

Status: Active
Effective start/end date: 8/1/21 → 7/31/26
Funding: National Science Foundation
https://scholars.uky.edu/en/projects/participant-support-costs-career-signal-to-noise-how-complex-soci
This is not the first planning effort focused on Lower Putah Creek. In 1986, the University of California, Davis (U.C. Davis) finalized its first management plan for the Putah Creek Riparian Reserve. The goal of protecting and enhancing the riparian and aquatic ecosystems of Lower Putah Creek was identified at that time. That plan was significantly updated with the 2005 Putah Creek Riparian Reserve Management Plan, which has a primary goal of maintaining and enhancing the ecosystem health of the creek. Some of the plan's measures have already been accomplished and some are ongoing. The next comprehensive planning effort for Lower Putah Creek began in 2005 with the Watershed Management Action Plan (WMAP). The WMAP provided a blueprint for actions to protect and enhance resources in the Lower Putah Creek watershed. The first phase of the WMAP involved conducting resource assessments and producing a map volume. After the resource assessments were complete, the second phase of the WMAP evaluated opportunities and constraints for improving the creek, established goals and objectives, and recommended project ideas. The second phase of the WMAP, in 2006, included community meetings in Winters to develop guiding principles for the process, as well as working groups of stakeholders for in-depth discussions and identification of criteria to select future projects. When this phase of the WMAP planning was done, the LPCCC issued a Report to the Community with a list of prioritized projects. The third phase of the WMAP was the implementation phase, which produced the 2008 Proposed Projects report. It has been used for the past nine years by the LPCCC, creek landowners and land managers, and stakeholders to identify and implement creek improvement projects to restore and enhance the lower Putah Creek watershed to a more self-sustaining ecological condition. In September 2015, the Lower Putah Creek Coordinating Committee (LPCCC) applied for a watershed restoration planning grant from the California Department of Fish and Wildlife (CDFW) Watershed Restoration Grant Program, which is funded by the Water Quality, Supply, and Infrastructure Improvement Act of 2014 (Proposition 1). Prop 1 authorized the Legislature to appropriate funds to CDFW to fund multi-benefit ecosystem and watershed protection and restoration projects and planning efforts. The LPCCC planning grant proposal was awarded funding in January 2016, and a contract for the project was in place by mid-September 2016 (CDFW Watershed Restoration Grant Program Grant # P1696004). This planning grant/project will include a significant amount of public outreach and engagement so that future habitat enhancement projects on Lower Putah Creek are scientifically sound and community-supported.
https://www.putahcreekcouncil.org/background-lower-putah-creek-restoration-planning
What are examples of cultural backgrounds? Culture – set of patterns of human activity within a community or social group and the symbolic structures that give significance to such activity. Customs, laws, dress, architectural style, social standards, religious beliefs, and traditions are all examples of cultural elements.

What is your cultural background? It can also refer to such things as your social and racial origins, your financial status, or the type of work experience that you have.

What are 5 examples of culture? The following are illustrative examples of traditional culture.
- Norms. Norms are informal, unwritten rules that govern social behaviors. …
- Languages. …
- Festivals. …
- Rituals & Ceremony. …
- Holidays. …
- Pastimes. …
- Food. …
- Architecture.

How do I find out my cultural background? How to Rediscover Your Culture:
- Eat Your Culture’s Food. …
- Read Authors Who Relate To You. …
- Google Your Culture. …
- Travel to Your Parents’ Home Country. …
- Bring Back a Cultural Ritual. …
- Try on a New Sense of Identity. …
- Learn How Your Culture Practices Self-Study. …
- Practice Cultural Rituals for Yourself.

What do I write for cultural background? Explain major points briefly and introduce the topic in the introduction. In the conclusion, reiterate the major points and how they help prove your theory or claim. Use a cultural studies style throughout your cultural background paper. Avoid personal statements and use the third-person point of view.

What are 7 examples of culture?

How do you talk about cultural background? Talk to someone from a different cultural background. You could try: Have a chat or catch-up with an acquaintance, friend, or coworker that you’ve wanted to get to know better. Remember to treat them just like you would anyone else, and don’t think of them only as a way to get to know about other cultural backgrounds.

What term describes a person’s cultural background? An ethnicity is a social group that shares a common and distinctive culture, religion, or language. It also refers to a person’s ethnic traits, background, allegiance, or association.

What is culture? Give an example. Culture is the beliefs, behaviors, objects, and other characteristics shared by groups of people. … For example, Christmas trees can be considered ceremonial or cultural objects. They are representative in both Western religious and commercial holiday culture.

What is my personal culture? Personal culture is the collection of cultures that you belong to at a point in time. Culture is shared understanding that emerges from shared experience. As such, it isn’t a personal thing that you define in isolation.

What are the 6 types of culture?
- National / Societal Culture.
- Organizational Culture.
- Social Identity Group Culture.
- Functional Culture.
- Team Culture.
- Individual Culture.

Why is your cultural background important? We all have a right to know who we are, and where we are from. The people, places, and stories of our families are a part of the unique story of who we are. Understanding your history can help build your personal growth and well-being, and helps to connect us with each other.

How do you explain your culture? Culture comprises the deeply rooted but often unconscious beliefs, values, and norms shared by the members of the organization. In short, our culture is “the way we do things around here.” Keep in mind that the culture of your organization as a whole may or may not be the culture of your team!

What is meant by ethnic background?
A group of people who share a common race, religion, language, or other characteristic.

What are 10 different cultures? Examples of different cultures around the world that have captivated many include:
- The Italian Culture. Italy, the land of pizza and gelato, has captivated people’s interest for centuries. …
- The French. …
- The Spaniards. …
- The Chinese. …
- The Land of the Free. …
- The Second Most Populated Country. …
- The United Kingdom. …
- Greece.

What is your cultural identity example? Put simply, your cultural identity is the feeling that you belong to a group of people like you. This is often because of shared qualities like birthplace, traditions, practices, and beliefs. Art, music, and food also shape your cultural identity.

What are the 4 types of culture? Four types of organizational culture:
- Adhocracy culture – the dynamic, entrepreneurial Create Culture.
- Clan culture – the people-oriented, friendly Collaborate Culture.
- Hierarchy culture – the process-oriented, structured Control Culture.
- Market culture – the results-oriented, competitive Compete Culture.

How do you answer cultural background questions? Start with your cultural/ethnic background: where your ancestors are from, when they came to America, etc. Then discuss how much that background informs your life today. What parts of your ancestors’ culture play a role in your life, and in what ways?

What is cultural background and identity? A cultural identity essay is a paper that you write exploring and explaining how your place of upbringing, ethnicity, religion, socio-economic status, and family dynamics, among other factors, created your identity as a person. … Your cultural identity is ultimately the group of people that you feel you identify with.

How do I write about my culture? How to write about your own culture:
- Do it for the RIGHT REASONS. Spoiler alert, people! …
- Write A LOT. I’ve heard this a bit from younger writers; they find it hard to express themselves on paper. …
- Story comes first. …
- Don’t put too much pressure on yourself. …
- Don’t be scared to get things wrong. …
- Embrace the experience.

What is culture in your own words? National cultures: Culture is also the beliefs and values of the people and the ways they think about and understand the world and their own lives. Different countries have different cultures. For example, some older Japanese people wear kimonos, arrange flowers in vases, and have tea ceremonies.

How do you define culture in your own words? Culture can be defined as all the ways of life, including the arts, beliefs, and institutions of a population, that are passed down from generation to generation. Culture has been called “the way of life for an entire society.” As such, it includes codes of manners, dress, language, religion, rituals, and art.

What are the 3 types of culture? Types of Culture: Ideal, Real, Material & Non-Material Culture…
- Real Culture. Real culture can be observed in our social life. …
- Ideal Culture. The culture which is presented as a pattern or precedent to the people is called ideal. …
- Material Culture. …
- Non-Material Culture.

What my culture means to me? “Know thyself” best describes culture to me. This means knowing where you come from, your history, be it family or race, acknowledging how you were raised, and understanding why you are the way you are.

What are the 9 types of culture? There are nine main types of company culture.
- Clan or Collaborative Culture. A company with a clan or collaborative culture feels like a family.
… - Purpose Culture. … - Hierarchy or Control Culture. … - Adhocracy or Creative Culture. … - Market or Compete Culture. … - Strong Leadership Culture. … - Customer-First Culture. … - Role-Based Culture. What is a real culture? Definition of Real Culture (noun) The standards and values a society actually has, instead of pretends or tries to have. What are the examples of high culture? Examples of High Culture - ballet. - classical music. - fine arts. - poetry. How do you appreciate your own culture? One of the best ways to understand and appreciate another culture is by listening to those who are a part of the fabric of that society. Listen to their stories, understand the implications behind the aspects of their culture that you are interested in, and use that understanding to broaden your worldview. How can you reflect on your own and other cultures in your workplace? SEVEN PRACTICES YOU CAN IMPLEMENT TO INCREASE CULTURAL AWARENESS IN THE WORKPLACE - Get training for global citizenship. … - Bridge the culture gap with good communication skills. … - Practice good manners. … - Celebrate traditional holidays, festivals, and food. … - Observe and listen to foreign customers and colleagues. What does culture provide for a decent life? In addition to its intrinsic value, culture provides important social and economic benefits. With improved learning and health, increased tolerance, and opportunities to come together with others, culture enhances our quality of life and increases overall well-being for both individuals and communities. What three words would you use to describe our culture? 33 Words to Describe Your Company Culture - Transparent. Employees and customers alike greatly value transparency—but despite this truth, many companies struggle to add transparency in the workplace when it comes to key information and decisions. … - Connected. … - Nurturing. … - Autonomous. … - Motivating. … - Happy. … - Progressive. … - Flexible. What’s another word for ethnic background? What is another word for ethnic background? |ethnicity||race| |origin||background| |nation||culture| |identity||nationality| |customs||traditions| What are examples of ethnicity? For example, people might identify their race as Aboriginal, African American or Black, Asian, European American or White, Native American, Native Hawaiian or Pacific Islander, Māori, or some other race. Ethnicity refers to shared cultural characteristics such as language, ancestry, practices, and beliefs. What do I put as my ethnicity?
https://listvidz.com/what-is-your-cultural-background-examples/
If we are going to turn the tide on the declining rates of Native American language usage and fluency, there must be a sense of urgency and willingness to come together and support one another. At the Administration for Native Americans (ANA) we are constantly asking ourselves, how can we do more? What other support and outreach can we offer? We also wonder, how can ACF, and the federal government overall, better support Native Americans in their efforts to sustain and revitalize Native American languages in these times of fiscal uncertainty? We are pleased to announce that Acting Assistant Secretary George Sheldon and the other Senior Leaders at ACF are lending their support to this effort through the creation of a new ACF-wide work group on Native American Languages (NAL). ANA has been supporting economic and social self-sufficiency for American Indians, Alaska Natives, Native Hawaiians, and Native American Pacific Islanders (including American Samoan Natives) as its own agency since passage of the Native American Programs Act (NAPA) in 1974. NAPA has been amended several times since then, most notably in 1992 and 2006 with passage of the Native American Languages Act of 1992 and the Esther Martinez Native American Languages Preservation Act of 2006. Both of these pieces of legislation build upon the Native American Languages Act of 1990, an important policy directive that, in and of itself, did not authorize new programs or grants, but called upon the federal government, as well as state and local governments, to preserve, protect, and promote the rights of Native Americans to use their languages. Here is an excerpt from the Declaration of Policy section of the 1990 Act: When you consider the broad scope of the Native American Languages Act of 1990, it is only fitting that ANA should partner with other federal agencies to ensure that the directives of the 1990 NALA continue to be implemented to the fullest extent possible. To that end ANA is leading an internal Native American Languages work group within the Administration for Children and Families with the goals of supporting ACF programs in their efforts to provide education and services using Native American languages and culture. ACF offices such as Head Start and Child Care have already begun assessing the types of support their programs need to implement high-quality programs that incorporate language and culture, and they are responding with tools and resources to assist them in their efforts. Our workgroup will continue to promote these available resources. In addition, we will identify best practices and successful efforts by ACF grantees to encourage others to implement similar initiatives, identify ways we can work together to encourage ACF programs to share resources, and foster networks that create opportunities for collaboration, sharing what is working, and developing new ideas and approaches. Mia Strickland, Richard Glass, Amy Sagalkin, and Michelle Sauve are the ANA staff supporting this effort. We are joined by Carrie Peake and Brian Richmond from the Office of Child Care, Bridget Shea Westfall from the Tribal Maternal and Child Home Visiting Program, and Captain Robert Bialas and Sharon Yandian from the Office of Head Start. The NAL Work Group will meet regularly in order to identify areas for coordination and collaboration that will benefit the communities we serve.
Be on the lookout for more resources and opportunities to connect across the ACF family, and if you have ideas, please feel free to reach out to us!

Wopila,
Lillian A. Sparks

Each year ANA receives hundreds of applications for community-based projects in Native American communities. This year ANA was able to award funding to 78 of these projects, with goals ranging from the development of language immersion nests and tribal governance codes to the delivery of social services and financial literacy courses. ANA is pleased to announce its new awardees for FY 2012.

Grantee Highlights
- Bay Mills Indian Community
- Pa'a Taotao Tano
- Piegan Institute
- Red Cliff Band of Lake Superior Chippewa
- Sault Ste. Marie Tribe of Chippewa Indians

Get to Know Us!
- Tonya Garnett
- Talking Stick: one person's opinion on a particular matter at hand.
- In-di-jə-nəs: check out our fun collection of games such as jokes, word jumbles, and "What is it? or Where is it?"
- HHS Tribal Affairs: The Office of Intergovernmental and External Affairs

Events and Activities Updates
- Indigenous Language Institute's Symposium
- Reflections on Language Project as the Impact Season Winds Down
- 9th Annual Native American Fatherhood is Leadership Conference
- National Indian Education Association Convention
- ACF Native American Affairs Liaison Workgroup
- Native American Heritage Month: Celebrating the UN Declaration on the Rights of Indigenous Peoples
- ANA Attends the National Congress of American Indians Annual Convention
- Native American Veterans: Storytelling for Healing
- Upcoming Opportunity
- Training and Technical Assistance Activities

Community in the News
The Wôpanâak Language Reclamation Project (WLRP), based in Aquinnah and Mashpee, Massachusetts, was recently featured in Yankee Magazine, on Cape Cod radio, and on CBS Evening News. Follow the links below to read, listen and watch! News teams from both Yankee Magazine and CBS attended portions of WLRP's two-week Summer Turtle Camp, held for nearly 40 elementary school students.

The websites featured in this document provide resources relevant to Native American languages. You can find this list on the Resource page of the ANA website. Please check back often for updates, or to suggest new resources for us to add.

Native Language Preservation: A Reference Guide for Establishing Archives and Repositories
The ANA Native Language Preservation Reference Guide discusses the importance of language repositories to long-term language preservation efforts.

Head Start Cultural and Linguistic Responsiveness Resource Catalogue
Volume two of the catalogue contains information on Native and Heritage Language Preservation, Revitalization, and Maintenance.

Spoken First
Spoken First, created and maintained by Falmouth Institute, is a resource for news about American Indian languages. This blog, updated daily, keeps track of language news coming from Native American communities across the country.

Center for Applied Linguistics
The Center for Applied Linguistics is a private, nonprofit organization dedicated to providing a comprehensive range of research-based information, tools, and resources related to language and culture. It was established in 1959 and is headquartered in Washington, DC.

The Center for Advanced Research on Language Acquisition: Less Commonly Taught Languages
This searchable database allows users to learn where specific Less Commonly Taught Languages (all languages with the exception of English, French, German, and Spanish) are taught in North America.
Endangered Languages Project
The Endangered Languages Project is an online resource to record, access, and share samples of and research on endangered languages, as well as to share advice and best practices for those working to document or strengthen languages under threat.

Our Mother Tongues
The interactive Our Mother Tongues website shares a wealth of information about North America's indigenous languages. Each featured language page contains video and audio clips, a snapshot of the language's status and history, and a user-friendly forum for sharing ideas.

Tribal College Journal of American Indian Higher Education Article for Teachers
This journal article provides a collection of resources for teaching American Indian students. The resources give a background in Indian education and suggest methods for teaching and integrating American Indian content into traditional subject areas.

University of California Berkeley Languages of California Survey
For its size, California is linguistically the most diverse area of North America. To learn more about the languages of California, visit the UC-Berkeley survey.

Advocates for Indigenous California Language Survival
Advocates for Indigenous California Language Survival is a Native nonprofit with the mission to foster the restoration and revival of indigenous California languages.

Aha Punana Leo School
Located in Hawaii, Aha Punana Leo School is one of the first full-scale indigenous language immersion efforts in the U.S.

Indigenous Peoples and Languages of Alaska
This website features a map that displays indigenous peoples and languages of Alaska by region.

Akwesasne Freedom School
Based in New York, the Akwesasne Freedom School (AFS) is an independent elementary/middle school that provides immersion learning in Mohawk.
http://www.acf.hhs.gov/programs/ana/resource/the-ana-messenger-native-language-and-culture-edition
FIELD OF THE INVENTION

The present invention relates generally to data communication networks which utilize virtual concatenation. More particularly, the invention relates to techniques for determining routes for virtually-concatenated data traffic so as, for example, to limit traffic impact or facilitate traffic restoration in the event of route failure.

BACKGROUND OF THE INVENTION

As is well known, virtual concatenation (VC) and link capacity adjustment scheme (LCAS) protocols have been developed which allow more efficient use of the existing fixed-bandwidth connections associated with synchronous optical network (SONET) or synchronous digital hierarchy (SDH) network infrastructure. For example, these protocols are utilized in transmission of Ethernet over SONET (EoS) data traffic over metropolitan networks, and in numerous other data transmission applications. The VC and LCAS protocols are described in greater detail in, for example, ITU-T standards documents G.707 and G.7042, respectively, both of which are incorporated by reference herein.

Virtual concatenation generally allows a given source node of a network to form a virtually-concatenated group (VCG) which includes multiple members each associated with a corresponding data stream. The different data streams may then be transmitted over diverse routes through the network from the source node to a given destination node, also referred to herein as a sink node. The sink node recombines the streams to reconstruct the original VCG.

The LCAS protocol enhances the basic virtual concatenation functionality described above by allowing so-called "hitless" addition and deletion of members from a VCG, that is, addition and deletion of members without the introduction of errors into the transmitted data. The LCAS protocol also enables a VCG to operate at a reduced capacity after the failure of routes associated with one or more members, by allowing the temporary removal of members associated with failed routes from the VCG.

Conventional restoration techniques in the SONET/SDH context are designed to provide fast restoration in the event of route failure, where "fast" restoration generally denotes restoration of the associated data traffic in less than about 50 milliseconds. However, this fast restoration comes at the cost of excessive bandwidth overhead. More specifically, these conventional techniques generally utilize 1+1 primary-backup protection, wherein each primary route has a corresponding backup route, resulting in 100% bandwidth overhead.

It should also be noted that the above-described LCAS protocol takes on the order of 64 or 128 milliseconds, for respective higher order (HO) or lower order (LO) implementations, in order to complete the above-noted temporary removal of members associated with failed routes. This delay is attributable to the refresh timing mechanism of the LCAS protocol. Therefore, the LCAS protocol in its current form is unable to deliver the approximately 50 millisecond fast restoration generally associated with SONET/SDH networks. This not only precludes its use for restoration but also makes SONET 1+1 protection in conjunction with LCAS ineffective.

A possible alternative approach is to transmit the data traffic without providing any protection at the SONET/SDH layer of the network, in the expectation that higher layers, such as an Ethernet layer, will be able to provide a certain measure of protection.
For example, in the case of the above-noted EoS data traffic, rapid Ethernet spanning tree protection in the Ethernet layer may be used for restoration in the event of route failure. However, this type of restoration by higher network layers can lead to a number of significant problems, such as disruption of data traffic for up to several seconds, loss and duplication of data, etc.

U.S. patent application Ser. No. 10/446,220 filed May 28, 2003 and entitled "Fast Restoration for Virtually-concatenated Data Traffic," the disclosure of which is incorporated by reference herein, addresses the above-noted issues by providing improved techniques for protection of data traffic against route failure. Advantageously, these techniques in an illustrative embodiment are able to provide fast restoration, on the order of 50 milliseconds or less, while utilizing less than 100% bandwidth overhead. Although particularly well-suited for use with EoS data traffic, the disclosed techniques can also be used with other types of virtually-concatenated data traffic.

Despite the considerable advances provided by the techniques described in the above-cited U.S. patent application Ser. No. 10/446,220, a need remains for further improvements in protecting data traffic against route failure, and more particularly in routing algorithms that are utilized to determine appropriate restoration routes.

SUMMARY OF THE INVENTION

The present invention meets the above-noted need by providing improved routing algorithms for determining routes for virtually-concatenated data traffic, so as to limit the impact of route failures on the data traffic, to facilitate traffic restoration in the event of route failure, or to provide other desirable features. The described routing algorithms, although illustrated in the context of virtually-concatenated data traffic, can also be applied to other types of data traffic.

In accordance with one aspect of the invention, virtually-concatenated data traffic is routed in a network comprising at least first and second nodes. The first and second nodes may comprise a source-sink node pair, an ingress-egress node pair, or any other pair of network nodes. For a given traffic demand, a plurality of routes for routing the traffic demand through the network are determined. Each of the routes corresponds to a member of a virtually-concatenated group, such that different portions of the traffic demand are assigned to different members of the virtually-concatenated group. The routes may be determined, for example, by a routing algorithm that determines the routes by processing a graph or other representation of the network.

In a first illustrative embodiment, the routing algorithm determines the routes in a manner that ensures that failure of a single link in the network does not affect more than a designated maximum amount X of a bandwidth B of the traffic demand. In a second illustrative embodiment, the routing algorithm determines the routes in a manner that ensures that failure of a single link in the network affects a minimum amount of the bandwidth B of the traffic demand. In a third illustrative embodiment, the routing algorithm determines the routes in a manner that requires only a minimum amount of additional protection bandwidth to ensure that failure of a single link in the network will not affect the bandwidth B of the traffic demand.
In a fourth illustrative embodiment, the routing algorithm determines the routes in a manner that provides a designated number M of the members of the virtually-concatenated group with 1+1 primary-backup protection, wherein each primary route has a corresponding backup route, while a designated number K of the members of the virtually-concatenated group are not provided with such protection. Advantageously, the routing algorithms in the illustrative embodiments facilitate implementation of low-overhead, standards-compliant fast restoration techniques for virtually-concatenated EoS data traffic or other types of data traffic.

DETAILED DESCRIPTION OF THE INVENTION

The invention will be illustrated herein in conjunction with illustrative embodiments of routing algorithms and associated restoration techniques, as well as a network-based system and example network nodes in which the restoration techniques may be implemented. It should be understood, however, that the invention is not limited to use with the particular routing algorithms, restoration techniques, network-based system or network node implementations described, but is instead more generally applicable to any routing application in which it is desirable to provide improved restoration performance. The present invention in the illustrative embodiments to be described provides improved routing algorithms for use in determining restoration routes for Ethernet over SONET (EoS) and other types of data traffic which utilize virtual concatenation.

The determined routes may be used in conjunction with a network protection technique which involves configuring a virtually-concatenated group (VCG) to include, in addition to its primary members as in a conventional implementation, at least one additional member, referred to herein as a "backup member," which does not transmit data traffic under normal operating conditions. Each of the primary and backup members of the VCG is assigned a data transmission route between first and second nodes of the network. The first and second nodes may be a source-sink node pair, an ingress-egress node pair, or any other pair of network nodes. If a route assigned to one of the primary members fails, the route assigned to the backup member is utilized for restoring data traffic of that primary member. Such an arrangement will be described in greater detail below in conjunction with FIGS. 1A and 1B.

Referring now to FIG. 1A, a data transmission network 10 includes a source node 12, a sink node 14, and a provisioned set of routes 16 for VCG members. It is to be appreciated that the figure shows only a portion of a typical network, for simplicity and clarity of description.

The source and sink nodes are also denoted herein as S and Z nodes, respectively. The source node 12 includes a processor 20 coupled to a memory 22. Similarly, the sink node 14 includes a processor 24 coupled to a memory 26. The memory elements of the nodes 12, 14 store one or more software programs for execution by the corresponding processors in implementing virtual concatenation operations such as forming VCGs and determining appropriate routes for VCG members, using the above-noted G.707 and G.7042 standards documents, or other communication protocols. The conventional aspects of the operation of nodes 12, 14 in transmitting virtually-concatenated data traffic through network 10 are well-known in the art and therefore will not be described in detail herein.

In accordance with the techniques described in the above-cited U.S. patent application Ser. No.
10/446,220, the VCG for which the set of routes 16 are provisioned includes a plurality of primary members and at least one backup member. A "primary member" as the term is used herein refers generally to a conventional VCG member which is associated with a corresponding data stream to be transmitted from the source node to the sink node under normal operating conditions. A "backup member" as the term is used herein refers generally to an additional VCG member which is not associated with any particular data stream to be transmitted from the source node to the sink node under normal operating conditions.

The arrangement of FIG. 1A is thus in contrast to conventional VCGs, in which each member is a primary member and there are no backup members. It is generally preferable to form the VCG so as to include the minimum number of backup members required to protect the entire VCG against a single route failure.

For example, in an embodiment in which all of the primary members are diversely routed from source node 12 to sink node 14, only a single backup member may be used. Therefore, in a VCG formed to include N primary members and one backup member, each having the same bandwidth allocation, protection against the single route failure requires a bandwidth overhead that is only a fraction 1/N of the 100% bandwidth overhead associated with certain of the previously-described conventional techniques.

As another example, in an embodiment in which all of the primary members are not diversely routed, a minimum number of required backup members may be determined based on the link that carries data traffic from the greatest number of members. Generally, the minimum number of backup members required would be the same as the total number of primary members carried by the link supporting the greatest number of primary members. Such an embodiment may be viewed as including, for example, a number N_X of primary members and a number N_Y of backup members, with diverse routing provided between the primary and backup members, but not necessarily within the set of N_X primary members or N_Y backup members.

In the above examples, configuring the VCG results in a substantial reduction in bandwidth overhead relative to conventional SONET 1+1 primary-backup protection. As described previously, this conventional approach requires that each primary route have a corresponding backup route, resulting in 100% bandwidth overhead. In the illustrative embodiments, the bandwidth overhead decreases with the diversity of the routes of the primary members, and as noted above, for N diversely routed primary members may be as low as a fraction 1/N of the conventional 100% bandwidth overhead requirement.

It was indicated above that the one or more backup members of the VCG are not used to transmit data traffic under normal operating conditions. However, in the event of the failure of a route associated with one or more of the primary members, the affected data traffic may be restored utilizing the backup member(s). For example, if the source node 12 detects the failure of the route assigned to a given member, as reported by the sink node 14 or otherwise, it "discards" the failed member and starts sending the corresponding data traffic on a backup member. Advantageously, this switching of data traffic in the illustrative embodiments can be achieved in a very short amount of time, in some cases as short as about two milliseconds.
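For illustration, the member swap just described can be modeled in a few lines of Python. This is a minimal sketch of the bookkeeping only, not the patent's implementation: the class and field names (Vcg, ctrl, stream_of) are invented here, and real LCAS control words travel in-band rather than in a Python list.

```python
# Minimal model of the source-node failover bookkeeping (illustrative only).
NORM, DNU = "NORM", "DNU"

class Vcg:
    def __init__(self, n_primary, n_backup):
        # LCAS control word per member; primaries first, then idle backups.
        self.ctrl = [NORM] * n_primary + [DNU] * n_backup
        # Map each data stream to the member currently carrying it.
        self.stream_of = {s: s for s in range(n_primary)}

    def on_member_failure(self, failed):
        """Swap NORM/DNU control words and move the stream to a backup."""
        backup = self.ctrl.index(DNU)  # first idle backup member
        self.ctrl[failed], self.ctrl[backup] = DNU, NORM
        for stream, member in self.stream_of.items():
            if member == failed:
                self.stream_of[stream] = backup

vcg = Vcg(n_primary=4, n_backup=1)  # the five-member VCG of FIG. 1B
vcg.on_member_failure(0)            # first primary member fails
print(vcg.ctrl)                     # ['DNU', 'NORM', 'NORM', 'NORM', 'NORM']
```

After the swap, the control words match the post-failure state described for FIG. 1B: DNU on the failed primary member and NORM on the backup now carrying its traffic.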
It should be noted that a single backup member need not be capable of restoring all of the data traffic associated with a given failed primary member. For example, in alternative embodiments, one or more backup members may each be utilized for partial restoration of the data traffic of a given failed primary member. The term "restoring data traffic" and similar terminology used herein should be understood to include without limitation full or partial restoration of data traffic of a failed primary member by one or more backup members.

FIG. 1B shows a more particular example of the provisioned set of routes 16 in network 10 of FIG. 1A. In this example, the corresponding VCG is formed to include a total of five members, with four primary members and one backup member. The primary members and their associated routes are designated as elements 31, 32, 33 and 34 in the figure. The backup member and its associated route is designated as element 35 in the figure. The members 31 through 35 have respective sequence (SQ) numbers 1 through 5. Each of the primary and backup members has the same bandwidth in this example. More specifically, each of the members is configured to support data traffic associated with a conventional STS-3c data rate signal, where STS denotes "synchronous transport signal." The VCG in this example may be referred to as an STS-Xc-Yv VCG, where X=3 and Y in this case denotes the number of primary and backup members of the group. It should be understood, however, that the invention does not require the use of any particular signal or bandwidth configuration for the individual members, or that all members have the same signal or bandwidth configuration.

Backup member 35 does not carry any data traffic during normal operation. The source node 12 therefore transmits a DO NOT USE (DNU) indicator in the LCAS control word (CTRL) for the backup member 35 during normal operation to ensure that the sink node does not pick up any data traffic from backup member 35.

The example of FIG. 1B more particularly illustrates not this normal operation condition, but instead a failure condition in which at least a portion of the route associated with primary member 31 has failed. Such a route failure is an example of what is more generally referred to herein as a "member failure." It is to be appreciated, however, that the term "member failure" as used herein is intended to include any type of failure associated with the corresponding primary member that prevents it from transmitting data at the desired rate.

A given failure may be signaled in accordance with the conventional LCAS protocol, in which the sink node 14 detects the failure and reports the status of the failed member as NOT-OK back to the source node 12.

Upon receiving notification of a member failure, the source node 12 switches the data traffic from the failed primary member to the backup member, and transmits a NORMAL (NORM) indicator in the LCAS control word for the backup member and the DNU indicator in the LCAS control word for the failed primary member.

As shown in FIG. 1B, as a result of this restoration process, the NORM indicator is transmitted for primary members 32, 33 and 34 and for the backup member 35, and the DNU indicator is transmitted for the failed primary member 31.
This illustrative embodiment thus distinguishes primary members from backup members of a given VCG under normal operation by sending the NORM indicator in the LCAS control words of the primary members and the DNU indicator in the LCAS control words of the backup members. In accordance with the foregoing description, the source node may be viewed as being operative, upon receiving notification of the failure of a primary member, to "swap" the NORM indicator value of the LCAS control word of the failed primary member with the DNU indicator value of the LCAS control word of one of the backup members, in the next multiframe. At the end of the multiframe header, the source node may start putting the data on the backup member instead of the failed primary member. Therefore, from the time the source node has received the notification of failure, it generally takes one complete multiframe, that is, two milliseconds or eight milliseconds for respective higher order (HO) and lower order (LO) implementations, before the backup member can start carrying the data of the failed primary member.

The exemplary restoration technique described above in conjunction with FIGS. 1A and 1B can achieve a fast restoration time, as low as about two milliseconds in some cases, through the use of the above-noted modified LCAS protocol. Instead of using multiframe indicator (MFI) bits to send member status information from the sink node 14 to the source node 12, the modified LCAS protocol utilizes extra bits taken from reserve bits defined in the above-noted standards documents to send the member status information. Therefore, in the event of a member failure, the sink node can immediately send to the source node the status of the failed member instead of waiting for its turn to arrive in the refresh cycle. An important advantage of this approach is that it avoids the restoration delay problems attributable to excessive refresh time in the conventional LCAS protocol. Additional details regarding the modified LCAS protocol can be found in the above-cited U.S. patent application Ser. No. 10/446,220.

A number of exemplary routing algorithms for determining routes through the data transmission network of FIG. 1A in accordance with the present invention will now be described with reference to FIGS. 2, 3 and 4. The routing algorithms in FIGS. 2, 3 and 4 are referred to herein as routing algorithms α, β and γ, respectively, and correspond to routing service scenarios A, B and C, respectively. It should be emphasized that the particular scenarios, and the corresponding algorithms, are presented by way of example, and other embodiments may utilize different scenarios or routing algorithms. For example, the described routing algorithms can be readily extended to other scenarios with alternate bandwidth constraints. As will be described below, scenarios A and B require no over-provisioning of bandwidth but have strict limitations on the acceptable service loss on failure. These scenarios are examples of what are referred to herein as "no over-provisioning" or NOP scenarios. In contrast, scenario C utilizes over-provisioning to achieve protection. Such a scenario is an example of what is referred to herein as a "require over-provisioning" or ROP scenario. Scenario A generally involves routing a given data service, for example, a particular gigabit (Gb) Ethernet service, such that a single node or link failure does not affect more than a designated amount X of the total bandwidth.
This type of scenario may arise when traffic is provisioned for a peak rate but a service provider needs to ensure that an average rate, such as 30% below the peak rate, is maintained even after failures. Scenario B generally involves routing a data service such that a single node or link failure affects the minimum bandwidth. Scenario C generally involves routing a data service with over-provisioning such that minimum overbuild is required to protect against a single node or link failure.

The routing algorithms α, β and γ described in conjunction with FIGS. 2, 3 and 4, respectively, are designed to exploit the flexibility provided by the VCG structure as previously described in conjunction with FIGS. 1A and 1B.

FIG. 2 shows the routing algorithm α for scenario A. As indicated above, this type of scenario may arise when traffic is provisioned for a peak rate but a service provider needs to ensure that a certain average rate is maintained even after failures. This is likely to be a common scenario in practice as service providers may not be willing to provide additional bandwidth in order to protect data services. However, such service providers will often be interested in limiting the extent of the damage on failures. Moreover, critical services tend to be provisioned at their peak rates, and thus a temporary failure may not necessarily impact end user performance.

Routing algorithm α takes as input a network representation G(V, E), which generally denotes a graph G comprising a set of vertices V interconnected by edges E. The vertices and edges are also referred to herein as nodes and links, respectively. The input also includes a traffic demand D, illustratively referred to as a "new" demand, for an amount of bandwidth B, and a maximum amount of bandwidth X that may be impacted on failure. The routing problem addressed by algorithm α may be characterized as routing the demand D in G such that a single link failure does not affect more than the designated maximum amount X. The output of algorithm α is a set of routes for members of the VCG carrying the demand D.

The operation of algorithm α is as follows. Algorithm α initially lets STS-Fc and STS-Yc be the smallest SONET frame rates that can carry B and X, respectively. This allows F and Y to be determined from B and X, respectively. For each of the edges in E, algorithm α sets the corresponding capacity either to N units of flow, identified by determining the highest SONET rate STS-Nc that can be carried by the edge, or to Y units of flow, whichever is the smaller of the two. Thus, all link capacities in G reflect the largest SONET rate that can be carried and are restricted to a maximum of Y units. Then, algorithm α finds a minimum-cost feasible flow of F units in the graph G. In other words, algorithm α determines an appropriate routing for F units of flow through the graph G.

Any type of conventional flow routing algorithm may be used to determine the feasible flow for F units of flow in the graph G. For example, path augmentation based maximum flow algorithms may be used, such as those described in L. R. Ford, Jr. and D. R. Fulkerson, "Flows in Networks," Princeton University Press, 1962, and J. Edmonds and R. M. Karp, "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems," Journal of the ACM, Vol. 19, No. 2, 1972, both of which are incorporated by reference herein. Since the requirement is only to route F units of flow, these flow routing algorithms can be stopped once sufficient flow has been routed.
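The capacity transformation and flow-routing core of algorithm α can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: VCG members are taken to be STS-1 circuits (one unit of flow each), link bandwidths are given in Mbps, and the networkx library's max_flow_min_cost routine stands in for the minimum-cost flow algorithms cited in the text; the helper names (sts1_units, route_members) are invented for this sketch.

```python
import networkx as nx

STS1_MBPS = 52  # approximate STS-1 rate, as in the text's example

def sts1_units(link_mbps):
    # Largest SONET rate the link can carry, expressed in STS-1 units.
    return int(link_mbps // STS1_MBPS)

def route_members(links, src, dst, F, Y):
    """Route F unit-flow VCG members so that no link carries more than Y.

    links: iterable of (u, v, available_mbps, cost) tuples.
    Returns a list of F paths (one per member), or None if infeasible.
    """
    G = nx.DiGraph()
    for u, v, mbps, cost in links:
        cap = min(sts1_units(mbps), Y)  # link capacity capped at Y units
        if cap > 0:
            G.add_edge(u, v, capacity=cap, weight=cost)
    # A super-source edge of capacity F asks for exactly F units of flow.
    G.add_edge("_src", src, capacity=F, weight=0)
    flow = nx.max_flow_min_cost(G, "_src", dst)
    if sum(flow["_src"].values()) < F:
        return None  # demand cannot be routed under the Y-unit restriction
    paths = []  # decompose the flow into F unit paths, one per member
    for _ in range(F):
        path, node = [src], src
        while node != dst:
            nxt = next(v for v, f in flow[node].items() if f > 0)
            flow[node][nxt] -= 1
            path.append(nxt)
            node = nxt
        paths.append(path)
    return paths
```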
In any given network, there may be various distinct solutions for routing F units of flow, and thus algorithm α determines the minimum-cost solution. Such minimum-cost feasible flow solutions can be computed using conventional minimum-cost flow algorithms. Exemplary minimum-cost flow algorithms are described in R. K. Ahuja et al., "Network Flows: Theory, Algorithms, and Applications," Prentice Hall, 1993, J. B. Orlin, "A Faster Strongly Polynomial Minimum Cost Flow Algorithm," Proc. of the 20th ACM Symposium on the Theory of Computing, pp. 377-387, 1988, and W. S. Jewell, "Optimal Flow through Networks," Interim Technical Report 8, Operations Research Center, MIT, Cambridge, Mass., all of which are incorporated by reference herein.

The determined minimum-cost feasible flow identifies the set of routes for members of the VCG carrying the demand D. More specifically, once F units of flow are routed through G, F paths of unit flow may be extracted and each path may then be used to route a VCG member made up of an STS-1 circuit. The foregoing embodiment assumes that the VCG members are representable as or otherwise comprise STS-1 circuits. It should be understood that these and other assumptions referred to herein are not requirements of the invention, and need not apply in other embodiments. For example, in alternative embodiments, the VCG members may comprise other types of circuits, such as STS-3 circuits, as will be described elsewhere herein.

As an example of the operation of algorithm α, consider a scenario in which the requirement is to transport a 120 Mbps Ethernet service from a source node to a sink node such that a single failure does not impact more than ⅔ (67%) of the traffic. Transporting a 120 Mbps Ethernet service requires an STS-3c (≈156 Mbps) equivalent SONET frame rate. In accordance with current VC protocols, this can be achieved by either one STS-3c circuit or three STS-1 circuits. The tradeoffs between selecting STS-1 and STS-3c members will be described in greater detail below. Assume for the present example that the 120 Mbps Ethernet service is transported on a three-member STS-1 VCG. Since the requirement in this example is that at least 40 Mbps of the traffic (33% of 120 Mbps) is protected against one failure, it is necessary for at least one STS-1 member to survive the failure.

As described above, algorithm α alters the link capacities such that the new link capacities reflect the largest SONET rate (STS-Nc) that can be carried. In the present example, a link with available bandwidth of 100 Mbps can carry only a single STS-1 circuit (≈52 Mbps), and hence its updated capacity is one unit of flow. Also, to ensure that no link failure results in failure of more than two members in this example, no link may be permitted to carry more than two units of flow. This constraint is captured by restricting the link capacities to a maximum of two units. Thus, a link which could otherwise support an STS-3c circuit, or three units of flow, is assigned an updated capacity of only two units by algorithm α in the present example.

It should be noted that algorithm α as illustrated in FIG. 2 handles only link failures and not node failures. However, algorithm α can be augmented using a standard graph transformation that splits each node into an ingress node and an egress node and inserts a link of requisite capacity between them. Such a graph transformation is described in, for example, the above-cited R. K. Ahuja et al. reference.
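Continuing the sketch above, the 120 Mbps example can be exercised on a small hypothetical topology (the link rates and costs are invented for illustration). With F = 3 members and the per-link cap Y = 2, the two-unit restriction forces one member onto the lower-bandwidth route:

```python
# Two 155 Mbps hops via A (two STS-1 units each, capped at Y=2) and
# two 100 Mbps hops via B (one STS-1 unit each).
links = [("S", "A", 155, 1), ("A", "Z", 155, 1),
         ("S", "B", 100, 1), ("B", "Z", 100, 1)]
print(route_members(links, "S", "Z", F=3, Y=2))
# -> [['S', 'A', 'Z'], ['S', 'A', 'Z'], ['S', 'B', 'Z']]
```

A failure of either hop via A then takes down two members, while the third member survives on the route via B, satisfying the example's requirement.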
The other routing algorithms described herein may be modified in a similar manner to account for node failures. These remaining algorithms, like algorithm α, will also be described as handling only link failures, with the understanding that the algorithms can be modified in a straightforward manner to handle node failures using a graph transformation or other suitable technique.

FIG. 3 shows the routing algorithm β for scenario B. This scenario is similar to scenario A except that the requirement is to minimize the extent of damage on failure. In a network with complete route diversity, all of the flows are routed on disjoint paths such that any failure will affect only a single unit flow. At the other extreme, in a network with no route diversity, all of the flows are carried on one route, and a failure will affect all of the flows. Therefore, the problem of minimizing the damage on failure may be generally viewed as finding a solution in between these two extremes. As will be described below, algorithm β achieves this objective in an effective manner.

Routing algorithm β takes as input a network representation G(V, E), and a traffic demand D for an amount of bandwidth B. The routing problem addressed by algorithm β may be characterized as routing the demand D in G such that a single link failure affects the minimum amount of traffic. The output of algorithm β is a set of routes for members of the VCG carrying the demand D.

The operation of algorithm β is as follows. Algorithm β initially lets STS-Fc be the smallest SONET frame rate that can carry B. Once F is determined from B in this manner, algorithm β chooses a value of Y, representing damage on failure, by doing a binary search between 1 and F. For each value of Y, algorithm β first alters the link capacities as in algorithm α and then attempts to route the flow of F units in G. For each value of Y, algorithm β attempts to find a solution, if such a solution exists, where bandwidth B can be routed such that no link failure will affect more than an STS-Yc amount of bandwidth, again assuming that VCG members comprise STS-1 circuits. The smallest value of Y for which F units of flow can be routed in G is the best solution.

FIG. 4 shows the routing algorithm γ for scenario C. This scenario permits the provisioning of an additional amount of bandwidth, beyond bandwidth B, which can be used to completely restore the circuit after a failure. As described previously herein in conjunction with FIGS. 1A and 1B, backup VCG members may be provisioned for this case. To minimize the number of additional VCG members required for protection, only the minimum number of members should be affected on failure. In other words, if Y (1 ≤ Y ≤ F) members are allowed for protection bandwidth, no link should carry flows from more than Y members, or equivalently no link should carry more than Y units of flow. Again, it is assumed that VCG members comprise STS-1 circuits, such that each member corresponds to a unit of flow. Thus, the problem of provisioning F members to transport a VCG of bandwidth B with complete protection can be characterized as routing F+Y units of flow in graph G such that no link carries more than Y units of flow.

Routing algorithm γ takes as input a network representation G(V, E), and a traffic demand D for an amount of bandwidth B. The routing problem addressed by algorithm γ may be characterized as routing the demand D in G with the minimum additional protection bandwidth such that a single link failure does not impact traffic.
The output of algorithm γ is a set of routes for members of the VCG carrying the demand D. The operation of algorithm γ is as follows. Algorithm γ initially lets STS-Fc be the smallest SONET frame rate that can carry B. Once F is determined from B in this manner, algorithm γ chooses a value of Y, representing damage on failure, by doing a binary search between 1 and F. For each value of Y, algorithm γ first alters the link capacities as in algorithm α and then attempts to route the flow of F+Y units in G. For each value of Y, algorithm γ attempts to find a solution, if such a solution exists. The smallest value of Y for which F+Y units of flow can be routed in G is the best solution.

It should be noted that routing algorithm γ does not require that the primary and the backup members be diversely routed. This is a fundamental difference from standard protection algorithms that enforce this constraint. In fact, routing algorithm γ simply ensures that each link carries at most Y units of flow without enforcing any diversity. Therefore, if a link failure affected I active and J backup members, then Y ≥ I+J. Since routing algorithm γ routed Y backup members (F+Y units of flow) in total, Y−J backup members will definitely survive the failure. However, Y−J ≥ I. Therefore, it is guaranteed that at least I backup members are still present to support all the failed active members. This loosening of the diversity requirement ensures that routing algorithm γ is also effective in smaller-size networks, and in networks having limited connectivity among nodes. Thus, routing algorithm γ is well-suited for use in many practical applications, such as those in which service providers gradually build up their mesh infrastructure.

As noted previously, the above description of the α, β and γ routing algorithms assumed a VCG with STS-1 members. Other types of VCG members may be considered, however, such as STS-3c members. The tradeoffs between STS-1 members and STS-3c members are as follows. Use of STS-1 members increases the probability of the requisite routes being found compared to an arrangement in which STS-3c members are used. However, higher network management overhead will also be incurred in the STS-1 case since there are three times as many members to be provisioned. Due to its lower granularity, STS-1 also enables a better match between the data rate of the service and the SONET rate for the VCG. Moreover, in an arrangement such as that described in conjunction with FIGS. 1A and 1B, which protects against a single failure, a VCG with STS-1 members will require a lower protection bandwidth compared to one with STS-3c members. Thus, it is generally preferred to build the VCG using STS-1 members unless the management overheads are prohibitive.

With regard to complexity, the assignment of link capacities to equivalent SONET rates has a complexity O(E). For algorithm α, use of a successive shortest path algorithm such as that described in the above-cited W. S. Jewell reference may require, in the worst case, F shortest path computations to route F units of flow, resulting in complexity O(FE log V). Thus, the worst-case complexity of algorithm α is O(FE log V). Algorithms β and γ both use binary search, which in the worst case may make log(F) invocations of the flow routing step. Thus, their worst-case complexity is given by O(FE log (F+V)).
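Algorithms β and γ thus reduce to binary searches over Y around the same capacity-limited routing step. The sketch below is illustrative only, reusing the hypothetical route_members helper from above and following the search between 1 and F described in the text:

```python
def algorithm_beta(links, src, dst, F):
    # Smallest Y for which F units can be routed with per-link cap Y,
    # i.e. the minimum damage a single link failure can inflict.
    lo, hi, best = 1, F, None
    while lo <= hi:
        Y = (lo + hi) // 2
        paths = route_members(links, src, dst, F, Y)
        if paths is not None:
            best, hi = (Y, paths), Y - 1  # feasible: try smaller damage
        else:
            lo = Y + 1                    # infeasible: allow more damage
    return best

def algorithm_gamma(links, src, dst, F):
    # Smallest Y for which F+Y units (F primary plus Y backup members)
    # can be routed with per-link cap Y, for full single-failure protection.
    lo, hi, best = 1, F, None
    while lo <= hi:
        Y = (lo + hi) // 2
        paths = route_members(links, src, dst, F + Y, Y)
        if paths is not None:
            best, hi = (Y, paths), Y - 1
        else:
            lo = Y + 1
    return best
```

On the four-link example topology given earlier, algorithm_beta(links, "S", "Z", 3) returns Y = 2: with Y = 1 the three members cannot all be routed over unit-capacity links, so a single failure must be allowed to affect two members.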
It is important to note that F generally cannot be an arbitrarily large number, since F in these example routing algorithms refers to the equivalent SONET frame rate (STS-Fc) to support a circuit of bandwidth B. The highest SONET frame rate currently defined by the standards is STS-768 and thus, F under these standards will never exceed 768.

The routing algorithms α, β and γ described above should be viewed as examples of routing algorithms in accordance with the invention, and the invention can be implemented using other types of routing algorithms. Another exemplary routing algorithm in accordance with the present invention will now be described. This routing algorithm is utilizable in a scenario in which a Gb Ethernet circuit or other traffic demand is to be routed such that M VCG members are provided with 1+1 primary-backup protection, wherein each primary route has a corresponding backup route, while K VCG members are unprotected. As a more particular example, the values of M and K may be selected such that M=15 and K=6, although other values could of course be used. Advantageously, the routing algorithm to be described below provides in this scenario improved resiliency with less than 100% bandwidth overhead.

The routing problem addressed by the routing algorithm may be characterized as follows. Given a network representation G(V, E), route 2M+K units of total flow such that each of the 2M units comprises a two-route flow. The algorithm first determines a minimum-cost flow routing for 2M+K units of flow such that no link carries more than M+K units of flow. If there exists a feasible flow graph of this type, then there exist M 1+1 protected routes and K unprotected routes in the feasible flow graph. The solution to the minimum-cost feasible flow determination provides the routes of smallest total cost. The algorithm then iteratively extracts 1+1 protected routes from the feasible flow graph. More specifically, in each iteration i, the algorithm extracts a 1+1 route such that no link in a residual graph contains more than M+K−i units of flow.

The protection bandwidth required by this example routing algorithm, expressed as a percentage, is given by M/(M+K) % (for the example values M=15 and K=6, this is 15/21, or about 71.4%).

With regard to various possible failures of routes determined by this example routing algorithm, in the best case a failure only affects a link carrying the flow of a 1+1 protected member. In this case, there is no impact to the traffic since the traffic can be switched onto a corresponding backup. On the other hand, in the worst case, a failure affects a link carrying all the unprotected members. The traffic impact in this case is K members, or, expressed as a percentage of the total traffic, K/(M+K) % (6/21, or about 28.6%, in the example). On average, assuming failures are equally likely, the traffic impact will be half of all the unprotected members.
This is a traffic hit of K/2 members, or, expressed as a percentage of the total traffic, (K/2)/(M+K) % (3/21, or about 14.3%, in the example). Again, the foregoing routing algorithm is presented by way of example, and other routing algorithms can be used in implementing the invention.

A given one of the routing algorithms described above, and an associated restoration technique, may be implemented in a network-based system comprising a plurality of network nodes. Exemplary network and network node implementations of the invention will now be described with reference to FIGS. 5, 6 and 7.

FIG. 5 shows an exemplary network-based system 50 in which techniques of the present invention can be implemented. The system 50 includes a network 52 and a central controller 54. The network 52 may comprise, by way of example, an Internet protocol (IP)-optical wavelength division multiplexed (WDM) mesh network, although the invention may be utilized with any type of network. The network 52 includes a number of nodes 56-i, i=1, 2, . . . N. Each of the nodes 56-i includes a corresponding nodal processor 58-i. The nodes 56-i of network 52 are interconnected by, for example, optical fiber connections 62. In this example, each of the nodes 56-i has a fiber connection to three other nodes. Each of the nodes 56-i is also connected to the central controller 54 via a corresponding operations link 60-i, shown as a dashed line in FIG. 5.

The central controller 54 and nodes 56-i may each represent a computer, server, router, gateway or other suitable digital data processor programmed to implement at least a portion of a routing algorithm and associated restoration technique of the present invention.

It should be noted that the system of FIG. 5 is considerably simplified for purposes of illustration. The invention is well-suited for use in large-scale regional, national and international networks which may include many subnetworks, each having hundreds of nodes.

The central controller 54 may or may not participate in network restoration, depending upon the particular implementation. For example, a fully distributed implementation need not utilize the central controller 54.

FIG. 6 shows one of the nodes 56-i of network 52 in greater detail. The node 56-i includes a nodal processor 58-i which includes a central processing unit (CPU) and memory. A set of input links 64, corresponding to fiber connections 62 with three other nodes, are connected to buffers 70-1, 70-2 and 70-3 in node 56-i. The node 56-i supplies signals to three other nodes via a set of output links 66 also corresponding to fiber connections 62. The output links 66 are connected to buffers 72-1, 72-2 or 72-3. The buffers 70-1, 70-2 and 70-3 may provide optical-to-electrical conversion for signals received on input links 64, while the buffers 72-1, 72-2 and 72-3 may provide electrical-to-optical conversion for signals to be transmitted on output links 66.

The operational link 60-i of node 56-i to the central controller 54 includes an input operational link which is coupled to nodal processor 58-i via an input buffer 74, and an output operational link which receives signals from nodal processor 58-i via an output buffer 75.
The node 56-i also includes a demand database 76 for storing demands for network capacity, and a set of routing tables 77 which specify routes through the network for particular demands. The demand database 76 and routing tables 77 may be components of a common memory within node 56-i, and may be combined with or otherwise associated with the memory of nodal processor 58-i. The node 56-i has been simplified for purposes of illustration, and as noted above may include a substantially larger number of input and output links, as required for a given application.

FIG. 7 shows another exemplary implementation of a given one of the network nodes 56-i of the network-based system of FIG. 5.

The network node 56-i in this example includes a controller 80, a switch fabric 82, a first line card 84 having a set of OC-x ports 85 associated therewith, and a second line card 86 having a set of OC-x ports 87 associated therewith. It should be understood that the node 56-i has again been simplified for purposes of illustration. For example, the node 56-i as shown in FIG. 7 may in practice include a substantially larger number of line cards and ports, as required for a given application.

The controller 80 includes a processor 90 and a memory 92. The processor 90 may be, e.g., a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC) or other type of processing device, as well as portions or combinations of such devices. The memory 92 may include an electronic random access memory (RAM), a read-only memory (ROM) or other type of storage device, as well as portions or combinations of such devices. The memory 92 may be used to store a demand database for storing demands for network capacity, and a set of routing tables which specify routes through a corresponding network for particular demands, with the routes being determined at least in part using a routing algorithm of the present invention.

As indicated previously, the node 56-i may be an element of an optical network or other type of network which includes a very large number of nodes, and possibly a central controller. One or more of the nodes and the central controller may each represent a computer, processor-based switch or other type of processor-based device configured to provide routing and associated restoration in accordance with the invention.

The implementations described in conjunction with FIGS. 5, 6 and 7 are presented by way of example, and it is to be appreciated that the invention can be implemented in numerous other applications.

Advantageously, the present invention provides improved routing algorithms which facilitate the implementation of low-overhead, standards-compliant fast restoration techniques for EoS data traffic or other types of data traffic which utilize virtual concatenation. Another advantage of the techniques in the illustrative embodiments is that these techniques may be configured for backwards compatibility with existing VC and LCAS standards, thereby allowing a network element configured to implement a restoration technique of the present invention to interoperate with standard VC and LCAS enabled network elements.

The above-described embodiments of the invention are intended to be illustrative only. For example, the restoration techniques of the invention may be applied to any routing application, without regard to the type, arrangement or configuration of the network, network nodes, or communication protocols.
For example, although described in the context of virtually-concatenated data traffic, the example routing algorithms described herein can be modified in a straightforward manner so as to be applicable to a wide variety of other types of data traffic, including data traffic that does not utilize virtual concatenation. Also, in alternative embodiments the particular steps utilized in a given routing algorithm may be varied from those steps shown in the foregoing example routing algorithms. These and numerous other alternative embodiments within the scope of the following claims will be readily apparent to those skilled in the art.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A illustrates a portion of a data transmission network in accordance with an illustrative embodiment of the invention.

FIG. 1B shows an example of a provisioned set of routes for a given virtually-concatenated group (VCG) configured in accordance with the invention.

FIGS. 2, 3 and 4 show exemplary routing algorithms utilizable for determining routes through the data transmission network of FIG. 1A in accordance with the invention.

FIG. 5 shows an example network-based system in which a routing algorithm in accordance with the invention is implemented.

FIGS. 6 and 7 show different possible implementations of a given network node in the system of FIG. 5.
Our successful, well-established client is seeking an Engineering Manager to be the primary Engineering leader and technical expert for the team. Specific responsibilities include: • Creates detailed Engineering plans, processes and procedures for Engineering staff. • Hires, trains and guides team members effectively. • Manages projects, timelines and objectives to meet on-time requirements for deliverables, including proper BOMs and other pertinent information the production floor needs to build product properly. • Writes effectively, updates specifications, and communicates internally and externally with customers in an effective manner. • Works with all relevant internal resources to support both internal and external projects. • Oversees mechanical, software, design, electrical and test engineering, DVT, test development, configuration and documentation support. • Supports all test requirements, validation testing, regression testing, USB, etc. • Familiar with test software and various hubs and platforms to facilitate proper test strategies and customer requirements. • Provides technical direction and support for the company. • Develops and maintains Bills of Materials and supporting documentation under revision control for all related process instruction sets for manufacturing, including the development of test procedures, qualification tests, ESS, vibe and other custom tests as required. • Supports all design validation tests, regression testing and product qualification to prove out design, functionality to specifications and proper software. • Supports technical development, qualification phases, product validation, proof of design and design verification processes internally and externally with the customer until the product design is stable. Interfaces with configuration management and Engineering to make updates as required to either specifications or processes. • Assists with technical submissions to the customer for qualification testing, maintains product documentation and ensures all required updates are tracked and logged appropriately. • Works with Materials to help identify and select components and proper crosses and alternatives. • Maintains the ECO processes and program within the company and drives continuous improvement. • Supports all qualification tests, vibe, ESS and other tests as required (both external and internal). Works with vendors to define test requirements, provide direction and field questions as they arise. • Interprets test yield data to identify possible improvements in test coverage and modifies tests as necessary to improve coverage and throughput. • Responsible for managing budgets and controlling costs within the department. • Assists Sales with the technical portions of proposals and with budgeting the time, resources and capital required. • Acts as the primary technical resource for the company and interfaces with customers and potential customers to support questions, proposals and problem resolution. Background profile: • Prior experience as an Engineering Manager • BS/MS EE degree required • Must be a US Citizen • Excellent customer service skills • Must have excellent communication skills (written and verbal) • Proficient in the Microsoft suite of products, including Word, Excel and PowerPoint • Experience with Defense programs a plus • Superior problem-solving abilities • Self-guided; works well independently
• Ability to manage multiple projects simultaneously • Maintains expert-level product knowledge and application • Outstanding organizational skills • Technical expertise and an understanding of electronic principles highly desired • Detailed knowledge of electronic circuits, both analog and digital • Experience with MRP systems (preferably INFOR Visual) • Familiarity with SolidWorks a plus • Analytical and capable of solving complex problems • Leadership skills necessary to manage a team effectively • Interpersonal communication skills, with expertise in distilling complicated topics for a broader audience • Familiarity with firmware creation, as well as multiple interface platforms such as USB and PS2.
https://www.triad-eng.com/job/engineering-manager/
Annotation: Literary criticism is one of the two philological sciences – the science of literature. The other philological science, the science of language, is linguistics (Lat. lingua – language). These sciences have much in common: both of them, each in its own way, study the phenomena of literature. In essence, literary criticism and linguistics are different sciences, since they set themselves different cognitive tasks. Linguistics studies all kinds of phenomena of literature, more precisely, the phenomena of people's verbal activity, in order to establish the features of the regular development of the languages spoken and written by the various peoples of the world. Key words: criticism, traditional linguistics, cognitive linguistics, social cognition, association. Literary criticism studies the fiction of various peoples of the world in order to understand the features and patterns of its own content and the forms expressing them. Nevertheless, literary criticism and linguistics constantly interact with each other and help each other. Along with other phenomena of literature, fiction serves as very important material for linguistic observations and conclusions about the general features of the languages of particular peoples. But the peculiarities of the languages of works of art, like any others, arise in connection with the peculiarities of their content. And literary criticism can give linguistics a great deal toward understanding these substantive features of fiction, which explain the peculiarities of the language peculiar to it. For its part, literary criticism, in studying the form of works of art, cannot do without knowledge of the features and history of the languages in which these works are written. This is where linguistics comes to the rescue, and this help differs at different stages in the development of literature. The subject of literary criticism is not only fiction, but all the world's artistic literature – written and oral. Against the background of an emerging holistic view of language based on physicalism (the embodiment of mind) and an understanding that language is a biological phenomenon rooted in semiosis as the experience of life, it is argued that a new philosophical framework for cognition and language is currently taking shape. This philosophy is best characterized as a synthesis of ideas developed in cognitive linguistics, semiotics and biology. These ideas bear directly on autopoiesis as the theory of the living, which possesses greater explanatory power as it assumes the experiential nature of language. Autopoiesis allows for deeper insights into the essence of language, which is viewed as a kind of adaptive behavior of an organism involving a meaning system constituted by signs of signs, thus making the unification of (humanistic) science an attainable goal. Cognitive linguistics (hereinafter CL) today is a conceptually well-established direction, characterized by cognitive attitudes that differ significantly from the rationalistic tradition in the study of natural language. The need to develop a new methodology became especially acute in the second half of the 20th century.
Mankind's entry into the post-industrial information age led to a certain crisis in the basic epistemological assumptions of so-called "traditional linguistics" and shaped the ideological base of "first-generation" cognitive science, which rested on a rationalistic approach to knowledge, namely on the central thesis of analytic philosophy that the mind is incorporeal and literal. At this stage, cognitive science was characterized by pure dualism, and the mind was described in terms of its formal functions (operations on symbols) regardless of the body that served as its container. Since then, CL has come a long way in its development. Starting from the attitudes of first-generation cognitive science, which grew out of the cognitive revolution of the mid-20th century and largely continued to stand on the philosophical platform of Cartesian dualism [Lepore, Pylyshyn 1999], cognitive linguistics, in the person of its best representatives, has revealed a remarkable capacity for internal development. The new horizons of knowledge opened up by a cognitive approach to language as a unique property of a living human organism have been a powerful stimulus for rethinking the theoretical baggage accumulated by linguistics over the past 200 years. In particular, the epistemological foundations of so-called "mainstream cognitive science" have also been rethought (at least by part of the linguistic community). On the agenda was the question of the "humanization" of linguistics, the application of research methods that take into account the complex nature of the phenomenon called "natural language". This movement, which originated within the framework of the first generation of cognitive science, officially took shape as a new ideology at the Duisburg congress in 1989, when the International Association for Cognitive Linguistics was created, proclaiming its goal "to promote the development and expansion of research in line with cognitive linguistics" – linguistics proceeding from the main idea that "language is an integral part of cognition, reflecting the interaction of cultural, psychological, communicative and functional factors." The number of representative international congresses devoted to the problems of cognitive linguistics is growing, special journals and yearbooks are being published (Cognition, Trends in Cognitive Sciences, Cognitive Science Quarterly, Cognitive Psychology, Cognitive Linguistics, Annual Review of Cognitive Linguistics, etc.), and the circle of issues considered in connection with the role of language in human life keeps widening. At the same time, some dissatisfaction with the results obtained over the past period is becoming noticeable in the international cognitive community. Significant in this regard was the most recent, 8th International Conference on Cognitive Linguistics, held in July 2003 in the Spanish city of Logrono, which brought together a record number of participants (over 500) from more than forty countries. The theme of the conference itself was noteworthy: "Cognitive Linguistics, Functionalism, Discourse Studies: Common Ground and New Directions", which unequivocally indicates that representatives of different directions are aware of the need to comprehend the situation created in modern linguistics by the lack of a single general theoretical method.
Different perspectives on the vision and study of language, whatever their positive contribution to the science of language as a whole, cannot obscure the fact that today there are, in fact, several linguistics, albeit united by a common object of study. However, the presence of a common object is not in itself a condition for the coincidence of goals and objectives, especially if the ideal project of linguistics is not defined. It is necessary to find ways and means of integrating the knowledge of language obtained within the framework of various scientific fields, and the congress in Spain showed that such a process has begun. Indicative in this respect were the plenary reports "Social Cognition: Variation, Language and Culture. On the inevitability of cognitive sociolinguistics", "Cognitive linguistics and functional linguistics, or: What's in a name?", "Basic discursive acts: when language and cognition turn into communication", and others. However, so that the process that has begun does not become another fashionable fad, a clear understanding is required of what needs to be integrated with what, for what purpose and on what basis. Thus arises what seems to be the main question for today: "What and for what purpose should linguistics study?" The question "What?" implies the need to define the understanding of language as an empirical phenomenon, and until this is done, the question "For what purpose?" hangs in the air. In cognitive linguistics we see a new stage in the study of the complex relations between language and thinking, a problem largely characteristic of Russian theoretical linguistics. The beginning of such study was laid by neurophysiologists, doctors and psychologists (P. Broca, K. Wernicke, I. M. Sechenov, V. M. Bekhterev, I. P. Pavlov and others). Neurolinguistics arose on the basis of neurophysiology (L. S. Vygotsky, A. R. Luria). It became clear that language activity takes place in the human brain, and that different types of language activity (language acquisition, listening, speaking, reading, writing, etc.) are associated with different parts of the brain. There are many cognitive factors that affect language learning. Among the cognitive factors are memory, attention and awareness, forgetting, and the context or environment in which the learning process takes place. Memory plays a part in bringing about a higher or lower level of language mastery. If individuals better understand the role of memory, as well as the roles of attention, awareness and forgetting, they will be able to achieve a higher rate of language proficiency. In addition to the cognitive factors affecting language learning mentioned above, there are some metacognitive ones that refer to the strategies the learner should be fully aware of during the learning process; they are planning for learning, self-monitoring, self-evaluation, and setting priorities. The developments of cognitive linguistics are becoming recognized methods of analyzing literary texts. Cognitive poetics has become an important part of modern stylistics.
https://inlibrary.uz/index.php/dllseteg/article/view/5500
Muscat: The CEO of Alizz Islamic Bank, Mr. Sulaiman Al Harthi, honored the top performing staff of the first half of the current year at a ceremony held at the bank's head office in Ruwi, in the presence of members of the management team. The CEO thanked the employees for their outstanding achievements, urging them to continue their good work during the remaining few months of the year, and assured them that achievements will always be rewarded. He noted that the most important thing is how to sustain such good performance and leverage it to help the Bank become the first choice for all customers, which is only possible by offering the highest quality Shariah-compliant products and services. Al Harthi commended the employees who demonstrate a high sense of responsibility, confirming that the senior management of the bank has always focused on the importance of recognizing exemplary talent, and will always offer support to grow and prosper. The CEO further commended the achievements of the various departments and sections, encouraging the employees to go further towards the strategic goals of the bank and to keep up to date with the latest developments in Islamic banking now more than ever. Alizz Islamic Bank has received several awards in recognition of its outstanding banking services and products, which reflects the Bank's good understanding of market requirements and needs. Since its inception in 2013, the bank has made investment in the community in which it operates one of its top priorities, along with investments in education, youth development and increased awareness of Islamic finance as a proven means of social welfare. Additionally, the bank strives to assist its customers across Oman by providing access to convenient, Sharia-compliant banking services. Moreover, organizational development has been one of the main strategic axes of Alizz Islamic Bank over the past two years, with the firm belief that human capital is a key driver of the overall growth of the organization. Several interdepartmental meetings were held to understand the needs and aspirations of employees. As a result, many programs have been put in place to enhance employees' ability to effect positive change and increase their effectiveness. These programs involved improving strategies, structures and processes to increase the likelihood of impact and contribution to the Bank's objectives, boosting productivity and financial awareness. In addition, the Bank's infrastructure has been developed to provide customers with innovative financial solutions and high quality customer service through various communication channels such as the website, internet banking, mobile banking, SMS, the call center, ITMs, ATMs and branches.
https://dinahsdoodles.com/alizz-islamic-bank-ceo-honors-top-employees/
Excerpts on alternative ways to generate electricity:
- 5 unusual ways to keep your house warm: a useful reminder that we can meet our basic needs with a little ingenuity and some simple materials.
- Hydro power generates 16% of the world's total electricity. There are many types of hydro power plants, including dams and underground complexes, but they all generate power using the force of falling or flowing water, and the amount of electricity produced at a hydro plant is flexible.
- According to the US Department of Energy, standby devices account for 10 percent or more of your electricity bill. One of the best ways to nix these power wasters is to plug them into a power strip.
- Renewable energy from evaporating water: an ingenious, useful and environmentally friendly alternative way to generate electricity.
- How to get well water without electricity: the easiest way to make a vacuum pump for a well is a "T" configuration, with the T lying on its side so that the vacuum section is offset to the side of the line of the well; the fit needs to be snug for the pump to work.
- When it comes to "alternative" ways to generate electricity, solar is just about the most expensive form of energy you can buy.
- Green electricity tariffs: there are concerns about the sustainability of sourcing biomass from countries where forests are being cleared to make way for fast-growing plants.
- Mauritius uses sugar as an alternative way to generate electricity; electricity generated from sugar cane accounts for 14% of the island's needs.
- Just like windmills and wind turbines that generate power from the wind, scientists are now working to generate power from the sea, along with new ways of making solar panels more efficient.
- Grid-tied systems use solar panels installed on your home's roof to generate electricity that feeds into the utility grid, thereby lowering your monthly energy costs.
- 4 ways to heat your house without using electricity.
- Commonly known alternative energy sources: hydroelectric, solar, wind, biomass, geothermal and tidal power. Geothermal energy yields pressurized steam that can run steam turbines to generate electricity, in much the same way as coal.
- 3 ways to generate electricity at home: residential solar power, which relies on photovoltaic (PV) panels, makes sense as a way to generate power close to the user while eliminating the use of fossil fuels; the payback is on the installation cost.
- 8 ingenious ways of generating electricity: a windmill was first used to produce electricity relatively recently, in 1887 in Glasgow. The wind turbine is an established idea in alternative energy circles, but the windbelt, conceived by Shawn Frayne, is relatively new.
- Researchers have developed a way to "interrupt" photosynthesis and redirect the electrons before they are used to make sugars, which is potentially important for energy generation.
- Alternative energy encompasses sources that do not consume fossil fuel; ocean energy alone can be tapped in three ways: tidal energy, wave energy and ocean thermal energy.
- While many technologies derive fuel from one form of solar energy or another, there are also technologies that directly transform the sun's energy into electricity.
- It is possible to create extremely cheap electricity from your own home and stop giving money to the power companies.
- An alternative way to generate electricity (video); Heat Trap: a new way to generate electricity using nanotechnology.
- While designing for efficiency is the best way to achieve high levels of energy conservation, there are lots of retrofits that help even if they don't technically generate electricity.
- You can create electricity by moving a magnet across a loop of wire, or by moving a loop of wire across a magnetic field. It's not enough just to place a wire near a magnet: to generate electricity, either the wire has to move past the magnet or vice versa.
- Pumped hydro storage does not strictly produce electricity: excess electricity is used to pump water into a reservoir, and the water is then run through gravity-fed turbines to generate electricity during periods of high demand, so it is really a way of storing electricity.
- Static electricity: everything around us is made of atoms, and scientists so far know of 118 kinds, called elements. One very common way to create a static charge is to rub together two objects made of different insulating materials.
- Electricity is a convenient source of energy and can be generated in a number of different ways; you will need to weigh up the advantages and disadvantages of each.
- 9 weird ways we can harness the wind's energy, including a project that will use a pair of kites to generate 2 to 3 megawatts of electricity.
- Living off the grid: how to generate your own electricity. One reader comments that finding the many different ways to reduce the ever-rising costs of living, while reducing global warming at the same time, is a win-win situation.
- A non-electric room heater that relies on black paint and pennies to absorb the sun's rays more efficiently can heat a room for about $0.12 per day.
- Six ways to generate energy at home: when you install photovoltaic panels on your roof or in the back yard, you can generate your own electricity or sell it back to the local utility; in addition to the panels, you will need a battery to store electricity produced during the day.
- 3 clever new ways to store solar energy: researchers struggle to find the most efficient, and least expensive, way to bring solar energy to consumers even when the sun isn't shining.
- Before getting green power into your home, perhaps this question should be answered: why make this sort of investment at all?
- The adverse effects of burning fossil fuels have left us in dire need of an alternative; alternative energy is any energy source that provides an alternative to the status quo. One approach used turbines to generate electricity, much as hydroelectric power does at a dam, and more recently there are wave-energy projects such as CETO.
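Two of the excerpts above describe the same underlying physics, electromagnetic induction. As a brief clarifying aside (standard physics, not taken from the page itself), Faraday's law says the induced voltage is proportional to the rate of change of magnetic flux through the loop:

$$
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}
$$

Because the EMF depends on the time derivative of the flux, a stationary wire beside a stationary magnet induces nothing; either the loop or the field must change, which is exactly why the wire has to move past the magnet or vice versa.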
https://www.etsiviaggiarecisl.it/alternate_ways_to_generate_electricity_39942
# Amata passalis

Amata passalis, the sandalwood defoliator, is a moth of the family Erebidae first described by Johan Christian Fabricius in 1781. It is found in Sri Lanka and India.

## Biology

The average life cycle of the species in captivity is 62 days. After mating, the adult female lays about 305 eggs over a lifespan of 3.87 days. The species breeds all year round and passes through 6 to 11 generations a year. There are eight larval instars; first and last instar larvae are about 1.97 mm and 29.29 mm in length, respectively. Adults usually emerge within 1 to 2 hours of sunrise and are ready to mate after a day. It is known mainly as a defoliator of sandalwood (Santalum album) in India. It is also recorded on various alternate food plants, mainly cowpeas, various other pulses, and ornamental plants. Larvae of the parasitoid Apanteles nepitae can be used to control the moth.

### Host plants

- Phaseolus vulgaris
- Santalum album
- Trichosanthes anguina
- Vigna unguiculata
- Capsicum annuum
- Brassica caulorapa

## Gallery

Image captions: eggs; female with eggs; mating pair; adult in Sri Lanka; adult in India.
https://en.wikipedia.org/wiki/Amata_passalis
Roger Federer Net Worth Roger Federer net worth and total career earnings: Roger Federer is a Swiss-born professional tennis player who has a net worth of $450 million and total ATP career earnings of $124 million (as of July 2019). Roger Federer was born on August 8th, 1981 in Basel, Switzerland to parents Robert and Lynette. Federer began playing tennis with his parents and older sister Diana at a very early age and quickly showed signs that he had great talent for the game. At the age of eight, Roger joined the Basel junior tennis program, and at ten he met Australian player Peter Carter, who saw the youngster's potential. Carter and Federer trained together for the next four years until, at the age of 13, Roger accepted an invitation to attend Switzerland's national tennis training center. At the time, the training center was located two hours away from Roger's home, in a part of Switzerland that spoke mostly French. Federer trained there for three years until a new facility was opened closer to his home in Biel. Peter Carter was one of the instructors at the new training center, and his guidance helped Federer quickly rise up the world's top junior rankings. As an amateur, Roger won the Wimbledon junior singles and doubles titles and was eventually the number one ranked ITF player in the world. Soon after turning pro in 1999, Roger reached the semi-finals of a tournament in Vienna. After a few more high-profile wins, Roger became the youngest member of the ATP's top 100. In 2000 Roger represented Switzerland at the Olympics. Though he did not win any medals, he did meet Miroslava Vavrinec, who was part of the Swiss national women's tennis team. They immediately began dating, and nine years later they were married. In 2001 Roger's potential came into full bloom when he won his first ATP singles title. He followed this with a triumph at the Davis Cup, where he and his fellow Swiss teammates defeated the United States. All of these successes led the press to coin the term "Federer Express" in their headlines. Roger went on to win his first two ATP doubles titles and end the season ranked #13 in singles. One night Roger received many missed phone calls from his coach Peter Lundgren. By the time he finally picked up the messages, Roger found out that his former mentor and friend Peter Carter had died. Carter's death shook Roger to the core. He realized that he had not lived up to what Carter had taught him, as a tennis player and even as a man. Roger decided at that moment that it was time to step up his game on and off the court. Roger Federer would eventually become the number one ranked player in the world, a title he held for a record 237 consecutive weeks from February 2004 to August 2008. As of this writing, he has won Wimbledon eight times, the Australian Open six times, the French Open once and the US Open five times. Federer has become one of the highest paid athletes in the world both on and off the court. Between June 2016 and June 2017, Federer earned an estimated $71.5 million, of which $65 million came from endorsements. Between June 2017 and June 2018, Federer earned $77.2 million. Between June 2018 and June 2019 he earned $94 million. Of that amount, roughly $86 million came from endorsements with companies like Credit Suisse, Rolex and Mercedes-Benz. Roger earns more from endorsements than any other athlete. In 2018 Roger signed a 10-year, $300 million contract with Japanese apparel brand Uniqlo.
https://www.celebritynetworth.com/richest-athletes/richest-tennis/roger-federer-net-worth/
caught a special fish. ‘As soon as I got it in the net I knew it was a record,’ Casey told Michigan Outdoor News. ‘I put it on a digital scale I had on the boat and I was getting 37 pounds. I knew that was a record, so we went looking for a certified scale.’ Once he found a certified scale, his assessment of the fish was confirmed. It weighed a whopping 36 pounds, 13 ounces, was 43 inches long, and had a girth of 27 inches, making it a new state record for brown trout. It likely will be a world-record line-class brown trout. The old state record for brown trout was 34 pounds, 9.9 ounces, for a fish caught in Lake Michigan out of Manistee on April 5, 2000. Larry Curtis caught that monster, which measured 40-1/2 inches long. Richey’s fish was caught early Sunday morning, May 13, in the shallows of Lake Michigan at Frankfort. ‘I caught it in an area I fish quite a bit in the spring, right off the pier heads,’ Richey said. ‘But I never thought I’d catch a fish like that.’ Richey was fishing with his 9-year-old son, Shane, and had just set his lines in the water at about 6 a.m. when the fish hit. The duo was making their first pass. ‘It hit a chartreuse No. 9 Rapala,’ the elder Richey said. ‘I was running planers and fishing around the pier heads. We were in 10 feet of water and it hit on an inside planer, so that fish must have been in five or six feet of water.’ After a 15-minute battle with the brown, Richey was finally able to ease the fish – by himself – into a landing net. Richey said his son usually sleeps for the first hour or so when they hit the water that early. This time, Shane woke up when the excitement mounted. ‘When I got it in the boat, (Shane) woke right up. We were both excited,’ Richey said. DNR biologists Tom Rozich and Todd Kalish confirmed the fish as a brown trout state record. ‘I haven’t heard a lot of input on brown trout,’ Kalish said. ‘They’re catching them but nothing like this one.’ The state-record brown is not the first big fish Richey has caught. In 1985 he caught a 24-pound brown, which he had mounted. In 2001, he caught a 17-pound brown that earned Master Angler honors, and a couple of years ago he caught a 10-1/4-pound walleye in Platte Lake. Casey Richey is the son of the late George Richey, a well-respected Michigan outdoors writer and antique lure collector. George was not far from his son’s mind when the new state record fish was caught. ‘My dad would have been so proud of this one,’ Casey Richey said.
https://www.outdoornews.com/2007/05/24/dnr-frankfort-angler-lands-state-record-brown-trout/
As part of the image mosaic process, we need to find the center of the mosaic. With the coordinates already identified by using the Image Plate Solver script, we can use that information to annotate our image with an astronomical grid. Image Annotation Script The Image Annotation Script is different from the Annotation process. Annotation lets you manually place objects onto the image; the Image Annotation Script automatically places grids, names, and patterns onto the image. Make sure your image is open and then launch the script, which is located at Scripts – Render – AnnotateImage. For our objective, the default settings are fine. Simply run the script. Find Center of Mosaic The result of the annotation script is a new image with a grid pattern applied. You can use this to find the coordinates of the center of your mosaic. This particular mosaic is 4 panels wide and 2 panels tall. Based on the layout, the center of my mosaic is in the bottom-right section of this image. The rough coordinates are: - RA: 6:32 - Dec: 5:02 Mosaic Dimensions Before we move on to the next step, we need to roughly determine how big the final mosaic will be. Within PixInsight, we can see that this image is 4537 pixels wide and 3450 pixels tall. Based on the layout of the mosaic panels, my final image will be - Width: 4537 × 4 = 18,148 pixels - Height: 3450 × 2 = 6,900 pixels However, these numbers assume each panel has no overlap and no rotation. To account for this, it is best to add a sizeable buffer to avoid losing any edges. Width: 18,148 pixels, padded to 20,000 pixels. Height: 6,900 pixels, padded to 10,000 pixels. What’s Next With the image solved, the center coordinates identified, and the overall size of the final mosaic calculated, we can generate a catalog of stars onto which we will align each panel.
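The padding arithmetic above is easy to script as a sanity check. Here is a minimal Python sketch with this post's panel counts and sizes plugged in as assumptions (plain arithmetic, not a PixInsight API call):

```python
import math

# Panel geometry from the example above (assumed values).
panel_w, panel_h = 4537, 3450  # pixels per solved panel
cols, rows = 4, 2              # mosaic layout: 4 panels wide, 2 tall

raw_w = panel_w * cols  # 18,148 px, assuming zero overlap and rotation
raw_h = panel_h * rows  #  6,900 px

def pad(raw: int, min_buffer: float = 0.10, round_to: int = 1000) -> int:
    """Add at least `min_buffer` slack for overlap/rotation, then round
    up to a clean figure so no panel edges are lost."""
    return math.ceil(raw * (1 + min_buffer) / round_to) * round_to

print(raw_w, raw_h)            # 18148 6900
print(pad(raw_w), pad(raw_h))  # 20000 8000 (the post pads height further, to 10000)
```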
https://www.chaoticnebula.com/pixinsight-image-annotation-script/
TALLAHASSEE, FL (#WTXLDigital) - A Tallahassee business is paying an online tribute to a stuffed toy that owners say was stolen from their restaurant. Voodoo Dog made a passionate plea for the toy's return on its Facebook page Friday morning. The toy in question is a stuffed Alf doll wearing a Hawaiian shirt. "If you see Hawaiian-shirt Alf, do the right thing and bring him back to where he belongs," said the post. In the comments, Voodoo Dog said the toy was taken from its Gaines Street location but that there is a second Alf at the other restaurant. One commenter was quick with a television show reference, saying, "I hope Alf eats the thief's cat!" Others paid tribute with #BringAlfHome.
Eliza’s Notes - 04: Age of Revolution It took quite some time, but society was finally able to recover from the damage it did to itself after the volcano. Another ten generations of kings since Rorozak V, and a breakdown into a few different nations that warred with each other, and from there the people began to break the shackles of their own system. I'm not sure exactly how I want to say how it happened, other than that the system simply got too big to sustain itself. As the population recovered it continued growing again, and continued expanding. It expanded all across the continent, and with it the unified empire broke down into smaller states of government, which in turn broke off into their own countries. Countries fought, countries grew, countries shrank, as vying for power began, the smaller areas wanting more power themselves and arguing with each other about who owned what land. Amidst all this, as people became more isolated from each other, new ideas formed, and new cultures spread. Differing ideas, differing opinions, differing ways of treating people and differing ways of reacting to what was happening. As different individual countries tried different ways of living to try to make their specific setting better, ideas drifted and traveled along, bringing the ultimate idea that rose from all of it: revolution. Not any one specific revolution, as we would refer to it. Smaller, more isolated revolutions. Revolutions of ideas, revolutions of thought. A revolution of ideas to increase efficiency and profit in trade led to increased ship travel, leading in turn to a revolution of living and the rise of port towns and the shipbuilding industry. In turn, this led to a revolution of exploration, with people finally deciding to venture forth and see what else was out there, what more resources could be found, and travel to the new world of the north continent. The land of the north is vast, and once people realized just how differently scaled the continent we began on is compared to it, everyone wanted to flock to it. Seemingly endless in resources, the new world drew the revolution of behavior to it, people staking their claims and using the vast expanse to start living how they wanted, no longer listening to and being subject to the rules of the nations they came from. It was a fresh start, just what was needed to wipe the slate clean after the stall the world entered from the volcano. The land to the north also brought a surprise I wasn't expecting: goblins. The Evreux, a nomadic, diminutive race on Aughylia, existed here on Vaudios in the mainlands. But they didn't just exist here, no, they thrived here. To a casual observer it may seem as if they were just as nomadic here, but, as I have met with them and learned from them, the truth of the matter is they have a vast underground network of roads and rails, spanning all around the world. There's no way they could have built so much in the time since escaping if they were transported here with the rest of us, so they must have originally come from here and themselves sent colonists to Aughylia sometime in the past. Perhaps so, as warlynxes are found all over here as well, and have clearly been trained and domesticated by the Evreux as pack animals. Learning more than that from them has been… difficult. Their language is a strange mishmash of compound words, and is as much cipher as it is speech.
Many of them have begun picking up our vocabulary from meetings with other humans, and will tend to speak with others in their language but with our wordset, which lends a basic understanding of what they're trying to convey but is still just as dizzying to try to follow. What I have been able to figure out from them is that, while their underground rail network is vast, they live very simply on the land because they don't want to repeat the mistakes of their past. They once had large cities, it seems, but suffered some massive population collapse, and over time dismantled what they had to live simpler lives. Once I gain a better grasp of their language I should be able to learn more. It is surprisingly easy for me to travel now, though. As people spread all around the world, they lose familiarity with each other. Most everyone is a stranger, and few will remember the strangers for very long. And there's so many people about, with so many coming and going, that there's hardly any chance for them to meet the same stranger over and over. I've entered a city from the same port multiple times, in a relatively short period of time, and every time I've met different people at the docks. And so with so many people around once more, there's hardly any chance of some random person remembering that they met this old woman ten, twenty, or thirty years earlier and finding it so strange that I'm still alive.
https://www.regularspelling.com/archive/2016-m04
I asked a question on Twitter about graph theory the other day. Unsurprisingly, it was tricky to convey what I wanted to know in such a short span, so this explains what I was wondering in more depth. I've got a problem involving graph layouts (think dot and graphviz). I want to write a GUI program that lets the user draw directed graphs interactively. That is, they start with a single node. Then, to modify that graph, there are five basic operations:
– add a node above: that is, a node _A_ becomes the graph _B_ -> _A_
– add a node below: _A_ -> _B_
– add an edge between two nodes
– delete an edge
– delete a node, and all connecting edges
And what I'd like to know is: to keep the UI responsive, can I precalculate the positions of every possible graph ahead of time? When the user makes a change, I can refer to the precalculated positions and keep the UI experience really fast. This depends on how many possible graphs there are. If I am only ever going to see, say, 500,000 possible layouts, it is possible to just spend a couple of days chugging through all the possible configurations and saving them in a database. Without any more limits, this is impossible: there are an infinite number of graphs. But I think I can make reasonable guesses about the types of graph I'm likely to see in my app. For instance:
– the graph is always connected
– no two nodes have a duplicate edge
– the graph has no cycles
– nodes generally won't have more than about five incoming edges and five outgoing edges
– a graph probably won't have more than about 100 nodes in it
So here's how I formulated the problem: given a directed acyclic graph with _N_ nodes, where no node has more than _E_ total incoming and outgoing edges, how many possible graphs are there?
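One way to gauge the feasibility question is to brute-force the count for small _N_. The sketch below is my own, under the post's stated constraints (acyclic, no duplicate edges, capped in/out degree; connectivity is not enforced): it counts labeled DAGs by testing every possible edge subset, so it is only usable for very small _N_.

```python
from itertools import combinations

def count_dags(n: int, max_in: int = 5, max_out: int = 5) -> int:
    """Count labeled DAGs on n nodes with per-node degree caps by brute
    force over every possible edge subset. Only usable for small n."""
    pairs = [(u, v) for u in range(n) for v in range(n) if u != v]

    def is_acyclic(edges) -> bool:
        # Kahn's algorithm: repeatedly strip nodes with in-degree zero.
        indeg = [0] * n
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            indeg[v] += 1
        stack = [v for v in range(n) if indeg[v] == 0]
        seen = 0
        while stack:
            u = stack.pop()
            seen += 1
            for w in adj[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    stack.append(w)
        return seen == n  # all nodes stripped => no cycle

    total = 0
    for r in range(len(pairs) + 1):
        for edges in combinations(pairs, r):
            indeg = [0] * n
            outdeg = [0] * n
            ok = True
            for u, v in edges:
                outdeg[u] += 1
                indeg[v] += 1
                if outdeg[u] > max_out or indeg[v] > max_in:
                    ok = False
                    break
            if ok and is_acyclic(edges):
                total += 1
    return total

for n in range(1, 5):
    print(n, count_dags(n))  # 1, 3, 25, 543 (OEIS A003024)
```

For _N_ = 1..4 this prints 1, 3, 25 and 543, matching the known counts of labeled DAGs (OEIS A003024), and the unconstrained count already exceeds 500,000 by six nodes. The degree caps and the connectivity requirement thin this out, but the growth is steep enough that precomputing every layout up to ~100 nodes looks infeasible.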
https://stevecooper.org/2011/09/09/fun-with-graph-theory/