LPG (UNP), i.e. liquefied petroleum gas, is bought in a liquid state at a pressure of around 5 bars and is paid for by the litre; the car tank has thinner walls, and there are more than 260 LPG filling stations in B&H. CNG, i.e. compressed natural gas, is bought in a gaseous state at a pressure of around 220 bars and is paid for by the kilogram; the car tank has thicker walls, but its biggest drawback is the number of stations: CNG can only be bought at the Junuzović Kopex filling stations in Tuzla.
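Because LPG is priced per litre and CNG per kilogram, the two fuels are easiest to compare on cost per unit of energy. The sketch below is illustrative only: the energy densities are typical textbook values and the prices are placeholders, none of which come from the source.

```python
# Rough cost-per-energy comparison for LPG (sold per litre) and CNG
# (sold per kilogram). Energy densities are typical textbook values,
# NOT taken from the source; the prices are hypothetical.
LPG_MJ_PER_LITRE = 25.3   # ~46 MJ/kg of LPG at ~0.55 kg per litre
CNG_MJ_PER_KG = 50.0      # typical energy content of natural gas

def cost_per_mj(price_per_unit, mj_per_unit):
    """Price per megajoule of energy, given the price per sale unit."""
    return price_per_unit / mj_per_unit

lpg_cost = cost_per_mj(1.20, LPG_MJ_PER_LITRE)  # hypothetical price per litre
cng_cost = cost_per_mj(1.60, CNG_MJ_PER_KG)     # hypothetical price per kg
```

With these example numbers, CNG comes out cheaper per megajoule despite the higher sticker price per unit, which is why the per-litre vs. per-kilogram distinction matters when comparing the fuels.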
http://junuzovic-kopex.ba/eng/natural-gas/differences-cng-lpg/
Nahua Repositories of Social Memory: The Seventeenth-Century Mexican Primordial Titles (Amos Megged)

Through the analysis of two exemplary sources pertaining to the genre of the Nahua primordial titles of the seventeenth and eighteenth centuries, the aim of this essay is to contribute further to our understanding of how this distinct Nahua colonial genre can be used for the study of Nahua social memory during Spanish colonial times. More precisely, this essay endeavors, first, to identify the subtextual and supra-textual layers in these two sources; second, to highlight the replicated memory formulas applied in these specific texts; and third, to analyze the role of Christianity in these memory plots. By way of these three aspects, the task of this present study is to demonstrate that customs of remembrance, deeply rooted in the practice of a collective social memory, were still cherished and kept vibrant during the mid-colonial period.

The Baikal-Amur Mainline: Memories and Emotions of a Socialist Construction Project (Olga Povoroznyuk)

The Baikal-Amur Mainline (BAM), a railroad in East Siberia and the Russian Far East, became the last large Soviet industrial project. Its construction in the 1970s and 1980s attracted migrants from across the USSR, who formed the bamovtsy, the group of BAM builders. They share a history of working and living along the BAM and constitute the majority population in the region. The article argues that the emotionally charged social memory of the BAM construction plays a central role in reproducing and reinforcing bamovtsy identity in the post-Soviet period. Drawing on in-depth interviews and focus groups, the article examines the dynamics of both individual and collective remembering of the socialist BAM. This remembering forms a vibrant discursive and emotional field in which memories and identities are reconstructed, relived, and contested.
Commemorative ceremonies such as the fortieth anniversary of the BAM serve as forums of public remembering and arenas for the politics of emotions.

On Memory Work in Post-communist Europe: A Case Study on Romania's Ways of Remembering its Pronatalist Past (Lorena Anton)

Taking the memory of pronatalism in contemporary Romania as a case study, this article attempts to view the national politics of memory in contemporary Europe with regard to its communist past from an anthropological perspective. From 1966 to 1989, the communist regime imposed extreme policies of controlled demography in Romania, as it was claimed, for 'the good of the socialist nation'. Pro-family measures were developed in parallel with the banning of abortion on request and the rendering of contraception almost inaccessible. The social remembering of such a difficult past is still taboo in contemporary Romanian society. This general lack of public remembering, which still plays a role in the current situation of Romania's reproductive health, is influenced by the interrelations between the different forms of pronatalist memory. The analysis is based on oral history fieldwork conducted between 2003 and 2008, and is theoretically informed by the interdisciplinary field of Memory Studies.

Tolganay Umbetalijeva

This article is devoted to an investigation of the 'feeling of membership' of certain subtribes and tribes that is typical of the cultural and social memory of traditional Kazakhs. Our empirical study found that people in present-day Kazakhstan are strongly interested in their social and historical roots and traditions and in a sense of tribal (zhuzal) belonging. This tendency is most probably a result of the necessity for Kazakhs to find a new self-identification, as the old one has been destroyed. Along with the development of traditional values, there has been a growth of Western innovations and cultural values in Kazakh society.
We examine the interlacing of old values and ideas with new motives and modes of social activity, a process that has affected societal behaviour in everyday life.

Cartographies of Communicability and the Anthropological Archive: Civil War Executions and the Harvard Irish Study (Brigittine French)

This article traces ideological constructions of communication that enable powerful actors to determine what counts as silences, lies and surpluses in efficacious narratives about violence (Briggs 2007), in order to elucidate occlusions regarding legacies of the Civil War in the Irish Free State. It does so through a precise triangulation of multiple competing and overlapping narratives drawn from unpublished fieldnotes, interviews, published ethnographies and other first-person accounts. The inquiry highlights social memories of the Irish Civil War that have been 'assumed, distorted, misunderstood, manipulated, underestimated, but most of all, ignored' (Dolan 2003: 2). The article argues that the excesses of the anthropological archive make the recuperation of a multiplicity of collective memories possible through a linguistic anthropological perspective that enumerates the kinds of erasure at play in contentious memory-making moments, highlights polyvocality in metapragmatic discourse and tracks the gaps in entextualisation processes of historical narratives about political turmoil.

Troublesome Temporalities: Europe between Nostalgia and Promise (Cris Shore)

The three articles published in this Forum section were all finalists for the Graduate Student Prize of the Society for the Anthropology of Europe (SAE), which met at the American Anthropological Association's 2013 meeting in Chicago.
While they deal with different parts of Europe (Bulgaria, Romania and Spain, respectively), what unites them is a shared interest in issues of loss, social memory, identity, agency and death, and, in particular, in the way people experience temporality and change (see Connerton 1989; Forty and Küchler 1991). The authors brilliantly capture the mood of anxiety facing Europeans in a period of unprecedented uncertainty, insecurity and austerity. What they also show is how Europe's poor and marginalised are both shaped by and, in turn, try to shape or subvert the national and European policy regimes to which they are subjected.

Walking Memory: Berlin's "Holocaust Trail" (Maria Pia Di Bella)

Since the early 1990s, Berlin has developed what I call a "Holocaust trail": circa twenty-five officially dedicated memorial sites recalling significant historical events leading to the Final Solution, without yet acknowledging it as a "trail." Berlin is already well known for its two famous memorial museums, the Memorial to the Murdered Jews of Europe (2005) and the Jewish Museum (2001), two strong statements meant to show how the city deals with the heritage of the Holocaust, how it tries to underline the absolute impossibility of its erasure from social memory, and how it fights revisionism. The different memorial sites of the Holocaust trail came into existence thanks to multiple initiatives that allowed the city to become a true laboratory for the politics of memory concerning the crimes of the Nazi state and the sufferings of the Jewish citizens who fell victim to the state's genocide.

Strange Fruit: The South African Truth Commission and the Demonic Economies of Violence (Allen Feldman)

At no other time more than in the present day has individual, social and institutional memory come under such concerted pressure, critique and exposure as a fragile foundation for truth and facticity.
This current reluctance to authenticate social memory is intimately tied to well-known postmodernist depredations, which profoundly disenchanted the authority of tradition and authenticity and emptied core institutionalised myths of their temporal and semantic continuity. As institutionalised memory fails to provide overarching master narratives that can win cultural consent, it has also become increasingly disjunctive with previously unnarratable history and experience. Consider the synchronic fictions of recent ethno-histories, the historians' debate in Germany on the facticity of the Holocaust, or even the critique of post-traumatic stress disorder and other recuperations of traumatic memory whose fictive psycho-medical legitimacy has been challenged by Allan Young and Ian Hacking.

Felicitas Macgilchrist, Barbara Christophe and Alexandra Binnenkade

This special issue of the Journal of Educational Media, Memory and Society explores memory practices and history education. The first point of departure for the texts collated here is that memory (whichever concept we use from the current range, including collective memory, cultural memory, social memory, connected memory, prosthetic memory, multidirectional memory, travelling memory and entangled memory) is a site of political contestation, subject formation, power struggle, knowledge production and community-building. Our second point of departure is that history education is a site where teachers and pupils, as members of distinct generations, engage with textbooks and other materials as specific forms of memory texts that guide what should be passed on to the younger generation. As editors, we solicited papers that investigate how what counts as "worth remembering" in a given context is reproduced, negotiated and/or interrupted in classrooms and other educational practices.
This introduction aims to sketch the overarching understanding of memory practices that guides the contributions, to point to the purchase of attending explicitly to the "doing" of memory, to highlight the difference between our approach to history education and approaches focusing on historical thinking, and to introduce the six articles.
https://www.berghahnjournals.com/search?q=%22social%20memory%22
Radiology is the branch of medicine that provides visualization of the inside of the human body. These views help in the early diagnosis and treatment of many conditions. A radiologist is a physician who specializes in radiology. Radiologists work side-by-side with primary care physicians, pediatricians, obstetricians, surgeons, oncologists and many other medical specialists to determine a patient's best treatment plan. The field of radiology began in 1895 when Wilhelm Roentgen discovered x-rays. Today this medical specialty includes highly sophisticated, computer-aided imaging technology for diagnostic procedures. Spectrum Medical Group's Radiology Division comprises both diagnostic and interventional radiologists. Each year Spectrum radiologists perform professional radiology services for approximately 600,000 procedures, including:
- 70,000 mammograms
- 42,000 ultrasound exams
- 30,000 MRI exams
- 10,000 angiography procedures
As an integral part of the medical team, the physicians of Spectrum Medical Group's Radiology Division specialize in diagnosing and treating diseases by obtaining and interpreting medical images. Spectrum radiologists have completed four years of medical school plus radiology residency training. Many of our radiologists have additional subspecialty or fellowship training in areas including computed tomography, magnetic resonance imaging, pediatric imaging, nuclear medicine, neuroradiology, musculoskeletal radiology, breast imaging, and interventional radiology. All of Spectrum's radiologists are board-certified. Our physicians are committed to providing superior-quality radiology services to our patients. This commitment includes regular attendance at national and international conferences and participation in teaching programs for doctors at Maine Medical Center and Eastern Maine Medical Center. In addition to the attending radiologist, the radiology care team may include a radiology resident.
The radiology resident, in training at Maine Medical Center's Radiology Residency Program, is a graduate physician (MD or DO) who is pursuing additional training in radiology. Our physicians work hard to support Spectrum Medical Group's continuous quality improvement program. The quality of care delivered to patients, physician and patient satisfaction, and the efficiency with which care is provided are very important to us and are regularly reviewed. Our commitment in these areas sets the standard of care at the highest level and ensures the best possible outcomes for our patients. We welcome patient comments and feedback as an important element in helping us to improve our services. If you would like to provide us with comments, we hope you will call, write, or e-mail us.

About Radiologic Studies

Diagnostic imaging tests provide valuable information for many medical conditions. By using non-invasive equipment, Spectrum radiologists can produce well-defined images that are important to physicians in developing treatment plans for patients. There are many different diagnostic radiology tests that a physician can order, including plain film, fluoroscopy, barium study, MRI, CT, biopsy, ultrasound, arthrogram, PET, and nuclear medicine, to name but a few. Spectrum subspecialists perform a broad range of both routine and highly complex diagnostic procedures in the areas of:
- gastrointestinal (digestive tract) radiology
- genitourinary (urinary system organs and reproductive system organs) radiology
- chest radiology
- bone radiology
- mammography (breast imaging)
- neuroradiology (head, neck, and spine imaging)
A CT or CAT scan is a valuable, painless diagnostic test that allows radiologists to see inside areas of the body using special x-ray equipment.
The CT scan obtains images from different angles around the body and then reconstructs those images, using computer technology, into pictures of the anatomy inside the body. Areas of the body that may undergo a CT scan include the brain, sinuses, face, neck, chest, abdomen (liver, kidney, pancreas, etc.), pelvis, bones, joints, and spine. CT can also guide biopsies and fluid aspiration and drainage. A CT (computed tomography) scan is a noninvasive medical test that uses special x-ray equipment to produce multiple images or pictures of the inside of the body and a computer to join them together in cross-sectional views of the area being studied. CT scans of internal organs, bone, soft tissue and blood vessels provide greater clarity than conventional x-ray exams. During the scan, a thin beam of x-rays is focused on a specific part of your body. The x-ray tube moves very rapidly around this area, enabling multiple images to be made from different angles to create a cross-sectional picture. A computer analyzes the information and constructs an image for our radiologists to interpret.

Common uses

CT scanning is commonly used to diagnose problems such as cancers, cardiovascular disease, infectious disease, trauma and musculoskeletal disorders. Your radiologist will decide whether a contrast agent is needed. Contrast agents highlight your organs and blood vessels and help our radiologists see them better. In the past, most contrast agents contained higher levels of iodine; the newer agents available today have lower iodine content, which greatly reduces the chance of an allergic reaction and most of the discomforts associated with the injection. For some CT scans, a contrast agent can enhance the results. CT imaging is one of the best and fastest tools for studying areas of the body and head, as it provides detailed, cross-sectional views of all types of tissue.
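The reconstruction step described above, turning projections taken from many angles into a cross-sectional picture, can be illustrated with a toy version of back-projection. This is only a minimal sketch of the idea: clinical scanners use filtered back-projection or iterative methods, and the function below assumes a simple parallel-beam geometry.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Naive (unfiltered) back-projection: smear each 1-D projection
    back across the image plane along its acquisition angle and average."""
    recon = np.zeros((size, size))
    # pixel coordinates centred on the middle of the image
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2.0
    centre = (sinogram.shape[1] - 1) / 2.0
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # detector position that each pixel projects onto at this angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre
        idx = np.clip(np.round(t).astype(int), 0, sinogram.shape[1] - 1)
        recon += proj[idx]
    return recon / len(angles_deg)
```

Back-projecting the projections of a single bright point reproduces a peak at that point; accumulating many angles is what turns flat one-dimensional shadows into a two-dimensional cross-section.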
It is often the preferred method for diagnosing many different cancers, including lung, liver and pancreatic cancer, since the image allows a physician to confirm the presence of a tumor and measure its size, precise location and extent of involvement with other nearby tissue. It is also used to stage, plan and properly administer radiation treatments for tumors, as well as to monitor response to chemotherapy. CT imaging plays a significant role in the detection, diagnosis and treatment of vascular diseases that can lead to stroke, kidney failure or even death. CT is commonly used to assess for pulmonary embolism (a blood clot in the lung vessels) as well as for abdominal aortic aneurysms (AAA). CT can often be invaluable in diagnosing and treating spinal problems and injuries to the hands, feet and other skeletal structures because it can clearly show even very small bones as well as surrounding tissues such as muscle and blood vessels. Physicians often use the CT examination to quickly identify injuries to the lungs, heart and vessels, liver, spleen, kidneys, bowel or other internal organs in cases of trauma. Our radiologists use CT to guide biopsies and other procedures such as abscess drainages and minimally invasive tumor treatments. We work with surgeons and review CT scans to plan for and assess the results of surgery, such as organ transplants or gastric bypass. A PET (positron emission tomography) scan creates an image of your body's metabolic activity: it shows the rate at which your body's cells break down and use sugar (glucose) and oxygen, and it depicts blood flow. It helps to evaluate how well organs and tissues are functioning. This is done by injecting a small amount of radioactive material (FDG) into your bloodstream and waiting for it to disperse to the area of focus.
Depending on the type of nuclear medicine exam you are undergoing, the radiotracer is either injected into a vein, swallowed or inhaled as a gas, and it eventually accumulates in the organ or area of your body being examined, where it gives off energy in the form of gamma rays. This energy is detected by a gamma camera, a PET (positron emission tomography) scanner and/or a probe. These devices work together with a computer to measure the amount of radiotracer absorbed by your body and to produce special pictures offering details on both the structure and function of organs and tissues. A PET-CT is a relatively new diagnostic imaging exam that combines the functional information from a PET scan with the anatomical information from a CT scan. When a CT scan is superimposed over a PET scan, doctors can pinpoint the exact location of abnormal activity. They can also see the level and extent of that activity. Even when an abnormal growth is not yet visible on a CT scan, the PET scan may show the abnormal activity.

What Are Some Common Uses?

PET-CT scans are used in the treatment of cancer: to detect cancer, to determine whether a cancer has spread in the body, to assess the effectiveness of a treatment plan such as cancer therapy, and to determine whether a cancer has returned after treatment. They are also used to evaluate blood flow to the heart muscle and to determine the effects of a heart attack on areas of the heart.
This can help doctors identify areas of the heart muscle that might benefit from a procedure such as angioplasty or coronary artery bypass surgery (in combination with a myocardial perfusion scan). A mammography exam, called a mammogram, is used to aid in the early detection and diagnosis of breast diseases. A mammogram is an x-ray exam of the breast. It is used to detect and evaluate breast abnormalities, both in women who have no breast complaints or symptoms and in women who have breast symptoms (problems such as a lump, pain, or nipple discharge). An analog mammogram uses x-ray film to capture the image. A digital mammogram is a newer method that uses x-ray detectors and a computer to create the image. From the patient's point of view, having a digital mammogram is essentially the same as having a conventional film-screen mammogram. Computer-aided detection (CAD) systems use a digitized mammographic image that can be obtained from either a conventional film mammogram or a digitally acquired mammogram. The computer software then searches for abnormal areas of density, mass, or calcification that may indicate the presence of cancer. The CAD system highlights these areas on the images, alerting the radiologist to the need for further analysis.

Screening Mammogram

Mammography plays a central part in the early detection of breast cancers because it can show changes in the breast up to two years before a patient or physician can feel them. Current guidelines from the U.S. Department of Health and Human Services (HHS), the American Cancer Society (ACS), the American Medical Association (AMA) and the American College of Radiology (ACR) recommend screening mammography every year for women, beginning at age 40. Research has shown that annual mammograms lead to early detection of breast cancers, when they are most curable and breast-conservation therapies are available.
The National Cancer Institute (NCI) adds that women who have had breast cancer, and those who are at increased risk due to a genetic history of breast cancer, should seek expert medical advice about whether they should begin screening before age 40 and about the frequency of screening.

Diagnostic Mammogram

Diagnostic mammography is used to evaluate a patient with abnormal clinical findings, such as a breast lump or lumps, that have been found by the woman or her doctor. Diagnostic mammography may also be done after an abnormal screening mammogram in order to determine the cause of the area of concern on the screening exam. Magnetic Resonance Imaging, or MRI, is a painless diagnostic test that allows radiologists to see inside areas of the body that cannot be seen using conventional x-rays. MRI scanners use a very strong magnet and radio waves to produce cross-sectional views in multiple planes. Areas of the body that may undergo an MRI include the brain, joints, spine, soft tissue masses, abdominal organs, and pelvis. MRA (Magnetic Resonance Angiography) evaluates arteries or veins throughout the body. Neuroradiology is the specialty of radiology that helps in the diagnosis of diseases of the nervous system. Spectrum neuroradiologists provide a full range of services including MRI and CT of the head, neck, and spine, fluoroscopic-guided lumbar punctures, myelography, specialized angiographic procedures, and neurointerventional procedures. Nuclear medicine studies provide information about the structure of an organ and, most importantly, about organ function. They are useful in the early diagnosis and treatment of numerous medical conditions. In a nuclear medicine study, patients typically receive an intravenous injection of a minute trace of radioactive material that highlights the organs under review.
A PET (positron emission tomography) scan is a type of nuclear medicine imaging that provides images pinpointing the location of abnormal metabolic activity within the body. Other tests, called radioassay procedures, involve no radiation exposure to the patient because the radioactive material is added to the fluid specimen after the sample has been obtained from the patient. Ultrasound is a safe, non-invasive imaging technique that uses sound waves to produce images of the body's organs. Since no radiation is involved, it is a safe procedure for evaluating anatomy in the abdomen, pelvis, neck, and blood vessels (Doppler ultrasound). Spectrum radiologists perform a full range of ultrasound testing as well as ultrasound-guided biopsies and fluid aspiration or drainage. Ultrasound is defined as sound with a frequency greater than 20,000 Hertz, above the range audible to the human ear. An ultrasound exam, or sonogram, is a safe and generally non-invasive procedure that utilizes high-frequency sound waves to image an internal body structure.

Common uses
- Abdomen: Ultrasound can be used to detect gallstones, check the health of the liver, kidneys, pancreas, and spleen, and monitor the success of a kidney transplant.
- Blood vessels: Ultrasound exams can reveal enlargements in vessels, blood clots, or narrowing of the arteries leading to the brain, which could result in stroke.
- Pelvis: Ultrasound is used to image the uterus, ovaries and other structures within the pelvis. It may assist in determining the source of pain or bleeding in the female pelvis.
- Cancer: Ultrasound can locate lumps in organs and tissues, and can often distinguish between fluid-filled cysts and cancerous tumors. It is frequently used to guide a needle biopsy (removal of tissue using a needle instead of surgery), and can be used to help detect prostate cancer and monitor treatment.
- Ultrasound during pregnancy: Ultrasound is regarded as the gold-standard diagnostic exam for monitoring pregnancy.
Spectrum radiologists are highly experienced in caring for children and do everything possible to make both the child and the parents feel comfortable throughout any procedure. Using state-of-the-art techniques and equipment, many tests can be performed without the need for sedation. When necessary, sedation is available. Spectrum radiologists provide children with the following diagnostic testing services:
- x-rays
- ultrasound
- CT scan
- MRI
- fluoroscopy, including GI (gastrointestinal) and GU (genitourinary / VCUG) studies
- nuclear medicine

General Questions

Your doctor's office will provide you with information on how to prepare for your scheduled test, procedure, or treatment. In addition, general information on preparing for your specific exam is provided in the procedures section of our website. If you have any questions, please feel free to call the hospital radiology department. You should wear loose, comfortable clothing on the day of your exam or procedure. Depending on the exam or procedure, you may be asked to change into a hospital gown. You will be instructed beforehand if there is anything special you should not wear. For more detailed information about how to prepare for a specific exam, see our procedures section. Rapid results are essential not only for your peace of mind, but also so that your physician can begin planning your treatment immediately. We forward your results to your physician, who will discuss them with you. Please feel free to contact the radiology department at the hospital where you are scheduled to have your exam or procedure at any time if you have questions. A list of our service locations is available on our website.
https://www.spectrumhcp.com/services/radiology/faqs/
Although strides are being made to empower women through progressive policies and laws in democratic South Africa, women still lag behind men in the workplace and in economic participation, says Minister for Women, Youth and Persons with Disabilities in the Presidency Maite Nkoana-Mashabane. Addressing a virtual policy dialogue webinar on economic justice and rights during the Covid-19 pandemic yesterday, Nkoana-Mashabane said that despite women playing a critical role in breaking the cycle of poverty, they continued to work "under vulnerable conditions and in low-paying jobs". "Women do not have access to necessary finance, equal access to resources and opportunities," said Nkoana-Mashabane. Other glaring disparities, she said, included women not being offered "decent work with fair wages, safe working conditions – critical components of economic justice". "We are also aware that when women have sustainable incomes, they invest in their communities and families," maintained Nkoana-Mashabane. "Just like political rights, economic rights are also fundamental human rights. Until the economic rights of women are fully realised, women will not be completely free." Government over the years has faced challenges "to ensure that women are not only integrated into the formal economy, but also equipped with the education and skills required to participate in a modern economy". According to Nkoana-Mashabane, this has been despite government having enacted many laws to promote gender equality and women's empowerment, including the Promotion of Equality and Prevention of Unfair Discrimination Act and the Employment Equity Act. Government was, however, committed to bridging the gender gap. "More work needs to be done to achieve the economic justice of women in our country through government introducing a range of policies and programmes designed to facilitate women's economic empowerment," said Nkoana-Mashabane.
Turning to the impact of Covid-19 on women, she said: "As our country continues its response to Covid-19, the short- and long-term effects of this pandemic mostly affect the economic well-being of women. We need to improve our efforts so that women can benefit from Covid-19 relief interventions, including funding," she said. "We all know that even before the Covid-19 pandemic, there were many barriers to the full realisation of women's economic justice and rights." – [email protected]
https://www.citizen.co.za/news/2346876/women-work-under-vulnerable-conditions-and-in-low-paying-jobs-says-nkoana-mashabane/
How does women's empowerment contribute to gender equality? Two-thirds of all illiterate people are women, and more than 65 percent of people living in the most extreme poverty globally are women and girls. Women's empowerment is a key part of correcting this clear imbalance. Promoting gender equality and strengthening women's rights is essential for sustainable development. GVI offers specific women's empowerment projects designed to support United Nations Sustainable Development Goal (UN SDG) 5, gender equality. Helping to empower women is a vital step in making sure this goal is reached worldwide.

What is gender equality?

On the surface, gender equality seems simple. It means having equal rights and opportunities. However, it's a complex issue. Gender inequalities are widespread and deeply rooted. The imbalance between genders affects almost all aspects of women's lives, and the impacts are significant, ranging from the social to the political and economic. According to the UN, men can legally prevent their wives from working in 18 countries. Sons have greater inheritance rights than daughters in 39 countries. A further 49 countries don't have any laws in place to protect women from domestic violence. Gender equality is important not only because it's a human right, but because it is essential for building peaceful and prosperous nations. Making sure women and girls have equal access to quality education, healthcare and work opportunities, as well as political representation and participation in economic decision-making, is crucial to sustainable development and economic progress. As women gain education and skills, they can access more opportunities, which eventually means household incomes increase, infant mortality rates go down, and children are able to stay in school longer. Our ultimate aim is to break the cycle of intergenerational poverty. Taking steps to help empower women is a vital part of making sure gender equality is achieved worldwide.
What exactly does empowering women mean? Empowering women means equipping them with the tools they need to have power and control over their own lives. Empowered women have independence, equal opportunities, and the ability to make strategic choices in all areas of their lives. Education is one of the most powerful means available for providing women with the knowledge, skills, and self-confidence they need to participate equally in society. Helping to provide ongoing educational support is an important part of gender equality initiatives. According to UNICEF (the United Nations Children's Fund), educating women and girls can help to fight poverty and disease. Each extra year a girl stays in secondary school can boost her income potential by up to 25 percent. Eventually, helping to empower women through education could help eradicate income inequalities between genders. To help empower women economically, we must ensure work opportunities are available. The responsibilities for domestic or unpaid care work need to be shared, and any legal barriers that deny women the right to participate in public life must be removed. Finally, we need to raise awareness in communities through education and communication programs. These steps allow women to secure their own incomes. Another significant aspect of women's empowerment is making sure women have full reproductive rights and access to sexual and reproductive healthcare. Deciding what happens to your body is central to being able to live with dignity, in exactly the way you choose. Part of empowering women can be as simple as helping women to understand and realise these rights. Inroads have been made in empowering women and girls. For example, UN statistics show that the risk of a girl marrying during her childhood has dropped by more than 40 percent since 2000 in Southern Asia. More than 100 countries now actively track budget allocations for gender equality.
In 46 countries, women hold more than one-third of seats in at least one chamber of the national parliament. These are all positive leaps forward, but there is a long way to go before we reach gender equality around the world. GVI’s women’s empowerment programs GVI runs empowerment initiatives to promote gender equality and empower women in communities around the world. Volunteers and interns can contribute to various educational projects, such as English classes or healthcare and income generation workshops, giving women the tools they need to improve their circumstances. Teaching English classes in Laos is a great opportunity for volunteers to take part in a powerful education program. This project is deeply integrated into the local community, so you’ll have the chance to gain insights into ideas surrounding gender in Southeast Asian society. Taking part in teaching practical vocational skills is another key approach in helping to empower women. In Ghana, volunteers can work directly with local women in one-on-one sessions or group classes, teaching engaging classes such as IT literacy or sewing workshops. Our internship in Cape Town also offers support for women through teaching basic computer competency and sewing skills. Both projects focus on women but have a positive impact on wider communities. Volunteers and interns can help to instigate a ripple effect. On our internship in Nepal, you could work closely with women to help them meet their specific goals. This could include working on their conversational English skills, practising maths, or learning how to write a successful resume. You could also help the women to develop craft skills, which can provide a potential source of income and eventually help them to become self-sustaining. No matter which destination you choose, with enthusiasm and determination to make a difference you’ll be able to help empower women to strive for gender equality.
Coskun Demirok’s works unfold as an ongoing action shaped by his personal views on the question “What is art, and what is the aesthetic?” The artist is an architect living in Germany, with a strong sympathy for the minimalist style of architecture. He carried this doctrine into his artworks, but it would be wrong and insufficient to think that he merely adopted a minimalist aesthetic and nothing more. Minimalism kept the artist away from displaying individual feelings, symbolic explanations, an expressionist approach that aims to signify something, and ornamental work. Demirok's works overflow their surfaces. What emerges is not a painting; it is whatever it is, whatever it will become. Neither context nor style is decisive for these works: accidentality and spontaneity determine their processes. Another important factor is limitation. The artist moves across a specific area of a form and then stops himself; he sets the limit at the line. This is a kind of action in the formation of aesthetic rules, one the artist sees as part of his own struggle with painting. Coskun Demirok's understanding of aesthetics, and the taste of the observer, can appropriately be explained through the aesthetics of the philosopher Kant. According to Kant, the judgement of taste is a feeling that arises from the free play of imagination and understanding. This subjective feeling is pure because it is not spoiled by individual interests. Kant asserted that calling an object beautiful does not relate to the existence of the object, nor is it something epistemological. If the judgement becomes entangled with the existence of the object, it turns into a biased judgement and loses its purity. Kant's concept of disinterestedness emerges at this point. Thus, my judgement of taste regarding an object is a pure judgement in two respects. First, according to the principle of disinterestedness, it says nothing about the presence of the object. Secondly, it is a pure judgement because of the free play between imagination and understanding.
Aesthetic pleasure should not be tied to the representation of a real object. When we call an object beautiful, we do not want to know anything about its actual existence, or at least the judgement does not deal with it. What interests us is the pure form of the object. Coşkun Demirok does not identify his own works as paintings; he asserts that each of them is a "happening" in its own right. The works are conceived as "it is what it is", and in his exhibitions the performance itself is the work of art. The relationship established between the observer and the object should therefore conform to Kant's principles of aesthetic judgement, such as "disinterestedness" and "purposiveness without a purpose". This exhibition of our gallery, named "Breath to Breath", consists mostly of photographs belonging to the artist's own origin and past. Driven by the urge to answer questions such as "What makes me me?", "Why am I as I am at the present time?", and "What is the connection between personal formation and our origin?" (questions that living abroad perhaps even intensifies), Demirok sets out on an existential journey through origin and past. Accompanied by this feeling, and integrating photography into his artistic style and aesthetic sense for the first time, the works greet us in an unusual manner. In these works the photographs are used as a surface to paint on, fetching the past into the present day. The viewer is given no clue about the people in these photographs, which have become timeless and been brought into the present. We do not know whether there is a relation between these people and the artist, and if there is, what it might be. The qualities that fill the concept of the "photograph", being evidence of and a document for a moment; being expressionist and romantic; being a memory shared between photographer and photographed, are deleted by the artist. He thinks of this, in a sense, as the resolution of a project.
"Breath to Breath", as the resolution of a project, is isolated from content, historicity and the past. Still, the exhibition bears the stamp of the artist's personal and professional life and worldview: "architectural" depth, discipline and style, and the regular scattering of seed in the video work. In the end, by giving his personal exhibition the name "Breath to Breath", by transforming his personal photographs into property of the present, and by making an effort not to open their story to us, that is, by adopting the minimalist approach, is Coskun Demirok not making a kind of personal choice that reflects his view of the world?
https://www.coskundemirok.de/index.php/exhibition/breathless-2012
30 Sec Answer: The short answer is that we cannot definitively say that Attention Deficit Hyperactivity Disorder (ADHD) is caused by a chemical imbalance in the brain. However, research does suggest that there may be changes in the levels of certain neurotransmitters associated with ADHD. Introduction Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder characterised by difficulty concentrating and staying focused on tasks, hyperactivity, impulsivity, and difficulty controlling behaviour. It is estimated to affect between 2-5% of children globally, with many cases persisting into adulthood. The condition has long been thought to be related to some kind of “chemical imbalance” or disruption in neurotransmitter activity in the brain; however, research suggests that this theory may not be entirely accurate. In this article, we will explore whether ADHD is just a chemical imbalance and examine other potential factors involved in the development of the disorder. What Causes ADHD? The exact cause of ADHD remains unknown; however, scientists have identified various factors which can increase the risk of developing it. These include genetic predisposition, environmental exposures such as lead poisoning or maternal smoking during pregnancy, premature birth, low birth weight, and head injury. While these factors may increase an individual’s likelihood of developing ADHD, they do not necessarily explain why some individuals are more susceptible than others. Is There a Chemical Imbalance Involved in ADHD? One popular hypothesis is that a chemical imbalance in the brain causes or contributes to ADHD symptoms. Neurotransmitters are chemicals produced by nerve cells which enable them to communicate with each other. Certain neurotransmitters—such as dopamine and norepinephrine—are believed to play a role in regulating attention and focus, making them particularly relevant when considering ADHD. 
Some studies have found differences in levels of these neurotransmitters among those with ADHD compared to those without; however, it is unclear if these changes are due to the disorder itself or pre-existing conditions which may have contributed to its development. Brain Structure Changes and ADHD In addition to looking at neurotransmitter activity, researchers have also examined brain structure differences between those with and without ADHD. Several studies have found that certain areas of the brain tend to be smaller in individuals with ADHD than those without it; for example, the prefrontal cortex—which helps regulate executive functions such as impulse control—may be reduced in size or function less effectively than usual. Other studies have reported variations in regions responsible for motor coordination and visual processing among those with ADHD versus those without it. Again, it is uncertain whether these structural differences are related to pre-existing conditions or directly caused by the disorder itself. Can Diet Influence Symptoms? Some experts believe that dietary changes can help improve symptoms associated with ADHD; however, evidence supporting this theory is limited at present. One possible explanation is that certain foods may interact negatively with medications used to treat the condition; for instance, high-fat meals can slow down how quickly stimulants enter the bloodstream and reduce their effectiveness. Additionally, people with food sensitivities may experience behavioural problems as a result of consuming triggering items; however, further research is needed before any firm conclusions can be drawn about diet’s role in managing symptoms of ADHD. Do Environmental Factors Play a Role? Research suggests that environmental factors—such as exposure to lead paint or toxins like mercury—can contribute to the development of ADHD by interfering with normal brain functioning during key periods of development before birth or early childhood. 
These effects can persist into adulthood even after exposure has ceased; however, further study is needed before any definitive conclusions can be reached regarding this possibility. Could Stress Be Involved? Stressful life events—including family discord or traumatic experiences—have been linked to an increased risk of developing mental health conditions such as depression and anxiety disorders including post-traumatic stress disorder (PTSD). It stands to reason then that stress could potentially play a role in the development or exacerbation of symptoms related to ADHD; however, again more research is required before any clear answers can be provided on this topic. Social Implications Having a diagnosis of ADHD can impact every aspect of an individual’s life from academic performance through to social relationships and job prospects; however, there are several strategies which can help manage its effects both inside and outside of school/work environments. For example, being organised and breaking large tasks down into smaller chunks can make studying/working much easier while seeking extra support where necessary can go a long way towards alleviating feelings of overwhelm which often accompany diagnoses such as this one. Furthermore, learning mindfulness techniques can help individuals stay present and better handle challenging situations which arise due to their condition.
https://the-engine.net/entertainment/is-adhd-just-a-chemical-imbalance/
Understanding the environment complexity Welcome to the School of Architecture. This module on Understanding the Environment Complexity is part of the proposed curriculum for a bold, unique, and unprecedented experiment in teaching architecture online. Historically, some of the best architects in the world have been self-taught; this School of Architecture proposes a practical guide to the most important steps that have made us architects. You will enjoy this experimental process and the amazing experience of becoming an architect. Introduction Just imagine: "In the developed world, buildings consume half the energy we generate and are responsible for half of CO2 emissions...that is alarming enough. But what will happen as the developing world catches up?" -- Norman Foster in a special CNN online feature examining visionary ideas for the 21st Century. Of course, even for those unconcerned about global warming, that energy use is still a concern in economic terms. Further, energy use by buildings is only a small fraction of the focus of architecture, since the profession deals with so many other concerns. The title of this section, ENVIRONMENT COMPLEXITY, should not be misconstrued as focusing on "environmentalism" but rather on the ENVIRONMENT, or PLACE. To clarify, this is a discussion of the metaphysical aspects of SPACE and TIME. Architecture by its nature is an art that develops the built environment, those elements of our world constructed by humans. It is an art that deals with fixed locations, or places. While mobile homes exist, they are not architecture. While one could move a building with the right planning and procedure, that is not architecture. Architecture is of a fixed place. Place has an important role in architecture. Does anyone doubt that replica Eiffel Towers in other places lack the presence and power to affect our emotions and psyche like the original in Paris? Would the Taj Mahal make sense in Tulsa, Oklahoma?
When one approaches a fine building, its surroundings, both natural and man-made, have an important effect upon the building in question. Architecture both influences and is influenced by its environment. Nature Before humans started constructing shelter, the natural world simply was. Whatever occurred to cause humankind to exit the caves and begin structuring their own world, whether the advent of agriculture or the gaining of consciousness as a thinking people, it was an immediate exercise in responding to the natural environment. In the cave, mankind was sheltered from cold and wind and rain, but it was an existence dependent on geography and the migration patterns of wild beasts. Building one's own shelter allowed people to emerge from the cave and create their own world based on their needs and desires. This early development of man-made structures, however, was shaped and molded by the forces that surround us: gravity, sun, wind, rain (snow), etc. In each clime, early peoples developed structures appropriate for the local environment, to afford the greatest level of comfort possible in a stone age world. Fast forward to today. High technology and abundant oil have allowed mankind to create hermetically sealed boxes capable of creating interior environments that ignore natural forces. The result is a lack of understanding of concepts as simple as natural ventilation, which was commonly understood and practiced just 100 years ago. Many now worry about global warming. Others worry about the availability of energy. Both share a growing consensus that our buildings must respond to the environment and take advantage of natural forces to assist in the creation of comfortable spaces. In order to do that, however, it is important for the architect to understand some basic concepts of environmental science, including basic meteorology, biology, chemistry, physics, and other hard sciences.
This section aims to cover the basics of those sciences that directly relate to the design of buildings for human occupation. It is assumed that students taking this section have completed at least a high-school education.
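As a small taste of the kind of quantitative reasoning this section builds toward, here is a minimal sketch of the simplest building-physics relation an architect meets when thinking about a building's energy demand: steady-state conduction heat loss through a wall. All numbers and names below are illustrative assumptions, not material from this module.

```python
# Illustrative sketch: steady-state conduction heat loss through a wall,
# Q = U * A * dT. All values here are assumed example figures.

def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    """Return conductive heat loss in watts.

    u_value:  thermal transmittance of the wall assembly, W/(m^2*K)
    area_m2:  wall area, m^2
    delta_t:  indoor-outdoor temperature difference, K
    """
    return u_value * area_m2 * delta_t

# Example: a 10 m x 2.5 m wall, 20 degC inside and 0 degC outside,
# comparing an assumed uninsulated U-value (~2.0) with an insulated one (~0.3).
area = 10 * 2.5
print(heat_loss_watts(2.0, area, 20))  # uninsulated: 1000.0 W
print(heat_loss_watts(0.3, area, 20))  # insulated: 150.0 W
```

The point of the comparison is the one the module makes in prose: envelope choices change a building's energy behaviour by an order of magnitude, long before any mechanical system is considered.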
https://en.m.wikiversity.org/wiki/Understanding_the_environment_complexity
Changes in TMS measures induced by repetitive TMS Joseph Classen, Ying-Zu Huang, and Christoph Zrenner Commonly used repetitive transcranial magnetic stimulation (rTMS) protocols, including regular rTMS, intermittent and continuous theta-burst stimulation (TBS) and quadripulse stimulation (QPS), are presented with respect to their induced neuromodulatory after-effects and the underlying cellular and synaptic neurophysiological mechanisms. The anatomical target is typically primary motor cortex since motor evoked potentials (MEPs) before and after the intervention can be used to assess effects of the respective rTMS protocol. High-frequency regular rTMS and intermittent TBS protocols tend to increase corticospinal excitability as indexed by MEP amplitude, whereas low-frequency regular rTMS and continuous TBS protocols tend to reduce corticospinal excitability. These effects are primarily due to LTP-like and LTD-like synaptic changes mediated by GABA and NMDA receptors. Changes in the balance between excitatory and inhibitory cortical microcircuits play a secondary role, with inconsistent effects as determined by paired-pulse TMS protocols. Finally, the challenge of large inter-subject response variability, and current directions of research to optimize rTMS effects through EEG-dependent personalized TMS, are discussed. Changes in TMS measures of cortical excitability induced by transcranial direct and alternating current stimulation Michael A. Nitsche, Walter Paulus, and Gregor Thut Brain stimulation with weak electrical currents (transcranial electrical stimulation, tES) has been known for about 60 years as a technique to generate modifications of cortical excitability and activity. Originally established in animal models, it was developed as a noninvasive brain stimulation tool about 20 years ago for application in humans.
Stimulation with direct currents (transcranial direct current stimulation, tDCS) induces acute cortical excitability alterations, as well as neuroplastic after-effects, whereas stimulation with alternating currents (transcranial alternating current stimulation, tACS) affects primarily oscillatory brain activity but has also been shown to induce neuroplasticity effects. Beyond their respective regional effects, both stimulation techniques also have an impact on cerebral networks. Transcranial magnetic stimulation (TMS) has been pivotal to helping reveal the physiological effects and mechanisms of action of both stimulation techniques for motor cortex application, but also for stimulation of other areas. This chapter will supply the reader with an overview of the effects of tES on human brain physiology, as revealed by TMS. Clinical utility of TMS-EMG measures Robert Chen and Kai-Hsiang Stanley Chen This chapter focuses on the utility of transcranial magnetic stimulation (TMS) for clinical diagnosis and follow-up. It first introduces the methods to measure corticospinal excitability, intracortical inhibitory and facilitatory circuits, and cortico-cortical connections. The chapter then discusses the use of TMS in several neurological disorders. Central motor conduction time (CMCT) can be used to detect myelopathy and localize the lesions, although the triple stimulation technique has higher sensitivity. CMCT can also detect upper motor neuron involvement in amyotrophic lateral sclerosis and multiple sclerosis. The ipsilateral silent period and CMCT are helpful for differentiating atypical parkinsonism from Parkinson’s disease. Distinct patterns of cortical excitability findings can be obtained from different genetic forms of hereditary spinocerebellar ataxia. Reduction of short afferent inhibition (SAI) can differentiate Alzheimer’s disease and frontotemporal dementia. Patients with diffuse Lewy body dementia and hallucination also have reduced SAI.
The results of motor evoked potential measurements in the early stage of stroke are predictive of the long-term motor outcome. The chapter concludes that TMS has clinical diagnostic utility in a broad range of neurological diseases. Combining transcranial brain stimulation and PET/SPECT molecular imaging Sang Soo Cho and Antonio P. Strafella Transcranial brain stimulation (TMS) was introduced in 1985 by Barker and his colleagues. Since then, further improvements in technology have allowed additional applications and new stimulation protocols. In the last decade, while the use of TMS has expanded enormously in basic science as well as in the clinical scenario, the underlying neurophysiological or neurochemical mechanisms are still not fully understood. Positron emission tomography (PET) and single-photon emission computerized tomography (SPECT) are neuroimaging modalities utilized to investigate brain functions. In spite of their lower spatial and time resolution compared with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), PET/SPECT have helped to elucidate some of the neurochemical mechanisms and neural plastic changes associated with TMS. In this chapter, we will provide an overview of these techniques, describing methodological details and application of TMS-PET/SPECT imaging in basic and clinical studies. Computational methods for dosimetry Axel Thielscher, Kristoffer H. Madsen, Gary E. Strangman, and Bradley E. Treeby Computational methods for dosimetry allow estimating and optimizing the spatial distribution and strength of the induced fields and waves in the brain, based on detailed models of the head anatomy that are derived from medical imaging data. This chapter gives an overview of the computational dosimetry methods for transcranial magnetic, electric, focused ultrasound and light stimulation. 
It starts with a brief introduction to the employed numerical methods and a summary of the status of the automatic generation of individual head models from magnetic resonance and computed tomography data. For each stimulation method, the basic physical equations underlying the numerical simulations are outlined, followed by a summary of the key results and validation studies. The chapter concludes with an overview of remaining limitations and open questions. Computational neurostimulation Ainslie Johnstone, James J. Bonaiuto, and Sven Bestmann Computational neurostimulation is the use of biologically grounded computational models to investigate the mechanism of action of brain stimulation and predict the impact of transcranial magnetic stimulation (TMS) on behavior in health and disease. Computational models are now widespread, and their success is incontrovertible, yet they have left a rather small footprint on the field of TMS. We highlight and discuss recent advances in models of primary motor cortex TMS, the brain region for which most models have been developed. These models provide insight into the putative, but unobservable, mechanisms through which TMS influences physiology, and help predict the effects of different TMS applications. We discuss how these advances in computational neurostimulation provide opportunities for mechanistically understanding and predicting the impact of TMS on behavior. A conceptual framework for combining brain mapping and brain stimulation Anke Ninija Karabanov and Hartwig Roman Siebner Here, we introduce a conceptual framework for studies that combine non-invasive transcranial brain stimulation (NTBS) with neuroimaging. We outline the type of neuroscientific questions that can be addressed with a combined NTBS-neuroimaging approach and describe important experimental considerations.
Neuroimaging methods differ with respect to their spatiotemporal resolution and reflect different neurobiological aspects of brain function, structure or metabolism. These characteristics need to be carefully considered in order to select the most appropriate neuroimaging modality. NTBS and neuroimaging can be combined concurrently (online) or sequentially (offline). The “online” approach applies neuroimaging while NTBS is delivered to the brain and thus, can reveal the immediate functional effects of NTBS on the targeted brain networks, but one has to deal with interfering effects of NTBS on brain mapping. The “offline” approach applies neuroimaging and NTBS in sequence: Offline neuroimaging can be performed BEFORE the stimulation session to inform NTBS parameter setting or AFTER the stimulation session to provide functional, metabolic or structural readouts of NTBS-effects. Since NTBS and neuroimaging can be separated in space and time, NTBS does not interfere with offline brain mapping. Finally, we discuss how NTBS and neuroimaging are gaining importance in clinical NTBS applications and how both techniques can be iteratively combined to create open-loop setups. Cortical silent period Markus Kofler, Ulf Ziemann, and Vasilios K. Kimiskidis The cortical silent period (cSP) refers to a period of suppression or silencing of ongoing electromyographic (EMG) activity during voluntary muscle contraction induced by a magnetic stimulus over the contralateral primary motor cortex. This chapter summarizes the physiological basis of the cSP, discusses technical aspects and recommendations on how to record and analyze it, and provides an overview of useful clinical applications. Evidence is presented that multiple spinal mechanisms are implicated in the initial part of the cSP, but some may be also active further on, whereas long-lasting cortical inhibitory mechanisms operate throughout the entire cSP, with an emphasis during its later part. 
The cSP is a highly relevant and clinically useful tool to assess inhibitory corticomotoneuronal mechanisms in health and disease. Ethical and Legal Issues in Transcranial Stimulation for Cognitive Enhancement Anita S. Jwa Transcranial stimulation has recently been gaining momentum as a promising tool for cognitive enhancement. Like other emerging biomedical technologies, however, the promises of transcranial stimulation as cognitive enhancement come with critical ethical and legal challenges. To ensure the safe and responsible use of this technology, we should complement technological development with rigorous ethical and regulatory analysis. This chapter aims to provide a general overview of ethical and legal challenges, lighting a path to maximize the benefits of transcranial stimulation as cognitive enhancement while minimizing the perils of this technology. Focusing on transcranial stimulation, it first introduces some of the major ethical concerns surrounding cognitive enhancement and then reviews previous discussions on the regulation of both cognitive enhancement devices and their uses in real-world settings. Given the impending widespread use of transcranial stimulation by the public, this chapter concludes by emphasizing the need for developing a sound policy for the use of transcranial stimulation as cognitive enhancement. Face Processing and TMS David Pitcher Faces are rich sources of social information that simultaneously convey someone’s identity, attentional focus, and emotional state. Humans process this wealth of socially relevant information in a network of face-selective regions distributed across the brain. This chapter reviews studies that have used transcranial magnetic stimulation (TMS) to study the cognitive operations and functional connections of the face network. 
TMS has been used to disrupt brain areas contributing to the processing of facial identity, facial expression, eye-gaze direction, head direction, trustworthiness, and the auditory-visual integration of speech. TMS has also been combined with neuroimaging to map how transient disruption of a targeted face area impacts connectivity across the face network. I also review chronometric TMS studies that have established when faces are processed across different brain areas down to a millisecond resolution. Motor threshold, motor evoked potential, central motor conduction time Sein H. Schmidt and Stephan A. Brandt In this chapter, we survey parameters influencing the assessment of the size and latency of motor evoked potentials (MEP), in normal and pathological conditions, and methods to allow for a meaningful quantification of MEP characteristics. In line with the first edition of this textbook, we extensively discuss three established mechanisms of intrinsic physiological variance and collision techniques that aim to minimize their influence. For the first time, in line with the ever wider use of optical navigation and targeting systems in brain stimulation, we discuss novel methods to capture and minimize the influence of extrinsic biophysical variance. Together, following the rules laid out in this chapter, transcranial magnetic stimulation (TMS) can account for spinal and extrinsic biophysical variance to advance investigations of the central origins of MEP size and latency variability. Paired-coil Measures Ritsuko Hanajima, Yoshikazu Ugawa, and Vincenzo Di Lazzaro Effective connectivity between two areas of the human brain can be studied by testing the effects of conditioning stimulation at one site on the effect of test stimulation at another site by using two coils (dual coil, or paired-coil transcranial magnetic stimulation (TMS)).
TMS over the bilateral primary motor cortices (M1) induces interhemispheric inhibition (IHI) and weak interhemispheric facilitation (IHF), largely mediated by the corpus callosum. IHI consists of short-interval (~ 10 ms) IHI (SIHI) and long-interval (~ 50 ms) IHI (LIHI). Abnormalities of IHI have been identified in a variety of brain disorders. The cerebello-dentato-thalamo-motor cortical connection is studied with a conditioning stimulus over one cerebellar hemisphere and a test stimulus over the contralateral M1 at interstimulus intervals of 5–8 ms (cerebellar brain inhibition (CBI)). CBI is caused by inhibitory Purkinje cell activation by the cerebellar stimulus, which inhibits the dentato-thalamo-M1 facilitatory projection. Abnormally reduced CBI is a hallmark of disorders of the cerebello-dentato-thalamo-motor cortical connection. Sensory inputs from peripheral nerve stimulation also affect M1 excitability in the form of short-latency afferent inhibition (SAI, interstimulus interval ~ 20 ms) and long-latency afferent inhibition (LAI, interstimulus interval ~ 200 ms). SAI is a marker of central cholinergic function. SAI is reduced in disorders with central cholinergic dysfunction, such as Alzheimer’s disease. Paired-pulse interactions Robin F. H. Cash and Ulf Ziemann Paired-pulse transcranial magnetic stimulation (TMS) techniques provide an opportunity to examine and better understand the excitatory and inhibitory circuitry in the human cortex in health and disease. Typically, a conditioning stimulus is applied and the effect on cortical excitability is inferred by the change in motor evoked potential (MEP) amplitude elicited by a test stimulus delivered shortly (milliseconds) thereafter. This approach has revealed a range of distinct, but generally overlapping, excitatory and inhibitory phenomena, which have been characterized according to their temporal and pharmacological profile, activation threshold, and various other properties. 
These phenomena have provided new pathophysiological insights into neurological and psychiatric disorders, and paired-pulse TMS measures have demonstrated clinical diagnostic utility. More recently, via implementation of TMS-evoked electroencephalography (TMS-EEG), paired-pulse TMS protocols have started to expand into nonmotor regions.

Pharmacology of TMS measures
Ulf Ziemann

Application of a single dose of a central nervous system (CNS) active drug with a defined mode of action has proven useful for exploring the pharmaco-physiological properties of transcranial magnetic stimulation (TMS)-evoked electromyographic (EMG) measures of motor cortical excitability. With this approach, it is possible to demonstrate that TMS-EMG measures reflect axonal excitability, or excitatory or inhibitory synaptic excitability, in distinct interneuron circuits. On the other hand, the array of pharmaco-physiologically well-characterized TMS-EMG measures can be employed to study the effects of a drug with unknown or multiple modes of action, and hence to determine its main mode of action at the systems level of the motor cortex. Acute drug effects may be rather different from chronic drug effects, and these differences can also be studied in TMS experiments. Moreover, TMS or repetitive TMS (rTMS) may induce changes in endogenous neurotransmitter or neuromodulator systems. This offers the opportunity to study neurotransmission along defined neuronal projections. Finally, more recently, TMS-evoked electroencephalographic (EEG) responses have been developed to study cortical excitability and connectivity. Pharmaco-physiological testing can also be employed to characterize these TMS-EEG measures. All these aspects of the pharmacology of TMS measures in healthy subjects are reviewed in this chapter.

Physics and biophysics fundamentals of transcranial stimulation
Boshuo Wang, Aman S. Aberra, Warren M. Grill, and Angel V. Peterchev

Transcranial stimulation induces or modulates neural activity in the brain through basic physical and biophysical processes. Transcranial electrical stimulation and transcranial magnetic stimulation impose an exogenous electric field in the brain that is determined by the stimulation device and the geometric and electric parameters of the head. The imposed electric field drives an electric current through the brain tissue, which macroscopically behaves as a volume conductor. The electric field polarizes neuronal membranes as described by the cable equation, resulting in direct activation of individual neurons and neural networks or indirect modulation of intrinsic activity. Computational modeling can estimate the delivered electric field as well as the resultant responses of individual neurons. This dosimetric information can be used to optimize and individualize stimulation targeting. The field distributions of transcranial stimulation are well understood and characterized, whereas analysis and modeling of the neural responses require further investigation, especially at the network level.

rTMS and nondeclarative learning
Michael V. Freedberg and Eric M. Wassermann

Nondeclarative learning and memory involve acquiring and retaining skills or habits and include subtypes such as procedural learning, priming, and classical conditioning. Animal studies, and lesion and functional imaging studies in humans, have implicated a range of brain areas, including frontal and parietal cortical regions, the basal ganglia, and the cerebellum, in these functions. Repetitive transcranial magnetic stimulation (rTMS) can modulate functional connectivity in brain networks and provide causal evidence for their involvement in behavior.
In this chapter, we review the use of rTMS to investigate the brain networks underlying nondeclarative learning by stimulating their cortical nodes and examining the effects of these interventions on behavior and on imaging measures of brain activity and connectivity, with emphasis on how the timing of stimulation (before, during, or after learning) affects these measures.

State-dependent studies on perception and cognition
Juha Silvanto

Neuronal response to an external stimulus is affected not only by stimulus properties but also by the baseline activation state; this is referred to as state-dependency. Leveraging this principle helps to enhance the specificity and reduce the variability of brain stimulation effects. State-dependent paradigms have proven successful in enhancing the functional resolution of brain stimulation to the extent that the tuning of neuronal representations can be revealed, and they have also enhanced clinical benefits in the treatment of disorders such as depression. Furthermore, the state-dependent approach has been applied in various brain stimulation protocols, including online and offline transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), transcranial random noise stimulation (tRNS), and paired-pulse associative stimulation. This chapter describes the principles and mechanisms of state-dependent brain stimulation and summarizes its contribution to cognitive neuroscience.

TMS in working memory research
Jeffrey S. Johnson, Eva Feredoes, and Bradley R. Postle

This chapter provides a broad overview of research focused on the use of transcranial magnetic stimulation (TMS), both alone and together with neural recording modalities such as magnetic resonance imaging (MRI) and electroencephalography (EEG), to elucidate the cognitive and neural underpinnings of working memory. It first considers research using TMS to create “virtual lesions” in targeted brain areas, with the goal of establishing the causal role, and sometimes the timing, of the targeted area in specific working memory component processes. Next, it highlights research adopting a “perturb-and-measure” approach, in which TMS is used in conjunction with simultaneous neural recording (e.g., functional MRI or EEG) to assess the role of brain excitability and inter-area connectivity in working memory. Finally, research using TMS to assess the role of neural oscillations in working memory is reviewed. Throughout, the chapter highlights how different TMS modalities can be used profitably to clarify the neural bases of working memory and to effect strong tests of predictions derived from psychological models.

TMS measures explored by EEG recordings
Risto J. Ilmoniemi, Nigel C. Rogasch, and Silvia Casarotto

This chapter describes the use of electroencephalography (EEG) to measure neuronal activity evoked by transcranial magnetic stimulation (TMS). We discuss the major research and clinical applications of TMS–EEG and review synthetic measures designed to assess the functional status of cortical networks, such as neuronal excitability and effective connectivity. We first highlight that brain reactivity to TMS crucially depends on the anatomical and functional characteristics of the cortical region being stimulated. These findings suggest that the responses to stimulation of the primary motor cortex should be considered a special rather than a representative case of the brain’s reaction to TMS. Next, we describe TMS–EEG-based measures of (i) cortical reactivity in the time and frequency domains, (ii) excitation and inhibition probed by paired-pulse paradigms, and (iii) neuroplasticity induced by non-invasive neuromodulatory interventions.
Finally, we discuss the methodological challenges related to concurrent TMS–EEG and review methods for minimizing or suppressing artifacts that may contaminate brain signals. We conclude that TMS–EEG has great potential to become an effective research and clinical tool, provided that: 1) data quality is monitored in real time; 2) effective and sound artifact-removal methods are developed and implemented; and 3) sufficient control studies are performed to assess the impact of TMS on the cortex.

Transcranial brain stimulation and EEG/MEG
Til Ole Bergmann, Leo Tomasevic, and Hartwig Roman Siebner

Noninvasive transcranial brain stimulation (NTBS) techniques, such as transcranial magnetic stimulation (TMS) and transcranial direct or alternating current stimulation (TDCS/TACS), can be combined with electroencephalography (EEG) and magnetoencephalography (MEG). The combination of NTBS and EEG/MEG can 1) inform brain stimulation (where, when, and how to stimulate), and 2) reveal aftereffects of stimulation-induced changes in cortical activity and interregional connectivity (offline approach), as well as the immediate neuronal response to the stimulation (online approach). While offline approaches allow NTBS and EEG/MEG to be separated in space and time, online approaches require concurrent stimulation and recording. While TMS and MEG cannot be combined online, concurrent TMS-EEG as well as TDCS/TACS-MEG/EEG are feasible but pose a range of methodological challenges at the technical and conceptual level. This chapter provides an introduction to the principal experimental approaches and research questions that can be tackled by combining transcranial brain stimulation with EEG/MEG. We review the technical challenges arising from concurrent recordings as well as measures to avoid or remove stimulation artefacts. We also discuss the conceptual caveats and required control conditions. View:
https://www.oxfordhandbooks.com/browse;jsessionid=4016DF24061FE727A291170EA8923A9B?btog=chap&pageSize=20&sort=titlesort&t=ORR%3ASCI01950
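The cable-equation mechanism mentioned in the physics and biophysics chapter abstract above can be made concrete with a standard textbook sketch (the one-dimensional form and the symbols below are conventional assumptions, not taken from the chapter itself):

```latex
% One-dimensional cable equation for a fiber exposed to an extracellular field.
% V_m: membrane potential deviation from rest; V_e: extracellular potential
% along the fiber axis x; \tau_m: membrane time constant; \lambda: space constant.
\tau_m \frac{\partial V_m}{\partial t}
    = \lambda^2 \frac{\partial^2 V_m}{\partial x^2}
    - V_m
    + \lambda^2 \frac{\partial^2 V_e}{\partial x^2}
```

The last term, the second spatial derivative of the extracellular potential, is often called the activating function: membrane segments where the imposed field changes most rapidly along the fiber are polarized most strongly, which is one reason field gradients, fiber geometry, and fiber orientation all matter for which neurons a transcranial pulse recruits.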
Plaintiff real estate developer appealed a decision from the Superior Court for King County (Washington), which entered a judgment in favor of defendant property owner individually and as executor of the estate of his late wife, in the developer's action to enforce an earnest money agreement for the purchase of realty owned by the owners. The owners had listed their property for sale with a real estate broker. The broker subsequently offered the property to the developer. The parties signed an earnest money agreement subject to an engineer's satisfactory feasibility report. When the developer sought to enforce the agreement, the trial court determined that by making its obligations subject to a satisfactory engineer's feasibility report, the developer rendered its promise to buy the property illusory. On appeal, the court reversed and concluded that the developer's promise was not illusory. The court found that the earnest money agreement created two conditions precedent to the developer's duty to buy the property: it had to receive the report, and it had to find the report satisfactory. The court found that the standard for evaluating the developer's satisfaction was good faith. The developer could cancel the contract only if the report was not satisfactory; otherwise, it had to give notice and purchase the property. Accordingly, the promise was not illusory and the earnest money agreement was supported by consideration. That the owners' agents failed to convey certain additional terms did not affect the validity of the agreement. The court reversed the judgment and remanded with instructions to enter a decree ordering specific performance of the earnest money agreement.
http://www.lawschoolcasebriefs.net/2013/12/omni-group-inc-v-seattle-first-national.html
Social and emotional learning education is broad in scope. According to the Collaborative for Academic, Social, and Emotional Learning (CASEL, 2019), social and emotional learning includes the following competencies: self-awareness, self-management, social awareness, relationship skills, and responsible decision making. The goal is to help our students become knowledgeable, responsible, and caring. Elias, Zins, Weissberg, Frey, Greenberg, Haynes, Kessler, Schwab-Stone, and Shriver (1997) explain that “Social and emotional competence is the ability to understand, manage, and express the social and emotional aspects of one’s life in ways that enable the successful management of life tasks such as learning, forming relationships, solving everyday problems, and adapting to the complex demands of growth and development” (p. 2). Essentially, social and emotional learning is important for life. According to Elias, Ferrito, and Moceri (2016), “SEL refers to a set of skills that are important elements of everyday life and are present and relevant from infancy to old age: recognizing and managing emotions, developing empathy and concern for others, establishing effective relationships in one-on-one and group contexts, making responsible and ethical decisions, and handling challenging situations constructively” (p. 4). When considering the emotional skills component of SEL, understanding the concept of emotional intelligence is essential. Emotional intelligence refers to skills “which include self-control, zeal and persistence, and the ability to motivate oneself” (Goleman, p. xxii). The five factors associated with emotional intelligence are: self-awareness, managing emotions, self-motivation, recognizing the emotions of others (empathy), and handling relationships (Goleman, 2006).
To further explain, Goleman (2006) states that emotional intelligence includes “abilities such as being able to motivate oneself and persist in the face of frustrations; to control impulse and delay gratification; to regulate one’s moods and keep distress from swamping one’s ability to think; to empathize and hope” (p. 34). There are two types of intelligence, rational and emotional, and, according to Goleman (2006), “intellect cannot work at its best without emotional intelligence” (p. 28). Academic performance can be enhanced or inhibited by the state of one’s emotions; therefore, it is essential that skills that enhance a student’s emotional well-being are integrated into learning. Social intelligence is also well researched and clearly explained. The social intelligence model emerged from research in the field of emotional intelligence. According to Daniel Goleman (2006), “The ingredients of social intelligence…can be organized into two broad categories: social awareness, what we sense about others, and social facility, what we do with that awareness” (p. 84). Social awareness includes primal empathy, attunement, empathic accuracy, and social cognition. Social facility includes synchrony (interacting smoothly at the nonverbal level), self-presentation, influence, and concern (Goleman, 2006). People are “wired” for social interactions. In the educational setting, students are situated in social settings in which social awareness and social facility skills can be advantageous. Goleman (2006) states that “Neuroscience has discovered that our brain’s very design makes it sociable, inexorably drawn into an intimate brain-to-brain linkup whenever we engage with another person” (p. 4). It is further explained that “Mother-infant synchrony operates from a child’s first day of life; the more synchrony, the warmer and happier their overall interactions” (Goleman, 2006, p. 166).
This foundational research clearly identifies the skills associated with social intelligence that are important for healthy relationships in all aspects of life.

Holistic Approach: Educating the whole person with regard to intellectual, social, emotional, mental, physical, and creative potential.
Multi-Tiered System of Support (MTSS): An integrated, comprehensive framework that addresses all students’ academic, behavioral, social, and emotional needs through multi-tiered support systems. It grew out of RTI and PBIS (positive behavior interventions and supports).
Whole Child: An approach to education that takes into consideration the development of cognitive, physical, social, and emotional well-being. It supports practices in which the child is provided healthy, safe, engaged, supported, and challenged environments.
Response to Intervention (RTI): A framework used to help students through three tiers of support: whole-class instruction, small-group interventions, and intensive interventions.
Social Skills: Skills used to communicate both verbally and nonverbally, through gestures, body language, and personal appearance. Social skills are the abilities necessary to get along with others and to create and maintain relationships.
Horizontal Integration: Alignment of instruction and assessment of skills across a particular grade level.
Whole Systems Thinking: A method for understanding how elements and systems are related, and how they influence each other within the whole.
Social and Emotional Learning (SEL): The process through which students learn and understand how to manage emotions, set and achieve positive goals, feel and show empathy for others, establish and maintain positive relationships, and make responsible decisions.
Vertical Integration: Alignment of instruction and assessment in which one lesson, course, or grade-level expectation prepares the student for the subsequent lesson, course, or grade level in the progression of learning skills.
Resilience: The capacity to recover quickly from difficulties. It is the process of adapting well during adversity and stressful situations. One can learn to develop a resilient mindset and attitude.
Universal Intervention: Instructional strategies and supports provided to all students.
https://www.igi-global.com/chapter/overcoming-educational-challenges-through-social-and-emotional-learning/242182
What is Integral?

When something is integral it is essential to the whole. It is essential for a sense of completeness. The process of integration occurs when things which were initially separate in some way join together to form a bigger, more cohesive entity. Integration involves the relationship between parts and wholes.

Holons

As the philosopher Arthur Koestler points out, everything in the entire universe, including the universe itself, is simultaneously both a whole unto itself and a part of something bigger. “Holon” is the term he uses to describe things involved in these whole/part relationships. Whether we see something at any given moment as a whole unto itself or as a part of something bigger has to do with the context in which we are experiencing it. Atoms are wholes unto themselves while simultaneously being parts of molecules, which are wholes unto themselves while simultaneously being parts of cells, which are parts of living tissue and organs and complex organisms such as human beings. On another level, human beings are wholes unto themselves while simultaneously being parts of families, which are parts of neighborhoods, which are parts of bigger communities like cities, states, and nations, all the way up to being part of a world community. The limits are potentially endless. Everything is made up of these holons. And each whole provides a context for the smaller parts. A classic example from linguistics helps to describe these holonic relationships. The word “bark” is whole unto itself, yet it is the bigger context of a sentence or paragraph that contextualizes the word and gives it meaning. The bigger context lets us know whether we are talking about the bark of a dog as opposed to the bark of a tree. It lets us know whether we are talking about some menacing dog barking with bad intent or a beloved childhood pet barking to welcome its owner home after a long journey away. It is the context that gives the holon its definition.
These holonic or whole/part relationships occur in many different domains. They occur physically within our bodies and the world around us. They occur interpersonally within our social relationships, and internally within our own consciousness, in the very way we think and feel and give meaning to our experience. And they most definitely occur within the creation and appreciation of art. These whole/part relationships constitute the very foundation of our experience of reality and meaning in the world.

Integral Art

An integral approach to art attempts to acknowledge all the different whole/part relationships that contribute to the creation, appreciation, influence, and ultimately the meaning of art. To factor in all the variables that contribute to the meaning of art is a daunting task, but it can be simplified by focusing on three different perspectives that are important in understanding art in an integral way. These three perspectives are: 1) the perspective of the artist who creates the art; 2) the perspective of the viewer who gazes upon the work of art; and 3) the perspective of the outside cultural context of which the work of art is a part. Some of the artist’s attributes that influence the meaning of the art are artistic training and technical skill, aesthetic sense, creative style, and personal interests, as well as the artist’s psychological make-up, present emotional state, values, beliefs, level of cognitive development, level of self development, and level of spiritual development. While the artist obviously contributes a great deal to what a work of art ultimately means, an integral approach to art also takes into account the viewer’s contribution to the meaning of a work of art, for it is very possible that what the artist intended and what the viewer derives are very different. From an integral perspective, the viewer’s interpretations are just as much a part of the ultimate meaning of the work of art as the artist’s intentions.
Just as with the artist, the viewer’s aesthetic sense and personal interests, as well as his or her psychological make-up, present emotional state, values, beliefs, level of cognitive development, level of self development, and level of spiritual development, all help to provide a context in which to determine the meaning of the art. Finally, the outside culture plays a large role in shaping the context in which the meaning of art is created. Some of the factors within this cultural perspective that influence the meaning of a work of art are the present political currents in the world, the historical events that affect the present culture, the overall cultural values and definitions of what is beautiful and what is artistically skillful, and the overall average level of emotional, cognitive, moral, self-identity, and spiritual development within the culture. Even things like the success of the artist within a culture, the monetary value of his or her work, and the galleries in which the work is exhibited affect the overall meaning of the art.

What Level of Consciousness Does a Work of Art Represent?

One of the factors of significant importance for understanding art in a more complete way is the level of consciousness of the creator and viewer of a work of art. Consciousness can be divided into a variety of different states and levels, and all of the great wisdom traditions of the world (e.g., Buddhism, Hinduism, Taoism, Sufism, and Christian and Jewish mysticism) seem to differentiate levels of consciousness in a very similar way. We can divide consciousness into five major levels: gross/physical, mental/emotional, subtle/soulful, very subtle/causal, and nondual. Gross awareness is awareness of the gross or physical level of being. It is at this level of awareness that we all start out. Little children are almost exclusively aware of the physical aspects of the world.
And their sense of identity is, for the most part, based on their physical being. This is the realm of the five senses, and a great deal of art has the singular intention of representing only this very literal, concrete, physical level of being. As we develop we become more aware of the mental/emotional level of being. We realize that we have a mind and thoughts as well as a physical body and, while we may have had feelings before, it is at this level that we become more capable of contemplating and being truly conscious of the emotional aspects of ourselves. At this level we begin to identify not only with our bodies but with the subjectivity of our minds and emotional hearts as well. We transcend the physical identity while simultaneously including that physical identity in our emerging bigger sense of self. Over the course of history, art, like human beings, has evolved to express the more subjective mental and emotional aspects of being as well as the physical. This can be seen in the progression from exclusively concrete and literal art to works that incorporate the figurative and symbolic. If one’s consciousness continues along the developmental path it eventually reaches beyond the personal domain of the physical body, rational mind, and personality-dependent emotional heart. It moves to the stage where it becomes more and more aware of the subtler, transpersonal aspects of being, traditionally depicted as soulfulness, subtle luminosity, or transpersonal archetypes. And beyond this level is the even subtler, Causal level of being, which is often depicted as the emptiness from which all form arises. The Causal level of being is the source or cause of all things, the backdrop on which the world of form resides and from which the world of form and contexts are born. The Causal is the unmanifest, and Causal awareness involves a state of absorption in the pure, formless ground of all existence, which is Spirit or Godhead itself.
Beyond the state of Causal absorption, in which one is identified with the source from which all things arise, there is an even deeper level that transcends Causal absorption to hold both formlessness and form as inseparable. At this level of awareness there is a deeper state of integration in which there is the realization that form and formlessness are ultimately one and the same. There is the realization that the physical, mental, emotional, and soulful states of being are ultimately identical to the source from which all those states arise. They are not two but one: Nondual. From this perspective the simplest and smallest of things is simultaneously the most expansive: the universe and beyond in a single atom, the totality of being in the coo of an infant, the deepest and most profound expression and realization of Spirit in the simple act of chopping wood and carrying water. The Nondual is the realization of the ultimate oneness and the most complete quality of integration. Just as the basic, rudimentary nature of physical form and gross awareness can be expressed through art (e.g., in the painting of a bowl of fruit), so can the depth of soulfulness, the emptiness of the Causal, and the ultimate integration of the Nondual be expressed through the medium of art. There are many artists, such as Alex Grey and Wassily Kandinsky, who strive to express these higher realms in their creative endeavors, and there are many terms used to describe such works of art. Some use the terms spiritual, transformative, meditative, or visionary art. Others describe it as mystical, astral, metaphysical, or inspirational art. Some may see this art as a form of yoga or as a tool for meditation and spiritual growth. And many spiritual traditions would label such higher art using the terminology of their traditions (e.g., Buddhist art, Hindu art, Tantric art, Christian mystic art). But no matter how you label it and what tradition you are coming from, the essence is the same.
The Integral Ideal in Art

For me an integral approach to art attempts to acknowledge and represent how all of the levels of being exist simultaneously. It tries to convey how all these levels are ultimately inseparable from one another. It tries its best to celebrate the spirit of the Nondual and express that inexpressible state of profound integration. Art has the potential to help us realize our deeper nature if we allow ourselves to see how in its creative seeds there lies the birth, the source, and the death of the entire universe. Art can help us realize that the simple painting of a bowl of fruit is no different from the vast unfolding of Spirit in and as the cosmos. It can help us realize that ultimately a speck of dust holds the same depth and breadth of magic, grace, and beauty as a master’s painting, a symphony, or the perfect poetic verse. The creation and appreciation of art from an integral perspective cannot help but transform us if we are open and willing to engage in the ongoing process of broadening our horizons.

Aspiring towards an integral vision

It is my personal aspiration as an artist and a human being to embody this type of integration to the best of my ability. As an artist this means acknowledging that I am in a co-creative relationship not only with all who view my art but also with the present, modern-day culture that the art exists within. On a deeper, more personal level, my effort to live this integral vision means doing my best to acknowledge, not only within my creative endeavors but within all aspects of life, the presence of all the levels of consciousness. The physical, mental/emotional, soulful, and Causal aspects of being are all integral, all essential to the whole of existence. They are all aspects of our ultimate Nondual nature. In my art the photographs of physical form I use are representative of the physical aspects of being.
The concepts and feelings I attempt to express in my work represent the mental/emotional aspects of being. The light and luminous aspects of my images represent the more subtle, soulful luminosity of being, while the black backgrounds and shadows I use in many of my images are representative of the emptiness of the Causal ground from which all things are born. And finally, my attempt to integrate these elements into a cohesive whole is my way of expressing symbolically the ultimate integration of the Nondual. Whether or not my work or anyone’s work succeeds in expressing and conveying all these levels is not what is of greatest importance. What is of importance, at least for me, is the striving to expand and integrate. It is the journey towards that integral ideal which is most important. And a journey towards wholeness certainly holds a very important quality of wholeness in and of itself, regardless of whether or not the ultimate, idealized destination is ever reached.
http://integralarts.net/What_is_Integral.html
Big Data Report Shows Privacy Matters

After a ninety-day review (which included our own comments), the White House review group, led by John Podesta, released its report on the challenges and opportunities of big data, and how it may affect individual privacy. In tandem, the President’s Council of Advisors on Science and Technology (PCAST) released a report focused on technical issues. There have been a lot of promises made about how big data will improve our lives, and there are absolutely some uses of big data that will have positive social effects. But big data raises very real privacy concerns, and we’re pleased that the White House has raised public awareness of these issues through its review process. Users need to be able to control their personal information: big data must empower individuals, not reinforce existing power imbalances that benefit companies and governments.

A Strong Set of Policy Recommendations

At the highest level, the review group report issued six recommendations, all of which we support:
• Advancing the 2012 Consumer Privacy Bill of Rights
• Enacting national data breach legislation
• Extending privacy protection to non-U.S. individuals under the Privacy Act
• Ensuring that student data is used for educational purposes
• Expanding technical expertise to prevent discrimination
• Amending the Electronic Communications Privacy Act (ECPA)

These recommendations would, if properly enacted, improve existing protections for individual data. For example, the recent high-profile data breaches suffered by Target and other companies have pointed to a need for better security for consumer data, especially when companies hold sensitive financial or location data. A federal law would likely preempt state laws, so a weak federal standard could weaken the obligations on companies.
A federal law with strong provisions and coordinated enforcement between the federal government and state attorneys general would help alleviate those concerns and promote strong consumer protections. The report also discusses the possibility of discrimination – on racial, ethnic, class, gender, sexual orientation, or other grounds – as a real concern when companies, governments, or other entities have access to vast amounts of data that can be used to profile individuals. There have already been reports of data brokers using categories like “Ethnic Second-City Strugglers,” “Tough Start: Young Single Parents,” and “Rural and Barely Making It.” These categories could easily be used to discriminate against minority and underserved populations, and we’re pleased to see the White House echo our calls for a prohibition on such practices. Digital redlining – the 21st-century analogue to the practice designed to keep racial minorities out of certain residential areas – is unacceptable; even the fear that such practices could occur creates chilling effects that discourage adoption of new services that rely on data collection and big data analytics. While there are existing laws, like the Fair Credit Reporting Act, that do some work to prevent data discrimination, there needs to be a strong framework in place to ensure that all individuals – regardless of their background – are treated fairly in the big data context. We therefore also support the recommendation to extend the Privacy Act to cover non-U.S. individuals. Finally, the White House calls for ECPA reform, which CDT has long argued for; we wrote yesterday on how updating ECPA from 1986 to the current digital age is long overdue.

Taylor Rodriguez Doesn’t Care About Data Collection – But She Should

One of the areas where the report falls short is in empowering users to make choices about their privacy.
CDT has long advocated for limits on the collection of data, rather than relying upon use limitations to protect individuals. It’s disheartening to see that the PCAST report assumes a world in which collection is rampant, pervasive, and unceasing, and that neither report discusses government surveillance in detail. The PCAST report imagines a future world in which a fictional character, Taylor Rodriguez, prepares for a business trip. Her bag has an RFID tag; “the cloud” knows her itinerary; and airport security has access to data collected in her home. The report argues that today we might find this “creepy,” but that social notions of privacy change and that Taylor is probably OK with the situation because of the benefits. That’s a pretty big assumption to make. If Taylor chooses to let “the cloud” hold her data, track her movements, and watch her in her house, there need to be limits on who has access to that data. And there needs to be a discussion about government access to that data, which neither the Podesta report nor the PCAST report takes up. It’s obvious that commercial collection of data and the NSA’s overbroad surveillance practices are linked. To address commercial collection and use of data without discussing the danger of government access is a half answer at best. Despite PCAST’s claims in the Taylor Rodriguez story, collection of data should not be an automatic assumption. Companies, governments, and other entities that collect data from individuals need to make affirmative decisions about what data they collect, for what purposes, and for how long. While some have argued that the value of big data lies precisely in unanticipated uses, we think that context matters, and that consumers should know generally what kinds of applications their data might be used in. That may not require delineating precisely what data may be used for; instead, it can rely on context – and on the notion of contextual integrity proposed by Helen Nissenbaum.
Indeed, when companies have failed to affirmatively decide what data gets collected and how, they have often inadvertently sucked up too much data – in some cases resulting in FTC fines and consent decrees. Making deliberate choices about when to collect data is important to protecting consumers, and pervasive collection is far from a foregone conclusion. Encouraging limitations on data collection, rather than relying on use limitations to protect consumers, is beneficial both to consumers and to companies. The incorporation of more data-collecting devices into daily life doesn’t mean that all individuals will want a world in which their devices are always on and always collecting data. Automatic collection by all devices, all the time, is not a desirable or inevitable outcome, despite what some may think. Allowing for flexibility of choice – and empowering individuals to create privacy-protective spaces in their daily lives – will be of vital importance.

The Need to Respect Privacy

There have already been instances of consumer devices collecting data out of context and without consent. The FTC settled with a company that was spying on people via laptop webcams – in some instances, watching people in intimate moments in their homes. LG TVs collected data about how individuals watched TV in their homes, and then sent that data back to the company without telling the TV’s owner. If the TV was connected to a home network, it would scan the file list and send that to LG as well. It should be obvious that consumers don’t want this. When consumers purchase devices and sign up for services, they should have some control over what data those devices and services collect, and be empowered to make affirmative choices to limit those collection and use practices. The White House report criticizes the current notice-and-choice framework as inadequate, and we agree that it needs to be reformed given current practices.
But let’s not throw the baby out with the bathwater – notice and choice should be improved, not discarded, in order to more effectively protect and empower consumers.

Empowering Consumers in the Big Data Age

The major lesson here is that individuals need more power over their devices and services. The Podesta report recognizes this and discusses some good solutions, including endorsing Do Not Track and global privacy controls across sites. CDT has long supported the use of consumer controls – such as mobile device settings – to help consumers make choices regarding data collection and usage. The White House’s calls for more technical measures to empower users, and for technical analysis of how existing laws can be used to prevent discrimination in big data practices, are welcome, and we hope that government and the private sector will work to promote research and development. Big data holds a great deal of promise, but consumer rights and civil liberties need to be placed at the center of the debate from the beginning. The White House report is a step in that direction, but actual, specific practices and protections will need to be created and implemented as soon as possible.
https://cdt.org/insights/big-data-report-shows-privacy-matters/
This new collection of essays follows in the footsteps of the successful volume Thinking Ahead - Essays on Big Data, Digital Revolution, and Participatory Market Society, published at a time when our societies were on a path to technological totalitarianism, as exemplified by the mass surveillance revealed by Edward Snowden and others. Meanwhile the threats have diversified, and tech companies have gathered enough data to create detailed profiles about almost everyone living in the modern world - profiles that can predict our behavior better than our friends, families, or even partners can. This data is used not only to manipulate people’s opinions and voting behavior, but more generally to influence consumer behavior at all levels. It is becoming increasingly clear that we are rapidly heading towards a cybernetic society, in which algorithms and social bots aim to control both societal dynamics and individual behavior.
https://link.springer.com/book/10.1007%2F978-3-319-90869-4
All these elements tend to disturb the natural composition of air and make it harmful to our health. Keeping air quality at a safe level is one of the major challenges faced by modern urban communities today. Farmers also need to be aware of the chemicals they use, because these may run off and contaminate nearby bodies of water. What Is Soil Pollution? Even for contaminants that occur naturally in soil at levels too low to pose a risk, soil pollution is still said to occur once the levels of those contaminants exceed the levels that should naturally be present. Types of Pollution: radiation, chemical, oil spills, noise pollution. Soil Pollution Causes: All soils, whether polluted or unpolluted, contain a variety of compounds and contaminants that are naturally present. Such contaminants include metals, inorganic ions, and salts. These compounds are mainly formed through soil microbial activity and the decomposition of organisms. Additionally, various compounds get into the soil from the atmosphere, for instance with precipitation, as well as through wind activity or other types of soil disturbance, and from surface water bodies and shallow groundwater flowing through the soil. When the amounts of soil contaminants exceed natural levels (what is naturally present in various soils), pollution is generated. There are two main causes through which soil pollution is generated. Natural Pollutants: natural processes can lead to an accumulation of toxic chemicals in the soil. This type of contamination has only been recorded in a few cases, such as the accumulation of elevated levels of perchlorate in soil from the Atacama Desert in Chile, an accumulation that is purely due to natural processes in arid environments. Man-made pollutants, discussed below, can pollute the soil either alone or combined with natural soil contaminants.
Man-made soil pollution is usually caused by the improper disposal of waste from industrial or urban sources, by industrial activities, and by agricultural pesticides. The main sources of soil pollution are biological agents and certain human activities. Soil contaminants are the products of soil pollutants that contaminate the soil. Human activities that pollute the soil range from agricultural practices that infest crops with pesticide chemicals to urban or industrial wastes and radioactive emissions that contaminate the soil with various toxic substances. Biological Agents: biological agents work inside the soil to introduce manures and digested sludge, derived from human, bird, and animal excreta, into the soil. Agricultural Practices: the soil of croplands is polluted to a large extent with pesticides, fertilizers, herbicides, slurry, debris, and manure. Urban Waste: urban waste consists of garbage and rubbish materials, dried sludge, and sewage from domestic and commercial waste. Industrial Waste: industries producing steel, pesticides, textiles, drugs, glass, cement, petroleum, etc. generate wastes that pollute the soil. Examples of the most common and problematic soil pollutants can be found below. Pollution and physical barriers to root growth promote loss of urban tree cover. Animal populations are inhibited by toxic substances, vehicles, and the loss of habitat and food sources. "Toxic Development: Pollution in India" documents the pollution crisis in both rural and urban areas by focusing on the environmental and social impacts of three of India’s most under-reported but leading causes of pollution: chromium contamination from tanneries along the Ganges river, lead pollution from battery recycling plants in Kolkata. Drinking water can be a source of exposure to chemicals caused by nutrient pollution. U.S. 
Geological Survey found that nitrates were too high in 64 percent of shallow monitoring wells in agricultural and urban areas. Causes of Urban Heat Island: the urban heat island (UHI) is an ‘inadvertent’ modification of the climate, caused by changes to the form and composition of the land surface and atmosphere. When a land cover of buildings and roads replaces green space, the thermal, radiative, moisture, and aerodynamic properties of the surface and the atmosphere are altered. Petrochemical waste is one of the main causes of water pollution in urban areas. Sewage disposal sometimes causes water pollution. Sewage is another human product which often ends up as a major pollutant of water sources.
https://zyfuryqofaxymyte.leslutinsduphoenix.com/main-causes-of-pollution-in-urban-17781vp.html
Almost one person in four – and one young person in three – experiences poverty or social exclusion; more than 118.7 million Europeans are at risk of poverty or social exclusion, including 26.9 percent of children and 17.4 percent of elderly people. Globalisation, technological changes, digitalisation and the economic crisis have increased inequalities in the EU. Over the years, the Parliament has repeatedly called for adequate social protection, particularly for vulnerable groups that include disabled people, low-income families, young people, and single parents. In a resolution adopted in November 2017, MEPs stressed that reducing inequalities was a precondition for economic recovery, decent job creation, social cohesion, and prosperity in the EU. The situation has become so unbearable that the European Parliament organised a special debate today on finding ways to tackle poverty across the EU. The initiator of today’s debate, Udo Bullmann, acting leader of the Group of the Progressive Alliance of Socialists and Democrats (S&D), said: “Inequality has reached shocking new levels and is threatening to tear our societies apart, not only globally but also in Europe. In the EU, 120 million people are poor. 28 million children live in households that cannot provide for their basic needs: food, healthcare, education or housing. It is shameful that the richest continent of the world does not take proper care of its own children. “Even though the European economy is on the mend again and unemployment is going down, income and wealth inequality are on the rise. An alarming 40% of wealth is in the hands of only 1% of the richest Europeans. Clearly, the neoliberal myth of trickle-down economics is not going to sort out the situation, nor are today’s insufficient policy efforts. The election of Trump and the Brexit vote, which have also stemmed from a growing disaffection with the status quo, must be the wake-up call for the last sleepwalkers. 
The growing gap between the rich and the poor is undermining the social and democratic fabric of our societies. It is high time to act now.” Marianne Thyssen, Commissioner for Employment, Social Affairs, Skills and Labour Mobility, warned on behalf of the Commission that the “discontent of the people who feel they have been left behind fuels the rise of nationalism, protectionism, and populism”. Eurosceptic MEPs, especially from the UK and Poland, insisted that the effects of Middle East migration have been the driving force behind people’s discontent. They actually asked for more deregulation and fewer rules, while also speaking against the uniformisation of social security and of pension schemes. Agnes Jongerius, S&D MEP and spokesperson on employment, answered: “It’s a scandal that working class families are falling behind while big corporations are dodging taxes and driving down wages. Our economy must again work for the benefit of all and ensure that no one is left behind.” Many MEPs asked the EU to end the rhetoric of “free trade”, while the Estonian ALDE MEP Yana Toom also spoke about the poverty induced by geopolitical factors, such as the situation in some regions of Estonia close to the Russian border and inhabited by Russian speakers. Some facts and figures on poverty and inequality in Europe: • Income inequality remains at an all-time high; the Gini coefficient on income distribution increased on average from 0.28 in the 1980s to 0.38 in 2016. • The average income of the poorest 10 percent is 9½ times lower than that of the richest 10 percent. • The least wealthy 40 percent of households own a little over 3 percent of total wealth, while the wealthiest 10 percent of households hold 50 percent. The extreme concentration of wealth is even more significant to consider, with nearly 40 percent of total wealth in the hands of the 1 percent richest Europeans. 
• The post-crisis job gaps are closing on average in Europe, but there are still 1.4 million fewer jobs in the EU in 2015 compared with 2007. In addition, the data does not describe the quality or working conditions of these jobs. • Gender gaps in employment and earnings have declined in most countries in the EU, but at 9.8 percent and 12.8 percent respectively they persist – and women are still disadvantaged in terms of the type of jobs and occupation they hold. • There are close links between socio-economic backgrounds and education and health outcomes. Men with lower levels of education have 2.7 years less life expectancy than the better educated, and women, 1.2 years. • There is a gap in education outcomes among individuals with different parental socio-economic backgrounds. A child from an advantaged socio-economic background will score on average 20 percent higher in mathematics than a child from a disadvantaged background. • Low-skilled youth who are disconnected from both employment and learning represent 17 percent of 15-29 year-olds in the EU, and risk being permanently left behind in the labour market. • Over the last 20 years, the gap in productivity level between the frontier regions of Europe and the bottom 10 percent increased by 56 percent. • Major inequalities remain across countries in terms of overall employment, with unemployment rates reaching 20.7 percent in Greece versus 3.7 percent in Germany.
https://www.neweurope.eu/article/end-the-rhetoric-of-free-trade-and-tackle-inequality-eu-parliament-urges/
The Human Immune System 101: The Overactive Immune System Foreword: The immune system is one of the most complex, nuanced biological systems that function to keep the human race alive each and every day. It is at once specific and broad, lifelong and short-term, inherited and acquired. While it defends against deadly threats such as bacteria and viruses, defects in the immune system can be just as fatal. The Immune System 101 articles describe how this contradictory system works and summarize the variety of functions of the immune system, its importance, and its potential failings in six different articles. 1. The Human Immune System 101: Innate Immunity 2. The Human Immune System 101: Adaptive Immunity 3. The Human Immune System 101: Bacterial & Viral Defenses Against the Immune System 4. The Human Immune System 101: The Overactive Immune System 5. The Human Immune System 101: The Underactive Immune System 6. The Human Immune System 101: The Immune System and Cancer An overactive immune system may seem like a good thing, even a great thing to have. However, it can be detrimental to one’s health. Autoimmune diseases occur when the immune system fails to recognize and differentiate self from non-self tissue; immune cells, instead of attacking and killing foreign pathogens, target one’s own healthy tissue, resulting in an autoimmune disease (Orbital, 2022). There are several different autoimmune diseases, some of which involve multiple organ systems and some of which involve only one system or cell type. These disorders include systemic lupus erythematosus (lupus), type 1 diabetes, rheumatoid arthritis, multiple sclerosis, and a multitude of others (Orbital, 2022). 
Figure 1: Autoimmune Symptoms Systemic Lupus Erythematosus Systemic lupus erythematosus—more commonly known simply as lupus or SLE—is a common autoimmune disease that at least five million people worldwide suffer from, according to the Lupus Foundation of America (Unknown, Lupus Foundation of America, 2019). SLE “is a chronic autoimmune disease characterized by the production of autoantibodies specific for components of the cell nucleus and that causes damage to body tissues and organs,” (Arneth, 2019, p. 1). An important aspect of the onset of the disease deals with tolerance: the immune system's ability to distinguish between self and foreign tissue. During the production of antibodies and the B cells that release those antibodies, the antibodies are typically tested for reactivity and tolerance against self-cells. However, testing for self-tolerance that occurs in the bone marrow is not fail-proof, and in patients with autoimmune disorders, self-reactive B cells exit the bone marrow despite checkpoints. At this stage of immature B cell development, the cell-surface antibody can bind antigens. In the bone marrow microenvironment in which immature B cells emerge, antigens that engage the BCR (B cell receptor) will almost always be self-antigens, which makes regulation at this stage essential...These processes are collectively known as central tolerance. (Nemazee, 2017, p. 281). In a healthy individual, during central tolerance, B cells that react with self-antigens in the bone marrow microenvironment undergo receptor editing, in which the self-reactive immunoglobulin receptor is modified. Additionally, after this initial stage of B cell development, B cells go through a process known as peripheral tolerance in the spleen and lymph nodes; during this stage, B cells with BCRs that bind to self tissue are impaired, or induced by other regulatory cells to undergo cell death (Nemazee, 2017). 
Figure 2: Mechanisms of Self Tolerance These mechanisms of tolerance are not fail-proof and in patients with autoimmune diseases, there is a breakdown or error in them resulting in abnormal antibodies circulating in the blood and lymph system. Two of the most common antibody abnormalities found in patients with lupus are antinuclear antibodies (ANAs) and anti-DNA antibodies. ANAs “are a class of antibodies that bind to cellular components in the nucleus including proteins, DNA, RNA, and nucleic acid-protein complexes,” (Nosal, Superville, et al., 2021, p. 1). If antinuclear antibodies are present in a substantial concentration, then the patient’s own immune system begins to target the nucleus of the cell which houses the DNA and contains markers that antinuclear antibodies recognize and bind to. Antinuclear antibodies are able to penetrate living cells and bind to extracellular segments of degraded nuclei after a cell undergoes cell death (Alarco, 2001). This is one of the reasons why lupus patients often experience photosensitivity when exposed to the sun for long periods of time; UV light damages DNA in healthy skin cells, causing those cells to die and release fragmented nuclei parts which allows for a robust response from the antinuclear antibodies (Alarco, 2001). When ANAs bind to these nuclear segments, they form immune complexes which signal for the activation of the complement cascades. The C3 and C4 complement levels may be low in SLE patients during a period of high disease activity as they are used for immune complex clearance. Figure 3: How Immune Complexes Form With Anti-DNA Antibodies The most common symptoms of SLE are fatigue, chronic low-grade fever, a malar rash on the face in the shape of a butterfly, joint pain, sensitivity to sunlight (photosensitivity), hair loss, weight loss, shortness of breath, headaches, and chest pains (Unknown, 2021). 
Additionally, around 80-90% of lupus patients experience neuropsychiatric (mental health) symptoms according to the Lupus Research Alliance. These symptoms include depression, anxiety, and bipolar disorder, which patients with lupus are 1.74 times more likely to develop than the general population (Tiosano, Gendelman, et al., 2017). In a case study surveying respiratory involvement in lupus patients, it was found that this particular disease manifestation can "present in very different ways, both concerning the clinical picture, acute or chronic, and the anatomic localizations. It is frequent and its detection is essential as the prognosis has been transformed by the use of immunosuppressant treatments," (Carmier, Marchand-Adam, et al., 2010, p. 75). Additionally, low blood oxygen was found in 25% of these cases, with chronic interstitial pneumonia being observed in 3 to 13% of lupus patients (Carmier, Marchand-Adam, et al., 2010). There are many different treatments for SLE despite there being no cure, and with these treatments the disease can often be managed. Steroid treatments are often used for acute flare-ups of disease activity to reduce inflammation and pain; however, due to the long-term effects of steroids such as osteoporosis, this treatment is not typically favored for treating the disease in the long run (Unknown, Center for Disease Control, 2022). Additionally, antimalarial drugs such as hydroxychloroquine are used for long-term treatment to lower the concentration of disease-causing autoantibodies and have been known to help prevent organ damage if taken long-term. Immunosuppressive medications such as methotrexate and mycophenolate mofetil are also used to manage disease activity; however, one must be careful to avoid contracting infections when taking immunosuppressives (Unknown, Center for Disease Control, 2022). Finally, biologics such as Benlysta are often used. 
Benlysta is a monoclonal antibody that binds to B lymphocyte stimulator, a protein that helps B cells proliferate. When the Benlysta antibodies bind to the B lymphocyte stimulator, the concentration of autoantibody-producing B cells is reduced, and therefore disease activity is reduced too (Unknown, Center for Disease Control, 2022). Type 1 Diabetes Type 1 diabetes is another common autoimmune disorder; however, unlike SLE, it is specific to the pancreas instead of affecting multiple organ systems. Type 1 diabetes is “characterized by autoreactive T cell-mediated destruction of insulin-producing pancreatic beta-cells,” (Toren, Burnette, et al., 2021, p. 1). Beta cells are found in the pancreas and are responsible for producing insulin, which allows glucose to enter the body’s cells and provide them with energy. When a person consumes food, glucose enters the bloodstream, and insulin is needed to allow the sugar to actually enter the cells (Toren, Burnette, et al., 2021). The pancreas also responds to low blood glucose by releasing glucagon—which is made by alpha cells in the pancreas—to mobilize the sugar stored in the form of glycogen in the liver and muscles. In a person without diabetes, insulin binds to insulin receptors on cells throughout the body to open up a glucose channel, which in turn allows glucose molecules to enter the cells and lowers a person's blood sugar (Toren, Burnette, et al., 2021). However, in a person with type 1 diabetes, because the insulin-producing beta cells of the pancreas are the target of T cells, little insulin is made and glucose is unable to enter the cells (Unknown, Mayo Clinic, 2022). Figure 4: How Insulin Works In type 1 diabetes, beta cells are destroyed by self-reactive T cells, impairing the cellular uptake of glucose. Both CD4+ and CD8+ T cells have been found to be self-reactive in diabetes patients (Knip & Siljander, 2008). 
Similar to B cells, T cells undergo tolerance testing to prevent autoimmune diseases, and this process isn't fail-proof either. During development in the thymus, T cells "undergo T-cell receptor (TCR)-mediated apoptosis, a process known as negative selection. Negative selection is extremely important for establishing a functional immune system, as it provides an efficient mechanism for ridding the T-cell repertoire of self-reactive and potentially autoimmune lymphocytes," (Palmer, 2003, p. 383). For people without diabetes, this means that the population of T cells within a person's body is largely self-tolerant and won't attack healthy cells. However, in type 1 diabetes, when these processes of tolerance testing fail, T cells, specifically CD8+ T cells, attack insulin-producing beta cells. Patients with diabetes are at a higher risk for heart disease, nerve damage, kidney damage, eye damage, and foot damage; furthermore, the symptoms of type 1 diabetes include fatigue, hunger, weight loss, increased urination, and increased thirst (Unknown, Mayo Clinic, 2022). While there is no cure for type 1 diabetes, the disease can be managed by monitoring one’s blood sugar levels, injecting insulin when the levels are too high, and eating glucose when one’s blood sugar is too low, a state known as hypoglycemia. Diabetic ketoacidosis occurs when a diabetic's blood sugar is too high (hyperglycemia): glucose is unable to enter the cells, and the body begins to break down fat instead. In one case study, a 6-year-old boy was found to be hyperglycemic with a fasting blood sugar level of 300 mg/dl, when a healthy fasting blood sugar is considered below 99 mg/dl. This hyperglycemia led to symptoms of diabetic ketoacidosis such as frequent urination, stomach pain, confusion, fatigue, and excessive thirst (Kumar, 2015). The hyperglycemia was treated with insulin and the ketoacidosis resolved. Figure 5: How the Pancreas Makes and Releases Insulin. 
Autoimmune disorders are the result of a failure of a person’s immune system to distinguish foreign pathogens from healthy self-tissues. These disorders can be broad, such as SLE or rheumatoid arthritis, or specific to a certain cell type, such as type 1 diabetes or multiple sclerosis. Additionally, they are often difficult to manage and can be detrimental not only to one’s physical health but to one’s mental health as well. The cellular mechanisms responsible for preventing immune cells from attacking one's own cells are not fail-proof, and a combination of genetic, environmental, and physical factors may lead to a breakdown in those systems and result in an autoimmune disease. Bibliographic References Alarco, D. (2001). Antinuclear antibodies: to penetrate or not to penetrate, that was the question. Lupus, 10(5), 315–318. https://doi.org/10.1191/096120301669579565 Arneth, B. (2019). Systemic Lupus Erythematosus and DNA Degradation and Elimination Defects. Frontiers in Immunology, 10. https://doi.org/10.3389/fimmu.2019.01697 Carmier, D., Marchand-Adam, S., Diot, P., & Diot, E. (2010). Respiratory involvement in systemic lupus erythematosus. Revue Des Maladies Respiratoires, 27(8), e66–e78. https://doi.org/10.1016/j.rmr.2010.01.003 Knip, M., & Siljander, H. (2008). Autoimmune mechanisms in type 1 diabetes. Autoimmunity Reviews, 7(7), 550–557. https://doi.org/10.1016/j.autrev.2008.04.008 Kumar, S. (2015). Type 1 diabetes mellitus-common cases. Indian Journal of Endocrinology and Metabolism, 19(7), 76. https://doi.org/10.4103/2230-8210.155409 Nemazee, D. (2017). Mechanisms of central tolerance for B cells. Nature Reviews Immunology, 17(5), 281–294. https://doi.org/10.1038/nri.2017.19 Nosal, R. S., Superville, S. S., & Varacallo, M. (2021). Biochemistry, Antinuclear Antibodies (ANA). In StatPearls. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK537071/ Orbital, A. M. (2022, July 22). What Are Common Symptoms of Autoimmune Disease? 
Johns Hopkins Medicine. https://www.hopkinsmedicine.org/health/wellness-and-prevention/what-are-common-symptoms-of-autoimmune-disease#:%7E:text=Autoimmune%20disease%20happens%20when%20the,wide%20range%20of%20body%20parts. Palmer, E. (2003). Negative selection — clearing out the bad apples from the T-cell repertoire. Nature Reviews Immunology, 3(5), 383–391. https://doi.org/10.1038/nri1085 Tiosano, S., Gendelman, O., Comaneshter, D., Amital, H., Cohen, A., & Amital, D. (2017). THU0248 The association between systemic lupus erythematosus to bipolar disorder – a real-life study. Poster Presentations. https://doi.org/10.1136/annrheumdis-2017-eular.3667 Toren, E., Burnette, K. S., Banerjee, R. R., Hunter, C. S., & Tse, H. M. (2021). Partners in Crime: Beta-Cells and Autoimmune Responses Complicit in Type 1 Diabetes Pathogenesis. Frontiers in Immunology, 12. https://doi.org/10.3389/fimmu.2021.756548 Unknown. (2022). Type 1 Diabetes. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/type-1-diabetes/symptoms-causes/syc-20353011 Unknown. (2019). Lupus facts and statistics. Lupus Foundation of America. https://www.lupus.org/resources/lupus-facts-and-statistics#:%7E:text=How%20common%20is%20lupus%20and,and%20teenagers%20develop%20lupus%2C%20too.
https://www.byarcadia.org/post/the-human-immune-system-101-the-overactive-immune-system
What is photosynthesis PPT? DEFINITION OF PHOTOSYNTHESIS • Is the process by which autotrophic organisms use light energy to make sugar and oxygen gas from carbon dioxide and water. Occurs in plants, algae and some prokaryotes • Anabolic (small molecules combined) • Endergonic (stores energy) • Stored as carbohydrate in their bodies. Where does photosynthesis take place PPT? Photosynthesis in plants happens in the chloroplasts. Chloroplasts are full of thylakoids stacked into grana. The thylakoid membranes are lined by pigments such as chlorophyll and carotenoids. These pigments harvest light energy packets, or photons, when they absorb sunlight. What is photosynthesis a level biology? Photosynthesis is the method by which plants and other photoautotrophs use light energy to produce ATP via photophosphorylation in order to anabolise sugars. It is an energy transfer process, and almost all energy transferred to ATP in all organisms is derived from light energy trapped by autotrophs. What is photosynthesis PDF? Photosynthesis is a chemical process that converts carbon dioxide into different organic compounds using solar energy. Oxygenic and anoxygenic photosynthesis undergo different reactions in the presence and absence of light (called light and dark reactions, respectively). What are types of photosynthesis? There are different types of photosynthesis, including C3 photosynthesis and C4 photosynthesis. C3 photosynthesis is used by the majority of plants. It involves producing a three-carbon compound called 3-phosphoglyceric acid during the Calvin Cycle, which goes on to become glucose. What is Calvin Cycle PPT? • The Calvin cycle is not totally independent of light since it relies on ATP and NADPH, which are products of the light-dependent reactions. • The reactions of the Calvin cycle can be organised into three basic stages: 1) fixation, 2) reduction, 3) regeneration. How plants make their own food PPT? How plants make food PPT? 
TO SUM UP Plants combine carbon dioxide from the air and water from the soil to make glucose. The energy needed for this process comes from sunlight. The sunlight is absorbed by chlorophyll contained in the chloroplasts of the leaf. What are the stages of photosynthesis in order? Photosynthesis takes place in two sequential stages: the light-dependent reactions; the light-independent reactions, or Calvin Cycle. What is the importance of photosynthesis? Importance of photosynthesis: Photosynthesis converts radiant or solar energy into chemical energy. Productivity of agricultural crops directly depends upon the rate of photosynthesis. It provides oxygen in the atmosphere for all living organisms. It maintains the balance of oxygen and carbon dioxide in the ecosystem. What are the factors that affect photosynthesis? Three factors can limit the rate of photosynthesis: light intensity, carbon dioxide concentration and temperature. What is photosynthesis a level answer? What is photosynthesis? Photosynthesis is the name of the process whereby organisms convert light energy from the sun into sugar and other organic molecules. They can then go on to use these substances as chemical energy, in the same way that humans use food. Photosynthesis is essential for almost all life on earth. How do you learn photosynthesis a level? What is photosynthesis equation? The process of photosynthesis is commonly written as: 6CO2 + 6H2O → C6H12O6 + 6O2. This means that the reactants, six carbon dioxide molecules and six water molecules, are converted by light energy captured by chlorophyll (implied by the arrow) into a sugar molecule and six oxygen molecules, the products. What are the 7 steps of photosynthesis? - Step 1-Light Dependent. CO2 and H2O enter the leaf. - Step 2-Light Dependent. Light hits the pigment in the membrane of a thylakoid, splitting the H2O into O2. - Step 3-Light Dependent. The electrons move down to enzymes. - Step 4-Light Dependent. - Step 5-Light Independent. 
- Step 6 (light-independent): The Calvin cycle. What is photosynthesis (explained with a diagram)? Photosynthesis is the process of preparing food in the presence of sunlight and chlorophyll. In this process, the plant uses water from the soil and carbon dioxide from the air. It occurs in green plants, certain algae, blue-green algae and bacteria. What is the unit of photosynthesis? The minimal and fundamental unit of photosynthesis in higher plants is a single chloroplast (1). A leaf can be understood as a photosynthetic system containing many chloroplasts. What are the 3 main types of photosynthesis? There are three main types of photosynthetic pathways: C3, C4 and CAM. They all produce sugars from CO2 using the Calvin cycle, but each pathway is slightly different. What are the two stages of photosynthesis? There are two main stages of photosynthesis: the light-dependent reactions and the Calvin cycle. The light-dependent reactions take place in the thylakoid membrane. What are the 2 kinds of photosynthesis? The reactions that make up the process of photosynthesis can be divided into light-dependent reactions, which take place in the thylakoids, and light-independent reactions (also known as dark reactions or the Calvin cycle), which take place in the stroma. What are the C3 and C4 pathways? Definition: C3 plants use the C3 pathway, or Calvin cycle, for the dark reactions of photosynthesis, while C4 plants use the C4 pathway, or Hatch-Slack pathway. Season: C3 plants are cool-season plants, commonly found in cool and wet areas. Why is the C3 cycle called so? The conversion of CO2 to carbohydrate is called the Calvin cycle, or C3 cycle, and is named after Melvin Calvin, who discovered it. The plants that undergo the Calvin cycle for carbon fixation are known as C3 plants.
The Calvin cycle requires the enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase, commonly called RuBisCO. What are the 3 stages of the Calvin cycle? The Calvin cycle reactions can be divided into three main stages: carbon fixation, reduction, and regeneration of the starting molecule. How do green plants produce food? Green plants have the ability to make their own food. They do this through a process called photosynthesis, which uses a green pigment called chlorophyll. A pigment is a molecule that has a particular color and can absorb light at different wavelengths, depending on the color.
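The overall equation quoted above, 6CO2 + 6H2O → C6H12O6 + 6O2, can be sanity-checked by counting atoms on each side. The small Python sketch below does this; the `count_atoms` helper is written just for this example and only handles simple formulas without parentheses:

```python
from collections import Counter
import re

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula like 'C6H12O6' (no parentheses)."""
    atoms = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coefficient * (int(n) if n else 1)
    return atoms

# Reactants: 6 CO2 + 6 H2O
reactants = count_atoms("CO2", 6) + count_atoms("H2O", 6)
# Products: C6H12O6 + 6 O2
products = count_atoms("C6H12O6") + count_atoms("O2", 6)

# 6 C, 12 H and 18 O appear on each side, so the equation is balanced
assert reactants == products
print(dict(reactants))
```

The same helper can be reused to check any of the simpler formulas mentioned in the text, such as 3-phosphoglyceric acid's empirical formula.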
https://scienceoxygen.com/how-do-you-do-a-photosynthesis-presentation/
There are several kinds of information, but the most useful definition is that news is a thorough record of an event or action. It is written in an organised way and must be addressed to a specific target audience. Consequently, news should prompt readers to act, which is the key purpose of a newspaper. Regardless of the type of news you read, there are some basic guidelines for writing news that will help you compose engaging material. The Oxford Dictionary defines news as "any timely occurrence, publication, or information that has become noteworthy". Gerald W. Johnson and William Stead further define news as "any event or incident that is unusual or not known to the layperson prior to its presentation". A common use of the term "breaking news" today is in the context of commercial broadcast cable services, which use live satellite communications technology to deliver breaking news to customers all over the world. Along with the written word, news is also a form of media. It is what allows people to make informed decisions and act upon current events. As long as news is current, there is no reason it cannot be trusted. And while it is not perfect, the ability to share and consume it with others is essential for a healthy society, and news is essential in making the right choices. News is a necessary part of our lives. The most interesting news is not published anywhere else: it is an unpublished account of human activity that aims to inform, educate, and interest readers. To be classified as news, a piece of information needs to be new and unpublished and relate to human activity. Further, it must be of interest to readers, whether physical or psychological. That is why it is so important to read the news. You'll be glad you did!
So, prepare yourself for the latest buzz. News is the most up-to-date information on current events. It helps us make decisions, including political ones, and provides us with the best information possible. We can also use the news to inform ourselves about issues that affect our daily lives. In short, news is crucial to our lives, so it is essential to know what is going on in the world, especially if you wish to make informed decisions; news informs you of the things that are happening around the world. The purpose of news is to inform and educate people about current events. It is not intended to entertain; rather, it is there to educate and enlighten. But there are also other ways to get entertainment from it. In newspapers, for example, you can read crossword puzzles or cartoons, or listen to the radio. If you want to be entertaining rather than monotonous, try writing a story with humour to make your readers laugh. Today, we are able to access an abundance of information about any topic through news outlets. The weather, for example, is a constant news subject: unusually high or low temperatures can lead to drought or flooding. Food and drink stories are interesting to people of all classes, as they can inform readers about new tastes and the prices of their favourite foods. Health and wellness stories cover medical discoveries and conventional remedies, and feature diet and nutrition. Sex is a popular subject in all cultures, and many news stories focus on behaviour that is outside society's norms. News is a comprehensive report of an event or activity that happened.
The writer of news is a reliable source of information that allows the recipient to make decisions based on the latest facts. Traditionally, news was a broadcast programme, which required a reporter to bring it in. Later, news was produced in print, and had to be hand-written and transmitted through wire services. By the 1970s, commercial broadcast cable news services were available around the clock, and breaking-news feeds are now updated immediately. The purpose of the news media is to enlighten and inform. While entertainment is a prominent side effect of the news, it should not be its focus; the news ought to be a source of information rather than a source of amusement. While news stories about the economy are sometimes serious and controversial, presenting them in an engaging way helps to hold readers' attention. In this manner, people can get the right information and make informed choices. Even if the content of the news is not exciting, it still holds value as a source of information. The goal is not to captivate but to keep people interested. In newspapers, for example, cartoons and crossword puzzles provide the entertainment readers need, so the news itself does not have to be monotonous. The content of a news story can be entertaining if it is written in a manner intended to keep the reader amused; if you have a sense of humour, news stories can be fun to read. As we all know, the news is not simply a record that educates us about a specific event, but also a form of information that affects our lives. It can influence our daily choices and the way we interact with others. Whether it is about politics or science, news has a substantial effect on our lives and helps us make informed choices.
So, if you're searching for news, it's not just about entertainment; it's about a worldwide community. What is news? A piece of news is a specific broadcast that is widely distributed in the media. It provides information about recent events. A news story can come from anywhere and may be local or international. Regardless of where it originates, it is likely to have an influence on people and their lives. If you're looking for a job in this field, you might want to be a journalist; in some cases, you could be an executive in a corporate environment.
https://thedirectdemocracy.com/2022/02/21/this-tale-behind-information-will-haunt-you-forever/
Please use this identifier to cite or link to this item: http://hdl.handle.net/1/3708 Full metadata record |DC Field||Value||Language| |dc.contributor.author||Bange, Michael||-| |dc.date.accessioned||2016-01-06T04:51:16Z||-| |dc.date.available||2016-01-06T04:51:16Z||-| |dc.date.issued||2003-06-30||-| |dc.identifier.uri||http://hdl.handle.net/1/3708||-| |dc.description.abstract||The availability of accurate and continuous weather and climate data is essential for strategic research, operation of decision support systems (e.g. CottonLOGIC, the OZCOT crop simulation model) and numerous operational aspects of cotton agronomy and management. In addition, historical climate data is being used by researchers to assess the potential for cotton growing in new regions and to analyse the performance of crops in current seasons in the context of the whole climatic record. Increasingly, consultants and growers are using this information to make informed management decisions. At the start of this project the industry supported the maintenance of 14 stations spread throughout the cotton growing regions. These stations require regular maintenance and annual calibration. The information collected from these stations is made available via the Cotton CRC’s web site. After numerous problems installing the network, the stations were operating at an acceptable level. However, the stations are now over nine years old, components are failing more frequently, and the stations show visible signs of degradation from the weather. A brief outline of the major results and outcomes from this project is given below. 1. Provision of weather information through the existing cotton industry weather station network. These weather stations are now over nine years old. Of the initial 14, only five remain functioning. The parts from the stations removed from the field are being used to maintain these functioning stations. Three are being used for experiments while two remain in the field (Merah North and Breeza).
More reliable and continuous data is now available through the SILO Internet site. During this time, as part of this project, we also installed a new weather station at ACRI. 2. Provision of historical climate patched point data sets to the Australian Cotton Research Institute for research purposes. ACRI and Cotton CRC researchers now have reliable and easy access to up-to-date weather information and historical data for research purposes. Researchers access this data through an internal website from a data server located at ACRI. On a daily basis the SILO web service updates the server with the latest numerical weather measurements recorded. This service has enabled any project utilising this data to run more effectively; there is no longer a significant cost in time and dollars to obtain this data. 3. Collaboration with partners of the SILO project to develop weather and climate tools specific to the cotton industry. Gaining reliable and easy access to continuous climate datasets has enabled a number of initiatives in which tools for research and decision support were developed in collaboration with the SILO team. Currently all tools developed are available free to members of the Australian cotton industry. The tools developed and initiatives undertaken during this project are as follows: SILO/Cotton CRC day degree calculator - allows the user to enter a starting date and a finishing date to calculate the day degrees. SILO/Cotton CRC day degree target calculator - this decision tool is similar to the day degree calculator described above, but differs in enabling the user to specify a target day degree. Early season diagnosis tool - the early season diagnosis (ESD) tool was developed for the Cotton CRC’s website to assist with the agronomic management of cotton crops. The ESD has been linked to the SILO day degree calculator, and allows the user to enter the sowing date and the dates on which the measurements were taken.
HydroLOGIC - Fundamental to the effective use of the HydroLOGIC software, released to the cotton industry in September 2003, is access to daily weather data. The SILO initiative was able to contribute to the development of HydroLOGIC by providing access to historical patched point datasets for major cotton growing regions and by developing computing routines to enable HydroLOGIC to directly access SILO’s patched point datasets from the Internet.||en_US| |dc.description.sponsorship||CRDC||en_US| |dc.publisher||CSIRO Plant Industry||en_US| |dc.relation.ispartofseries||;CSP128C||-| |dc.subject||availability of accurate and continuous weather and climate data||en_US| |dc.subject||CottonLOGIC||en_US| |dc.subject||OZCOT crop simulation model||en_US| |dc.subject||SILO web services||en_US| |dc.subject||day degree target calculator||en_US| |dc.subject||early season diagnosis (ESD) tool||en_US| |dc.subject||HydroLOGIC software||en_US| |dc.title||Enhancing Access to climate and weather data||en_US| |dc.type||Technical Report||en_US| |Appears in Collections:||2003 Final Reports| Files in This Item: |File||Description||Size||Format| |CSP128C Final Report.pdf||3.93 MB||Adobe PDF||View/Open| Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
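The day degree calculator described in the abstract can be sketched as a simple growing-degree-day accumulation over a date range. This is a generic illustration, not the actual SILO/Cotton CRC implementation, and the base temperature of 12 °C is an assumption for the sake of the example:

```python
def day_degrees(daily_temps, base=12.0):
    """Accumulate day degrees over a run of days.

    daily_temps: list of (t_min, t_max) tuples in deg C, one per day
                 between the starting and finishing dates.
    Each day contributes max(0, (t_min + t_max) / 2 - base), so days
    whose mean temperature is below the base add nothing.
    """
    total = 0.0
    for t_min, t_max in daily_temps:
        total += max(0.0, (t_min + t_max) / 2.0 - base)
    return total

# Example: three days of minima/maxima for a hypothetical station
temps = [(15.0, 31.0), (14.0, 28.0), (10.0, 12.0)]
print(day_degrees(temps))  # (23-12) + (21-12) + 0 = 20.0
```

The day degree target calculator mentioned above would simply step through the daily records, accumulating this same quantity until a user-specified target total is reached.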
http://www.insidecotton.com/jspui/handle/1/3708?mode=full
How Did The Planets Form? The Sun and the planets formed together 4.6 billion years ago from a cloud of gas and dust called the solar nebula. A shock wave from a nearby supernova explosion probably initiated the collapse of the solar nebula. The Sun formed in the center and the planets formed in a thin disk orbiting around it. How were planets formed? How are planets formed, in simple terms? How did our planet first form? Over millions of years the dust clustered into grains, then lumps, boulders and eventually planetesimals – chunks of rock big enough to have their own gravitational field. Some of these planetesimals became the embryonic forms of the planets in our solar system today. … Earth found its path as the third planet from the sun. Why do we have planets? Are planets still forming? Planets are thought to form in a disc of dust and gas, also known as a protoplanetary disc, surrounding a host star. Theoretical models suggest planets should begin to take shape while the host star is still growing – but until now we have only seen active evidence of planet formation … How was the Sun created? The Sun formed about 4.6 billion years ago in a giant spinning cloud of gas and dust called the solar nebula. As the nebula collapsed under its own gravity, it spun faster and flattened into a disk. … (The rest of the leftover gas and dust was blown away by the young Sun’s early solar wind.) Who created planets? …
|PLANET||DIST. FROM SUN (A.U.)||ORBIT PERIOD (EARTH-YEARS)|
|Neptune||30.11||164.79|
|Pluto||39.44||248.5|
Can a star turn into a planet? How do planets form (NASA)? Planets emerge from the dense disk of gas and dust encircling young stars. … They emerge from the giant donut-shaped disk of gas and dust that circles young stars. Gravity and other forces cause material within the disk to collide. If the collision is gentle enough, the material fuses, growing like a rolling snowball. How did Earth get its name?
The name Earth is an English/German name which simply means the ground. … It comes from the Old English words ‘eor(th)e’ and ‘ertha’. In German it is ‘erde’. What did Earth first look like? In Earth’s beginning, the planet was unrecognizable from its modern form. At first it was extremely hot, to the point that the planet likely consisted almost entirely of molten magma. Over the course of a few hundred million years the planet began to cool and oceans of liquid water formed. How long ago did life arise on Earth? What does the Bible say about planets? Except for Earth, Venus and Saturn are the only planets expressly mentioned in the Old Testament. Isaiah 14:12 is about one Helel ben Shahar, called the King of Babylon in the text. Helel (“morning star, son of the dawn”) is translated as Lucifer in the Vulgate Bible, but its meaning is uncertain. How old is the earth? 4.543 billion years. Which planets do not have their own light? Planets do not have their own light and heat. A planet is an astronomical body orbiting a star or stellar remnant that is massive enough to be rounded by its own gravity but not massive enough to cause thermonuclear fusion. The inner rocky planets are Mercury, Venus, Earth and Mars. Do all suns have planets? Most stars have planets, but exactly what proportion of stars have planets is uncertain because not all planets can yet be detected. That said, it has been calculated that there is at least one planet on average per star. One in five Sun-like stars is expected to have an “Earth-sized” planet in the habitable zone. Did the sun create the planets? The Sun and the planets formed together 4.6 billion years ago from a cloud of gas and dust called the solar nebula. A shock wave from a nearby supernova explosion probably initiated the collapse of the solar nebula. The Sun formed in the center and the planets formed in a thin disk orbiting around it. How are stars born?
Stars are born within clouds of dust and gas scattered throughout most galaxies. … Turbulence deep within these clouds gives rise to knots with sufficient mass that the gas and dust can begin to collapse under their own gravitational attraction. As the cloud collapses, the material at the center begins to heat up. Can we survive without the sun? How hot is it in space? The average temperature of outer space around the Earth is a balmy 283.32 kelvins (10.17 degrees Celsius or 50.3 degrees Fahrenheit). This is obviously a far cry from more distant space’s 3 kelvins above absolute zero. How was the Moon formed? Why is Earth habitable? What makes the Earth habitable? It is the right distance from the Sun, it is protected from harmful solar radiation by its magnetic field, it is kept warm by an insulating atmosphere, and it has the right chemical ingredients for life, including water and carbon. Who discovered the Earth? Has Planet Nine been found? Planet Nine is a hypothetical planet in the outer region of the Solar System. … While sky surveys such as the Wide-field Infrared Survey Explorer (WISE) and Pan-STARRS did not detect Planet Nine, they have not ruled out the existence of a Neptune-diameter object in the outer Solar System. Why do stars twinkle? What happens when stars explode? Is Jupiter a failed star? Jupiter is called a failed star because it is made of the same elements (hydrogen and helium) as the Sun, but it is not massive enough to have the internal pressure and temperature necessary to cause hydrogen to fuse to helium, the energy source that powers the Sun and most other stars. What would happen if a new planet formed in the solar system? If a planet formed in our Solar System, it would take over 100 million years. … First, it could start with a gas cloud entering our Solar System. The Sun’s gravitational pull could cause the gas cloud to start spinning rapidly. Eventually dust and rocks would form. What are the 3 things that make a planet?
- It is in orbit around the Sun. - It has sufficient mass to assume hydrostatic equilibrium (a nearly round shape). - It has “cleared the neighborhood” around its orbit. What will happen to the universe in the future? Most observations suggest that the expansion of the universe will continue forever. If so, then a popular theory is that the universe will cool as it expands, eventually becoming too cold to sustain life. What god is Earth named after? Earth is the only planet not named after a Roman god or goddess, but it is associated with the goddess Terra Mater (Gaea to the Greeks). In mythology she was the first goddess on Earth and the mother of Uranus. The name Earth comes from Old English and Germanic. Who named the moon? Earth’s moon, the longest known of all, was given the name “Selene” by the Greeks and “Luna” by the Romans, each a goddess. Who named the Sun? Ancient Greeks called the Sun Helios, and this word is still used to describe the Sun today. During the reign of the Roman Empire, Helios was replaced with the Latin name Sol. Like Helios, Sol is a term that is still used to describe the Sun.
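The orbital figures quoted for Neptune and Pluto above can be cross-checked against Kepler's third law, P² ≈ a³ (period P in Earth years, semi-major axis a in AU). A quick Python sketch, using only the numbers from the table:

```python
# Values from the table above: distance from the Sun (AU) and orbital period (years)
planets = {
    "Neptune": (30.11, 164.79),
    "Pluto": (39.44, 248.5),
}

for name, (a_au, period_yr) in planets.items():
    predicted = a_au ** 1.5  # Kepler's third law: P = a**(3/2)
    rel_error = abs(predicted - period_yr) / period_yr
    print(f"{name}: predicted {predicted:.1f} yr vs quoted {period_yr} yr")
    assert rel_error < 0.01  # the quoted figures agree to better than 1%
```

Both rows pass, which is a useful sanity check that the distances and periods in the table are mutually consistent.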
https://www.microblife.in/how-did-the-planets-form/
Nepal floods: Water, sanitation and hygiene urgently needed to prevent public health crisis, WaterAid warns The recent floods in Nepal, which have killed at least 123 people and affected approximately 11.5 million, risk turning into a public health crisis unless urgent steps are taken to provide those affected with water, sanitation and hygiene, WaterAid said on Thursday. The warning comes after days of relentless rains sparked flash floods and landslides that have affected 27 low-lying districts neighbouring India. The significant damage to water sources and sanitation infrastructure, coupled with power cuts, the washing away of the main east-west highway road network and the closure of local health centres, brings with it the potential for a significant public health crisis. Currently 8% of Nepal’s population are without access to clean water and 54% don’t have decent toilets. Without proper sanitation, heavy rains wash faeces into the water supply, allowing waterborne diseases including cholera to spread quickly. Residents cut off from water supplies are also forced to collect dirty water from overflowing rivers, making the transmission of disease much easier. Tripti Rai, WaterAid Nepal’s Country Director, said: “In order to stop this situation from getting any worse, the government must ensure affected communities are provided with emergency kits, which include water purification tablets and soap. 
In addition, temporary toilets must be erected to reduce the spread of disease, communities must be educated on the importance of handwashing and food safety and oral rehydration salts and zinc tablets should be provided for the treatment of diarrhoea.” “WaterAid Nepal are working alongside affected communities in the Siraha, Bardiya and Kailali districts in the southern belt of Nepal, as are five of our local partners.” “The Department of Health, and the District Water Supply and Sanitation Office have requested our assistance handling the emergency and we have dispatched 90,000 water purification chlorine tablets across Siraha, Bardiya and Nawalparasi. We will of course continue to provide any support that the Government requests during this difficult time”. ENDS For more information, please contact: Jo Lehmann, Acting News Manager on [email protected] or +44 (0)207 793 4909 or Suzy Vickers, PR manager, [email protected] Or call our after-hours press line on +44 (0)7887 521 552 or email [email protected] Notes to Editors: WaterAid WaterAid’s vision is of a world where everyone has access to clean water and sanitation. The international organisation works in 37 countries across Africa, Asia, Latin America and the Pacific Region to transform lives by improving access to clean water, hygiene and sanitation in some of the world’s poorest communities. Since 1981, WaterAid has reached 25 million people with clean water and, since 2004, 24 million people with sanitation. For more information, visit www.wateraid.org, follow @WaterAidUK on Twitter, or visit us on Facebook at www.facebook.com/wateraid. - Some 289,000 children die each year from diarrhoeal diseases caused by dirty water and poor sanitation. That’s almost 800 children each day, or one child every two minutes. 
- An estimated 844 million people (around one in ten) are without clean water. - Nearly 2.3 billion people (around one in three) live without a decent toilet. - For every £1 invested in water and sanitation, an average of £4 is returned in increased productivity. - Just £15 can help provide one person with access to clean water. - For details on how individual countries are keeping their promises on water and sanitation, please see our online database, WASHWatch.org.
https://www.wateraid.org/uk/media/nepal-floods-water-sanitation-and-hygiene-urgently-needed-to-prevent-public-health-crisis
DESCRIPTION AND MEANING: Anna requested an owl tattoo in a subtle and airy style, representing change, transition and rebirth. Many cultures revere the owl as a sacred animal, often seen as the personification of an ancestor and as a guide. permalink: //www.tattootribes.com/index.php?idinfo=6504 The Polynesian Tattoo Handbook by TattooTribes PRACTICAL GUIDES to create meaningful Polynesian tattoos 360+ pages in total, 600+ symbols and their meanings, 200+ dictionary entries to find the right symbols on the fly, samples, drawings and clear explanations!
https://www.tattootribes.com/index.php?idinfo=6504
Since the Curiosity rover arrived at Mars in 2012, it has supplied NASA researchers with vital information about the planet's past, along with some spectacular pictures of the planet's surface. As with its predecessors, the Spirit and Opportunity rovers, many of these images have shown what it looks like to gaze up at the sky from the surface of Mars and witness celestial events. Among the most interesting of these events are the numerous Martian solar eclipses that have occurred since the rover landed. Last month, the Curiosity rover observed two eclipses as the moons Phobos and Deimos each passed in front of the Sun. These latest eclipses will allow researchers to fine-tune their predictions about Mars' satellites and how they orbit the Red Planet. The images were captured by Curiosity's Mast Camera (Mastcam), which has special solar filters that allow it to stare directly at the Sun. The first event, in which Deimos was photographed as it passed in front of the Sun, occurred on March 17th, the 2,350th Martian day of Curiosity's mission (Sol 2350). This was followed by Phobos being photographed as it passed in front of the Sun on March 26th (Sol 2359). In both cases, owing to their small size, these events did not result in a total eclipse of the Sun. Deimos, the smaller of the two moons, measures just 16 km (10 mi) across and orbits Mars farther out than its counterpart, at an average distance of about 23,463 km (14,690 mi). Phobos measures 26 km (16 mi) across and orbits Mars at an average distance of 9,376 km (5,826 mi), but it is likewise too small to obscure the Sun. As a result, the passage of Deimos in front of the Sun was technically a transit, while Phobos' event was more properly classified as an annular eclipse.
In addition, one of Curiosity's Navigation Cameras (Navcams) observed Phobos' shadow on March 25th (Sol 2358) at sunset, which caused the ground around the rover to momentarily darken. The ability of Mars rovers to witness solar eclipses has allowed researchers to learn a great deal about the Martian satellites. Before the Spirit and Opportunity rovers landed in 2004, there was much more uncertainty about the orbit of each moon. In fact, the first time one of the rovers tried to capture Deimos eclipsing the Sun, the moon was 40 km (25 mi) away from where it was expected to be. Being able to witness more of these events not only reveals more about Mars and its moons' orbital dynamics, it also makes Mars more relatable. As Mark Lemmon, an associate professor of atmospheric sciences at Texas A&M University and a co-investigator with Curiosity's Mastcam, explained in a recent NASA news release: "More observations over time help pin down the details of each orbit. Those orbits change all the time in response to the gravitational pull of Mars, Jupiter or even each Martian moon pulling on the other... Eclipses, sunrises and sunsets and weather phenomena all make Mars real to people, as a world both like and unlike what they see outside, not just a subject in a book." In total, the Spirit, Opportunity and Curiosity rovers have collectively seen eight eclipses involving Deimos and forty involving Phobos. While there is still a margin of uncertainty in the orbits of both moons, it is getting smaller with every eclipse observed from the Martian surface. With more robotic missions planned for the surface, such as the Mars 2020 rover mission and an eventual crewed mission, there may come a day when astronomers are able to predict the orbits of Mars' moons with the same precision as they do our Moon's.
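Using the moon sizes and orbital distances quoted above, plus an assumed mean Mars–Sun distance of about 228 million km and an assumed solar diameter of about 1.39 million km (neither figure is from the article), a quick angular-size comparison shows why neither moon can fully cover the Sun:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular diameter, in degrees, of a body seen from afar."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun = angular_diameter_deg(1.39e6, 2.28e8)  # Sun as seen from Mars (assumed values)
phobos = angular_diameter_deg(26, 9_376)    # size and mean orbital distance quoted above
deimos = angular_diameter_deg(16, 23_463)

print(f"Sun: {sun:.3f} deg, Phobos: {phobos:.3f} deg, Deimos: {deimos:.3f} deg")
# Both moons appear smaller than the Sun, so neither can produce a total eclipse
assert phobos < sun and deimos < sun
```

Note that the quoted orbital distances are measured from Mars' centre; from the surface Phobos is closer and appears somewhat larger than this estimate, but it still falls short of the Sun's apparent disk, which is why its passage produces an annular eclipse rather than a total one.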
https://scienceandtechblog.com/index.php/2019/04/11/2-solar-eclipses-seen-from-the-surface-area-of-mars-by-interest/
The invention discloses a movable, portable recycling service station for power batteries for new energy vehicles. The service station comprises a tank body whose inner space is divided into two parts, an operation area (2) and a storage area (1), in a ratio of 1:2. A single open door is arranged on one side of the operation area (2), and a hinged door (17) is arranged on the other side. The storage area (1) is provided with a foldable hinged door (16) on one side only, so that that side of the tank body can be fully opened. A solar cell panel (15) is arranged on the top of the tank body; the collected solar energy is converted into electric energy by equipment such as a solar inverter, and the electric energy is stored in a module that supplies power to the lighting and electronic equipment of the service station. Escalators (14) and blind windows (13) are arranged at the two ends of the tank body, and two warning lights (18) are arranged on the top of the tank body: the yellow warning light indicates that the operation area is being manually operated, and the red warning light indicates danger.
Are you an experienced Case Administrator looking for a new and exciting opportunity? We are looking for an enthusiastic individual who is motivated by providing good customer service, achieving targets, and customer satisfaction to join our Case Management team, based in Colchester, Essex. As a Case Administrator, you will provide administrative support to self-employed contractors by ensuring they have the relevant documents and information they need to undertake their roles, liaising with customers to deliver a high level of customer service, sending and receiving invoices, and ensuring that reports are filed with customers on time. You will follow the company process for each separate case, ensuring that you maintain databases and record information at each stage. Your customer service skills will be used to build relationships with our local authority and family solicitor customers, as well as with your professional colleagues. On a daily basis you will ensure the accurate and timely completion of administration and operational tasks whilst striving to achieve a consistently high level of customer satisfaction through outstanding personal and professional conduct. You will work closely with the Senior Case Manager to achieve maximum profitability from all sales. To succeed in the Case Administrator role, you must be able to work effectively to ensure industry-leading standards of customer care and satisfaction at all times. 
The successful applicant will be motivated to: - ensure each customer interaction is professional, includes accurate detail, and avoids the use of jargon; - ensure customers are notified of changes to any part of the sales agreement; - strive to achieve a consistently high level of customer satisfaction; - develop and maintain professional, positive working relationships with all colleagues, external representatives, and suppliers; - ensure sensitive information is managed appropriately and in accordance with company confidentiality procedures; - communicate progress against agreed objectives to your Line Manager; - ensure that all tasks are completed in a timely manner, in accordance with company policies. We welcome applications for this Case Administrator role from candidates with any of the following skills or attributes: admin, administration, operational, CRM, MS Office, clients, customers, relationship building, client account manager. Benefits - Free car parking - Sales target bonus scheme - Contributory pension scheme - 20 days paid holiday per year (increasing by one day per year, to a maximum of 25 days per year) plus public holidays - Additional day off for birthday Info Case Administrator role: 35 hours a week. Essential Skills - Previous experience in an administrative role - Have a good knowledge of MS Office applications - High standard of verbal and written English - Excellent telephone manner - Exceptional organisation skills and the ability to work to deadlines - Exceptional communication skills, with the ability to build rapport with internal and external customers About Company WillisPalmer is an independent provider of social work and multi-disciplinary services to vulnerable children and families. We have been established for 18 years and have become the UK’s leading provider of multi-disciplinary family assessments, independent social work, forensic risk, psychological assessments, and family support services to local authorities. 
As a social work owned and operated company, we place the needs and safety of children and their families at the centre of everything we do. Our paramount duty is to ensure the best outcomes for vulnerable children.
https://www.covid19jobs.io/job/case-administrator-colchester-united-kingdom/
Hipp!!bones Club 4 Youngsters With Special Needs Gillingham Hipp!!Bones is a charity-run targeted youth club for young people with learning difficulties, disabilities, Autism or additional needs aged 11 to 19 years, although we can continue to accommodate young people up to the age of 25 years. Hipp!!Bones runs from Gillingham Youth and Community Centre from 10am to midday on the 1st and 3rd Saturday of every month all year round, with the exception of New Year's Day. We offer three longer sessions during the school holidays for off-site trips. All sessions provide young people with opportunities to socialise with their peers in a safe, supportive environment and to participate in arts and crafts, cooking, and sports activities; alongside sensory activities, air hockey, pool, and the Nintendo Wii, which are always available. There is a tuck shop available to enable young people to develop money skills and increase independence through making choices. Contact - Telephone - 07983 674719 - [email protected] - Website - http://www.hippbones.co.uk/ Venue - Venue address - Gillingham Youth Centre Cemetery Road Gillingham Dorset - Postcode - SP8 4AZ - Locality - North Dorset Other information - Availability - Weekend All year Daytime - Opening times - Sat: 10:00-12:00 - Cost(s) - £5.00 per Session - Additional Information - We also offer some longer sessions in the school holidays to visit local attractions and access leisure facilities in the local community. All young people have the opportunity to gain accreditation by working towards the Dorset Awards and the Dorset Awards Plus. Hipp!!Bones staff are very knowledgeable and experienced in the fields of youth work and special education and are supported by a team of enthusiastic and dedicated adult and young volunteers. Registered Charity in England & Wales: 1189177. For further information please see website or find us on Facebook (https://www.facebook.com/hippbones). 
- Age Range - 11 years 0 months to 25 years 0 months Accessibility - Wheelchair accessible - The club is wheelchair accessible throughout, with hoisting facilities available. Some activities and family support services listed in our directory may be cancelled, postponed, or running in a different way due to coronavirus restrictions. Please contact providers to check for information about any changes.
https://familyinformationdirectory.dorsetcouncil.gov.uk/kb5/dorset/fsd/service.page?id=XJ7EFXFXD1E&familychannel=0
TECHNICAL FIELD The present disclosure relates to electronic systems for industrial vehicles that collect and present information to a user via a graphical user interface. BACKGROUND ART Wireless strategies are being deployed by business operations, including distributors, retail stores, manufacturers, etc., to improve the efficiency and accuracy of business operations. Wireless strategies may also be deployed by such business operations to avoid the insidious effects of constantly increasing labor and logistics costs. In a typical wireless implementation, workers are linked to a management system executing on a corresponding computer enterprise via mobile wireless transceivers. The wireless transceivers are used as interfaces to the management system to direct workers in their tasks, e.g., by instructing workers where and/or how to pick, pack, put away, move, stage, process or otherwise manipulate the items within a facility. The wireless transceiver may also be used in conjunction with a suitable input device to scan, sense or otherwise read tags, labels or other identifiers to track the movement of designated items within the facility. Patent applications US 2012/0256843 A1 and US 2014/0114530 A1 disclose a digital display mounted on an industrial vehicle. DISCLOSURE OF INVENTION The invention is defined by the appended claims. BRIEF DESCRIPTION OF DRAWINGS FIG. 1 is a block diagram of an industrial vehicle computing enterprise, according to aspects of the present disclosure; FIG. 2 is a block diagram of a special purpose processing device on an industrial vehicle, according to aspects of the present disclosure herein; FIG. 3A is an illustration of the processing device of FIG. 2, implemented as a graphical user interface having a touch screen display and a corresponding vehicle operator control section, according to aspects of the present disclosure; FIG. 3B is a schematic representation of the display of FIG. 3A mounted to an industrial vehicle such as a forklift truck; FIG. 4 is a block diagram of operational components executed by a processor of the special purpose processing device of FIG. 2; FIG. 5 is a schematic diagram illustrating an array of widgets for display on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 6 is a schematic screen shot of a screen display for presentation on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 7 is a schematic screen shot of a screen display with a first message type for presentation on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 8 is a schematic screen shot of a screen display with a second message type for presentation on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 9 is a schematic screen shot of a screen display with a third message type for presentation on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 10 is a schematic screen shot of a screen display with a fourth message type for presentation on the touch screen display of FIG. 3, according to aspects of the present disclosure; FIG. 11 is a schematic illustration of the processing device of FIG. 3 displaying widgets according to aspects of the present disclosure herein; and FIG. 12 is a block diagram of a computer processing system capable of implementing any of the systems, components, or processes described more fully herein. 
BEST MODE FOR CARRYING OUT THE INVENTION According to various aspects of the present disclosure, a processing device having a graphical user interface is provided, which is suitable for use on an industrial vehicle. The processing device includes a housing having a front face, where the front face defines a display section and a vehicle operator control section. A touch screen display is provided within the display section of the front face of the housing. The touch screen display receives touch gesture commands from a vehicle operator. Still further, a set of controls (e.g., designated as up, down, left, right, and select) is arranged within the vehicle operator control section of the front face of the housing. The set of controls includes hardware control equivalents to the gesture commands recognized by the touch screen of the display. A controller is communicably connected to the touch screen display and the set of controls. The controller detects interactions with the touch screen display and the set of controls such that an up swipe gesture command on the touch screen and operation of the up control within the control section both map to the same first graphical user interface command. The first graphical user interface command may comprise, for instance, a command to navigate up one menu option, scroll up in an image that is too large to fit in the area of the display screen, etc. The controller maps the right swipe gesture command on the touch screen and operation of the right control within the control section to the same second graphical user interface command. 
The second graphical user interface command may comprise, for instance, scrolling to the right to expose new widgets, scrolling through a set of option choices within a menu, scrolling to the right of an image that is too large to fit in the area of the display screen, etc. The controller likewise maps the left swipe gesture command on the touch screen and operation of the left control within the control section both to the same third graphical user interface command. The third graphical user interface command may comprise, for instance, scrolling to the left to expose new widgets, scrolling back out of a set of option choices within a menu, scrolling to the left of an image that is too large to fit in the area of the display screen, etc. The controller also maps the down swipe gesture command on the touch screen and operation of the down control within the control section to the same fourth graphical user interface command. The fourth graphical user interface command may comprise, for instance, a command to navigate down one menu option, scroll down in an image that is too large to fit in the area of the display screen, etc. The controller further maps the select gesture command on the touch screen (e.g., touch, press, release, etc.) and operation of the select control within the control section to the same fifth graphical user interface command. The fifth graphical user interface command may be to execute an enter command, select command, acknowledgement command, clear command, etc. This allows industrial vehicle operators to wear gloves or other equipment suitable for (or otherwise required by) the assigned task, without undue interference when interacting with the graphical user interface. 
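The five-way dual mapping described above can be sketched as a pair of lookup tables that resolve either input source to the same shared command. This is a minimal illustrative sketch; the command and event names are assumptions, not identifiers from the disclosure.

```python
from enum import Enum

class GuiCommand(Enum):
    """The five shared graphical user interface commands (names assumed)."""
    NAVIGATE_UP = 1
    SCROLL_RIGHT = 2
    SCROLL_LEFT = 3
    NAVIGATE_DOWN = 4
    SELECT = 5

# Touch gestures and hardware controls map to the same commands, so a
# gloved operator can use whichever input path is most convenient.
GESTURE_MAP = {
    "swipe_up": GuiCommand.NAVIGATE_UP,
    "swipe_right": GuiCommand.SCROLL_RIGHT,
    "swipe_left": GuiCommand.SCROLL_LEFT,
    "swipe_down": GuiCommand.NAVIGATE_DOWN,
    "tap": GuiCommand.SELECT,
}
BUTTON_MAP = {
    "up_button": GuiCommand.NAVIGATE_UP,
    "right_button": GuiCommand.SCROLL_RIGHT,
    "left_button": GuiCommand.SCROLL_LEFT,
    "down_button": GuiCommand.NAVIGATE_DOWN,
    "select_button": GuiCommand.SELECT,
}

def resolve(source: str, event: str) -> GuiCommand:
    """Resolve an input event from either source to the shared GUI command."""
    table = GESTURE_MAP if source == "touch" else BUTTON_MAP
    return table[event]
```

The point of the shared `GuiCommand` target is that downstream display logic never needs to know whether the operator swiped or pressed a button.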
Also, redundant control, e.g., via gesture commands recognized by the touch screen of the display and corresponding controls in the vehicle operator control section, allows the vehicle operator to use whichever option is most convenient for speed, convenience, workflow, etc. According to further aspects of the present disclosure, the graphical user interface enables customization of industrial vehicle operational information, including customization of widgets, messaging, themes, language and other features. Moreover, in an example implementation, the vehicle operator input controls interact with, or control, elements in the graphical user interface as viewed on the display. As such, a vehicle operator can interact with the processing device, e.g., to respond to requests for information; to set up, organize, and customize widgets and other display elements; or otherwise provide feedback to the processing device using the controls of the graphical user interface, the touch screen features of the display, or combinations thereof. The disclosure herein improves the technologies of industrial vehicles, operator-to-machine interaction, operation of an industrial vehicle in a work environment, and effective information push to the operator and information retrieval by the operator. In particular, various aspects of the present disclosure address the technical problem of computer interaction in industrial environments by providing dual controls implemented as touch screen gesture commands and hardware equivalent controls that are collocated adjacent to one another in a common housing. The present disclosure also addresses the technical problem of efficiently and effectively displaying (and optionally selectively obscuring and revealing) data, including operational factors such as time, industrial vehicle operating conditions and/or state, environmental conditions and/or state, operator conditions and/or state, combinations thereof, etc. 
The processes set out herein are necessarily rooted in computer technology to overcome problems arising with graphical user interfaces in industrial applications. In this regard, the processing devices set out herein are not merely general purpose computer components. Rather, the processing devices are special purpose machines built specifically for industrial vehicles used in dynamic and mobile work environments that can require multiple modes of operator interaction and operation. The technical solutions herein bring about several technical effects, including an ability to seamlessly and dynamically switch between touch gesture commands and hardware counterparts as the job and the specific application dictate. The technical solutions also bring about improved industrial vehicle performance via efficient and effective display of relevant information, including vehicle operating conditions and/or state, environmental conditions and/or state, operator conditions and/or state, combinations thereof, etc. Turning now to the figures, and in particular to FIG. 1, a general diagram of a computer system 100 is illustrated according to various aspects of the present disclosure. The illustrated computer system 100 is a special purpose (particular) system that operates in a manner that enables industrial vehicles to communicate wirelessly across a computer enterprise. The computer system 100 comprises a plurality of hardware processing devices (designated generally by the reference 102) that are linked together by one or more network(s) (designated generally by the reference 104). 
The network(s) 104, e.g., wired or wireless networks, provide communications links between the various processing devices 102 and may be supported by networking components 106 that interconnect the processing devices 102, including, for example, routers, hubs, firewalls, network interfaces, wired or wireless communications links and corresponding interconnections, cellular stations and corresponding cellular conversion technologies (e.g., to convert between cellular and TCP/IP, etc.). A processing device 102 can be any device capable of communicating over the network 104. In certain contexts and roles, a processing device 102 is intended to be mobile (e.g., a hardware-based processing device 102 provided on an industrial vehicle 108 such as a forklift truck, reach truck, stock picker, turret truck, tow tractor, rider pallet truck, walkie stacker truck, etc.). In this regard, industrial vehicles include a processing device 102 that communicates wirelessly to the network 104 to carry out the features described herein. Under such circumstances, the industrial vehicles 108 can wirelessly communicate through one or more access points 110 to a corresponding networking component 106. Also, the industrial vehicles 108 can be equipped with WiFi, cellular or other suitable technology that allows the processing device 102 on the industrial vehicle 108 to communicate directly with a remote device (e.g., over the networks 104). The illustrative computer system 100 also includes a hardware server 112 (e.g., a web server, file server, and/or other processing device) that supports an analysis engine 114 and corresponding data sources (collectively identified as data sources 116). The analysis engine 114 and data sources 116 provide resources to the processing devices 102 installed on industrial vehicles 108. Referring to FIG. 2, a processing device 202 is provided on an industrial vehicle 108. 
The processing device 202 is equivalent to, and an example embodiment of, the processing device 102 on the industrial vehicle 108 in FIG. 1. Here, the processing device 202 is a special purpose, particular hardware computer, such as a device that mounts to or is otherwise integrated with the industrial vehicle 108. The processing device 202 includes a processor coupled to memory to carry out instructions. However, the execution environment of the processing device 202 is further tied into the native electronics of the industrial vehicle 108, making it a particular machine different from a general purpose computer. The illustrated processing device 202 is implemented as an information linking device that comprises the necessary circuitry to implement communication with a remote server, data and information processing for processing vehicle data, and wired (and optionally wireless) communication to components of the corresponding industrial vehicle 108 to which the processing device 202 is mounted. According to aspects of the present disclosure, the processing device 202 is implemented as a main component 204 and a service component 206, which couple together to create an integrated device. The service component 206 is field-replaceable and includes a display (e.g., an LCD), a set of user input controls (e.g., a touch screen, buttons, switches, encoders, etc.), and any necessary data processing circuitry. In this regard, the service component 206 provides a graphical user interface to the processing device 202. Referring briefly to FIG. 3A, a graphical user interface 302 is illustrated as an example implementation of the graphical user interface component 206 (FIG. 2). The graphical user interface 302 includes a housing 304 having a front face 306 defining a display section 308 and a vehicle operator control section 310. A touch screen display can be provided within the display section 308 of the front face 306 of the housing 304. 
Also, a set of controls is arranged within the vehicle operator control section 310 of the front face 306 of the housing 304. For instance, a display 308A within the display section 308 can include a liquid crystal display (LCD) screen, a light emitting diode (LED) screen, a plasma screen, etc. Moreover, the display 308A can include the appropriate technology to implement a touch screen so as to respond to gesture controls implemented by touching the screen, pressing against or releasing from the screen, swiping across the screen, performing other gesture functions associated with the display, etc. (collectively referred to herein as gesture commands). As such, the display can be a touch screen display 308A. The vehicle operator control section 310 can include buttons, switches, sliders, encoders, knobs, voice recognition, a keypad, other forms of receiving vehicle operator input, combinations thereof, etc. For instance, in an example implementation, the set of controls 310 is collocated with the touch screen display 308A, e.g., aligned to the right side of the touch screen display 308A as buttons arranged vertically, while still providing a navigational pattern logic to the set of controls 310. As illustrated, the vehicle operator control section 310 includes an up control (first control), i.e., an up direction button 310A; a right control (second control), i.e., a right direction button 310B; a left control (third control), i.e., a left direction button 310C; a down control (fourth control), i.e., a down direction button 310D; and a select control (fifth control), i.e., a select/enter button 310E. Alternatively, or in addition to the above, the vehicle operator control section 310 can include additional input devices, such as an optional rotary encoder 310F. 
In an alternative embodiment, one or more of the buttons (e.g., the buttons 310B, 310C) can be replaced by the rotary encoder 310F or other suitable control element. For instance, a first control (e.g., the right control) is implemented by rotating the encoder in a first direction (e.g., to the right), and the second control (the left control) is implemented by rotating the encoder in a second direction (e.g., to the left) opposite the first direction. The user input controls interact with, or control, elements in the graphical user interface as viewed on the display. As such, an operator of the industrial vehicle can interact with the processing device 202, e.g., to respond to requests for information; to set up, organize, and customize widgets and other display elements; or otherwise provide feedback to the processing device 202. Referring to FIG. 3B, the graphical user interface 302 of a corresponding processing device 202 is illustrated mounted to a support bar 352 of an industrial vehicle 108, implemented as a sit-down forklift for convenience of illustration. In practice, the graphical user interface 302 can be mounted in, on, to, integrated with, or otherwise provided for various types of industrial vehicles, including but not limited to a forklift truck, reach truck, stock picker, turret truck, tow tractor, rider pallet truck, walkie stacker truck, etc. Here, the housing of the graphical user interface 302 can be mounted to the inside or outside of the corresponding industrial vehicle. In this regard, the size, shape, and other physical characteristics can vary depending upon the application. For instance, if the housing of the graphical user interface 302 is mounted inside an industrial vehicle, the front face can conform to neighboring structures, e.g., an instrument/control cluster, etc. If mounted to the industrial vehicle, e.g., to the support bar 352, mounts, a harness, and other supporting structures may be provided. 
As such, the processing device herein is adaptable to multiple mounting options for different industrial vehicle types and modes (with or without operator, i.e., operator controlled, semi-automated, fully automated, etc.). In certain implementations, regardless of the form factor of the housing, the processing device maintains a constant relationship between the display 308A and the vehicle operator control section 310, independent of the front face shape or housing size. Referring back to FIG. 2, in an illustrative example, the processing device 202 is connected to a transceiver 212 for wireless communication. Although a single transceiver 212 is illustrated for convenience, in practice, one or more wireless communication technologies may be provided (e.g., WiFi, Bluetooth, and cellular). For instance, the transceiver 212 may be able to communicate with a remote server, e.g., the server 112 of FIG. 1, via 802.11 across the access points 110 of FIG. 1. The transceiver 212 may also optionally support other wireless communication, such as cellular, Bluetooth, radio frequency (RF), infrared (IR) or any other technology or combination of technologies. For instance, using a cellular to IP bridge, the transceiver 212 may be able to use a cellular signal to communicate directly with a remote server, e.g., a manufacturer server. The transceiver 212 connects to the processing device 202 via a suitable electrical connection 214, e.g., an Ethernet connection. However, the transceiver 212 can connect to the processing device 202 using other connections. Alternatively, the transceiver 212 can be built in or otherwise integral with the processing device 202. The processing device 202 also comprises data processing circuitry that implements a controller, e.g., illustrated as a controller 216. The controller 216 includes a processor coupled to memory for implementing computer instructions, including the relevant processes, or aspects thereof, as set out and described more fully herein. The controller 216 can also include other necessary processing circuitry and software, such as for implementing a display engine, camera processing engine, data processing engine(s), etc. In this regard, the controller 216 can include additional support circuitry, e.g., video ports, camera ports, input/output ports, etc. Moreover, the memory can include memory that stores processing instructions, as well as memory for data storage, e.g., to implement one or more databases, data stores, registers, arrays, etc. Additionally, the controller 216 implements processes such as operator log on, pre-use inspection checklists, data monitoring and other features, examples of which are described more fully in U.S. Pat. No. 8,060,400 to Wellman, the entirety of which is hereby incorporated by reference herein. The processing device 202 can also optionally include vehicle power enabling circuitry 218 that is controlled by the controller 216 to selectively enable or disable the industrial vehicle 108 and/or selectively enable or disable select components of the industrial vehicle 108. In certain implementations, the controller 216 controls the vehicle power enabling circuitry 218 to partially enable the industrial vehicle for operation, or fully enable the industrial vehicle for operation, e.g., depending upon proper operator login. For instance, the industrial vehicle power enabling circuitry 218 can provide selective power to components via a suitable power connection, or otherwise command certain vehicle components not to respond to vehicle operator control via vehicle messaging, e.g., across one or more vehicle communication busses. 
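The login-gated partial/full enabling behavior described above might be modeled roughly as follows. This is a sketch under stated assumptions: the subsystem names and the pre-use-checklist condition for full enablement are illustrative, not claimed behavior of circuitry 218.

```python
def power_enable_state(operator_logged_in: bool,
                       pre_use_checklist_done: bool) -> set:
    """Return the set of vehicle subsystems the enabling circuitry powers.

    Policy (assumed for illustration): no login -> vehicle disabled;
    login without a completed pre-use inspection checklist -> partial
    enable; login plus completed checklist -> full enable.
    """
    if not operator_logged_in:
        return set()                      # vehicle disabled
    if not pre_use_checklist_done:
        return {"display", "lights"}      # partial enable only
    return {"display", "lights", "traction", "hydraulics"}  # full enable
```

In a real controller, the returned set would drive selective power connections or vehicle-bus messages telling components to ignore operator input.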
In certain implementations, the processing device 202 includes a monitoring input/output (I/O) component 220 to communicate via wired or wireless connection between peripheral devices mounted to or otherwise on the industrial vehicle, such as cameras, sensors, meters, encoders, switches, etc. (collectively represented by reference numeral 222) and the controller 216. The monitoring input/output (I/O) component 220 may also be connected to other devices, e.g., third party devices 224 such as RFID scanners, displays, meters, bar code scanners, cameras, or other devices to convey information to the controller 216. The processing device 202 is coupled to and/or communicates with other industrial vehicle system components via a suitable industrial vehicle network system 226, e.g., at least one vehicle network bus. The industrial vehicle network system 226 is any wired or wireless network, bus or other communications capability (or combination of multiple, independent networks, busses or other communications capabilities) that allows electronic components of the industrial vehicle 108 to communicate with each other. As an example, the industrial vehicle network system 226 may comprise a controller area network (CAN) bus, ZigBee, Bluetooth, Local Interconnect Network (LIN), time-triggered data-bus protocol (TTP), RS422 bus, Ethernet, universal serial bus (USB), other suitable communication strategy, or combinations thereof. For instance, the controller 216 can communicate with native vehicle electronic components such as controllers (hydraulic, traction, etc.), modules such as a battery monitor, devices such as impact sensors, etc. (collectively 228). The controller 216 of the processing device 202 can also communicate with a fob 230 (or keypad, card reader or any other device) for receiving operator log in identification. 
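As one concrete illustration of the bus messaging described above (e.g., a battery monitor broadcasting over a CAN bus), the sketch below packs and unpacks a hypothetical 8-byte frame payload. The frame ID, field layout, and scale factors are invented for illustration and do not come from the disclosure.

```python
import struct

# Hypothetical CAN frame for a battery-monitor broadcast: an arbitration
# ID plus an 8-byte big-endian payload of scaled integers:
#   uint16 state of charge (0.1 % units), int16 current (0.1 A units),
#   uint16 voltage (0.01 V units), uint16 temperature (0.1 K units).
BATTERY_STATUS_ID = 0x18F  # illustrative arbitration ID


def encode_battery_status(soc_pct, current_a, voltage_v, temp_k):
    """Pack engineering units into the scaled 8-byte payload."""
    return struct.pack(
        ">HhHH",
        round(soc_pct * 10),
        round(current_a * 10),
        round(voltage_v * 100),
        round(temp_k * 10),
    )


def decode_battery_status(payload):
    """Unpack the payload back into engineering units."""
    soc, cur, volt, temp = struct.unpack(">HhHH", payload)
    return soc / 10, cur / 10, volt / 100, temp / 10
```

Fixed-point scaling like this is the usual way physical quantities are carried in 8-byte CAN frames; the controller 216 would decode such payloads before presenting values on the display.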
According to yet further aspects of the present disclosure, the processing device 202 can communicate with an environmental based location tracking device 232 that is provided on the industrial vehicle 108. The environmental based location tracking device 232 enables the industrial vehicle 108 to be spatially aware of its location within a local space, e.g., within a warehouse. As will be described more fully herein, utilization of the industrial vehicle network system 226 enables seamless integration of the components of the industrial vehicle 108 with the processing device 202, and in particular, the controller 216. By way of example, the industrial vehicle network system 226 enables communication between the controller 216 and the native electronics, including a vehicle control module, controllers (e.g., traction controller, hydraulics controller, etc.), vehicle specific modules and other electronic devices 228 of the industrial vehicle 108, a fob reader 230, environmental based location tracking 232, etc. Also, the controller 216 can facilitate the communication of information from any electronic peripheral devices 222 or third party devices 224 associated with the industrial vehicle 108 (e.g., via the monitoring I/O 220 bridging data to other vehicle resources) that integrate with and can communicate over the network system 226. Thus, for example, the processing device 202 connects with, understands, and is capable of communication with native vehicle components, such as controllers, modules, devices, bus enabled sensors, displays, lights, light bars, sound generating devices, headsets, microphones, haptic devices, etc. (collectively referred to by reference numeral 228). Referring generally to FIGS. 2, 3A and 3B, the controller 216 includes a hardware processor coupled to physical memory and is capable of carrying out computer-executed processes in a hardware system. 
In this regard, the processes, architectures and organizations described herein may be implemented on computer-readable hardware that stores machine-executable program code, where the program code instructs the processor to implement the described features. The processor of the controller 216 executes the program code stored in the memory to implement a graphical user interface control architecture that passes information to, and receives information from the graphical user interface 302. FIG. 2 FIG. 2 In particular, the controller 216 provides several distinct control functions, which all interact with, and impact the manner in which the graphical user interface module 206 () presents and receives information via the display 308A when interacting with a vehicle operator. The features set out in simplified block diagram form, are executed by the controller 216 (). FIG. 4 FIG. 2 FIG. 2 FIG. 3A FIG. 3A FIG. 3A FIG. 3A FIG. 3A Referring to , a control architecture 400 for a graphical user interface is illustrated. The control architecture 400 is executed in an illustrative example, by the controller 216 of (microprocessor coupled to memory), and includes a graphical user interface (GUI) controller component 402 that controls a plurality of sub-algorithms (components) that affect the manner in which the processing device 202 interacts with the vehicle operator. In this regard, the GUI controller component 402 communicates with each sub-algorithm/component and further communicates with the graphical user interface module 206 () to present information to the vehicle operator via the display 308A (), and to receive information from the vehicle operator, e.g., via touch/gesture controls received through touching the display 308A () and/or interacting with a control within the vehicle operator control section 310 () of the graphical user interface 302 (). 
In this regard, the GUI controller component 402 is also communicably connected to the touch screen display 308A and the set of controls 310 in the control section, as described with reference to FIG. 3. In an example implementation, the GUI controller component 402 is operatively programmed to receive and process gesture commands from a vehicle operator touching or otherwise interacting with the display 308A (FIG. 3), such as via touch, press, release, swipe, scroll, gesture proximate to, etc. Received gesture commands can include a first touch gesture command implemented as an up swipe gesture command, a second touch gesture command implemented as a right swipe gesture command, a third touch gesture command implemented as a left swipe gesture command, a fourth touch gesture command implemented as a down swipe gesture command, and a fifth touch gesture command implemented as a select gesture command. Likewise, the GUI controller component 402 is operatively programmed to receive and process vehicle operator inputs from the vehicle operator control section 310 (FIG. 3) of the graphical user interface. The set of controls includes hardware control equivalents to the gesture commands recognized by the touch screen of the display. For instance, the GUI controller component 402 maps and/or processes operation of a first control designated as an up control (e.g., via a vehicle operator pressing the up direction button 310A of FIG. 3), and the up swipe gesture command on the touch screen display, to trigger a first graphical user interface command. That is, an up swipe gesture command implemented on the touch screen display and operation of a first control in the set of controls designated as an up control both map to a first graphical user interface command.
The GUI controller component 402 maps and/or processes operation of a second control designated as a right control (e.g., via a vehicle operator pressing the right direction button 310B of FIG. 3), and the right swipe gesture command, to trigger a second graphical user interface command. That is, a right swipe gesture command implemented on the touch screen display and operation of a second control in the set of controls designated as a right control both map to a second graphical user interface command. The GUI controller component 402 maps and/or processes a third control designated as a left control (e.g., via a vehicle operator pressing the left direction button 310C of FIG. 3), and the left swipe gesture command, to trigger a third graphical user interface command. That is, a left swipe gesture command implemented on the touch screen display and operation of a third control in the set of controls designated as a left control both map to a third graphical user interface command. The GUI controller component 402 maps and/or processes a fourth control designated as a down control (e.g., via a vehicle operator pressing the down direction button 310D of FIG. 3) and the down swipe gesture command, to trigger a fourth graphical user interface command. That is, a down swipe gesture command implemented on the touch screen display and operation of a fourth control in the set of controls designated as a down control both map to a fourth graphical user interface command. The GUI controller component 402 maps and/or processes a fifth control designated as a select control (e.g., via a vehicle operator pressing the select/enter button 310E of FIG. 3) and the select gesture command to trigger a fifth graphical user interface command. That is, a select gesture command implemented on the touch screen display and operation of a fifth control in the set of controls designated as a select control both map to a fifth graphical user interface command.
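The five-way mapping above, in which each touch gesture and its paired hardware control resolve to the same abstract graphical user interface command, can be sketched as follows. This is a minimal illustration with assumed input-event names; the disclosure describes the behavior, not an implementation.

```python
# Illustrative sketch (hypothetical names): a swipe gesture and its
# hardware-button equivalent both resolve to one shared GUI command.
from enum import Enum

class GuiCommand(Enum):
    UP = 1
    RIGHT = 2
    LEFT = 3
    DOWN = 4
    SELECT = 5

# Each gesture and its paired control map to the same command,
# mirroring the first-through-fifth mappings described in the text.
INPUT_MAP = {
    "swipe_up": GuiCommand.UP,       "button_up": GuiCommand.UP,
    "swipe_right": GuiCommand.RIGHT, "button_right": GuiCommand.RIGHT,
    "swipe_left": GuiCommand.LEFT,   "button_left": GuiCommand.LEFT,
    "swipe_down": GuiCommand.DOWN,   "button_down": GuiCommand.DOWN,
    "tap_select": GuiCommand.SELECT, "button_select": GuiCommand.SELECT,
}

def resolve(input_event: str) -> GuiCommand:
    """Translate a raw input event into the shared GUI command."""
    return INPUT_MAP[input_event]
```

Because both input paths converge on one command before reaching the rest of the architecture, downstream components need not know whether the operator used the touch screen or the buttons.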
As noted in greater detail herein, the various controls (first through fifth) can be implemented on a single input device, e.g., a keypad, or via separate controls, e.g., discrete buttons, one or more encoders, etc. The first graphical user interface command may comprise, for instance, a command to navigate a display of information provided in the display 308A of the graphical user interface vertically, e.g., navigate up one menu option or information screen, scroll up in an image that is too large to fit in the area of the display screen, to increment a value that the vehicle operator must provide as an input while interacting with the display screen, etc. The second graphical user interface command may comprise, for instance, a command to navigate a display of information provided in the display 308A of the graphical user interface horizontally/laterally, e.g., to scroll across widgets (e.g., to expose new widgets or reveal hidden widgets), to drill across menus or menu option choices, navigate right one menu option, scroll to the right in an image that is too large to fit in the area of the display screen, to index or otherwise modify a data entry value that the vehicle operator must provide as an input while interacting with the display screen, etc. The third graphical user interface command may comprise, for instance, a command to navigate a display of information provided in the display 308A of the graphical user interface horizontally/laterally, e.g., to scroll across widgets (e.g., to expose new widgets or reveal hidden widgets), to drill across menus or menu option choices, navigate left one menu option, scroll to the left in an image that is too large to fit in the area of the display screen, to index or otherwise modify a data entry value that the vehicle operator must provide as an input while interacting with the display screen, etc. 
The fourth graphical user interface command may comprise, for instance, a command to navigate a display of information provided in the display 308A of the graphical user interface vertically, e.g., navigate down one menu option or information screen, scroll down in an image that is too large to fit in the area of the display screen, to decrement a value that the vehicle operator must provide as an input while interacting with the display screen, etc. The fifth graphical user interface command enables the operator to select a menu option, enter a response, accept a value, trigger an action, clear a message, set or stop a timer, execute an enter command, execute a select command, enter an acknowledgement command, or otherwise interact with the information displayed in the graphical user interface or in an in-focus portion thereof. The first, second, third, fourth, and fifth graphical user interface commands will thus vary in function depending upon what is currently being displayed on the display 308A, examples of which are described in greater detail herein. However, the redundancy of the gesture controls generated by touching or otherwise interacting with the display 308A and using the corresponding controls (e.g., buttons 310A-310E) in the vehicle operator control section 310 facilitates operation of the processing device 202, even in harsh environments. For instance, some forklift operators wear gloves, such as where operating in refrigerated areas of a warehouse. Moreover, the positioning of the buttons in close proximity to (on the same housing as) the display 308A facilitates operator interaction by keeping the operator consistently focused in a common area regardless of interaction with the screen or tactile controls (e.g., buttons) when interacting with the graphical user interface 302. Thus, in this configuration, the buttons are co-located with the touch screen of the display.
The GUI controller component 402 also facilitates customization of the user interaction experience. For instance, the GUI controller component 402 interacts with a user management component 404. The user management component 404 is responsible for storing personalized settings that are passed from the controller 216, such as information read in response to an operator logging into a corresponding industrial vehicle via the FOB reader 230 of FIG. 2, or via logging onto the vehicle using the graphical user interface 302. In an illustrative example, the user management component 404 is used to store a vehicle operator performance tuning level, a theme preference, a language preference, unit measurement preference (e.g., metric or English), widget arrangement, etc. A generic template can be provided where there is no customization data available for a specific vehicle operator. The themes may be set or limited based upon a vehicle operator level, truck level, company level, etc. Moreover, the user-selected theme can be temporarily overridden for certain vehicle specific functionality, e.g., to provide an inspection checklist, to provide certain diagnostic information, etc. For instance, a system supervisor may decide not to allow theme customization, and can thus lock out that feature from appearing as a user settable parameter. The GUI controller component 402 also communicates with a system management component 406. The system management component 406 can be utilized to control the allowable operator-specific settings, e.g., by limiting, disabling, enabling, etc., features. For instance, in an example implementation, the system management component 406 controls the ability of the vehicle operator to configure themes, language preference, widget arrangement, widget customization, etc. The GUI controller component 402 still further communicates with a truck management component 408.
The truck management component 408 controls information about the specific industrial vehicle 108 on which the processing device 202 is installed. For instance, the truck management component 408 can include information about the maximum fork height, maximum weight, battery charge, or other vehicle specific characteristics. The GUI controller component 402 still further communicates with a language format component 410, which can be used to set a preferred language for the display of text on the display 308A. In particular, the language format component 410 manages the strings that need to be translated before being pushed to the display 308A, as well as font, text alignment, text direction, and other features that affect readability of the desired information by the vehicle operator. The GUI controller component 402 still further interacts with a communication component 412, which controls the communication of the GUI controller component 402 with other vehicle controllers, modules, devices, sensors, third party devices, etc., as set out in regard to FIG. 2. The GUI controller component 402 further interacts with a message system component 414. The message system component 414 not only controls the messaging that is presented to the vehicle operator, but also the manner in which the messaging is presented to the vehicle operator, as will be described in greater detail herein. The GUI controller component 402 also communicates with a dashboard component 416. The dashboard component 416 controls widgets, the widget order, and the menu systems that are presented on the display 308A (FIG. 3). Moreover, the dashboard component 416 is responsible for screen management, e.g., by storing the current screen, next screen, previous screen, etc. The dashboard component 416 also tracks the menus, calibration, checklists, widget display, messaging, text and video messaging, etc.
The GUI controller component 402 also can communicate with a user I/O component 418 to translate vehicle operator provided inputs into instructions that facilitate the vehicle operator interaction experience when interacting with the graphical user interface 206, 306. For instance, the user I/O component 418 can process the touch gesture commands from a vehicle operator touching the display 308A (FIG. 3), or operation of the controls within the vehicle operator control section 310 (e.g., button presses), as described in greater detail above. According to aspects of the present disclosure, the display (e.g., 308A of FIG. 3) can be utilized to display one or more "widgets". Each widget can represent operating conditions and/or state, environmental conditions and/or state, operator conditions and/or state, combinations thereof, etc. For instance, a widget can present the current state of a feature of the industrial vehicle (e.g., battery charge), the current state of a vehicle function or operation (e.g., current vehicle speed), or the current state of some ancillary condition (e.g., environmental condition such as the current time). For instance, in an example implementation, widgets are used to represent the current state of the vehicle speed, fork height, load weight, battery charge, clock, stop watch, odometer, trip meter, hour meter, time, date, etc. In this regard, the widgets represent "live" data. The current state of data values can be obtained, for instance, by the processor of the controller 216 communicating with either a master state data repository on the industrial vehicle 108 (e.g., where the current state of monitored functions is aggregated into a common location), or by communicating (e.g., querying, polling, reading from, etc.) with specific vehicle control modules, sensors, etc., across the vehicle network system 226, via the monitoring I/O 220, or a combination thereof.
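The two data paths described above — an aggregated master state repository, with a fallback to querying individual modules or a remote server — can be sketched as follows. Function and key names are assumptions for illustration; the disclosure does not prescribe this structure.

```python
# Illustrative sketch (hypothetical names): resolve a widget's "live"
# value from an aggregated master state repository when present,
# otherwise fall back to querying the relevant module or server.
def read_widget_value(key, state_repo, query_fallback):
    """Return the current value backing a widget."""
    if key in state_repo:        # master state data repository on the vehicle
        return state_repo[key]
    return query_fallback(key)   # e.g., a bus query or remote-server poll
```

Preferring the repository keeps widget refreshes cheap, while the fallback covers values that are not aggregated.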
Also, the current state data can be ascertained by polling or otherwise querying a remote server, e.g., the server 112 described with reference to FIG. 1, which extracts relevant data from the industrial vehicle data repository 116 and communicates that relevant data back to the controller 216. A widget can also represent an operator based metric, such as a current state of a task or operation reflecting the level of completion of the task (e.g., percentage of picks per shift), the current state of an operator-based metric (e.g., operator skill level), a level of correct vehicle operation or environmental behaviors, etc. As with the above, the current state of data values can be obtained, for instance, by the processor of the controller 216 communicating with either a master state data repository on the industrial vehicle 108, or by communicating with specific vehicle control modules, sensors, etc., across the vehicle network system 226, via the monitoring I/O 220, or a combination thereof. Also, the current state data can be ascertained by polling or otherwise querying a remote server, e.g., the server 112 described with reference to FIG. 1, as noted above. By way of example, by continually data logging operator-based performance/operation data, e.g., at the server 112 of FIG. 1, widgets can provide a dashboard view of key vehicle and/or operator performance measures. In this regard, the overall data provided in a widget need not be limited to data collected by a specific vehicle. The widget can reflect all of the relevant industrial vehicle data associated with the operator logged into the industrial vehicle, regardless of which industrial vehicle the operator is currently operating.
As noted in greater detail above, the processing device 202 is communicably connected to, or is otherwise integrated with, a communications device (e.g., transceiver 212) such that the processing device receives, from a remote server (e.g., server 112), information that is displayed in a widget space that includes information not extracted from the vehicle 108. Widgets can also be utilized to present industrial vehicle operator performance feedback to the vehicle operator. For instance, a widget can show incorrect vs. correct defined warehouse operational behaviors, e.g., proper use of stop and horn at the end of aisles, proper use of speed zones, etc. By way of illustration, a widget can count the frequency and/or duration of correct behavior events in-between incorrect behavior events. In this regard, information such as skill level, current streak, progress meter, badges, etc., can be integrated into a "grouped" widget, or the displayed information can be simplified to a single metric. According to yet further aspects of the present disclosure herein, widgets can tie into third party databases to display other information, such as operational information, messages, information from a warehouse management system, feeds (such as from news, sports, weather, etc.). Referring to FIG. 5, widgets are organized into an array 500. The array 500 dictates which widgets will be presented, and in which order the widgets will be presented. For instance, a first widget 502(1) is set to the leftmost widget, followed by widgets 502(2), 502(3) ... 502(n), where n is any reasonable number. The vehicle operator can add as many widgets as are available, or as are limited via preferences set in the system management component 406 (FIG. 4). Moreover, the vehicle operator can rearrange the order of presentation of the widgets so that the widgets are appropriately ordered.
The first two widgets can be used to set a "Home Screen" that the device can quickly jump to, e.g., to normally display the two most important measures to the vehicle operator. Yet further, the widgets can be configured and ordered from the screen itself, e.g., by the vehicle operator, or the widgets can be set/preset by a manager or via a remote computer, which wirelessly sends the widgets and widget order to the vehicle, such as through the remote server 112 of FIG. 1. Referring to FIG. 6, an example display screen 600 is illustrated. The display screen 600 is an example of a graphical user interface display as presented by the display 308A (FIG. 3). The display screen 600 is conceptually broken down into a menu selection section 602, a first docked status tray 604A, an optional second docked status tray 604B, and a widget space, e.g., illustrated as a first widget space 606 and a second widget space 608 in this example. In practice, the display screen size may dictate the number of widget spaces available. As such, aspects herein are not limited to two widgets per screen. The menu selection section 602 can be used to access a drop down set of menus, e.g., to set up/configure the widgets, to set preferences, etc. The first docked status tray 604A is used to dock a limited number (e.g., 1 to 6) of specific system status icons that are frozen on the screen regardless of which widgets are displayed. The current status of each vehicle feature associated with a docked icon is presented. Thus, necessary maintenance required, operator performance tuning level, battery charge, time, a mailbox, a home button, etc., can be frozen so that the vehicle operator can normally see the current state of each docked feature and access these features regardless of which widgets are being displayed.
The second, optional docked status tray 604B can be used to display an identifier, e.g., the vehicle operator name or name of another person actively logged into the vehicle, truck name, company name, location, etc. Each widget space presents a widget comprising a visual representation of the current state of an associated function. In the illustrative example, two adjacent widgets are displayed, e.g., according to the order set by the array 500. Thus, if widget N-3 is displayed in the first widget space 606, then widget N-2 is displayed in the second widget space 608. Scrolling the widgets over to the right would shift widget N-3 into the second widget space 608 and would shift a new widget N-4 into the first widget space 606. Likewise, scrolling the widgets over to the left would shift the widget N-2 into the first widget space 606 and shift widget N-1 into the second widget space 608. This process can continue to scroll through the assigned widgets. At widget 1 and N, the scrolling can stop or wrap around. An optional widget position indicator 610 can be utilized to indicate the number and position of the displayed widgets. In an example implementation, a vehicle operator can only swipe or key to a different widget when the vehicle is at a full stop. When the travel controls of the vehicle are engaged, the display screen "snaps" back to a designated "home" position, such as the first two widgets. Moreover, the graphical user interface may be configured to have multiple "home" screens, such as to have a home screen for travel and a home screen for lift operations. Each designated home screen displays one or more widgets that are relevant to the task at hand, e.g., travel, lift, etc. For instance, when traveling, the display can automatically change to a motion home screen that shows relevant travel related widgets, e.g., speed and battery charge.
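The widget-array scrolling described above, with two visible widget spaces and scrolling that stops at the ends of the array, can be sketched as follows. This is a minimal illustration with assumed names; the wrap-around variant mentioned in the text would use modular arithmetic instead of clamping.

```python
# Illustrative sketch (hypothetical names): scroll through an ordered
# widget array, showing two adjacent widgets, clamped at the array ends.
def scroll(position, step, n_widgets, window=2):
    """Return the new leftmost visible widget index after a scroll step."""
    return max(0, min(position + step, n_widgets - window))

def visible(position, widgets, window=2):
    """Return the widgets currently shown in the widget spaces."""
    return widgets[position:position + window]
```

A caller would also gate `scroll` on the vehicle being at a full stop, per the behavior described above.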
Similarly, when performing a lift operation, the display can automatically change to a lift home screen that shows relevant widgets, e.g., a capacity data monitor providing lift height, weight, rack selection, informational messages, blending information, etc. As a few illustrative examples, the controller 216 is communicably coupled to vehicle system modules via the vehicle network system 226. As such, the controller extracts from a traction control module 228 (e.g., directly or via a memory/current vehicle state lookup table), an indication as to whether the traction control is engaged. If the current operating state of the traction control module indicates that the traction controls are engaged, the controller causes the display screen to "snap" back to a designated "home" position. Otherwise, the vehicle operator can scroll through the available widgets. As an illustrative implementation, the controller extracts from a traction control module, an indication as to whether the traction control is engaged on the industrial vehicle, and where the current operating state of the traction control module indicates that the traction controls are engaged, the controller causes the display screen to snap to a designated motion home position. Also, the controller extracts from a hydraulic module, an indication as to whether the forks are engaged in a lift operation on the industrial vehicle, and where the current operating state indicates that the forks are engaged in a lift operation, the controller causes the display screen to snap to a designated lift home position. In another example, the controller extracts a speed of the vehicle based upon information read from the vehicle network bus and selectively disables operation of the touch gesture commands (and optionally disables the operator controls in the control area 310) when the industrial vehicle exceeds a predetermined speed.
In yet another example, the controller (e.g., controller 216) extracts a speed of the vehicle based upon information read from the vehicle network bus and selectively disables the display of the touch screen when the industrial vehicle exceeds a predetermined speed. As noted above, the processor of the controller 216 can extract information by communicating across the vehicle network system 226 (or monitoring I/O 220) directly with a vehicle component. Alternatively, the controller can read the current state from a designated memory on the industrial vehicle. For instance, a process on the industrial vehicle (e.g., a process executed by the controller/processor in the controller 216) may be tasked with cyclically collecting and refreshing vehicle state information in the designated memory, e.g., every 100 milliseconds or less. The designated memory thus defines a vehicle state lookup table that can be accessed to make decisions based upon the current operating state of the industrial vehicle. The vehicle state information can be the current state of controllers, sensors, monitors, operator controls, etc. Here, the current state can reflect event codes, component states, component status information, hour meter readings, energy measurements, hydraulic measurements or other relevant data that is associated with a corresponding industrial vehicle. Moreover, this information can be used to create a history of operational data that may vary over time, such as vehicle speed, vehicle temperature, battery state of charge, vehicle proprietary service codes, height, weight and other measurable parameters associated with the load carrying features of the industrial vehicle, operator information, etc. From time to time, it may be necessary to interrupt the presentation of information on the display 308A to provide instructions, warnings, and other suitable messages to the vehicle operator.
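The cyclically refreshed vehicle state lookup table described above can be sketched as follows. The 100 millisecond cycle comes from the text; the class, source names, and use of callables are assumptions, and real code would invoke `refresh()` from a periodic task rather than manually.

```python
# Illustrative sketch (hypothetical names): a lookup table that caches
# the latest reading from each vehicle data source, so consumers read a
# recent snapshot instead of issuing a bus query per read.
import time

class VehicleStateTable:
    """Caches the latest reading from each vehicle data source."""
    def __init__(self, sources):
        self._sources = sources       # name -> callable reading a live value
        self._table = {}
        self.refresh_period_s = 0.1   # refresh every 100 ms or less

    def refresh(self):
        """Collect a fresh snapshot from every source."""
        for name, read in self._sources.items():
            self._table[name] = read()
        self._stamp = time.monotonic()

    def get(self, name):
        """Return the cached 'current' state without a bus query."""
        return self._table[name]
```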
In this regard, the graphical user interface 206 (FIG. 2) can support the display of messages in appropriate circumstances. For instance, messages can be organized into various levels, e.g., information, warning, alert, etc. Moreover, displayed messages can persist for a predetermined amount of time, until a vehicle operator clears the message (by pressing the screen or a button in the control section 310), until overridden by a more important message, or until cleared by the vehicle, e.g., due to excess vehicle speed or other task where the display content is deemed a distraction. Referring to FIG. 7, in a first example, a message bar 702 can be displayed across the bottom of the display screen so as to only partially block the lower portion of each displayed widget. Notably, here, the docked icons in the docked status tray 604 are not obscured. Referring to FIG. 8, in an alternative example, an entire widget space can be temporarily covered up with a message. For instance, in the example implementation, the right-most widget window is temporarily over-ridden with a message. Once the message is cleared, either automatically (it times out) or by the vehicle operator, the underlying widget becomes visible again. Again, here, the docked icons in the docked status tray 604 are not obscured. Referring to FIG. 9, in yet another illustrative example, all viewable widget spaces are temporarily over-ridden with a message. Once the message is cleared, either automatically (it times out) or by the vehicle operator pressing a button, e.g., the select button to acknowledge the message, the underlying widget(s) become visible again. Yet again however, the docked icons in the docked status tray 604 are not obscured. Referring to FIG. 10, yet another example message is displayed. Here, the message temporarily covers the bottom portion of the widget window(s) in a manner analogous to that set out with regard to FIG. 7.
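The message behavior described above — a message persists until it times out, is cleared by the operator, or is overridden by a more important one — can be sketched as follows. The level names follow the text; the class and ordering rule are assumptions for illustration.

```python
# Illustrative sketch (hypothetical names): one display slot in which a
# message persists until cleared or overridden by an equal-or-higher
# priority message.
LEVELS = {"information": 0, "warning": 1, "alert": 2}

class MessageSlot:
    def __init__(self):
        self.current = None            # (level, text) or None

    def post(self, level, text):
        """Display the message unless a higher-priority one is showing."""
        if self.current is None or LEVELS[level] >= LEVELS[self.current[0]]:
            self.current = (level, text)

    def clear(self):
        """Operator acknowledgement, timeout, or vehicle-initiated clear."""
        self.current = None
```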
However, in addition to providing a message, the system requires the vehicle operator to provide feedback, e.g., by selecting an appropriate action (Action 1 icon 1002, Action 2 icon 1004, or Action 3 icon 1006 in this example). More or fewer options can be provided. Moreover, the vehicle operator may be required to provide information, e.g., by filling in a field using a keypad, etc. In general, the messages can be task-based messages. For instance, a maintenance message may instruct the operator to return the industrial vehicle to a service bay. A battery message may instruct the vehicle operator to return the industrial vehicle to a battery charging station, etc. The messaging may also be vehicle operation specific, e.g., a warning that the vehicle operator is traveling too fast, is traveling in or near a restricted area such as a bonded area, traveling too close to the loading dock, etc. Referring briefly to FIG. 11, a graphical user interface 1102 is analogous to the graphical user interface component 206 (FIG. 2) and graphical user interface 302 (FIG. 3). The graphical user interface 1102 includes a housing 1104 having a front face 1106 defining a display section 1108 and a vehicle operator control section 1110. A display 1108A within the display section 1108 (e.g., touch screen) illustrates the display of two widgets, as described with reference to FIGS. 5-10. The vehicle operator control section 1110 is illustrated as including five keys/buttons. For instance, as illustrated, the vehicle operator control section 1110 includes an up direction button 1110A, a right direction button 1110B, a left direction button 1110C, a down direction button 1110D, and an enter button 1110E. According to aspects of the present disclosure, the operating state of the industrial vehicle can be used to selectively modify, lock out, or otherwise change the information viewable on the display.
For instance, as noted in greater detail herein, the GUI controller component 402 (e.g., via the controller 216) can communicate with vehicle components across the vehicle network system 226. As such, in an illustrative implementation, the controller extracts an event, such as a speed of the vehicle, fork height, load weight, load contents, etc., and based upon information read from the vehicle network system, selectively modifies, controls, augments, disables, etc., operation of the touch gesture commands and/or operator controls in the controls section 310 when the industrial vehicle exceeds a predetermined threshold associated with the event. Any number of events and thresholds can be detected and used to modify the performance of the graphical user interface in any desired combination. As yet another example, the controller can extract the speed of the vehicle based upon information read from the vehicle network system, and selectively disable the display of the touch screen, e.g., to hide the widgets, when the industrial vehicle exceeds a predetermined speed. As still another example, the system can disable tap and swipe (as well as key-entered information) unless the vehicle is at a stop. As another example, when the vehicle starts to travel, the dashboard can transition back to the first two widgets, e.g., implementing a return to Home Screen functionality. With reference to the FIGURES generally, the graphical user interface 206 can be used to walk a vehicle operator through a calibration process, e.g., to calibrate one or more vehicle functions, such as to set the time/date, calibrate load sensors, set operation set points, etc. The display screen can present a vehicle operator login prompt.
If the vehicle operator fails to provide a proper operator ID, the controller 402 of the processing device 202 can communicate with the power enabling and conditioning circuitry 218 to temporarily disable all or part of the vehicle (e.g., limit travel speed to a limp mode/crawl), etc. The interaction between the control component 402 and the power enabling and conditioning circuitry 218 also enables a "push to start" feature to start the industrial vehicle for normal operation by pressing a special button graphically displayed in the graphical user interface. The robustness of the graphical user interface, in combination with the integration of the processor to vehicle operator login, and communication with a remote server, facilitates the ability to send and receive electronic messages to the vehicle operator. At appropriate times, e.g., when the industrial vehicle is not moving, the message contents can be directed to the graphical user interface. If an event is detected, such as the industrial vehicle moving, lifting or otherwise working, the messages can be automatically disabled. A message icon can be one of the docked icons in the docked status tray 604. At designated times, e.g., when the industrial vehicle is not moving, the graphical user interface can also be utilized to display rich, graphic intensive information, such as operator's manuals, maintenance repair manuals, schematics for maintenance personnel, etc. Where an image, text, etc. is too large to fit into the viewing area of the display screen, the user can use the controls to navigate up, down, left and right to move parts of the image, text, etc., into the viewable area of the display screen. In an example implementation, the processing device includes a vehicle network bus connection, e.g., a connection to the vehicle network system/bus 226 of the industrial vehicle as best described with reference to FIG. 2.
In an example implementation, the controller is further operatively programmed to extract a speed of the vehicle based upon information read from the vehicle network bus, and selectively disable operation of at least one gesture command when the extracted speed exceeds a predetermined speed threshold. The controller can also disable operation of at least one control (e.g., a button). Moreover, all user inputs (gesture commands and controls) can be disabled, or any combination thereof can be disabled. In another example implementation, the controller is further operatively programmed to extract a speed of the vehicle based upon information read from the vehicle network bus, and selectively disable the display of the touch screen when the extracted speed exceeds a predetermined speed threshold. In yet another example implementation, the touch screen display graphically displays a widget space for displaying a widget comprising a visual representation of a current state of an associated component of the industrial vehicle. Here, the controller is further operatively programmed to extract the current state of the associated component of the industrial vehicle by communicating with at least one electronic component across the vehicle network bus. Moreover, as noted in the examples above, in example configurations, the touch screen display further graphically displays a menu selection section and a docked status tray that graphically displays at least one icon, each icon representing the current state of a vehicle component. Here, the controller is further operatively programmed to periodically extract the current state of each vehicle component associated with an icon in the docked status tray by submitting a query across the vehicle network bus to at least one industrial vehicle electronic component. For instance, the widget space can display the current state of at least one of battery charge, vehicle speed, or fork lift height.
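The speed-based lockout described above can be sketched as follows. The threshold value and function name are assumptions for illustration; the disclosure specifies only that inputs are selectively disabled above a predetermined speed.

```python
# Illustrative sketch (hypothetical names and threshold): gate touch and
# gesture input on the vehicle speed read from the vehicle network bus.
SPEED_THRESHOLD = 3.0   # hypothetical "predetermined speed"

def touch_inputs_enabled(vehicle_speed, threshold=SPEED_THRESHOLD):
    """Gesture commands (and optionally controls) are disabled above threshold."""
    return vehicle_speed <= threshold
```

The same check could instead blank the display rather than merely ignoring input, matching the second example implementation above.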
As another example, the controller can be further operatively programmed to receive information from another component of the industrial vehicle (e.g., a transceiver in communication with a remote server, memory, query engine, etc.), and interact with the touch screen display to cause the widget space to display the current state of at least one of environmental state, and a task-based state reflecting the level of completion of the task. For instance, the controller can interact with a communications device such that the controller receives from a remote server, information that is displayed in the widget space that includes information not extracted from the vehicle. Likewise, the controller can be further operatively programmed to receive information from another component of the industrial vehicle (e.g., the transceiver in communication with a remote server) and interact with the touch screen display to cause the widget space to display the current state of at least one measure of operator performance. As an example, the controller can be operatively programmed to cause the touch screen display to display a count of the frequency and/or duration of correct behavior events in-between incorrect behavior events. In still a further example, the controller can be further operatively programmed to receive information from another component of the industrial vehicle, and interact with the touch screen display to cause the widget space to be temporarily interrupted to display a message across a lower portion of the widget space that is removed after a period of time. In addition and/or alternatively, the controller can be further operatively programmed to receive information from another component of the industrial vehicle, and interact with the touch screen display to cause the widget space to be temporarily interrupted to display a message across the entirety of a single widget space that is removed after a period of time. 
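The periodic widget-state polling described above can be illustrated as a loop over per-component queries. The component names and stand-in query functions below are hypothetical; in the described system each query would be a message sent across the vehicle network bus.

```python
# Sketch of periodic widget-state polling: each widget is backed by a
# query against a vehicle electronic component (stubbed here as lambdas).
components = {
    "battery_charge": lambda: 87,    # percent (stand-in for a bus query)
    "vehicle_speed": lambda: 4.2,    # km/h
    "fork_height": lambda: 1.3,      # meters
}

def poll_widget_states():
    """Query each component and return the state to render in its widget."""
    return {name: query() for name, query in components.items()}

states = poll_widget_states()
assert states["battery_charge"] == 87
assert states["fork_height"] == 1.3
```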
In addition and/or alternatively, the controller can be further operatively programmed to receive information from another component of the industrial vehicle, and interact with the touch screen display to temporarily display a message across the entirety of all displayed widget spaces for a period of time. In yet further example configurations, the controller is operatively programmed to clear the displayed message after any one of the vehicle operator presses a designated area of the touch screen, a button in the vehicle operator control area, or a combination thereof, the message times out, the message is cleared to display a higher priority message, or the message is cleared to clear the display screen based upon a designated operating characteristic of the vehicle. For instance, the controller can be operatively programmed to clear the message from the display screen based upon a designated operating characteristic of the vehicle if the controller detects that the industrial vehicle is traveling in excess of a predetermined speed. In example configurations the controller controls the touch display screen such that the widget space snaps to a first home position displaying at least one designated widget when the vehicle is not stationary. In other example configurations, the controller controls the touch display screen such that the widget space snaps to a first home position displaying at least one designated widget when the vehicle traction control is engaged. As an example, the controller extracts from a traction control module, an indication as to whether the traction control is engaged on the industrial vehicle, and where the current operating state of the traction control module indicates that the traction controls are engaged, the controller causes the display screen to snap to a designated home position. 
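The message-clearing rules above (dismissal by the operator, timeout, preemption by a higher-priority message, or a vehicle event such as excess speed) suggest a priority-ordered message overlay. The following is a minimal sketch under that assumption; the class and priority scheme are illustrative, not taken from the patent.

```python
import heapq
import itertools

class MessageOverlay:
    """Sketch of the message display/clearing rules: the highest-priority
    pending message is shown; clearing (user press, timeout, or vehicle
    event) removes it and reveals the next message, if any."""

    def __init__(self):
        self._heap = []                  # (priority, seq, text); lower = more urgent
        self._seq = itertools.count()    # tie-breaker so texts are never compared

    def show(self, text, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), text))

    def current(self):
        return self._heap[0][2] if self._heap else None

    def clear_current(self):             # user press, timeout, or vehicle event
        if self._heap:
            heapq.heappop(self._heap)

ovl = MessageOverlay()
ovl.show("Battery low", priority=2)
ovl.show("Obstacle detected", priority=1)   # higher priority: displayed first
assert ovl.current() == "Obstacle detected"
ovl.clear_current()
assert ovl.current() == "Battery low"
```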
In yet another example configuration, the controller controls the touch display screen such that the widget space snaps to a second home position displaying at least one designated widget when the vehicle lift control is engaged. As yet another example, the controller extracts from a traction control module an indication as to whether the traction control is engaged on the industrial vehicle, and where the current operating state of the traction control module indicates that the traction controls are engaged, the controller causes the display screen to snap to a designated motion home position. Also, the controller extracts from a hydraulic module an indication as to whether the forks are engaged in a lift operation on the industrial vehicle, and where the current operating state indicates that the forks are engaged in a lift operation, the controller causes the display screen to snap to a designated lift home position. Referring to FIG. 12, a schematic block diagram illustrates an exemplary computer system 1200 for implementing the processing device of FIG. 2, the controller 216 of FIG. 2, the control architecture 400 of FIG. 4, or other processing structures set out more fully herein. The exemplary computer system 1200 includes one or more (hardware) microprocessors (µP) 1210 and corresponding (hardware) memory 1220 (e.g., random access memory and/or read only memory) that are connected to a system bus 1230. Information can be passed between the system bus 1230 and an optional data bus 1240 by a suitable bridge 1250. The data bus 1240 is used to interface peripherals with the one or more microprocessors (µP) 1210, such as storage 1260 (e.g., solid state hard disk drive); removable media storage devices 1270 (e.g., flash drives, etc.); I/O devices 1280 (e.g., the graphical user interface 206, a universal serial bus (USB) interface, etc.); and one or more adapters 1290.
The adapters 1290, where provided, allow the microprocessor 1210 to communicate across one or more of the vehicle network systems (e.g., 226 of FIG. 2). In this regard, example adapters 1290 can include Bluetooth, Ethernet, CAN bus, RS422, LIN Bus, WIFI, cellular, etc. The above list of peripherals is presented by way of illustration, and is not intended to be limiting. Other peripheral devices may be suitably integrated into the computer system 1200. The memory 1220, storage 1260, removable media insertable into the removable media storage 1270 or combinations thereof, can be used to implement the processes, configurations, interfaces and other aspects set out and described herein. The microprocessor(s) 1210 control operation of the exemplary computer system 1200. Moreover, one or more of the microprocessor(s) 1210 execute computer readable code that instructs the microprocessor(s) 1210 to implement the processes herein. The computer readable code may be stored, for instance, in the memory 1220, storage 1260, removable media storage device 1270 or other suitable tangible storage medium accessible by the microprocessor(s) 1210. The memory 1220 can also function as a working memory, e.g., to store data, an operating system, etc. The processes herein may be implemented as a machine-executable process executed on a computer system, e.g., one or more of the processing devices 102 of FIG. 1, on a particular computing device such as the vehicle computer processing device 202 described with reference to FIGS. 2-11, on the control architecture 400 of FIG. 4, on a system 1200 of FIG. 12, or a combination thereof. In this regard, the processes herein may be implemented on a computer-readable storage device (e.g., computer-readable storage hardware) that stores machine-executable program code, where the program code instructs a processor to implement the described method/process.
The processes herein may also be executed by a processor coupled to memory, where the processor is programmed by program code stored in the memory, to perform the described process. Computer program code for carrying out operations for aspects of the present description, not being part of the invention, may be written in any combination of one or more programming languages. The program code may execute entirely on the computer system 1200 or partly on the computer system 1200. In the latter scenario, the remote computer may be connected to the computer system 1200 through any type of network connection, e.g., using the network adapter 1290 of the computer system 1200. In implementing computer aspects of the present disclosure, any combination of computer-readable medium may be utilized. The computer-readable medium may be a computer readable signal medium, a computer-readable storage medium, or a combination thereof. Moreover, a computer-readable storage medium may be implemented in practice as one or more distinct mediums. A computer-readable signal medium is a transitory propagating signal per se. A computer-readable signal medium may include computer readable program code embodied therein, for example, as a propagated data signal in baseband or as part of a carrier wave. However, specifically, a computer-readable signal medium does not encompass a computer-readable storage medium. A computer-readable storage medium is a tangible device/hardware that can retain and store a program (instructions) for use by or in connection with an instruction execution system, apparatus, or device, e.g., a computer or other processing device set out more fully herein. Notably, a computer-readable storage medium does not encompass a computer-readable signal medium per se. Thus, a computer readable storage medium, as used herein, is not to be construed as being transitory signals such as radio waves or other freely propagating electromagnetic waves through a transmission media.
Specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), Flash memory, or any suitable combination of the foregoing. In particular, a computer-readable storage medium includes computer-readable hardware such as a computer-readable storage device, e.g., memory. Here, a computer-readable storage device and computer-readable hardware are physical, tangible implementations that are non-transitory. By non-transitory, it is meant that, unlike a transitory propagating signal per se, which will naturally cease to exist, the contents of the computer-readable storage device or computer-readable hardware that define the claimed subject matter persists until acted upon by an external action. For instance, program code loaded into random access memory (RAM) is deemed non-transitory in that the content will persist until acted upon, e.g., by removing power, by overwriting, deleting, modifying, etc. Moreover, since hardware comprises physical element(s) or component(s) of a corresponding computer system, hardware does not encompass software, per se. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. 
The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed.
The present invention is directed generally to a process for making synthesis gas from which streams of carbon monoxide and methanol can be obtained in approximately stoichiometric proportions suitable for the manufacture of acetic acid, and more particularly to the retrofit of a methanol plant to divert a portion of the syngas from the existing methanol synthesis loop to a carbon monoxide separator and to react the methanol from the methanol synthesis loop with the carbon monoxide from the separator in approximately stoichiometric proportions to directly or indirectly make acetic acid. The manufacture of acetic acid from carbon monoxide and methanol using a carbonylation catalyst is well known in the art. Representative references disclosing this and similar processes include U.S. Pat. No. 1,961,736 to Carlin et al (Tennessee Products); U.S. Pat. No. 3,769,329 to Paulik et al (Monsanto); U.S. Pat. No. 5,155,261 to Marston et al (Reilly Industries); U.S. Pat. No. 5,672,743 to Garland et al (BP Chemicals); U.S. Pat. No. 5,728,871 to Joensen et al (Haldor Topsoe); U.S. Pat. No. 5,773,642 to Denis et al (Acetex Chimie); U.S. Pat. No. 5,817,869 to Hinnenkamp et al (Quantum Chemical Corporation); U.S. Pat. Nos. 5,877,347 and 5,877,348 to Ditzel et al (BP Chemicals); U.S. Pat. No. 5,883,289 to Denis et al (Acetex Chimie); and U.S. Pat. No. 5,883,295 to Sunley et al (BP Chemicals), each of which is hereby incorporated herein by reference. The primary raw materials for acetic acid manufacture are, of course, carbon monoxide and methanol. In the typical acetic acid plant, methanol is imported and carbon monoxide, because of difficulties associated with the transport and storage thereof, is generated in situ, usually by reforming natural gas or another hydrocarbon with steam and/or carbon dioxide. A significant expense for new acetic acid production capacity is the capital cost of the equipment necessary for the carbon monoxide generation.
It would be extremely desirable if this capital cost could be largely eliminated or significantly reduced. Market conditions, from time to time in various localities, can result in relatively low methanol prices (an oversupply) and/or high natural gas prices (a shortage) that can make methanol manufacture unprofitable. Operators of existing methanol manufacturing facilities can be faced with the decision of whether or not to continue the unprofitable manufacture of methanol in the hope that product prices will eventually rebound and/or raw material prices will drop to profitable levels. The present invention addresses a way of modifying an existing unprofitable methanol plant to make it more profitable when methanol prices are low and/or gas prices are high. Mayland (U.S. Pat. No. 2,622,089) discloses a method of reforming natural gas to produce hydrogen and carbon monoxide synthesis gas. At column 1, lines 10-36, there is a disclosure that carbon dioxide is combined with natural gas and steam in a reforming reaction to produce hydrogen and carbon monoxide. The specific method is said to obtain larger quantities of hydrogen-carbon monoxide synthesis gas mixtures with a given quantity of carbon dioxide in the feed to the hydrocarbon reforming unit (column 1, line 48 to column 2, line 10). The molar ratio of hydrogen to carbon monoxide in the syngas is said to be 2.0 (for methanol production). The molar ratio of steam to methane in the reformer feed ranges from 1.5 to 1.8, and the ratio of carbon dioxide to methane in the feed ranges from 0.64 to 0.91. Moe (U.S. Pat. No. 3,859,230) discloses that naphtha and steam are reformed in a two-stage reforming process to make CO and H2. A minor portion (15-30%) of the effluent from the first reformer stage is subjected to absorption/stripping to remove CO2 which is fed to the second reformer stage with the major portion of the effluent from the first stage.
The effluent from the second reformer is used as syngas for alcohol/aldehyde production. If desired, the effluent from the second reformer stage can also be treated to remove CO2 that can be recycled to the second reformer stage with the CO2 recovered from the first stage effluent. At column 1, beginning at line 28, it is disclosed that the prior art obtained CO2 from the effluent of the second reformer stage, or from the effluent from the combustion gases used to heat the reformer. Joensen et al (mentioned above) disclose making acetic acid from CO and H2 by using a dehydration catalyst (Cu/Zn/Al) that produces methanol (MeOH), dimethyl ether (DME) and CO2. The methanol and DME are then separated from the CO2 and reacted with CO to make acetic acid. The CO is said to be made from steam-reformed CH4. A portion of the syngas from the reformer (2 < H2/CO < 3) is diverted from the feed to the MeOH/DME reactor and then membrane- or cryogenically treated to recover the CO for feed to the acetic acid reactor. The process is said to be a parallel production of MeOH and CO, avoiding MeOH import as required in prior art processes. Steinberg et al (U.S. Pat. No. 5,767,165) disclose that CH4 is autothermally decomposed to make carbon black and hydrogen. In FIG. 3, a portion of the CH4 feed is reformed with CO2 (without steam) to make CO. The CO/H2 from the CH4/CO2 reformer is then reacted with the additional H2 from the autothermal reformer to make MeOH. The CO2 is said to be obtained by fossil fuel combustion. Excess H2 from the syngas can be burned to supply energy for the CH4 decomposition. Park et al (U.S. Pat. No. 5,855,815) disclose making syngas for Fischer-Tropsch synthesis. CO2 and CH4 are reformed with 0-10% O2 and 0-10% H2O in the presence of Ni catalyst containing an alkali metal on a silica support at 600-1000° C. and a space velocity of 1000-500,000 hr−1 to make CO, H2 and H2O.
The effluent is said to have an H2/CO ratio less than 3, compared to an H2/CO ratio in the prior art of 0.5-2 with conventional CO2 reforming. As far as applicant is aware, there is no disclosure in the prior art for modifying existing methanol plants, including methanol/ammonia plants, to supply stoichiometric MeOH and CO for manufacturing acetic acid, for example, that can be a more valuable product than MeOH. The present invention involves the discovery that the large capital costs associated with CO generation for a new acetic acid plant can be significantly reduced or largely eliminated by retrofitting an existing methanol or methanol/ammonia plant to make acetic acid. More specifically, carbon dioxide can be fed into a reformer to which natural gas and steam (water) are fed. Syngas is formed in the reformer wherein both the natural gas and the carbon dioxide are reformed to produce syngas with a large proportion of carbon monoxide relative to reforming without added carbon dioxide. The syngas can be split into a first part and a second part. The first syngas part is converted to methanol in a conventional methanol synthesis loop that is operated at about half of the design capacity of the original plant since less syngas is supplied to it. The second syngas part can be processed to separate out carbon dioxide and carbon monoxide, and the separated carbon dioxide can be fed back into the feed to the reformer to enhance carbon monoxide formation. The separated carbon monoxide can then be reacted with the methanol to produce acetic acid or an acetic acid precursor by a conventional process. Separated hydrogen can also be reacted with nitrogen, in a conventional manner, to produce ammonia. Also, a portion of acetic acid that is produced can be reacted in a conventional manner with oxygen and ethylene to form vinyl acetate monomer. 
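The "approximately stoichiometric" balance described above follows from the carbonylation chemistry: CH3OH + CO → CH3COOH consumes methanol and carbon monoxide 1:1, so the syngas diversion should yield roughly equal molar flows of each. A back-of-the-envelope sketch (the flow numbers are illustrative, not plant data):

```python
# Methanol carbonylation is 1:1 (CH3OH + CO -> CH3COOH), so acetic acid
# output is limited by whichever feed stream is smaller.
def acetic_acid_output(meoh_mol_per_h, co_mol_per_h):
    """Maximum acetic acid production rate (mol/h) for given feed rates."""
    return min(meoh_mol_per_h, co_mol_per_h)

assert acetic_acid_output(100.0, 100.0) == 100.0  # balanced: no excess reagent
assert acetic_acid_output(100.0, 80.0) == 80.0    # CO-limited: MeOH in excess
```

This is why the diversion step is "balanced": any mismatch between the two streams leaves one raw material unconverted.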
The nitrogen for the ammonia process (especially for any added ammonia capacity in a retrofit of an original methanol plant comprising an ammonia synthesis loop) and the oxygen for the vinyl acetate monomer process, can be obtained from a conventional air separation unit. Broadly, the present invention provides, in one aspect, a method for retrofitting an original methanol plant which has at least one steam reformer for converting a hydrocarbon to a syngas stream containing hydrogen and carbon monoxide, a heat recovery section for cooling the syngas stream, a compression unit for compressing the syngas stream, and a methanol synthesis loop for converting at least a portion of the hydrogen and carbon monoxide in the syngas stream to methanol. The method converts the methanol plant into a retrofitted plant for manufacturing a product from carbon monoxide and methanol selected from the group consisting of acetic acid, acetic anhydride, methyl formate, methyl acetate and combinations thereof. The method comprises the steps of: (a) diverting a portion of the syngas stream from at least one reformer to a separation unit; (b) operating the methanol synthesis loop with a feed comprising the remaining syngas stream to produce less methanol than the original methanol plant; (c) operating the separation unit to separate the diverted syngas into at least a carbon monoxide-rich stream and a hydrogen-rich stream, wherein the quantity of hydrogen in the hydrogen-rich stream is greater than any net hydrogen production of the original methanol plant; and (d) reacting the carbon monoxide-rich stream from the separation unit with the methanol from the methanol synthesis loop to form the product, wherein the diversion of the syngas stream is balanced for the approximately stoichiometric production of the methanol from the methanol synthesis loop and the carbon monoxide-rich stream from the separation unit for conversion to the product. 
Preferably, at least one steam reformer is modified to increase carbon monoxide production in the syngas stream. The syngas stream preferably comprises carbon dioxide, and the separation unit produces a carbon dioxide-rich stream that is preferably recycled to at least one reformer to increase the carbon monoxide production. The reaction step can include the direct catalytic reaction of methanol and carbon monoxide to form acetic acid as in the Monsanto-BP process, for example, or alternatively can comprise the intermediate formation of methyl formate and isomerization of the methyl formate to acetic acid, the intermediate reaction of CO and two moles of methyl alcohol to form methyl acetate and hydrolysis of the methyl acetate to acetic acid and methanol, or the carbonylation of the methyl acetate to form acetic anhydride. In one preferred embodiment of the retrofitting method, the present invention provides a method for retrofitting an original methanol plant that has at least one steam reformer for converting a hydrocarbon/steam feed to a syngas stream containing hydrogen and carbon monoxide; a heat recovery section for cooling the syngas stream, a compression unit for compressing the syngas stream, and a methanol synthesis loop for converting at least a portion of the hydrogen and carbon monoxide in the syngas stream to methanol. The retrofitted plant can manufacture a product from carbon monoxide and methanol selected from the group consisting of acetic acid, acetic anhydride, methyl formate, methyl acetate and combinations thereof.
The retrofitting method comprises the steps of: (a) modifying at least one steam reformer for operation with a feed comprising a relatively increased carbon dioxide content; (b) diverting a portion of the syngas stream from at least one steam reformer to a separation unit; (c) operating the methanol synthesis loop with a feed comprising the remaining syngas stream to produce less methanol than the original methanol plant; (d) operating the separation unit to separate the diverted syngas into a carbon dioxide-rich stream, a carbon monoxide-rich stream and a hydrogen-rich stream; (e) recycling the carbon dioxide-rich stream from the separation unit to at least one modified steam reformer to increase the carbon monoxide formation relative to the original methanol plant and increase the molar ratio of carbon monoxide to hydrogen; (f) reacting the carbon monoxide-rich stream from the separation unit with the methanol from the methanol synthesis loop to form the product, wherein the diversion of the syngas stream is balanced for the approximately stoichiometric production of the methanol from the methanol synthesis loop and the carbon monoxide-rich stream from the separation unit for conversion to the product. The modified steam reformer is preferably modified to operate at a higher temperature to enhance the carbon conversion to carbon monoxide. The separation unit can include a solvent absorber and stripper for carbon dioxide recovery, and a cryogenic distillation unit for carbon monoxide and hydrogen recovery. The compression unit preferably has a three-stage compressor, and the syngas stream diversion preferably occurs between the second and third compression stages. The third compressor stage is preferably modified for operation at a lower throughput than the original methanol plant. 
Where the methanol synthesis loop of the original methanol plant includes a recycle loop compressor, the recycle loop compressor can also be modified for operation at a lower throughput. The method can further comprise the step of reacting the hydrogen in the hydrogen-rich stream with nitrogen to make ammonia. Where the original methanol plant produces a hydrogen-rich stream comprising a loop purge from the methanol synthesis loop that was reacted with nitrogen to make ammonia, the retrofitted plant can use the hydrogen-rich stream from the separation unit as a primary hydrogen source for the ammonia production. With the additional hydrogen available from the syngas, additional ammonia can be produced in the retrofitted plant relative to the original methanol plant. The method can further comprise installing a vinyl acetate monomer unit for reacting a portion of the acetic acid with ethylene and oxygen to make vinyl acetate monomer. An air separation unit can be installed to make the oxygen for the vinyl acetate monomer unit, and the nitrogen produced from the air separation unit preferably matches the nitrogen required for the additional ammonia production. In another aspect, the present invention provides a process for making hydrogen and a product selected from the group consisting of acetic acid, acetic anhydride, methyl formate, methyl acetate and combinations thereof, from a hydrocarbon via methanol and carbon monoxide which can be effected by construction of a new plant or retrofit of an existing plant. 
The process comprises the steps of: (a) reforming the hydrocarbon with steam in the presence of a minor proportion of carbon dioxide to form a syngas containing hydrogen, carbon monoxide, and carbon dioxide having a molar ratio R = (H2−CO2)/(CO+CO2) from about 2.0 to about 2.9; (b) recovering heat from the syngas to form a cooled syngas stream; (c) compressing the cooled syngas stream to a separation pressure; (d) diverting a major portion of the compressed syngas to a separation unit; (e) separating the syngas diverted to the separation unit into a carbon-dioxide-rich stream, a carbon monoxide-rich stream and a hydrogen-rich stream; (f) recycling the carbon dioxide-rich stream to the reforming step; (g) further compressing the remaining minor portion of the syngas to a methanol synthesis pressure higher than the separation pressure; (h) operating a methanol synthesis loop to convert the hydrogen and carbon monoxide in the further compressed syngas into a methanol stream; and (i) reacting the carbon monoxide-rich stream from the separation unit with the methanol stream from the methanol synthesis loop to make the product, wherein the diversion step is balanced to obtain approximately stoichiometric amounts of carbon monoxide and methanol. The process preferably has a molar ratio of carbon dioxide to hydrocarbon comprising natural gas in feed to the reforming step from about 0.1 to 0.5 and a ratio of steam to natural gas from about 2 to 6. The methanol synthesis loop can be operated substantially below a total maximum combined design throughput of all methanol synthesis reactor(s) in the loop. The process can further comprise the step of reacting the hydrogen in the hydrogen-rich stream with nitrogen in an ammonia synthesis reactor to make ammonia. The process can also comprise the step of separating air into a nitrogen stream and an oxygen stream and supplying the nitrogen stream to the ammonia synthesis reactor.
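The module ratio R = (H2−CO2)/(CO+CO2) in step (a) can be checked with a one-line calculation. The composition below is illustrative, not taken from the patent:

```python
# Worked example of the syngas module ratio R = (H2 - CO2) / (CO + CO2).
# Flows are hypothetical mol-basis values for illustration only.
def module_ratio(h2, co, co2):
    return (h2 - co2) / (co + co2)

# A syngas of 66 mol H2, 28 mol CO, 6 mol CO2:
r = module_ratio(66.0, 28.0, 6.0)
assert abs(r - 60.0 / 34.0) < 1e-9  # about 1.76, below the 2.0-2.9 target,
                                    # so the feed would need adjustment
```

Recycling CO2 to the reformer (step f) shifts the composition, which is how the process steers R into the target range.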
Where the product comprises acetic acid or an acetic acid precursor which is converted to acetic acid, the process can further comprise the step of supplying the oxygen stream from the air separation unit to a vinyl acetate synthesis reactor, along with a portion of the acetic acid from the carbon monoxide-methanol reaction step, and ethylene, to produce a vinyl acetate monomer stream. Regardless of whether the plant is a retrofit or a new plant, where the product comprises acetic acid, the reaction step preferably comprises reacting methanol, methyl formate, or a combination thereof in the presence of a reaction mixture comprising carbon monoxide, water, a solvent and a catalyst system comprising at least one halogenated promoter and at least one compound of rhodium, iridium or a combination thereof. The reaction mixture preferably has a water content up to 20 weight percent. Where the reaction step comprises simple carbonylation, the water content in the reaction mixture is more preferably from about 14 to about 15 weight percent. Where the reaction step comprises low-water carbonylation, the water content in the reaction mixture is more preferably from about 2 to about 8 weight percent. Where the reaction step comprises methyl formate isomerization or a combination of isomerization and methanol carbonylation, the reaction mixture more preferably contains a nonzero quantity of water up to 2 weight percent. The reaction step is preferably continuous.
According to our database, Eric Muth authored at least 22 papers between 2004 and 2017. Assessing the Accuracy of a Wrist Motion Tracking Method for Counting Bites Across Demographic and Food Variables. Measuring the Consumption of Individual Solid and Liquid Bites Using a Table-Embedded Scale During Unrestricted Eating. Designing Mobile Health Technologies for Self-Monitoring: The Bite Counter as a Case Study. Recognizing Eating Gestures Using Context Dependent Hidden Markov Models. Automatic Detection of Periods of Eating Using Wrist Motion Tracking. Improving the Recognition of Eating Gestures Using Intergesture Sequential Dependencies. The effects of 0.2 Hz varying latency with 20-100 ms varying amplitude on simulator sickness in a helmet mounted display. Detecting Periods of Eating During Free-Living by Tracking Wrist Motion. Remote Technical Support Requires Diagnosing the End User (Customer) as well as the Computer. Real-time detection of workload changes using heart rate variability. Characteristics of Head-Mounted Displays and Their Effects on Simulator Sickness. The effects of display delay on simulator sickness. Perceptual thresholds for display lag in a real visual environment are not affected by field of view or psychophysical technique. Change Point Detection Via Sub-Gaussian Fitting. The Challenge of Uncoupled Motion: Duration of Cognitive and Physiological Aftereffects. A Device for Detecting and Counting Bites of Food Taken by a Person during Eating. Effects of Brief Training on Use of Automated External Defibrillators by People Without Medical Expertise. Real-Time Correction of Heart Interbeat Intervals. Methods for the Evaluation of Orientation Sensors. Effects of Uncoupled Motion on Performance. A Trackable Laser Tag System. A Real-Time Index of Vagal Activity.
https://www.csauthors.net/eric-muth/
A week after the owners of the Penn Book Center announced its closure, more than 3,000 Philadelphia residents have signed a petition to save the bookstore. The petition, which was addressed to Penn President Amy Gutmann, called on the University to meet with the bookstore owners to develop a business model that would save the store. On April 8, store owners Ashley Montague, who graduated from Penn in 1999 with a Ph.D. in English, and Michael Row, who received a Wharton Ph.D. in 2001, announced that Penn Book Center will shut its doors after nearly 60 years of business. Although they have taken steps to keep the store, which opened in 1962, from closing, the owners said these measures have not generated enough profit. English professor Chi-ming Yang started the petition on April 10 after the owners made the announcement. In the petition, Yang called on the University to find a solution that keeps both the Penn Book Center and the Penn Bookstore in place to serve the school. Yang also commended the President's Office for promoting the MacArthur Foundation's 100&Change competition and the President's Engagement and Innovation prizes to "reward students who 'make a substantial, sustainable impact in the world," but called on them to do the same for the Penn Book Center. "Now is the time for the University to stand behind its commitments to sustainability and social innovation," Yang wrote. The Penn Book Center regularly hosts poetry and book readings with the Institute of Contemporary Art and Penn’s Center for Africana Studies. In fall 2017, the Penn Book Center stopped selling course books and expanded its collection to the sale of "regular trade books." The decision was made because the Penn Book Center did not receive enough revenue from the course books, Montague said. Yang said the petition received 2,000 signatures within the first 24 hours. 
Yang added that peer institutions such as Yale University, Columbia University, Princeton University, Harvard University, and Brown University have independent bookstores in the surrounding area. In the petition, Yang wrote that Princeton University subsidizes 30% of textbook sales at its independent bookstore to encourage students to buy books. “It would be a travesty if a rich Ivy League school like Penn did not have a scholarly independent bookstore on its campus,” she said. Montague said the petition has brought hope that Penn and the Penn Book Center can find a way to save the beloved bookstore. In the past, the University has been flexible with the Penn Book Center's lease, Montague said. “Maybe something similar to what they do at Drexel where you have a co-op that you could get some Penn students from Wharton working on certain aspects of the business,” Montague said. “You could have English students who might be able to be booksellers." Penn faculty and students said they hope the petition will garner the attention of Penn administrators and lead to the University helping to resolve the bookstore's financial struggles. College senior Derek Willie, who studies English, said he has little faith in the Penn administration because of its responses in the past to Fossil Free Penn, which called on the University to divest from fossil fuels, and Graduate Employees Together – University of Pennsylvania's campaign for unionization. Despite student advocacy, Penn has not divested from fossil fuels. Penn has also opposed the formation of a graduate student union. “[The Penn Book Center] is contributing very much to the Penn community, but I don’t think the University values that,” Willie said. 
“They think of everything in terms of money and I think that given Penn’s attitude toward these kinds of things combined with the general business atmosphere in which small book stores are going out of business, I think that is what is allowing this to happen.” Sixth-year English Ph.D. candidate Orchid Tierney said when she first arrived at Penn from New Zealand in August 2013, the Penn Book Center was the first bookstore she entered in the United States. “I gravitated towards it as soon as I arrived in August and I was just shocked and delighted by the poetry collection that they had there, and I think I would go in each week just picking out books to buy and it became a place I could anchor myself as a Philadelphian,” Tierney said. A few days before hearing about the Penn Book Center's closure, sixth-year Ph.D. candidate in Comparative Literature Julia Dasbach attended a reading featuring world-renowned poet Ilya Kaminsky. She said she was heartbroken when she heard the news and that she had hoped her poetry collection would be featured at the bookstore after it is published this September. "[Penn Book Center] supported me from the very beginning back when I was a nobody,” Dasbach said. The Penn Book Center hosted the launch of her chapbook of poems when she was a first-year graduate student. Montague said she hopes Penn will help the owners come up with a creative solution to save the bookstore. “[Row and I] are the owners I guess, but in a way we don’t really own it, and that’s why it’s so wrenching to say we’re going to close it, because we’re not the only people who have something invested in this,” Montague said. “I feel like it was kind of a legacy to us that we took on. It would be great for this to continue.”
https://www.thedp.com/article/2019/04/penn-book-center-petition-close-university-city
Payroll Accounting 2013, Bernard J. Bieg and Judith A. Toland. CHAPTER 2: COMPUTING WAGES & SALARIES. Developed by Lisa Swallow, CPA CMA MS

Learning Objectives: Explain the major provisions of the Fair Labor Standards Act. Define hours worked. Describe the main types of records used to collect payroll data. Calculate regular and overtime pay. Identify distinctive compensation plans.

What is Minimum Wage? It includes all rates of pay including, but not limited to: commissions; nondiscretionary bonuses and severance pay; on-call or differential pay. A discretionary bonus (one which is not agreed upon or promised beforehand) is not included in an employee's regular rate of pay. Other types of compensation not included in the regular rate of pay: gifts made as a reward for service; payments for a bona fide profit-sharing plan; vacation, holiday, sick day or jury duty pay; vehicle, tool or uniform allowances. (LO-1)

Tipped Employees: A "tipped employee" regularly averages more than $30/month in tips. The minimum tipped wage is $2.13/hour, so the tip credit = $5.12/hour (but it may be calculated differently based upon state law). The employee must make $7.25/hour when combining tips and wages ($7.25 x 40 = $290 minimum weekly gross). The tip credit remains the same for overtime pay calculation purposes. Examples of tips received for a 40-hour workweek (40 hours x $2.13/hour minimum tipped wage = $85.20): #1. Reported tips = $43. Is $85.20 + $43 > $290? No, so the employer must pay additional wages of $290 - $43 = $247. #2. Reported tips = $1,189. Is $85.20 + $1,189 > $290? Yes, so the employer pays $85.20 in wages. Note: states' tip credit percentages may differ from federal law. (LO-1)

Overtime Provisions & Exceptions: The workweek is established by corporate policy and must be seven consecutive 24-hour periods, for example 12:01 a.m. Saturday to 11:59 p.m. Friday. Some states require daily overtime (OT) over 8 hours (if the state plan is more generous than the FLSA, state law is followed). The FLSA sets OT pay at 1.5 times regular pay. An employer can require employees to work overtime. Exceptions to the above are as follows: hospital employees, overtime for 80+ hours in 14 days or over 8 hours in a day (whichever is greater); retail or service industry employees earning commission (special rules); employees receiving remedial education. (LO-1)

Exempt vs. Nonexempt Employees: "Exempt" means exempt from some, or all, of the FLSA provisions. White-collar workers as outlined in Figure 2-2 (p. 2-10) are exempt: executives, administrators, professionals; business owners, highly compensated employees; computer professionals and creative professionals; outside salespeople. The test of exemption means the employee must meet the 'primary duty' requirements listed in Figure 2-2 and must be paid on a salary basis at least $455/week. Blue-collar workers are always entitled to overtime pay; this includes police officers, EMTs, firefighters, paramedics and LPNs. Note: Putting someone on salary doesn't mean he/she is exempt! (LO-1)
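The tip-credit check in the tipped-employee examples above can be written out as a short helper. This is a minimal sketch of the federal rule only (state rules may differ), and the function name is my own:

```python
def tipped_weekly_pay(hours, reported_tips, cash_wage=2.13, minimum_wage=7.25):
    """Return (employer_cash_wages, total_pay) for a tipped employee.

    If the cash wage plus reported tips reaches hours * minimum_wage,
    the employer owes only the cash wage; otherwise the employer must
    make up the shortfall."""
    base = hours * cash_wage          # e.g. 40 * $2.13 = $85.20
    required = hours * minimum_wage   # e.g. 40 * $7.25 = $290.00
    if base + reported_tips >= required:
        wages = base                  # tip credit fully covers the gap
    else:
        wages = required - reported_tips  # employer pays the shortfall
    return round(wages, 2), round(wages + reported_tips, 2)
```

Running the slide's two examples: with $43 in tips the employer owes $247 (total $290); with $1,189 in tips the employer owes only $85.20.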
Determining Employee's Work Time: Principal activities require exertion and are required by the employer and for the employer's benefit. Prep at a work station is a principal activity, and in some situations changing in/out of protective gear may be part of the workday. Travel (when part of the principal workday) is compensable, as are idle time and wait time (waiting to provide the employer's service). Rest periods under 20 minutes are principal activities (you can't make the employee "check out"). Meal periods are not compensable time unless the employee must perform some tasks while eating; they are generally 30 minutes or longer. Work at home is a principal activity for nonexempt employees. Sleep time is a principal activity if the employee is required to be on duty less than 24 hours. Training sessions (with certain caveats) and waiting for a doctor's appointment on site also count. (LO-2)

Records Used for Timekeeping: The FLSA requires certain time and pay records be kept. Time sheets indicate arrival/departure time of the employee. Computerized time/attendance recording systems: card-generated systems use computerized time cards; badge systems employ badges in conjunction with electronic time clocks; cardless and badgeless systems require that an employee use a PIN to process timekeeping; PC-based systems allow the employee to clock in via computer. Next-generation technology includes touch-screen kiosks, web-based systems, biometrics and IVR (interactive voice response). (LO-3)

Computing Wages/Salaries: The most common pay periods are as follows: biweekly (26 per year), 80 hours each pay period; semi-monthly (24), different hours each pay period; monthly (12), different hours each pay period; weekly (52), 40 hours each pay period. An employer may have different pay periods for different groups within the same company! (LO-4)
Calculating Overtime Pay: There are two methods. The most common method: calculate gross pay (40 hours x the employee's regular rate), then calculate OT pay by multiplying 1.5 x the employee's regular rate x hours in excess of 40. The other method: calculate gross pay (all hours worked x the employee's regular rate), then add an overtime premium (hours in excess of 40 x the overtime premium rate, where hourly rate x 1/2 = overtime premium rate). These methods result in the same total gross pay! (LO-4)

Salaried Nonexempt Employees - Fluctuating Workweek: The employee and employer may forge an agreement that a fluctuating schedule on a fixed salary is acceptable. Overtime is calculated by dividing the normal salary by total hours worked; then an extra .5 overtime premium is paid for all hours worked over 40. Alternatively, one can divide the fixed salary by 40 hours (which gives a different pay rate each week); then an extra .5 overtime premium is paid for all hours worked over 40. Another alternative is the BELO plan, appropriate for a very irregular work schedule: deductions cannot be made for non-disciplinary absences, guaranteed compensation cannot be for more than 60 hours, and salary is calculated as the wage rate multiplied by the maximum number of hours, plus 50% for overtime. (LO-5)

Piece Rate: The FLSA requires piecework earners to get paid for nonproductive time, and pay must equal at least minimum wage, with OT calculated one of two ways. Method A: units produced x unit piece rate = regular earnings; regular earnings / total hours = hourly rate; hourly rate x 1/2 = OT premium; regular earnings + (OT premium x OT hours) = gross pay. Method B: (units produced in 40 hours x piece rate) + [(units produced in OT) x (1.5 x piece rate)]. Note: the two methods don't give the same results! (LO-5)
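The two overtime methods described above can be sketched side by side to confirm they produce the same gross pay; the function names are mine:

```python
def gross_pay_common(rate, hours):
    """Method 1: pay the first 40 hours at the regular rate and
    every hour over 40 at 1.5x the regular rate."""
    ot_hours = max(hours - 40, 0)
    return (hours - ot_hours) * rate + ot_hours * rate * 1.5

def gross_pay_premium(rate, hours):
    """Method 2: pay ALL hours at the regular rate, then add a
    0.5x-rate overtime premium for each hour over 40."""
    ot_hours = max(hours - 40, 0)
    return hours * rate + ot_hours * rate * 0.5
```

For example, 46 hours at $10.00/hour comes out to $490.00 either way: $400 + 6 x $15 with the first method, or $460 + 6 x $5 with the second.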
Special Incentive Plans: Special incentive plans are modifications of piece-rate plans, used to entice workers to produce more. Computation of payroll is based on differing rates for differing quantities of production. Example of an incentive plan: $0.18/unit for units inspected up to 2,000 units/week; $0.24/unit for units inspected between 2,001 and 3,500 units/week; $0.36/unit for units inspected over 3,500 units/week. (LO-5)
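Assuming the example rates above are marginal dollar rates (each band of units is paid at its own rate, which is how the "between 2,001-3,500" wording reads), the weekly incentive pay can be sketched as:

```python
def incentive_pay(units):
    """Weekly incentive pay under the tiered rates in the example above,
    treating each tier as a marginal band (rates assumed to be dollars)."""
    tiers = [(2000, 0.18), (3500, 0.24), (float("inf"), 0.36)]
    pay, lower = 0.0, 0
    for upper, rate in tiers:
        in_band = max(min(units, upper) - lower, 0)  # units falling in this band
        pay += in_band * rate
        lower = upper
    return round(pay, 2)
```

So 4,000 units would earn 2,000 x $0.18 + 1,500 x $0.24 + 500 x $0.36 = $900.00.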
https://slideplayer.com/slide/3559375/
July is National Minority Mental Health Awareness Month. During the month of July, treatment professionals from around the United States engage in conversation and awareness efforts promoting mental health. In American culture, there is a negative stigma surrounding mental health. Often, in Hispanic communities, that stigma is intensified. The way people talk and react to mental health concerns has a profound effect on whether people seek treatment. In the popular television series Homeland, the main character (Carrie Mathison) struggles with bipolar disorder. She is forced to conceal her illness in fear of her job being threatened. When she is not ‘med-compliant,’ she suffers from intense bouts of mania, during which people think she is “crazy” or “insane.” In reality, she is sick. When people catch a fever, they go to the doctor, take a few days off from work, and no one blinks an eye. Take two days off from work because of a manic episode, and people around the office start whispering that you are “crazy.” The way we discuss and often shun people who struggle with mental illness has a negative impact on the overall health of our communities. In a perfect world, seeking treatment would be met with a nonjudgmental attitude. In a perfect world, there would be no shame associated with asking for help. The Florida Council on Compulsive Gambling (FCCG) understands how difficult it can be for people to reach out for help given this stigma. That is why the 888-ADMIT-IT Problem Gambling Helpline is confidential. People are sometimes afraid to give any identifying information out of fear and shame, and the FCCG always makes sure to let callers know that anonymity is a top priority when providing resources. During July, expand your thought process as it relates to mental health. People of all ethnic backgrounds should feel that treatment is an option, for any mental illness. To read more and find ways to get involved, click here.
https://blog.gamblinghelp.org/index.php/minority-mental-health-month/
Meaning of Research Design: Decisions pertaining to what, where, when, how much, and by what means with regard to a study make up a research design. A research design is the arrangement of conditions for the collection and analysis of data in a fashion that aspires to combine relevance to the research purpose. As such, the design consists of an overview of what the researcher will do, from writing the hypothesis and its operational significance to the final analysis of data. The prime method of inquiry in science is the experiment. Its key features are control over variables, careful measurement, and establishing cause-and-effect relationships. An experiment is an investigation in which a hypothesis is scientifically tested. In an experiment, an independent variable (the cause) is manipulated and the dependent variable (the effect) is measured; any extraneous variables are controlled. An advantage is that experiments should be objective: the views and opinions of the researcher should not affect the results of a study. This makes the data more valid and less biased. There are three types of experiments you need to know. In a lab experiment, the researcher decides where the experiment will take place, at what time, with which participants, in what circumstances, and using a standardized procedure. Participants are randomly allocated to each independent variable group. Lab experiments are easier to replicate (i.e. repeat) because a standardized procedure is used. They allow for precise control of extraneous and independent variables, which allows a cause-and-effect relationship to be established. However, the artificiality of the setting may produce unnatural behavior that does not reflect real life, which means it may not be possible to generalize the findings to a real-life setting. Demand characteristics or experimenter effects may also bias the results and become confounding variables. 
Field Experiments: Field experiments are done in the everyday (i.e. real-life) environment of the participants. The experimenter still manipulates the independent variable, but in a real-life setting, so extraneous variables cannot really be controlled. Behavior in a field experiment is more likely to reflect real life because of its natural setting. There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied; this occurs when the study is covert. However, there is less control over extraneous variables that might bias the results, which makes it difficult for another researcher to replicate the study in exactly the same way. Natural Experiments: Natural experiments are conducted in the everyday (i.e. real-life) environment of the participants. For example, Hodges and Tizard's attachment research compared the long-term development of children who had been adopted, fostered or returned to their mothers with a control group of children who had spent all their lives in their biological families. Behavior in a natural experiment is more likely to reflect real life because of its natural setting, and natural experiments can be used in situations in which it would be ethically unacceptable to manipulate the independent variable. However, they may be more expensive and time-consuming than lab experiments, and there is no control over extraneous variables that might bias the results. Experiment Terminology: Ecological validity is the degree to which an investigation represents real-life experiences. Experimenter effects are the ways that the experimenter can accidentally influence the participant through their appearance or behavior. Demand characteristics are the clues in an experiment that lead the participants to think they know what the researcher is looking for. The independent variable (IV) is the variable the experimenter manipulates. 
A methodology is a plan of attack, especially when that plan of attack is used repeatedly. This might be obvious, but the word methodology is related to the word method; in fact, a methodology is a system of methods followed consistently. What is quantitative research? Quantitative research uses mathematically based methods, in particular statistics, to analyse the data, and many researchers define themselves as either quantitative or qualitative. Glossary: The methodology is a chapter of a dissertation that gives a full description of how the research paper/the study was performed, analyzing the material which was used to make the research. It is a theoretical analysis of the methods and principles used for particular research. The methodology is the general research strategy that outlines the way in which research is to be undertaken and, among other things, identifies the methods to be used in it. These methods, described in the methodology, define the means or modes of data collection. The anecdotal method is a research method based on personal recollections of a case, as opposed to specific, empirically derived investigation. It can provide ideas to fuel further research. ANECDOTAL METHOD: "A professor relying on the anecdotal method reflects to his or her class on the ..."
https://cipyvetubifi.attheheels.com/dictionary-definition-of-research-methodology-2354zn.html
The innate immune system is an ancient mechanism of host defense found in essentially every multicellular organism from plants to humans. In invertebrates, it is the only mechanism of defense. Vertebrates also developed an adaptive immune response. For decades, innate immunity in mammals was dismissed as an evolutionary remnant of the immune system, gaining time by non-specific digestion of pathogens before the operation of the exquisitely complex adaptive immune response. However, the discovery of the Toll-like receptors (TLRs) has opened up an entirely new era in the understanding of the role of innate immunity. Innate immunity is capable of discriminating between individual species of microbes through recognition of different microbe-specific components by individual TLRs, and rapidly induces appropriate immune responses against pathogens. Furthermore, activation of innate immunity is a prerequisite to induction of acquired immunity. This paradigm shift has changed our thinking on the pathogenesis and treatment of infections, immune diseases, allergy, and cancers. Dr. Akira has identified many ligands recognized by TLRs, and established the position of TLRs as pathogen recognition receptors. He also clarified the signaling pathways from TLRs to the nucleus. The identification of TLR ligands and clarification of their signaling pathways have opened the possibility of developing new approaches to treat a wide spectrum of diseases, including infectious, malignant, autoimmune and allergic diseases.
http://www.japan-acad.go.jp/en/publishing/pja_b/cover/85/85_4.html
The sounds of nature are beautiful. Starting off early in the morning, right as the sun starts to come up, you start hearing the sounds of nature. Bright and early you hear the birds chirping to greet the day. Throughout the day you can hear the birds singing for all to hear. There are other creatures and noises around as well. You can hear the squirrels scampering around in the trees searching for food. You can hear the wind as it blows through the tree branches, the trees making creaking and cracking noises. The chipmunks rustle through the leaves on the ground. Sometimes you can even hear the slight sound of a deer bounding around in the woods. As the day goes on and the sun starts to set, the sounds of the daytime start to die down as the things that cause those sounds go to sleep. The wind may still blow, but at night it has a slightly different sound, a little more eerie. At night the sound is not lost, just a bit different. The crickets and peepers wake up and start to sing their own songs to each other. Wolves and other wild dogs also make their own song toward the moon. As we fall asleep ourselves to the peaceful sound of crickets, soon the sun will rise again. Maybe tomorrow there will be a thunderstorm.
http://www.teenink.com/nonfiction/personal_experience/article/260761/Sounds-of-Nature/
The natural gas being piped into your home contains a wide array of toxic chemicals, including nearly two dozen so harmful they're classified as hazardous air pollutants, a new study says. Natural gas samples taken from 69 Boston-area cooking stoves were found to contain at least 21 different hazardous air pollutants, including benzene, toluene, ethylbenzene, xylene and hexane, according to findings recently published in the journal Environmental Science & Technology. "One of the reasons we have natural gas in so many homes is because we've been told that it's clean," said co-author Dr. Curtis Nordgaard, an environmental health scientist with the PSE Healthy Energy research institute in Oakland, Calif. "This shows that while it may be cleaner than burning coal and oil in terms of air pollutants, that doesn't mean that it's completely clean." The benzene found in natural gas is of particular concern because it's a known carcinogen, and was found in 95% of the samples taken, Nordgaard said. "Some of the others may have some suspected carcinogenic activity, but benzene is really the one of greatest concern. We know it causes leukemia and it's also been associated with lymphoma," he said. Another study published in the same journal in January found that most stoves leak natural gas, said Nordgaard and Tasha Stoiber, a senior scientist with the Environmental Working Group in Washington, D.C. "Your stove, regardless of when you use it, is always at a very low level leaking natural gas," said Stoiber, who was not part of the studies. "This is what you're exposed to all the time, when there's this constant leak." For this study, Nordgaard and his colleagues collected 234 natural gas samples from the kitchens of 69 homes in and around Boston. Analysis detected 296 unique chemical compounds in the natural gas, including 21 designated as hazardous air pollutants by the U.S. Environmental Protection Agency. 
"We know from other research that these pollutants can be found in gas that's coming up from a well," Nordgaard said. "So we suspect a lot of these compounds may actually be present from where the gas comes out of the ground all the way to the pipeline going into your home and your kitchen stove." Concentrations of hazardous air pollutants in natural gas varied depending on the time of year, with the highest concentrations found in winter. Distressingly, researchers found that some samples contained inadequate levels of the rotten-egg odorants that are added to natural gas to help folks sniff out leaks. "At certain times they may not be sufficient enough to smell a low-level leak, which was pretty concerning," Stoiber said. These dangerous chemicals could be polluting the air of your kitchen and home through slow leaks in your stove. "When we were doing safety checks during the study, we actually found that 5% of homes had a leak that was large enough that we had recommended it to be repaired," Nordgaard said. The study published in January reported that stoves in all but four of 53 homes tested were leaking gas, suggesting "that most stoves and associated nearby piping leak some methane continuously." "This is what you're exposed to when you're cooking," Stoiber said. "This is what you're exposed to all the time when there's this constant leak." Nordgaard and Stoiber both recommend that folks replace their gas stoves and cooktops with electric models. Some cities -- Berkeley, San Francisco, Seattle and New York City among them -- have enacted laws that phase out natural gas hookups to new construction, Stoiber said. These laws are generally meant to fight climate change, but also could be helping protect the indoor air quality of new homes. People with a gas cooker should be sure to run their range hood while cooking, and make sure they have a range hood that pipes air outside the house rather than recirculating it through the kitchen, Nordgaard said. 
It also wouldn't hurt to cook with windows open and plenty of air circulation, Stoiber added. Both experts said you also should call in an HVAC contractor to inspect your natural gas appliances and make sure they are as leak-proof as possible. More information: The Environmental Working Group has more on maintaining a pollution-free home.
https://martinspharmacy.net/anniston/patient-resources/article/2657573188/gas-used-in-homes-has-links-to-cancer-leaks-often-undetected
Nepal imports nearly 260,000 tons of Liquefied Petroleum Gas (LPG) and other fossil fuels annually from India. This fuel dependency is one of the main reasons for Nepal’s trade deficit with India. The Government of Nepal has recently started promoting electric induction stoves for cooking as an alternative to LPG. Induction stoves are pollution free at the point of use, and they are meant to reduce Nepal's dependence on LPG. This study includes a cost-benefit analysis for replacing LPG stoves with induction stoves for households in rural areas of Kavre District, Nepal. The study involved the use of data for LPG and electricity consumption for households that use LPG and have not yet adopted induction stoves. It estimates the expected post-adoption (future) electricity consumption based on the cooking energy associated with the existing (pre-adoption) LPG usage. The results show that about two-thirds of households in the study would not benefit economically from the adoption of electric induction stoves if there is no subsidy on the electricity. The amount of subsidy would depend on baseline electricity consumption and the LPG usage of the households. Future analysis of the economics of a transition to induction cooking could be improved through the collection of more precise data on LPG consumption, baseline electricity consumption, LPG prices, and the economic discount rate. Recommended citation (APA): Dahal, Grishma Raj, "Cost-benefit analysis of replacing LPG stoves with induction stoves in rural households of Kavre district Nepal" (2020). Cal Poly Humboldt theses and projects. 412.
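As a rough illustration of the trade-off the thesis quantifies, a per-household comparison might look like the sketch below. Every value fed to it (prices, the LPG-to-electricity conversion factor, the subsidy fraction) is a hypothetical placeholder, not a figure from the study:

```python
def annual_savings(lpg_kg_per_year, lpg_price_per_kg,
                   kwh_per_kg_lpg, electricity_price_per_kwh,
                   subsidy_fraction=0.0):
    """Annual fuel-cost change from switching LPG cooking to induction.

    A positive result means the household saves money; a negative one
    means induction cooking costs more, which is the situation the
    thesis found for about two-thirds of households absent a subsidy."""
    lpg_cost = lpg_kg_per_year * lpg_price_per_kg
    kwh_needed = lpg_kg_per_year * kwh_per_kg_lpg  # cooking energy as electricity
    elec_cost = kwh_needed * electricity_price_per_kwh * (1 - subsidy_fraction)
    return lpg_cost - elec_cost
```

With illustrative numbers, an unsubsidized switch can come out negative while the same household benefits once a subsidy is applied, mirroring the study's qualitative finding.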
https://digitalcommons.humboldt.edu/etd/412/
Discuss where the style began and the various ways it developed across Europe. When analyzing the building style use the terminology associated with the style for the architectural and sculptural elements. Make sure to discuss the overall symbolism in your answer and connect the symbolic meaning to architectural/sculptural design. Use 3 examples in your answer. Gothic architecture originated in France during the 12th century and was a continuing advancement of the Romanesque pilgrimage style which, in itself, was an evolution of the Roman basilica floor plan. This style permeated throughout the European countryside and was the prominent look of the Gothic period into the 15th century. Abbot Suger is credited with the design changes that occurred at the Abbey Church at St. Denis, which gave birth to the Gothic architecture phase during this period. Gothic architecture was developed from a Christian perspective that gave rise to its most significant expression being predominant in churches. This aspiration was derived from Abbot Suger’s concept of adapting what St. Denis called divine luminosity and then incorporating that as a design element into the church in the form of stained-glass windows and more open floor plans. The most outstanding characteristics of Gothic style were the use of light and its connection between structure and appearance. This equated to replacing the massive thickness of the walls of Romanesque architecture with thinner structures used for enclosure that could only support their own weight. Weight was reduced and greater wall heights were achieved through the use of ribbed vaults, which allowed lighter materials to be placed between stone ribs. These taller, thinner walls tended to exert forces outward as they were no longer supported internally by columns, and this gave rise to the external structural supports called flying buttresses. 
Wall paintings of the Romanesque churches were replaced by enormously gorgeous stained glass windows that allowed more light into the building, creating a sense of warmth and color for religious pilgrims and local congregations alike. The most exquisite example of stained glass windows can be found at the cathedral in Chartres, France. This structure represented an ideal of harmony within its configuration and contents. Its style is accentuated by strong vertical lines representative of a connection to the high heavens above. The rose windows used in this cathedral are primarily compositions of the Virgin Mary and Jesus. The rose windows represented more realistic and human characteristics of the Virgin Mary and portray the strong religious power associated with her image and legacy. Tracery was yet another Gothic architectural element and was used at Reims Cathedral in France. Early Gothic windows were originally narrow and utilized a pointed, rather than rounded, top. The design advanced and the windows became much larger, and structural supports were added into the framework to firmly hold the glass in place. These supports produced smaller pointed arches, circles and other complicated shapes, and this extra stone molding work inside each window was called tracery. Tracery was most commonly used in cathedrals during the Gothic era that were of particular importance to royalty throughout France and England. The combination of elements and symbolic components infused into this architectural style is representative of some of the greatest artists and masons in the history of the world. Gothic cathedrals imbue the strong pious influence that God, clerics and leaders held in society during this time and show how their religious visions were brought to fruition to venerate their beliefs.
https://artscolumbia.org/applied-arts/architecture/gothic-architecture/gothic-architecture-essay-19492/
Q: Crop image with OpenCV

I have a problem with cropping an image using the OpenCV library on iPhone. I have an image with a selected area and I'd like to crop the image to this area. But the new image isn't mapped to a rectangle, and the new image also has the blue color of the text. I use this code to crop the image:

    IplImage *src = [OpenCV CreateIplImageFromUIImage:image];
    NSValue *val = [corners objectAtIndex:0];
    CGPoint p1 = [val CGPointValue];
    val = [corners objectAtIndex:1];
    CGPoint p2 = [val CGPointValue];
    val = [corners objectAtIndex:2];
    CGPoint p3 = [val CGPointValue];
    val = [corners objectAtIndex:3];
    CGPoint p4 = [val CGPointValue];
    float minX = fmin(fmin(p1.x, p2.x), fmin(p3.x, p4.x));
    float minY = fmin(fmin(p1.y, p2.y), fmin(p3.y, p4.y));
    float maxX = fmax(fmax(p1.x, p2.x), fmax(p3.x, p4.x));
    float maxY = fmax(fmax(p1.y, p2.y), fmax(p3.y, p4.y));
    CGFloat width = maxX - minX;
    CGFloat height = maxY - minY;
    IplImage *dst = cvCreateImage(cvSize(width, height), 8, 3);
    p1 = CGPointMake(p1.x - minX, p1.y - minY);
    p2 = CGPointMake(p2.x - minX, p2.y - minY);
    p3 = CGPointMake(p3.x - minX, p3.y - minY);
    p4 = CGPointMake(p4.x - minX, p4.y - minY);
    IplImage *cropped = cvCreateImage(cvSize(width, height), src->depth, src->nChannels);
    cvSetImageROI(src, cvRect(minX, minY, width, height));
    cvCopy(src, cropped, NULL);
    cvResetImageROI(src);
    cvReleaseImage(&src);
    CvMat *mmat = cvCreateMat(3, 3, CV_32FC1);
    CvPoint2D32f *c1 = (CvPoint2D32f *)malloc(4 * sizeof(CvPoint2D32f));
    CvPoint2D32f *c2 = (CvPoint2D32f *)malloc(4 * sizeof(CvPoint2D32f));
    c1[0].x = p1.x; c1[0].y = p1.y;
    c1[1].x = p4.x; c1[1].y = p4.y;
    c1[2].x = p2.x; c1[2].y = p2.y;
    c1[3].x = p3.y; c1[3].y = p3.y;
    c2[0].x = 0;     c2[0].y = 0;
    c2[1].x = width; c2[1].y = 0;
    c2[2].x = 0;     c2[2].y = height;
    c2[3].x = width; c2[3].y = height;
    mmat = cvGetPerspectiveTransform(c1, c2, mmat);
    cvWarpPerspective(cropped, dst, mmat, CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, cvScalarAll(0));

I will be grateful for any help.

A: I had an error in my code. 
If someone is interested, here is the working code:

IplImage *src = [OpenCV CreateIplImageFromUIImage:image];
IplImage *dst = cvCloneImage(src);
dst->origin = src->origin;
dst->nChannels = src->nChannels;
dst->depth = src->depth;
cvZero(dst);
CGFloat width = src->width;
CGFloat height = src->height;
CvMat* mmat = cvCreateMat(3, 3, CV_32FC1);
CvPoint2D32f *c1 = (CvPoint2D32f *)malloc(4 * sizeof(CvPoint2D32f));
CvPoint2D32f *c2 = (CvPoint2D32f *)malloc(4 * sizeof(CvPoint2D32f));
c1[0].x = round(p1.x); c1[0].y = round(p1.y);
c1[1].x = round(p4.x); c1[1].y = round(p4.y);
c1[2].x = round(p2.x); c1[2].y = round(p2.y);
c1[3].x = round(p3.x); c1[3].y = round(p3.y);
c2[0].x = 0; c2[0].y = 0;
c2[1].x = width - 1; c2[1].y = 0;
c2[2].x = 0; c2[2].y = height - 1;
c2[3].x = width - 1; c2[3].y = height - 1;
mmat = cvGetPerspectiveTransform(c1, c2, mmat);
free(c1);
free(c2);
cvWarpPerspective(src, dst, mmat, CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
cvReleaseImage(&src);
cvReleaseMat(&mmat);
UIImage *newImage = [OpenCV UIImageFromIplImage:dst];
cvReleaseImage(&dst);
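Not part of the original answer, but for readers curious what cvGetPerspectiveTransform actually computes: the 3×3 homography can be reproduced by solving an 8-unknown linear system. A NumPy sketch (the helper name and sample corner points are mine):

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping each (x, y) in src to the
    matching (u, v) in dst, with the bottom-right entry fixed to 1 --
    the same matrix cvGetPerspectiveTransform returns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h00*x + h01*y + h02) / (h20*x + h21*y + 1), similarly for v,
        # rearranged into two linear equations in the 8 unknowns.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Corners of the selected quad -> corners of a 200 x 100 output image,
# in the same TL, TR, BL, BR ordering the answer uses for c1/c2.
src = [(50, 60), (260, 80), (40, 200), (250, 230)]
dst = [(0, 0), (199, 0), (0, 99), (199, 99)]
H = perspective_transform(src, dst)

p = H @ np.array([260, 80, 1.0])
print(p[:2] / p[2])  # the top-right source corner lands at (199, 0)
```

Warping then amounts to applying H (or its inverse, for backward mapping) to every pixel coordinate, which is what cvWarpPerspective does.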
When most people think average, they are thinking of a specific type of average called the arithmetic mean. But there are actually three different types of averages: the mean, median, and mode.

Mean
The arithmetic mean is the most common type of average and it is easy to find. Simply add up your numeric samples and divide by the number of samples. So, for instance, if you are finding the average high temperature of a given week, you would add up the temperature of each day and divide by 7.

Median
Another type of average is called the median. This is the middle ground of all of your samples. So, with the following temperatures in a week, you would order them numerically and choose the middle number.

|Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday|
|35|40|43|51|56|52|49|

Doing this for our set we have: 35, 40, 43, 49, 51, 52, 56. The median temperature for this week, then, was 49.

Mode
The mode is the value that occurs most often. In our case, the values did not repeat and so there is no mode. But if we noted the temperature for an entire season, some high temperatures would repeat. The one that repeated the most times would be the mode.

Range
The range helps us understand how much of a difference there was in our data. You find the range by subtracting your smallest number from your largest. In our example above we would take 56 – 35 = 21. This tells us that there was some variation in temperature: in one week the weather went from cold to moderate.

Real Life Uses
So, why are there so many different types of averages, and what do they mean to the non-mathematical person? All of these different averages are used to help us make sense of large data sets like, for example, the high temperature of every day in a season. Assuming we record data for 3 months, we will have about 90 weather readings. We can understand the weather patterns by taking the different averages.
The mean gives us a feel for what the weather was like over a period of time, the mode tells us what temperature was felt most frequently, and the range tells us if there were any wild swings in the weather. The usefulness of averages goes way beyond the weather, though. Psychologists use them to make sense of long-term research studies. Averages are used to evaluate a player’s performance in sports. Teachers use them to determine how their class is doing overall. Statisticians use them to understand something about a population, such as income and education level.

A Rhyme to Remember the Averages
If you can’t remember which is which, memorize this rhyme! Hey diddle diddle,
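The worked example above can be checked in a few lines with Python's statistics module (the temperatures are the ones from the weekly table; multimode is used because this data set has no repeating value):

```python
from statistics import mean, median, multimode

# Daily high temperatures for the week in the example above.
temps = [35, 40, 43, 51, 56, 52, 49]

print(mean(temps))               # arithmetic mean: sum / count, about 46.57
print(median(temps))             # middle value of the sorted list: 49
print(multimode(temps))          # most frequent values; here every value ties
print(max(temps) - min(temps))   # range: 56 - 35 = 21
```

Note that when no value repeats, multimode returns every value rather than raising an error, which matches the article's point that this week has no mode.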
https://www.timesheets.com/blog/2014/03/business-math-series-averages/
Q: Matrix rank and number of linearly independent rows

I wanted to check if I understand this correctly, or maybe it can be explained in a simpler way: why is matrix rank equal to the number of linearly independent rows? The simplest proof I can come up with is: matrix rank is the number of vectors of the basis of the vector space spanned by the matrix rows (row space). All bases of a given vector space have the same size. Elementary operations on the matrix don't change its row space, and therefore its rank. Then we can reduce it to row echelon form (reduced row echelon form is not necessary, because I think the non-zero rows in row echelon form are linearly independent already). So we might pick only the rows that are non-zero and still get the same row space (adding or removing an arbitrary number of zero rows doesn't change a thing), and because these rows are linearly independent, they are a basis for the row space. As mentioned above, all bases have the same size, so the number of linearly independent vectors is equal to the matrix rank (the dimension - size of basis - of the row space). Is it correct? Didn't I make it too complicated?

A: Two facts about elementary row operations are useful to resolve this question: Elementary row operations alter the column space but do not alter the linear dependences among the columns. For example, if column 10 is $4$ times column 3 minus $7$ times column 5, then after doing any elementary row operation, column 10 will still be $4$ times column 3 minus $7$ times column 5. It's not too hard to figure out why that's true. Therefore elementary row operations do not alter the number of linearly independent columns. Elementary row operations do not alter the row space. It's also not hard to see why that is true. Therefore elementary row operations do not alter the number of linearly independent rows.
After a matrix is fully reduced, it's not hard to see that the number of linearly independent columns is the number of pivot elements, and the number of linearly independent rows is also the number of pivot elements. Therefore, in the reduced matrix, the number of linearly independent columns equals the number of linearly independent rows. Since the row operations don't change the number of linearly independent columns or the number of linearly independent rows, those two quantities must be the same in every matrix. Consequently one can define the “rank” of a matrix either as the dimension of the column space or as the dimension of the row space, and it's the same thing either way.
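As a quick numerical illustration of the row-rank-equals-column-rank fact discussed above, here is a NumPy check (the example matrix is mine):

```python
import numpy as np

# A 3x4 matrix whose third row is a linear combination of the first two
# (row2 = 2*row0 + row1), so only 2 of its 3 rows are linearly independent.
A = np.array([
    [1, 2, 3, 4],
    [0, 1, 1, 2],
    [2, 5, 7, 10],
])

print(np.linalg.matrix_rank(A))    # dimension of the row space: 2
print(np.linalg.matrix_rank(A.T))  # column rank of A; equals the row rank
```

Transposing swaps rows and columns, so the second call reports the column rank of A, and the two values agree, as the answer argues they must.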
The Villa il Noceto is a truly enchanting place where you can spend a relaxing holiday, immersed in the unspoilt nature that serves as the setting for historic centres of rare beauty. -Drive 17 km until you reach the exit COLLE DI VAL D'ELSA-SUD/SAN GIMIGNANO/VOLTERRA/STAGGIA. Exit the motorway. After approximately 100 meters, turn right (a sign indicates Colle di Val d'Elsa), continue another 100 meters and turn right again, continuing for about 1 km. -On the left you will find a sign showing the SS 541 for Follonica-Grosseto-Radicondoli: turn here. -Turn right and after 50 meters turn left, following the signs for Radicondoli: you are now on the SS 541. -Continue for 13 km; on your right you will find the sign indicating Radicondoli, SP 3 "delle Galleraie": turn here. -Continue for 15 km; on your left you will find the sign indicating Montalcinello-Chiusdino: turn here. -Continue for 7 km; after the sign at the entrance to the village of Montalcinello, there is an intersection (turning right takes you to Chiusdino): go straight uphill until you reach a fork. Take the road to the left, go uphill, and you will reach the piazza of the village; take the road on the right, slightly downhill, and after 100-150 m of asphalt and 200-300 m of gravel road you will reach the farmhouse. -Straight after exiting the toll booth, turn right and take the SGC (motorway) to FLORENCE-SIENA. -Continue for 38 km until the exit COLLE DI VAL D'ELSA-NORD/SAN GIMIGNANO/VOLTERRA. -This exit is recommended if you want to go through the village of Colle di Val d'Elsa for a stopover or a visit. -If you would rather avoid crossing the centre of Colle di Val d'Elsa, do not take the SGC exit COLLE DI VAL D'ELSA-NORD, but continue 2.7 km until the exit COLLE DI VAL D'ELSA-SUD, which is signed COLLE DI VAL D'ELSA-SUD/SAN GIMIGNANO/VOLTERRA/STAGGIA. -Take the A-12 south (Pisa) up to VIAREGGIO. -From VIAREGGIO take the motorway A-12/A-11 from VIAREGGIO to LUCCA.
-From LUCCA take the A-11 towards FLORENCE. -Immediately after the toll booth at PRATO-EST, take the junction connecting the A-11/A-1, with signs for BOLOGNA / MILAN and ROME / NAPLES. -Head towards ROME and exit at FIRENZE-CERTOSA. -Take the A-1 for FLORENCE; after the toll booth at FIRENZE-NORD, continue south and exit at Firenze-Certosa. -Take the A-1 for FLORENCE-BOLOGNA-MILAN. -After the toll booth FIRENZE-SUD, continue north and exit at Firenze-Certosa. -On the motorway - from the airport follow the signs to the A-11 for FLORENCE. The motorway is faster moving than the S.G.C. (super-motorway), but about 25 km longer, as it takes a wider turn passing through LUCCA-MONTECATINI-PISTOIA-FLORENCE. -After about 90-95 km, just after the exit of PRATO-EST, take the junction connecting the A-11/A-1, with signs for BOLOGNA / MILAN and ROME / NAPLES. -From the airport, turn right and take the SGC for FLORENCE. -After about 85 km, almost at the entrance to FLORENCE, exit at SCANDICCI, where there are signs for BOLOGNA / MILAN and ROME / NAPLES. -After the SCANDICCI exit, at the end of a flyover, follow the signs to the exit FIRENZE-SIGNA (about 200 meters); at the exit there are signs for BOLOGNA / MILAN and ROME / NAPLES; head south towards ROME / NAPLES and exit at FIRENZE-CERTOSA. Do the rates include taxes? No, taxes are not included in the prices. The total bill, including extra fees, will be calculated at check-out. Do rates include heating, air conditioning and electricity? Prices include electricity and heating, but there is no air conditioning. Are dinners available? Yes, dinners are available on request for a fee. Is there parking? Yes, there is ample parking. How many floors do the houses have, and is there a lift? 2 floors, but there is no lift. Are the apartments equipped with a TV? Are there international channels (satellite TV)? All apartments in Villa Il Noceto and Villa Montalcinello have satellite TV. Are the apartments equipped with direct dial telephone?
Yes, and phone calls will be charged to the account, to be paid at check-out. Can you connect to the Internet? Internet connection is already available at Villa Il Noceto, and soon Villa Montalcinello will also offer this service. The nearest airports are Pisa, Florence and Rome, about 100 kilometres away. Do you have a transfer service to and from the airport? What is the nearest bus stop? The nearest bus stop is in Montalcinello (Siena), on the bus line that leads to all the neighbouring villages. What is the nearest train station? The nearest train station is Siena, approximately 35 km from the property. Is there a pool on the property? Yes, there are two lovely pools. Are pets allowed? Sorry, but pets are not allowed. Do you accept credit cards at Villa Il Noceto? No, we only accept payment in cash, with a deposit of 50% by bank transfer at the time of booking. Is there a restaurant on the property? No, but there are several restaurants in neighbouring villages, or you can order typical Tuscan dishes by calling Maria Dora on 0577-79890 two days in advance. When booking we require a 50% deposit by bank transfer. This deposit will be retained if the customer cancels the reservation less than 30 days before the arrival date. Villa il Noceto has 2 buildings immersed in the Tuscan countryside, where you can spend unforgettable days of comfort and relaxation.
https://www.villailnoceto.com/howtoreachus.php
The ELIA Biennial Conference is one of the most important conferences in the field of European higher education in the arts. In 2018 the biennial took place in Rotterdam, and we were honored to create the visual identity of the conference, including a number of megaphone installations. The visual identity and the design installations are based upon the overall theme of this year: 'Resilience and the City: Art, Education, Urbanism'. Client: European League of Institutes of the Arts Project: ELIA Biennial Conference These designs arise from the concept 'Speaking City'. We have translated this concept into a 'speaking' visual identity, the megaphone installations and a signage system at six different locations in Rotterdam. The physical design objects were part of the program and interacted with the audience.
https://studiobureau.nl/en/elia-biennial-conference/
The Supreme Court considers prison conditions assurances for the first time in Lord Advocate v Dean [2017] UKSC 44. 36 Extradition’s Saoirse Townshend discusses the implications of this judgment for extradition practitioners. The Supreme Court allowed the Scottish Lord Advocate's appeal against a decision of the Appeal Court of the High Court of Justiciary that the extradition of Mr Dean to serve a prison sentence in Taiwan would be incompatible with his rights under Article 3 of the European Convention on Human Rights (ECHR). The Supreme Court held that the Appeal Court had applied the wrong test when it found that there were substantial grounds for believing that there was a real risk that the respondent would face treatment from fellow prisoners that was incompatible with Article 3 ECHR. The correct test was whether Taiwan had failed to provide reasonable protection against harm inflicted by those non-state agents. The Supreme Court concluded that the assurances of the Taiwanese authorities offered the respondent reasonable protection against violence by non-state actors and that the circumstances of his confinement did not entail a real risk of his being subject to treatment that infringed Article 3 ECHR. What issues did the Supreme Court consider? The Supreme Court considered two issues: (i) procedurally, whether the Supreme Court had the competence to decide the appeal, on the ground that the Appeal Court had not determined a devolution issue; and (ii) substantively, whether the Appeal Court had applied the correct test when determining the compatibility of extradition with Article 3 ECHR in the context of a threat of harm inflicted by non-state agents. What did the court decide on those issues, and why? Firstly, the Court found the challenge to the competency of the appeal “misconceived”. The Court considered the interpretation of the Extradition Act 2003 (“the 2003 Act”) and the Scotland Act 1998 when deciding this issue.
An appeal from the sheriff’s decision under section 87(1) of the 2003 Act as to whether extradition would be compatible with the Respondent’s Convention rights raises a question of the legal competence of the Scottish Government. Schedule 6 of the Scotland Act 1998, para. 1(d), includes within the definition of a “devolution issue” “a question whether a […] proposed exercise of a function of the Scottish Executive […] would be, incompatible with any of the Convention rights”. Paragraph 13(a) of Schedule 6 to the Scotland Act 1998 confers a right of appeal to the Supreme Court against a determination of a devolution issue by a court of two or more judges of the High Court of Justiciary. The Court concluded that the decision of the Appeal Court is such a determination. The question of whether the Scottish Government’s acts in seeking to extradite the Respondent are compatible with Convention rights is a devolution issue, and it had been determined properly by the Appeal Court. Secondly, the Court held that the Appeal Court, in assessing the compatibility of extradition with Article 3 ECHR, applied the wrong legal test. The test which was applied was the more general test set out in Saadi v Italy (2009) 49 EHRR 30, namely “whether substantial grounds have been shown for believing that there is a real risk of treatment incompatible with article 3”. In the Court’s view, the Appeal Court had not made the appropriate and clear distinction between the underlying threat from other prisoners (which the Appeal Court found to exist) and the conduct for which the state was responsible. In summary, the Court held that an assessment must be made, first, of whether the Taiwanese authorities are undertaking to provide the Respondent with reasonable protection against violence by third parties while he is in prison, and, secondly, if they are, whether the conditions in which he is to have such protection themselves entail an infringement of Article 3.
The Court firmly relied upon Lord Brown’s speech in R (Bagdanavicius) v Secretary of State for the Home Department [2005] 2 AC 668 (para. 24): any harm inflicted by non-state agents will not constitute Article 3 ill-treatment unless, in addition, the state has failed to provide reasonable protection. The Court applied the well-known criteria laid down in Othman v United Kingdom (2012) 55 EHRR 1 to find that the assurances given by the Taiwanese authorities were sufficient to establish that reasonable protection will be afforded to the respondent. It is perhaps somewhat surprising that no mention is made of the recent CJEU case of Aranyosi and Caldararu [2016] QB 921, despite it having been recently considered in Part 1 and Part 2 extradition cases by the High Court in Kirchanov & Ors v Bulgaria EWHC 827 (Admin) and Dzgoev v Russia EWHC 735 (Admin). However, given that the Aranyosi test is very similar to that which had been applied previously by the High Court, the decision by the Supreme Court essentially represents a continuation in the Court’s approach to Article 3 arguments arising from prison conditions. What should practitioners take from the judgment? The Court’s reaffirmation of the test from Bagdanavicius will not be new to extradition practitioners. The Court reiterated that the real focus of the court’s enquiry, when deciding whether there is an Article 3 breach in respect of threats from non-state agents, will be the protections put in place by the State. It will therefore continue to be crucial to obtain expert evidence and the usual objective evidence from internationally recognised sources on this point. However, perhaps what is more interesting about this case is that it is the first time that the Supreme Court has considered assurances in the extradition and Article 3 context.
The important points to take from the judgment are, firstly, how specific the assurances are, which may suggest more is needed in other cases; and secondly, how the Court accepted that it was sufficient for the monitoring of assurances to be provided by consular staff only. Firstly, what will be surprising to lawyers who regularly practise extradition is the specificity of the assurances given. The recent cases on Article 3 which have considered prison conditions relate to overcrowding and the material conditions of detention, e.g. cell size, time permitted outside the cell, heating, ventilation etc. Where assurances have been deemed necessary to “dispel any doubts” as to Article 3 compliant conditions, these assurances will typically be fairly generic and concern cell size. However, in this case the assurances proffered were extremely specific. The assurances undertook that the respondent would be supervised by English-speaking officers with non-violent foreign inmates; a special assessment would be done to assess the level of protection needed; the authorities would pre-screen inmates with ill intent towards him to prevent them having contact with him; he would have just under 7 square metres of personal space and nine hours per day out of his cell. Furthermore, if consular staff raised an issue concerning a breach, the Taiwanese authorities would respond to remedy it. The detail given goes far beyond what may be expected, even for a Part 2 case where the usual starting position of “mutual trust and confidence” for European Arrest Warrant cases cannot be relied upon. This may suggest that in other similar cases where prison conditions are an issue, a much more comprehensive assurance is required. Secondly, the Court found that the Appeal Court’s concerns about the absence of an international system by which prison conditions in Taiwan are monitored were unfounded.
The Court held that consular staff could protect the respondent’s rights in this regard and that, in any case, those considerations did not outweigh other factors which point towards accepting the assurance. This will be of particular interest to extradition lawyers, since there have been numerous recent cases where evidence has been provided of the authorities breaching the assurances (Romania in particular). In terms of future cases, despite the use of assurances becoming more prevalent and the evidence of compliance shakier, this case demonstrates that the court may only require a fairly weak level of monitoring if other elements of the Othman criteria are met.
https://36group.co.uk/publications/2017/7/the-supreme-court-considers-prison-conditions-assurances-for-the-first-time-in-lord-advocate-v-dean-2017-uksc-44
Astronomy and Astrophysics

Aims: Our aim is to evaluate the performance of the new neural network approach and to thoroughly compare it with the standard method commonly applied to predict far-side active regions from seismic measurements.

Methods: We have computed the predictions of active regions using the neural network and the standard approach from five years of far-side seismic maps as a function of the selected threshold in the signatures of the detections. The results have been compared with direct extreme ultraviolet observations of the far hemisphere acquired with the Solar Terrestrial Relations Observatory.

Results: We have confirmed the improved sensitivity of the neural network to the presence of far-side active regions. Approximately 96% of the active regions identified by the standard method with a strength above the threshold commonly employed by previous analyses are related to locations with enhanced extreme ultraviolet emission. For this threshold, the false positive ratio is 3.75%. For an equivalent false positive ratio, the neural network produces 47% more true detections. Weaker active regions can be detected by relaxing the threshold in their seismic signature. For almost the entire range of thresholds, the performance of the neural network is superior to that of the standard approach, delivering a higher number of confirmed detections and a lower rate of false positives.

Conclusions: The neural network is a promising approach for improving the interpretation of the seismic maps provided by local helioseismic techniques. Additionally, refined predictions of magnetic activity in the non-visible solar hemisphere can play a significant role in space weather forecasting.
https://iac.es/en/science-and-technology/publications/performance-solar-far-side-active-region-neural-detection
Previously we examined the concept of normalized graded pace (NGP), and the information it can provide regarding the relevant physiological response experienced by a runner on varied, open terrain. In other words, we established that during open field runs, due to changes in terrain, the “raw” pace reported by a GPS device will often not reflect the physiological cost of the effort, but NGP, calculated from the raw GPS data and accounting for changes in terrain and relative intensity, can be much more indicative of the actual effort. Although the NGP is a valuable tool in and of itself, it is also a component of other more sophisticated tools that can help an athlete or coach more clearly see their training progress and goals (the forest), despite often noisy and confusing individual training bouts (the trees). So, as the old adage goes, these tools will help the athlete or coach “see the forest for the trees.” One other important component of these more sophisticated tools is what is often referred to as the functional threshold; specifically, in regard to running, the functional threshold pace (FTP). The functional threshold pace (FTP) is a necessary parameter for the implementation of the more sophisticated tools that utilize NGP. That is, FTP is the criterion upon which relative fitness-based calculations are performed. The sophisticated tools I am referring to are the training stress score for running (rTSS) and Performance Manager (PMC) modeling. We will come to these later, because first we need to establish what the FTP is and how it is determined. What is FTP? So, what is the FTP? The FTP is the maximal running pace an individual can sustain for an effort of approximately 45-60 min in duration. What is the significance of the FTP, you might ask? FTP enables further calculations regarding TSS and performance modeling relative to the individual’s fitness level.
This is critical because, for optimal training and performance, pace based training metrics need to be expressed relative to a given athlete’s own ability, not the ability of some Olympic or professional athlete, or the athlete’s training partner, etc. The FTP is analogous to a concept from the Exercise Science literature referred to as the Maximal Lactate Steady State velocity (MLSSv; V. Billat, Bernard, Pinoteau, Petit, & Koralsztein, 1994; V. Billat, Sirvent, Lepretre, & Koralsztein, 2004), which is also closely associated with the lactate threshold (LT). As a little background, in the laboratory setting, the MLSSv is determined by performing a series of shorter efforts (e.g. ~20 min) and measuring blood lactate during the efforts at approximately 6 min into the effort and again near the end. As an example, an athlete might run four trials at 8, 9, 10 and 11 miles per hour (7:30, 6:40, 6:00 and 5:27 min/mi, respectively), with blood lactates recorded at 6 and 15 min in each trial.

|Pace (min/mi)|Lactate @ 6 min (mmol/l)|Lactate @ 15 min (mmol/l)|Difference in Lactate @ 15 & 6 min (mmol/l)|
|7:30|1.5|1.4|-0.1|
|6:40|1.6|1.6|0|
|6:00|2.0|2.2|0.2 *MLSS|
|5:27|2.5|5.0|+2.5|

As you can see, the MLSSv is the fastest pace that can be maintained for this relatively short time without an observable accumulation of blood lactate. It so happens that MLSSv can typically be sustained for between 45 and 70 min; or, around an hour. Further, the MLSSv is strongly influenced by the lactate threshold (LT), so that as MLSSv increases (gets faster) so does LT, and vice versa (V. L. Billat, Sirvent, Py, Koralsztein, & Mercier, 2003). So, the MLSS is a proxy for the LT, and is in some respects more practical in that it uses a functional test to determine a sustainable pace. One can see how this is related to what is being called the FTP. One practical limitation to the determination of MLSSv is that it requires several test sessions.
Additionally, it requires the drawing of blood and lactate determination. Fortunately, it really isn’t necessary to determine the FTP in this manner. Probably the most accurate (and practical) approach is simply to perform an effort in the one hour time frame. So, we can derive the FTP from a functional, practical test as opposed to performing a more laborious, impractical test. Again, credit should go to Dr. Andrew Coggan for popularizing this approach as applied to his NP/TSS system devised for cycling. For those of you who may have used this system, it may seem quite obvious to perform a one hour, or thereabouts, time trial to determine your FTP for running. For those of you who don’t come from a cycling background and aren’t familiar with the approach, it may not seem like a desirable approach to determine your FTP. In that case, there are more than a couple of approaches to skin the proverbial cat and obtain a valid FTP. It should be noted that there are several reasons we refer to this measure as the FTP as opposed to MLSSv: 1) we do not typically need to measure blood lactate to actually determine FTP; 2) we can use “functional” or practical performances in lieu of formalized testing; 3) to maintain continuity with the concept of FTP as it applies to the NP/TSS system in cycling.

Practical Approaches to Determine a FTP

1. Actual performance from a recent race or hard training run of 10-15 km.
a. If 10 km time was greater than 45 min, use 10 km.
b. If 10 km time was less than 45 min, use 15 km or half marathon.

This is likely the most “valid” measure of the FTP because this is essentially the definition of FTP. For practical purposes, races will typically give us our best data compared to training sessions. Since 10 km is a common distance for running races, it’s a practical benchmark to use for FTP.
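The race-to-pace arithmetic behind these benchmarks can be sketched in a few lines of Python (the helper name and the sample race times are mine, not part of the article):

```python
# Hypothetical helper: convert a race result into a min/mi pace string,
# as done informally in the text when using a race as an FTP benchmark.
def pace_min_per_mile(time_min: float, distance_km: float) -> str:
    miles = distance_km / 1.609344        # kilometres to statute miles
    pace = time_min / miles               # decimal minutes per mile
    m, s = divmod(round(pace * 60), 60)   # split into whole min:sec
    return f"{m}:{s:02d}"

# A 10 km run in 47:30 took longer than 45 min, so per the guideline
# above it can serve directly as an FTP benchmark.
print(pace_min_per_mile(47.5, 10.0))       # 7:39 min/mi

# The 5 km example used later in the article: 18:22 for 5 km.
print(pace_min_per_mile(18 + 22/60, 5.0))  # 5:55 min/mi
```

The second call reproduces the 5:55 min/mi figure quoted in the article's 5 km example, which is a useful sanity check on the conversion.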
Unfortunately, (or fortunately, depending on your perspective) some athletes can run a 10 km faster than 45 min, and since the MLSSv generally corresponds to efforts longer than 45 min, around an hour, if your 10 km time indicates a pace faster than MLSSv, then you need to use a longer effort (e.g. 15 km) as your benchmark. The 15 km is not nearly as common, so, in this case, another option might simply be to use a hard training run that lasts 45-70 min in duration and use the average pace if on the flats, or NGP if on hills, for that effort. A key thing to note for all of these approaches is that, once you have established a baseline using one approach, it is desirable to stick with the same approach in successive attempts to establish FTP. This is because not only is FTP used to establish training intensities and TSS, it also serves as a measure of progress, and so consistency is important when assessing progress. To determine if your training program is working, you want to compare apples to apples, and that means assessing your FTP under the most similar conditions possible in subsequent assessments. So, if using hard training runs, it would probably be best to stick with a standard duration (e.g. 60 min), or a standard course (e.g. 12 km) that takes approximately an hour to complete. Alternatively, you can do essentially the same thing by choosing your best performances for a given time frame, say 45 min, that would correspond to a 10 km effort. In this case you can use the NGP calculated from open field runs as your testing benchmarks, and use these efforts to establish FTP on a frequent basis. In Figure 1a, you can see a Periodic chart of Mean Maximal 45 min NGP for all running efforts recorded in the data base. In this case, they are being plotted in mph to facilitate easier identification of the fastest efforts for this duration (high values). The best efforts, highlighted by the circles, are identified by arrows.
In Figure 1b, the same data has been plotted in min/mi pace units, with the connecting lines removed, since they can confuse identification when working with min/mi plots because the lower values are faster. In this figure, the lower horizontal line corresponds to a 7:20 min/mi pace, and the upper horizontal line corresponds to a 7:48 min/mi pace. So, the fastest efforts for this duration all lie within this range, except for one value labeled “outlier”. It should be noted that no single value, especially one drastically different from other commonly measured values, should be given great weight. Due to the nature of GPS devices, interference, and altitude and/or speed measurement error, pace values may occasionally be inflated. Regardless, if an athlete has a one-off exceptional performance, it is generally not advisable to base training loads and progress assessment on such unrealistic results. The athlete is being set up for failure. That being said, that particular “outlier” is likely a real performance (i.e. not an anomaly due to measurement error) based on PMC modeling, which we will discuss in a later installment.

Figure 1a. WKO+ Chart of Mean Max 45 min NGP plotted in mph so fastest paces are easily identifiable.
Figure 1b. WKO+ Chart of Mean Max 45 min NGP plotted in min/mi. Lowest values are fastest.

Of course, FTP determination should be more exact than simply saying “somewhere between 7:20 and 7:48 min/mi pace”; that range is simply too large. Within the highlighted areas though, the five fastest paces for 45 min based on the NGP calculation were:

|Date|Pace (min/mi)|
|4/9|7:30|
|7/4|7:34|
|7/18|7:34|
|9/9|7:38|
|9/19|7:38|

So, it can be seen that at no point was FTP faster than 7:30, but if based on a 45 min effort, FTP could have been established as 7:34 min/mi or slower in July and 7:38 min/mi or slower in September. (These are with the exception of the “outlier”.)
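As a small illustration (the helper names are mine, not from WKO+), the five best 45 min efforts in the table can be reduced to a single benchmark pace in a few lines:

```python
# Sketch: turn the five fastest 45 min NGP efforts into an FTP benchmark.
def parse_pace(p: str) -> int:
    """'7:30' -> seconds per mile."""
    m, s = p.split(":")
    return int(m) * 60 + int(s)

def fmt_pace(sec: int) -> str:
    """Seconds per mile back to a 'm:ss' string."""
    return f"{sec // 60}:{sec % 60:02d}"

# The five best efforts from the table above, keyed by date.
efforts = {"4/9": "7:30", "7/4": "7:34", "7/18": "7:34",
           "9/9": "7:38", "9/19": "7:38"}

fastest = min(parse_pace(p) for p in efforts.values())
print(fmt_pace(fastest))  # 7:30, the fastest sustained 45 min NGP
```

In practice, and in the spirit of the article's caution about one-off results, one would use the fastest pace supported by several similar efforts in the same training block rather than a single best value.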
For this particular runner, there aren't many consistent-effort runs of 60 min duration that can be used for FTP determination, so in this case 45 min is a practical duration to use consistently to assess and establish FTP. If hard training runs of 45-60 min are a regular staple of one's training program, NGP values from these runs can be used to establish FTP without formalized testing. 2. Calculated 10 km or 15 km pace from shorter-distance efforts, in particular a 5 km performance used in conjunction with Daniels' tables to determine pace for an effort of 50-65 min. a. Because all efforts longer than a few minutes are strongly influenced by the LT, it is possible to use the velocity vs. duration relationship established by Daniels for running to estimate FTP from much shorter efforts. Bear in mind that the shorter the effort (e.g. 1500 m), the greater the influence of anaerobic metabolism/capacity, and the greater the likelihood of error. Events such as the 5 km should provide good estimates of FTP, since for durations longer than 15-20 min the anaerobic contribution is much smaller, though some potential for error remains. As an example, say an individual has just completed a 5 km race and this is the longest distance for which they have good data. They finished in 18:22, a pace of 5:55 min/mi. Using that pace and Jack Daniels' tables, their vDot is 55, and they could likely run 15 km in 58:33. That 15 km time corresponds to a 6:17 min/mi pace for FTP. There will likely be more error in this calculation than in using an actual 10-15 km effort, but the error will be repeatable: as 5 km pace improves, FTP will also improve, and this can be used to re-establish FTP on a regular basis. 3. Performance of an MLSSv determination a.
Because MLSSv and FTP are essentially synonymous, determination of the MLSSv should provide a good estimate of FTP. 4. From the results of a lactate threshold test a. Since the MLSS and the FTP both occur very close to the LT, and all three parameters should move in concert as a result of training adaptations (in other words, if one goes up, all three go up, and vice versa), LT determination should provide an athlete with a good estimate of FTP. Of the above approaches for determining FTP, methods 1 and 2 are the most desirable and practical. Methods 3 and 4 are laborious, can be costly, and are not practical to perform on a regular basis. For the purposes of fitness assessment, it is valuable to obtain frequent benchmarks of fitness, so the more practical approaches (methods 1 and 2) will likely be most effective. On that note, although the determination of FTP is important for establishing training levels, rTSS, and performance modeling, it is also an effective means of assessing training-plan effectiveness. Since FTP is really an indirect measure of performance related to the lactate threshold (LT), it provides insight into how effectively a training program is developing the LT. Want to learn more about FTP and other important TrainingPeaks metrics? Check out our online course for everything you need to know to take your training to the next level.
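Returning to the 5 km example under method 2: Daniels' tables are not reproduced here, but Riegel's power-law race predictor, T2 = T1 · (D2/D1)^1.06, is a common stand-in for this kind of extrapolation and lands within about 20 seconds of the Daniels-derived 58:33. A sketch (the 1.06 exponent is Riegel's assumption, not Daniels' method):

```python
# Riegel's power-law race predictor -- a stand-in for Daniels' vDot tables,
# which are not reproduced here. 1.06 is Riegel's commonly cited endurance
# exponent; Daniels-derived values will differ slightly.

def predict_time(t1_s, d1_km, d2_km, exponent=1.06):
    """Predicted time (s) at distance d2_km from a t1_s effort at d1_km."""
    return t1_s * (d2_km / d1_km) ** exponent

t_5k = 18 * 60 + 22                 # the article's 18:22 5 km
t_15k = predict_time(t_5k, 5, 15)   # ~58:51 vs. the Daniels-derived 58:33
pace_min_per_mi = (t_15k / 60) / (15 / 1.609344)
m, s = divmod(round(t_15k), 60)
print(f"{m}:{s:02d}", round(pace_min_per_mi, 2))  # prints: 58:51 6.31
```

Either lookup gives a repeatable mapping from 5 km pace to an FTP estimate, which is the property the article relies on.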
https://www.trainingpeaks.com/learn/articles/determining-functional-threshold-pace-ftp/
The Instructional Coach is highly aligned with the larger school vision and mission and plays the crucial role of developing the majority of our instructional staff. The Instructional Coach serves as part of the Instructional Team and is responsible for bringing evidence-based practices into classrooms by working with and supporting teachers and administration, with the goal of building teacher capacity that increases student engagement and improves student achievement. He or she works as a colleague with classroom teachers to support and enhance student outcomes. To meet this purpose, the Instructional Coach will provide personalized support based on the goals and identified needs of individual teachers. The Instructional Coach reports directly to the Director of Curriculum and Instruction.

Role Mission: The Instructional Coach increases the impact of others by observing and providing feedback, facilitating PD, supporting the school's data-driven instruction plan, and partnering with the school administrative team to ensure scholars achieve academic success.

Expected Outcomes
- 75% of coached teachers respond "strongly agree" or "somewhat agree" to the statements about their Instructional Coach on the monthly survey
- Coached teachers demonstrate a 10% increase in achievement or growth as measured by Interim Assessments
- Coached teachers score an average of 4 in all domains of the TEAM evaluation rubric

Critical Competencies
Core Value Competencies (GP core values listed first)
- Responsibility – Accepts ownership for the actions and outcomes of myself and my team.
- Integrity – Does what is right, even when no one is watching. Earns trust and maintains confidence. Speaks plainly and truthfully.
- Courage – Perseveres when the going gets tough. Faces the difficulty of our challenge in spite of fear or frustration.
- Commitment – Does what it takes to reach my goals and be a part of the Grizzlies team.
Upholds the mission and supports the team even in the face of adversity, challenge and disappointment.
- Excellence – When better is possible, good is not enough. Constantly strives to be the best. Embraces self-growth as the pathway to star in our role, elevate our team, and achieve our goals.
- Grit – Demonstrates persistence and willingness to go the distance to get something done. Works relentlessly, which often includes long hours, to accomplish goals.
- Efficiency – Able to produce significant output with minimal wasted effort.
- Strategic Thinking/Visioning – Able to see and consistently communicate the big picture in an inspiring way.
- Organization and Planning – Plans, organizes and schedules in an efficient, productive manner. Focuses on key priorities.
- Flexibility/Adaptability – Adjusts quickly to changing priorities and conditions. Copes effectively with complexity and change.
- Calm Under Pressure – Maintains stable performance when under heavy pressure or stress.
- Analytical Skills – Able to structure and process qualitative and quantitative data and draw insightful conclusions from it that inform next steps and/or action plans.
- Attention to Detail – Does not let important details slip through the cracks or derail a project.
Responsibilities

Leadership
● Articulate and model the school's values to students, families, staff, district partners and the community
● Motivate and lead instructional team members to yield the highest academic performance possible from all students
● Hold high expectations for all students and team members
● Communicate regularly with the DCI and, upon request, the ED concerning student achievement, curriculum elements, professional development, and resources that teachers may need to improve their instruction

Curriculum Design
● Serve as an instructional leader, providing input on effective instructional curriculum, lessons, and student assessment (internal and external)
● Ensure that every facet of the instructional program is explicitly aligned with State of Tennessee and internal standards
● Alongside the DCI and DoSS, evaluate, make recommendations for the purchase of, distribute, and provide support for intervention materials

Lesson Plans | Delivery of Instruction | Assessments
● Support the DCI and, when applicable, the DoSS in ensuring that lesson plans are designed to meet the unique needs of the students in each classroom
● Ensure that learning activities in classrooms are aligned with established learning outcomes for the school, district or state
● Ensure that high-quality instruction takes place from bell to bell in each classroom at Grizzlies Prep
● Ensure that all instruction at Grizzlies Prep engages every student and advances learning
● Support Grizzlies Prep team members in maintaining a classroom culture with few to no behaviors that interrupt teaching and learning
● Lead by example; be ready to model and teach a class when needed
● Ensure that varied and multi-modal assessments are executed as scheduled and in an efficient manner that provides adult understanding of student mastery
● Implement effective internal assessment systems to monitor academic progress of students throughout the year
● Support the DCI in ensuring that each internal assessment is rigorous and specifically aligned both to state and internal standards and to the objectives taught
● Develop modes to invest students in the various assessment windows

Observation and Feedback
● Give consistent and constructive feedback to an assigned cohort of teachers to facilitate student growth and academic achievement
● Provide feedback regarding instructional activities and practices to the LT upon request
● Attend, and lead upon request, data meetings; support teachers; and facilitate positive faculty dynamics

Orientation and Professional Development
● Collaborate with members of the leadership team and relevant staff members in planning and implementing annual summer staff orientation
● Plan and facilitate regular 1:1 or small-group meetings with a cohort of teachers to discuss data, practices, and actions that lead to improved student learning
● Upon request of the DCI, facilitate regular instructional staff meetings
● Seek and share curriculum best practices from other schools and professional journals

Data
● Implement and oversee data collection systems for compliance purposes
● Analyze all internal and external assessments for LT decision making
● Collaborate with the DoSS and DCI concerning student achievement and student assessment

Memphis Grizzlies College Preparatory Charter School Instructional Coach

Qualifications:
- At least 3 years of K-12 teaching experience with proven academic results
- At least 2 years of implementing RTI2a programming
- Track record of
managing and leading adults to achieve ambitious goals - Commitment to frequently using data to inform high quality decision making and drive performance - Interventionist license or 2 years of Interventionist experience, preferred - Bachelor’s degree required, Master’s degree preferred Physical Requirements: - Prolonged periods standing and actively monitoring student behavior while teaching. - Must be able to lift up to 15 pounds at times. - Must be able to assist students who have physical impairments. - Must be able to traverse classrooms and adjust height, by bending or leaning, in order to teach each child. - Must be able to facilitate classroom transitions by walking through hallways and up and down stairs with students. - Prolonged periods sitting and working on computer to complete necessary paperwork. Americans With Disabilities Act Compliance: Grizzlies Prep is an Equal Opportunity Employer. ADA requires Grizzlies Prep to provide adequate accommodations to qualified persons with disabilities. Prospective and current employees are encouraged to discuss ADA accommodations with management.
https://jobs.teach901.com/job/grizzprep-instructional-coach/
Background: World University Service of Canada (WUSC) and Canadian Leaders in International Consulting (CLIC) have been jointly awarded a grant from Global Affairs Canada to implement the project "Bolstering Reconstruction in Iraq Through Development, Growth and Employment" (BRIDGE). The Project's goal is to improve the employment status of male and female youth and women in selected sectors and governorates of Iraq. The two key outcome results of the Project are: improved delivery of gender-responsive and market-relevant vocational training; and improved employability of targeted female and male youth and women. BRIDGE will address the need for short-term stabilization of Iraq by helping beneficiaries integrate into the labour market. It will also support the ongoing reform of the Vocational Education and Training sector. Nature and Scope: The Vocational Guidance and Counselling Officer will work closely with the BRIDGE Vocational Guidance & Counselling Manager and with the vocational training stakeholders and partners of BRIDGE, including at the MoLSA, Governorate and Vocational Training Center (VTC) levels in Iraq, to establish VG&C units that provide VG&C services and to set the enrollment process and selection criteria for potential trainees in the VTCs. The Officer will also support VG&C activities in other BRIDGE locations, with flexibility to travel to other cities and governorates. Key Areas of Responsibility. Strategic ● Under the supervision of the VG&C Manager, act as the focal point for all VG&C matters under WUSC's action: carry out program activities and planning, provide day-to-day oversight and direction to BRIDGE employees in the VG&C department, and design strategy and set goals for growth in accordance with the organization's policies and procedures.
● In coordination with MOLSA and governmental entities, assess, establish and prioritize areas of capacity development for Vocational Guidance and Counselling activities in Vocational Training Centers across Iraq. ● Ensure Vocational Guidance and Counselling program implementation at the VTCs in line with WUSC regulations and procedures, as well as adherence to the developed Vocational Guidance and Counseling Best Practices guidebook. ● Provide input and assist in developing and setting up comprehensive guidelines and protocols for vocational guidance and counselling functions in coordination with key stakeholders. ● Oversee program management and operations related to the activities, including project work-plan preparation and planning, financial statement and budget control, and progress monitoring, and, where necessary, identify corrective measures to overcome operational constraints. ● Support the introduction of the Guidance & Counselling function to other VTCs as part of project expansion. ● Participate in training provided by WUSC. Capacity Building, Training, Coaching ● Conduct a capacity development needs assessment regarding Vocational Guidance and Counseling at MOLSA, the targeted Governorates and VTCs. ● Meet with individuals at the three levels of project stakeholders (MOLSA, Governorates and VTCs) who have responsibility for guidance and counselling, explaining BRIDGE's approach and intent. ● Train and coach key MOLSA and VTC staff on a gender-sensitive vocational guidance and counselling approach within the VG&C units, with regular follow-up. ● Conduct training for VTC management and VG&C staff on student enrollment (Enrollment Tools) in line with VG&C work-plan implementation. ● Coach and monitor the work plan of the trained key VG&C staff in the selected VTCs who are responsible for offering vocational guidance and counselling to youth and women.
● Coach and train field VG&C staff of BRIDGE in all aspects of the job and proactively develop their skills to understand and utilize appropriate tools and services to design and implement quality programming, adhering to procedures, guidelines and work requirements. ● Remain flexible to conduct training sessions and other activities at other BRIDGE partner locations. Activity Implementation Support ● Support the Vocational Guidance & Counselling function of the VTCs as part of delivering the project's cohort, which includes marketing, enrollment and VG&C service delivery to trainees, including in innovative ways. ● Ensure proper reporting from the Vocational Guidance & Counselling Units at the VTCs, including the identification of examples of successes and impacts achieved. ● In coordination with the BRIDGE communication officer, draft proposals for communication materials for the VG&C units, and conduct training and coaching for the management staff of the participating VTCs, VG&C staff and media staff in gender-sensitive communication and outreach, in attendance of the MOLSA media focal point, with regular follow-up on their work plan. ● Remain flexible to provide coaching and support to other BRIDGE partner locations. Reporting & Communication ● Prepare Quarterly Project Implementation Reports and any other periodic and ad-hoc reports, and ensure their timely submission to project management. ● Communicate regularly with the BRIDGE team on component progress and issues. ● Discuss updates and international best practice with the project's colleagues and advisors and Project Management. More Information - Address: Basrah - How to apply: WUSC is an equal opportunity employer. We welcome and encourage applications from women and diverse ethnic and religious groups. If you'd like to apply for this position, please send your resume and cover letter to ([email protected]) with "Regional Vocational Guidance and Counseling Officer" in the subject line.
There is a probation period of three months for the successful applicant. Only ordinary Iraqi residents will be considered and the candidates selected for an interview will be contacted.
https://iqjscout.com/jobs/regional-vocational-guidance-and-counseling-officer/
This course will cover two religions in depth and four Philosophy & Ethics topics, looking at various religious and non-religious views. It is not necessary to have any particular religious beliefs to study this course. Taking the exams for this course is optional. We will cover Part 1: Christianity: - Key beliefs - Jesus Christ and salvation - Worship and festivals - The role of the church in the local and worldwide community Hinduism: - Ideas about the nature of God and existence - Beliefs about the nature of human life - Worship and festivals - Lifestyle Part 2: Using Christian, Hindu and secular beliefs we will examine: The existence of God and revelation: - Philosophical arguments for and against the existence of God - The nature of the divine and revelation Religion and Life: - The origins and value of the universe - The origins and value of human life, including the concept of the sanctity of life; abortion; euthanasia; beliefs about death and the afterlife Relationships and Families: - Sex, marriage and divorce - Families and gender equality - The roles of men and women - Gender equality - Gender prejudice and discrimination Religion, Crime and Punishment: - Religion, crime and the causes of crime - Religion and punishment Practicalities and financial information: - The course will run over 44 weeks between 30 April 2018 and 15 April 2019. This includes 4 revision sessions. - Sessions take place Mondays 1.30-4.00pm and are held at The Old School Business Centre, Church Drive, Nottingham NG5 6LD, in Studio 6. - Course Dates: 30 April-4 June; 18 June-6 August; 27 August-22 October; 5 November-17 December; 7 January-18 February; 4 March-15 April - Students will require a ring binder and a set of 10 file dividers. - All students will require two course text books.
These are: – AQA GCSE Religious Studies A: Christianity ISBN-13: 978-0-19-837033-8 – AQA GCSE Religious Studies A: Hinduism ISBN-13: 978-0-19-837034-5 You can supply these books yourself (£18.99 each from Amazon), or you can purchase them through me (buying them through me will add £3 to each monthly payment). The first book will be required for the start of the course and the second by the first week of July. - Payment for the course: – £40 per month April 2018 – April 2019 (13 payments) if you are supplying your own textbooks. – £43 per month April 2018 – April 2019 (13 payments) if you are purchasing your textbooks through me. - The intention is for students to sit the exams in summer 2019. Nottingham College have agreed to facilitate this, at a cost of £100 per student (2 exams @ £50 each). You can find an exam centre closer to you if you prefer. Fees at alternative centres may vary. - If a student decides not to continue with the course, you will not be liable for monthly fees beyond the month in which they stop. However, if you have purchased books through me, you will still be liable for the remainder of the cost of the books (£3 x the number of months remaining on the course).
https://earthtones.org.uk/home-education-gcse-religious-studies-course/
Machine-Learning Techniques Assist Data-Driven Well-Performance Optimization – Journal of Petroleum Technology Despite being proven a cost-effective surveillance initiative, remote monitoring is still not adopted in more than 60% of oil and gas fields around the world. Understanding the value of data through machine-learning (ML) techniques is the basis for establishing a robust surveillance strategy. In the complete paper, the authors develop a data-driven approach, enabled by artificial-intelligence methodologies including ML, to find an optimal operating envelope for gas-lift wells. Real-Time Well-Performance Optimization Wellsite Measurement and Control. Flow Tests. Flow tests are sporadic measurements of multiphase rates and the associated flowing pressure and temperature, collected at various points of the production system, from bottomhole to separator conditions. Flow tests are also known as well tests; however, the authors use the term "flow test" in this paper to avoid confusion with well testing in the sense of pressure-transient tests, including temporary shut-in pressure buildups (for producers) and pressure falloff tests (for injectors). Normally, a well has only limited data points from past flow tests (i.e., fewer than 50 valid flow tests in a period of 5-10 years). These data are the basis for creating ML models. Continuous Monitoring. Every well should have adequate instrumentation, and its supporting infrastructure should include reliable power supply, minimum-latency telemetry, and desktop access to production parameters.
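The paper's actual ML workflow is not described here in enough detail to reproduce. As a loose illustration of the idea of learning an operating envelope from sparse flow tests, the sketch below fits a simple performance curve to a handful of synthetic test points and reads off the optimum injection rate. Everything in it, data and model alike, is hypothetical; the authors' models are certainly richer than a quadratic fit:

```python
# Illustrative only: fit a concave gas-lift performance curve y = a*x^2 + b*x + c
# to sparse "flow test" points, then take the vertex as the optimal injection
# rate. Synthetic data; not the paper's method.

def fit_quadratic(xs, ys):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    A = [[s(4), s(3), s(2)],
         [s(3), s(2), s(1)],
         [s(2), s(1), n]]
    r = [sum(y * x ** 2 for x, y in zip(xs, ys)),
         sum(y * x for x, y in zip(xs, ys)),
         sum(ys)]
    # Gaussian elimination with partial pivoting.
    for i in range(3):
        piv = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        r[i], r[piv] = r[piv], r[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            r[k] -= f * r[i]
    coef = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        coef[i] = (r[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c

# Hypothetical flow tests: injection rate (MMscf/d) vs. oil rate response.
inj = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
oil = [-0.5 * x ** 2 + 4.0 * x + 10.0 for x in inj]  # synthetic concave response

a, b, c = fit_quadratic(inj, oil)
optimal_injection = -b / (2 * a)  # vertex of the fitted parabola
print(round(optimal_injection, 2))
```

With fewer than 50 valid tests per well, a low-parameter model of this sort is about all the data will support, which is presumably part of why the paper emphasizes continuous monitoring.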
https://justartificialintelligence.com/machine-learning-techniques-assist-data-driven-well-performance-optimization-journal-of-petroleum-technology/
Abstract
Bidirectional conversion of electrical and optical signals lies at the foundation of the global internet. Such converters are employed at repeater stations to extend the reach of long-haul fiber optic communication systems and within data centers to exchange high-speed optical signals between computers. Likewise, coherent microwave-to-optical conversion of single photons would enable the exchange of quantum states between remotely connected superconducting quantum processors, a promising quantum computing hardware platform. Despite the prospects of quantum networking, maintaining the fragile quantum state in such a conversion process with superconducting qubits has remained elusive. Here we demonstrate the conversion of a microwave-frequency excitation of a superconducting transmon qubit into an optical photon. We achieve this using an intermediary nanomechanical resonator which converts the electrical excitation of the qubit into a single phonon by means of a piezoelectric interaction, and subsequently converts the phonon to an optical photon via radiation pressure. We demonstrate optical photon generation from the qubit with a signal-to-noise greater than unity by recording quantum Rabi oscillations of the qubit through single-photon detection of the emitted light over an optical fiber. With proposed improvements in the device and external measurement set-up, such quantum transducers may lead to practical devices capable of realizing new hybrid quantum networks, and ultimately, distributed quantum computers.
https://authors.library.caltech.edu/102572/
Recall bias is of particular concern in retrospective studies that use a case-control design to investigate the etiology of a disease or psychiatric condition. Selection Bias in Cohort Studies Less Likely. Among initially selected subjects, selection bias in a cohort study is less likely to occur than in case-control or cross-sectional studies. We conclude that selection bias in case-control selection of subjects from rare disease registries occurs and that it can be overcome through case-control matching to minimize bias. Therefore, application of this technique permits the study of treatment outcomes or natural history within rare disease registries. Recall bias is a major issue in studies that rely on self-reporting, such as retrospective cohort studies. It's a type of information bias, and cannot be corrected after a study has been completed. You can eliminate or at least reduce sources of bias and confounding by carefully designing your data project or study. The following outlines some of the major sources of bias and confounding and how to overcome these in your project design.
http://tomatosherpa.com/western-australia/how-to-avoid-selection-bias-in-case-control-study.php
Natural History of Epstein-Barr Virus Replication and Viral Load Dynamics after Alemtuzumab-Based Allogeneic Stem Cell Transplantation. Marzolini MAV., Wilson AJ., Sanchez E., Carpenter B., Chakraverty R., Hough R., Kottaridis P., Morris EC., Thomson KJ., Peggs KS. Epstein-Barr virus (EBV) load monitoring after allogeneic hematopoietic stem cell transplantation (HSCT) enables earlier detection of EBV replication and often serves as a trigger for preemptive therapies aimed at reducing EBV-related diseases. Our institutional strategy is to treat patients with clinical signs of EBV-related disease accompanied by a rising viral load, rather than to intervene based solely on viral load. This affords an opportunity to study the natural history of EBV replication and to assess whether our strategy reduces overtreatment without compromising outcomes. The objectives of the present study were to assess the natural history of untreated EBV replication in patients who underwent an alemtuzumab-based allogeneic HSCT and to examine whether our clinical strategy reduced overtreatment without compromising patient outcomes. In this retrospective single-center observational study of 515 consecutive patients (age ≥18 years) undergoing T cell-depleted allogeneic HSCT incorporating alemtuzumab, patients underwent surveillance monitoring for EBV by quantitative PCR in the peripheral blood at least weekly up to 100 days post-transplantation and longer if they remained on immunosuppressive therapy. The cumulative incidence of EBV detection and EBV-related disease were assessed. Among the 515 patients, 192 had EBV DNA detectable on ≥1 occasion, with a cumulative incidence of 35.8% (31.8% to 40.4%), although this remained below the limit of quantification in 93 patients. The median time to first detection was 89.5 days (range, 0 to 2254 days). 
The incidence was higher in recipients of sibling donor transplants (45.4% versus 30%; P = .00021) compared with recipients of unrelated donor transplants. Twenty patients developed EBV-related disease (cumulative incidence, 3.9%). Two patients had immunosuppression reduction alone, 18 received rituximab, and 5 required additional therapies. Five patients died from post-transplantation lymphoproliferative disorder, all of whom had received rituximab. The positive predictive value of EBV load for disease was higher in the unrelated donor cohort but remained <75% regardless of EBV threshold (57.1% to 72.7%). The cumulative incidence of EBV-related disease in our study (3.9%) is comparable to that reported in other studies incorporating alemtuzumab, and our clinical strategy reduced overtreatment in this patient population. PCR-based surveillance strategies have limitations, as reflected in the relatively low sensitivity of the assay coupled with the low positive predictive value, which may influence the potential choice of a threshold for preemptive intervention. We conclude that it remains unclear whether treatment based on a rising EBV viral load alone provides superior overall results to treatment based on the development of clinical signs of EBV-related disease in the context of a rising viral load.
https://www.rdm.ox.ac.uk/publications/1185482
New activity/unrest was observed at 6 volcanoes from December 3 - December 9, 2014. Ongoing activity was observed at 14 volcanoes. New activity/unrest: Asosan, Kyushu (Japan) | Chirinkotan, Kuril Islands (Russia) | Fogo, Cape Verde | Sinarka, Shiashkotan Island (Russia) | Turrialba, Costa Rica | Zhupanovsky, Eastern Kamchatka (Russia) Ongoing activity: Aira, Kyushu (Japan) | Ambrym, Vanuatu | Bardarbunga, Iceland | Chirpoi, Kuril Islands (Russia) | Kilauea, Hawaiian Islands (USA) | Kuchinoerabujima, Ryukyu Islands (Japan) | Mayon, Luzon (Philippines) | Ontakesan, Honshu (Japan) | Popocatepetl, Mexico | Reventador, Ecuador | Sheveluch, Central Kamchatka (Russia) | Shishaldin, Fox Islands (USA) | Sinabung, Indonesia | Suwanosejima, Ryukyu Islands (Japan) New activity/unrest Asosan, Kyushu (Japan) 32.884°N, 131.104°E, Summit elev. 1592 m JMA reported that, based on seismicity and infrasound data, the eruption from Asosan’s Nakadake Crater that began on 25 November continued during 1-8 December; inclement weather often prevented observations of the crater. A webcam recorded incandescent material being ejected from the crater at night. A very small amount of ashfall was observed at Sensui Gorges, 2 km NE on 3 December. The Alert Level remained at 2 (on a scale of 1-5). Geologic summary: The 24-km-wide Asosan caldera was formed during four major explosive eruptions from 300,000 to 90,000 years ago. These produced voluminous pyroclastic flows that covered much of Kyushu. The last of these, the Aso-4 eruption, produced more than 600 cu km of airfall tephra and pyroclastic-flow deposits. A group of 17 central cones was constructed in the middle of the caldera, one of which, Nakadake, is one of Japan's most active volcanoes. It was the location of Japan's first documented historical eruption in 553 AD. The Nakadake complex has remained active throughout the Holocene. 
Several other cones have been active during the Holocene, including the Kometsuka scoria cone as recently as about 210 CE. Historical eruptions have largely consisted of basaltic to basaltic-andesite ash emission with periodic strombolian and phreatomagmatic activity. The summit crater of Nakadake is accessible by toll road and cable car, and is one of Kyushu's most popular tourist destinations. Chirinkotan, Kuril Islands (Russia) 48.98°N, 153.48°E, Summit elev. 724 m SVERT reported that satellite images of Chirinkotan showed a thermal anomaly during 2-5 and 7 December, and diffuse gas-and-steam emissions on 3 December. Cloud cover obscured views on the other days during 1-8 December. The Aviation Color Code remained at Yellow. Geologic summary: The small, mostly unvegetated 3-km-wide island of Chirinkotan occupies the far end of an E-W-trending volcanic chain that extends nearly 50 km west of the central part of the main Kuril Islands arc. Chirinkotan is the emergent summit of a volcano that rises 3000 m from the floor of the Kuril Basin. A small 1-km-wide caldera about 300-400 m deep is open to the SE. Lava flows from a cone within the breached crater reached the north shore of the island. Historical eruptions have been recorded at Chirinkotan since the 18th century. Fresh lava flows also descended the SE flank of Chirinkotan during an eruption in the 1880s that was observed by the English fur trader Captain Snow. Fogo, Cape Verde 14.95°N, 24.35°W, Summit elev. 2829 m According to news articles the eruption from Fogo's Pico cone inside the Cha Caldera continued during 3-9 December. The eruption became more intense on 6 December with lava flows traveling at a rate of 30 meters per hour. By that afternoon 20 houses in the N half of Portela had been destroyed, and authorities ordered a complete evacuation of the area. The third access road that allowed for residents to retrieve property was being covered by lava. Later that day lava reached the town of Bangaeira. 
More than half of Bangaeira was inundated by the very fluid lava and by the evening only a few houses remained. Flights from the island to a few other locations were cancelled due to ash emissions. By 8 December about 90% of Bangaeira and 95% of Portela were overtaken by the flows which, as the article noted, destroyed 100 years of town history. After moving through the towns the lava-flow front was about 300 m wide. By 8 December the almost 1,500 residents of the caldera had been evacuated, with 848 of them in shelters. Lava-flow advancement slowed on 9 December. Geologic summary: The island of Fogo consists of a single massive stratovolcano that is the most prominent of the Cape Verde Islands. The roughly circular 25-km-wide island is truncated by a large 9-km-wide caldera that is breached to the east and has a headwall 1 km high. The caldera is located asymmetrically NE of the center of the island and was formed as a result of massive lateral collapse of the ancestral Monte Armarelo edifice. A very youthful steep-sided central cone, Pico, rises more than 1 km above the caldera floor to about 100 m above the caldera rim, forming the 2829 m high point of the island. Pico, which is capped by a 500-m-wide, 150-m-deep summit crater, was apparently in almost continuous activity from the time of Portuguese settlement in 1500 CE until around 1760. Later historical lava flows, some from vents on the caldera floor, reached the eastern coast below the breached caldera. Sinarka, Shiashkotan Island (Russia) 48.875°N, 154.175°E, Summit elev. 934 m SVERT reported that on 3 December satellite images of Sinarka showed steam-and-gas emissions with small amounts of ash drifting 40 km NE. Diffuse steam-and-gas emissions were observed on 6 December. Cloud cover obscured views on the other days during 1-8 December. The Aviation Color Code was raised to Yellow. 
Geologic summary: Sinarka volcano, occupying the northern end of Shiashkotan Island in the central Kuriles, has a complex structure. A small, 2-km-wide depression open to the NW has been largely filled and overtopped by an andesitic postglacial central cone that itself contains a lava dome that forms the 934 m high point of the island. Another lava dome, Zheltokamennaya Mountain, lies 1.5 km to the SW along the buried SW rim of the caldera, and a smaller dome lies along the northern caldera rim. Historical eruptions have occurred at Sinarka during the 17th and 18th centuries. The last and largest of these, during 1872-78, was once thought to originate from Kuntomintar volcano at the southern end of the island, but is now attributed to Sinarka volcano (Gorshkov, 1970). Turrialba, Costa Rica 10.025°N, 83.767°W, Summit elev. 3340 m OVSICORI-UNA reported that at 2128 on 8 December a strong Strombolian explosion at Turrialba lasted about 10 minutes and had no precursory activity. Ashfall (1 cm thick) and ballistics were deposited as far as 300 m W. Trace amounts of ashfall were reported in the Central Valley and in towns to the W and SW. Geologic summary: Turrialba, the easternmost of Costa Rica's Holocene volcanoes, is a large vegetated basaltic-to-dacitic stratovolcano located across a broad saddle NE of Irazú volcano overlooking the city of Cartago. The massive 3340-m-high Turrialba is exceeded in height only by Irazú, covers an area of 500 sq km, and is one of Costa Rica's most voluminous volcanoes. Three well-defined craters occur at the upper SW end of a broad 800 x 2200 m summit depression that is breached to the NE. Most activity originated from the summit vent complex, but two pyroclastic cones are located on the SW flank. Five major explosive eruptions have occurred during the past 3500 years. A series of explosive eruptions during the 19th century were sometimes accompanied by pyroclastic flows. 
Fumarolic activity continues at the central and SW summit craters. Zhupanovsky, Eastern Kamchatka (Russia) 53.589°N, 159.15°E, Summit elev. 2899 m KVERT reported that ash explosions at Zhupanovsky continued during 25 November-1 December. Ash plumes rose to an altitude of 6 km (19,700 ft) a.s.l. and drifted 422 km E and SE during 28 November-1 December. Satellite images detected a thermal anomaly during 28-29 November; the volcano was either quiet or obscured by clouds on the other days. The Aviation Color Code remained at Orange. Geologic summary: The Zhupanovsky volcanic massif consists of four overlapping stratovolcanoes along a WNW-trending ridge. The elongated volcanic complex was constructed within a Pliocene-early Pleistocene caldera whose rim is exposed only on the eastern side. Three of the stratovolcanoes were built during the Pleistocene, the fourth is Holocene in age and was the source of all of Zhupanovsky's historical eruptions. An early Holocene stage of frequent moderate and weak eruptions from 7000 to 5000 years before present (BP) was succeeded by a period of infrequent larger eruptions that produced pyroclastic flows. The last major eruption took place about 800-900 years BP. Historical eruptions have consisted of relatively minor explosions from the third cone. Ongoing activity Aira, Kyushu (Japan) 31.593°N, 130.657°E, Summit elev. 1117 m JMA reported that three explosions from Showa Crater at Aira Caldera’s Sakurajima volcano ejected tephra as far as 1,800 m during 5-8 December. The Alert Level remained at 3 (on a scale of 1-5). The Tokyo VAAC reported that during 4 and 6-8 December plumes rose to altitudes of 1.8-3 km (6,000-10,000 ft) a.s.l. and drifted SE and E. Geologic summary: The Aira caldera in the northern half of Kagoshima Bay contains the post-caldera Sakurajima volcano, one of Japan's most active. Eruption of the voluminous Ito pyroclastic flow accompanied formation of the 17 x 23 km caldera about 22,000 years ago. 
The smaller Wakamiko caldera was formed during the early Holocene in the NE corner of the Aira caldera, along with several post-caldera cones. The construction of Sakurajima began about 13,000 years ago on the southern rim of Aira caldera and built an island that was finally joined to the Osumi Peninsula during the major explosive and effusive eruption of 1914. Activity at the Kitadake summit cone ended about 4850 years ago, after which eruptions took place at Minamidake. Frequent historical eruptions, recorded since the 8th century, have deposited ash on Kagoshima, one of Kyushu's largest cities, located across Kagoshima Bay only 8 km from the summit. The largest historical eruption took place during 1471-76. Ambrym, Vanuatu 16.25°S, 168.12°E, Summit elev. 1334 m On 8 December the Vanuatu Geohazards Observatory reported that observations and analyses of Ambrym conducted during November and early December showed that activity levels had slightly decreased. The Alert Level was lowered to 1 (on a scale of 0-4). VGO warned that the area in close proximity to the active vents (Benbow, Maben-Mbwelesu, Niri-Mbwelesu, and Mbwelesu) remained dangerous. Geologic summary: Ambrym, a large basaltic volcano with a 12-km-wide caldera, is one of the most active volcanoes of the New Hebrides arc. A thick, almost exclusively pyroclastic sequence, initially dacitic, then basaltic, overlies lava flows of a pre-caldera shield volcano. The caldera was formed during a major plinian eruption with dacitic pyroclastic flows about 1900 years ago. Post-caldera eruptions, primarily from Marum and Benbow cones, have partially filled the caldera floor and produced lava flows that ponded on the caldera floor or overflowed through gaps in the caldera rim. Post-caldera eruptions have also formed a series of scoria cones and maars along a fissure system oriented ENE-WSW. Eruptions have apparently occurred almost yearly during historical time from cones within the caldera or from flank vents. 
However, from 1850 to 1950, reporting was mostly limited to extra-caldera eruptions that would have affected local populations. Bardarbunga, Iceland 64.63°N, 17.53°W, Summit elev. 2009 m During 3-9 December, IMO maintained Aviation Colour Code Orange due to continued activity at Bárdarbunga’s Holuhraun eruptive fissure. The Scientific Advisory Board of the Icelandic Civil Protection reviewed data from the beginning of the eruption on 31 August to 3 December and found a decreased rate of subsidence of the Bárdarbunga Caldera from up to 80 cm/day to 25 cm/day, with most of the subsidence concentrated at the center of the caldera. Data also showed a decline in the intensity of the eruption at Holuhraun, although seismic activity remained strong. The lava field covered just over 76 square kilometers on 9 December, making it the largest lava field in Iceland since the Laki eruption (1783-1784). Additionally, the gas emissions have had an impact all over Iceland for the first time in 150 years. Geologic summary: The large central volcano of Bárdarbunga lies beneath the NW part of the Vatnajökull icecap, NW of Grímsvötn volcano, and contains a subglacial 700-m-deep caldera. Related fissure systems include the Veidivötn and Trollagigar fissures, which extend about 100 km SW to near Torfajökull volcano and 50 km NE to near Askja volcano, respectively. Voluminous fissure eruptions, including one at Thjorsarhraun, which produced the largest known Holocene lava flow on Earth with a volume of more than 21 cu km, have occurred throughout the Holocene into historical time from the Veidivötn fissure system. The last major eruption of Veidivötn, in 1477, also produced a large tephra deposit. The subglacial Loki-Fögrufjöll volcanic system located SW of Bárdarbunga volcano is also part of the Bárdarbunga volcanic system and contains two subglacial ridges extending from the largely subglacial Hamarinn central volcano; the Loki ridge trends to the NE and the Fögrufjöll ridge to the SW. 
Jökulhlaups (glacier-outburst floods) from eruptions at Bárdarbunga potentially affect drainages in all directions. Chirpoi, Kuril Islands (Russia) 46.525°N, 150.875°E, Summit elev. 742 m SVERT reported that satellite images over Snow, a volcano of Chirpoi, showed a thermal anomaly during 2-5 December. Cloud cover obscured views on other days during 1-8 December. The Aviation Color Code remained at Yellow. Geologic summary: Chirpoi, a small island lying between the larger islands of Simushir and Urup, contains a half dozen volcanic edifices constructed within an 8-9 km wide, partially submerged caldera. The southern rim of the caldera is exposed on nearby Brat Chirpoev Island. The symmetrical Cherny volcano, which forms the 691 m high point of the island, erupted twice during the 18th and 19th centuries. The youngest volcano, Snow, originated between 1770 and 1810. It is composed almost entirely of lava flows, many of which have reached the sea on the southern coast. No historical eruptions are known from 742-m-high Brat Chirpoev, but its youthful morphology suggests recent strombolian activity. Kilauea, Hawaiian Islands (USA) 19.421°N, 155.287°W, Summit elev. 1222 m During 3-9 December HVO reported that Kilauea’s 27 June NE-trending lava flow continued to be active. A narrow lobe of lava that had broken away from the W edge of the flow field below the crack system advanced at rates of several hundred meters per day; by 9 December the front was about 3.5 km above the intersection of Pahoa Village Road and Highway 130, near the Pahoa Marketplace. The circulating lava lake occasionally rose and fell in the deep pit within Halema'uma'u Crater. Gas emissions remained elevated. The plume from the vent continued to deposit variable amounts of tephra onto nearby areas; smaller particles may have been dropped several kilometers away. At Pu'u 'O'o Crater, glow emanated from several outgassing openings in the crater floor. 
Geologic summary: Kilauea volcano, which overlaps the east flank of the massive Mauna Loa shield volcano, has been Hawaii's most active volcano during historical time. Eruptions of Kilauea are prominent in Polynesian legends; written documentation extending back to only 1820 records frequent summit and flank lava flow eruptions that were interspersed with periods of long-term lava lake activity that lasted until 1924 at Halemaumau crater, within the summit caldera. The 3 x 5 km caldera was formed in several stages about 1500 years ago and during the 18th century; eruptions have also originated from the lengthy East and SW rift zones, which extend to the sea on both sides of the volcano. About 90% of the surface of the basaltic shield volcano is formed of lava flows less than about 1100 years old; 70% of the volcano's surface is younger than 600 years. A long-term eruption from the East rift zone that began in 1983 has produced lava flows covering more than 100 sq km, destroying nearly 200 houses and adding new coastline to the island. Kuchinoerabujima, Ryukyu Islands (Japan) 30.443°N, 130.217°E, Summit elev. 657 m JMA reported that no eruptions occurred from Kuchinoerabujima during 1-5 December, although the level of activity remained elevated. White plumes rose 300 m above the crater. Low-level seismicity continued and tremor was absent. The Alert Level remained at 3 (on a scale of 1-5). Geologic summary: A group of young stratovolcanoes forms the eastern end of the irregularly shaped island of Kuchinoerabujima in the northern Ryukyus, 15 km west of Yakushima. Furutake, Shintake, and Noike were erupted from south to north, respectively, to form a composite cone that is parallel to the trend of the Ryukyu Islands. The highest peak, Furutake, reaches only 657 m above sea level. The youngest cone, 640-m-high Shintake, was formed after the NW side of Furutake was breached by an explosion. 
All historical eruptions have occurred from Shintake, although a lava flow from the S flank of Furutake that reached the coast has a very fresh morphology. Frequent explosive eruptions have taken place from Shintake since 1840; the largest of these was in December 1933. Several villages on the 4 x 12 km island are located within a few kilometers of the active crater and have suffered damage from eruptions. Mayon, Luzon (Philippines) 13.257°N, 123.685°E, Summit elev. 2462 m PHIVOLCS reported that during 3-8 December white plumes were occasionally observed rising from Mayon's crater and drifted WSW, SSW, and NW, sometimes downslope. Inclement weather prevented views of the summit on some days. As many as three volcanic earthquakes were recorded per day. Alert Level remained at 3 (on a 0-5 scale). PHIVOLCS reminded residents of the 6-km-radius Permanent Danger Zone (PDZ) around the volcano and the 7-km Extended Danger Zone (EDZ) on the SE flank. Geologic summary: Beautifully symmetrical Mayon volcano, which rises to 2462 m above the Albay Gulf, is the Philippines' most active volcano. The structurally simple volcano has steep upper slopes averaging 35-40 degrees that are capped by a small summit crater. Historical eruptions at this basaltic-andesitic volcano date back to 1616 and range from strombolian to basaltic plinian, with cyclical activity beginning with basaltic eruptions, followed by longer term andesitic lava flows. Eruptions occur predominately from the central conduit and have also produced lava flows that travel far down the flanks. Pyroclastic flows and mudflows have commonly swept down many of the approximately 40 ravines that radiate from the summit and have often devastated populated lowland areas. Mayon's most violent eruption, in 1814, killed more than 1200 people and devastated several towns. Ontakesan, Honshu (Japan) 35.893°N, 137.48°E, Summit elev. 
3067 m JMA reported that cloud cover mostly prevented visual observations of Ontakesan during 3-9 December; white plumes rose 100 m above the crater rim on 7 December. The Alert Level remained at 3 (on a scale of 1-5). Geologic summary: The massive Ontakesan stratovolcano, the second highest volcano in Japan, lies at the southern end of the Northern Japan Alps. Ascending this volcano is one of the major objects of religious pilgrimage in central Japan. It is constructed within a largely buried 4 x 5 km caldera and occupies the southern end of the Norikura volcanic zone, which extends northward to Yakedake volcano. The older volcanic complex consisted of at least four major stratovolcanoes constructed from about 680,000 to about 420,000 years ago, after which Ontakesan was inactive for more than 300,000 years. The broad, elongated summit of the younger edifice is cut by a series of small explosion craters along a NNE-trending line. Several phreatic eruptions post-date the roughly 7300-year-old Akahoya tephra from Kikai caldera. The first historical eruption took place in 1979 from fissures near the summit. A non-eruptive landslide in 1984 produced a debris avalanche and lahar that swept down valleys south and east of the volcano. Very minor phreatic activity caused a dusting of ash near the summit in 1991 and 2007. A significant phreatic explosion in September 2014, when a large number of hikers were at or near the summit, resulted in many fatalities. Popocatepetl, Mexico 19.023°N, 98.622°W, Summit elev. 5426 m CENAPRED reported that during 3-9 December seismicity at Popocatépetl indicated continuing emissions of water vapor and gas, which occasionally contained ash during 6-9 December. Incandescence from the crater was visible each night. On 3 December an ash plume rose 800 m above the crater and drifted WSW. An explosion at 2154 was associated with crater incandescence. 
An ash plume rose 600 m, and explosions at 1056 and 2121 produced ash plumes that rose 800 m and drifted SW. Two ash plumes rose 400-600 m and drifted SW the next day. Explosions detected at 0544 and 0608 on 6 December produced plumes with low ash content that rose 800 m. At 1508 a small rockslide on the N flank was recorded by a webcam. An explosion at 0431 on 8 December ejected incandescent tephra onto the flanks and generated an ash plume that rose 3 km. Slight ashfall was reported in the municipality of Tetela del Volcán (20 km SW). Explosions were detected at 0917 and 0933 on 9 December. The Alert Level remained at Yellow, Phase Two. Geologic summary: Volcán Popocatépetl, whose name is the Aztec word for smoking mountain, towers to 5426 m 70 km SE of Mexico City to form North America's 2nd-highest volcano. The glacier-clad stratovolcano contains a steep-walled, 400 x 600 m wide crater. The generally symmetrical volcano is modified by the sharp-peaked Ventorrillo on the NW, a remnant of an earlier volcano. At least three previous major cones were destroyed by gravitational failure during the Pleistocene, producing massive debris-avalanche deposits covering broad areas to the south. The modern volcano was constructed south of the late-Pleistocene to Holocene El Fraile cone. Three major plinian eruptions, the most recent of which took place about 800 CE, have occurred from Popocatépetl since the mid Holocene, accompanied by pyroclastic flows and voluminous lahars that swept basins below the volcano. Frequent historical eruptions, first recorded in Aztec codices, have occurred since precolumbian time. Reventador, Ecuador 0.077°S, 77.656°W, Summit elev. 3562 m IG reported moderate seismic activity including explosions, long-period earthquakes, harmonic tremor, and tremor at Reventador during 3-9 December. Cloudy conditions occasionally obscured views of the summit. Steam emissions on 3 December rose from the crater and drifted NW. 
On 4 December steam plumes with minor ash content rose 200 m and drifted S. On 5 December a webcam recorded a steam-and-gas emission associated with an incandescent lava flow on the E flank. Water vapor plumes rose 500-700 m and drifted NW on 7 December, and 1 km and drifted SW on 9 December. Geologic summary: Reventador is the most frequently active of a chain of Ecuadorian volcanoes in the Cordillera Real, well east of the principal volcanic axis. The forested, dominantly andesitic Volcán El Reventador stratovolcano rises to 3562 m above the jungles of the western Amazon basin. A 4-km-wide caldera widely breached to the east was formed by edifice collapse and is partially filled by a young, unvegetated stratovolcano that rises about 1300 m above the caldera floor to a height comparable to the caldera rim. It has been the source of numerous lava flows as well as explosive eruptions that were visible from Quito in historical time. Frequent lahars in this region of heavy rainfall have constructed a debris plain on the eastern floor of the caldera. The largest historical eruption took place in 2002, producing a 17-km-high eruption column, pyroclastic flows that traveled up to 8 km, and lava flows from summit and flank vents. Sheveluch, Central Kamchatka (Russia) 56.653°N, 161.36°E, Summit elev. 3283 m KVERT reported that during 27 November-4 December lava-dome extrusion onto Sheveluch’s N flank was accompanied by incandescence, hot avalanches, and fumarolic activity. Satellite images detected a thermal anomaly over the dome during 28-29 November and 2 December; cloud cover prevented views of the volcano on the other days. The Aviation Color Code remained at Orange. Geologic summary: The high, isolated massif of Sheveluch volcano (also spelled Shiveluch) rises above the lowlands NNE of the Kliuchevskaya volcano group. The 1300 cu km volcano is one of Kamchatka's largest and most active volcanic structures. 
The summit of roughly 65,000-year-old Stary Shiveluch is truncated by a broad 9-km-wide late-Pleistocene caldera breached to the south. Many lava domes dot its outer flanks. The Molodoy Shiveluch lava dome complex was constructed during the Holocene within the large horseshoe-shaped caldera; Holocene lava dome extrusion also took place on the flanks of Stary Shiveluch. At least 60 large eruptions have occurred during the Holocene, making it the most vigorous andesitic volcano of the Kuril-Kamchatka arc. Widespread tephra layers from these eruptions have provided valuable time markers for dating volcanic events in Kamchatka. Frequent collapses of dome complexes, most recently in 1964, have produced debris avalanches whose deposits cover much of the floor of the breached caldera. Shishaldin, Fox Islands (USA) 54.756°N, 163.97°W, Summit elev. 2857 m AVO reported that seismicity at Shishaldin continued to be elevated during 3-9 December. Thermal anomalies were often detected in satellite images during periods of clear weather. Webcam images on 9 December showed minor steam emissions. A low-level lava eruption was likely still occurring within the summit crater of the volcano. The Aviation Color Code remained at Orange and the Volcano Alert Level remained at Watch. Geologic summary: The beautifully symmetrical volcano of Shishaldin is the highest and one of the most active volcanoes of the Aleutian Islands. The 2857-m-high, glacier-covered volcano is the westernmost of three large stratovolcanoes along an E-W line in the eastern half of Unimak Island. The Aleuts named the volcano Sisquk, meaning "mountain which points the way when I am lost." A steady steam plume rises from its small summit crater. Constructed atop an older glacially dissected volcano, it is Holocene in age and largely basaltic in composition. Remnants of an older ancestral volcano are exposed on the west and NE sides at 1500-1800 m elevation. 
There are over two dozen pyroclastic cones on its NW flank, which is blanketed by massive aa lava flows. Frequent explosive activity, primarily consisting of strombolian ash eruptions from the small summit crater, but sometimes producing lava flows, has been recorded since the 18th century. Sinabung, Indonesia 3.17°N, 98.392°E, Summit elev. 2460 m Based on webcam views and weather models, the Darwin VAAC reported that on 3 December an ash plume from Sinabung rose to an altitude of 3 km (10,000 ft) a.s.l. and drifted E. Eruptions during 5-7 December produced ash plumes that drifted 2-20 km in multiple directions. On 14 December an ash plume rose to an altitude of 4.3 km (14,000 ft) a.s.l. and drifted W. Geologic summary: Gunung Sinabung is a Pleistocene-to-Holocene stratovolcano with many lava flows on its flanks. The migration of summit vents along a N-S line gives the summit crater complex an elongated form. The youngest crater of this conical, 2460-m-high andesitic-to-dacitic volcano is at the southern end of the four overlapping summit craters. An unconfirmed eruption was noted in 1881, and solfataric activity was seen at the summit and upper flanks in 1912. No confirmed historical eruptions were recorded prior to explosive eruptions during August-September 2010 that produced ash plumes to 5 km above the summit. Suwanosejima, Ryukyu Islands (Japan) 29.638°N, 129.714°E, Summit elev. 796 m The Tokyo VAAC reported that on 7 December explosions at Suwanosejima produced plumes that rose to altitudes of 1.5-1.8 km (5,000-6,000 ft) a.s.l. and drifted E and SE. Geologic summary: The 8-km-long, spindle-shaped island of Suwanosejima in the northern Ryukyu Islands consists of an andesitic stratovolcano with two historically active summit craters. The summit of the volcano is truncated by a large breached crater extending to the sea on the east flank that was formed by edifice collapse. 
Suwanosejima, one of Japan's most frequently active volcanoes, was in a state of intermittent strombolian activity from Otake, the NE summit crater, that began in 1949 and lasted until 1996, after which periods of inactivity lengthened. The largest historical eruption took place in 1813-14, when thick scoria deposits blanketed residential areas, and the SW crater produced two lava flows that reached the western coast. At the end of the eruption the summit of Otake collapsed forming a large debris avalanche and creating the horseshoe-shaped Sakuchi caldera, which extends to the eastern coast. The island remained uninhabited for about 70 years after the 1813-1814 eruption. Lava flows reached the eastern coast of the island in 1884. Only about 50 persons live on the island.
https://watchers.news/2014/12/11/active-volcanoes-in-the-world-december-3-9-2014/
Type Raising and the Cooperative Construction of Meaning One of the goals of computational semantics is to find an algorithmic way of building representations of natural language utterances that would enable us to perform sound inference on them. Since first-order predicate logic is a well-studied and expressive formal language, we will consider the mapping of natural language utterances onto first-order predicate logic formulas. In order to build these formulas, we will rely on Frege’s principle of compositionality. We will assume that our input, the syntactic representation of the utterance, will be in the form of a binary constituency tree. We associate a lambda term to every word of the sentence (a leaf of the constituency tree) using a lexicon. The semantics of an inner node are then obtained by function application, with one of the children acting as functor and the other as argument. We will remark which subconstituent is to be the functor and which the argument, even though this can be easily inferred from the types of the lambda terms. The semantic representation of the sentence is defined as the lambda term associated with the root node. Simple Things First Let us consider the simplest possible cases first. We will look at proper nouns and intransitive verbs. - “Vincent” ↦ vincent : t - “growls” ↦ λx. growl(x) : t → f In the above lexicon, I use the following typographic conventions: “double quotes” for natural language expressions, bold for the first-order predicate logic formulas (or their parts) and the unadorned typeface for the lambda calculus specific matter. Later on, I will use expressions in double quotes to refer not only to the expressions themselves, but also to their semantic representations, either computed through composition or obtained from the lexicon. Note that the bold parentheses do not correspond to lambda calculus function applications, they are part of the produced FOPL formulas. 
I will always mark lambda applications explicitly with the @ symbol. I have also taken the liberty of assigning types to the lambda terms. The two atomic types that come naturally in the domain of FOPL formulas are terms (t) and formulas (f). The lexicon I show here is very straightforward. We map the proper noun “Vincent” onto a constant and the intransitive verb “growls” onto a lambda abstraction which builds an atomic formula by filling the gap in the unary relation with the supplied argument (we will call functions that take terms and produce formulas predicates). Let us take the simple sentence “Vincent growls” and its syntactic structure. (S (NP "Vincent") (VP "growls")) We obtain the semantic representation of the sentence by applying the “growls” lambda term to the “Vincent” one and β-reducing, growl(vincent) : f. Let us now consider transitive verbs, as in the sentence “Vincent likes Mia”, with the following lexicon additions - “Mia” ↦ mia : t - “likes” ↦ λx. λy. like(y, x) : t → t → f and syntactic structure. (S (NP "Vincent") (VP (VT "likes") (NP "Mia"))) Unsurprisingly, “Mia” maps to a constant and “likes” maps to a binary predicate (a function that builds a formula using two terms as arguments). In the sentence above, “likes” applies to “Mia” first, yielding λy. like(y, mia) : t → f after β-reduction. Applying this lambda term to that of “Vincent” yields the final representation, like(vincent, mia) : f. So far so good. Some things to note about our system so far. First off, syntactic categories always correspond to a single semantic type (see table below). Second, the head constituent ends up always being the functor during composition. - S = f - NP = t - VP = NP → S - VT = NP → VP Quantifiable Complications We will now generalize our notion of a noun phrase and admit phrases with determiners such as “a” and “every”. The sentence we would like to be able to analyze is “Every boxer growls”. 
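Before moving on to quantifiers, the simple fragment above can be sketched in code. This is a minimal sketch in my own encoding (not the article's code): FOPL terms are plain strings, and lexicon entries are Python lambdas that assemble formula strings by function application, exactly mirroring the lexicon.

```python
# Lexicon: proper nouns are constants (strings), verbs build formula strings.
vincent = "vincent"                            # "Vincent" : t
mia = "mia"                                    # "Mia" : t
growls = lambda x: f"growl({x})"               # "growls" : t -> f
likes = lambda x: lambda y: f"like({y}, {x})"  # "likes" : t -> t -> f

# "Vincent growls": the VP acts as the functor, the NP as the argument.
s1 = growls(vincent)

# "Vincent likes Mia": "likes" applies to the object first, then the subject.
s2 = likes(mia)(vincent)

print(s1)  # growl(vincent)
print(s2)  # like(vincent, mia)
```

Python's application syntax plays the role of the @ symbol here; β-reduction is just ordinary evaluation.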
The desired semantic representation would look something like this ∀x. (boxer(x) → growl(x)) : f. It is obvious that the relations boxer and growl correspond to the words “boxer” and “growls”. What is the semantics of “every” then? It contributes everything else: the quantifier, the implication and the variables. However, our old way of composing VPs with NPs will not work here, as one does not simply put a quantifier in the argument position of a relation. Here are the new elements of our lexicon. - “boxer” ↦ λx. boxer(x) : t → f - “every” ↦ λP. λQ. ∀x. ((P@x) → (Q@x)) : (t → f) → (t → f) → f We represent common nouns as unary predicates (sets of entities are after all a natural denotation for common nouns). As for the quantifier, we map it to a function that builds a universal quantification (see generalized quantifiers). The first argument is the restricting predicate which defines the range of the variable, the second argument is the predicate which is to be true in the domain of quantification. Calculemus! (S (NP (Det "Every") (N "boxer")) (VP "growls")) “Every” applies to “boxer” yielding λQ. ∀x. (boxer(x) → (Q@x)) : (t → f) → f. The resulting NP then applies to “growls” to produce ∀x. (boxer(x) → growl(x)) : f. Hurray! OK, what we have here is not as great as it looks. Contrary to the previous situations, it is now the subject NP which acts as the functor, not the VP. However, in our new theory, this is true only for the case when the NP turns out to be a quantified NP. In the case of a proper noun, we still make the VP the functor. This slightly annoying discrepancy has more disturbing implications. The semantic type of one NP (“every boxer” : (t → f) → f) can be different from that of another NP (“Vincent” : t). This loss of consistency between syntactic and semantic types would make writing the rest of our lexicon difficult as we would always have to consider both cases whenever dealing with NPs. 
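The quantified composition can be sketched in the same string-building style as before (an assumed encoding of mine, not the article's code). The FOPL variable is fixed to "x", glossing over fresh-variable bookkeeping, which suffices for a single quantifier.

```python
boxer = lambda x: f"boxer({x})"   # "boxer" : t -> f
growls = lambda x: f"growl({x})"  # "growls" : t -> f

# "every" : (t -> f) -> (t -> f) -> f, a generalized quantifier taking
# the restricting predicate P and then the scope predicate Q.
every = lambda P: lambda Q: f"∀x. ({P('x')} → {Q('x')})"

np = every(boxer)   # "every boxer" : (t -> f) -> f
s = np(growls)      # note: the NP, not the VP, is the functor this time

print(s)  # ∀x. (boxer(x) → growl(x))
```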
Luckily, we can solve this problem by raising the types of the proper nouns from t to (t → f) → f. Whenever we have lambda terms F : α → β and G : α, we can construct G’ = λP. P@G : (α → β) → β, which satisfies that F@G = G’@F (proof via simple β-reduction). What it means is that we can take a functor F and an argument G, do some trivial change to G, and then use the new G as a functor with F as the argument while still getting the same result as before. If we apply this technique to our proper nouns, we get the following updated lexicon entries. - “Vincent” ↦ λP. P@vincent : (t → f) → f - “Mia” ↦ λP. P@mia : (t → f) → f An intuitive way to look at the current NP might be as a thing that ranges over some entities and when given a predicate, produces a formula that is true when the entities being ranged over satisfy the predicate. Therefore, in the above implementations of “Vincent”, the way to produce a formula which is true whenever vincent satisfies a predicate is to simply use the constant vincent as the term for the predicate being built. Stuff Gets Difficult Let’s try and do a recap of the types we have so far. - S = f - N = t → f - VP = t → f - NP = VP → f - VT = ??? Oops, we seem to have a problem. By changing what it means to be an NP we have broken our transitive verbs which relied on NPs being terms. If we were to consider a sentence where the transitive verb has a proper noun as the object (e.g. “Every boxer likes Mia”), things would still work out (we can type our proper nouns as (t → (t → f)) → (t → f)). However, when we try to work with a quantified object, things break. Let’s use the sentence “Every boxer likes a woman” as an example. (S (NP (Det "Every") (N "boxer")) (VP (TV "likes") (NP (Det "a") (N "woman")))) - “woman” ↦ λx. woman(x) : t → f - “a” ↦ λP. λQ. ∃y. ((P@y) ∧ (Q@y)) : (t → f) → (t → f) → f The semantics of “woman” are straightforward. As for “a”, we construct another quantifier, this time using an existential quantification. 
We can construct the meaning of “a woman” as λQ. ∃y. (woman(y) ∧ (Q@y)) : (t → f) → f. Now, if we try to apply this lambda term to the one of “likes” or vice versa, we get nonsense. In the first case, we end up with a lambda abstraction inside of our FOPL formula, and in the second case, we place (an abstraction of) a quantification as an argument of a relation. Neither of these can ever resolve to a proper formula. How do we fix this? Well, let us imagine what we would like the meaning of “likes a woman” to look like. - λz. ∃y. (woman(y) ∧ like(z, y)) : t → f The above is one reasonable answer (Beware of the two variables! The y belongs to the constructed FOPL formula and was contributed by “a”. The z is part of the lambda calculus and belongs to “likes”.). These semantics for “likes a woman” look good. If we apply them to a term, they yield a formula stating that the entity named by the term does indeed “like a woman”, i.e. what we have is a working predicate. Now that we know what we want, it is not so difficult to work out the correct version of the “likes” semantics on paper. - “likes” ↦ λO. λz. O@(λx. like(z, x)) : ((t → f) → f) → t → f OK, that looks good. Let’s check if the whole thing works. - “Every boxer” ↦ λQ. ∀x. (boxer(x) → (Q@x)) : (t → f) → f - “likes a woman” ↦ λz. ∃y. (woman(y) ∧ like(z, y)) : t → f - “Every boxer”@“likes a woman” ↦ ∀x. (boxer(x) → ∃y. (woman(y) ∧ like(x, y))) : f Yay! We have a consistent theory of the syntax-semantics interface now, here is our syntax-to-semantics type mapping. - S = f - N = t → f - VP = t → f - NP = VP → f - VT = NP → VP The transitive verb is still expressed as a function taking an NP and producing a VP, which seems somewhat linguistically plausible. The Part Where I Get Confused Too OK, so what did I do to my lexicon entry for “likes” that it magically started doing exactly what I needed instead of breaking horribly?
If we look at the type signature of our new “likes” representation and compare it with the old one, we can see that we have swapped out the old type of NP (t) with its new raised type ((t → f) → f). In the body of the abstraction, we are calling our argument O and passing it something which looks like our original value. Isn’t this just like type raising? Why did the meaning of our sentence change then? Well, it’s not really like type raising if you look at it closely. - this is the original: λx. λy. like(y, x) : t → t → f - this would be the type raised version: λO. O@(λx. λy. like(y, x)) - this is what we used: λO. λy. O@(λx. like(y, x)) Now we can see that the position of λy is what is different in the two versions. So what does our version actually do? It’s not that hard to decipher. After a transitive verb is applied to an object, it is supposed to return a predicate, that is a function which takes a term and returns a formula stating something about the term. Some examples of predicates we have seen: - λx. growl(x) : t → f - λy. like(y, mia) : t → f - λz. ∃y. (woman(y) ∧ like(z, y)) : t → f Let’s dissect our “likes” term then (renaming its argument to z to keep it apart from the quantifier’s y). - λO. λz. O@(λx. like(z, x)) : ((t → f) → f) → t → f First off, we accept the object as an argument and bind its name to the variable O. Then we define the predicate we will return. The name of the predicate’s argument will be z; this is the variable that the subject will be replacing with its term. Finally, what does the body of the predicate look like? To find out, we ask the object to build it, capturing any term it wants to use to represent itself in the relation using the variable x. When O is the meaning of “Mia”, the object sends mia as the term to represent it and returns the relation. When O is a quantified phrase like “a woman”, it sends the logical variable y as its representative x and wraps the resulting relation in a quantifier providing a range for y. Wat? OK, so things got pretty crazy pretty fast. How come we ended up having to repeatedly juggle higher-order functions and raise types?
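The difference between mechanical type raising and the entry we actually used can be observed directly in the string-building sketch (my own encoding; the names are illustrative):

```python
woman   = lambda x: f"woman({x})"
a       = lambda P: lambda Q: f"∃y. ({P('y')} ∧ {Q('y')})"
a_woman = a(woman)

like_rel = lambda x: lambda y: f"like({y}, {x})"   # the original binary term

# mechanical type raising:  λO. O@(λx. λy. like(y, x))
likes_raised = lambda O: O(like_rel)
# what we actually used:    λO. λy. O@(λx. like(y, x))
likes_used   = lambda O: lambda y: O(lambda x: f"like({y}, {x})")

broken = likes_raised(a_woman)
print(broken)                      # a formula with a stray function stuck inside — nonsense
print(likes_used(a_woman)("mia")) # ∃y. (woman(y) ∧ like(mia, y)) — a working predicate
```

The mechanically raised version hands the whole two-place relation to the quantifier, which plants an unreduced abstraction inside the formula; the version we used waits for the subject first, then lets the object build the body.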
We can find an answer to this by inspecting the complexity (intertwinedness) of the expressions we set out to generate. To produce growl(vincent) : f, we just had to apply the second-order function “growls” to the first-order constant “Vincent”. This was a fairly trivial thing as the representation of “Vincent” is just plugged into a hole in the representation of “growls”. The same goes for plugging “Vincent” and “Mia” into the binary like relation. Things get more interesting when we try to generate something like ∀x. (boxer(x) → growl(x)) : f. When we want to combine “every” and “boxer”, we ask “every” to start. “Every” lays down the quantifier and asks “boxer” to build the restricting formula, which in turn asks “every” for the term to be filled in the predicate (the variable x). In order to handle this case, we raised the NPs over VPs and ended up with third-order items for the NPs in the lexicon. Finally, to get quantified noun phrases working as objects of transitive verbs, we had to fix our definition of “likes”, which we raised over the NPs and which made “likes” a fourth-order item in the lexicon. Let’s take a look at applying “likes” to an object such as “a woman”, yielding λz. ∃y. (woman(y) ∧ like(z, y)) : t → f. Here we ask “likes” to build a term, which begins by putting down the lambda abstraction (λz.), then passing control to the object, in this case “a woman”. The object lays down any necessary quantifiers (∃y. woman(y) ∧) and asks “likes” to build the rest of the quantified formula. “Likes” will build the formula (like(z, …)), but not without taking a term to place in the relation from the object (y). In other words, the function assigned by the lexicon to “likes” is a fourth-order function, which calls the third-order function corresponding to “a woman”, which calls the second-order lambda function defined within “likes”, which uses the first-order argument x supplied to it by the object “a woman”.
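The whole there-and-back exchange can be run in miniature. A sketch (my own encoding of the lexicon, with formulas as strings):

```python
boxer = lambda x: f"boxer({x})"
woman = lambda x: f"woman({x})"
every = lambda P: lambda Q: f"∀x. ({P('x')} → {Q('x')})"
a     = lambda P: lambda Q: f"∃y. ({P('y')} ∧ {Q('y')})"

# "likes" : ((t -> f) -> f) -> t -> f — takes the raised object O first
likes = lambda O: lambda y: O(lambda x: f"like({y}, {x})")

likes_a_woman = likes(a(woman))          # VP : t -> f, a working predicate
sentence = every(boxer)(likes_a_woman)   # the subject NP acts as the functor
print(sentence)                          # ∀x. (boxer(x) → ∃y. (woman(y) ∧ like(x, y)))
```

Tracing the evaluation shows exactly the hand-offs described above: the subject plants ∀ and hands its variable to the VP, which hands control to the object, which plants ∃ and hands its own variable back into the relation.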
This kind of there-and-back-again exchange of control is enforced by the interleaved syntactic structure of the FOPL formulas we want to produce (the colors corresponding to the individual lexical items do not form sovereign subtrees) and the principle of compositionality. Each of the two constituents to be combined contains only the information strictly contained in its subtree. To combine these two pieces of information into an elaborate structure, the two lambda terms have to cooperate with each other, exchanging continuation-like lambda functions. One Last Jump OK, we have gotten our heads around the recursions in the compositions and our pyramid of types, but there is one more thing I’d like to do. If we look at our theory, there are still some nasty spots left. First off, the type of NP is VP → S, i.e. NPs consume VPs to produce sentences. This sounds quite unintuitive from a linguistic standpoint. We would much rather have a VP which satisfies its valency by consuming an NP as its subject and forms a sentence. Also, there is something off about how the lexical entry for “likes” treats its object fundamentally differently from its subject. Finally, the notion of who gets to be the functor seems kind of arbitrary. Within VPs, it is the verb itself, the head of the phrase. Within NPs, it is the determiner (which is also the head of its phrase, if you subscribe to the theory of generative grammar). However, within the top-level sentence, it is not VP, the head of the sentence, which is the functor, but the subject NP. All of these problems are linked and we can solve them in one stroke using a technique we have used several times before. Can you guess which? Yep, it’s type raising. Here are the new lexical entries for our verbs. - “growls” ↦ λS. S@(λx. growl(x)) : ((t → f) → f) → f - “likes” ↦ λO. λS. S@(λy. O@(λx. 
like(y, x))) : ((t → f) → f) → ((t → f) → f) → f In the case of the intransitive verb, we simply type raise the VP using the general type raising rule we have used on proper nouns before. For “likes”, it is a little bit more complicated. “Likes” is a transitive verb. That is, it is a function from an NP to a VP. So first, we have to strip away the outer lambda abstraction to get the VP body contained inside and then we mechanically apply the type raising rule on that VP term, changing λy. … into λS. S@(λy. …). Let’s have a look at our new type mapping. - S = f - N = t → f - NP = (t → f) → f - VP = NP → S - VT = NP → VP Much better! The VP type now actually makes sense linguistically, the functors line up with head constituents, the arguments of transitive verbs are now handled uniformly and we have even made the transitive and intransitive verbs look similar to each other. Minor Caveats If you try to switch the order of S@(λy. and O@(λx. in the lexical entry for “likes” (or any other transitive verb for that matter), you will generate a different representation for sentences which have quantifiers in both the subject and object. For example: - ∀x. (boxer(x) → ∃y. (woman(y) ∧ like(x, y))) : f - ∃y. (woman(y) ∧ ∀x. (boxer(x) → like(x, y))) : f These different formulas actually correspond to two different readings of the original sentence “Every boxer likes a woman” (scope ambiguity). The top interpretation is the original one and corresponds to the reading “For every boxer, there is some woman (possibly a different one for each) that he likes”, while the bottom interpretation corresponds to the reading “There is a (one) woman and every boxer likes her”. This ambiguity can be represented in our theory (maybe somewhat crudely) by providing two different lexical entries for transitive verbs, one for each permissible order of argument quantifiers. Finally, we should also do something about the FOPL variables we generate.
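Both raised verb entries, and the scope ambiguity just mentioned, can be checked with a small sketch (my own encoding; `likes2` is a hypothetical second lexical entry with the swapped order, not something the article names):

```python
boxer = lambda x: f"boxer({x})"
woman = lambda x: f"woman({x})"
every = lambda P: lambda Q: f"∀x. ({P('x')} → {Q('x')})"
a     = lambda P: lambda Q: f"∃y. ({P('y')} ∧ {Q('y')})"

# verbs raised over their NP arguments: VP = NP -> S, VT = NP -> VP
growls = lambda S: S(lambda x: f"growl({x})")
likes  = lambda O: lambda S: S(lambda y: O(lambda x: f"like({y}, {x})"))
# swapping S@ and O@ yields the other quantifier scope
likes2 = lambda O: lambda S: O(lambda x: S(lambda y: f"like({y}, {x})"))

surface = likes(a(woman))(every(boxer))
inverse = likes2(a(woman))(every(boxer))
print(surface)  # ∀x. (boxer(x) → ∃y. (woman(y) ∧ like(x, y)))
print(inverse)  # ∃y. (woman(y) ∧ ∀x. (boxer(x) → like(x, y)))
```

Whichever of S and O is asked to start building wraps its quantifier around the other's, which is all a scope reading is in this setup.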
Hard-coding x into the lexical entry for “every” and y into the one for “a” doesn’t seem like a good solution, if only because it precludes us from using the same quantifier in the arguments of one verb, e.g. as in “Every boxer likes every woman”.
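One common fix, sketched here with a gensym-style counter (my addition, not a device the article commits to), is to have each use of a quantifier draw a fresh variable name:

```python
from itertools import count

_ctr = count()
def fresh():
    return f"x{next(_ctr)}"            # x0, x1, x2, ...

boxer = lambda x: f"boxer({x})"
woman = lambda x: f"woman({x})"

def every(P):
    def np(Q):
        v = fresh()                    # a fresh FOPL variable per use
        return f"∀{v}. ({P(v)} → {Q(v)})"
    return np

likes = lambda O: lambda S: S(lambda y: O(lambda x: f"like({y}, {x})"))

s = likes(every(woman))(every(boxer))
print(s)  # ∀x0. (boxer(x0) → ∀x1. (woman(x1) → like(x0, x1)))
```

Now “Every boxer likes every woman” comes out with two distinct variables instead of one quantifier capturing the other's.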
https://homepages.loria.fr/JMarsik/2012/12/11/type-raising-and-the-cooperative-construction-of-meaning/
Payloads: Earth Radiation Budget Satellite (ERBS), 6 Getaway Specials (GAS), Canadian film company camera (IMAX), large format camera (LFC), Orbital Refueling System (ORS), Office of Space and Terrestrial Applications pallet (OSTA-3). Marc Garneau conducted 10 experiments in three main categories: space technology, space science and life sciences. The space technology experiments involve two areas: important development tests for the NRCC Space Vision System experiment to be flown on a mission in early 1986 and tests to determine the effect of exposure to space on different advanced composite materials. Each experiment was to be conducted on specific days of the mission. Besides conducting the experiments, Garneau did his share of housekeeping duties such as making meals and stowing and unstowing equipment. He also helped in any job that "needed three hands" such as taking notes for others and arranging for others to do the same for him. The space science studies deal with the physical characteristics of the space environment and of the earth's upper atmosphere. The life sciences component includes several experiments on human adaptation to space flight as preparation for the more detailed investigations on a mission in mid-1986. Personal hygiene also took its share of time - hand-washing takes twice as long in space as it does on Earth and shaving with an electric shaver, up to three times longer. Even sleeping is different in a space environment. In order to get his eight hours of sleep each mission day, he had to attach himself to some spot in the mid-deck with Velcro to keep from floating freely. Garneau used a combination of filling out checklists and speaking into a tape recorder to record the conditions under which the experiments were performed and their results. All the equipment he needed for the 10 experiments fitted into one locker about half the size of a legal-size filing cabinet drawer.
https://www.asc-csa.gc.ca/eng/missions/sts-041-g.asp
As the year 1984 approaches, science fiction author Isaac Asimov is invited to take part in a unique project. Thirty-five years earlier, you see, George Orwell’s 1984 had depicted the 1980s as a dystopian landscape. So now Asimov will actually begin to imagine what the world might be like in 2019 – and some of his visions still ring terrifyingly true. Born in the Russian region of Smolensk Oblast in 1920, Asimov emigrated to America with his family at the age of three. Then, some three years after they’d arrived, his parents purchased a little candy store. The family would buy and sell at least three more candy stores, too, and it was reportedly the newspapers stocked in these establishments that kickstarted Asimov’s love of writing. Yet the future writer’s early adventures in studying didn’t officially involve the creative arts. After attending public schools, in fact, Asimov earned a scholarship to Seth Low Junior College, an offshoot of New York’s Columbia University. And there, he began studying zoology – although a gruesome dissection soon inspired him to switch to chemistry. Asimov even completed a Ph.D. in 1948 in the latter subject.
http://scribol.com/anthropology-and-history/history/in-1983-isaac-asimov-imagined-the-world-in-2019-and-his-predictions-were-scarily-accurate/
OSPG Questions & Answers - The OSPG was established on June 1, 2019 through a regulation under the Professional Governance Act (PGA) to ensure consistency and best practices are applied in the governance of regulated professionals. The OSPG will be the centre of expertise in government for professional governance matters. For now, the Superintendent’s mandate includes the five regulatory bodies with registrants working in the natural resource sector and built environment. The scope of the PGA allows for the Superintendent’s mandate to expand to include other professions. - The OSPG oversees governance of the five regulatory bodies, administers the PGA, conducts investigations and audits, researches and develops best practices in governance, and takes compliance actions such as issuing directives. The OSPG’s mandate does not include oversight of individual professionals. - The five regulatory bodies are the BC Institute of Agrologists (BCIA), Applied Science Technologists and Technicians of BC (ASTTBC), College of Applied Biology (CAB), Engineers and Geoscientists BC (EGBC), and Association of BC Forest Professionals (ABCFP). - The Ministry of Attorney General currently has a $1 million annual budget for the OSPG and 8 staff. - The independent final report of the Professional Reliance Review recommended the creation of a new office to oversee professional regulation, develop best practices for professional governance, and standardize elements of professional governance through umbrella legislation. - Some of the elements of the PGA and oversight model were informed by issues observed with the existing model such as conflicts of interest and professional independence. - The legislation is putting a governance framework in place that follows international best practices and helps regulatory bodies to strengthen their role in protecting the public interest and improve public trust in professionals.
- The Superintendent has a duty to promote awareness among the regulatory bodies to support reconciliation with Indigenous peoples. - The Superintendent will do this by supporting implementation of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP). - The PGA requires regulatory bodies to make bylaws that establish education programs for their professionals to support informed engagement and reconciliation with Indigenous peoples. - The PGA enables practice rights to be granted to CAB, ASTTBC, and BCIA. - EGBC and ABCFP already have practice rights established under their current legislation and these rights will be preserved with the transition to the PGA. - A lot of work needs to be completed before establishing practice rights for agrologists, biologists and science technologists and technicians. - This will be a multi-year process that started with an early discussion with the public and with professionals through an intentions paper and a “What We Heard” report that followed. - The OSPG will continue to work closely with the regulatory bodies under the PGA to implement practice rights for biologists, agrologists and science technologists and technicians. - Regulatory bodies are currently canvassing their registrants, other professionals and the public on how to enable practice rights. - Requirements for competency and conflict of interest declaration will be set in regulation and may apply to all professional work, whether that work is regulated by government or not. - Input previously requested on this policy will be considered as policy is developed. - Once in force, firms would be required to register with a regulatory body, similar to how individual professionals are currently regulated.
- When the regulation of firms involves multiple professions, the intent is that multiple registration processes will be avoided so that multidisciplinary firms will only have to go through one registration process for all the professions under the PGA that they work in. - Registrant firms would be required to meet the requirements set out in provisions of the PGA that also apply to firms as well as the requirements set out by the regulatory body through its bylaws. - Regulatory development will consider models used elsewhere and best practices, for example, requiring firms to develop and submit management plans to regulatory bodies to show how they are meeting professional standards, and supporting their professional employees to meet continuous professional development requirements. - The regulation of firms will be implemented in phases. Engineers and Geoscientists BC (EGBC) will regulate engineering and geoscience firms first with a tentative start of summer 2021. - The other regulatory bodies will be enabled to regulate firms when ready and based on the experience gained from EGBC’s firm regulation program. - Regulated firms will be given some form of a permit to practice. - A requirement of registration will include mandatory training for delegates of the firms in how to prepare their required Professional Practice Management Plan and have their firm meet the ethical, quality management and continuing education requirements of registration. - The PGA for the most part has implications for the regulatory bodies. Interactions between regulatory bodies and individual registrants will mainly continue as before. - However, individual registrants may see changes in how they participate in the decisions of their regulatory bodies. They may see new or different standards related to professionalism, including updated codes of ethics, mandatory continuing education programs, declaration requirements, and an expanded duty to report.
They may also see changes to discipline processes. - The granting of practice rights may result in individuals being required to register with a regulatory body to carry out certain professional work. - The Superintendent will be able to receive applications from other professions for designation under the PGA, and to conduct investigations to determine if it is in the public interest for a profession to be designated under the PGA. - The Lieutenant Governor in Council is responsible for designating new professions following a recommendation by the minister. This authority is not yet in force, as designating new professions is intended for a later stage of implementation. - The Professional Governance Act establishes the OSPG under the Ministry of the Attorney General. - The full scope of responsibilities of the OSPG goes beyond those affecting matters related to the natural and built environments, and could include professions outside of these sectors. - Having the OSPG report to the Ministry of the Attorney General ensures it is at arm’s length from the ministries that employ and rely on those professions.
https://professionalgovernancebc.ca/ospg-questions-answers/
First a decision has to be made on whether differences from control values are treatment-related effects, or occur by chance. This involves consideration of such parameters as dose response, spurious measurements in individual parameters, the precision of the measurement under evaluation, ranges of natural variation and the overall biological plausibility of the observation. Those differences judged to be treatment-related effects are then evaluated further, to differentiate between those that are adverse and those that are not. This second step involves consideration of whether the effect is an adaptive response, whether it is transient, the magnitude of the effect, its association with effects in other related endpoints, whether it is a precursor to a more significant effect, whether it has an effect on the overall function of the organism, whether it is a specific effect on an organ or organ system or secondary to general toxicity or whether the effect is a predictable consequence of the experimental model. To arrive at an overall judgement in the interpretation of complex studies, it is important to apply a ‘weight of evidence’ approach that takes into account the criteria proposed in this report. The use of the structured scheme will contribute to improved consistency of individual study interpretation that is the foundation of reliable prediction of chemical hazard and risk. Finally, a standard set of definitions is proposed for the key terms such as NOEL and NOAEL that are frequently used to describe the overall outcome of a toxicity study.
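The two-step scheme described above can be caricatured as a decision procedure. This is purely an illustrative sketch — the criteria names are paraphrased from the text and are not an official ECETOC implementation:

```python
def is_treatment_related(obs):
    """Step 1: is the difference from controls a real effect, or chance?"""
    return (obs["dose_response"]
            and not obs["spurious_measurement"]
            and obs["outside_natural_variation"]
            and obs["biologically_plausible"])

def is_adverse(obs):
    """Step 2: among treatment-related effects, which are adverse?"""
    if obs["adaptive"] or obs["transient"]:
        return False
    return (obs["affects_overall_function"]
            or obs["precursor_to_more_significant_effect"]
            or obs["large_magnitude"])

def classify(obs):
    if not is_treatment_related(obs):
        return "chance finding"
    return "adverse" if is_adverse(obs) else "non-adverse"

finding = {"dose_response": True, "spurious_measurement": False,
           "outside_natural_variation": True, "biologically_plausible": True,
           "adaptive": False, "transient": False,
           "affects_overall_function": True,
           "precursor_to_more_significant_effect": False,
           "large_magnitude": False}
print(classify(finding))  # adverse
```

In practice, of course, each criterion is itself a weight-of-evidence judgement rather than a boolean, which is exactly why the report stresses expert interpretation over mechanical rules.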
http://www.ecetoc.org/publication/tr-085-recognition-of-and-differentiation-between-adverse-and-non-adverse-effects-in-toxicology-studies/
Author: Hon Paul J. Lewis | Date: 17 February 2021 Montserrat's Leader of the Opposition Hon Paul J Lewis presents a statement on the economic impact of lockdown, with solutions on how to assist the people of Montserrat. Full text shared here: "We are now in a lockdown here on Montserrat, where businesses are being forced to close, which means loss of livelihoods for many of our people. Our children’s education will also be impacted as some parents cannot do all that is necessary for their children’s education from home. Meanwhile, the cry for help from the wider public and from the business community is very loud. While it is clear that the tourism sector has been badly hit, many other sectors are suffering and feeling the negative effects of this COVID-19 lockdown, and the pandemic as a whole. Government of Montserrat’s mission and strategic plan must be to immediately bring into play relief and stimulus packages for all private sector businesses and affected individuals and families on island. The Government’s second stimulus application to the UK for more monies must include salary support for workers across all affected sectors along with universal assistance to businesses. We are also pressing the Government to provide this salary support to our workers, as opposed to partial selective sector grants. Salary support is needed now! Special arrangements and measures must be made to forestall and address areas of poverty that will burden our people. In their representation and negotiations for relief and stimulus for Montserrat, the approach from GoM must be seen to be from a people-first perspective. The FCDO indicated they are awaiting another poverty assessment and labour market survey. However, the Opposition persisted and argued strongly when we met with them, for immediate relief and stimulus in the absence of these two reports.
I can assure you the people, that the Opposition also reminded FCDO and The Government that a lockdown means lockdown - where schools are closed, businesses are closed, and workers are home with the exception of essential workers. Some of these workers at home will not be paid, as their employers cannot afford to do so. This will be a further crippling blow to our fledgling economy. These considerations bring us to some questions, for which the people need answers. The Opposition Team recently concluded a very positive meeting with the full Financial Assistance Team (FAM) of the Foreign, Commonwealth and Development Office (FCDO) along with the Governor of Montserrat, His Excellency Andrew Pearce. The Opposition argued for financial assistance for people and businesses affected by the shutdown. The Opposition equally insisted this should be an immediate response, similar to established practices in the United Kingdom. A number of immediate aid and strategic steps to help the people of Montserrat’s COVID-19 response were discussed, while also insisting that further assistance from the United Kingdom ought to be a priority, in order to lessen the blow of another COVID-19 lockdown on Montserrat. We expected the Government to have had a plan to immediately begin to help our people. This could have been done from possible use of EDF-11 COVID support funds, ECCB support funds, plus balances on the GoM accounts to support any short-term lockdown, while they waited for the United Kingdom to provide additional support. Since ‘lock-down means lock-down’ we expected support measures to have been available in any government plan, before any shutdown was announced. Shutdown is upon us and still we hear nothing from this Administration going into another week of lockdown. We call on the government to implement across-the-board support, covering all workers and businesses, in ALL sectors.
We need this support provided as soon as possible to avoid further economic deterioration and other social issues that may arise. In the meantime, we have also impressed upon London the need for strategic projects to support the medium and long-term survival of all sector workers in Montserrat. Whether it be relief or stimulus packages for Montserrat, this government and the United Kingdom need to do something for our people, and fast. I do hope the Premier will now come forward and tell the people what the general plan of action would be in this lockdown and going forward. Hopefully, lessons learnt from the first lockdown will mean that the government is better prepared and more organized in delivering a comprehensive relief and stimulus package this time around. We hope they do not make the mistake of buying food packages and using this to say that the economy is performing well. Their duty is to aid the people, and to do their utmost to keep the economy afloat until things change here on island. In closing, I call on all of us as citizens to act responsibly to protect ourselves and others. I call on the government to also act quickly to provide assistance and relief where necessary for ALL the people of this island.
http://www.mnialive.com/articles/montserrats-leader-of-the-opposition-hon-paul-j-lewis-statement-on-solutions-to-economic-impact-of-l
Supernova remnant Cassiopeia A glows with many colors in this composite image from NASA’s Great Observatories. Zoom into the jumbled strands of colors. What stories or pictures do you see? Leave a note below. Cassiopeia A, or Cas A, is the remnant of a star that exploded. Supernovae are the ultimate end of stars that are about ten times more massive than our Sun. When stars this big run out of hydrogen fuel, they quickly expand. Their great gravity, however, pulls the material back in toward the star, where it heats up very fast, creating a runaway nuclear fusion reaction. The star becomes unstable and explodes. As you explore the image, look for the different colors offered by images of each observatory. Astronomers used to think that the explosion scattered material evenly around the star. But knots and filaments show that material was ejected at different times and speeds. Spitzer imagery shows reddish warm dust in the outer shell of the supernova with a comfortable temperature of about 80 degrees Fahrenheit (27 degrees Celsius). Hubble Space Telescope imagery shows a fine yellow filament structure of warmer gases. Chandra imagery shows superhot gas in blues and greens. The hot gas was created when material ejected at high speed during the explosion slammed into the calm gas and dust surrounding the star. Look for the turquoise dot near the center of the image. This may be the neutron star created during the supernova. A neutron star is the hot and super-dense core of an exploded star. Some scientists believe that a black hole resides at the center of the remnant. Cas A lies about 11,000 light-years from the Earth toward the constellation Cassiopeia. Astronomers believe first light from the supernova reached Earth about 300 years ago. But no one on Earth seems to have seen it. Historians think that John Flamsteed may have noticed the star in 1680. Astronomers theorize that the massive star had ejected a dense bubble of dust that blocked light from the explosion.
Scientists discovered the supernova remnant in the 1940s because it is one of the brightest radio sources in the sky. No supernova has been visible in the Milky Way since.
http://www.starrycritters.com/a-bubble-of-many-colors/
The Gravitational Torsion Balance apparatus is a very accurate device for measuring the gravitational constant, G. Isaac Newton deduced the law of universal gravitation without having any evidence of an attractive force between everyday objects. In 1798, Henry Cavendish performed the torsion balance experiment with small masses to measure the gravitational constant G. He could then use it to find the mass of the earth. His measurements remained the most accurate for about 100 years. The pendulum is first aligned vertically using the leveling feet and a leveling sight. I set up the apparatus so that the pendulum mirror is facing a wall about 10 feet away and is normal to the plane of the wall. Using a laser reflecting off the mirror, I can use geometry to find the angular displacement and therefore measure things like torque and the period of oscillation by following the laser reflected on the opposite wall. The pendulum is then aligned so that at equilibrium, the mirror reflects in the same direction as the reflection off the glass plate in front of the mirror. This process takes many hours due to the long period of oscillation and the slow dampening of the torsion ribbon. After alignment I began the first of three methods. Method I: Measurement by Final Deflection: Starting with the equation of gravitational attraction between two masses one can derive the formula G = π² b² ΔS (d² + (2/5) r²) / (T² M d L), where ΔS is the distance between equilibrium points with masses at position I and II shown above, b is the distance between the centers of mass of the big and small masses in position I or II, d is the distance from the center of the small mass to the torsion axis in the pendulum chamber (used to find torque), r is the radius of the small mass, T is the period of oscillation, M is the mass of one of the large masses, and L is the distance between the mirror and the wall. ΔS and T are determined during the experiment, while the rest of the values can be measured directly.
In this method, I put the masses in position I and wait until the pendulum reaches equilibrium. I then measure the equilibrium position and switch the masses to position II. I then measured the period many times and waited until the pendulum reached equilibrium again and measured the new equilibrium position. With these measurements I calculated G to within 20% of the accepted value of 6.674 × 10⁻¹¹ N·m²/kg². This is a good measurement. Method II: Measurement by Equilibrium Position: In this method I use the same equation as above. Similarly, I measure ΔS and T and calculate G. First I let the pendulum reach equilibrium with the masses in position I again. As soon as I move the masses to position II, I measure the position of the laser on the wall (on a meter stick) every 15 seconds for 45 minutes to an hour. I repeat the process starting with the masses in position II and moving to position I. After plotting the data I could analyze the plots to find the two equilibrium positions and thus ΔS, as well as T. Plugging in these values I calculate a value of G which is within 20% of the accepted value and within 2% of the value measured in method I. This shows that the greatest error is systematic or in the calibration, while my different measurement techniques had smaller error. Method III: Measurement by Acceleration: In this method we measure the initial acceleration of the small masses on the pendulum. Relating the gravitational force on the small masses to their initial acceleration, and the acceleration to the motion of the laser spot on the wall, gives another expression for G. After the pendulum reaches equilibrium in position I, I move the masses to position II and measure the position of the laser every 15 seconds for about 2 minutes. By plotting position against time squared I used a linear fit and found the slope. The plot is below. Using this value I solve for G, which is not as accurate as my previous measurements. I believe the error here is from the speed of me moving the positions of the masses and measuring the initial acceleration. Any delay can lead to measuring a later (and slightly greater) acceleration which gives us a value of G which is too large. Overall my measurements gave very close values of the gravitational constant G.
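For a sense of scale, the Method I deflection formula can be evaluated on illustrative numbers — figures typical of a small torsion balance of this kind, not the measurements reported above:

```python
import math

# Illustrative values only (typical of a tabletop torsion-balance apparatus):
b  = 0.0465   # m, distance between centers of large and small masses
d  = 0.050    # m, small mass to torsion axis
r  = 0.00953  # m, radius of a small mass
T  = 500.0    # s, period of oscillation
M  = 1.5      # kg, mass of one large mass
L  = 3.0      # m, mirror-to-wall distance (~10 feet)
dS = 0.069    # m, shift of the laser spot between positions I and II

# G = pi^2 * b^2 * dS * (d^2 + (2/5) r^2) / (T^2 * M * d * L)
G = math.pi**2 * b**2 * dS * (d**2 + 0.4 * r**2) / (T**2 * M * d * L)
print(G)  # ~6.6e-11, close to the accepted 6.674e-11 N·m²/kg²
```

A centimeter-scale laser shift over a multi-minute period is all it takes to pin down a force constant this tiny, which is why the alignment and damping are so delicate.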
If I had to change anything, I would calibrate the equilibrium of the pendulum more accurately. I could only get the laser reflected from the mirror to within 5 cm of the laser reflected from the glass plate on the wall. Increasing this accuracy should produce better values of G. This process would take many more hours of guess-and-check work, rotating the zero-adjust knob and checking the equilibrium until perfect.
http://hank.uoregon.edu/wiki/index.php?title=Gravitational_Constant_Measurement&oldid=1658
These are exciting times for the memory hierarchy in systems. New kinds of DRAM and non-volatile memories are becoming available to system architects to enhance the performance and responsiveness of the applications that run upon them. One emerging technology that is gradually being adopted on a couple of server-class processors is transactional memory, which is an ingenious use of concepts already known to both processor cache and database designers, applied to main memory. At the highest level, transactional memory is a means of speeding up shared data processing by avoiding the overhead of applying and then freeing locks over the data that is being acted upon. And in today’s processing world, where compute performance is coming largely from parallel processing, such a function will become increasingly useful. But what is transactional memory really? To explain, we start with the notion of a transaction, and in this case, we mean a database transaction. A database is just a place to save information, but it is also a bit of a science in its own right. One of the concepts key to database management is that of a transaction, a unit of work which either produces all of its results successfully or produces nothing. Those are the only two choices. A transaction ensures that when you are transferring money from one bank account to another, either the money removed from one account shows up in the other or no money is removed from the source account at all. In database parlance, this is the atomicity part of the notion of ACID properties for databases: Atomicity, Consistency, Isolation, and Durability. Transactional memory supports a directly related, but slightly different, concept. More subtle and nuanced is ACID’s concept of isolation. At its most simple level, isolation says that until a transaction – we’ll call it T1 – has successfully reached its end, no other transaction – T2 – executing concurrently is allowed to see any partial results produced by T1.
T1’s results together become visible to other transactions like T2 at the end of T1, when T1 commits all of its results. Other transactions see either the entire result of the transaction T1 or they see nothing from T1; they don’t get to see partial results along the way. Although transactions like T2 can be executed during the same period of time as T1, isolation is a bit like ensuring that only one transaction is executing at a time. This, too, is a basic concept supported by transactional memory. Transaction isolation is nuanced in that a program using a database can trade off performance – in the sense of allowing multiple transactions to concurrently execute while touching the same data – versus the level of control over seeing changed data. For example, consider two isolation levels: - Read Committed ensures that a transaction T2 will not be allowed to see any results from a transaction T1 until T1 is done and has committed all of its changes. - Repeatable Read is stronger in its control: it says that if transaction T1 reads some data at any time during T1, T2 will not be able to change that data until T1 ends. The intent is to allow T1 to re-read the same data and see again what it had previously read. You can have this level of isolation, but it typically means that you execute slower or less in parallel. The essence of this is that, given multiple transactions accessing the same data in some way and desiring to execute at the same time, the isolation level dictates what each gets to access, and when. This, again, is part of transactional memory. For each of these attributes of transactional memory, we’ll be looking at how it is supported shortly. First, though, let’s take a look at what it is that transactional memory is buying us. I mean, if a transaction can execute just fine as is, why would we need transactional memory?
Of Locks (Or Not) We said that a transaction T1’s results – and there might be a lot of them – are not visible to other transactions until T1 has committed those results. OK, so how? What keeps the changed results invisible? Locking. A lock – a software construct – says “This is mine for the time being. If you want it too, you’ll need to wait around a while until I free up the lock.” There can be a lot of such locks applied during a transaction to the data of a database. Depending on the transaction’s complexity and isolation level, even a single transaction T1 can apply scads of locks within the bounds of T1. Any other transaction that conflicts with T1’s locks may have to wait to access the locked portion of the database. When T1 frees its locks, perhaps many of them, at the end of the transaction, at transaction commit time, only then is the data made visible and the waiting transaction(s) can see T1’s results. Not only is the performance of other transactions impacted, but – perhaps more importantly to transactional memory – it takes time to apply and then free such locks. Executing the notion of a transaction, and doing it without any such locks, is also one of the attributes of transactional memory. Before I go on, although the term transactional memory is based on all of the notions of database transactions outlined here, I need to quickly point out that transactional memory as implemented so far cannot support the complexity of many types of database transactions. You will see why shortly. It can support transactions which are more complex than the primitive atomic updates – which we will be looking at next – but not necessarily what some might think of as a rather typical database transaction. Finishing up here on locks, what locks are generally all about is holding off (i.e. pausing) the execution of an otherwise concurrently executing transaction until the lock conflict goes away.
Doing so requires that the transactions apply locks over what is accessed, even if the probability of such conflicts is remote. Again, such locks – and the resulting overhead – are required even if shared data conflicts are very rare; if a conflict can happen, lock protection is still needed. I mean, would you trust a bank that tells you it is 99.99 percent sure it won’t lose your money transfer? If there is even the slightest potential that database isolation and atomicity might be violated, locks are applied and later freed to ensure that the database’s ACID requirements are met. But suppose that you could ensure atomicity and your expectations on isolation without using locks. Suppose that your transaction – with hardware assistance for performance – could detect that isolation for your transaction was violated or that your transaction would violate the isolation of another. And detecting such, suppose that you knew enough to start over again, this time executing the transaction from beginning to end while still maintaining atomicity and isolation. That detection and informing of the need to restart is also one of the attributes of transactional memory. Cache And Primitive Atomicity We’ll be seeing one implementation of transactional memory shortly, but I first want to provide a conceptual building block. We are going to first show how cache is used to support primitive atomicity, and doing this without locks. When I say “primitive atomicity” here, I am referring to being absolutely sure that when multiple processors are concurrently updating a simple primitive variable like a 32-bit integer, each change will be made just as though each had been done at different times on the same processor. To get the point across, let’s look at what happens if this were not supported. Let’s assume that we have two processors simply adding the number 1 to a 32-bit integer in memory. When done by one processor after the other, the variable will have been advanced by 2.
Simple, right? Well, when done at the same time, the result still often appears as plus 2, but occasionally not; occasionally the result ends up being advanced only by 1. Not acceptable. The problem stems from the fact that the variable is not actually incremented in memory. At separate times, the processors - read the 32-bit integer from memory, - increment the 32-bit integer, and then each - stores the updated 32-bit integer back to its location in memory. If both read the same value, both incremented that same value, and both stored the same incremented value, the result would have been an increment by one, not two. You can see this in the following animation using MIT’s Scratch. The 4-byte integer of interest is held within a data block – say 128 bytes in size – which is read into the cache of both processors; the shared integer is read from and then returned to there. Remember, you want to see a result as though the operation is first done by one processor and then the other, with the second using the results of the first. In this animation, both processors end up storing the same value, something that can and does happen. Again, the problem is that your program expected to see that each processor individually had incremented the variable by one, as though done “atomically”. To ensure that, there is specialized support for such “atomic” operations. This special support – which gets requested in software as an “atomic” add – is built on top of the normal functions available in the cache. Go back to that previous animation and notice what happens with the cache line in processor P1 when P0 initiates its store; it gets invalidated on P1. Similarly, P1’s store invalidates (and copies) the data block in P0. Each store to a data block in the cache always ensures that the old values are not perceivable by other processors. After the store occurs, that newly stored value becomes visible to other processors when they subsequently re-read from that same data block.
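The lost-update interleaving described above can be replayed deterministically in a few lines of plain code. This is only a simulation of the interleaving – the "registers" are ordinary variables, not real parallel hardware:

```python
counter = 0          # the shared 32-bit integer, sitting in "memory"

# Step 1: both processors read the same value into their private copies.
p0_reg = counter
p1_reg = counter

# Step 2: each increments its own private copy.
p0_reg += 1
p1_reg += 1

# Step 3: both store back. The second store silently overwrites the first.
counter = p0_reg
counter = p1_reg

print(counter)  # 1, not 2 -- one increment was lost
```

Run the three steps strictly one processor at a time instead, and the counter ends at 2; it is only the overlapping read-read-store-store order that loses an update.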
Said differently, an exclusive (i.e., single-instance) version of that data block must exist on the processor doing the store (and nowhere else). As a result, the data block with our integer is invalidated on P0 when P1 does its store. In effect, P1 has stolen the data block away from P0. Again, this is completely normal operation in a cache-coherent SMP. Support for “atomicity” is built on this. What we want to have happen – even though both processors may be reading and incrementing the same value – is that both incremented values actually show up in memory; “atomicity” means that each increment – each read-modify-write – is done as though done one after the other. In order to support the atomic requirement, the hardware needs to detect – at the time of the attempt to store an incremented value – that the integer variable in the cache still contains the pre-incremented value. That is, whatever value P0 read from P0’s cached data block before the increment is still there in P0’s cache at the time of the store. Only then can the store associated with our atomic increment be allowed to succeed. Easy to say, of course, but how does the hardware know that the integer variable had not changed? The processor is not going to read it again. Instead, a processor can know this simply by also knowing that its cache still contains the same data block from which the pre-incremented value had been read. Said differently, a value can be read-modified-written atomically if no other processor has stolen the data block during that time. If processor P0 was guaranteed an exclusive copy of the data block throughout the entire primitive read-modify-write operation, its atomic increment will have succeeded. If not, if P1 had stolen the data block, P0 needs to restart the entire read-modify-write, continuing this until it succeeds. You can see in the MIT Scratch animation mentioned above both unsuccessful and successful attempts at atomic updates.
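The detect-and-restart rule just described – the store succeeds only if the location still holds the value originally read, otherwise the whole read-modify-write is retried – can be modeled in software as a compare-and-swap loop. This is a sketch of the idea, not the hardware mechanism itself; the "stealing" processor is simulated here by a one-time interfering write:

```python
def compare_and_swap(cell, expected, new):
    """Store `new` only if the cell still holds `expected`.
    Models the hardware check that the cached data block was not stolen."""
    if cell[0] == expected:
        cell[0] = new
        return True
    return False

def atomic_increment(cell):
    """Read-modify-write, retried until the store succeeds."""
    attempts = 0
    while True:
        attempts += 1
        old = cell[0]           # read
        new = old + 1           # modify
        if attempts == 1:
            cell[0] = old + 5   # simulate another processor stealing the block
        if compare_and_swap(cell, old, new):  # write, only if unchanged
            return attempts

cell = [10]
tries = atomic_increment(cell)
print(cell[0], tries)  # 16 2
```

The first attempt fails because the value changed underneath the increment; the second attempt re-reads the stolen-and-modified value and succeeds, exactly the retry behavior the hardware provides.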
Notice that, if not for P1 stealing and then modifying the data block first read by P0, P0 would have succeeded in doing an atomic update on its first try. But because P0 no longer had the original data block, P0’s store failed; it needed to re-read the data block from P1, with the same integer now modified by P1, in order for its own atomic update to succeed. In this simple example, what might appear in a program as a simple increment is, in the hardware, actually multiple distinct operations. Somehow, building on normal cache coherency, these multiple operations were able to be perceived by the program as one single atomic operation. It is this that is the beginning of transactional memory. The “transaction” we just looked at just happens to consist of only a single atomic increment. In the next part of this analysis of transactional memory, we will discuss what happens when you start juggling lots of transactions.
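Juggling more than one location is where transactional memory proper begins. A toy software sketch of the same detect-and-restart idea applied to a two-variable transaction – entirely hypothetical code, not how any real processor or database implements it – tracks a version per variable, executes optimistically, and commits only if nothing it read has changed:

```python
class Var:
    """A shared variable with a version counter bumped on every commit."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def transfer(src, dst, amount):
    """Optimistic transaction: read, compute, then validate-and-commit.
    In a concurrent setting the validation can fail, triggering a restart
    from the top; single-threaded, it commits on the first pass."""
    while True:
        snap = [(src, src.version), (dst, dst.version)]
        new_src = src.value - amount
        new_dst = dst.value + amount
        # Commit only if every variable still holds the version we read.
        if all(v.version == ver for v, ver in snap):
            src.value, dst.value = new_src, new_dst
            src.version += 1
            dst.version += 1
            return

a, b = Var(100), Var(0)
transfer(a, b, 30)
print(a.value, b.value)  # 70 30
```

Either both updates land or neither does, and a conflicting writer would force the whole transfer to re-execute – the atomicity and restart properties discussed above, without a single lock.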
https://www.nextplatform.com/2015/11/10/transactional-memory-comes-to-the-storage-hierarchy/
Q: Breaking down a matrix so that every process gets its share of the matrix using MPI. I am fairly new to using MPI. My question is the following: I have a matrix with 2000 rows and 3 columns stored as a 2D array (not contiguous data). Without changing the structure of the array, depending on the number of processes np, each process should get a portion of the matrix. Example: A: 2D array of 2000 rows by 3 columns, np = 2, then P0 gets the first half of A, which would be a 2D array of the first 1000 rows by 3 columns, and P1 gets the second half, which would be the second 1000 rows by 3 columns. Now np can be any number (as long as it divides the number of rows). Any easy way to go about this? I will have to use FORTRAN 90 for this assignment. Thank you. A: Row-wise distribution of 2D arrays in Fortran is tricky (but not impossible) using scatter/gather operations directly because of the column-major storage. Two possible solutions follow. Pure Fortran 90 solution: With Fortran 90 you can specify array sections like A(1:4,2:3), which would take a small 4x2 block out of the matrix A. You can pass array slices to MPI routines. Note that with current MPI implementations (conforming to the now old MPI-2.2 standard), the compiler would create a temporary contiguous copy of the section data and would pass it to the MPI routine (since the lifetime of the temporary storage is not well defined, one should not pass array sections to non-blocking MPI operations like MPI_ISEND). MPI-3.0 introduces a new and very modern Fortran 2008 interface that allows MPI routines to directly take array sections (without intermediate arrays) and supports passing of sections to non-blocking calls.
With array sections you only have to implement a simple DO loop in the root process:

INTEGER :: i, rows_per_proc, start_row, end_row

rows_per_proc = 2000/nproc
IF (rank == root) THEN
  DO i = 0, nproc-1
    IF (i /= root) THEN
      start_row = 1 + i*rows_per_proc
      end_row = (i+1)*rows_per_proc
      CALL MPI_SEND(mat(start_row:end_row,:), 3*rows_per_proc, MPI_REAL, &
                    i, 0, MPI_COMM_WORLD, ierr)
    END IF
  END DO
ELSE
  CALL MPI_RECV(submat(1,1), 3*rows_per_proc, MPI_REAL, ...)
END IF

Pure MPI solution (also works with FORTRAN 77): First, you have to declare a vector datatype with MPI_TYPE_VECTOR. The number of blocks would be 3, the block length would be the number of rows that each process should get (e.g. 1000), and the stride should be equal to the total height of the matrix (e.g. 2000). If this datatype is called blktype, then the following would send the top and bottom halves of the matrix:

REAL, DIMENSION(2000,3) :: mat

CALL MPI_SEND(mat(1,1), 1, blktype, p0, ...)
CALL MPI_SEND(mat(1001,1), 1, blktype, p1, ...)

Calling MPI_SEND with blktype would take 1000 elements from the specified starting address, then skip the next 2000 - 1000 = 1000 elements, take another 1000 and so on, 3 times in total. This would form a 1000-row sub-matrix of your big matrix. You can now run a loop to send a different sub-block to each process in the communicator, effectively performing a scatter operation. In order to receive a sub-block, the receiving process could simply specify:

REAL, DIMENSION(1000,3) :: submat

CALL MPI_RECV(submat(1,1), 3*1000, MPI_REAL, root, ...)

If you are new to MPI, this is all you need to know about scattering matrices by rows in Fortran. If you know well how the type system of MPI works, then read ahead for a more elegant solution. (See here for an excellent description of how to do that with MPI_SCATTERV by Jonathan Dursi. His solution deals with splitting a C matrix in columns, which essentially poses the same problem as the one here, as C stores matrices in row-major fashion. The Fortran version follows.)
You could also make use of MPI_SCATTERV, but it is quite involved. It builds on the pure MPI solution presented above. First you have to resize the blktype datatype into a new type that has an extent equal to that of MPI_REAL, so that offsets in array elements can be specified. This is needed because offsets in MPI_SCATTERV are specified in multiples of the extent of the datatype specified, and the extent of blktype is the size of the matrix itself. But because of the strided storage, the two sub-blocks start only 4000 bytes apart (1000 times the typical extent of MPI_REAL). To modify the extent of the type, one would use MPI_TYPE_CREATE_RESIZED:

INTEGER(KIND=MPI_ADDRESS_KIND) :: lb, extent

! Get the extent of MPI_REAL
CALL MPI_TYPE_GET_EXTENT(MPI_REAL, lb, extent, ierr)
! Bestow the same extent upon the brother of blktype
CALL MPI_TYPE_CREATE_RESIZED(blktype, lb, extent, blk1b, ierr)

This creates a new datatype, blk1b, which has all the characteristics of blktype, e.g. it can be used to send whole sub-blocks, but when used in array operations, MPI would only advance the data pointer by the size of a single MPI_REAL instead of by the size of the whole matrix. With this new type, you can now position the start of each chunk for MPI_SCATTERV on any element of mat, including the start of any matrix row. Example with two sub-blocks:

INTEGER, DIMENSION(2) :: sendcounts, displs

! First sub-block
sendcounts(1) = 1
displs(1) = 0
! Second sub-block
sendcounts(2) = 1
displs(2) = 1000

CALL MPI_SCATTERV(mat(1,1), sendcounts, displs, blk1b, &
                  submat(1,1), 3*1000, MPI_REAL, &
                  root, MPI_COMM_WORLD, ierr)

Here the displacement of the first sub-block is 0, which coincides with the beginning of the matrix. The displacement of the second sub-block is 1000, i.e. it would start on the 1000-th row of the first column. On the receiver's side the data count argument is 3*1000 elements, which matches the size of the sub-block type.
“Loaf is all you knead!” – a smiley-face pun which Head Chef Federico Carnevale claims he’s well known for quoting. Federico, who heads up a small team at the Attria wine bar restaurant in Richmond, Melbourne, has been baking bread “for a very long time”, and has travelled around the world exploring different food cultures and ingredients. But growing up in Italy, his relationship with food was not always so positive. “I was born in Rome and grew up in a small countryside cottage. My journey as a chef only began when I was 21. However, it completely changed my entire outlook. When I was much younger, the connection I felt with food was not always good. I was overweight from eating snacks and unhealthy food. Bad eating habits were my ‘friend’ for a long time. It wasn’t until I became a teenager that I decided to approach food in a different way.” Time-honoured Italian ‘cooking bibles’ Federico’s journey started modestly – as a kitchen hand in a small restaurant in his local area. At the same time, the aspiring chef attended cooking school. “I was also busy studying at home the time-honoured Italian ‘cooking bibles’,” says Federico, “such as ‘Il Talismano Della Felicita’ (Talisman of Happiness), ‘Cucchiaio D’argento’ (The Silver Spoon), ‘Larousse Gastronomique’ (Encyclopaedia of Gastronomy), etc.” After formal school education was completed, Federico began his chef’s apprenticeship in a prestigious restaurant in Rome. Two years later, he gained the position of senior chef de partie. Great new learning experience for me Just like many a young chef at this early stage in their career, Federico decided it was time to explore culinary pastures new. He packed his chef’s whites and headed down under… to Sydney, Australia. “My first kitchen was at Baccomatto Osteria” – a one-hatted, upmarket Italian cuisine and cocktails venue in the heart of Surry Hills. “I worked under the guidance of Head Chef Valerio Boncompagni, known for his signature Roman ragu mixed with rice and mozzarella.
He introduced me to Italian-Australian culture and cuisine, which is far removed from the pure Italian cooking I had always known. It was a great new learning experience for me.” Nevertheless, a young chef is often restless to be on the move from kitchen to kitchen, exploring the world of cooking. So it was for Federico, too. “After a few months I moved to Melbourne, where I worked in a variety of venues, from breakfast eateries to bistros and fine dining. I was hungry to expand my knowledge of Australian culinary culture as much as possible – and achieve the position of head chef!” Great mentors and travelling the world Federico has already mentioned the importance of a first mentor in his young chef’s journey. There would be more. And just like for the many chefs we interview at Life on The Pass, a good and positive influence can open up a whole new world of cooking possibilities. “Along the way I was lucky enough to work with some great mentors”, says Federico. “I have to mention Francois Bouvier (Cosi Restaurant, South Yarra, Melbourne), Noah Crowcroft (Red Spice Road & Rice Paper Scissors, Melbourne), Telina Menzies (Australian Venue Co.) and Hayden McMillan (Neptune, Melbourne)”. Federico was not always working in Melbourne throughout this period. “For some 12 years, I also did a lot of travelling outside of Australia, so I could learn and explore different food cultures and ingredients. I was fortunate in that I was able to sample so many wide-ranging traditional cooking methods and techniques – from Africa to Norway! They have heavily influenced my understanding and knowledge of cooking ever since”. Which naturally brings us to Federico’s own cooking philosophy today. Aroma of bread just out of the oven gives me mental calmness We discover that his early boyhood struggles with food, and even a period of suffering anorexia early in his career, were key to first helping him form a positive attitude.
“Slowly I started to develop an equilibrium between my work, my life and my relationship with food”, says Federico. It would stand him in good stead when managing the stress and pressures of life on the pass, which we discuss later in this interview. Federico is keen to share his “long-time, real deep love of baking bread” and “constantly researching and experimenting with new combinations of flavours”. His ‘loaf is all you knead’ pun mentioned earlier is no coincidence. “In the bread, there is a magic that I cannot describe”, says Federico, “but that aroma – when it just comes out of the oven – it gives me mental calmness”. Naturally, it’s no coincidence that Federico’s cooking approach “is mostly European, with Italian-French techniques combined with influences from around the world. One of my favourite sauces is a ‘salsa verde’ that I learned during one of my travels in Africa”. I try to develop good, friendly relationships with all my suppliers At the same time, and like nearly all of the chefs we interview, the big focus is on sourcing local ingredients wherever possible. Federico says he tries to “include what Australian small producers can offer in combination with particular products imported from all over the world”. It’s a way of cooking that Federico has particularly loved since the Attria Wine Bar & Restaurant became “home” some two years ago. “I try to develop good, friendly relationships with all my suppliers, and we talk daily to have updates on what is available on the market. One of our dearest friends is Toolunka Estate, South Australia, where I source premium olives, and I also work in close relationship with suppliers of small foraged items. Our bread and baking are produced daily using small suppliers for flour, and using my levain sourdough starter – lievito madre – which is 92 years old”.
I let the ingredients drive my creativity Federico says he was first attracted by a “simple, clean concept of cooking” at a time when it was not yet really popular in the industry. “My menu at Attria is mainly seasonal, where I let the ingredients drive my creativity. I like to challenge myself studying, testing and tasting ingredients not often used, and creating new combinations of flavours.” “I like to let the ingredients express themselves without too much manipulation, especially the vegetables”, adding, “at the same time I love techniques like slow infusion and pickling to develop new combinations of flavours or maximise the original flavours of the ingredients themselves. I’m a huge fan of vinegars, and I love to infuse them with different flowers and native herbs. I’m also a fan of peppercorns and mustards. In one of my current dishes, I use a cubeb pepper from Afghanistan, or handmade tarragon mustard for our main course”. Working together for a goal, and encouraging open dialogue Federico agrees that life on the pass, and managing mental wellbeing in a high-pressure environment, is most important. “Working in a kitchen and finding ‘simple ways’ to cope may not always be so easy. My work ethic is really strong, but I try to have my team working a maximum of 42 hours per week with daily breaks so they can release the pressure. I try to create the connections that we all develop between staff for working together towards a goal, and encourage open dialogue and honest confrontation. During the pandemic, we all worked together as a team to survive the multiple lockdowns and kept the business growing in a tough period. I also involve my team in the creative process of menu development with special dishes, to let them express themselves too. I’m the first one to remember my hunger for developing myself and showing my idea of cuisine when I was a chef de partie or sous chef. So I believe it’s only right to let my team express their ideas”.
Even away from the pass, loves to cook and bake Away from the challenges of the kitchen, Federico says he “loves going off-road” with his mountain bike, or “a drive in the country combined with a picnic or wine tasting. I love contemporary art too, so I attend a lot of events and exhibitions”. Typically, even away from the pass, Federico also loves to cook and bake at home – a lot! “Usually one of my weekly days off starts at the market, where I meet my suppliers, buy some ingredients and play around with them at home – in a relaxing environment for a change – with some good rock playing in the background”. Federico also maintains his lifelong passion for travelling. “Every 18-24 months, I try to take some time off from the kitchen to travel around the world to continue my journey of discovery”. Don’t be in a rush to reach the top of the mountain Which neatly brings us to the advice Federico would offer young chefs at the start of their journey of discovery. According to the well-travelled Federico: “Don’t be in a rush to reach the top of the mountain – enjoy the journey and try to collect as much experience as possible. Early on in my career, a wise chef once told me ‘this is not a sprint, it’s a marathon’. Yes – the hunger to learn and develop is really important, but we also need to learn our limits and accept them”. Many a chef we interview endlessly praises all the wise chefs and work colleagues who are so important in supporting their career. Federico pays tribute to “all my mentors that made me grow as a chef, and a person, to make me become the person I am today. Last but not least, of course, I would like to say a massive thank you to all my colleagues that work with me every day – and Paul Butera, owner of Attria, for his endless help and support in the last 2 years in supporting our growth and development”.
https://www.lifeonthepass.com/featured-chefs/chef-federico-carnevale-journey-of-discovery/
Trusted Chefs in my Kitchen When artists took apprenticeships in years past, they were required to copy, copy, copy the paintings, sculptures, techniques and styles of the great artists that preceded them. It was all about practice, about familiarizing your brain, your hand and your approach with proven gems---artwork from famed, revered artists. Sometimes this would take years of painful step-by-step learning. Copying this artist and the next, studying brush strokes and mediums, learning about genres, cultures, influences and historical context. And until the upcoming artist truly had mastered and finally appreciated the presence and depth and sheer genius of what went before, they were rarely, if ever, encouraged to create their own art. I did not go to culinary school, but I have been studying great 'chef' artists through the years by engaging cookbook after cookbook. My apprenticeship, so to speak, was to copy recipe after recipe of one cook and then another, and then another, until I began to have a sense of technique, science and art in the kitchen. I cooked my way through different traditions and styles, philosophies and fortes. There are so many talented chefs out there, some recognized and some not. Though I have learned a lot, I have much more to learn. Even now, I rely heavily on cookbooks, on chefs that are close to my heart both with their kitchen philosophy and their culinary talent. I choose culinary greats to be a part of my kitchen, chefs that I can trust to be benchmarks, wizards and gurus that regularly prove their talent and maintain a welcome in my kitchen. But not just any chef will do. Just as it is important for you to pick a doctor, therapist, friend and/or teacher that are well-suited to you, so it goes with picking chefs that most reflect your kitchen style, presence and preferences. Because I am not a professional chef, you may wonder how I go about selecting my 'chef' teachers. 
It is quite simple: I peruse their cookbook, pick 10 recipes that sound good to me and I make them, one by one. I follow their lead, learning about their methodology, kitchen philosophy and favorite foods to engage in the kitchen. Truth be told, after about 10 recipes I have a pretty good sense of a chef via their cookbook. Would I make the recipes again? Did my family like the recipes? Were the recipes exotic or simple? What are their ideas around entertaining? And then I use my own kitchen philosophy as a filter: Is it practical? Is it a good use of simple ingredients? Is it overly complicated or just about right for the end result? Can I count on this chef to deliver great recipes every time or is it more hit and miss? Are the recipes a good springboard for creating my own? I welcome my chefs as trusted friends, teachers who gave me my start and still give me ongoing inspiration in my kitchen. Their techniques are now mine, my own recipes have traces of theirs, our kitchen philosophies are permanently commingled. What cookbooks do you love? Ask yourself why you like each book; you will begin to unpack your own kitchen philosophy, and form your filter for future chef approvals. I may like a cookbook that you don't prefer, or pass along a cookbook that the next person finds endearing. I may be looking to learn about specific methods (tagine?) or traditions (Indian, Spanish?); my bent toward learning new things propels me to keep reaching for new cookbooks, new chefs. But there are times in my life that I go back to the basics, stick to practical or don't feel a creative urge. There are chefs to befriend you and I in all the varying chapters of our lives. The important thing is to find chefs that you can trust, bookie foodie friends that will lie atop counters and enjoy regular spatters from the nearby saucepan.
http://www.talkoftomatoes.com/blog/2006/11/trusted-chefs-in-my-kitchen
Authority identifiers: Wikipedia; VIAF: 73482548; LCCN: no99008694; ISNI: 0000 0000 5888 8656; MusicBrainz: 9a564dfe-a4ed-4daa-9323-9c101c4c23bc; NKC: mzk2014808148
External links: Detailed biography: International Who's Who in Music and Musical Gazetteer (p.280); unsung composers, including Who's Who entry. See also MusicSack.
Miscellaneous information: Husband of pianist Katharine Goodson (1872–1958) from 1903 until his death. (Also: his first symphony is in B♭, not B major as claimed in the link above. N.B. in German notation, "B" is B-flat; "H" is B-natural.)
Born in Beckenham, Kent, Hinton studied violin with Prosper Sainton and Sauret, and composition under F. W. Davenport, at the Royal Academy, London. In the late 1890s he studied with Rheinberger in Munich, where his symphony was performed. He later went to Vienna and Rome. His works were played in London from 1900.
Compositions (10) by Hinton, Arthur. The following 10 pages are in this category, out of 10 total.
Bolero (Hinton, Arthur)
Piano Concerto, Op.24 (Hinton, Arthur)
Piano Quintet, Op.30 (Hinton, Arthur)
Piano Trio, Op.21 (Hinton, Arthur)
Pieces for Piano, Op.29 (Hinton, Arthur)
Rhapsody, Op.23 (Hinton, Arthur)
Suite for Violin and Piano, Op.20 (Hinton, Arthur)
A Summer Pilgrimage (Hinton, Arthur)
Violin Sonata (Hinton, Arthur)
Weisse Rosen (Hinton, Arthur)
https://imslp.org/wiki/Category:Hinton,_Arthur
Pressure head equation

How do you calculate head pressure?
Divide the depth in inches by 27.71 inches/psi, or the depth in feet by 2.31 feet/psi; these are the English-unit conversion factors. The result is the water head pressure expressed in psi.

What is pressure head in Bernoulli’s equation?
The “head” form of the engineering Bernoulli equation is obtained by dividing the energy form throughout by the magnitude of the acceleration due to gravity, g. For a pump, the head developed by the pump, h_p, is always positive.

What is the unit of pressure head?
According to EN 12723, pressure head is the pressure energy in a fluid under static pressure (p) per unit weight. Pressure head is expressed in metres (m).

How do you calculate head pressure in meters?
Convert pressure to feet of head or meters of head: one psi = 2.31 ft of water. For example, 100 psi x 2.31 = 231 feet of head. To convert pressure in bar or kg/cm2 to meters of head, multiply the pressure by 10. 1 kg/cm2 = 10 meters of water = 1 bar.

What does pressure head mean?
In fluid mechanics, pressure head is the height of a liquid column that corresponds to a particular pressure exerted by the liquid column on the base of its container. It may also be called static pressure head or simply static head (but not static head pressure).

What causes pressure in the head?
Most conditions that result in head pressure aren’t cause for alarm. Common ones include tension headaches, conditions that affect the sinuses, and ear infections. Abnormal or severe head pressure is sometimes a sign of a serious medical condition, such as a brain tumor or aneurysm.

What is the velocity head formula?
If u is the velocity and g the acceleration due to gravity, the velocity head is defined as u²/2g. If the flow within a pipe, channel, or duct is uniform over a given cross-section, this definition of the velocity head applies to the entire cross-section.

What is meant by piezometric head?
The pressure that exists in a confined aquifer. Specifically, it is the elevation above a datum plus the pressure head.

What is the difference between pressure and pressure head?
By definition, “head” is a measure of energy; its units per unit weight of fluid are feet (or meters). “Pressure” is a force applied against a unit of area, such as a pound of force applied to a square inch of area (psi).

How do you calculate pressure?
Pressure and force are related, so you can calculate one if you know the other by using the physics equation P = F/A. Because pressure is force divided by area, its meter-kilogram-second (MKS) units are newtons per square meter, or N/m².

How do you calculate total head?
Total dynamic head = elevation head + friction head loss + pressure head. Elevation head is the vertical distance through which the water must be pumped: the elevation difference in feet between the pumping level in the well and the pressure tank.

How many PSI is a foot of head?
The relationship between PSI and feet of head is that 2.31 feet of head = 1 PSI. Translated, that means that a column of water that’s 1 inch square and 2.31 feet tall will weigh 1 pound. Or, a one-foot column of water that’s 1 inch square weighs 0.433 pounds.

How many PSI is a meter of head?
https://estebantorreshighschool.com/useful-about-equations/pressure-head-equation.html
What mental condition does cognitive dissonance theory explain, and how could it, or does it, affect your ability to reason? Reconciling Opposing Points of View Religious ideologies are the primary source of conflict in our modern world. If you hold biased opinions, the mind battles to reconcile differing ideas. This conflict causes mental dysfunctions and physical pain. It also hampers our ability to use reason and common sense. The mind reacts to protect its sacred ground of beliefs, regardless of validity or accuracy. To resolve a conflict of ideas, we must be clear-headed and open-minded. Some people experience physical distress when attempting to resolve differing points of view. Cognitive Dissonance theory is the name given by mental health researchers to this condition. When someone encounters opposing points of view, they must make a choice. To reconcile the new information, you must either reject the new data or change your existing worldview to align with the additional information. When someone experiences this condition, it can be terrifying. It shakes the foundation of your reality. When something doesn’t fit into the existing worldview, it can cause various mental and physical symptoms. Cognitive Dissonance Theory This theory explains the effects of attempting to reconcile opposing ideas. It also helps us understand how someone can remain in continual emotional stress for long periods. If they don’t connect their symptoms and the dilemma, it can last for years. This condition results from constant exposure to self-hypnosis and group hypnosis manipulation techniques. It makes people reject new ideas and ignore facts that conflict with their beliefs. This way, they escape the discomfort of considering anything that contradicts their current worldview. Cognitive dissonance can spread through populations. We see it happen during times of war, famine, riots, or any significant social discord. 
It’s your mind telling you that your belief system isn’t working for you or your society anymore. Maintaining resistance to facts that expose the fallacies of beliefs causes intense discomfort. This conflict is harmful to your health and wellness. Ignoring scientific facts and logic that contradict your beliefs has consequences. It affects your ability to apply logical reasoning in other areas of your life. The Healthy Response to New Data People in a healthy mental state can reconcile opposing views rather quickly and with much less distress. An open-minded person will not automatically reject new data. Instead, they validate the facts. They use common sense to analyze the opposing viewpoints. They test whether to accept or reject the new data by applying critical thinking. An open-minded person will change their opinions about reality as they encounter further valid information. It’s a valuable skill set to have in a rapidly changing world. Changing your mind when you confront new data is critical to your overall health and wellness. An Unhealthy Response to New Data An unhealthy worldview has common characteristics. It contains harmful bias and prejudice, and it is rigid and inflexible. A person with a narrow worldview will suffer from long periods of this condition because the world is changing so rapidly. There is a lot of information available. So, they are constantly running into data that conflicts with their narrow worldview. Cognitive dissonance theory helps us understand why some people automatically reject new data or data from some sources. They learn to reject anything that conflicts with their worldview and mindset. But, not every worldview has this issue, only those that use mythology and superstition as their primary source. Since they cannot reconcile opposing views, they try to avoid conflict so that they avoid the negative symptoms. People who use this strategy come from highly religious backgrounds.
People with rigid religious beliefs place mythology above science. They accept the opinions of those who parrot their views as if what they say is factual or valid. It’s dangerous for them and anyone in their social circle. We see this play out when people think their freedom is infringed upon when asked to wear masks during the pandemic. They confuse public health with their rights. Placing others in jeopardy because of ignorance and negligence is not merely selfish; it is a public health risk. Yet if you ask these same people if they wear a seatbelt in their car or use the toilet instead of defecating on the sidewalk, they will say, of course, I wouldn’t do that. Avoiding opposing viewpoints is a learned strategy that confuses prejudice with choice, enabling believers to cloak their bias. But now, you can see through this sham. After all, a religious experience is not a measure of truth. Nor should it be the basis to violate proven health and safety measures. Flexible Religious Paradigms Not all religions are inflexible. Some do not have strict boundaries on thinking. For example, Taoism and Paganism have very few constraints, if any. With these systems, you are free to explore and develop your path. People with these or Atheistic backgrounds are least likely to suffer from cognitive dissonance. They are more likely to change their minds when they confront opposing viewpoints. It doesn’t mean they all agree on the same things. Inflexible Religious Paradigms Religions that rely on mythology and superstition as a basis for their systems are inflexible. So, their members will often have this mental condition because they will frequently encounter facts opposing their worldview. We see this to be the case with those who follow the Abrahamic religions (1). These are the Western organized religions of Semitic origin: Christianity, Islam and Judaism. These religions are rigid and inconsistent and have fought against almost every scientific advancement.
They killed and imprisoned early scientists for suggesting that the sun, and not the earth, was the center of the solar system. And this fight continues today against human rights and medical treatments such as vaccines. They promote negative and regressive social practices, including genocide, discrimination, and ethnic and gender bias. Unfortunately, this thinking is part of many religious-based cultural systems. People use self-hypnosis and group hypnosis to indoctrinate and maintain strict beliefs. Parents and trusted spiritual authorities pass down lifestyle, religious, political, and personal opinions. These become deeply ingrained in the way we perceive life and conduct ourselves in it. These institutions influence many cultures, even if you are not a follower. People with deeply held religious views live in this stressful mental condition. They become used to friction and conflict. Thus, they are more likely to act with violence to protect their beliefs. It doesn’t matter if the new information is accurate. What matters is that they maintain their boundaries. Otherwise, it causes headaches and other physical pain. It’s why this mental condition is called a religious headache. Cognitive Dissonance Theory and the Narcissist On the surface, these two things don’t sound similar. A Narcissist is a person with an unhealthy admiration of themselves and their abilities. They are self-absorbed and often abuse their relationships with others to bolster their importance. People living for lengthy periods under the strain of this mental disorder will find ways to substantiate their views. It doesn’t matter how much evidence there is against their beliefs. The need to believe and belong to a particular group will cause a disintegration of their personality into unhealthy coping strategies. So, it’s not surprising that cognitive dissonance and the Narcissist go hand in hand. If you find one, you are likely to find the other.
Mental health runs on a continuum, from healthy and vibrant to unhealthy. Healthy people exhibit the traits of kindness and compassion. At the opposite end are those who exhibit unhealthy traits like greed and selfishness. Farther down the unhealthy continuum are those with extreme abnormal traits associated with anti-social personality disorders like narcissism. (2) Overcoming Cognitive Dissonance When we find ourselves in this situation, it forces us to choose between the ideas causing the conflict. It can enable us to shed outdated belief systems that separate us and cause conflict, or it can cause us to reject any new idea in favor of the existing paradigm’s beliefs. Unfortunately, people will defend their current belief system even if it is rampant with error and inconsistencies. They do this because belief systems trigger the fight, flight, or freeze reaction. It creates an underlying current of anxiety. In this state, it becomes easy to engage these primitive emotional responses. “The only position that leaves me with no cognitive dissonance is atheism. It is not a creed. Death is certain, replacing both the siren-song of Paradise and the dread of Hell. Life on this earth, with all its mystery and beauty and pain, is then to be lived far more intensely: we stumble and get up, we are sad, confident, insecure, feel loneliness and joy and love. There is nothing more, but I want nothing more.” ― Christopher Hitchens, The Portable Atheist: Essential Readings for the Nonbeliever So, people with cognitive dissonance find themselves in a constant state of emotional turmoil. Living in this state makes one susceptible to groupthink manipulation and propaganda. As a result, some unscrupulous leaders use this to maintain their control over their members. Here’s how to fight your way to the facts. First, take a breath. Engage in proper self-care. Prepare yourself emotionally and physically.
1) Enhance Your Critical Thinking Ability The quality of our thinking determines our ability to discern facts from fiction. Beliefs are preconceived points of view based on arguments. An argument is a selling tool. It acts as a filter to justify our beliefs. It doesn’t matter if what we believe is accurate or correct. As long as it aligns with our worldview, we must be right. Everyone uses arguments containing something that supports their conclusions and worldview. If the foundation of your worldview is facts, then your decisions will be based on facts. If mythology and superstition are the basis of your premises, then they will determine your conclusions. “Because of propaganda induced cognitive dissonance, most people hate themselves and don’t even know it.” ― Bryant McGill, Simple Reminders: Inspiration for Living Your Best Life So, it’s vital to learn about the essential tools to increase your critical thinking abilities. Here are some links to the basics of logical reasoning and two other companion tools, spotting logical fallacies and the truth-seekers’ axioms. Study and apply the principles in your research. 2) Conducting Independent Research Don’t give in and ignore the new data. Find the courage to face the fact that what you may believe is incorrect. Then conduct your research. Use sources from outside the paradigm you are researching. In some cultures, this is hard to do. So, use the internet and get outside of the cultural narrative. Learn to question cultural folklore. Use the comparative analysis process to investigate the concepts of your own beliefs. Comparative analysis is a structured form of comparative religious studies. A structured method ensures more accurate data. 3) Eliminate and Minimize Negative Programming It’s imperative to reduce or remove all programming sources that project and reinforce inaccurate and harmful programming. The primary source for this kind of programming is religious indoctrination.
It uses many forms of media, including outlets masquerading as news programming; Fox News is an example of programming that many countries label as propaganda. In Conclusion Cognitive dissonance theory tells us that a negative emotional or physical response is a wake-up call. Anger and frustration are one way our mind tells us that there is conflict. This conflict can force us to align with the new truth, or it can cause us to resort to other, less healthy coping strategies. For the open-minded, it is an opportunity to learn and move the boundaries of their beliefs. It can be a scary and painful experience for deeply religious people. Overcoming cognitive dissonance is possible.
https://seekerproject4se.org/2019/05/07/reconciling-opposing-points-of-view-cognitive-dissonance-theory/
Working to keep Ontario children and their families healthy and active is part of the government’s plan to work together as One Ontario to build a successful, compassionate and united province where everyone has the opportunity to connect, contribute and enjoy a high quality of life. Quick Facts Ontario is the first province in Canada to develop an outdoor charter for children and youth. A survey of young Canadians found that 70 per cent spend an hour or less a day outdoors. A 2011 University of Essex study shows that participating in outdoor activities improves children’s mental health, increases physical activity and boosts self-esteem. Children report feeling happier, more energetic and having a more positive outlook on their school, home and social lives. A 2010 University of Florida study found that the more time children and youth spend outdoors, the more likely they are to act in an environmentally friendly manner. Ontario’s rich biodiversity offers boundless opportunities for discovery. Our province is home to approximately 250,000 lakes, 30,000 known species of plants and animals, more than 330 provincial parks and over 80,000 kilometres of provincial trails. Quotes “Childhood is a time to play and explore. The Ontario Children’s Outdoor Charter reinforces the importance of children being outside to discover the wonders of nature. When we encourage a child to take part in outdoor activities, we help develop healthy habits and a lifetime of appreciation for Ontario’s biodiversity.” David Orazietti Minister of Natural Resources “The well-being of our children depends on a healthy, active lifestyle. The Ontario Children’s Outdoor Charter advocates for an environment where active choices are easy choices for students, encouraging them to build a stronger connection to nature.” Liz Sandals Minister of Education “The charter is an open invitation to explore the wonders of nature. 
The partners to the charter encourage everyone - parents, organizations, governments, communities - to work together to make sure kids have chances to connect with nature. The opportunities in the charter are a doorway to making that happen.” Bill Kilburn Back to Nature Network “Far too many of our youth have become disconnected from the very life support system that sustains them: our biodiversity. Why would we protect or care about that which we do not understand? The Ontario Children’s Outdoor Charter seeks to unleash a lifelong fascination in the life that surrounds them and to instill a passion to protect what sustains them.” Steve Hounsell Chair, Ontario Biodiversity Council “Let’s help our children find a balance between screen time and old-fashioned outdoor play with the help of tools like the Ontario Children’s Outdoor Charter. Computers are good, but nature is better!”
Emotional suppression has been associated with generally negative social consequences (Butler et al., 2003; Gross & John, 2003). A cultural perspective suggests, however, that these consequences may be moderated by cultural values. We tested this hypothesis in a two-part study, and found that, for Americans holding Western-European values, habitual suppression was associated with self-protective goals and negative emotion. In addition, experimentally elicited suppression resulted in reduced interpersonal responsiveness during face-to-face interaction, along with negative partner-perceptions and hostile behavior. These deleterious effects were reduced when individuals with more Asian values suppressed, and these reductions were mediated by cultural differences in the responsiveness of the suppressors. These findings suggest that many of suppression's negative social impacts may be moderated by cultural values.
https://experts.arizona.edu/en/publications/emotion-regulation-and-culture-are-the-social-consequences-of-emo
Health benefits of the disclosure of emotions about traumatic experiences: What is the evidence and potential for therapeutic benefits? by Christine Stephens This study examined whether written emotional expression of personal and emotional events in a home-based setting might result in improved psychological and physical well-being. Following completion of a number of health and psychological well-being questionnaires, forty-seven men and ninety-nine women were assigned to one of three conditions. Participants in the emotional expression writing group were asked to write about previously undisclosed personal and emotional events for 3 days; participants in the unemotive writing group were asked to write about pre-assigned superficial topics for 3 days; and participants in the control group did not write. Participants were asked to complete the initial questionnaires three, seven and thirty weeks later. The emotional expression group experienced an increase in physical symptoms (p =.04) and self-reported number of days taken off college due to illness (p =.03) at three weeks, and less anxiety and insomnia (p =.04) at 30 weeks. Written emotional expression was associated with short-term detrimental physical health effects and less substantial long-term psychological health benefits, cautioning against the adoption of written emotional expression alone in the home environment. There is evidence that individuals experiencing natural disasters are at risk for long term physical and mental health problems, particularly if experiencing unbidden thoughts (intrusions) about the disaster. Interventions provided post-disaster, especially those that involve emotional expression, may help protect individuals from these negative effects.
This study examined if a brief, easily administered, structured writing task (expressing thoughts and emotions about the natural disaster) would attenuate the relationship between intrusions about the disaster and both negative affect and physical symptoms. Individuals recently displaced by a hurricane and subsequent flooding were randomly assigned to the experimental (emotional writing) group (n=27) or to the control (neutral writing) group. A community reference group (n=56) was also obtained. In the control group, disaster-related intrusions more than three months post-disaster were associated with higher levels of negative affect and more physical symptoms (p's<.01). In the experimental group, disaster-related intrusions were unrelated to both negative affect and physical symptoms (p's>.15). These data suggest that future research examine the clinical utility of structured writing interventions for individuals who have experienced natural disasters or other trauma.
https://trauma.massey.ac.nz/issues/2002-1/contents.shtml
EB Docket No. 04-296: PUBLIC SAFETY AND HOMELAND SECURITY BUREAU REMINDS EMERGENCY ALERT SYSTEM PARTICIPANTS THAT THEY MUST BE ABLE TO RECEIVE AND PROCESS COMMON ALERTING PROTOCOL-FORMATTED EAS ALERTS BY JUNE 30, 2012

“Make sure your gear works and you’re monitoring the right station,” said Greg Cooke, FCC associate chief of the Policy Division of the Public Safety & Homeland Security Bureau, during an EAS webinar last Wednesday. In addition to audio and video, CAP messages may include text and photos. CAP-formatted alerts sent through IPAWS can be distributed over the Web, radio and TV, cellphones and over NOAA Weather Radio. FEMA suggests an Internet connection with a speed of 1 Mbps that allows for reception/transmission of an alert that is 1 MB in size. FEMA anticipates each station would see an average of 1 MB in EAS message data per day. On or before June 30, stations must have deployed equipment capable of handling CAP-formatted EAS alerts.

The Federal Communications Commission’s (Commission) Public Safety and Homeland Security Bureau reminds Emergency Alert System (EAS) Participants that “[o]n or before June 30, 2012, EAS Participants must have deployed operational equipment that is capable” of receiving and processing Common Alerting Protocol (CAP)-formatted EAS alerts in a manner consistent with the Commission’s EAS rules as amended by the Fifth Report and Order in the above-referenced docket. This means any necessary equipment must be installed and operational by that date. For further information regarding this requirement, please contact Tom Beers, Chief, Policy Division, Public Safety and Homeland Security Bureau, (202) 418-0952; or Gregory Cooke, Associate Chief, Policy Division, Public Safety and Homeland Security Bureau, (202) 418-2351.
http://sbe66.org/?p=1401
A significant amount of work using nanotechnological approaches to crossing the BBB has focused on the delivery of antineoplastic drugs to CNS tumors. For example, radiolabeled polyethylene glycol (PEG)-coated hexadecylcyanoacrylate nanospheres have been tested for their ability to target and accumulate in a rat model of gliosarcoma. Another group has encapsulated the antineoplastic drug paclitaxel in poly(lactic-co-glycolic acid) nanoparticles, with impressive results. In vitro experiments with 29 different cancer cell lines (including both neural and non-neural cell lines) demonstrated targeted cytotoxicity 13 times greater than with drug alone. Using a variety of physical and chemical characterization methods, including different forms of spectroscopy and atomic force microscopy, the investigators showed that the drug was taken up by the nanoparticles with very high encapsulation efficiencies and that the release kinetics could be carefully controlled. Research focusing on the delivery of many of the commonly used antineoplastic drugs is important because most of these drugs have poor solubility under physiologic conditions and require less than optimal vehicles, which can produce significant side effects. With nanotechnology, a large set of materials with distinct properties (optical, electrical, or magnetic) can be fabricated. Nanotechnologically improved products rely on a change in the physical properties when the feature sizes are shrunk. Nanoparticles, for example, take advantage of their dramatically increased surface area to volume ratio. Their optical properties, e.g. fluorescence, become a function of the particle diameter. When brought into a bulk material, nanoparticles can strongly influence the mechanical properties, such as the stiffness or elasticity. For example, traditional polymers can be reinforced by nanoparticles, resulting in novel materials, e.g. as lightweight replacements for metals.
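The surface-area-to-volume claim is easy to make concrete: for a sphere of diameter d, the ratio works out to 6/d, so it grows as the particle shrinks. A small illustrative Python check (the numbers are plain geometry, not data from the studies cited above):

```python
import math

def surface_to_volume_ratio(diameter_nm: float) -> float:
    """Surface-area-to-volume ratio (per nm) for a sphere; equals 6/d."""
    r = diameter_nm / 2
    area = 4 * math.pi * r ** 2            # sphere surface area
    volume = (4 / 3) * math.pi * r ** 3    # sphere volume
    return area / volume                   # simplifies to 6 / diameter_nm

# Shrinking the diameter by 100x raises the ratio by 100x: a 10 nm
# particle exposes 100x more surface per unit volume than a 1000 nm one.
for d in (1000.0, 100.0, 10.0):
    print(f"{d:7.1f} nm -> {surface_to_volume_ratio(d):.3f} per nm")
```

This scaling is why nanoparticles are so effective as catalysts and absorbers: almost all of the material sits at or near the surface.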
Therefore, an increasing societal benefit of such nanoparticles can be expected. The biological and medical research communities have exploited the unique properties of nanomaterials for various applications (e.g., contrast agents for cell imaging and therapeutics for treating cancer). Terms such as biomedical nanotechnology, bionanotechnology, and nanomedicine are used to describe this hybrid field. Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug-delivery vehicles. Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Biological tests measuring the presence or activity of selected substances become quicker, more sensitive and more flexible when certain nanoscale particles are put to work as tags or labels. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. Gold nanoparticles, tagged with short segments of DNA, can be used for detection of genetic sequence in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures. The overall drug consumption and side-effects can be lowered significantly by depositing the active agent in the morbid region only and in no higher dose than needed.
This highly selective approach reduces costs and human suffering. A targeted or personalized medicine reduces the drug consumption and treatment expenses, resulting in an overall societal benefit by reducing the costs to the public health system. Nanotechnology can help to reproduce or to repair damaged tissue. This so-called “tissue engineering” makes use of artificially stimulated cell proliferation by using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering might replace today’s conventional treatments, e.g. transplantation of organs or artificial implants. On the other hand, tissue engineering is closely related to the ethical debate on human stem cells and its ethical implications. Chemical catalysis and filtration techniques are two prominent examples where nanotechnology already plays a role. The synthesis provides novel materials with tailored features and chemical properties, e.g. nanoparticles with a distinct chemical surrounding (ligands) or specific optical properties. In this sense, chemistry is indeed a basic nanoscience. In a short-term perspective, chemistry will provide novel “nanomaterials” and in the long run, superior processes such as “self-assembly” will enable energy- and time-preserving strategies. In a sense, all chemical synthesis can be understood in terms of nanotechnology, because of its ability to manufacture certain molecules. Thus, chemistry forms a base for nanotechnology, providing tailor-made molecules, polymers etc. and furthermore clusters and nanoparticles. A strong influence of nanochemistry on waste-water treatment, air purification and energy storage devices is to be expected. Mechanical or chemical methods can be used for effective filtration techniques. One class of filtration techniques is based on the use of membranes with suitable hole sizes, whereby the liquid is pressed through the membrane.
Nanoporous membranes are suitable for mechanical filtration with extremely small pores smaller than 10 nm (“nanofiltration”). Nanofiltration is mainly used for the removal of ions or the separation of different fluids. On a larger scale, the membrane filtration technique is named ultrafiltration, which works down to between 10 and 100 nm. One important field of application for ultrafiltration is medicine, as in renal dialysis. Magnetic nanoparticles offer an effective and reliable method to remove heavy metal contaminants from waste water by making use of magnetic separation techniques. Using nanoscale particles increases the efficiency of absorbing the contaminants and is inexpensive compared to traditional precipitation and filtration methods. The most advanced nanotechnology projects related to energy are: storage, conversion, manufacturing improvements by reducing materials and process rates, energy saving e.g. by better thermal insulation, and enhanced renewable energy sources. A reduction of energy consumption can be reached by better insulation systems, by the use of more efficient lighting or combustion systems, and by use of lighter and stronger materials in the transportation sector. Currently used light bulbs only convert approximately 5% of the electrical energy into light. Nanotechnological approaches like light-emitting diodes (LEDs) or quantum caged atoms (QCAs) could lead to a strong reduction of energy consumption for illumination. Because of the relatively low energy density of batteries, the operating time is limited and a replacement or recharging is needed. The huge number of spent batteries and accumulators represents a disposal problem. The use of batteries with higher energy content, or the use of rechargeable batteries or supercapacitors with a higher rate of recharging using nanomaterials, could be helpful for the battery disposal problem. Another example of novel nanotechnology-based devices is based on spintronics.
The dependence of a material’s electrical resistance on an external magnetic field, mediated by the spin of the electrons, is called magnetoresistance. This effect can be amplified significantly in nanosized structures, for example when two ferromagnetic layers are separated by a nonmagnetic layer that is several nanometers thick (e.g. Co-Cu-Co); this is the giant magnetoresistance (GMR) effect. GMR has led to a strong increase in the data-storage density of hard disks and made the gigabyte range possible. The so-called tunneling magnetoresistance (TMR) is very similar to GMR and is based on the spin-dependent tunneling of electrons between adjacent ferromagnetic layers. Both GMR and TMR effects can be used to create non-volatile main memory for computers, such as magnetic random access memory (MRAM).
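The size of the effect is conventionally quoted as a relative resistance change between the two magnetization configurations of the ferromagnetic layers; a common definition in the spintronics literature (the notation here is ours, not taken from this text) is:

```latex
\mathrm{GMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
```

where \(R_{\mathrm{AP}}\) and \(R_{\mathrm{P}}\) are the resistances with antiparallel and parallel layer magnetizations, respectively. Since \(R_{\mathrm{AP}} > R_{\mathrm{P}}\), applying an external field that aligns the layers lowers the resistance, which is what a hard-disk read head detects.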
https://www.pharmatutor.org/articles/review-article-nanoparticle?page=3
The Defense Department will revise regulations governing hairstyles that can be worn by service members, following complaints that the rules are offensive and discriminate against black women in the military. Defense Secretary Chuck Hagel said the Defense Department would remove terms like “matted,” “unkempt” and “dreadlocks” from regulations on the books. Additionally, each branch conducted its own examination of the standards for hairstyles, following complaints from the Congressional Black Caucus (CBC). In a letter responding to the caucus, Hagel acknowledged that the existing regulations had been written in an “offensive” manner and said the rules would be formally relaxed this summer. Among the changes are allowances for ponytails during physical training, and for larger braids, cornrows and twists. Rep. Marcia Fudge (D-Ohio) led the women of the caucus in a letter to Hagel in April arguing that the rules discriminate against black women who wear natural hairstyles. She lauded the changes as a show of respect for black women in the nation’s military. “These changes recognize that traditional hairstyles worn by women of color are often necessary to meet our unique needs, and acknowledges that these hairstyles do not result in or reflect less professionalism or commitment to the high standards required to serve within our Armed Forces,” Fudge said in a written statement.
https://thehill.com/regulation/defense/215242-pentagon-relents-on-offensive-hairstyle-regulations
Center for Macroecology, Evolution and Climate
12 August 2014
Genetic variation reveals large-scale population expansion and migration during the expansion of Bantu-speaking peoples
Published in Proceedings of the Royal Society B by Sen Li and co-authors. The majority of sub-Saharan Africans today speak a number of closely related languages collectively referred to as 'Bantu' languages. The current distribution of Bantu-speaking populations has been found to be largely a consequence of the movement of people rather than a diffusion of language alone. Linguistic and single-marker genetic studies have generated various hypotheses regarding the timing and routes of the Bantu expansion, but these hypotheses have not been thoroughly investigated. In this study, we re-analysed microsatellite markers typed for a large number of African populations that, owing to their fast mutation rates, capture signatures of recent population history. We confirm the spread of west African people across most of sub-Saharan Africa and, using a Bayesian approach, estimate the expansion of Bantu-speaking groups to around 5,600 years ago. We tested four different divergence models for Bantu-speaking populations with a distribution comprising three geographical regions in Africa. We found that the most likely model for the movement of the eastern branch of Bantu-speakers involves migration of Bantu-speaking groups to the east followed by migration to the south. This model, however, is only marginally more likely than other models, which might indicate direct movement from the west and/or significant gene flow with the western branch of Bantu-speakers.
Our study uses multi-locus genetic data to explicitly investigate the timing and mode of the Bantu expansion, and it demonstrates that west African groups rapidly expanded both in numbers and over a large geographical area, confirming that the Bantu expansion was one of the most dramatic demographic events in human history.
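The study's Bayesian dating is beyond a short sketch, but the underlying intuition — that fast-mutating microsatellites accumulate between-population differences at a roughly clock-like rate — can be illustrated with the classical (δμ)² estimator (Goldstein et al. 1995). This is *not* the method used in the paper, and all numbers below (allele sizes, mutation rate, generation time) are invented for illustration.

```python
# Toy (delta-mu)^2 divergence-time sketch. Under a stepwise mutation model,
# the expected squared difference in mean microsatellite allele size between
# two populations grows as 2 * mutation_rate * generations since divergence.

def delta_mu_squared(mean_sizes_pop1, mean_sizes_pop2):
    """Average squared difference in mean repeat count across loci."""
    assert len(mean_sizes_pop1) == len(mean_sizes_pop2)
    pairs = zip(mean_sizes_pop1, mean_sizes_pop2)
    return sum((a - b) ** 2 for a, b in pairs) / len(mean_sizes_pop1)

def divergence_years(d2, mutation_rate, generation_time_years):
    """Point estimate of divergence time from E[(delta-mu)^2] = 2 * mu * t."""
    generations = d2 / (2.0 * mutation_rate)
    return generations * generation_time_years

# Hypothetical mean allele sizes (repeat counts) at 5 loci in two populations
pop_a = [12.1, 18.4, 9.7, 22.0, 15.3]
pop_b = [12.6, 18.0, 10.4, 21.4, 15.9]
d2 = delta_mu_squared(pop_a, pop_b)

# Assumed per-locus mutation rate 5e-4 per generation, 29-year generation time
print(round(divergence_years(d2, mutation_rate=5e-4, generation_time_years=29)))  # prints 9396
```

With these made-up inputs the toy estimate lands on the order of 10⁴ years — the same order as the paper's ~5,600-year expansion estimate — but a real analysis, like the Bayesian approach used here, must also model genetic drift, population growth, and sampling error rather than read time off a single distance.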
https://macroecology.ku.dk/frontpageboxes/highlights/2014/genetic-variation-reveals-large-scale-population-expansion-and-migration-during-the-expansion-of-bantu-speaking-peoples/
Introduction
============

Invertebrate *pests* (see Box 1 for definitions of terms shown in italics) are ubiquitous, damaging, and often insidious components of anthropogenic and natural landscapes and are responsible for immense economic losses worldwide; for example, Pimentel et al. (2005) estimated that in the United States (USA) alone, invertebrate crop pests are associated with more than $14 billion in annual costs. Governments, nonprofit organizations, and industries continue to make enormous investments to develop appropriate measures to prevent, mitigate, and reduce the impact of pests. It is timely to review the contribution of molecular genetics and genomics to understanding pest invasions for a number of reasons. First, global human population increases are expected to dramatically intensify pressure on agricultural systems over the next 40 years, and controlling pest infestations is integral to meeting ongoing challenges in food security (Godfray et al. 2010; Thrall et al. 2011). Second, climate change is predicted to have large effects on agricultural productivity and natural ecosystems, in part as a result of changing dynamics between plants and their pests, which may include geographic range expansions by pest species and increasing density of pest populations (Gregory et al. 2009; Gornall et al. 2010; Thrall et al. 2011).
Third, advances in molecular genetics and genomics are yielding new and affordable tools for understanding demographic and adaptive processes in a variety of species (Barrett and Hoekstra 2011; Ekblom and Galindo 2011; Kirk and Freeland 2011), and these tools have been underutilized in the study of invertebrate pest species.

###### Definitions

**Adaptation:** An evolutionary process that occurs as a result of natural selection. Adaptation allows an organism to become better suited to living under a particular set of environmental conditions. Alternatively used to refer to a trait that has evolved by means of natural selection.

**Adaptive (or non-neutral) molecular markers:** Markers that are associated with genes or regulatory regions involved in adaptation. Adaptive, non-neutral markers provide information about evolutionary processes that result from selection.

**DNA barcode:** A short species-specific sequence of DNA that can be used for diagnostic purposes to identify an unknown sample to the species level.

**Invasive species:** Species that have been introduced from their native range to one or more non-native areas and cause significant economic or ecological damage. In some cases, populations within species vary in their propensity for invasions (i.e., some introduced populations may cause significant ecological or economic damage, while others do not).

**Neutral molecular markers:** Markers that are not associated with genes or regulatory regions involved in adaptation. Neutral markers provide information about evolutionary processes other than selection, such as migration and genetic drift.

**Pest species:** Species that disrupt an ecosystem, causing significant ecological or economic damage.
**Phenotypic plasticity:** The ability of an individual to express different phenotypes in response to variation in environmental conditions.

**Preadaptation:** A pre-existing structure or trait that predisposes a population or species to adapt in response to a novel selection pressure.

Invertebrate species, including exotic invasive and native pests, are increasingly recognized as a major management concern. A recent review (Kenis et al. 2009) showed that of 403 primary research publications that investigated the ecological impacts of invasive alien insects, 60% were published between 2000 and 2007, indicating that this is a relatively new and expanding area of research. However, these studies cumulatively incorporated only 72 insect species, and 32% of the studies concerned two ant species (*Solenopsis invicta* and *Linepithema humile*), which emphasizes the need to expand the taxonomic coverage of invertebrate pests. There are a number of applied and fundamental challenges regarding invertebrate pest invasions and outbreaks in anthropogenic and natural ecosystems. Applied challenges include the early detection of pest invasions and outbreaks, the development and assessment of control measures, and the improvement of predictive models that allow policy makers and practitioners to evaluate the risk associated with pest species. The need for appropriate strategies to meet these challenges is highlighted by several pest invasions that have occurred during the last decade. The spotted wing Drosophila (*Drosophila suzukii*) was collected for the first time in the USA in 2008 (invasion history reviewed by Hauser 2011), and correct identification to the species level was not made for more than a year; the species continues to be frequently confused with the western cherry fruit fly (*Rhagoletis indifferens*).
Initial observations of this *Drosophila* species on berry crops did not raise major concern, because *Drosophila* species do not generally cause significant damage to crops in the USA (Hauser 2011). Although the species is assumed to have originated in South-East Asia, its center of origin has not yet been identified, which may hamper attempts to identify suitable potential biological control agents from its native range. Similar problems have been reported for other recent invaders, including the brown marmorated stink bug (*Halyomorpha halys*) in the USA, which was first collected in 1996 but was not correctly identified until 2001 (Nielsen and Hamilton 2009 and references therein). A fundamental understanding of the factors that contribute to invertebrate pest invasions and outbreaks is also lacking. Invasions and outbreaks by invertebrate pests share many characteristics with invasions by other species, and biological invasions have recently garnered considerable attention (Prentis et al. 2008; van Kleunen et al. 2010). *Invasive species* have been increasingly recognized as a major threat to regional and global biodiversity (Wagner and Van Driesche 2010) and are also useful model species for studies of rapid evolution, niche shifts, and range expansion (Prentis et al. 2008; Verhoeven et al. 2011). For these reasons, considerable effort has been invested to identify factors that may predispose species or populations to become successful invaders of non-native or novel habitats (see Box 2).

###### Can existing inferences regarding invasive plant species be extrapolated to invertebrate pests?

It is not clear whether existing hypotheses, derived mostly from data on invasive plants (e.g., Bossdorf et al. 2005; Mitchell et al. 2006; Schierenbeck and Ellstrand 2009; van Kleunen et al. 2010, 2011; Moles et al. 2012), are also broadly applicable to invertebrate pests (Hayes and Barry 2008). Are there factors that predispose invertebrate pests to invasions, and how do these differ from factors that contribute to plant invasions? While this is certainly not an exhaustive list, three factors are frequently suggested to contribute to invasions, at least in plants. First, high genetic diversity is associated with the potential for rapid adaptation to novel selection pressures in non-native habitats (Schierenbeck and Ellstrand 2009), although it should be noted that neutral molecular variation is not necessarily a proxy for quantitative variation in life-history traits or for adaptive potential (Reed and Frankham 2001). Second, high *phenotypic plasticity* or *preadaptation* to high resource uptake across environments confers high fitness in new or changing environments (e.g., van Kleunen et al. 2011; but see Palacio-López and Gianoli 2011). Third, high ecological similarity between habitats in the native and introduced range of a species promotes the successful establishment of introduced species (i.e., habitat matching; Hayes and Barry 2008). However, these factors may vary in importance with regard to invasions by invertebrate pests.
Rapid adaptation and/or phenotypic plasticity may be less critical for invaders of agricultural systems than for invaders of more natural landscapes, because the 'new' habitats that they encounter are typically more homogeneous in space and time, both in terms of the genotypic diversity of the host plants and with regard to the abiotic conditions in which the host plants are grown. With regard to phenotypic plasticity, various invertebrate species exhibit a wide range of developmental and behavioral processes that are influenced by environmental cues, including sex determination, transitions between developmental stages, dispersal behaviors, and host preferences (reviewed by Simpson et al. 2011). In some cases, plastic responses may even limit the distribution range of invertebrate species. For example, Sobek-Swant et al. (2012) showed that a loss of cold tolerance in response to acclimation to a mid-winter warm period reduces the ability of the emerald ash borer (*Agrilus planipennis*) to tolerate subsequent cold periods. The range of this invasive species may therefore be limited by the occurrence of local climatic extremes during the winter season. The relative role of plastic responses compared with adaptive genomic change in pest invasions has barely been touched upon. Although much effort has been dedicated to understanding which traits make plants good invaders (reviewed by Moles et al. 2012), similar lines of research for invertebrate pests are lacking. However, a large proportion of such studies, and the inferences that are derived from them, are based on data from invasive plants (reviewed by Bossdorf et al. 2005; Mitchell et al. 2006; Schierenbeck and Ellstrand 2009; Moles et al.
2012), and the majority of studies on invertebrates target species that use wild plants, rather than cultivated plants, as hosts (Kenis et al. 2009). While this review is not restricted to pests of agroecosystems, it is pertinent to note that pest invasions and outbreaks in agroecosystems can be qualitatively different from invasions of natural ecosystems; agricultural pests generally benefit from high levels of ecological homogeneity among and within different regions (Benton et al. 2003) relative to species that invade natural habitats, and often use host plants that possess little genetic variability compared with wild host species. Also, agricultural pests are sometimes subject to selection pressures that are unique to agroecosystems, such as pest control programs, seasonal crop harvests, and the effects of artificial (anthropogenic) selection on plant-insect interactions (see Harvey and Gols 2011; Tamiru et al. 2011 for examples of the latter). We review applied and fundamental applications of molecular genetics/genomics in understanding invertebrate invasions and outbreaks. Wherever possible, we provide examples of invertebrate pests of cash crops because they are associated with a disproportionately high actual and potential economic impact; yet, little attention has been paid to the factors that facilitate invertebrate pest invasions and outbreaks compared with, for example, weed invasions (Kenis et al. 2009; Box 2). We exclude examples of vectors of human disease, as they have been reviewed extensively elsewhere (e.g., Veracx and Raoult 2012; Vontas et al.
2012; Williams 2012), and they involve factors (such as demographic history and genetic variation in resistance) that are beyond the scope of this review.

Current applications of molecular genetics and genomics in studies of pest invasions
====================================================================================

Molecular genetics approaches have been widely applied to the study of invertebrate pest species from a variety of taxa for at least 20 years and are indispensable tools for addressing applied and fundamental questions relating to invertebrate pest invasions. Primarily applied uses include the following: (i) the identification of pest species, (ii) the identification of mechanisms of pesticide resistance, and (iii) the assessment of the efficacy of pest management practices. Other molecular genetic applications address more fundamental questions regarding the biology of pest species or ultimate questions regarding pest species evolution. Predominantly putatively *neutral molecular markers* have been used, often in combination with historical records, (iv) to reconstruct invasion histories and colonization routes and to understand demographic processes, such as population bottlenecks and regional dispersal patterns, that are associated with pest invasions and outbreaks (see Table 1), or (v) to develop ecological and evolutionary hypotheses about mechanisms of pest invasions.
###### Table 1. Examples of recent studies that used molecular genetics to infer invasion histories of invertebrate pests

| Pest species | Common name | Order | Host plant species | Molecular marker | Conclusions | References |
| --- | --- | --- | --- | --- | --- | --- |
| *Linepithema humile* | Argentine ant | Hymenoptera | N/A | mtDNA (COI-COII, cytB) | 10 introduced haplotypes globally, including five in Japan, indicative of multiple introductions to Japan. The dominant Japanese supercolony is identical to the dominant supercolony distributed throughout Europe, North America, and Australia. Two of the remaining Japanese haplotypes were likely introduced from the USA, and two were unique to Japan (putative source populations could not be inferred). | Inoue et al. (2013) |
| *Grapholita molesta* | Oriental fruit moth | Lepidoptera | Stone and pome fruits, including peach, nectarines, cherries, apple, and pear | SSR | Little evidence for multiple introductions on each continent. Data suggest introductions from Asia to Australia and from North America to South Africa, South America, and the Azores. A recent, secondary introduction likely occurred from Brazil to Europe. | Kirk et al. (2013) |
| *Megastigmus schimitscheki* | None (seed chalcid wasp species) | Hymenoptera | The ornamental cedar tree species *Cedrus atlantica* and *Cedrus brevifolia* | SSR, mtDNA (COI) | Cyprus was unambiguously the source of introduced *M. schimitscheki* in France. Invasion in the non-native area was associated with a severe bottleneck. | Auger-Rozenberg et al. (2012) |
| *Sirex noctilio* | None (woodwasp species) | Hymenoptera | Pine trees (*Pinus* sp.) | SSR, mtDNA (COI) | Two potential global invasion sources of *S. noctilio* were suggested. A nonsampled source population was likely introduced to South Africa and Chile, followed by a serial introduction from Chile to Switzerland. A second, more complex scenario involves three independent introductions from Europe to Oceania, South Africa, and South America followed by admixture among continents. | Boissin et al. (2012) |
| *Tecia solanivora* | Potato tuber moth | Lepidoptera | Potato | mtDNA (cytB) | Evidence for stepwise introductions from north to south in Central and South America. | Torres-Leguizamón et al. (2011) |
| *Myzus persicae nicotianae* | Tobacco aphid | Homoptera | Tobacco | SSR | Evidence of multiple introductions from Europe to North America, and introductions from North America to South America. Loss of genetic diversity is associated with ongoing invasion. Also evidence for a highly successful 'super clone' in the Americas. | Zepeda-Paulo et al. (2010) |
| *Bactrocera cucurbitae* | Melon fly | Diptera | Polyphagous: melons, squash, tomato, bean, orange, etc. | SSR | Data suggest central Asian origin and continent-scale differentiation, with low levels of long-distance and complex inter-regional dispersal. | Virgilio et al. (2010) |
| *Ceratitis capitata* | Mediterranean fruit fly (medfly) | Diptera | Highly polyphagous | SSR, RAPDs, mtDNA | Evidence for invasion from Kenya to the Mediterranean Basin (MB) and from the MB to Latin America and the Pacific. Invasion was accompanied by a loss of genetic diversity. | Reviewed by Malacrida et al. (2007) |
| *Scirtothrips perseae* | Avocado thrips | Thysanoptera | Avocado | mtDNA and SSRs | Recent introduction to California likely derived from a single source population from Coatepec Harinas, Mexico. | Rugman-Jones et al. (2007) |

SSR, simple sequence repeat; AFLP, amplified fragment length polymorphism; mtDNA, mitochondrial DNA; cytB, cytochrome b gene; COI/COII, cytochrome c oxidase subunits I and II. See main text for a detailed description of the invasion history of the western corn rootworm *Diabrotica virgifera* and the Asian ladybird *Harmonia axyridis* (not mentioned here to minimize redundancy). John Wiley & Sons, Ltd

Identification of pest species
------------------------------

The accurate identification of pest species is a prerequisite for the deployment of appropriate management strategies; it is critical to identify pest species accurately at points of entry to a new geographic area or during the early phases of an invasion. Correct identification to the species level is also essential for the implementation of selective pest control measures. However, rapid identification of pests is often impaired by uninformative morphological traits and a lack of available molecular data, such as species-specific *DNA barcodes*. Control of *D. suzukii*, a recent cross-continental invader (mentioned above), was hampered by the time lag between its initial sighting and its correct identification. During this time lag, the species spread from California to Oregon, Washington State, Florida, and British Columbia (Canada).
Utilizing molecular tools when suspicions about the identity of this pest were first raised may have permitted early identification and implementation of quarantine and other pest control measures, preventing its rapid early spread. Many other pest species are similarly difficult to identify based on morphology, especially because the frequently elusive immature stages are often responsible for the bulk of crop damage. Examples include economically damaging thrips species, which exhibit few diagnostic morphological characters and are often mistaken for less destructive congeners. *Thrips palmi* likely originated in South-East Asia, but is frequently intercepted on imports of bitter gourd, eggplant, and ornamental flowers from Asia to the Caribbean, West Africa, the USA, South America, Africa, Australia, and the United Kingdom (Cardona et al. 2002; Glover et al. 2010). DNA barcoding based on the cytochrome oxidase I (COI) gene permits this species to be rapidly distinguished from congeners and other morphologically similar species (Glover et al. 2010). Similarly, a number of tortricid moth species are internal pome and stone fruit feeders that occur sympatrically, and the larvae of these species are frequently misidentified based on morphology or feeding damage symptoms. Molecular markers based on polymorphisms in the COI gene can be used to distinguish between four pest species of the genera *Cydia* and *Grapholita* that have overlapping distributions in Europe (Chen and Dorn 2009). Such molecular markers provide useful tools that allow authorities at national entry ports to ensure that imported goods are free of pests, and also permit land managers to accurately identify pests from agricultural landscapes and to implement appropriate control measures.
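The barcoding workflow described above — matching a query COI sequence against a reference library — reduces, in its simplest form, to a nearest-neighbour search on sequence identity. The sketch below is a toy illustration, not the protocol of Glover et al. (2010): the 20-bp "barcodes" are invented (real COI barcodes are roughly 650 bp), and production pipelines use proper sequence alignment rather than ungapped comparison.

```python
# Toy barcode identification: assign a query to the reference species with the
# highest percent identity, provided it clears a similarity threshold.

def percent_identity(a, b):
    """Identity over an ungapped, equal-length comparison."""
    assert len(a) == len(b)
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

def identify(query, references, threshold=97.0):
    """Return the best-matching species, or 'unidentified' below threshold."""
    best_species, best_id = None, 0.0
    for species, barcode in references.items():
        pid = percent_identity(query, barcode)
        if pid > best_id:
            best_species, best_id = species, pid
    return best_species if best_id >= threshold else "unidentified"

references = {  # invented 20-bp fragments, NOT real COI barcodes
    "Thrips palmi":  "ATGGCACTTTTATCAGGAAT",
    "Thrips tabaci": "ATGGCTCTTCTGTCAGGGAT",
}
query = "ATGGCACTTTTATCAGGAAT"
print(identify(query, references))  # prints "Thrips palmi"
```

The threshold matters: intraspecific COI variation is typically small relative to interspecific divergence (the "barcoding gap"), so a query far from every reference is better reported as unidentified than forced onto the nearest species.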
Identifying mechanisms of resistance to xenobiotics
---------------------------------------------------

Another application of molecular genetics in invertebrate invasion biology is the identification of mechanisms involved in resistance to xenobiotics, including pesticides and toxins expressed by transgenic crops. Resistance mechanisms have been well studied because of their direct and measurable economic consequences, and a number of gene families are known to be involved in the evolution of pesticide resistance across many taxa. They include genes that control the production of enzymes that break down xenobiotics, such as P450 monooxygenases, esterases, and glutathione *S*-transferases, and genes that reduce the binding of insecticides at target sites, such as acetylcholinesterase, sodium channels, and GABA receptors (Hemingway and Ranson 2000). Identifying the genes underlying resistance makes it possible to estimate the frequency of resistant genotypes across populations (Franck et al. 2007) and to develop pest control measures that circumvent prevalent resistance mechanisms (Gao et al. 2012). For example, the high combined frequencies of two alleles of the *para* gene that confer pyrethroid resistance on the diamondback moth (*Plutella xylostella*) over a wide area in southern Australia suggest that alternative pest control measures should be used for this species in the region (Endersby et al. 2011). Moreover, recent comparative genomics work has suggested that some invertebrate species may be better preadapted than others to rapidly evolve pesticide resistance, based on their genomic architecture. The highly polyphagous and multivoltine two-spotted spider mite (*Tetranychus urticae*) frequently evolves pesticide resistance in the field, and Grbić et al.
(2011) suggested that high diversity in gene families involved in detoxification predisposes this species to rapid *adaptation* (discussed in greater detail in the Comparative genomics section below). Another major crop pest, the Colorado potato beetle (*Leptinotarsa decemlineata*), is native to North America and invasive in Europe and Asia, and has developed resistance to an astounding 52 different compounds from all major synthetic pesticide classes (reviewed by Alyokhin et al. 2008). A wide variety of resistance mechanisms is involved, often within small geographic areas. This implies that there is high functional diversity of genes involved in resistance to xenobiotics in *L. decemlineata*, which is not surprising given that the species is specialized on a host family that produces a range of toxic alkaloids. Further genomics work on *L. decemlineata* and other pest species that demonstrate rapid and repeated evolution of pesticide resistance may allow us to better understand how and under what conditions pests adapt to pest management practices, including pesticide applications. If only one or a limited number of genome regions are involved in adaptation to insecticides in a particular pest species, then management of resistant genotypes may be relatively straightforward. However, data are accumulating from a number of pest species in which many genes and paralogous gene copies are involved in insecticide resistance, and some resistance alleles may provide cross-resistance to multiple pesticides. If this is generally the case, then management of insecticide resistance can become extremely complex (e.g., Gao et al. 2012), particularly when several pests need to be managed simultaneously.
Assessing the efficacy of management practices that target invasive pests
-------------------------------------------------------------------------

Population genetic data can be used to predict and measure the effects of different management practices (Mazzi and Dorn 2012), although to date they have infrequently been used for this purpose. Measurements of immigration and gene flow between different management areas, and estimates of the effective sizes of populations under different management regimes, can be derived from molecular genetic data. Such an approach was used by Franklin et al. (2010) to assess the impacts of greenhouse management practices on cabbage looper (*Trichoplusia ni*) populations in British Columbia. The authors showed that yearly pesticide applications in greenhouses led to strong bottleneck effects and likely resulted in selection for pesticide-resistant genotypes. A small number of resistant genotypes likely migrated between neighboring greenhouses in the spring. These results underscore the need for coordinated pest management efforts between growers to prevent the regional spread of resistant genotypes.

Understanding pest demography and reconstructing invasion routes
----------------------------------------------------------------

Understanding pest dispersal and demography is beneficial because it permits the identification of new invasions, facilitates the monitoring of current infestations and the prevention of further ones, and aids in the planning and implementation of control measures (Hulme 2009; Estoup and Guillemaud 2010; Ridley et al. 2011).
With regard to the latter point, identification of the center of origin of a species can, for example, reveal natural enemies of the species that can be used as agents of biological control in integrated pest management programs. This may be an important step for the management of *D. suzukii* (discussed in *i* above), for which the native range has not yet been identified (Hauser [2011](#eva12071-bib-0044){ref-type="ref"}). Most often, putatively neutral microsatellite or amplified fragment length polymorphism markers are used, sometimes in combination with mitochondrial sequences, to reconstruct migration pathways, to quantify gene flow across different spatial scales, to identify migrants between different regions, and to estimate admixture between populations from different origins (Handley et al. [2011](#eva12071-bib-0042){ref-type="ref"}). Such an approach was recently used to reconstruct invasion routes of the oriental fruit moth (*Grapholita molesta*), which is a globally invasive pest of stone and pome fruit trees including peach and apple (Kirk et al. 2013). Eastern Asia was inferred to be the center of origin of this species, and molecular genetic data implied that the species has been introduced to Europe at least twice, including once from South America (Kirk et al. 2013). Perhaps the most comprehensive example of invasion route reconstruction is that of the red fire ant (*S. invicta*), which was achieved using both nuclear and mitochondrial molecular markers (Ascunce et al. [2011](#eva12071-bib-0002){ref-type="ref"}). *Solenopsis invicta* builds large colonies that damage crop roots, and ants sometimes damage or kill young citrus trees by feeding on the bark, cambium, and shoots (Banks et al. [1991](#eva12071-bib-0005){ref-type="ref"}). More problematic is the damage that ants cause to property through mound‐building activities and the painful or even dangerous stings that they inflict on humans, pets, and livestock.
The species originated in South America and is currently distributed in the USA, the Caribbean, Australia, New Zealand, and parts of Asia. Ascunce et al. ([2011](#eva12071-bib-0002){ref-type="ref"}) sampled and genotyped ant colonies from different geographic locations throughout the current global range of the species. The authors found that at least nine separate introductions have occurred from the southern USA to other parts of the invaded range, and that recent introductions have not occurred from the native range in South America. This indicates that measures to prevent the global spread of the species should target USA ports of trade. Studies of pest population dynamics *within* countries or regions can also be useful for developing management strategies. For example, the lepidopteran species *Cydia pomonella* is a major worldwide pest of fruit and nut species, such as apple, apricot, and walnut, but its dispersal abilities appear to vary according to landscape characteristics and management practices (Chen and Dorn [2010](#eva12071-bib-0015){ref-type="ref"}; Franck and Timm [2010](#eva12071-bib-0027){ref-type="ref"}). Chen and Dorn ([2010](#eva12071-bib-0015){ref-type="ref"}) characterized 15 populations of *C. pomonella* in Switzerland. The authors showed that even proximately located (\<10 km) populations were genetically differentiated, suggesting limited natural gene flow over small spatial scales in this region, even when significant barriers to dispersal were absent. The same study also identified a single population from an apple orchard close to the international airport in Zurich, which clustered separately from all other sampled populations in Switzerland; this finding implied a possible introduction from a foreign population via the airways, which in turn suggests that airport screening for fruits and/or pests should be more stringent. 
By reconstructing the demographic history of pest outbreaks and introduction routes associated with invasions, we can more effectively minimize the movement of pests across landscapes and between geographic regions. Also, understanding the demographic characteristics and geographic pathways of pest dispersal and invasions is central to the development of ecological and evolutionary hypotheses regarding the mechanisms of pest invasions and outbreaks.

Developing ecological and evolutionary hypotheses about the mechanisms of pest invasions {#eva12071-sec-0009}
----------------------------------------------------------------------------------------

Although a number of hypotheses have been developed based on data from invasive plants regarding factors underlying invasions (see Box 2, Facon et al. [2006](#eva12071-bib-0024){ref-type="ref"}), it is debatable whether knowledge derived from plants can be generalized to invertebrate invasions. For example, a recent review (Pyšek et al. [2010](#eva12071-bib-0073){ref-type="ref"}) investigated the relationship between invasive species richness and biogeographic, climatic, economic, and human demographic factors across the European landscape. For invasive plants, 91% of the variation in species richness was explained by the minimal adequate model, which indicates that many primary factors that correlate with the number of invasive plant species in a landscape have already been identified, and these include human population density, wealth, and climate. In contrast, only 27% of the variation was explained for invasive insects, all of which was attributed to human factors such as population density and wealth. This suggests that there is still a large gap in our understanding of the factors that promote invertebrate invasions, both from a life‐history perspective (i.e., which life‐history characteristics are common to invasive invertebrate species?)
and from a landscape perspective (i.e., which geographic, climatic, and human demographic features of a landscape make it more prone to species invasions?). Yet, filling this gap will be important for developing predictive models for forecasting and pest management purposes. One example of a factor that is frequently claimed to be associated with invasions across many different taxa is the occurrence of multiple introductions across a particular geographic area (Dlugosch and Parker [2008](#eva12071-bib-0019){ref-type="ref"}; Prentis et al. [2008](#eva12071-bib-0072){ref-type="ref"}; Schierenbeck and Ellstrand [2009](#eva12071-bib-0082){ref-type="ref"}; Guillemaud et al. [2011](#eva12071-bib-0040){ref-type="ref"}). Multiple introductions of the western corn rootworm (*Diabrotica virgifera virgifera*), one of the most damaging and costly pests of maize in the USA (Miller et al. [2005](#eva12071-bib-0064){ref-type="ref"}; Ciosi et al. [2008](#eva12071-bib-0016){ref-type="ref"}), are linked to its invasion of Europe. *Diabrotica virgifera* originated in Central America and was first reported from the USA in 1867 and from Europe in 1992 (reviewed by Gray et al. [2009](#eva12071-bib-0036){ref-type="ref"}). Its successful establishment in Europe is associated with at least five introductions from the northern USA to Europe (Ciosi et al. [2008](#eva12071-bib-0016){ref-type="ref"}) although no secondary contact was inferred between populations derived from different origins (Ciosi et al. [2008](#eva12071-bib-0016){ref-type="ref"}). It is widely assumed that multiple introductions of pest species, followed by admixture between introduced populations, can reduce the potential for founder effects and may even increase the adaptive potential of introduced populations by generating recombinant genotypes that are better able to cope with novel environments (Dlugosch and Parker [2008](#eva12071-bib-0019){ref-type="ref"}; Prentis et al. 
[2008](#eva12071-bib-0072){ref-type="ref"}; Schierenbeck and Ellstrand [2009](#eva12071-bib-0082){ref-type="ref"}; Verhoeven et al. [2011](#eva12071-bib-0097){ref-type="ref"}). For example, Lombaert et al. ([2010](#eva12071-bib-0059){ref-type="ref"}) showed that invasive European populations of the harlequin ladybird *Harmonia axyridis* were composed of admixed populations from eastern North America and the local biocontrol strain derived from Asia. Subsequently, it was shown that hybrids between eastern North American populations and a biocontrol strain are fitter than either parental lineage with regard to a number of phenotypic traits, including body size and generation time (Lombaert et al. 2011; Turgeon et al. [2011](#eva12071-bib-0094){ref-type="ref"}), which suggests that admixture may have partially fueled the invasion of Europe by the harlequin ladybird (Turgeon et al. [2011](#eva12071-bib-0094){ref-type="ref"}). Yet, evidence to support the hypothesis that admixture generally promotes invasiveness in invertebrates is scant. In contrast to plants, genetic diversity in insects is generally low in exotic invasive populations compared with native populations (Uller and Leimu [2011](#eva12071-bib-0095){ref-type="ref"}), which suggests that admixture may be more important in plant invasions than in insect invasions. This again emphasizes the need to collect data from a wider selection of invertebrate taxa, which may further illuminate differences between the features of invasions by plants versus invertebrates. Many invasions and outbreaks are thought to involve evolutionary change, including niche shifts and/or local adaptation (Prentis et al. [2008](#eva12071-bib-0072){ref-type="ref"}; Verhoeven et al. [2011](#eva12071-bib-0097){ref-type="ref"}).
The most widely studied and best‐understood examples of local adaptation in invertebrate pest species include adaptation to exogenous and endogenous insecticides (including genetically modified crops; discussed above), and the evolution of genetically distinct host‐associated populations within species. Work carried out on adaptation to insecticides has convincingly demonstrated that genetic structure at loci related to insecticide resistance can be shaped by pest control regimes and that structure at adaptive loci often differs compared with structure at putatively neutral loci. For example, the codling moth (*C. pomonella*) has developed resistance to insecticides in several parts of its range (Franck et al. [2007](#eva12071-bib-0028){ref-type="ref"}; and references therein). Franck et al. ([2007](#eva12071-bib-0028){ref-type="ref"}) compared genetic structure of neutral loci with that of two loci that confer insecticide resistance across codling moth populations from 27 orchards. The frequencies of alleles that conferred resistance were positively correlated with the number of times each orchard was sprayed with insecticide annually, while only low levels of genetic structure were observed at the neutral loci. Insecticide applications likely represent a strong selection pressure that favors adaptive alleles at resistance loci, despite high migration rates between orchards, while high migration rates tend to homogenize genetic variation at loci that are not subject to strong selection (i.e., neutral loci). Evidence of adaptation to different hosts was observed among populations of the cotton‐melon aphid (*Aphis gossypii*), a highly polyphagous pest. Carletto et al. ([2009](#eva12071-bib-0012){ref-type="ref"}) identified five host races that were associated with Cucurbitaceae, cotton, eggplant, potato, and chili‐ or sweet pepper, and showed using supplementary plant transfer experiments that host races were somewhat specialized on their associated host plants. 
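The contrast described above, strong structure at insecticide-resistance loci alongside weak structure at neutral loci, is the basis of outlier-locus scans. The following Python sketch uses entirely simulated allele frequencies (the locus counts, frequencies, and 99th-percentile cutoff are invented for illustration and do not reproduce any cited analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def per_locus_fst(p1, p2):
    """Simple two-population FST per locus: (Ht - Hs) / Ht."""
    pbar = (p1 + p2) / 2
    ht = 2 * pbar * (1 - pbar)                   # expected total heterozygosity
    hs = p1 * (1 - p1) + p2 * (1 - p2)           # mean within-population heterozygosity
    return np.where(ht > 0, (ht - hs) / ht, 0.0)

# Simulate 1000 neutral loci: both populations drift around a shared
# ancestral frequency, so differentiation stays modest.
n_loci = 1000
anc = rng.uniform(0.2, 0.8, n_loci)
p1 = np.clip(anc + rng.normal(0, 0.05, n_loci), 0.01, 0.99)
p2 = np.clip(anc + rng.normal(0, 0.05, n_loci), 0.01, 0.99)

# Append 5 loci under strong divergent selection (e.g., near resistance genes).
p1 = np.append(p1, [0.95] * 5)
p2 = np.append(p2, [0.05] * 5)

fst = per_locus_fst(p1, p2)
cutoff = np.quantile(fst[:n_loci], 0.99)         # empirical neutral threshold
outliers = np.where(fst > cutoff)[0]
print("outlier loci:", outliers[outliers >= n_loci])
```

In practice, dedicated outlier tests (e.g., coalescent-based or Bayesian methods) are preferred over a raw empirical percentile, since demography alone can inflate the tail of the FST distribution.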
Local adaptation to different hosts in the field has also been demonstrated for the codling moth in Switzerland (Chen and Dorn [2010](#eva12071-bib-0015){ref-type="ref"}). Populations derived from apple, apricot, and walnut were genetically distinct from one another in a fruit‐growing valley in southwestern Switzerland. These studies demonstrate that pest populations can be structured according to the distribution of suitable host species across the landscape. However, although some inroads have been made toward understanding the role of adaptive change in a limited number of pest invasions (see Table [2](#eva12071-tbl-0002){ref-type="table-wrap"} for additional examples), there remains a paucity of data regarding the role of adaptation versus dispersal ability and other life‐history characteristics in the establishment and spread of invertebrate pests.

###### Selected examples of recent studies that used molecular genetics to test for adaptation in invertebrate pests

| Pest species | Common name | Order | Host plant species | Molecular marker | Conclusions | References |
|---|---|---|---|---|---|---|
| *Acyrthosiphon pisum* | Pea aphid | Hemiptera | More than 20 legume genera | SSR | Based on analysis of sympatric populations specialized on red clover, alfalfa, and pea. Eleven host race--associated outlier loci were identified out of 390 genotyped SSR loci, and these outlier loci were associated with several candidate genes, which may be involved in host adaptation, including genes encoding salivary proteins and chemosensory genes. Results support the hypothesis that adaptation to each host species occurred only once | Jaquiéry et al. ([2012](#eva12071-bib-0051){ref-type="ref"}) |
| *Cydia pomonella* | Codling moth | Lepidoptera | Pome and stone fruits, and nuts | SSR | Evidence for significant population structure within Switzerland. Evidence for differentiation according to host plant species (apple, apricot, and walnut) | Chen and Dorn ([2010](#eva12071-bib-0015){ref-type="ref"}) |
| *Trichoplusia ni* | Cabbage looper | Lepidoptera | Crucifers and other vegetables | AFLPs | Between *Bt*‐sprayed field and greenhouse populations, no difference in genetic diversity, but lower differentiation in the field compared with greenhouses. Significant IBD in both cases | Franklin et al. ([2010](#eva12071-bib-0029){ref-type="ref"}) |
| *Schizaphis graminum* | Greenbug | Hemiptera | Wheat, barley, noncultivated grasses | SSR | Evidence for adaptation to host plant species, but also evidence of geographic differentiation | Weng et al. ([2010](#eva12071-bib-0102){ref-type="ref"}) |
| *Frankliniella occidentalis* | Western flower thrips | Thysanoptera | Highly polyphagous | mtDNA and SSRs | Two ecotypes associated with climatic variation: one ecotype adapted to hot/dry habitat and the other adapted to cool/moist habitat | Brunner and Frey ([2010](#eva12071-bib-0010){ref-type="ref"}) |
| *Aphis gossypii* | Cotton‐melon aphid | Hemiptera | Polyphagous, e.g., cucumber, cotton, eggplant, and okra | SSR | 47% of variation in data explained by host plant, only 16% explained by geographic region: region was important within a host race, but host race was far more important overall | Carletto et al. ([2009](#eva12071-bib-0012){ref-type="ref"}) |
| *Rhopalosiphum padi* | Bird cherry‐oat aphid | Hemiptera | Oat, brome, wheat, barley, rye, other grasses | SSR | No significant effect of host plant. Two clusters corresponded to alternative reproductive modes (sexual versus asexual) | Gilabert et al. ([2009](#eva12071-bib-0031){ref-type="ref"}) |

SSR, simple sequence repeat; AFLP, amplified fragment length polymorphism; mtDNA, mitochondrial DNA; IBD, isolation by distance.

John Wiley & Sons, Ltd

Shortcomings of the current applications {#eva12071-sec-0010}
========================================

Molecular genetics and genomics strategies have proven to be extremely useful tools for understanding pest invasions in many instances, and their utility should not be undervalued. However, a number of limitations to their predictive and explanatory power have become apparent. These shortcomings include limitations in the power of statistical models to accurately reconstruct invasion pathways, particularly because sampling of populations is necessarily incomplete (discussed in detail by Estoup and Guillemaud [2010](#eva12071-bib-0023){ref-type="ref"}; Guillemaud et al. [2011](#eva12071-bib-0040){ref-type="ref"}). Also, population genetics studies are traditionally based on a limited number of putatively *neutral molecular markers* that are of limited use for studying adaptive processes (reviewed by Kirk and Freeland [2011](#eva12071-bib-0055){ref-type="ref"}), which may play a key role in pest invasions (Prentis et al. [2008](#eva12071-bib-0072){ref-type="ref"}; Moles et al. [2012](#eva12071-bib-0066){ref-type="ref"}). Finally, there can be multiple nonexclusive explanations for observed patterns of genetic structure, and inferences regarding the ecological and evolutionary factors that contribute to pest invasions are subject to interpretation of these patterns (Coop et al. [2010](#eva12071-bib-0018){ref-type="ref"}).
While the first two shortcomings listed above have been reviewed in detail elsewhere, the last point deserves some additional attention. The genetic structure of invertebrate pest populations between and within landscapes may be influenced by a myriad of factors (Guillemaud et al. [2011](#eva12071-bib-0040){ref-type="ref"}) that can be difficult to tease apart, including (i) species‐specific demographic processes and natural dispersal patterns, (ii) anthropogenic activities (e.g., human‐mediated transport of a pest between different locations; Torriani et al. [2010](#eva12071-bib-0093){ref-type="ref"}), (iii) bottom‐up factors (i.e., distribution patterns or genetic structure of hosts; Carletto et al. [2009](#eva12071-bib-0012){ref-type="ref"}; Chen and Dorn [2010](#eva12071-bib-0015){ref-type="ref"}; Groot et al. [2011](#eva12071-bib-0039){ref-type="ref"}), (iv) multitrophic interaction networks including competitive interactions between pests, mutualistic interactions, and interactions between pests and their natural enemies (Lavandero et al. [2011](#eva12071-bib-0058){ref-type="ref"}; Wilder et al. [2011](#eva12071-bib-0105){ref-type="ref"}), (v) other environmental, landscape, or climatic factors (Chaplin‐Kramer et al. [2011](#eva12071-bib-0013){ref-type="ref"}; Sandrock et al. [2011](#eva12071-bib-0081){ref-type="ref"}), and/or (vi) adaptive processes (Franck et al. [2007](#eva12071-bib-0028){ref-type="ref"}). Contemporary patterns of genetic structure in a pest species will likely reflect the influence of several or all of these factors and their interactions, which may confound the interpretation of the data. 
One question that has received increasing attention with regard to invasion biology in recent years is how often and to what extent adaptation promotes pest invasions, yet evidence for adaptation at both the molecular level and the phenotypic level has been notoriously difficult to come by (Barrett and Hoekstra [2011](#eva12071-bib-0006){ref-type="ref"}; Verhoeven et al. [2011](#eva12071-bib-0097){ref-type="ref"}). An approach that is frequently used to assess the relative role of adaptive evolution in species invasions is to identify sources of invasions and then collect evidence of both genetic and phenotypic differentiation in the invaded compared with the native range of a species (Bossdorf et al. [2005](#eva12071-bib-0009){ref-type="ref"}). However, even when parallel genetic and phenotypic differences are observed between native and introduced populations, it is difficult to demonstrate whether such differences have arisen from adaptive processes or from neutral, stochastic events such as founder effects or patterns of multiple introductions and possibly subsequent admixture (reviewed by Keller and Taylor [2008](#eva12071-bib-0053){ref-type="ref"}). Other authors have attempted to identify spatial patterns of genetic structure or allelic variation within an invaded landscape, which are indicative of local adaptation to various selection pressures or climatic gradients. Such patterns can also be difficult to interpret (Coop et al. [2010](#eva12071-bib-0018){ref-type="ref"}; and references therein), because the power to identify signatures of selection can depend on the strength of selection, as well as the demographic history of the surveyed populations. For instance, patterns of isolation‐by‐distance or spatial autocorrelation across a landscape can support a number of different hypotheses, including leading‐edge dispersal and range expansion, stepwise introductions (possibly human‐mediated; Torres‐Leguizamón et al. 
[2011](#eva12071-bib-0092){ref-type="ref"}), adaptation to an ecological cline (Telonis‐Scott et al. [2011](#eva12071-bib-0090){ref-type="ref"}), or other scenarios, such as multiple introductions of a species from several parts of its native range (Taylor and Keller [2007](#eva12071-bib-0089){ref-type="ref"}). For instance, Gilchrist and Meats ([2010](#eva12071-bib-0032){ref-type="ref"}) examined population genetic structure of the Queensland fruit fly (*Bactrocera tryoni*) across the southern to central parts of its invaded range in Australia. The authors reported a latitudinal cline in microsatellite allele frequencies over a distance of 500 km, but could not determine whether this cline resulted from natural selection or from neutral processes. The latitudinal cline corresponded to differences in monthly average temperatures, decreasing from north to south, and the authors postulated that the cline might also correspond to heritable changes in developmental time or thermal tolerances. However, the two populations at either end of the distribution (north and south) were genetically differentiated source populations that contributed migrants to intermediate populations, and admixture along the north--south gradient generated an allelic frequency cline that could also explain the pattern of isolation by distance. Similarly, an absence of obvious distance‐related patterns could be caused by factors including geographic barriers to dispersal, patchy distribution of host plants, local adaptation to heterogeneous landscape features, anthropogenic redistribution of the pest, genetic drift (i.e., small population sizes), or factors related to the species\' life‐history characteristics. The relative importance of these factors in mediating the distribution and density of invertebrate pests remains difficult to determine. 
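Patterns of isolation by distance such as those discussed above are commonly quantified with a Mantel permutation test correlating genetic and geographic distance matrices. The following Python sketch uses invented toy data (eight populations on a line, with genetic distance tracking geographic distance plus noise) purely to illustrate the permutation logic:

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel(geo, gen, n_perm=9999, rng=rng):
    """One-sided Mantel test: correlation between two distance matrices."""
    iu = np.triu_indices_from(geo, k=1)          # upper-triangle entries only
    x, y = geo[iu], gen[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    n = geo.shape[0]
    for _ in range(n_perm):
        perm = rng.permutation(n)                # permute population labels
        r_perm = np.corrcoef(geo[np.ix_(perm, perm)][iu], y)[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example: 8 populations on a transect; genetic distance increases with
# geographic distance plus noise (a stepping-stone-like pattern).
coords = np.arange(8, dtype=float)
geo = np.abs(coords[:, None] - coords[None, :])
gen = 0.01 * geo + rng.normal(0, 0.005, geo.shape)
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0.0)

r, p = mantel(geo, gen)
print(f"Mantel r = {r:.2f}, p = {p:.4f}")
```

A significant positive r is consistent with isolation by distance, but, as noted above, it cannot by itself distinguish natural stepping-stone dispersal from, for example, stepwise human-mediated introductions.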
Clearly, molecular genetic data need to be interpreted in the context of additional data, including but not limited to landscape features, climatic variables, and host and natural enemy distribution (Thrall et al. [2011](#eva12071-bib-0091){ref-type="ref"}).

What is new? Recent developments and novel applications {#eva12071-sec-0011}
=======================================================

A number of new and ongoing developments are stimulating the application of genetic and genomic tools to study invertebrate pest invasions and outbreaks. First, advances in genomics methodologies promise the double advantage of yielding more detailed and precise information on colonization routes and neutral genetic structure, while also providing the opportunity to identify adaptive loci and to estimate the contribution of adaptation relative to neutral processes in the spread of invasive pests. We are now able to collect data from thousands of markers from coding and/or noncoding regions of the genome at low cost, using high‐throughput genotyping techniques or through resequencing. Moreover, it is foreseeable that over the coming years, it will become feasible to compare entire genomes of many individuals within and between populations. These technical advances will not solve problems related to adequate and representative sampling, nor those associated with the challenges of constructing appropriate models based on realistic scenarios of historical population demography and molecular evolution (Nei et al. [2010](#eva12071-bib-0067){ref-type="ref"}). Nonetheless, the availability of genome‐wide sequence data promises to improve our understanding of the relative roles of neutral versus adaptive processes (Stapley et al. [2010](#eva12071-bib-0086){ref-type="ref"}; Barrett and Hoekstra [2011](#eva12071-bib-0006){ref-type="ref"}; Kirk and Freeland [2011](#eva12071-bib-0055){ref-type="ref"}) and to help tease apart the factors that shape genetic architecture across landscapes.
By applying these advances to pertinent questions in carefully designed studies, we will better understand pest invasions. It has been increasingly recognized that both neutral and *adaptive molecular markers* (Kirk and Freeland [2011](#eva12071-bib-0055){ref-type="ref"}) should be characterized to develop a more integrated understanding of the processes involved in invasion success.

Population genomics and adaptation {#eva12071-sec-0012}
----------------------------------

To develop current and cross‐disciplinary approaches to the study of pest invasions, it is useful to look at case studies from other taxa. A number of methods have been applied to the study of neutral versus adaptive processes in model and nonmodel species over the past several years, and some particularly exciting insights have been generated from work carried out on fish (e.g., Hohenlohe et al. [2010](#eva12071-bib-0047){ref-type="ref"}; Whiteley et al. [2011](#eva12071-bib-0104){ref-type="ref"}; Jones et al. [2012](#eva12071-bib-0052){ref-type="ref"}; Renaut et al. [2012](#eva12071-bib-0076){ref-type="ref"}) and plants (e.g., Eckert et al. [2010](#eva12071-bib-0020){ref-type="ref"}; Hancock et al. [2011](#eva12071-bib-0041){ref-type="ref"}; Strasburg et al. [2012](#eva12071-bib-0087){ref-type="ref"}). Understanding the molecular mechanisms underlying range expansion and adaptation often relies on the large amount of data that can be generated through genomics technologies. By incorporating data from thousands of loci, or even entire genomes, we are now much better equipped to capture both neutral and adaptive genetic loci. One strategy that has been applied successfully to several species is the identification of markers, or 'outlier' loci, that show different patterns of genetic differentiation between populations compared with the entire set of genotyped loci. Hohenlohe et al.
([2010](#eva12071-bib-0047){ref-type="ref"}) genotyped single‐nucleotide polymorphisms (SNPs) in threespine sticklebacks (*Gasterosteus aculeatus*) from freshwater and oceanic populations and not only retraced historical routes of colonization between habitats, but also identified genomic regions that had been subject to balancing or divergent selection. Such methods offer great promise both for retracing colonization routes of invasive pests and for identifying genomic regions that are putatively involved in adaptive processes. The complete genomes of 21 *G. aculeatus* individuals derived from numerous pairs of marine and freshwater population were recently sequenced (Jones et al. [2012](#eva12071-bib-0052){ref-type="ref"}), and the resulting data provide a number of novel insights into adaptive evolutionary processes. For example, only 17% of regions associated with marine--freshwater divergence contained mutations resulting in changes in amino acid sequences, which suggests that regulatory changes provide an unexpectedly large contribution to adaptive evolution. As is the case with *G. aculeatus*, invertebrate pests are often introduced multiple times to novel habitats and geographic areas and may provide a rich source for other studies of parallel adaption. Landscape genomics also offers promising applications in the study of invasive invertebrate pests, because adaptive loci can be identified that are associated with environmental and landscape variables (Manel et al. [2010](#eva12071-bib-0061){ref-type="ref"}). This approach was used to study the molecular basis of adaptation in loblolly pine (*Pinus taeda*) across its range (Eckert et al. [2010](#eva12071-bib-0020){ref-type="ref"}). Allele frequencies at a number of loci were correlated with climatic variables including temperature, accumulated growing degree‐days, precipitation and aridity (Eckert et al. [2010](#eva12071-bib-0020){ref-type="ref"}). 
Based on a comparison with orthologs from the *Arabidopsis* genome, the authors identified a number of candidate genes that may be involved in adaptation to climatic variables including temperature and precipitation. A similar but more comprehensive study of thale cress (*Arabidopsis thaliana*) included 948 accessions derived from the native range of the species, which were genotyped at 215 000 SNPs (Hancock et al. [2011](#eva12071-bib-0041){ref-type="ref"}). A number of regions were strongly correlated with one or more of 13 climatic variables, and these provided a set of candidate genes for adaptation to climate. Likewise, landscape genomics may permit the identification of loci from pest species that are associated with adaptation to host plants, climatic variables, or factors associated with anthropogenic landscape management practices, and identifying such loci should lead to a clearer mechanistic understanding of how pests spread over agricultural landscapes. To derive meaningful information from such large‐scale genomics studies, it is important to carefully select biological systems according to the particular adaptive questions to be addressed. Careful selection of study species and sampling design will minimize problems relating to data interpretation discussed in the '[Shortcomings of the current applications](#eva12071-sec-0010){ref-type="sec"}' section above. One criterion is the selection of pest species and populations for which natural and demographic histories have been or can be well defined (Nosil and Feder [2012](#eva12071-bib-0069){ref-type="ref"}). Additionally, potential selection pressures should be defined *a priori*, and sampling schemes should be established to minimize the confounding influence of environmental variables other than those specifically targeted by the sampling scheme (issues related to sampling are discussed in detail in, e.g., Schoville et al. [2012](#eva12071-bib-0083){ref-type="ref"}).
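As a toy illustration of the environment-association logic used in landscape genomics, the sketch below correlates simulated per-population allele frequencies with a temperature gradient; the population number, SNP count, and effect size are hypothetical, and the single planted locus stands in for a climate-adapted gene.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 20 sampled populations along a temperature gradient,
# genotyped at 500 SNPs (allele frequencies per population).
n_pops, n_snps = 20, 500
temperature = np.linspace(5.0, 25.0, n_pops)            # mean annual temp, degC

freqs = rng.uniform(0.1, 0.9, (n_pops, n_snps))         # neutral background
# One climate-associated locus: its frequency tracks temperature.
freqs[:, 0] = np.clip(0.05 + 0.035 * temperature
                      + rng.normal(0, 0.03, n_pops), 0.0, 1.0)

# Correlate each SNP's frequency with temperature across populations
# (per-locus Pearson r via standardized variables).
t_std = (temperature - temperature.mean()) / temperature.std()
f_std = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
r = (f_std * t_std[:, None]).mean(axis=0)

top = int(np.argmax(np.abs(r)))
print(f"strongest climate association: SNP {top}, r = {r[top]:.2f}")
```

Real analyses must additionally control for neutral population structure (e.g., with Bayesian methods such as that of Coop et al. 2010), because allele frequencies at neutral loci also covary with geography, which often parallels climate.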
For example, parallel evolution has received considerable recent interest (Radwan and Babik [2012](#eva12071-bib-0074){ref-type="ref"}) and is particularly relevant to understanding pest outbreaks because pests are frequently introduced multiple times to different geographic areas over various spatial and temporal scales. Also, they often show evidence of repeated adaptation to similar selection pressures (e.g., commonly used pesticides), and genomics can be used to study the repeatability of evolution. For instance, are only one or a few genes repeatedly involved in adaptation to a particular pesticide, or are different genes involved in adaptation by different populations? To what extent does gene flow between populations mediate the adaptive response and the genomic architecture of divergence between adapted and nonadapted populations (Feder et al. [2012](#eva12071-bib-0026){ref-type="ref"})? To answer these questions, it is important to first identify and sample from multiple nonadapted 'source' populations and populations that represent independent examples of the evolution of pesticide resistance. It should be ensured that any other climatic variables, such as temperature and rainfall, vary randomly among source and resistant populations. Finally, potential sources of gene flow between resistant and nonresistant populations should be identified in the field.

Comparative genomics {#eva12071-sec-0013}
--------------------

The ever‐growing pool of publicly available genomic resources from a variety of species offers opportunities to investigate the molecular basis of adaptation over an evolutionary timescale. There are more than 30 insect species for which complete genome sequences are available, although only a few of these are agricultural pests. Among other species, genomic resources are available for the red flour beetle (*Tribolium castaneum*; full genome sequence; Richards et al.
[2008](#eva12071-bib-0077){ref-type="ref"}), the pea aphid (*Acyrthosiphon pisum*; full genome sequence; IAGC [2010](#eva12071-bib-0050){ref-type="ref"}), the soybean aphid (*Aphis glycines*; partial transcriptome and genome sequence; Bai et al. [2010](#eva12071-bib-0004){ref-type="ref"}), and the legume pod borer (*Maruca vitrata*; partial transcriptome sequencing; Margam et al. [2011](#eva12071-bib-0062){ref-type="ref"}). Moreover, an ambitious initiative known as i5k was launched in 2011 that aims to sequence and analyze the genomes of 5000 insects and related arthropod species by 2016 (Robinson et al. [2011](#eva12071-bib-0079){ref-type="ref"}). One of the most comprehensive and recent genomics studies of an agricultural pest species was carried out on the two‐spotted spider mite (*T. urticae*), which infests numerous crop species globally, including tomatoes, maize, and soybean (Grbić et al. [2011](#eva12071-bib-0037){ref-type="ref"} and references therein). *Tetranychus urticae* populations are notorious for their ability to adapt rapidly to pesticides. The gene families involved in the digestion, detoxification, and transport of xenobiotics, such as cysteine peptidase genes and glutathione S‐transferase genes, were often expanded in *T. urticae* compared with those known from insects such as *Drosophila* species (Grbić et al. [2011](#eva12071-bib-0037){ref-type="ref"}). Moreover, the responsiveness in the expression of a large proportion of these genes to different host plants suggests that they contribute to host adaptation, as well as adaptation to xenobiotics applied for pest control. This finding suggests that repeated duplication of genes involved in detoxification may predispose pest species to rapid adaptation to toxic plant metabolites and commercially produced pesticides. Another study compared the transcriptomes of two invasive white fly species from the *Bemisia tabaci* species complex (Wang et al. 
[2011](#eva12071-bib-0101){ref-type="ref"}) and identified 24 sequences that showed evidence of divergent positive selection between the two species; these included genes involved in metabolism of carbohydrates, proteins, and xenobiotics, which may play a role in host specialization of the species, as well as the evolution of insecticide resistance. It is difficult to carry out comparative analyses between the genomes of taxonomically divergent pest species, because mechanisms of adaptation to similar selection pressures (e.g., pesticides) may not involve homologous genome regions among divergent species. Also, large accumulated differences across the genomes of divergent taxa make it difficult to derive meaningful conclusions from such comparisons. Currently, by examining the genomic architecture of adaptation to particular selection pressures (e.g., pesticides) across multiple species, we may be able to draw conclusions about the gene families and mechanisms (e.g., gene duplication) involved in adaptation. However, our ability to use comparative data to understand genomic divergence and convergence will improve with increasing taxonomic coverage of closely related pest species or population pairs. Eventually, placing genomic data in a comparative phylogenetic framework may allow us to identify genomic features that are specific to pest versus nonpest species among different lineages, or to identify genome responses (through e.g., transcript profiling) that are convergent or divergent between ecotypes and species that are subject to similar or different selection pressures (Whitehead [2012](#eva12071-bib-0103){ref-type="ref"}). Comparative genomics provides high potential to improve our understanding of the mechanisms behind adaptation to pesticides and other selection pressures. Conclusions {#eva12071-sec-0014} =========== It is critical to test alternative hypotheses regarding the factors that promote invertebrate pest invasions (Thrall et al. 
[2011](#eva12071-bib-0091){ref-type="ref"}). A better understanding of evolutionary and demographic processes involved in pest invasions will allow for more rigorous assessment and implementation of management plans (Thrall et al. [2011](#eva12071-bib-0091){ref-type="ref"}). Historically, genetic studies of insect pest invasions have focused on retracing routes of introduction, colonization, and spread of pests, and such studies provide valuable information that can be used to minimize the risk of ongoing human‐mediated introductions. However, there are a number of limitations to such studies, in part because the results can be difficult to interpret, and they may not capture critical ecological and evolutionary factors that determine the success of pest invasions. Recently, increased emphasis has been placed on understanding the biology of invasive invertebrate pests (Pyšek et al. [2010](#eva12071-bib-0073){ref-type="ref"}). As a result of advances in statistical modeling and the ongoing development and falling costs of genomics techniques, we are now at a new frontier in terms of our ability to rapidly collect genomic data and to study ecological and evolutionary processes at the molecular level in invasive invertebrate pests. These advances will not solve methodological problems related to limitations in sampling and will not entirely resolve the challenge of interpreting molecular data given complex historical demography. However, they promise to provide more precise information about colonization routes and expand our understanding of adaptive versus neutral processes involved in pest invasions. Hitherto, these great opportunities have been underutilized in studies of pest invasions and outbreaks. The economic impacts of pests are high, and mitigating these impacts will be a key component of ensuring food security through the coming decades, given the twin challenges of global population growth and climate change. 
The authors thank Jana Collatz for useful comments on the manuscript.
Cardiovascular disease (CVD), the leading cause of morbidity and mortality worldwide, is a complex multifactorial disease which is influenced by environmental and genetic factors. There is substantial evidence on the relationship between diet and CVD risk. An understanding of how genetic variation interacts with the diet to influence CVD risk is a rapidly evolving area of research. Since diet is the mainstay of risk factor modification, it is important to consider potential genetic influences on CVD risk. Nutrigenomics is the study of the interaction between diet and an individual's genetic makeup. Single nucleotide polymorphisms are the key factors in human genetic variation and provide a molecular basis for phenotypic differences between individuals. Whole genome and candidate gene association studies are two main approaches used in cardiovascular genetics to identify disease-causing genes. Recent nutrigenomics studies show the influence of genotype on the responsiveness to dietary factors or nutrients that may reduce CVD risk. Nutrigenomics research is expected to provide the scientific evidence for genotype-based personalized nutrition to promote health and prevent chronic disease, including CVD. It is imperative that healthcare providers, including cardiovascular nurses, are trained in genetics to foster delivery of competent genetic- and genomic-focused care and to facilitate incorporation of this new knowledge into current clinical practice, education, and research.
http://onlinelibrary.wiley.com/doi/10.1111/j.1751-7117.2009.00058.x/abstract
To investigate the factors that affect the rate of photosynthesis. Introduction Key Stage 4 Biology Coursework Aim: to investigate the factors that affect the rate of photosynthesis. Planning Photosynthesis equation: carbon dioxide + water -> glucose + oxygen (using sunlight and chlorophyll); in symbols, 6CO2 + 6H2O -> C6H12O6 + 6O2. Photosynthesis is the process that produces 'food' in plants. The food it produces is glucose. Photosynthesis takes place in the leaves of green plants. Plants need light, carbon dioxide, water and chlorophyll in order to photosynthesise. The leaf is where the food is made. It gets light usually from the sun. The carbon dioxide enters the leaf from the air around it. Water comes from the soil, up the stem and into the leaf. The most important thing is the chlorophyll, which is found in the chloroplasts. The chlorophyll absorbs the energy from the sunlight and uses it to combine carbon dioxide and water to produce glucose. Oxygen is a waste product. These are therefore all limiting factors. The rate of photosynthesis is affected by a number of factors including light levels, temperature, availability of water, and availability of nutrients. If the conditions that the plant needs are improved, the rate of photosynthesis should increase. The maximum rate of photosynthesis will be constrained by a limiting factor. This factor will prevent the rate of photosynthesis from rising above a certain level even if other conditions needed for photosynthesis are improved. The rate of photosynthesis can be measured by measuring the amount of oxygen given off. To do this you place the water plant under a funnel in a beaker full of water. Then place a test tube over the funnel and start counting the bubbles given off. The bubbles are oxygen bubbles.
I know this from preliminary work, because I tested the gas by putting a glowing splint into the test tube and it re-lit, so this proves the gas is oxygen. The more oxygen bubbles there are, the faster the rate of reaction of photosynthesis. Middle For example, when the light intensity was 40 at 5cm the number of oxygen bubbles collected was 40, whereas when the light intensity was 1.6 at 25cm there were only 1.5 oxygen bubbles given off. When I repeated the results a second time, they were fairly similar to the first trial, which supports my prediction. As predicted, when the light intensity increases so does the rate of photosynthesis. I predicted that a level would be reached where increasing the light intensity would have no more effect on the rate of reaction, as there would be some other limiting factor which limits the rate of the reaction. The rate increases steadily as the light intensity increases until near the end of each line, where the rate of increase decreases. This is either because the photosynthesis reaction has reached its maximum rate of reaction or another factor is limiting the rate. The fact that the curve levels off so quickly indicates that there is another limiting factor limiting the photosynthesis. It could be temperature. These tests are being carried out at room temperature, so the temperature would have to be raised another 15°C before the enzymes in the plant's cells were at their optimum working temperature. More tests could be done by using water that was at a higher temperature to see what effect this would have on the photosynthesis rate. It is however impossible to raise the plant's temperature without affecting other factors. For instance the actual amount of oxygen released by the plant is slightly more than the readings would suggest, as some of the oxygen would dissolve into the water.
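The light-intensity readings quoted above (40 at 5cm, 1.6 at 25cm) follow the inverse-square law. A short sketch, assuming the coursework's implied constant k = 1000 (inferred from the quoted readings, not stated in the text):

```python
# Relative light intensity at the plant falls off with the square of the
# lamp-to-plant distance: intensity = k / d**2. The constant k = 1000 is
# inferred from the readings quoted in the coursework (40 at 5 cm, 1.6 at 25 cm).

def light_intensity(distance_cm, k=1000):
    """Relative light intensity at a given lamp-to-plant distance in cm."""
    return k / distance_cm ** 2

for d in (5, 10, 15, 20, 25):
    print(f"{d} cm -> intensity {light_intensity(d):.1f}")
# 5 cm gives 40.0 and 25 cm gives 1.6, matching the quoted readings.
```

This is why halving the distance quadruples the light intensity, and why readings taken at evenly spaced distances bunch up at the low-intensity end of the graph.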
At a higher temperature less oxygen would be able to dissolve into the water, so the readings for the photosynthesis rate could be artificially increased. It is also possible that the photosynthetic reactions in the plant are occurring at their maximum possible rate and so cannot be increased any more. Conclusion It could also be interesting to explore the effects of coloured lights on the rate of photosynthesis, which could lead to the question of whether or not other types of light, such as fluorescent lights or halogen lights, would have a different effect on the rate of photosynthesis. I think my results are fairly accurate because the results plotted on my graph show that they fit into a pattern. Also, when I repeated the results a second time, they were fairly similar to the first trial, which supports my prediction. If you repeated it more times you would get similar results, thus proving a successful method and reliable results. However, I realised that on my second trial the number of bubbles was greater than in the first trial, so I think I should have let the water equilibrate for longer. I think my results are fairly accurate because most points fit on the line of best fit, and the other points are quite close to the line of best fit. The number of oxygen bubbles didn't completely stop increasing, but if I had taken more readings they would have. To improve my experiment I could try other methods. For example, instead of counting the oxygen bubbles, I could measure the volume of oxygen collected. In this method the size of the bubbles would not affect the results, so it would make the results more reliable. To extend my investigation, I could investigate if different coloured lights affect the rate of photosynthesis.
For this investigation I used white light, so as an extension I could split up this light into the colours of the spectrum by using a coloured filter and find out if one colour has a different effect on the rate of photosynthesis than another. I know that plants reflect green light, so I could also investigate if plants absorb one colour more than others. Amreen Khadri 11QWR
http://www.markedbyteachers.com/gcse/science/to-investigate-the-factors-that-affect-the-rate-of-photosynthesis.html
There’s this tendency to associate autocratic leaders with economic prosperity, but that couldn’t be further from the truth. A new study analyzed economies under the rule of dictators from all over the world in the last 150 years and found that dictators are most often linked to weaker economies than democratic states. “In an era where voters are willingly trading their political freedoms in exchange for promises of strong economic performance to strongman figures like Donald Trump, Vladimir Putin or Recep Tayyip Erdogan, it’s important to understand whether autocratic leaders do deliver economic growth,” Dr. Ahmed Skali, an economist at RMIT University, said in a statement. “Our empirical results show no evidence that autocratic leaders are successful at delivering economic growth in any systematic way.” It is true that some dictators have had some very good runs. Lee Kuan Yew, who ruled Singapore with an iron fist between 1959 and 1990, oversaw a flourishing economy (a 6.50x increase in GDP per capita after taking office) in a nation that would later become the seat of Asia’s best health and education systems. Then there’s Sayyid Muhammad Idris bin Muhammad al-Mahdi, the king of the United Kingdom of Libya, who ruled his country from 1951 to 1969. During that period, Libya’s economy saw a 9.78x increase in GDP per capita. However, Skali and colleagues at RMIT University and Victoria University have found that the vast majority of autocratic rulers are associated with weaker economies. Autocrats whose countries experienced larger than average economic growth were found only as frequently as you would predict based on chance alone. Meanwhile, autocrats overseeing poor economic growth were found more frequently than a coin toss would predict. “Taken together, these two results cast serious doubt on the view that autocratic leaders are successful at promoting economic growth,” Skali said.
Besides economic growth, the researchers also analyzed other areas of policy, such as whether autocratic leaders were better at reducing unemployment or improving healthcare than their democratic peers. “That was not the case,” Skali said. What’s more, the researchers found little to no evidence that dictators had any direct influence on their country’s growth trajectories in the 5-10 years after taking power. So it seems that even successful dictators who oversaw growing economies just found themselves in the right place at the right time. In contrast, the economies of poor countries run by autocratic leaders dropped quickly and significantly after the dictator came to power. In other words, dictators seem to have had little positive influence on their countries but tended to have a lot of influence when it came to driving down their nation’s economies and public services. If this is true, why are so many people fond of dictators and the ‘good times’ under their rule? Victoria University behavioral economist Stephanie Rizio, one of the study’s co-authors, explains that humans, as social primates, are inclined to accept and even seek out the authority of a single individual: the alpha primate. “Perhaps this is why we routinely attribute group-level outcomes to the actions of leaders, even when leaders have no control over outcomes, which may lead us to be accepting of autocratic leadership styles,” she said. The study is published during a time of rising “strongman” figures in the world’s political scene, such as Turkey’s Recep Tayyip Erdoğan or Hungary’s Viktor Orbán. As more and more people subscribe to autocratic views, these findings should serve as a wake-up call.
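The comparison against chance described in the article can be sketched with a simple binomial test: does the number of autocrats presiding over above-average (or below-average) growth exceed what a baseline rate would predict? All counts and the 10% baseline below are hypothetical illustrations, not figures from the RMIT study.

```python
from scipy.stats import binomtest

# Hypothetical setup: suppose 10% of all leaders worldwide preside over
# "growth miracles" (or "growth disasters"). Among 100 autocrats, we observe
# 11 miracles and 22 disasters. Are either more common than chance predicts?
baseline = 0.10

miracles = binomtest(11, n=100, p=baseline, alternative="greater")
disasters = binomtest(22, n=100, p=baseline, alternative="greater")

print(f"growth miracles: p = {miracles.pvalue:.3f}")   # consistent with chance
print(f"growth disasters: p = {disasters.pvalue:.5f}")  # exceeds chance
```

Under these illustrative numbers, miracle-autocrats occur about as often as the baseline predicts (a large p-value), while disaster-autocrats occur far more often than chance (a tiny p-value), which is the shape of the study's two headline results.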
https://www.zmescience.com/science/news-science/dictators-weak-economies-0534/
The 2016 NMC Horizon Report described Virtual Reality in Higher Education as “...[providing] contextual learning experiences that foster exploration of real world data in virtual surroundings”. Our team is working with academic teams across the university with a range of devices. We help to manage the user group for the Hololens, a high-end augmented reality technology that suits small cohorts and requires specialist developers. But we are also exploring more accessible forms of virtual reality, suited to scaling up experiences for larger cohorts. We are talking to academics who have used a variety of technologies to create virtual experiences, for fieldwork or otherwise, and aim to share good pedagogic practice in developing new immersive resources. For disciplines that require students to complete fieldwork, virtual opportunities supplement and enhance, but usually don’t replace, existing ‘real world’ learning. Frequently ‘virtual field-trips’ are interactive ebooks, missing the truly immersive experience (and resultant engagement) of the original visit. Using 360° images and/or VR, fieldwork opportunities can be maximised, giving students formative opportunities before and after a visit to consolidate their learning. One example of the new technologies we are investigating is Thinglink, a browser- and app-based tool that bridges the gap between a high-end VR experience and handheld accessibility. Resources can be viewed in 360° via a browser, giving a more interactive (if not completely immersive) experience, or in full VR using an app and headset. This software also allows for ease of development, since teaching staff need only an account and access to 360° image capture. Usually it takes months of training to develop a virtual world, but Thinglink allows this to be done in a short space of time by users with no previous experience. We are also pushing some of the functionality of Articulate Storyline in creating simulations and interactive environments.
Please contact us if you have ideas about virtual experiences for your students that you would like to develop.
https://www.southampton.ac.uk/digital-learning/what-is-it/360degrees.page
Develop strategies, research and coordinate implementation of strategic and long-range planning activities. BASIC FUNCTION Perform planning research, analyses, and report preparation in support of institutional departmental program planning and research. DISTINGUISHING CHARACTERISTICS Under general direction, conduct research, analysis and/or evaluation such as resource allocation and resource loading and leveling to support the development or modification of intermediate and long-range plans. Conduct research, analysis and/or evaluation for the development of planning processes including systems design and implementation in support of institutional program strategic planning. TYPICAL WORK Determine resource allocation and resource loading and leveling by developing baseline plans and schedules, tracking progress, assessing status, and forecasting probable outcomes of variances; Organize and develop a study plan from a general problem statement or statement of objectives; Determine project requirements, identify and research data sources, develop or revise survey questionnaires to obtain data; Perform analysis of data including development of statistical and computer simulation models; Coordinate work activities with others; Provide staff assistance for and attend various administrative committee meetings to review planning problems/developments; Assist in the development of proposals; collect, analyze, and incorporate feedback on plan drafts; Develop planning activities, processes, and parameters including the development of concepts and measurement of activities; Analyze and evaluate planning reports and their impact; Direct the work of others; Perform related duties as required. MINIMUM QUALIFICATIONS - UW Contract Classified DESIRABLE QUALIFICATIONS - Classified Non-Union A Master's Degree in business, health administration, public health or a related field OR equivalent education/experience.
https://hr.uw.edu/files/comp/jobspecs/17204.php
Mathieu (2016) finds investor sentiment to be a significant factor in explaining REIT returns and REIT return volatility in the US. As Freybote and Seagraves (2016) show, institutional investors in particular tend to rely on the sentiment of specialized real estate investors, analyzing the buy-sell imbalance as an indicator of the demand for a particular asset. Based on the aforementioned factors, the objective of this paper is to complement the sentiment literature by applying two methodologies of textual analysis to real-estate-related newspaper headlines, in order to create sentiment measures and test their relationships to US REIT prices. Furthermore, this study analyzes whether real-estate-related news reflects, causes or enhances market performance in the real estate sector. For this purpose, a set of about 130,000 newspaper articles from four different US newspapers, covering the period from 2005 until 2015, was collected. Following the approach of Bollen et al. (2009), sentiment analysis is applied with a term-based methodology, counting words that indicate positive or negative sentiment derived from different research approaches. Moreover, this dictionary-based methodology is supplemented by and compared to the results of a machine learning tool, the “Google Prediction API”. In consequence, qualitative information from news stories and posts is converted into a quantifiable measure by analyzing the positive and negative tone of the information. To test the explanatory power of the created sentiment measures on REIT market movements in the US (FTSE EPRA/NAREIT), a regression model is employed. Due to the unique characteristics of REITs, variables controlling for macroeconomic changes and the general stock market, as well as proxies for the direct real estate market, are included in the model. Results show that the created real estate sentiment measures have significant effects on the REIT market.
The different measures were found to have varying relationships with REIT returns. Furthermore, the created sentiment measures have more explanatory power in times of a declining REIT market than in rising markets. To the best of our knowledge, this is the first research applying textual analysis to capture sentiment in the securitized real estate market in the US. Furthermore, the broad collection of newspaper articles from four different sources is unique, as only one or two sources have typically been used in the literature so far.
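The term-based methodology described above can be sketched as a simple dictionary count over headlines. The word lists here are tiny illustrations of the idea, not the actual research dictionaries used in the study:

```python
# Minimal sketch of dictionary-based sentiment scoring for headlines, in the
# spirit of the term-counting approach described above. The word lists are
# tiny illustrative stand-ins for the real sentiment dictionaries.
POSITIVE = {"gain", "growth", "recovery", "rally", "surge", "strong"}
NEGATIVE = {"loss", "decline", "crash", "default", "slump", "weak"}

def headline_sentiment(headline: str) -> float:
    """Score a headline as (pos - neg) / (pos + neg); 0.0 with no hits."""
    words = [w.strip(".,!?") for w in headline.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / (pos + neg) if (pos + neg) else 0.0

print(headline_sentiment("Strong rally lifts REIT prices"))         # 1.0
print(headline_sentiment("Office slump deepens as defaults rise"))  # -1.0
```

Averaging such scores over all headlines in a period yields a time series of sentiment that can then enter a regression against REIT returns alongside the control variables mentioned above.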
AT the outbreak of the Peloponnesian War in 431 B.C. Thucydides began to write a history of the conflict. His History in eight books gives an account of the hostilities to 411 B.C., but the war, a twenty-seven years struggle between Athens and Sparta for the hegemony of the Greek world, did not end until the surrender of Athens to Lysander in 404. The work is remarkable chiefly for its analysis of the causes that underlay the events recorded. In this respect Thucydides was the first and remains the greatest of philosophical historians. His History contains some fine descriptive passages, and in the substantially authentic summaries of speeches attributed to various prominent men on both sides, some magnificent declamations. (From Thucydides) I--THE BEGINNING OF THE WAR I have written the account of the war between Athens and Sparta, since it is the greatest and most calamitous of all wars hitherto to the Greeks. For the contest with the Medes was decided in four battles; but this war was protracted over many years, and wrought infinite injury and bloodshed. Of the immediate causes of the war the first is to be found in the affairs of Epidamnus, Corcyra and Corinth, of which Corcyra was a colony. Of the Greek states, the most were joined to either the Athenian or the Peloponnesian league, but Corcyra had joined neither. But having a quarrel with Corinth about Epidamnus, she now formed an alliance with Athens, which enraged the Corinthians. They then helped Potidaea, a Corinthian colony but an Athenian tributary, to revolt from Athens. Corinth next appealed to Sparta, as the head of Hellas, to intervene ere it should be too late and check the Athenian aggression, which threatened to make her the tyrant of all Greece. At Sparta, the war party prevailed, although Archidamus urged that sufficient pressure could be brought to bear without hostilities. The great prosperity and development of Athens since the Persian war had filled other states with fear and jealousy. 
She had rebuilt her city walls and refortified the port of Piraeus after the Persian occupation; Sparta had virtually allowed her to take the lead in the subsequent stages of the war, as having the most effective naval force at command. Hence she had founded the Delian league of the maritime states, to hold the seas against Persia. At first these states provided fixed contingents of ships and mariners; but Athens was willing to accept treasure in substitution, so that she might herself supply ships and men. Thus the provision of forces by each state to act against Persia was changed in effect into a tribute for the expansion of the Athenian fleet. The continuous development of the power of Athens had been checked only momentarily by her disastrous Egyptian expedition. Her nominal allies found themselves actually her tributary dependencies, and various attempts to break free from her yoke had made it only more secure and burdensome. Hence the warlike decision of Sparta was welcomed by others beside Corinth. But diplomatic demands preceded hostilities. Sparta and Athens sent to each other summons for the 'expulsion of the curse,' that is of all persons connected with certain families which lay under the curse of the gods. In the case of Athens, this amounted to requiring the banishment of her greatest citizen and statesman, Pericles. To this the Spartans added the demand that the Athenians should 'restore the freedom of Hellas,' and should specifically remove certain trading disabilities imposed on the people of Megara. At this crisis Pericles laid down the rules of policy on which Athens ought to act: rules which required her to decline absolutely to submit to any form of dictation from Sparta. When a principle was at stake, it made no difference whether the occasion was trivial or serious. Athens could face war with confidence. Her available wealth was far greater, a matter of vital importance in a prolonged struggle.
Her counsels were not divided by the conflicting interests of allies all claiming to direct military movements and policy. Her fleet gave her command of the sea, and enabled her to strike when and where she chose. If Peloponnesian invaders ravaged Attica, still no permanent injury would be done comparable to that which the Athenians could inflict upon them. The one necessity was to concentrate on the war, and attempt no extension of dominion while it was in progress. War was not yet formally declared when the Thebans attempted to seize Plataea, a town of Boeotia, which had long been closely allied to Athens. The attempt failed, and the Thebans were put to death; but the Plataeans appealed to Athens for protection against their powerful neighbour, and when the Athenian garrison was sent to them this was treated as a casus belli. PREPARATIONS were urged on both sides; Sparta summoned her allies to muster their contingents on the Isthmus for the invasion of Attica, nearly all the mainland states joining the Peloponnesian league. The islanders and the cities in Asia Minor, on the other hand, were nearly all either actually subject to Athens or in alliance with her. As Pericles advised, the Athenians left the country open to the ravages of the invading forces, and themselves retired within the city. In spite of the resentment of those who saw their property being laid waste, Pericles maintained his ascendancy, and persuaded the people to devote their energies to sending out an irresistible fleet and to establishing a great reserve both of ships and treasure, which were to be an annual charge and brought into active use only in the case of dire emergency. The fleet sailed round the Peloponnese, and the ravages it was able to inflict, with the alarm it created, caused the withdrawal of the forces in Attica.
In that winter Pericles delivered a great funeral oration, or panegryic, in memory of the Athenians who had so far fallen gloriously in defence of their country, in which he painted the characteristic virtues of the Athenian people in such a fashion as to rouse to the highest pitch the patriotic pride of his countrymen, and their confidence in themselves and in their future.
https://www.publicbookshelf.com/public_html/Outline_of_Great_Books_Volume_I/whowasth_f.html
Motive Renewables is committed to serving its customers and meeting their needs and expectations as a minimum, but also strives to exceed them in every way possible. To achieve this, Motive Renewables has adopted the principles and requirements of ISO 9001:2008 to design a comprehensive, co-ordinated quality management system. The system has been implemented across the whole company and embraces all of the activities which impact upon our customers. The Directors of Motive Renewables are committed to ensuring that the system is effective in achieving quality and satisfying customers. Quality objectives have been set across the company, against which performance is measured and reported. The management system is continuously revised to ensure its effectiveness is maintained and improved where necessary. This policy is regularly communicated to Motive Renewables employees and associates, who are made aware of the importance of achieving this policy. The policy is reviewed periodically to ensure its continued suitability and applicability. In the event the policy is changed, it will be re-implemented within the organisation as required.
http://www.motive-renewables.com/quality-policy/
Screen Time tracks how much you’ve used your iPhone or iPad. It also lets you schedule times when you shouldn’t be using apps or set maximum amounts of time you’d want to use specific types of apps, like games. You can use Screen Time as a parental control feature with your child’s iPhone or iPad, too. This feature is new in iOS 12, which Apple will likely release in Fall, 2018. How to Find Screen Time Settings To find Screen Time settings, head to Settings > Screen Time. The “Screen Time” option is just under “Do Not Disturb” near the top of the Settings list. If you only see a “Turn On Screen Time” option here with no data, that’s because Screen Time is currently disabled on your device. You can tap “Turn On Screen Time” to enable it, but you’ll need to wait for your iPhone or iPad to collect data about how you use your device before viewing any informative reports. How to View Reports To view reports about screen time usage, tap the report options at the top of the Screen Time screen. At the top of the screen, you can choose to see a report for only today or the last seven days. You can also tap the “Devices” option at the top right corner of the screen and select the devices you want to see reports from. For example, if you have both an iPad and iPhone, select “All Devices,” and you’ll see a combined report that shows how much you’ve used both devices. You can also select a single device to see only its usage report. The graph here shows you how much you’ve used your device over the past 24 hours or the past week. Long-press a bar if you want to see exactly how much time it represents. This data is also broken down by the specific type of application, so you can see how much time you spent using reading apps, productivity tools, entertainment applications, and games. This tells you exactly how you’re using your time.
Other information shown here includes how your usage today compares to your average daily usage, the length of the longest session on your device today, and the total amount of time you’ve spent using your device in the last week. Look below the screen time graph for more information. Under the “Most Used” section, you’ll see how long you spend using specific apps—or you can tap “Show Categories” to see how long you’ve used specific types of apps. Under the “Pickups” section, you’ll see how many times you picked up your phone or tablet today or in the last week, how often you’ve picked it up, and the times when you most frequently pick it up. Under the “Notifications” section, you’ll see how many notifications you get, when they arrive, and from which apps. This might make you realize that one or more apps are bugging you too much and help you decide whether to turn off their notifications.

RELATED: How to Backup iPhone/iPad after iOS 12 Update

How to Schedule Downtime

Screen Time lets you schedule “downtime,” during which you can only receive phone calls and use apps that you explicitly allow. For example, you might want to set downtime during the hours when you’re supposed to be asleep, which will—hopefully—prevent you from lying in bed on your phone rather than trying to sleep. To schedule downtime, tap “Downtime” on the main Screen Time page and enable the “Downtime” option. Set your desired start and end times here. This setting will be synchronized to all devices you sign into using the same iCloud account, and you’ll see a downtime notification five minutes before your scheduled downtime. When the downtime arrives, all the app icons on your home screen—except the ones you’re allowed to access, like Clock, Settings, and Safari—will be grayed out. If you tap one, you’ll see a message saying you’ve reached your time limit. You can tap “Ignore Limit” to open the app anyway.
You can then tell your iPhone or iPad to remind you to stop in fifteen minutes, or to ignore the limit for the rest of the day. After all, it’s your device, and you can do what you want. This won’t lock you out of your apps—the feature is just designed to give you a helpful nudge if you want it.

RELATED: How To Use Downtime Feature in iOS 12 To Curb Your Smartphone Addiction

How to Set App Limits

iOS now lets you set time limits for specific categories of apps. For example, you might limit yourself to only 30 minutes of games per day, or just one hour inside social media apps. These limits reset every day at midnight. To configure this, tap “App Limits” on the Screen Time page and then tap “Add Limit.” Choose one or more categories of apps and then tap the “Add” button. You can also select “All Apps & Categories” here if you’d like to limit your time in all apps on your phone or tablet instead of specific types of apps. Finally, select the maximum amount of time you’d like to spend in these apps every day. You can choose different amounts of time for different days of the week if you like. For example, you might want to give yourself more time for games and other time-wasting apps on the weekend. As with downtime, even when you’ve exceeded your time limit, you can still bypass it and use the app if you want. These limits are just designed to help you make plans and stick with them.

How to Choose Always-Allowed Apps

Screen Time lets you set apps that are “always allowed,” even during downtime or after you’ve exceeded your app time limits. For example, the Phone, Messages, and FaceTime apps are always allowed by default, ensuring you can still communicate. To configure this list of apps, tap the “Always Allowed” option on the Screen Time page. You can then add and remove apps from the list.
You can remove Messages and FaceTime from your always-allowed apps if you like, but the Phone app is essential, and there’s no way to restrict it.

How to Configure Content & Privacy Restrictions

Content & Privacy Restrictions, while available under Screen Time, are more like traditional parental controls. They let you set limits on App Store purchases, restrict certain types of websites, and prevent someone with the iPhone or iPad from changing system settings. Many of these options were previously available at Settings > General > Restrictions. To set content & privacy restrictions, tap “Content & Privacy Restrictions” on the Screen Time page. You’ll be prompted to set a PIN, which prevents people from changing these settings without your permission. This is particularly useful on a shared iPad, for example. You can then enable the “Content & Privacy” option and configure your desired options here.

How to Set a Screen Time Passcode

Parents can use the Screen Time options to lock down a device, too. For example, you can enable downtime on a child’s iPad to prevent them from using most apps during bedtime hours, or configure app limits to prevent them from playing games all day. To do this, tap “Use Screen Time Passcode” at the bottom of the Screen Time page and then enter a passcode. No one can get more time when an app’s time limit expires or modify Screen Time settings without that passcode.

How to Use Screen Time With a Child’s Account

Screen Time integrates with Apple Family Sharing. If you have one or more child accounts in your family, you can view those accounts under the “Family” section here. Tap an account to enable Screen Time for it, which will let you view reports about your children’s device usage and set limits if you want.

RELATED: How to Recover Deleted iPhone/iPad Data on iOS 12

How to Disable Screen Time

If you don’t like Screen Time and don’t want to use these features, you can disable it.
This will stop your iPhone or iPad from keeping track of how you use your device, so you won’t be able to see reports. Your device will immediately delete its collected data, too. Disabling Screen Time will also stop your device from showing the Weekly Report notification when iOS generates a new Screen Time report. To disable it, scroll down to the bottom of the Screen Time page and tap the “Disable Screen Time” option. Your iPhone or iPad will delete its collected usage data and stop tracking it. You can return here and re-enable Screen Time in the future if you like. Apple will probably start tracking more types of data and provide more Screen Time options in a future version of the iOS operating system. For now, Screen Time gives you the essential data you need to understand how you use your device, and it can provide some helpful nudges if you want them. For parents, Screen Time includes more powerful parental controls, too.
https://blog.syncios.com/how-to-use-and-configure-screen-time-on-your-iphone-or-ipad/
A swimmer wants to cross a river, from point A to point B, as shown in the figure. The distance d1 (from A to C) is 200 m, the distance d2 (from C to B) is 150 m, and the speed vr of the current in the river is 5 km/hour. Suppose that the swimmer's velocity relative to the water makes an angle of θ = 45 degrees with the line from A to C, as indicated in the figure. A) To swim directly from A to B, what speed us, relative to the water, should the swimmer have? Express the swimmer's speed numerically, to three significant figures, in kilometers per hour. We know that the time taken to move 200 m in the y direction (d1) has to be the same as the time taken to go 150 m in the x direction (d2). This means the swimmer is essentially moving along the hypotenuse of the triangle. We can set up a system of equations to solve the problem: x - x0 = vx·t + (1/2)ax·t². There's no acceleration, so that term cancels out. The swimmer is swimming to the left but the current is pulling him to the right; we'll treat the swimmer's velocity as positive and the current's velocity as negative. We know that the swimmer needs to move 150 m to the right; again, from the swimmer's perspective, this is negative. Also, because θ is measured with respect to the y direction, we have to use sin to find the x velocity and cos to find the y velocity (there are other ways of solving the problem, but this is the most direct one). Working in meters and meters per hour (5 km/h = 5000 m/h), we end up with: -150 = (sin(45)·v - 5000)·t. In other words, the swimmer's x velocity is a combination of his own swimming plus the negative effect of the current. Now let's set up an equation for the y distance, but put it in terms of t so that we can plug it into the above equation: y - y0 = vy·t + (1/2)ay·t². Again, no acceleration (the swimmer is moving up the page, but not against gravity), so that term cancels.
We're left with: 200 = cos(45)·v·t. Solving for t, we get: t = 200/(cos(45)·v) = 200/(0.7071v). Now plug this into the first equation: -150 = (0.7071v - 5000)(200/(0.7071v)). Multiplying both sides by 0.7071v/200 gives -150(0.7071v/200) = 0.7071v - 5000, so -0.5303v = 0.7071v - 5000, which gives 1.2374v = 5000 and v = 4040.73 m/h. Converting to km/h as the problem asks, v ≈ 4.04 km/h.
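The algebra above can be double-checked numerically. Eliminating t from the two equations gives v = 5000 / (sin 45° + (150/200)·cos 45°); the short sketch below (plain Python, no special libraries) evaluates that closed form with the problem's given numbers:

```python
import math

# Given quantities (from the problem statement)
d1 = 200.0          # m, distance across the river (A to C)
d2 = 150.0          # m, distance along the far bank (C to B)
v_current = 5000.0  # m/h, speed of the current (5 km/h)
theta = math.radians(45)  # swimmer's angle from the line A-to-C

# From 200 = cos(45)*v*t and -150 = (sin(45)*v - 5000)*t,
# eliminating t gives: v = 5000 / (sin(45) + (150/200)*cos(45))
v = v_current / (math.sin(theta) + (d2 / d1) * math.cos(theta))

print(f"v = {v:.1f} m/h = {v / 1000:.3f} km/h")  # -> v = 4040.6 m/h = 4.041 km/h
```

The tiny difference from the 4040.73 m/h worked out above comes from the rounding of cos 45° to 0.7071 in the hand calculation; to three significant figures both give 4.04 km/h.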
http://toughstem.com/problem-answer-solution/4546/swimmer-cross-river-point-point-shown-figure-distance-distance-speed-vr-current
This initial meeting, between a small boy and a convict, will develop into the central relationship in the book. A kindness that will change the course of Pip's life, though he doesn't realize it. Others were considered gentlemen because of their occupation, for example clergymen, army officers, and members. We could not go fast, because of their lameness; and they were so spent, that two or three times we had to halt while they rested. Fire represents here something much greater than just its physical being: fire is love. This is a respectable but sometimes dull recounting of the notorious novel, being finely performed, splendidly staged with adequate sets, stunning visuals, and interesting drama of a well-known story. The man tells Pip that if he wants to live, he'll go down to his house and bring him back some food and a file for the shackle on his leg. After venturing out into the wide world in search of his expectations, Pip initially seems not to care about Joe or the Forge. Further, one of those two was already re-taken, and had not freed himself of his iron. What were the major components of these explanations, and why were contemporaries so eager to search for a reason for the calamity other than simple accident? Any attempt at bringing a Dickens work to the screen would be an awesome task to accomplish. Dickens is probably the most famous, and surely the most beloved, author of those you will read in this class for your novel assignments. Mrs. Joe was, in her refusal to see anything at all in Pip, an obstacle to great expectations.
Later on, at a musty mansion, Pip meets an old woman, Miss Havisham (Helena Bonham Carter, though Meryl Streep was approached for the role but turned it down due to scheduling conflicts), and a beautiful girl called Estella (grown up, a haughty beauty: Holliday Grainger, though Rooney Mara turned down the role) who has been raised by the eccentric Miss Havisham. But not quite, for they never did it. Knowing what I knew, I set up an inference of my own here. This man has a badly bruised face and wears a broad-brimmed hat. Helena Bonham Carter was asked to appear by Newell, and accepted the role after some initial apprehension, while Irvine was initially intimidated by the thought of appearing on screen as Pip. But how does that apply to his relations to Joe, Biddy, and Magwitch now? Early in the book, during the opening chapter, Pip the character is a child, and Pip the narrator focuses. Joe's pork pie by himself, thus getting Pip off the hook. Analysis: Pip's materialism stems from his immaturity and having set his sights on the superficial. During the dinner, Pip nervously steals a piece of bread. Their pieces were cocked and levelled when we all ran in. Let him turn those eyes of his on me. Pip used the money he was getting from Magwitch to finance Herbert's dream of becoming a business owner. Magwitch was left without fire, and therefore without love. Pip, who was originally mortified to learn that his fortune came from someone so far beneath Estella, now learns that Estella is the daughter of his secret benefactor and therefore springs from even humbler origins than himself. A bitter sleet came rattling against us here on the east wind, and Joe took me on his back. The novel revolves around the growth of a young and orphaned boy named Pip. Human nature separates humans from the rest of the animal kingdom.
Quote: I promised myself that I would do something one of these days, and formed a plan in outline for bestowing a dinner of roast beef and plum pudding, a pint of ale, and a gallon of condescension upon everybody in the village (Chapter 19). Quote: I did really cry in good earnest when I went to bed, to think that my expectations had done some good to somebody (Chapter 37). And the thing that usually prompts him to think of the Forge is the presence of fire. In the beginning of the novel, prior to Pip being exposed to the world he feels that he can satisfy his expectations, Joe and Pip are equals — the humbleness and loyalty that. Newell assured her that she was the right age, as the character in the book is in her forties, and she accepted the role. He lies, partly in spite, but also because he is sure that the two would not understand the situation at the Satis House even if he described it in detail. Analysis: Pip realizes he has been anything but a gentleman. We were at a loss to find a suitable attendant for her, until a circumstance happened conveniently to relieve us. Dickens, writing this book toward the end of his life, is speaking directly of his own youthful desires and those of his father as well. Fiennes hadn't read a great deal of Dickens' work prior to gaining the role. The convict then gets into a fight with another runaway convict and is taken back to jail. Other times the reader is left out of the secret, but we are given the impression that it is an important thing that we need to find out, as in the case of the two convicts. Joe of the things which are missing from the kitchen. Chapter 3: The next morning, Pip sneaks out of the house and back to the marshes. There are numerous explanations that attribute to the belief in either. I could see none, I could hear none.
Pip himself tries so hard to run away from the love that the fire of the Forge had made for him by focusing his love on everything and anything else… his expectations, Estella, his new lifestyle… and this almost results in his being consumed too, in a fiery fever, but Joe comes to save him from it. Pip associates with Magwitch, Orlick with Magwitch's bitter enemy, Compeyson. But when, in the clearer light of next morning, I began to reconsider the matter and to hear it discussed around me on all sides, I took another view of the case, which was more reasonable. Quote: I want to pursue that subject you mentioned to me when you were last here, and to show you that I am not all stone. As to the strange man; if he had come back for his two bank-notes there could have been no dispute about them, because my sister was fully prepared to restore them. The author uses these symbols most frequently in the novel. Besides, there had been no altercation; the assailant had come in so silently and suddenly, that she had been felled before she could look round. Pip has a mysterious benefactor named Abel Magwitch, who is a convict. Pip regards Biddy as a sister; Orlick's intentions towards her are less honorable. The novel is a detailed story of Pip's life and how he changes throughout the novel. Analysis: Pip learns early on what Estella and Miss Havisham's plans are, yet he continues to pursue her.
http://seikyusho.jp/great-expectations-fire.html
The Roadtechs Nuclear Job Board

SUMMARY/OBJECTIVE: The Outage Coordinator will coordinate and schedule outages on the electrical transmission system for various line construction projects across the state of NH.

ESSENTIAL FUNCTIONS:
- Review transmission line and substation project schedules and coordinate transmission outages to minimize grid instability and maximize asset availability.
- Work closely with project management, the transmission switching authority, and vendors to optimize the scheduling of outages, making the most effective use of resources, minimizing the length of outages, and avoiding removing the same assets from service multiple times.
- Work with project management and vendors to develop contingency plans for emergency restoration of lines.
- Develop and submit outage requests to the Transmission Switching Authority.
- Attend project meetings regularly to drive outage coordination.
- Meet regularly with representatives of the Transmission Switching Authority to review outage request issues and status.
- Develop lessons learned and corrective actions from switching.

QUALIFICATIONS:
- Excellent written and verbal communication skills.
- Excellent facilitation skills.
- Seasoned, experienced outage coordinator who is adaptable, collaborative, conscientious, critical-thinking, outcome-driven, and professional.
- A minimum of 7-10 years' experience developing, reviewing, and submitting transmission system outage requests.
- Skilled in the use of the following tools: MS Scheduler or Primavera P6, Excel, Word, and PowerPoint. Maximo is a plus.

© Copyright - Roadtechs®, LLC. - All rights reserved.
https://www.roadtechs.com/nuke/wwwboard/getpost.php?rec_nbr=811054
BACKGROUND OF THE INVENTION The invention relates generally to analog-to-digital (A/D) converters having a limited number of quantization levels. Digital quantization of analog signals may be accomplished using standard A/D conversion techniques. However, in some instances an existing A/D converter having a fixed number of quantization levels cannot provide adequate resolution when used with certain input signals. It is, therefore, an object of the present invention to provide a method for enhancing the signal resolution of an A/D converter without undesirably affecting the resulting digital signal. SUMMARY OF THE INVENTION According to the invention, a second waveform is added to a first analog signal, forming a composite signal. The second analog waveform has a peak-to-peak amplitude comparable to the maximum peak-to-peak amplitude of the first analog signal, has zero average value, and has a larger period than the first signal. The composite analog signal is converted to digital form using an A/D converter having a limited number of quantization levels. The output signal is then integrated to reduce undesired effects of the added second signal. For a better understanding of the present invention, and its benefits, reference is made to the following description taken in conjunction with the accompanying drawing, and its scope specified in the claims. BRIEF DESCRIPTION OF THE DRAWING FIG. 1 is a spread spectrum receiver embodying the present invention. DETAILED DESCRIPTION OF THE INVENTION In the spread spectrum receiver of FIG. 1, Antenna 10 receives a spread spectrum transmission which is down-converted in Receiver/Down-Converter 12. Resulting signal 15 is a random-like waveform whose peak-to-peak amplitude variation would use few of the available quantization levels of A/D converter 18 if applied directly thereto. For example, a typical signal may only use 4 out of 256 quantization levels of an 8 bit A/D converter.
The reason for this is that it is necessary to provide sufficient dynamic range overhead for interfering signals which may arrive at antenna 10. The resulting digital signal, therefore, would suffer from a loss of resolution. This problem is remedied by employing the invention's methodology. According to the invention, a second analog waveform 14, supplied by Reference Waveform Generator 13, is added to signal 15 by Adder 16. Analog waveform 14 has a peak-to-peak amplitude comparable to the maximum peak-to-peak amplitude of signal 15. Waveform 14 must also have zero average value, and a larger period than signal 15. As used herein, comparable amplitude means that waveform 14 has an amplitude equal to or greater than the maximum peak-to-peak amplitude variation of signal 15. As an illustration, using the previous example given above for signal 15, waveform 14 would have a peak-to-peak amplitude equal to or greater than 4 quantization levels of an 8 bit A/D converter. Such a waveform, for example, may be a triangular wave having a period which is 16 times greater than the period of signal 15, and having zero average value. Resulting composite signal 17 is converted to digital form by A/D converter 18, which may be any conventional high speed unit. The resulting digital composite signal 19 is multiplied with reference signal 21 in Multiplier 22. Signal 21 is a de-spreading function signal corresponding to the spread spectrum transmission received at antenna 10 and has zero average value. Signal 21 is provided by direct sequence pseudo-noise (DSPN) Receiver Reference Code Generator 20. Digital Integrator 24 integrates the resulting signal 23 over the length of an information symbol (bit) that is significantly longer than the period of waveform 14, thereby causing the effects of both waveform 14 and signal 21 to average to zero.
Therefore, output signal 25 is the desired digital signal with increased resolution and without undesired effects from signal 14 having been added to signal 15. Using this process, the effective resolution of an 8 bit A/D converter may be increased to 11 bits. While there has been described what is at present considered to be the preferred embodiment of this invention, it will be obvious to those skilled in the art that various changes and modifications may be made therein without departing from the invention and it is, therefore, aimed to cover all such changes and modifications as fall within the true spirit and scope of the invention. This invention was made with government support under Contract F30602- 83-C-0107 awarded by the Department of the Air Force. The government has certain rights in this invention.
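The core mechanism described above — add a zero-mean dither waveform spanning several quantization levels, quantize coarsely, then average — can be illustrated with a short simulation. This is only a toy sketch of the dither-and-integrate idea, not the patented receiver: the quantizer step, signal level, dither amplitude, and period below are arbitrary values chosen for demonstration.

```python
def quantize(x, step=1.0):
    """Coarse A/D conversion: round to the nearest quantization level."""
    return round(x / step) * step

# Hypothetical input: a constant 0.3 fed to a quantizer whose step is 1.0.
# Without dither, every sample quantizes to 0.0 and the level is lost.
signal = 0.3
n_samples = 4096
plain = sum(quantize(signal) for _ in range(n_samples)) / n_samples

# Triangular dither: zero mean, peak-to-peak of four quantization steps,
# with a period (64 samples) much longer than one sample of the signal.
period, amp = 64, 2.0
def tri(n):
    phase = (n % period) / period            # 0 .. 1 over one period
    return amp * (4 * abs(phase - 0.5) - 1)  # -amp .. +amp, zero mean

# Averaging (integrating) the dithered, quantized samples recovers the
# sub-step signal level that plain quantization destroyed.
dithered = sum(quantize(signal + tri(n)) for n in range(n_samples)) / n_samples
print(plain, dithered)  # plain is 0.0; dithered comes out close to 0.3
```

The averaged dithered output lands near 0.3 (limited by the granularity of the dither samples), while the undithered output is stuck at 0.0 — the same effect that lets the patent raise an 8-bit converter's effective resolution.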
Accountability for behavior is a fundamental key to empowerment in interpersonal relations. This is the third article in a series on how to empower yourself not only in relationships with persons of deficient or disordered character, but in all facets of everyday dealings with others. Some of the “tools” of personal empowerment we’ve discussed thus far include never accepting excuses for inappropriate behaviors and judging actions, not intentions. Knowing your own wants and needs and asserting yourself in pursuit of those needs is one part of the equation. The other part is expecting others to be as responsible in their responses to you once you have made your concerns known. One of the things I noticed early on in my work with individuals who were in some kind of relationship with a problem character was how hesitant they were to deal with them directly. They always seemed to be “walking on eggshells” with that person, carefully weighing everything they said. They took on an inordinate sense of responsibility for approaching issues in a manner that would not offend, upset, or irritate the other party. They also seemed to think that approaching issues indirectly and not standing up for what they really wanted was less risky than self-assertion. What I later learned was that oftentimes they had been so used to not getting responsible responses to any of their more direct approaches that they quit trying. A key tool of personal empowerment is to make your needs and wants known in a clear, simple, straightforward, and direct manner. And, most especially, when you ask questions, ask them in simple, direct, straightforward terms, doing your best to leave out any ambiguity.
Once you have spoken up clearly for yourself, it’s important to expect simple, direct, straightforward, and unambiguous responses and answers to questions. Anything short of that is likely to represent an attempt at manipulation. So many times people have told me that after seeking a simple answer from the party they were dealing with and after being met with a barrage of diversionary and other tactics, they ended up asking themselves: “Now, what exactly was my question, again?” If you don’t get a simple, direct response, don’t get sidetracked or succumb to any diversionary or other tactics. Ask again. Stay focused and ‘on message’. Make it clear that you’re steadfast in pursuit of your legitimate issue. That’s the way to maintain self-assertion in an atmosphere in which someone is trying to take advantage of you, manipulate you, or diminish your power and effectiveness.
This is the third post in a series on how a weaker global economy can threaten the strengthening recovery in the US. The first post in the series is the introduction, which is followed by three parts. Part I delves into the drivers of growth in the US and in the rest of the world. This post, Part II, explains the basics of an open economy and analyzes the recent rise of the US dollar and the role of the trade balance in the US economy. Part III ties it all together by discussing the importance of demand to the US and global economies. In this part we discuss the basics of an open economy and the importance of exports and the trade balance to the US economy. Although the main driver of growth in the US is consumption demand, exports have been playing an increasingly important role since the financial crisis. The dollar is central to US exports and imports, and a rising dollar, although reflective of a strong US economy, can hurt US export competitiveness and shift US consumer demand overseas. If you already understand the concept that exports add to and imports subtract from GDP, and are familiar with what an exchange rate is and what makes it stronger or weaker, then you can skip the first section and proceed to the following one on the dollar and the US trade balance.

The Basics of an Open Economy

To better understand how a weak global economy can affect a single national economy, one must look at the channels through which an economy links with other economies beyond its border – international trade and finance. The overall GDP of an economy can be measured as the sum of consumption, investment, government spending, and the trade balance (exports minus imports). Rephrasing that sentence into a simple and well known equation, we have: GDP = Cons. + Inv. + Gov. + (Exports − Imports). Note that exports contribute positively to GDP, while imports subtract from GDP.
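The expenditure identity can be made concrete with a quick toy calculation. The figures below are entirely hypothetical, chosen only to show the arithmetic of the equation:

```python
# GDP = Consumption + Investment + Government + (Exports - Imports)
# All figures are hypothetical, in billions of dollars.
consumption = 12_000
investment = 3_000
government = 3_200
exports = 2_300
imports = 2_800

trade_balance = exports - imports   # -500: a trade deficit subtracts from GDP
gdp = consumption + investment + government + trade_balance
print(gdp)  # -> 17700
```

Here the 500 billion trade deficit shaves GDP from 18,200 down to 17,700 — exactly the sense in which imports "subtract" in the identity.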
This is because exports reflect demand (albeit from abroad) for national goods made and sold by firms inside the country. Imports are the opposite, and reflect consumers sending their money abroad to buy foreign products, benefitting incomes in foreign countries. Note that this does not necessarily mean that exports are always good and imports are bad. Countries engage in trade with each other because it benefits them – some in supplying goods because they can produce more efficiently or because domestic demand is not fully developed, and others in demanding them because they have the income (or credit) and the need. If the trade balance becomes too large, however, significant risks can emerge. If a trade deficit, for example, is largely financed through credit (borrowing money to buy imports) and becomes too large relative to GDP, then the associated risks of too much debt for households and firms can spread across borders and affect more economies. This is because the money sent abroad is borrowed money, often resulting in cross-border liabilities. The Financial Times has a great animation explaining why trade balances matter. When products are bought and sold internationally, payments must also be sent and received internationally. This requires at least one party in the transaction to exchange its currency for the currency of its trading partner in order to pay for the product. This is where the exchange rate comes in – how much one currency is worth in terms of another currency. If one currency, say the British pound, is worth a lot in terms of another currency, say the Chinese yuan, then a single pound can go a long way in buying Chinese products whose prices are set in yuan. The final amount paid in pounds for a Chinese import is then based on the price in yuan and the exchange rate used by the British buyer to convert her currency into yuan.
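The pound/yuan example can be worked through with a small calculation. Both the price and the exchange rates below are hypothetical, picked only to illustrate how a stronger currency cheapens imports:

```python
# A Chinese product priced in yuan, bought by a British buyer.
price_yuan = 450.0       # hypothetical price set by the Chinese exporter
yuan_per_pound = 9.0     # hypothetical rate: 1 pound buys 9 yuan

cost_pounds = price_yuan / yuan_per_pound
print(cost_pounds)       # -> 50.0 pounds

# If the pound strengthens so that 1 pound buys 10 yuan, the
# same product becomes cheaper for the British buyer.
stronger_rate = 10.0
cost_after = price_yuan / stronger_rate
print(cost_after)        # -> 45.0 pounds
```

Run the logic in reverse and you get the point made later about the dollar: as a currency strengthens, goods priced in that currency become more expensive for foreign buyers, eroding the exporter's competitiveness.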
This is why both prices and exchange rates are important for export competitiveness (in this example, the Chinese are the exporters). Like the price of any normal product, the price of a currency (the exchange rate) increases when demand for that currency increases. Demand for a currency can rise for two fundamental reasons: interest rates on government bonds rise (higher returns), and growth and investment prospects improve (via stock markets or foreign direct investment). People abroad would want to send their money to another country to buy its products or to invest there if that country can offer higher or more stable returns. In the US today, government interest rates are not rising but are higher and considered safer than in other developed countries. Growth, though, is rising and getting stronger, as established in Part I. This is a big reason for the rise in the dollar and is where demand comes in. In general, rising consumer demand applies to both domestic and imported goods. As more disposable income is available to consumers, part of that increase will go to imported products (economists call this the marginal propensity to import). With that in mind, a strengthening US economy and a weakening global economy are most clearly reflected in the recent rise of the US dollar and the widening US trade deficit.

The Dollar and the US Trade Balance

In mid-April 2014, after a few consecutive months of strong job creation and around the time when stronger GDP growth was beginning to gather momentum, the dollar began its consistent and relatively rapid rise. Since then, the dollar has increased by over 10 percent relative to a broad basket of currencies (17 percent against major currencies), bringing it back to where it was in the wake of the Lehman collapse in 2008. A dollar in demand is typically welcome news for the US. It reflects strengthening economic prospects, as described above, and increasing investor attention to the US market.
It also reflects weakness and lack of sustained growth in other parts of the world, further encouraging investor demand for US assets. Higher interest rates in the US (though still historically low) than in other developed markets, and expectations for the Federal Reserve to raise interest rates sooner rather than later, also support the chances that the US dollar could continue its rise (this relationship is explained above). Indeed, divergent economic performance between the US and the rest of the world encourages the rise of the dollar, which could hurt US export competitiveness. The two are interrelated, as weak external demand (demand for US products abroad) is made weaker the stronger the dollar gets (as products priced in dollars become more expensive in other currencies). Some countries are more reliant on exports for growth than others. In the US, consumption demand is the largest component of GDP and contributor to GDP growth. US exports, up 17 percent from their pre-recession peak, have increased in importance, however. Imports are up 4 percent from their pre-recession peak. Exports have accounted for 13 percent of GDP, yet over 30 percent of GDP growth, since the recession officially ended in the third quarter of 2009. This is in comparison to a 10 percent share of GDP before the recession and a 26 percent share of GDP growth. Separating out the last three quarters of 2014, when the dollar began and continued its surge, exports had accounted for 35 percent of growth before dropping to 20 percent. Though the US oil boom drove its trade deficit to a four-year low in late 2013, the trade deficit rose in 2014 by 6 percent despite a 55 percent drop in the price of oil that year. The recent widening of the US trade deficit last year is a result of both a 2.9 percent drop in exports and a 3.4 percent rise in imports.
This could suggest that both the rising dollar and rising US consumer demand are at play, through declining export competitiveness and weak external demand on the export side, and stronger purchasing power for a resurgent US consumer on the import side. Weakness in the rest of the global economy would fuel this, and that appears to be the case with the US’s largest trading partners outside of North America – the euro area and China. The euro area is the US’s largest export market outside of North America, and its third largest source of imports behind China and neighboring Canada. Weak demand there is therefore a concern for US exports: the euro area as a whole is running a current account surplus of close to 3 percent of GDP, with Germany running its largest trade surplus, at 7.4 percent of GDP, thanks largely to exports to the US. A surplus implies that a country benefits more from demand in other countries than from demand within its own borders. The euro has fallen 17 percent against the dollar since early last year, and the ECB’s quantitative easing program suggests that, despite any hopeful uptick in demand in the euro area, the euro is likely to remain weak vis-à-vis the dollar relative to its past. China, the US’s second largest export market outside of North America and its overall largest source of imports, is the largest driver of the US trade deficit. From 2010 to 2013, the renminbi (China’s currency, whose principal unit is the yuan) was allowed by China’s central bank, which controls its value, to appreciate 10 percent against the dollar. Amidst the troubles in the Chinese economy that were mentioned in Part I, China has arguably “allowed” the renminbi to depreciate again, falling 3 percent against the dollar last year.
Though exports and imports have declined in China, and the renminbi remains stronger vis-à-vis the dollar than in 2013 when it started to depreciate again, it will be important to see to what extent the Chinese increasingly look towards external demand as a source of growth again. In that scenario, the US, with its rising trade deficit, could be directly affected. With increasing pressure on a growing US trade deficit, consumer demand in the US could be increasingly sought after if other parts of the world fail to stimulate their own demand. A stronger US dollar would only add to the pressure for the trade deficit to widen even further. We have now seen how a rising dollar impacts the trade balance through declining exports and rising imports. In Part III, we tie it all together by focusing on the import side of the trade balance and the global quest for badly needed consumer demand.
https://karimfoda.com/2015/02/18/part-ii-the-rise-of-the-dollar-and-the-us-trade-balance/
In our experiment, we compared the amount of bacteria found on currency from Canada, the United States, Taiwan, and Hong Kong. Afterwards, we counted the number of colony-forming units (CFUs) cultivated on each individual agar plate and compared the results. Meanwhile, Gram staining and catalase tests provided clues as to what the bacteria on the plates could be.
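For context, colony counts like these are usually converted to a concentration with the standard plate-count formula, dividing by the dilution factor and the volume plated. A minimal sketch with hypothetical numbers (the dilution and volume below are illustrative, not from this project):

```python
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """Standard plate-count formula:
    CFU/mL = colonies / (dilution factor * volume plated)."""
    return colonies / (dilution_factor * volume_plated_ml)

# Hypothetical example: 87 colonies counted on a 10^-3 dilution plate,
# with 0.1 mL of the dilution spread on the plate.
print(cfu_per_ml(87, 1e-3, 0.1))  # 870000.0 CFU/mL in the original sample
```

Comparing raw colony counts across plates is only meaningful when the same dilution and plated volume are used for every sample, which is presumably why the counts here could be compared directly.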
http://vdsf.sfiab.com/project_summary.php?year=2008&pn=2505
The theory of relativity has been described as one of the more complicated of Albert Einstein’s theories to wrap your head around. Consisting of the two interrelated theories of special relativity and general relativity, Einstein’s theory of relativity helped transform the theoretical physics and astronomy that had previously been built on the work of Sir Isaac Newton. Special relativity is the theory that describes motion at a significant fraction of the speed of light. It was originally proposed by Einstein in 1905 to explain discrepancies between Newtonian mechanics and electromagnetism, and quickly became a cornerstone of physics. It was only further strengthened by his theory of general relativity. Described as “the most beautiful of all existing physical theories”, general relativity was published in 1915 and generalizes special relativity while refining Newton’s law of universal gravitation, creating a description of gravity as a geometric property of spacetime – essentially the curvature of space. With these two theories described and well accepted within the community, the theory of relativity as a whole took shape. Einstein’s theory put forward the concept that space and time form a single entity – spacetime – rather than two separate ones, along with the relativity of simultaneity, new kinematics, gravitational time dilation and length contraction. His work greatly improved our knowledge of particle physics and paved the way for the nuclear age, whilst at the same time helping explain some of the extraordinary phenomena in the universe such as neutron stars, black holes and gravitational waves. If that sounds like a fairly important theory, it is because it is. Einstein’s theory of relativity is arguably one of – if not the – biggest and most important scientific theories ever put forward as far as space and physics are concerned. However, it may be incomplete.
Questions regarding quantum gravity and spacetime singularities remain open, with answers yet to be found. That isn’t necessarily a bad thing, though: as scientists continue to pore over Einstein’s theory to try and figure out how it can answer their questions, they ensure that it will continue to build on its already unparalleled legacy. The way the theory was originally set up leaves scientists with a very broad field of research and many potential leads. It continues to be one of the most researched theories in science – as recently as this year, gravitational waves of the kind described in Einstein’s work were detected by the Advanced LIGO team.
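The time dilation and length contraction mentioned above are both governed by the Lorentz factor of special relativity, γ = 1/√(1 − v²/c²). A quick numerical sketch of how rapidly the effect grows near the speed of light:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition)

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2): a moving clock runs slow by a
    factor of gamma, and lengths along the motion contract by the same factor."""
    if abs(v) >= C:
        raise ValueError("speed must be strictly below c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At 10% of c the effect is tiny; at 99% of c it is dramatic.
print(round(lorentz_factor(0.1 * C), 5))   # 1.00504
print(round(lorentz_factor(0.99 * C), 3))  # 7.089
```

This steep non-linearity is why relativistic effects are invisible in everyday life yet dominate the behaviour of particle accelerators, neutron stars and other extreme environments.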
https://www.cosmicnovo.com/2017/08/22/einsteins-theory-relativity/
This is Part 4 of a chess rules series covering all the basics. This section covers the Pawn, for both the White and Black pieces. Pawns look simple, but several special rules apply to them, so it’s important to understand which applies in each situation. Starting Position Each player begins with eight pawns: White’s line up on the second rank and Black’s on the seventh, forming a wall in front of the other pieces. Movement A pawn moves straight forward one square at a time, and it is the only piece that can never move backwards. On its very first move, a pawn may instead advance two squares, provided both squares in front of it are empty. Capturing Unlike every other piece, a pawn does not capture the way it moves: it captures one square diagonally forward, to the left or the right. A pawn can never capture straight ahead, which means a pawn blocked by an enemy piece directly in front of it cannot advance at all. En Passant If an enemy pawn uses its two-square first move to land directly beside one of your pawns, you may capture it “in passing”, as if it had only moved one square. This right lasts for one move only; if you do not take en passant immediately, the option is lost. Promotion When a pawn reaches the far end of the board – the eighth rank for White, the first rank for Black – it must be promoted to a queen, rook, bishop or knight of its own colour. Because the queen is the strongest piece, promotion to a queen is by far the most common choice, and it is why a single advanced pawn can decide an endgame even against a stronger opponent.
https://fangbites.com/archives/38
Welcome to 1.63 meters to inches. The input in meters is often written using the unit symbol m, whereas the result in the United States customary unit inch is abbreviated as in or ″. Here you can find all about 1.63 m in ″, including a converter as well as the formula. If you have been looking for what is 1.63 meters in inches, then you are right here, too. You may overwrite the length in meters in the calculator right below, 1.63; our application then conducts the math automatically. 1.63 Meters in ″ How many ″ is 1.63 meters? With the formula explained on our page “Meters to Inches”: [in] = 1.63 m / 0.0254, we get the following result, rounded to 5 decimal places: 1.63 m = 64.17323″. To convert the units you have to divide the metric unit of length by 0.0254. The result of the division, 64.17323 inches, is the height, length or width in the imperial and US customary unit of length. Similar conversions on inchestometers.com include, for example: Ahead is more information in the context. Convert 1.63 Meters to Inches By reading so far, you know everything about the 1.63 m to inch conversion. It’s a simple division. Visitors who have come here in search of, for example, 1.63 meters to in, have definitely found all their answers, too. This also applies to 1.63 m in ″, 1.63 meters to ″ and lots of similar search terms such as, for instance, 1.63 m to inches. Note that you can find many meters to inches conversions, including 1.63 meters to inch, by means of the search form we have placed in the sidebar. Give it a try right now! Enter, for example, one point six three meters in inches, or something alike. The result page contains all relevant posts. BTW: 1.63 meters in related units is: - 5.34777 feet - 5 ft 4.17 in (feet and inches) - 1.78259 yards The following paragraph wraps our content up.
Summary We summarize our content as follows: 1.63 m = 64.17323″. If our information about 1.63 m to ″ has been useful to you, please share this post by pressing the social buttons, and don’t forget to bookmark our site. You are approaching the end of this post about 1.63 meters to inch. For questions and comments on 1.63 meters to ″ use the form below. Thanks for visiting our page about 1.63 m in inches.
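The conversion described in this post is a single division by 0.0254, since one inch is defined as exactly 0.0254 meters. As an illustrative snippet (not the site’s own calculator):

```python
def meters_to_inches(m):
    """Convert meters to inches: one inch is defined as exactly 0.0254 m."""
    return m / 0.0254

# The example from this post, rounded to 5 decimal places:
print(round(meters_to_inches(1.63), 5))  # 64.17323
```

Dividing by 12 then gives feet (64.17323 / 12 ≈ 5.34777 ft), matching the related units listed above.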
https://inchestometers.com/1-63-meters-to-inches
• Review Contractor’s Baseline Construction Schedule for conformance with the Plans and Specifications to ensure proper WBS breakdown, sequencing, critical path and logic ties, and inclusion of all activities, resource loading, etc. necessary for approval. o Perform and coordinate a constructability review of Contractor’s Baseline Schedule submittal and identify any missing activities, float sequestering or suppression techniques/issues, or any other conflicts or shortcomings. o Ensure that project WBS elements are properly coded and resource loaded. o Provide written evaluation, comments and responses on the Baseline Schedule submittal. o Coordinate with the Contractor’s Scheduler as necessary for clarification and resolution of schedule deficiencies. • Review Contractor’s Monthly Schedule updates for conformance with the plans and specifications, identifying any logic changes, inaccurate progress reporting, critical path changes or other pertinent items. o Identify variations from the plan, and analyze and report on the cause of the variations. o Review construction progress and inspection reports and maintain an accurate “as-built” schedule for use during construction and for evaluation of Contractor’s Monthly Schedule updates and any Time Impact Analysis (TIA) submittal. o Attend weekly construction meetings and monitor 3-Week Look-Ahead Schedules for accuracy, identifying any inconsistency with the Baseline Schedule or updates. Promptly report schedule slippage, other unsatisfactory performance or other variances. o Identify any critical path changes or near-critical path changes that could impact individual phases, milestones or project duration. o Provide a written response and approval recommendation for each submittal. o Coordinate schedule comments and corrections with Contractors. • Review Time Impact Analyses (TIA) submitted by the Contractor and provide written responses, evaluations and recommendations for time extensions as appropriate.
Identify schedule variances and opportunities and proactively develop mitigation strategies for the Project Management team. • Provide LAWA with “what if” schedule impacts for contemplated additional work or potential re-sequencing of phases or construction activities. Evaluate and forecast project status against the established baseline schedule. Assess the impact to project milestones and report status to Management in order to mitigate project overruns and schedule slippage. • Provides direct scheduling-related support on multiple projects, reporting to multiple Project Managers. • May require monthly support for projects in Planning or Design, along with supporting Construction projects within a Master Program Schedule Plan, reporting to the Airside Element Management Team. • May be required to maintain a master schedule system to track project interfaces for time-frame impacts to electrical airfield systems, mechanical systems, airfield operations and other aviation-related disciplines and projects. • Provides development and analysis of man-hour resources, equipment resources and cost-loaded, time-phased schedules. • Prepare detailed plans, time-scaled charts and diagrams, and track various quantities to measure construction performance. • Maintain simplified logic schedules to highlight the status of key interfaces for critical path review. • Provide progress reporting documents and briefing materials as required. Develop creative reporting tools to communicate complex project sequencing and/or progress assessments. • Participate in reviews with third-party schedulers and provide written comments. • Analyze and assess project milestones and report the fiscal impact and risk to Management. • Contribute to the advancement of PDG’s goals through commitment to productive collaboration with all stakeholders and in development of planning and design schedules.
• Support overlapping project scheduling tasks, issue management, meeting facilitation, document control and monthly status reporting to the Project Management Team. HARDWARE/SOFTWARE KNOWLEDGE • Mastery of Primavera P6 Version 8.3 or higher. • Experience with Microsoft Office Suite software – Word, Excel, PowerPoint, and Outlook. • Knowledge of Prolog and other Project Management Systems is preferred. • Experience with Adobe Acrobat. • Knowledgeable in presenting graphical data as related to scheduling. PROFESSIONAL EXPERIENCE LEVEL/OTHER QUALIFICATIONS • Must have at least three to five years of scheduling experience, with three to five years in direct support of an active construction project; civil flat work (airports and road construction) preferred. • Demonstrated experience scheduling the design and construction of large-scale, complex projects, including developing and managing a progress reporting methodology for multi-million dollar construction projects in the range of $30M – $150M. • Must be proficient in Primavera P6 project control systems. • Requires knowledge and experience of Project Management principles and Project Control Cycles. • Must be capable of reporting with earned value performance measurement techniques. • Understanding of critical path method (CPM) scheduling techniques and principles, including the ability to handle and prioritize multiple projects concurrently. • Must be able to track field progress weekly when reporting on Contractor’s performance. • Demonstrated experience developing reporting tools to communicate complex project sequencing and/or progress assessments. • Experience working in a master schedule/sub-project system environment, including determining progress impacts across programs. • Must possess the capability to report on all levels of project schedules, monitor project progress, quality and risk, and ensure project milestones are met. • Must be able to review contractor schedules and related documents for compliance with contract documents.
• Demonstrated experience in working with and reviewing third-party design and/or construction schedules and addressing comments. • Must be capable of clearly and concisely presenting project trends, risks and forecast performance reports to management, and of making recommendations on strategies and mitigation measures to resolve project issues. • Demonstrated experience in progress reporting documents and the ability to understand a demanding, technical and complicated construction program, plus the ability to communicate progress and issues both verbally and in writing on technical and non-technical matters to various levels of management. • Demonstrated success on the job, as evidenced by satisfactory performance, acceptance of responsibility, and growth in previous position(s). • Must possess the ability to effectively interact with various departments to maintain open lines of communication. • Proven accuracy, reliability and completeness in job accomplishment. • Self-starter with a results-driven focus and the ability to work with others. EDUCATION/TRAINING • A four-year degree in Engineering, Construction Management or a related field is preferred. • Element-Specific Requirements/Notes • May assume other duties as required/needed. • May work overtime as required/needed. • May be required to work various shifts as needed. • Aviation experience preferred. REGISTRATION/CERTIFICATION • Planning and Scheduling Professional (PSP) certification is preferred.
http://paragon-em.com/job/875/
Decentralised architectures and networks are slowly embedding themselves into the technological and societal landscape, empowered by communities of like-minded people who strive to alter the existing socio-economic order by leveraging peer-to-peer (P2P) technologies that afford disintermediation and radical shifts in how entities maintain trust. Bottom-up initiatives, free from centralised control and command structures, are appearing cross-sectorally - built on the premise, and promise, of distributed ledger technology. In this article, we discuss one such initiative, Swarm, by drawing relations between its developed set of community-centred, bottom-up Fair Data Principles and the incumbent, top-down, principles-based frameworks that govern today’s existing data-sharing economy, with a focus on data protection. We also introduce the idea of the commons along with a set of “commons design principles”. We highlight how the Swarm community might simultaneously implement principles flowing from existing data protection frameworks and the commons to guide the development of robust design strategies to manifest the communicated goal of a fair and just data-led society. What is Swarm Swarm is a P2P, decentralised storage and communication infrastructure, compatible with Ethereum Virtual Machine (EVM) networks. The Swarm mission is to provide the technical platform for a decentralised internet - based on a normative framework that is aligned with the Fair Data Principles and envisioned as a vehicle for a fairer society. The Swarm ecosystem describes a fair data economy as an “Economy of processing data characterised by fair compensation of all parties involved in its creation or enrichment”.
This ideological foundation is viewed as a cornerstone of a better society and the bedrock of how individuals might wrestle back control from the overreach of corporations and government; the platform attempts to shift the control of data processing away from large, centralised entities and into the hands of individuals in a just, fair and equitable manner. The key to achieving this ideal is detailed in the Swarm whitepaper. The overarching vision is realised through the deployment of the technology stack, designed to act as a medium through which the Swarm network optimises network bandwidth and storage resources (hard disk memory). It consists of a P2P network of nodes that communicate according to an agreed distributed consensus mechanism, leveraged by an in-built incentivisation scheme. The technology stack, in practice, is a series of technologies that act in concert and comprises four distinct layers. The technology itself should be viewed as the manifestation of an ideology that lies at the centre of the Swarm project. In order to achieve a fair data economy, a decentralised and permissionless data-sharing infrastructure is required that allows participants to engage in data storage, sharing and processing, with economic benefit accruing to those who created the data or are the data rights holders. This approach is a shift away from the economic hegemony of the existing data economy - where large, powerful data collectors, processors and sub-processors extract most of the revenue, with very little benefit accruing to those who generated the data or own the rights. Swarm is not the first incarnation of a blockchain-based decentralised storage infrastructure, with notable projects such as Sia, Filecoin, and Storj also deployed onto the market. Distributed content storage also has a rich history, developing alongside distributed file systems, such as the open source project InterPlanetary File System (IPFS).
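For readers unfamiliar with how this family of systems works at a low level, the shared mechanism is content addressing: a chunk of data is stored and retrieved under the hash of its own content, so any node can serve it and any retriever can verify it. The sketch below is illustrative only - it uses plain SHA-256 and a Python dict standing in for the network, whereas Swarm’s actual design uses its own chunking and Binary Merkle Tree hashing and routes chunks to the nodes nearest their address:

```python
import hashlib

CHUNK_SIZE = 4096  # Swarm-style 4 KiB chunks; the hashing here is plain
                   # SHA-256, a simplification of Swarm's BMT hash

def store(data, dht):
    """Split data into chunks and index each chunk by the hash of its content."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        ref = hashlib.sha256(chunk).hexdigest()
        dht[ref] = chunk  # in a real network, the chunk travels to the
        refs.append(ref)  # nodes whose addresses are closest to its hash
    return refs

def retrieve(refs, dht):
    """Reassemble content from chunk references, verifying integrity by
    rehashing each chunk against the address it was fetched under."""
    for ref in refs:
        assert hashlib.sha256(dht[ref]).hexdigest() == ref
    return b"".join(dht[ref] for ref in refs)

network = {}  # stand-in for the distributed hash table formed by the nodes
refs = store(b"hello swarm" * 1000, network)
assert retrieve(refs, network) == b"hello swarm" * 1000
```

Because the address is derived from the content, no central authority is needed to vouch for the data: integrity is checked cryptographically by whoever retrieves it, which is what makes the permissionless, disintermediated storage model described above workable.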
What sets Swarm apart is the explicit commitment to a defined set of open data principles, along with a commitment to building a fair data economy. While other systems have their own native tokens as vehicles for data sharing, resource allocation and security, Swarm proposes the BZZ token as the transactory foundation through which entities engage in a “collective information” community - moving the value of data, and associated meta-data, into the hands of the data creators. As previously mentioned, Swarm also proposes a set of core principles - viewed as an integral step in realising a shared social contract for the Swarm community. The social contract acts as a normative framework that should guide behaviours and actions in the data economy, essentially a principles-based ideology to which all participants in the network adhere - in theory. In practice, however, it is not as clearly defined. Swarm’s overarching vision is based on a core set of “meta-values”: • Inclusivity - public and permissionless participation. • Integrity - privacy, provable provenance. • Incentivisation - alignment of interest of node and network. • Impartiality - content and value neutrality. The Swarm project describes these meta-values as systemic qualities, shared throughout the network. This bottom-up, inclusive ideology is key to understanding the potential impact of their P2P data economy, as it shifts emphasis away from authority and hierarchy towards concepts such as collective responsibility, common ideology and principles-based governance. Principles-based data-sharing ideologies have a rich history, originating in a number of data protection frameworks around the world. Applying a principles-based framework to a P2P network is, however, a novel and unique strategy. It could be stated that Swarm is attempting to build a bottom-up data economy, based on emergent, permissionless, and non-hierarchical principles.
This contrasts strongly with the top-down approaches that have, more often than not, emerged from centralised regulatory or legislative bodies. Introduction to principles-based data protection frameworks The emergence of digital technologies has brought monumental change in how society functions, whether seen through the lens of computation, telecommunications, or the concept of the internet and all its related technologies. Central to most conceptions of modern society is the collection, exchange and storage of digital information, or data. Since the 1970s, data protection regulation has been developing globally in an effort to protect data and the privacy of its owners, as well as to provide the structure to ensure the free flow of data for economic purposes. A top-down approach: international, supranational and national regulation Various top-down regulatory and legal frameworks guide local and global data flows. In the US, the Fair Information Practices or FIPs were first articulated in a 1973 government report; they have since been updated and championed by the Federal Trade Commission, an independent governmental agency tasked with protecting consumers and maintaining competition, as the Fair Information Practice Principles. The goal was to outline a set of principles to help maintain informational privacy in an electronic marketplace. The FIPs represented the first attempt by a governmental body to regulate the data market - and laid out a framework to support the functioning of the emerging data economy. Several European states, such as Sweden and Germany, subsequently enacted their own data protection laws. At a supranational level, the Organisation for Economic Co-operation and Development (OECD) released its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data in 1980. This principles-based framework attempted to provide a more internationally accepted instrument to guide the now established (and rapidly developing) data economy.
Almost concurrently, the Council of Europe issued the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) in 1981. National constitutional traditions and, eventually, the 1995 EU Data Protection Directive and its 2016 replacement, the EU General Data Protection Regulation (GDPR), further regulated personal data flows. The notion of personal data protection became increasingly focused on individuals and framed through a rights-based approach. In 2020, the European Commission issued its proposal for an EU Data Governance Act, which applies to both personal and non-personal data. This proposed legislation demonstrates how the European Commission could be becoming more conscious of a commons-/fairness-based approach to data flows, and it has commonalities with the bottom-up approaches that we see emerging within decentralised networks and their associated technological architectures. A bottom-up approach: the decentralised model Much of the distributed ledger-based technology that has emerged over the last 10 years is rooted in a specific philosophical ideology, intended to replace or reject statist ideals with more democratic, equal, libertarian or anarchic models of social organisation. For a shared social contract to emerge within a community, a shared set of ideals must manifest throughout the community. Most distributed ledger-based incarnations root their ideology in a network “whitepaper” or GitHub repository, often enveloped as community or network “ground zero”. Swarm proposes the Fair Data Society’s Fair Data Principles as the bedrock of a shared social contract, intended to inform the ideology and philosophy of all Swarm network participants. Not only do the principles support the existing Book of Swarm, they set out (and maintain) various principles to which the behaviours and actions of agents in the social layer of the network should adhere.
While the principles should not be seen as a defined set of rules, they do look and feel extremely similar to the data protection principles we have seen develop from the top-down approaches. By examining the principles side by side, we can draw out common themes - highlighting how both sets of principles share common overarching ideals, even though they have developed in very different ways. Commonalities and consistencies In the FIPs, two of the more distinct rules were enacted to protect the consumer, through concepts related to “notice/awareness” and “choice/consent”. The concept of consent-based data processing, which implies a degree of control and ownership over how an individual’s data is processed, has stayed consistent in subsequent data protection regulation. As data protection has become increasingly framed as a rights-based notion (see, e.g., article 7 of the EU Charter of Fundamental Rights), the individual as data subject has continued to be foregrounded. The more normatively grounded Fair Data Principles echo this with their three principles of “ownership”, “control” and “consensual access”, alongside individual rights, such as the right of a data subject to request and receive data pertaining to them and to rectify or delete this data (see too, e.g., the OECD “individual participation principle” and the GDPR list of data subject rights). To be in line with this approach, Swarm should - in principle - also foreground the individual’s agency and rights to contribute to the fair data economy. Several common principles relate to the personal data itself, covering its collection, processing and quality. The FIPs, OECD principles, Convention 108 and the GDPR, inter alia, advocate limits on personal data collection and processing, while the Fair Data Principles open with a “zero-data” principle - which specifies that privacy and non-collection of personal data should be the default when a high volume of data is produced.
Similarly, all instruments include various manifestations of purpose limitation, data minimisation, data accuracy and storage limitation principles. Further common principles include security, accountability and various forms of enforcement or redress. In the context of Swarm, principles of redress and enforcement are relatively ill-defined. Permissionless, inclusive networks often defer to community-based dispute mechanisms or arbitration procedures, but it is still unclear whether (or how) these will be integrated into the Swarm ecosystem. Interestingly, data protection regulation is moving towards a more normative framework – at least prima facie. The GDPR provides that “the processing of personal data should be designed to serve mankind” (recital 4). The proposed EU Data Governance Act includes notions of data sharing in an open and collaborative manner, and provides the basis for the concept of data altruism. “Data altruism” in the proposed Act covers situations where natural or legal persons make data voluntarily available for reuse, without compensation, for “purposes of general interest, such as scientific research purposes or improving public services” (art 2(10)). This could be analogous to parts of the “economic benefit & fair value” and “social good” principles within the Fair Data Principles, and should also be viewed as an assemblage through which the bottom-up approach of Swarm and the top-down approach of bodies such as the European Commission can find common ground. It should also be remembered that a complex technical infrastructure is built around the Fair Data Principles and, while the principles generally flowing from the above illustration could be operationalised by decentralised architecture and blockchain technology, there are still open questions as to how the principles can be realised in practice.
Further, many open questions regarding governance and interactions with the incumbent legal frameworks have yet to be explored in full by the Swarm ecosystem. We propose that drawing from the theory and practice of commons-based governance could provide further insights into how some of these questions could be addressed. Swarm’s use of distributed architecture and technological incorporation of participation and governance incentives could be seen as an example of a digital commons, with storage and communication infrastructure governed as a shared resource. The theory and practice of the (digital) “commons” present a rich history to draw from concerning the governance of sustainable decentralised networks. Introduction to the commons The commons is a term often used to refer broadly to resources that are shared by a group of people. Historically, commons were lands that were used by multiple people, or households, over a certain period of time. The commons as a theory for the common stewardship of resources was popularised by Elinor Ostrom, with her critique of Garrett Hardin’s article The Tragedy of the Commons. Ostrom demonstrated the commons as a form of decentralised management of resources, where resources could be sustainably governed without centralised management by governments or the assignment of private property rights. Most notably, Ostrom identified “design principles”, a form of best practices that were largely present in how a commons was governed in instances where communities had sustainably and peacefully managed shared resources over time. While Ostrom’s analysis was originally limited to natural resource systems and “common pool resources”, the analytical lens of the commons has since been applied to “knowledge commons” (e.g., libraries, archives, Wikipedia) and “digital commons” (e.g., scholarly publishing, free software communities, Creative Commons licenses).
How the commons, Swarm and the Fair Data Principles could interact
First, decentralised architecture could support the success of digital commons in important ways. Technological affordances from the architecture can enable key features of commons governance, such as granular rules around resource use, transparent and inclusive consensus-building processes, and mechanisms for monitoring and enforcing compliance. Second, the commons can also structure thinking with respect to the governance of sustainable decentralised institutions. Commons scholars have argued that Ostrom’s design principles can guide the design of appropriate governance strategies. As Bollier notes, commons discourse is at once “descriptive”, “constitutive”, and “expressive”: it is descriptive because it helps identify models of community governance that would otherwise go unexamined; it is constitutive because using this new language helps build communities based on principles of the commons; and it is expressive because the language of the commons is a way for communities to assert a connection with a resource. Initiatives like Swarm and the Fair Data Society support decentralised architecture, in part to incorporate greater fairness and participation in the governance and use of data. Correspondingly, commons-based institutional frameworks are well suited to aiding this task by providing principles that can guide design.

Design principles of the commons
The design principles for sustainable commons are:
- Clearly defined boundaries around the resource and the community. There is a clear understanding of what the resource in question is, and clarity over who can access, use and manage the resource, and who can exclude others from the same.
- Congruence between local conditions and the rules for appropriation and provision of the resource. The rules for access and use of the resource are suited to the nature of the resource, as well as to the community.
The costs of participating in the collective-action institution are proportionate to the benefits derived from it.
- Individuals affected by rules can participate in their modification. Individuals affected by rules related to access, use and management are able to participate in the processes that make and modify those rules.
- A system for monitoring the resource and the behaviour of individuals. There are mechanisms in place to monitor the state of the resource and compliance with the rules. Monitors are part of the community or are accountable to the community, particularly to those most dependent on the resource.
- A graduated system of sanctions for instances of non-compliance. There are proportionate and gradual sanctions for non-compliance with the rules.
- Accessible and low-cost dispute resolution mechanisms. Individuals have access to dispute resolution mechanisms to adjudicate disputes related to boundaries, rules or infractions. The mechanisms may be informal, but are simple and accessible.
- Nominal recognition by external authorities. External authorities recognise the right of the community to organise itself.
- Nested enterprises. Governance is organised at multiple layers and decentralised to the extent possible.

Decentralised networks like Swarm already incorporate these principles to some extent. For instance, digital commons, unlike natural resource systems, are constructed, and so necessarily have clear boundaries. There can also be clear protocols and rules around how members of the community may access the resource (Principle 1). The network protocols build in appropriate incentives, which are technologically enforced (Principles 2, 4 and 5). The network is open source, inclusive and permissionless, even including the theoretical possibility of being forked if ideologies clash (Principles 3 and 6). At the same time, the design principles can highlight elements of governance that can promote continued participation from the Swarm community.
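The idea of technologically enforced, graduated sanctions (Principle 5) can be sketched briefly in code. The following Python fragment is purely illustrative and assumes a hypothetical staking model: the `Node` class, the penalty schedule and the `sanction` function are invented for this sketch and do not reflect Swarm's actual incentive mechanisms.

```python
# Illustrative sketch of a graduated sanction schedule (Ostrom's
# Principle 5) as it might look in a hypothetical staking-based
# storage network. Not Swarm's real protocol.

from dataclasses import dataclass

@dataclass
class Node:
    stake: float
    infractions: int = 0

# Penalties escalate with repeated non-compliance rather than
# expelling a participant on the first failure:
# warning, minor slash, major slash, full slash.
PENALTY_SCHEDULE = [0.0, 0.05, 0.20, 1.0]

def sanction(node: Node) -> float:
    """Apply the next penalty in the schedule; return the amount slashed."""
    level = min(node.infractions, len(PENALTY_SCHEDULE) - 1)
    slashed = node.stake * PENALTY_SCHEDULE[level]
    node.stake -= slashed
    node.infractions += 1
    return slashed

node = Node(stake=100.0)
print(sanction(node))  # first offence is only a warning: 0.0 slashed
print(sanction(node))  # second offence slashes 5% of the remaining stake
```

The design point the sketch makes is that proportionality can be encoded: because the schedule is part of the protocol, sanctions are predictable and community-auditable rather than discretionary.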
Some provocations to consider:
- Is there a clear boundary, or protocol, for inclusion within the community of users, developers and maintainers?
- Is there a clear protocol for community participation in rule (code) formation?
- Are there proportionate incentives for participation in the maintenance and governance of the network?
- How can maintainers and developers remain aligned and accountable to the community?
- Are there off-chain behaviours that could negatively impact the network? Can there be agreed-upon rules for those behaviours?
- What mechanisms could be employed to prevent infractions? Could additional legal, technical, or organisational measures support the agreed rules or protocols?

Conclusion
Swarm's governance requires engagement with multiple normative and legal frameworks. For instance, to the extent that it processes personal data, it must align with data protection principles and regulation. Legislative proposals such as the Data Governance Act additionally present both a challenge and an opportunity. Infrastructure like Swarm may align well with the data-related innovation conceived by the EU’s recent proposals, but this could in turn bring further regulatory scrutiny. A marriage of top-down data protection frameworks, such as those emanating from US, EU and international institutions, with Swarm and the Fair Data Principles, which flow from a community-based, bottom-up approach, could guide data governance related to Swarm. In addition to engagement with legal frameworks, distributed networks raise governance issues, not all of which may be addressed at the level of technical architecture. The sustainability of commons ventures also seems to benefit from the successful instantiation of certain design principles. Our exploration of the commons design principles illustrates how the sustainable governance of distributed networks may raise complex questions.
The Fair Data Principles and the book of Swarm represent ongoing consensus within the community with respect to some of these questions, but there are further governance elements of which the community must remain cognisant. For instance, the role of maintainers and developers, the presence of accessible dispute resolution, and rules of participation surrounding both metadata and non-personal data can be key elements of sustained engagement from a community of decentralised participants. There may be opportunities for the Swarm community to be more explicit about developing consensus on these points. As such, in designing and governing bottom-up, decentralised infrastructures, Swarm need not start from scratch, and has many existing traditions to draw from.

A note from the authors
Robin and Mistale are researchers at Trilateral Research, and Aditya is a doctoral researcher at Edinburgh University. We would like to thank Swarm for funding this research.

References and footnotes
- The current deployment of Swarm uses xDAI. The current roadmap is to deploy Swarm on the Ethereum network once low-cost micro-transactions become available with ETH 2.0.
- Benisi, N. Z., Aminian, M., & Javadi, B. (2020). Blockchain-based decentralized storage networks: A survey. Journal of Network and Computer Applications, 162, 102656.
- Androutsellis-Theotokis, S., & Spinellis, D. (2004). A survey of peer-to-peer content distribution technologies. ACM Computing Surveys (CSUR), 36(4), 335-371.
- Tron, V. (2020). The Book of Swarm: Storage and communication infrastructure for self-sovereign digital society, p. 17.
- Ibid., p. 18.
- For further exploration of these topics, please refer to: Karlstrøm, H. (2014). Do libertarians dream of electric coins? The material embeddedness of Bitcoin. Distinktion: Scandinavian Journal of Social Theory, 15(1), 23-36; Swan, M., & De Filippi, P. (2017). Toward a philosophy of blockchain: A symposium: Introduction. Metaphilosophy, 48(5), 603-619; Groos, J. (2020). Crypto Politics: Notes on Sociotechnical Imaginaries of Governance in Blockchain Based Technologies. In Data Loam (pp. 148-170).
- The Fair Data Society Principles themselves explicitly "draw on the previous works of DataEthics, MyData, World Fair Trade Organisation, Status, Ethical Design Manifesto and various philosophical works", see Acknowledgments: https://principles.fairdatasociety.org/.
- DeLanda, M. (2019). A New Philosophy of Society: Assemblage Theory and Social Complexity. Bloomsbury Publishing.
- Hardin, G. (1968). The Tragedy of the Commons. Science, 162, 1243-1248.
- See for instance: Ostrom, E., & Hess, C. (Eds.). Understanding Knowledge as Commons (2004); Frischmann, B. M., et al. (Eds.). Governing Knowledge Commons (2014).
- Bollier, D. (2019). Free, Fair and Alive, p. 16.
- Walch, A. Deconstructing “Decentralization”: Exploring the Core Claim of Crypto Systems. In Cryptoassets (pp. 39-68). Oxford University Press.

Join the FDS Community
- Fair Data Society website.
- Read the Fair Data Society principles.
- Fairdrive website.
- Follow us on Twitter.
- Join our discussions at Discord.
https://fairdatasociety.org/blog/designing-the-data-hive/
What are the main problems of international business?

5 Common International Business Problems:
- Communication breakdowns. An open line of communication between supplier and buyer is vital for supply chain efficiency, but it can be hindered by distance, time, technology, and cultural and language barriers.
- Inaccurate, late, or missing documents.
- Incomplete cost calculations.
- Sub-par quality.
- Logistics issues.

What are the pros and cons of international trade?

Top 10 International Trade Pros & Cons – Summary List

| International Trade Pros | International Trade Cons |
| --- | --- |
| Faster technological progress | Depletion of natural resources |
| Access to foreign investment opportunities | Negative pollution externalities |
| Hedging against business risks | Tax avoidance |

Why do we need to protect international trade?

Economy-wide, trade creates jobs in industries that have a comparative advantage and destroys jobs in industries that have a comparative disadvantage. Trade barriers protect domestic industry and jobs. Moreover, all workers are consumers and benefit from the expanded market choices and lower prices that trade brings.

What are the benefits of expanding internationally?

Advantages of international expansion include:
- Entry to new markets.
- Access to local talent.
- Increased business growth.
- Staying ahead of the competition.
- Regional centres.

Considerations include the cost of establishing and terminating an entity, compliance risk, and business practices and cultural barriers.

What is the scope of international business?

The assets traded can be products, services, capital, technology, knowledge, or labour. The scope of international business activities includes the global integration of business: helping businesses integrate globally across trade, investment, factors of production, technology, and communication.

Why can international trade be bad?
International trade has resulted in "dual economies" in underdeveloped countries, in which the export sector becomes an island of development while the rest of the economy remains backward. Moreover, excessive dependence on exports exposes these economies to cyclical fluctuations in the advanced countries.

What skills are needed for international business?

The Top 7 Skills Needed for Success in International Business:
- Cross-cultural communication skills.
- Excellent networking abilities.
- Collaboration.
- Interpersonal influence.
- Adaptive thinking.
- Emotional intelligence.
- Resilience.

What is the concept of international business?

International business refers to the trade of goods, services, technology, capital and/or knowledge across national borders and at a global or transnational scale. It involves cross-border transactions of goods and services between two or more countries.

What is the need for international business?

The following points highlight the importance of international business: 1. to the national economy; 2. to the exporting firm; 3. from other points of view.

Is it good to study international business?

Benefits of studying International Business Administration (IBA): in today's interconnected society it is essential to understand the world in an international context. With a degree in international business, you can follow a career in international management, finance, trade or a multinational company.

What are the objectives of international business?

International marketing objectives:
- To enhance free trade at the global level and attempt to bring all countries together for the purpose of trading.
- To increase globalization by integrating the economies of different countries.
- To achieve world peace by building trade relations among different nations.

What are the challenges of international trade?

Global Trade – Major Challenges:
- Economic warfare. Globalization faces tough challenges from polarization and conflict.
- Geo-politicization. Globalization is seen by some as a kind of Americanization.
- State capitalism.
- Lack of leadership.
- Power distribution.
- Weaker underdogs.
- Price fluctuations of natural resources.

What are the advantages and disadvantages of international business?

The Advantages and Disadvantages of International Business Expansion:
- Reaching new customers.
- Spreading business risk.
- Accessing new talent.
- Amplifying your brand.
- Lowering costs.
- Increased immunity to trends.
- Improved consumer confidence.
- Handling logistics.
https://somme2016.org/how-to/what-are-the-main-problems-of-international-business/