The plasticizer bisphenol A (BPA), used in can linings, thermal paper, and polycarbonate plastics,[@c1]^,^[@c2] has been associated with a wide range of health outcomes, including type 2 diabetes.[@c3] Numerous experimental[@c4] and population-based[@c5]^,^[@c6] studies over the last decade have explored the BPA--diabetes relationship. Now a longitudinal study published in *Environmental Health Perspectives*, examining not just BPA but also its widely used substitute bisphenol S (BPS), adds to the evidence.[@c7] The study found positive associations between incidence of diabetes and exposure to both BPA and BPS, independent of traditional risk factors.

![It is well established that eating a healthful diet and staying active can protect against type 2 diabetes, given that obesity and a sedentary lifestyle are by far the greatest predictors of this disease.[@c8] But even in individuals without these known risk factors, exposure to endocrine disruptors such as BPA and BPS may set the stage for diabetes. Image: © iStockphoto/Kosamtu.](ehp6637_f1){#f1}

The research included 755 individuals followed over 9 years as part of the French cohort Data from an Epidemiological Study on the Insulin Resistance Syndrome (D.E.S.I.R.).[@c8] The participants, who were diabetes-free when recruited, provided urine samples at enrollment (between 1994 and 1996) and again 3 years later. The authors estimated participants' exposure to BPA and BPS based on urine metabolites of these bisphenols.

A total of 201 D.E.S.I.R. participants received diabetes diagnoses during the 9-year follow-up period, but the risk was not equally distributed. The authors estimated that participants in the second and third quartiles of estimated BPA exposure were more than twice as likely to develop diabetes as those in the lowest quartile. Participants with the highest estimated exposures were at increased risk as well, but less so than more moderately exposed individuals.
The analysis also suggested that participants with any urine BPS metabolites were nearly three times as likely to develop diabetes compared with participants with no detectable exposure.

"I think this paper is particularly good because it's longitudinal and there are very few \[such papers on bisphenols\]," says Angel Nadal, a professor at Spain's Miguel Hernández University. Nadal, who was not involved in the current study, has researched the physiological link between BPA exposure and diabetes since 2006.[@c9] In 2018 he co-led an experimental study in which low doses of BPA triggered an insulin response (a predictor of type 2 diabetes) in a small group of men and women.[@c10]

Human and animal studies have shown that BPA is metabolized in the body within a day, or potentially sooner.[@c11]^,^[@c12] This means that urine levels may fluctuate dramatically depending on recent food consumption and other activities, making single measurements of urinary BPA an unreliable predictor of past exposures. "Having a couple of samples that have been measured makes a stronger connection between BPA exposure and type 2 diabetes," Nadal says.

The mechanisms through which BPA and BPS exposure may affect diabetes risk are complex and not fully understood, but certainly plausible, according to Fanny Rancière, the study's first author and an associate professor at Paris Descartes University in France. "Several biological pathways through which BPA could affect the development of diabetes have been identified,[@c13] such as insulin resistance and pancreatic beta-cell dysfunction," she says. Rancière points out that there is a smaller body of literature for BPS. "However," she adds, "a recent systematic review[@c14] of experimental studies concluded that BPS has endocrine-disrupting effects similar to those of BPA, which is not surprising given the similarity between their chemical structures."
Other endocrine-disrupting chemicals besides BPA and BPS have been associated with diabetes incidence in other studies,[@c13]^,^[@c15] says senior author Dianna Magliano. "Make no mistake, the best way to get diabetes is to put on weight and become obese," she says. "But \[certain endocrine disruptors\] do have some association by themselves, independent of obesity and diet and whether you smoke or not and whether you exercise."

The pattern of associations with BPA suggests a nonmonotonic dose response, with the highest estimated diabetes risk among those who were moderately exposed. However, the authors note that the trend might also be a chance finding. Nonmonotonic dose responses may be a hallmark of hormone-mimicking chemicals: effects of very low levels are consistent with effects of endogenous hormones, whereas high exposures could cause overwhelmed cell receptors to down-regulate or desensitize.[@c16]

University of Illinois at Chicago associate professor Robert Sargis, coauthor of a review of the link between endocrine-disrupting chemicals and type 2 diabetes,[@c17] says the team's findings illustrate the need for safer products. "With BPA finally getting attention and companies starting to recognize that people are attuned to its adverse effects, the move has been to replace it with other bisphenols. And what this paper underscores is that the replacement compounds aren't always better," says Sargis, who was not involved in the current study. "Companies take pride in putting 'BPA-free' on their label, which is admirable in part, but it may be misrepresenting a risk that is highly relevant."

**Nate Seltenrich** covers science and the environment from the San Francisco Bay Area. His work on subjects including energy, ecology, and environmental health has appeared in a wide variety of regional, national, and international publications.
The theory of epistemic justification is one of the central topics in epistemology, and thus in philosophy. Inquiry into justification often goes hand in hand with inquiry into the nature of propositional knowledge, for traditionally knowledge is thought to entail justified true belief. Many theorists still see the connection between justification and knowledge as fundamental to understanding justification. Others, depending on their conception of justification, separate out justification as an epistemologically interesting property in its own right. Those who have a broader conception of justification see justification as a positive epistemic status, where a positive epistemic status is a good or success understood in terms of promoting true belief and avoiding error. They do not simply see justification as the positive epistemic status entailed by knowledge. When thinking through any account of justification, the thoughtful reader should ask how the proponent thinks of the connection between justification and knowledge. Inquiry into justification also closely overlaps with discussions of skepticism, for in challenging knowledge, epistemic skeptics are often better seen as challenging justification. This entry focuses on the central debates surrounding the nature of epistemic justification in the late 20th and early 21st centuries, focusing where possible on more recent discussions.

General Overviews

There are only a few recent general overviews as such. Most available “overviews” in the theory of justification focus on particular issues within the field, such as coherentism versus foundationalism or reliabilism in epistemology. Foley 1998 is a clear, concise, and reasonably comprehensive overview of justification. So too is Fumerton 2002. Pryor 2001 is a partial overview, but goes into more detail than either Foley 1998 or Fumerton 2002. Alston 1985 is a classic presentation of various ways one might conceive justification.
Alston 1993 is a follow-up that indirectly provides an overview of relevant material for theories of justification. Alston 2001 follows up further, and provides a useful caution against the overuse of “justification” in thinking about positive epistemic statuses. Graham 2010 provides a useful rival taxonomy to the standard coherentist/foundationalist and internalist/externalist taxonomies. Plantinga 1992 provides a useful historical perspective on the theory of justification. Though the connection between justification and being able to justify a belief comes up frequently in the literature, few papers address it directly. Though it is not necessarily an “overview” issue, one should have it in mind when thinking about justification. Leite 2004 discusses the issue directly.

Alston, William P. “Concepts of Epistemic Justification.” Monist 68 (1985): 57–89.
A classic overview of some of the main ways one might conceive of justification.

Alston, William P. “Epistemic Desiderata.” Philosophy and Phenomenological Research 53 (1993): 527–551.
Lists all sorts of epistemic properties that epistemologists have argued justification might reduce to. This paper thereby provides a very informative introduction to possible theories of epistemic justification, though without intending to.

Alston, William P. “Doing Epistemology without Justification.” Philosophical Topics 29 (2001): 1–18.
A call to arms to drop the use of the term “justification” in the theory of justification on the grounds that the term gets used to refer to too many distinct properties. It provides a corrective to loose uses of “justification” and “justified” when thinking about epistemic statuses, and gives the reader a sense of some of the various topics covered by the so-called theory of justification. It should be read in conjunction with Alston 1993.

Foley, Richard. “Justification, Epistemic.” In Routledge Encyclopedia of Philosophy. Vol. 5. Edited by Edward Craig, 157–165. London: Routledge, 1998.
A very clear and concise introduction, including foundationalism/coherentism and internalism/externalism, with connections to other topics, including testimony and social epistemology. A useful resource for those new to the topic.

Fumerton, Richard. “Theories of Justification.” In The Oxford Handbook of Epistemology. Edited by Paul K. Moser, 204–233. New York: Oxford University Press, 2002.
Focuses on foundationalism/coherentism and internalism/externalism. Clear and concise.

Graham, Peter J. “Theorizing Justification.” In Knowledge and Skepticism. Edited by Joseph Keim Campbell, Michael O’Rourke, and Harry S. Silverstein, 45–71. Cambridge, MA: MIT Press, 2010.
Most overviews of justification focus on the internalism/externalism debate and the foundationalism/coherentism debate. This article presents another taxonomy of theories of justification, one that focuses on the connection between justification and truth and the extent to which philosophical theorizing can settle which beliefs are justified and which are not.

Leite, Adam. “On Justifying and Being Justified.” Philosophical Issues 14 (2004): 219–253.
Though not an overview of the literature, it discusses the connection between justified belief and the ability to justify a belief, and provides an engaging defense of the unorthodox view that justified belief turns on the ability to justify one’s belief.

Plantinga, Alvin. “Justification in the 20th Century.” Philosophical Issues 2 (1992): 43–77.
Although not really an overview, and in the end mainly an attack on internalist accounts of justification (arguing that internalism depends on a deontological conception of justification), it takes a “long view” in presenting the issues. It thereby provides a very useful historical perspective.

Pryor, James. “Highlights of Recent Epistemology.” British Journal for the Philosophy of Science 52 (2001): 95–124.
Anyone who wants to get up to speed quickly on some of the main issues in recent epistemology—especially moderate foundationalism, internalism/externalism, and epistemic normativity—should start here. A partial but very useful overview.
http://www.oxfordbibliographies.com/view/document/obo-9780195396577/obo-9780195396577-0060.xml
One of the most amazing things about visiting Morocco is that you can explore both the Atlas Mountains and the Sahara Desert on the same trip, and Brahim and Mohamed and their team can arrange either, or both.

Highlights
- Explore the Atlas Mountains and the Sahara Desert
- Climb the highest peak in North Africa
- Visit kasbahs and gorges
- Spend a night in the Sahara Desert

A trek of this sort can take you from the city of Marrakech, through the High Atlas mountain range, before dropping down into the arid beauty of the Sahara Desert. Along the way you can explore the homelands of the Berber tribes and ascend to the peak of the highest mountain in North Africa. You'll see local settlements, experience Berber culture and the flora and fauna of Morocco (including riding camels into the desert), and take in the vistas of the North African countryside. From the sand-blown dunes of Erg Chebbi to the peak of the tallest mountain in North Africa, this trekking combination is unforgettable. Brahim and Mohamed are experts in combining the two; their recommended itinerary pairs a journey to Erg Chebbi with an ascent of the highest peak in North Africa. The itinerary for this trip is outlined below. Please contact us to get a quote for other combined treks.

What to expect

Day 1: Marrakech - Imlil - Toubkal Basecamp (3,207m)
Pick up from where you stay in Marrakech, then drive to Imlil in the mountains, the starting point for this trek, where you will meet your guide and supply mules. From here you'll trek for 5 hours or so toward the basecamp of Mount Toubkal at 3,207m, having lunch along the way, before staying in either a refuge or a tent, depending on the conditions, the season, and your preference.

Day 2: Toubkal Basecamp (Refuge) - Toubkal Ascent (4,167m) - Ait Souka
On the second day you will ascend the highest peak in North Africa, Mount Toubkal, at 4,167m.
From the top you will be able to see the whole Atlas range, its valleys, and the desert (which you will be crossing, on the back of a camel, several days from now), before you head down toward Imlil. After 10 to 12 hours of walking in total, you should be in the village, ready to stay the night in a local house.

Day 3: Ait Souka - Ait Benhaddou Kasbah - Ouarzazate - Dades Valley
On day 3, you and your party will transfer to the World Heritage site at Ait Benhaddou, which has provided the scenery for over 20 films. You will pass spectacular views and beautiful villages on the way to the desert, where the scenery transforms into riverside oases and kasbahs. On your arrival in the Valley of the Roses, a Moroccan dinner will be waiting for you at a beautiful hotel in Dades.

Day 4: Dades Gorges - Todra Gorges - Merzouga - Erg Chebbi
After breakfast at the hotel, you and your group will transfer to the Todra Gorge, a gigantic fault line dividing the Atlas Mountains, a narrow valley peppered with Berber villages whose walls rise to 300m. After lunch, you will head deeper into the desert to the tiny village of Merzouga, where you will meet the camels that carry you to the beautiful Erg Chebbi dunes – the highest dunes in Morocco. You will ride for an hour and a half at sunset and sleep under the stars at a desert camp after a welcoming Berber meal.

Day 5: Erg Chebbi - Marrakech
The final day presents you with wonderful views of the desert sunrise as you ride camels along the big dunes to meet the driver, then drive all the way back to Marrakech; expect to arrive around 18:30.

Further discounts are available for groups of more than 10 people. You can pay in €/$/£ or Moroccan dirhams. The trek runs all year round; in winter the Toubkal option requires winter equipment (see Winter Toubkal). This trek has no confirmed dates. Please make an enquiry to arrange a booking.
https://www.trekkinginmorocco.com/treks/walking-treks/atlas-mountain-desert-trip
Learn the basics of managing websites and applications in the WebApps environment.

Use the URL above to create "pods" (or "Linux containers") that will host your Web applications. Think of each one as your very own server, with file systems and logs that only you can access. You can choose from a variety of popular frameworks and programming languages for your pods (see below for a complete list). Some are Shibboleth-enabled. We encourage anyone doing active development of their site to have both a live/production pod and a development pod.

Direct (S)FTP access to the containers is not available. However, a CLI utility is available for Windows, Linux, and Mac that provides more advanced features such as rsh and rsync. Learn more about OpenShift Client Tools.

A "version control system" records changes to a file or set of files over time so that you can recall specific versions later. We highly recommend that all developers use UB's Version Control System (UBVCS) to manage their application code. UBVCS is powered by git, a distributed revision control system aimed at speed, data integrity, and support for distributed, non-linear workflows.

When you create a pod, you can choose from a variety of popular programming languages and frameworks:

If you are migrating a website into WebApps and you already have a central database, use that. If you need a database for production, please request one through the UBIT Help Center Online. The WebApps database pods are NOT backed up or currently administered, and are therefore suitable for testing purposes only, not for production use.

UB WebApps is not meant as an alternative to the UBCMS, but rather as a way to extend its functionality. Applications you create in UB WebApps can be included in UBCMS Web pages through the External Embed or HTML Snippet components.
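Because UBVCS is powered by git, the day-to-day workflow is plain git. As a minimal sketch of recording changes to application code (the repository name "myapp" and the file here are placeholders for illustration, not anything specific to UBVCS):

```shell
# Sketch of a basic git workflow (assumes git is installed;
# "myapp" and index.html are hypothetical names).
git init myapp                        # create a new local repository
printf 'hello\n' > myapp/index.html   # add a first file to track
git -C myapp add index.html           # stage the file
git -C myapp -c user.email=dev@example.edu -c user.name=Dev \
    commit -m "Initial commit"        # record the first snapshot
git -C myapp log --oneline            # view the commit history
```

In practice you would also register UBVCS as a remote (`git remote add` followed by `git push`) so that the history is stored centrally rather than only in the pod.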
https://www.buffalo.edu/ubit/service-guides/web-hosting/webapps/overview.html
If you’re trying to find a way to learn piano, you have come to the right place. There are a number of things you can do to get started, including learning to read notes, playing octaves, and learning to play a melody.

Reading Notes

If you are interested in learning to read piano notes, there are several things you need to understand. First, you need to understand how the keyboard works. Second, you need to learn the musical alphabet. Finally, you need to practice with sheet music. The treble clef is a good place to start: it shows the notes you play with your right hand. You can use the same approach to read notes on the bass clef. Beyond the clefs, you need to learn the layout of the keys. Another important aspect is the duration of the notes, since the value of each note changes when the bottom number of the time signature changes. To increase the accuracy of your reading, use landmark notes. One of the most popular approaches to learning to read piano notes is the landmark method, in which you memorize a handful of reference notes on the staff. Once you have memorized these reference points, you can work outward to the surrounding notes. You can also combine the landmark method with the interval method: by reading intervals, you can learn more notes with less effort. As you get better at reading music, you can focus on other aspects of the instrument. Once you are proficient, you will be able to play solo and accompany other musicians. The best way to learn to read piano notes is to start playing music: practice from sheet music and find a piece that inspires you. This way, you will develop muscle memory, which will aid your note recognition.

Playing a Melody

Playing a melody on the piano can be difficult.
You need to learn the right chords, know the scale and key of the tune, and be able to improvise. There are apps and websites to help you do this. However, there are a few basics you should get right to speed up your progress. The first is to choose a simple melody to start with. Simple melodies are usually memorable, and they are also the most practical. For instance, if you play a melody in your left hand, you can often bring it out with your right. Likewise, choosing a complex melody for your left hand may not be the best move: a simpler melody helps you practice the piano’s most important skill, balance. The key to playing a melody well is to understand the importance of each note. This is where a good ear-training routine comes in: learn to hear the notes of your music by ear, and play each note in its proper place. As you advance in your musical understanding, you will learn to find the most important note in a tune; notes on downbeats are usually more important than those on the offbeats. If you aren’t sure how to do this, ask your teacher. Using this information, you can find the most effective way to play the melody. To do so, you need to learn how to balance your weight between your hands. Whether you’re a beginner or an advanced player, practice these ideas to accelerate your progress. Finally, you should get a good piano book. Learning to play a melody on the piano is a process that can take weeks, and even with the best technique it can feel overwhelming. But with a little effort and practice, you’ll be able to master the art of playing your favorite songs.

Learning Octaves

The ability to play octaves well can be a real asset to a pianist’s repertoire, because octave passages require precise rhythm as well as speed and control. Those who are new to piano music can improve their technique by learning octaves.
Octaves are also useful because they add depth to songs, and they are a good springboard to the next note in a chord. When starting to play octaves on the piano, you will need to learn how to stretch your hand and wrist. Stretching properly will prevent pain and injury and help you stay balanced. While stretching, keep the fingers relaxed; never overstretch your hand, because this can strain your arms. To keep the hands loose, move the fingers in and out of the keys, but do not tense the hands. When you practice octaves, concentrate on balancing your body and hand; if your wrist tires, pause or stop. Use different finger combinations to make octave passages smoother. A good way to do this is a bouncing technique: by lifting your hand above the keyboard, you generate enough energy to bounce your fingers, which allows faster movements. The best way to improve your octave technique is to practice scales in octaves. Playing octave scales helps you develop the technique of playing octaves while building confidence in your ability to play advanced piano music. Practicing octaves on the piano will improve your control, balance, and finger placement. However, it is important to practice octaves separately from other material, because deploying octaves at the wrong time can make your music sound sloppy.

Learning to Play by Ear

Learning to play piano by ear can be a rewarding musical activity. Unlike playing by rote, this skill lets you play songs more spontaneously and with a more open heart, which can be a big confidence booster. It’s important to remember that while you should aim to be accurate, there’s no need to worry if you make mistakes. The first step in learning to play by ear is to listen.
You can train your ears to recognize different notes, intervals, and chord progressions, and music theory can help you recognize these patterns. Another crucial aspect of playing piano by ear is a good understanding of harmony and melody: knowing these will help you analyze a tune faster. Likewise, practicing the major and minor scales will help you improve at playing by ear. There are many online tutorials and video courses on how to play piano by ear. Some teach you how to read sheet music, while others focus on developing your ear. If you don’t have the time or money to take lessons, you can try YouTube or a music theory workbook. Be aware that not all piano tunes are written in the same basic major or minor scale. Many pop tunes use the same scales, but the way the notes are arranged in each melody can vary. Understanding music theory will help you pick out chord progressions, patterns, and key signatures, and you can learn to identify the different sounds associated with each chord. It’s also a good idea to watch and listen to other pianists: not only will this improve your playing, it can also boost your confidence.

Playing With Both Hands

Learning to play piano with both hands can be a challenge. It requires a lot of coordination and precision. To make it easier, look for a good music app that can help you play with both hands. You can also improve your hand coordination by practicing scales and arpeggios; a little practice every day goes a long way. When learning to play with both hands, be careful not to play too fast, which leads to frustration. Play at a slow tempo until you feel comfortable; this ensures your mistakes don’t get encoded into muscle memory. There are many websites that offer free hand-coordination exercises.
If you're not sure where to look, Google the phrase "hand coordination exercise", and search for the same thing on YouTube. In the end, it comes down to focusing on one part at a time. Start with the right-hand part and master its chords and rhythms. Next, try the left-hand part. Finally, don't be afraid to experiment with fingering. The trick is to use your left and right hands in conjunction with each other; for example, you could have the left hand playing a melody while the right hand plays a bass line. Playing independent parts with each hand does not come naturally, but by practicing the steps above, you will find that you can begin to enjoy playing piano with both hands. While it's true that practicing with both hands can be frustrating, the benefits are well worth the effort; your hand coordination and your brain will thank you for the exercise. Try to set aside 30 minutes each day to practice. Playing with both hands can be a difficult task, so you may want to consider hiring a professional teacher.
https://www.eraworld.org/beginner-piano-jingle-bells-sheet-music/
Resource pages for "Northeastern University"-related posts for facility managers (FMs), building operations professionals and decision-makers in all industry sectors. LightView Achieves Boston’s First Private Student Housing LEED Platinum Certification LightView, a sustainable student housing community at Boston's Northeastern University, has been awarded USGBC's highest level of green building certification: LEED Platinum. Partnership Offers Boston Students Eco-Friendly, Live-Learn Environment A public-private partnership between Northeastern University, the City of Boston, and American Campus Communities offers eco-friendly apartments to students while freeing up housing for local workers.
https://facilityexecutive.com/tag/northeastern-university/
Trends in Articulation Arrangements for Technical and Vocational Education in the South East Asian Region (RMIT, 1999, 44 p.)

The Commonwealth of Australia has a land area of about 7 million square km, and climatic regions ranging from tropical in the north to temperate in the south. Australia has a population of about 18.1 million people. Most of the population, together with agricultural activities and centres of manufacturing, is concentrated in the coastal strip along the east coast and the south-west coastal regions of continental Australia. Although Australia is a substantial exporter of agricultural produce and minerals, these sectors of the economy provide limited and generally declining employment opportunities. Manufacturing and construction are significant sectors of the Australian economy, but most employment growth is in the provision of services.

Gross Domestic Product, Australia 1995 (1)

| Sector | % |
|---|---|
| Agriculture | 6.1 |
| Mining | 3.9 |
| Manufacturing | 15.0 |
| Construction | 6.3 |
| Transport/Communications | 9.2 |
| Financial services | 11.2 |
| Commerce | 17.5 |
| Education | 4.4 |
| Other Services | 23.3 |

In Australia, the primary/secondary system of education is based on a 6 + 6 year cycle of schooling. Few barriers exist to prevent students completing 12 years of schooling. Attendance at school is compulsory from 6 until at least 15 years of age. For school leavers, entry to university undergraduate degree programs is based on performance in the relevant senior secondary (year 12) certificate course. In former times, a number of Australian states provided secondary technical education in technical schools that operated in parallel with more academic secondary high schools. This model of secondary education has been replaced in Australia with secondary education courses which are general in nature.
To cater for a broad spectrum of student ability and interest, a recent trend in Australia has been to permit senior secondary students to undertake some vocational studies, which may be cross credited to vocational education qualifications. This trend is driven by attempts to improve the transition between school and employment, particularly for those students who do not wish to proceed to university at this stage. There are 37 government funded universities in Australia, and two private universities. The small number of private universities is partly a reflection of the fact that places in undergraduate courses at government funded universities were provided on the basis that cost of tuition was largely paid by the Australian Government. Technical and vocational education In Australia, technical and vocational education is frequently referred to as vocational education and training (VET). In broad terms, vocational education and training covers preparation and ongoing retraining, for a broad sweep of occupations, ranging from the operative through to the para-professional level. The delivery of formal VET programs in Australia has, until recently, mainly been the responsibility of government funded technical and further education (TAFE) colleges. These efforts were supplemented by the training colleges of government bodies, the armed services and to a lesser extent private educational providers. In addition, there is a well established provision of informal education and training in Australia, mainly directed at adult and community education, but often providing elements of what might be considered vocational education. Over the last decade successive Australian Governments have encouraged much broader provision of vocational education and training through government and non-government providers, including TAFE colleges/institutes, private colleges, industry training centres, group training schemes, and in-house training arrangements. 
There are about 80 TAFE institutes in Australia, with many operating on a multi-campus basis. Some universities also provide VET courses, mainly at the paraprofessional level. To facilitate broader provision of vocational education and training in Australia, a number of steps have been taken to establish a more open training market which is driven by demand rather than input of resources, including:

· Establishment of a framework for qualifications, standards and assessment which applies to both government and non-government providers of vocational education and training.
· Transferability of recognition of formal education and training between providers.
· Encouragement of competition between government and non-government vocational education and training providers. This is enhanced by government funding assistance for some programs provided by non-government providers.
· A focus on competency-based training directed at the needs of industry and commerce, with a shift away from time-dependent programs.
· Mechanisms for recognition of prior learning gained by individuals through formal or informal means.
· Establishment of the industry-based Australian National Training Authority (ANTA) in 1994, to formulate national strategic plans and objectives for the provision of VET.

The Australian Qualifications Framework, which was introduced in 1995, encompasses vocational education and training awards as well as higher education courses delivered by universities. This framework of qualifications replaced all former certificates, advanced certificates and associate diplomas awarded by vocational education and training providers in Australia.
Australian Qualifications Framework (2)

| Vocational Education & Training | Higher Education | Occupational level |
|---|---|---|
| | Doctoral Degree | |
| | Masters Degree | |
| | Graduate Diploma | |
| | Bachelor Degree | Professional |
| Advanced Diploma | Advanced Diploma | Technologist |
| Diploma | Diploma | Paraprofessional |
| Certificate IV | | Technician |
| Certificate III | | Skilled Trade |
| Certificate II | | Skilled Operative |
| Certificate I | | Preparatory |

Secondary students have the option of leaving school after about year 10, and might at that time commence an apprenticeship or traineeship. In recent years, under the Australian Traineeship system, the notion of an apprenticeship has been extended beyond what were previously considered to be traditional trade occupations. During an apprenticeship/traineeship period a student works through a training program leading to certificates I-IV. These studies generally articulate to VET diploma/advanced diploma programs. Given the trend for students to complete secondary school, many students enter a TAFE college, or another VET provider, and undertake a post-year 12 diploma/advanced diploma course. A diploma/advanced diploma would normally require 2-3 years of full-time study, or perhaps a combination of part-time/full-time study over a longer period.

Articulation

In the decades prior to the 1960s it was quite common for senior technical college students in a number of states to undertake a certificate course which articulated to a professional diploma program. In a number of instances, cooperative arrangements existed which permitted technical college diploma graduates of demonstrated ability to articulate to related university degree programs, with advanced standing.
These practices were disturbed in the 1960s when senior technical colleges in Australia were removed from the technical college system in each state and redesignated as what were then known as colleges of advanced education (CAEs). CAEs were established to provide tertiary-level studies of a more applied nature than those delivered by universities, and were the consequence of an investigation into the national provision of higher education by the 1964 Martin Committee. (3) Initial CAE course offerings did not extend beyond the professional diploma courses previously conducted by the technical colleges. However, by the early 1970s the CAEs, which later absorbed many former teacher training colleges, had moved to introduce degree-level programs. During the early 1990s, the CAEs underwent a further metamorphosis to become universities in their own right, or were absorbed by existing universities as a consequence of decisions by the Australian Government which required rationalization and consolidation of the higher education sector. What remained of the technical college system in Australia at the postsecondary level was reworked into what became TAFE colleges during the 1970s. During the 1960s, when diploma courses were transferred from technical colleges to the CAEs, concerns were raised by a number of parties, including trade unions and educators, regarding the loss of convenient linkages between vocational and higher education courses; at the time these concerns were ignored, in favour of a Taylorist approach which resulted in terminal courses directed at particular occupations. Students wishing to articulate between vocational and higher education programs during this era were subject to assessment on a case-by-case basis, which proved to be somewhat inconsistent.
During the 1980s the efficacy of articulation arrangements for TAFE course graduates began to be questioned, which subsequently led to a more open approach. This was the result of a number of factors, including:

· The number of TAFE college graduates from paraprofessional courses who subsequently sought higher-level studies increased significantly.
· The provision of TAFE courses was greatly improved in terms of course delivery resourcing, quality and availability during the 1970s, in the wake of higher levels of government funding.
· The entry level to higher-level TAFE courses gradually increased until completion of secondary education became the norm for many courses.
· Technological change impacting on industry and commerce encouraged the existing workforce to undertake further studies, not necessarily in a field of previous study.
· Social pressure, exemplified by trade union calls for removal of artificial barriers to further study, became stronger.
· Government at the national level gradually became more interested in seeking ways to improve articulation arrangements, for a number of reasons including social justice for disadvantaged groups and the need to improve the provision of education to enhance economic growth.
· State educational authorities became more sensitive to the need to improve linkages between vocational education programs under their control, as well as linkages between TAFE and related higher education institutions.

The 1985 report Articulation of TAFE Middle-Level and Higher Education Courses in Australia, by the TAFE National Centre for Research and Development, chronicles the limited opportunities and difficulties faced by TAFE course graduates wishing to articulate to higher-level studies at that time.
(4) In 1989, the Australian Government, in the White Paper Higher Education: A Policy Statement, (5) documented its concerns about the somewhat ad hoc credit transfer arrangements between TAFE and higher education and set out a number of principles that it expected higher education institutions to adopt, including:

· Transferring students should receive maximum possible credit for completed work.
· There should be continuing dialogue between higher education and TAFE institutions to establish means by which cooperative planning of TAFE courses may facilitate credit transfer.
· Individual institutions should codify and publish information on the extent of credit which they are prepared to grant in recognition of work done in other institutions.
· Arrangements should be available so that students who believe that their previous academic study justifies exemption from particular units may have credit transfer decisions reviewed.

Information about pathways through postsecondary education courses is more widespread and routinely appears in TAFE college and university handbooks. Secondary students are made aware of the education pathways open to them. (7) Higher education staff involved in the enrolment process have become familiar with the notion of articulation and, where applicable, credit transfer arrangements. (8) It has become commonplace in the accreditation process in both VET and higher education for course approval submissions to demonstrate a clear indication of articulation pathways. This concept has been taken up by the Institution of Engineers Australia as part of its course recognition process, which now requires articulation arrangements to be described in all courses submitted to it for recognition and assessment. A small number of dual-sector educational institutions exist in Australia that have both VET and higher education components.
These institutions are Swinburne University of Technology, RMIT University, Victoria University of Technology (all in Victoria), the Australian Maritime College in Tasmania, and the Northern Territory University. These institutions have developed comprehensive policies to enhance articulation arrangements. Recently, the Victorian Government announced that a number of TAFE colleges will amalgamate with universities, in part to enhance articulation arrangements. Higher education programs offered through the distance education mode provide significant articulation opportunities for VET course graduates, particularly those in remote geographic locations or those who otherwise find it difficult for various reasons to attend classes. Generally, courses offered through off-campus study have the same entry requirements and course credit arrangements that apply to courses conducted on campus, including allowances made for the entry of mature-age applicants. In many cases students can enrol in single subjects without prerequisites.

Reverse articulation

It has become apparent that articulation is not simply one-way traffic between vocational education and higher education. In Australia there is widespread articulation of individuals from the university sector to vocational education programs at TAFE colleges; this might be described as reverse articulation. The two-way nature of articulation between VET and higher education in Australia is such that there is now more traffic from higher education to VET than the other way round. Moreover, there is evidence indicating that articulation is a multidirectional phenomenon, with more than half of the students who move between the vocational education and university sectors having previously enrolled in more than one tertiary course.
(9) Whereas there has been considerable dialogue over decades, by those with an interest in the matter, to bring about improvements in articulation arrangements between VET and higher education, reverse articulation is a phenomenon which has grown without any perceived encouragement by government, educational authorities or institutions; demand has been a function of individual need. What is apparent in Australia is that there is now considerable community pressure for widespread provision of postsecondary education, in contrast to the situation in former times, when postsecondary education was seen as necessary for only small numbers of people. The 1998 West Committee felt it necessary to comment on this matter, stating that the national target should be near-universal access to some form of postsecondary education. (13) Given the increasing pressure on individuals to undertake further education and training simply to cope with change, it is apparent that the pressure to continue improving articulation arrangements in Australia will increase.
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0ewf--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-10&a=d&c=ewf&cl=CL1.4&d=HASH01405f333eeb385dd86aeee6.3
How Much Is Woody Paige Worth?

In 2023, Woody Paige's net worth was estimated to be $800 Thousand. Woody Paige makes a salary of about $300 Thousand. As one of the richest celebrities in the world, people are always wondering how much money Woody Paige has or makes. While it's impossible to calculate the exact amount Woody Paige is worth, we can use publicly available information such as salary, investments, businesses, endorsements, and other income to estimate a net worth for 2023.

Who Is Woody Paige?

- Place of Birth: Memphis
- Profession/Source of Wealth: Journalist
- Categories: Richest Celebrities, Authors
- Nicknames: Woodrow Wilson Paige
- Net Worth: $800 Thousand

Fans, please take into consideration that we didn't break into Woody Paige's bank accounts. The Woody Paige net worth and salary figures above have been reported from a number of credible sources and websites. However, there are several factors that affect a celebrity's net worth, such as taxes, management fees, investment gains or losses, marriage, divorce, etc.
https://www.celebritynetworth123.com/richest-celebrities/woody-paige-net-worth/
Event expiration time varies from 15 minutes to 6 hours, in steps such as 0:15, 0:30, 0:45, 1:00, 1:30, 2:00, 2:30, 3:00, 3:30, 4:00, 4:30, 5:00, 5:30 and 6:00. See below for the description of the expiration time quoted from NOAA. If multiple alerts are received, the radio will only display the latest received event, regardless of the Effective Time. However, all the Effective Times count down simultaneously and can be viewed in the Event Memory. In the Event Memory, if an event's Effective Time has expired, the LCD will display "Effective Time OFF" or display the receiving channel (for the R-200 or R-500 radio only). When the latest received event's Effective Time has expired and there are other active events, the next latest active event message and its Effective Time will be shown on the LCD display. When all event Effective Times have expired, no event messages will be displayed; however, the expired event messages, together with the date/time at which they were received, can still be reviewed in Event Memory. Regarding the expiration time and the valid time period, below is a quote from the NOAA site: http://www.nws.noaa.gov/nwr/nwrwarn.htm#B "NOTES about expiration time in SAME vs. valid time period in voice message: For short-fuse hazards, such as a tornado warning, the valid time period is from the broadcast time of the alert until the SAME message expiration time. For longer-fuse hazards, such as a winter storm warning, the beginning of the valid time period may not be the same as the alerting broadcast time and the end of the valid time period may not be the same as the SAME message expiration time. In such cases, updated messages are broadcast on or before the SAME message expiration time to provide fresh information. Updated messages will be alerted only for new warning information."
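The display and expiry behaviour described above can be sketched in a few lines of code. The following is a simplified, hypothetical model of the event-memory logic: the class and field names (`Event`, `EventMemory`, `received_at`, `effective_minutes`) are invented for illustration, and this is not the radio's actual firmware.

```python
from dataclasses import dataclass

@dataclass
class Event:
    received_at: float      # time (in seconds) at which the alert was received
    effective_minutes: int  # SAME expiration time (15 minutes to 6 hours)
    message: str

    def expires_at(self) -> float:
        return self.received_at + self.effective_minutes * 60

class EventMemory:
    def __init__(self) -> None:
        self.events: list[Event] = []  # every event is kept for later review

    def receive(self, event: Event) -> None:
        self.events.append(event)

    def display(self, now: float) -> str:
        # The LCD shows the latest *received* active event, regardless of
        # its Effective Time; all Effective Times count down in parallel.
        active = [e for e in self.events if e.expires_at() > now]
        if not active:
            # Expired messages stay reviewable in Event Memory, but the
            # main display shows no event message.
            return "no event"
        # When the latest event expires, the next most recently received
        # active event takes over automatically.
        latest = max(active, key=lambda e: e.received_at)
        return latest.message
```

For example, if a 30-minute tornado warning is received and a 15-minute flash-flood warning arrives ten minutes later, the flood warning is shown first (it is the latest received); once it expires, the still-active tornado warning reappears; once both expire, no event message is displayed.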
http://www.reecominc.com/EffectiveTime.htm
Milos Forman, the iconic director responsible for the Jack Nicholson-led drama One Flew Over The Cuckoo's Nest, has passed away at the age of 86. While Milos's name might not be among the most commonly uttered in homes across America, his contributions to film and Hollywood have been vast. The 86-year-old, who won the Best Director Oscar for both Amadeus and One Flew Over The Cuckoo's Nest, was born in what is now the Czech Republic and left in 1968, after Soviet tanks rolled into the country to crush the Prague Spring, a movement seeking democracy and reform. His widow, Martina, said that Milos died peacefully, surrounded by the people he loved. Forman studied filmmaking as a young man and decided to stay in the United States after the Warsaw Pact troops crushed the movement in what was then Czechoslovakia. Back in 1984, he told People Magazine that he thought he would never see his home country again. After Forman became a United States citizen in 1977, he returned to his native country, which was still under communist rule, and filmed parts of Amadeus there, to the delight of Czech citizens. The move was praised by people living in Prague, who consider Mozart to be a part of their history; they regard Mozart as their own because he wrote and premiered Don Giovanni there two hundred years earlier. The biographical drama went on to win eight Oscars, including Best Picture and Best Actor, and Forman won Best Director once again. Some of his later films included The People Vs. Larry Flynt, Man On The Moon, and Goya's Ghosts, with the film on Larry Flynt becoming one of the most legendary. Forman appeared on screen once again in the acclaimed Netflix documentary on Jim Carrey's portrayal of the late Andy Kaufman, which revolved around Jim's commitment to portraying the iconic comedic legend. We send our thoughts and prayers to Milos's family.
https://celebrityinsider.org/milos-forman-the-director-of-one-flew-over-the-cuckoos-nest-dies-at-the-age-of-88-135745/
This course deals with design and selection of mechanical elements such as welded joints, bolts, bearings, spur gears, shafts, brakes; alternative fatigue design methods with consideration of creep, yielding, fracture; basics of finite element analysis with emphasis on beam and plate models.

Mechanical Systems II (ME 302)

| Programs\Type | Required | Core Elective | Area Elective |
|---|---|---|---|
| BA-Political Science | | | |
| BA-Cultural Studies | | | |
| BA-Economics | | | |
| BA-International Studies | | | |
| BA-Management | | | |
| BA-Political Sci.&Inter.Relat. | | | |
| BA-Social & Political Sciences | | | |
| BA-Visual Arts&Visual Com.Des. | | | |
| BS-Biological Sci.&Bioeng. | * | | |
| BS-Computer Science & Eng. | | | |
| BS-Electronics Engineering | | | |
| BS-Industrial Engineering | * | | |
| BS-Manufacturing Systems Eng. | * | | |
| BS-Materials Sci. & Nano Eng. | * | | |
| BS-Materials Science & Eng. | * | | |
| BS-Mechatronics | * | | |
| BS-Microelectronics | | | |
| BS-Molecular Bio.Gen.&Bioeng | * | | |
| BS-Telecommunications | * | | |

CONTENT OBJECTIVE

This course is designed for undergraduate students to (i) develop an understanding of standards, safety, reliability, the importance of dimensional parameters and manufacturing aspects in mechanical design, (ii) develop an understanding of different design criteria, and (iii) gain the ability to design different mechanical elements such as shafts, joints, springs, etc.

LEARNING OUTCOME

- Formulate and analyze stresses and strains in machine elements and structures in 3-D subjected to various loads.
- Be able to do tolerance analysis and specify appropriate tolerances for machine design applications.
- Understand and apply multi-dimensional static/fatigue failure criteria in the analysis and design of mechanical components.
- Be able to analyze and design structural joints.
- Be able to analyze and design power transmission shafts carrying various elements with geometrical features.

PROGRAMME OUTCOMES

1. Understand the world, their country, their society, as well as themselves, and have awareness of ethical problems, social rights, values and responsibility to the self and to others. (2)
2. Understand different disciplines from natural and social sciences to mathematics and art, and develop interdisciplinary approaches in thinking and practice. (4)
3. Think critically, follow innovations and developments in science and technology, demonstrate personal and organizational entrepreneurship and engage in life-long learning in various subjects. (4)
4. Communicate effectively in Turkish and English by oral, written, graphical and technological means. (3)
5. Take individual and team responsibility, function effectively and respectfully as an individual and a member or a leader of a team; and have the skills to work effectively in multi-disciplinary teams. (4)

1. Possess sufficient knowledge of mathematics, science and program-specific engineering topics; use theoretical and applied knowledge of these areas in complex engineering problems. (5)
2. Identify, define, formulate and solve complex engineering problems; choose and apply suitable analysis and modeling methods for this purpose. (5)
3. Develop, choose and use modern techniques and tools that are needed for analysis and solution of complex problems faced in engineering applications; possess knowledge of standards used in engineering applications; use information technologies effectively. (5)
4. Ability to design a complex system, process, instrument or a product under realistic constraints and conditions, with the goal of fulfilling specified needs; apply modern design techniques for this purpose. (5)
5. Design and conduct experiments, collect data, analyze and interpret the results to investigate complex engineering problems or program-specific research areas. (1)
6. Knowledge of business practices such as project management, risk management and change management; awareness of innovation; knowledge of sustainable development. (1)
7. Knowledge of the impact of engineering solutions in a global, economic, environmental, health and societal context; knowledge of contemporary issues; awareness of legal outcomes of engineering solutions; understanding of professional and ethical responsibility. (4)

1. Familiarity with concepts in statistics and optimization, knowledge in basic differential and integral calculus, linear algebra, differential equations, complex variables, multi-variable calculus, as well as physics and computer science, and ability to use this knowledge in modeling, design and analysis of complex dynamical systems containing hardware and software components. (5)
2. Ability to work in design, implementation and integration of engineering applications, such as electronic, mechanical, electromechanical, control and computer systems that contain software and hardware components, including sensors, actuators and controllers. (5)

1. Comprehend key concepts in biology and physiology, with emphasis on molecular genetics, biochemistry and molecular and cell biology as well as advanced mathematics and statistics. (1)
2. Develop conceptual background for interfacing of biology with engineering for a professional awareness of contemporary biological research questions and the experimental and theoretical methods used to address them. (1)

1. Applying fundamental and advanced knowledge of natural sciences as well as engineering principles to develop and design new materials and establish the relation between internal structure and physical properties using experimental, computational and theoretical tools. (4)
2. Merging the existing knowledge on physical properties, design limits and fabrication methods in materials selection for a particular application or to resolve material performance related problems. (4)
3. Predicting and understanding the behavior of a material under use in a specific environment knowing the internal structure or vice versa. (1)

1. Formulate and analyze problems in complex manufacturing and service systems by comprehending and applying the basic tools of industrial engineering such as modeling and optimization, stochastics, statistics. (1)
2. Design and develop appropriate analytical solution strategies for problems in integrated production and service systems involving human capital, materials, information, equipment, and energy. (1)
3. Implement solution strategies on a computer platform for decision-support purposes by employing effective computational and experimental tools. (1)

ASSESSMENT METHODS and CRITERIA

| | Percentage (%) |
|---|---|
| Final | 30 |
| Midterm | 50 |
| Individual Project | 20 |

RECOMMENDED or REQUIRED READINGS

Textbook: R.G. Budynas, K. J. Nisbett, Shigley's Mechanical Engineering Design.
Readings: 1) Robert L. Norton, Machine Design: An Integrated Approach, Prentice Hall (third edition), New Jersey, USA, 2014.
https://www.sabanciuniv.edu/syllabus/courses.cfm?year=2020&term=02&subject=ME&code=302&lan=eng
Planning In Conflict – Experiences With The Conflict-Sensitive Programming Approach 'Do No Harm' in Pastoralist Settings. Conflict-sensitive programming is a cross-cutting approach that can be integrated into any intervention (emergency, humanitarian, recovery or development) through various sectors, such as water, natural resource management, animal health, education, health and nutrition, that may not necessarily be conflict related. Conflict-sensitive programming requires that an organization demonstrate the ability to understand its operational context and its interventions and interactions with that context, and to ensure that its interventions minimize negative impacts and maximize positive impacts on conflict. This book shares VSFs' experiences and lessons learnt in conflict-sensitive programming in pastoral settings by integrating the "Do No Harm" approach, Participatory Rural Appraisal (PRA) and components of conflict analysis tools. It captures the processes of institutionalization and application of the integrated approach in drought preparedness projects. We envisage that governments, donors and aid service providers working in emergency, humanitarian, recovery and development interventions will find the practical experiences and outcomes of humanitarian assistance using conflict-sensitive programming in cross-border drought preparedness projects highlighted in this book useful to their work. We hope that the shared experiences will foster a better understanding of the interactions between aid assistance and conflict, and of how these interactions may decrease or increase impacts on conflict in terms of peace building or conflict reinforcement, even when that was not their intended or expected outcome.
http://vsf-international.org/fr/project/planning-in-conflict-do-not-arm/
The Centre for EU Transport Projects is responsible for EU funds distribution in the area of transport (infrastructure, rolling stock, IT, ERTMS, safety) and provides beneficiaries with comprehensive support. We offer experience in the field of economic analysis, environmental expertise, and cooperation with EU institutions and with many transport market stakeholders. Our activities in the field of traffic models and our support for cities in SUMP preparation allow us to offer a wide range of areas for cooperation.

We are a Spanish company established in Badajoz. Based on our 10 years of experience, we have built solid expertise in European project elaboration, management and implementation, environmental and socioeconomic impact assessment (e.g. LCA), and social participation processes. We are presently involved in several projects related to sustainable development, working on a variety of topics such as short food supply chains, rural-urban issues, mobility, waste management and cultural heritage.

FGV Europe is the first overseas representation of Fundação Getulio Vargas. FGV is the 6th best think tank in the world, according to the 2018 Global Go To Think Tanks Index, and the most important research and policy advisory institution in Latin America. FGV Europe's work is dedicated to key issues at the forefront of global cooperation and sustainable development, building bridges between Brazilian and European institutions and understanding and integrating their visions, ideas, and projects.

Replika was conceived as an independent, interdisciplinary space, promoting collaboration between professional artists and communities and encouraging theater for young audiences that is deeply involved in society. Based on the methodology of Educational Theater, we will conduct a series of creative workshops (theater exercises, creative writing, site-specific artwork and handmade crafts, etc.) meant to develop participants' spontaneous ability of association, imagination and creative potential.
Pannonhalma is a small touristic city under development near Győr. The municipality is committed to ensuring the future development of the city by using renewable energy sources, renewing urban and suburban transport, modernizing the existing infrastructure with smart solutions, and ensuring the conditions for e-transport. The municipality plans a community application, smart public lighting, a smart-grid solar cell system, electric cars, an urban smart transport detection system, etc.

OpenCom is a private Social and Environmental Research, Training, Communication and ICT and Eurodesk Center for the Province of Arezzo. We collaborate with institutions, VET centres and universities in doing research, training and CSR initiatives, and in promoting inclusiveness, civic participation and social dialogue.

GrAT is a scientific association for research and development of Appropriate Technology for developing and developed countries. To reach this goal, GrAT conducts the whole R&D project process, entailing analysis, planning, design, implementation, monitoring and dissemination. Furthermore, the knowledge gained from those projects is fed into educational programmes for college and university students, as well as into practical training and capacity-building programmes for professionals.

21 SOLUTIONS is a Brussels-based consultancy specialised in the practical implementation of multi-stakeholder community-scale projects throughout the city-region. We support more than 60 "sustainable neighbourhoods" (http://www.quartiersdurables.be) that have developed more than 100 community-led projects reaching a few thousand citizens. We are also involved in the www.vilco.brussels research project to improve relations between community-led initiatives and local authorities/municipalities.

OIKOPLUS is a communication agency. We help to disseminate relevant future-oriented knowledge (especially related to sustainability) to the right stakeholders and change makers.
We make sure that those directly affected by a certain problem, often local communities or marginalised groups of people, have a voice, and that their perception and know-how are recognised and taken into account by decision makers. See the JPI UE Syncity project.

Trianon, a real estate owner in the south of Sweden, wants to refurbish its building "Motetten", located in a socioeconomically deprived area of Malmö, to become more sustainable. One important part is to refurbish the outdoor environment to become more inclusive, with a special focus on making women and girls feel safe in the area and on promoting good health and recreation. Trianon will take the role of demonstration partner.

Analysis and diagnosis of the policies and projects carried out in public urban space and in housing that have favoured equitable, inclusive and sustainable urban development. Analysis of the impacts of each action. Compilation of best practices and elaboration of recommendations and strategic lines that allow sustainable urban development.

We are a non-profit NGO for research and development in the topics of urban landscape design and mental health. Through our experiments we provide scientific evidence for which types and components of urban landscape design can benefit human mental health and wellbeing. We conduct studies in neuroscience, psychology and behaviour, and we use artificial intelligence and GIS systems for mapping the most healing landscapes at scale. Our main assets are talented scientists and great ideas.

Kinoa is an Italian SME focused on creating technologically advanced instruments for the collection and use of data, using the new paradigms of IoT, big data and smart data. In addition to its expertise in collecting, processing and analysing data, Kinoa offers the Kimap technology.
This tool, developed by us in 2017, allows the collection of data on the quality/health status of the terrain and sidewalks crossed by the user, simply using a smartphone and a mobility support.

Asterion Europe is a digital agency with experience creating systems that facilitate knowledge sharing, best-practice dissemination and expertise gathering. We have participated in several projects in a variety of roles, designing websites, databases, communication channels, partner services and knowledge-sharing systems. Our team has 10+ years' experience participating in European projects on a wide range of topics, allowing us to craft bespoke solutions for your project needs.

GECOS has experience in sustainable scenarios (urban and tourism). The environmental aspect of the analysis involves qualitative data, and GECOS is expert in the modelling and analysis of causal relations involving qualitative data. We therefore offer our expertise to become a partner, alongside the partners that will provide the general knowledge. From here, we can develop a web product to be used worldwide.

AGS-water solutions is a company with activities ranging from consultancy and the development of specialised engineering services to the management, operation and maintenance of water and wastewater collection, treatment facilities, and storage and distribution systems, meeting the needs of both public and private activities. With an international presence, the company's mission is to constantly innovate and improve the efficiency of the solutions provided, always exceeding the expectations of its clients.

We are already involved in an H2020-funded project about sustainable and inclusive mobility in the urban context. We are a Certified B Corp and our skills are developed around social impact measurement, design thinking and business design. We can provide frameworks and methodologies that engage for the cultural sustainability of given urban environments.
We can assess the societal/cultural impact of innovative solutions on given urban communities and territories. We can pacify the context by finding balanced mitigation solutions that both encourage the change(s) and engage for the cultural sustainability of the urban areas.

The University of Graz, here the Department of Geography and Regional Science, has wide experience engaging in various European and national projects offering research-based policy advice. One focus is on new and innovative solutions for sustainable and inclusive growth, especially in old-industrialised towns (e.g. the INTERREG project www.inducult.eu; www.shiftx.eu). The Department has a wide partner network in Central and SE Europe.

The YourDATA team is expert in economic, statistical and policy evaluation analysis; its members have published a number of articles in top economic journals. We are available for any project that needs our expertise in statistical-economic analysis, exploitation and dissemination activities.
http://www.ideal-ist.eu/partner-search?field_ps_type_value=2&topic_id=TRANSFORMATIONS-03-2018-2019
Dale Carnegie was an American writer and lecturer, the developer of famous courses in self-improvement, salesmanship, corporate training, interpersonal skills and public speaking, and the author of How to Win Friends and Influence People (1936). He was born on Saturday, November 24th, 1888, in Maryville, Missouri, United States. Dale Carnegie wants success in its fullest meaning - wealth, power, and material comforts. He has enormous ambition. Dale dreams of big projects, great undertakings, and rewards. Carnegie is a visionary. He sees the horizon and the promise. In general, Dale Carnegie also sees the methods necessary to fulfill that promise. But he is not especially good with details; Dale needs others to help him deal with the smaller parts of the picture. His challenge is to make full use of the full spectrum of his abilities, as indicated by his other core numbers. In the same way, Carnegie must bring forth the best from others and orchestrate their talents toward the realization of his vision. In short, Dale Carnegie must lead by example, demonstrating the standard for commitment, determination, and excellence. All of this requires effort on Dale Carnegie's part. He will meet with obstacles and difficulties. His determination and commitment will be tested. But Dale has the power to overcome every obstacle he faces and accomplish the goal he has set for himself. Dale's task in life is to learn to use power in refined and elevated ways. The expression and use of power is directly related to one's personal evolution. The more primitive uses of power rest with violence and the threat of deprivation. The higher expressions depend upon Carnegie's capacity to care for and nourish the people and projects in his charge, giving each exactly what is needed at the appropriate time and in appropriate amounts. Dale Carnegie has a creative mind and an unusual approach to business and problem-solving. He needs, however, to cultivate his ability to evaluate others.
It is essential for Dale to be involved in a project that challenges him and offers the potential for rewards. When he is uninvolved or unoccupied with a worthwhile task, Carnegie can become deeply despondent, depressed, and frustrated. Dale Carnegie is the perfect example of the old cliché, "Idle hands are the devil's workshop." Without a challenge, he can lose balance in life and become selfish, cruel, and even self-destructive. Disappointments become shattering, causing terrible consequences in self-image and self-love. Perspective is everything for Dale Carnegie. He has a natural talent for balancing the spiritual and material planes. Dale needs to cultivate courage and stamina. Courage is a choice. Dale Carnegie needs to decide to be brave in the face of fear. A balanced 8 is among the most powerful and satisfying of numbers. It is the spirit of the true survivor, able to bounce back and achieve a greater victory.

About Dale's Soul Urge (Heart's Desire) number

Dale Carnegie's Soul Urge number represents his inner self. The Soul Urge number, also called Heart's Desire, shows Dale's underlying urge - his true motivation. It reveals the general intention behind many of Carnegie's actions. Consequently, it dramatically influences the choices Dale Carnegie makes in life.
https://www.celebrities-galore.com/celebrities/dale-carnegie/soul-number/
WASHINGTON — The U.S. Defense Department will reduce spending on commercial satellite imagery as part of a broader initiative aimed at trimming $259 billion from projected U.S. military expenditures over the next five years, according to a Pentagon document released Jan. 26. However, the document says the Pentagon still intends to expand industry’s imaging capacity, an indication that planned government-backed investments in new spacecraft will go forward. Commercial imagery was listed among several activities facing “substantial” reductions relative to previous spending plans, the document said, specifying purchases for imaging capacity that exceeds requirements. The document, “Defense Budget Priorities and Choices,” was released at a Pentagon press conference that offered a glimpse of U.S. President Barack Obama’s defense budget request for 2013. The forthcoming request will total $525 billion for next year and reflects numerous force structure and other changes that the administration deemed necessary in light of a changing strategic and budgetary environment, U.S. Defense Secretary Leon Panetta said. Earlier in January Panetta unveiled new strategic guidance that he and other defense officials say drove the decisions previewed Jan. 26. The guidance responds in part to the Budget Control Act of 2011, which directs the Pentagon to cut planned spending by $487 billion over the next decade. Among the major programs being scaled back in the new budget are the Joint Strike Fighter and the Global Hawk surveillance drone, senior defense officials said. One variant of Global Hawk, the so-called Block 30 used for optical imaging, will be scrapped in favor of the less expensive U-2 manned spy plane, officials said. Development and production of other Global Hawk variants will continue, they said. 
Meanwhile, spending on certain capabilities, among them space, cyber warfare and special operations forces, will be preserved and in some cases increased as the Pentagon moves to a leaner and more agile force and places greater emphasis on the Asia-Pacific and Middle East regions. The document said planned upgrades to space capabilities including GPS, the Space-Based Infrared System for missile warning and Advanced Extremely High Frequency secure communications satellites will be preserved in the request, expected to be delivered to Capitol Hill in mid-February. Programs to defend U.S. and European territory against ballistic missile attacks also will be preserved, officials said, but some regional interceptor programs will not see the funding increases that had been anticipated. Deputy Defense Secretary Ashton Carter said the United States does not intend to decrease funding for regional missile defense programs but will seek to shift more funding responsibility to the U.S. allies involved. He did not cite examples, but the U.S. Missile Defense Agency has significant co-development programs under way with Israel and Japan. Senior U.S. defense officials began hinting late last year that commercial imagery was being considered for reduced funding as the Pentagon and intelligence community seek to rein in budgets after a decade of soaring growth. The U.S. National Geospatial-Intelligence Agency (NGA) is responsible for procuring commercial imagery for military and intelligence users from two principal providers: DigitalGlobe of Longmont, Colo., and GeoEye of Dulles, Va. Both companies are investing in new and highly capable satellites under an NGA program called EnhancedView that guarantees a certain level of annual business over a 10-year period.
The combined value of the EnhancedView contracts, awarded in August 2010, is about $7.3 billion, with DigitalGlobe and GeoEye expected to expand their existing fleets and provide a minimum amount of imagery each year. In a written response to a Space News query, GeoEye Chief Executive Matthew O’Connell said cuts to commercial satellite imagery purchases would be inconsistent with the Pentagon’s new strategic guidance and resulting budget priorities outlined in January. “Moreover, cuts to the EnhancedView program would be counter to Presidential policy on commercial remote sensing,” he said. “One goal of that policy is to advance US leadership in space-based earth observation and geospatial technology. … It would seem cuts to the commercial imagery program would undermine this goal.” O’Connell also noted that the new strategic guidance emphasizes greater reliance on intelligence, surveillance and reconnaissance assets, including unmanned aircraft and satellites, and that commercial imagery can be readily shared with U.S. allies because it is not classified.
https://spacenews.com/pentagon-trim-commercial-imagery-spending/
Janesville, Wisconsin is close to the interstate highway I-90. It neighbors other cities: Afton, WI (6.8 mi / 14 min); Hanover, WI (9.2 mi / 17 min); Milton, WI (9 mi / 21 min). You can get to the city from exit 175b a of I-90 (8.9 mi / 16 min); exit 171c b of I-90 (7.1 mi / 13 min); exit 171a of I-90 (4.1 mi / 10 min). From Janesville, Wisconsin you can reach interstate highways I-90 (8.9 mi / 16 min); I-43 (15.8 mi / 24 min); I-94 (36.4 mi / 52 min). It is also close to local highways: US 51 WI (0.8 mi / 2 min); WI 11 (2.7 mi / 6 min); US 14 WI (4.7 mi / 9 min).
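The access-point listing above is essentially a table of (distance, drive time) pairs keyed by exit. As a minimal illustrative sketch (the figures come from the listing; the dictionary name and helper function are hypothetical, not part of any real routing API), the data can be restructured for lookup:

```python
# Hypothetical sketch: I-90 access points for Janesville, WI, taken from the
# listing above, keyed by exit label -> (distance in miles, drive time in minutes).
I90_EXITS = {
    "175b a": (8.9, 16),
    "171c b": (7.1, 13),
    "171a": (4.1, 10),
}

def nearest_exit(exits):
    """Return the label of the access point with the shortest driving distance."""
    return min(exits, key=lambda label: exits[label][0])

print(nearest_exit(I90_EXITS))  # exit 171a, at 4.1 mi, is the closest
```

Sorting by the first tuple element (miles) rather than minutes matches how the guide orders its "close to" lists; swapping the key index would rank by drive time instead.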
https://roadnow.com/i90/Janesville-WI-city-guide
Features at Country Meadows
- Full kitchen
- Multipurpose room
- Screened patio and walkout patio for barbeques
- Large flat screen TV
- Billiards
Other recreational features include:
- 9-hole golf course
- Heated pool
- Pergola with outdoor games table (chess, checkers, backgammon)
- Outdoor shuffleboard
- Horseshoe pits
- Access to snowmobile/walking trails
Additional features include:
- Newer community (2010)
- Municipal services
- Paved r ...

3 Active Listing(s) - Georgian Glen Drive, Wasaga Beach, Ontario
Georgian Glen Estates is situated in an ideal location with access to a large variety of activities and attractions. Homeowners live minutes away from parks, golf courses, trail systems, and the incredible beach. In addition, residents can enjoy the many urban amenities and services this unique area has to offer, including recreation centres and arenas, theatre, and great shopping and dining. For a small annual fee, residents of Georgian Glen Estat ...

3 Active Listing(s) - Carruthers Street, Wasaga Beach, Ontario
Hometown enjoys an enviable location in beautiful Wasaga Beach. Residents of Hometown enjoy an enviable location close to the world's longest freshwater beach, where they can swim in the warm blue waters or stroll along the picturesque shores of the Bay. The proximity of Hometown to the local services and amenities of Wasaga Beach provides residents with easy access to whatever they might need or want to maintain their new homes and their chosen li ...
1 Active Listing(s) - Tecumseth Drive, New Tecumseth, Ontario
Features at Tecumseth Pines
- Recreation Centre
- Indoor Swimming Pool
- Tennis Courts
- Bocce Court
- Shuffleboard Court
- Library (music and television room)
- Exercise Room
- Workshop
- Billiards Room
- Multipurpose Room
- Bocce
- Bowling - 5-pin
- Stocked Trout Pond
- Darts
Additional features include:
- Rural setting
- Water features and mature trees
- Walking paths
- Landscaped common areas
- Professional community management
- Roads are cleared of snow

0 Active Listing(s) - Bay Moorings Boulevard, Penetang, Ontario
Features at The Village
- Overlooks Penetanguishene Bay
- Paved roads
- Green space with mature trees
- Walking paths
- Professional community management
- Snow removal of roads, curbside garbage and recycling pick-up, and maintenance of all common areas

6 Active Listing(s) - MEADOW Lane, Wasaga Beach, Ontario
Features at Wasaga Meadows:
- Newer community
- Municipal services
- Three-season outdoor pavilion
- Gazebo, pond and walking paths
- Paved roads
- Snow removal of roads, curbside garbage and recycling pick-up, and maintenance of common areas
- Professional management
- For a small annual fee, residents of Wasaga Meadows also have access to the recreational facilities of Wasaga Country Life Resort
https://www.carolireland.com/leased-land-communities
Astrogation, short for astronavigation, is the planning of a route through space and the actual navigation of spacecraft, either subluminal maneuvering in interplanetary travel or the calculations used to perform slipstream space jumps in interstellar travel. In the United Nations Space Command, UNSC Astronavigation is the organization responsible for creating and maintaining astrogation data. After the start of the Human-Covenant War, this data became a major security risk as the Covenant could potentially use it to pinpoint the location of human colonies and Earth. This prompted the enactment of the Cole Protocol, which established various measures to prevent NAV data stored on starships or other locations from falling into Covenant hands. In the context of spacecraft maneuvering in conventional space, several UNSC Naval commanders have been remarked as being particularly proficient in astrogation. Captain Jacob Keyes was credited as a skilled astronavigator, due to his daring maneuvers, particularly the "Keyes Loop" he performed during the Battle of Sigma Octanus IV. Admiral Preston Cole was also renowned for his skills in using effective tactics, including gravity-assist maneuvers, in space battles. While most other operations involved in operating a spacecraft, including maneuvering in normal space, can be performed by a human pilot, the successful execution of slipspace jumps requires a navigation computer or a shipboard artificial intelligence. Due to the unpredictability of human drives, dedicated slipspace guidance beacons are sometimes used to provide navigational reference points when a coordinated and accurate slipspace transit is necessary. Despite the necessity for a computer to perform the jump itself, UNSC Navy personnel are taught the basic calculations involved in a slipspace jump, known as Shaw multivariate calculus.
In the late 25th century, then-crewman apprentice Preston Cole gained recognition for coming up with a new way to calculate Shaw-Fujikawa jump parameters. There have been instances of individual ship captains performing slipspace jump calculations on their own, usually to execute a nonstandard or experimental jump, but the results of these have often been disastrous or uncertain, including Admiral Cole's possible in-atmosphere jump during the Battle of Psi Serpentis. When commandeering the stolen Insurrectionist craft Beatrice, Dr. Catherine Halsey plotted a slipspace exit vector through a saddle point in an imaginary mathematical plane involved in the jump function, allowing the drive to recapture the particle accelerator energy in its plasma coils, although this maneuver introduced a noted risk of coil overload. A slipspace jump path must be computed through a pattern of superfine quantum filaments which are distorted by gravity. Prior to the UNSC's discovery and implementation of superior Forerunner drive technology, human slipspace technology lacked the resolution to compute a path through the warped filaments near significant gravity wells, such as that of a planet. The Covenant's superior slipspace technology allows them to compute jumps with far greater precision and stability, owing to their reverse-engineering of Forerunner relics. The major difference is resolution: a Covenant ship is able to calculate abstractions in the lattice of quantum filaments on a far smaller scale, allowing an AI, for example, to compensate for the warped spacetime within a gravity well and perform a jump whilst using minimal energy. Following the Human-Covenant War, a number of UNSC ships, most notably UNSC Infinity, have been fitted with Forerunner drive technology, granting them near-perfect jump accuracy and far greater slipspace velocities than before.
https://www.halopedia.org/Astrogation
In the first step in insulin action, insulin binds to its receptor on the surface of the target cell, thereby activating the insulin receptor tyrosine kinase. This leads to receptor autophosphorylation, and to phosphorylation of tyrosine residues in multiple intracellular proteins. We have investigated this pathway whereby insulin stimulates the translocation of GLUT4 glucose transporters to the plasma membrane in isolated rat adipocytes, a physiologically important target cell for insulin action. Using an experimental approach involving transient expression in primary cultures of rat adipocytes, we have previously demonstrated that insulin receptor tyrosine kinase activity is required for this action of insulin, and that IRS-1 appears to be a substrate which participates in mediating the effect of insulin upon recruitment of GLUT4. More recently, we have demonstrated that activation of phosphatidylinositol 3-kinase (PI 3-kinase) activity also appears to play a necessary role in mediating this action of insulin. This conclusion is based upon two types of evidence: first, that the effect of insulin is inhibited by wortmannin (an inhibitor of the catalytic activity of PI 3-kinase) and, second, that expression of a dominant negative mutant of the p85 regulatory subunit of PI 3-kinase inhibits insulin action in rat adipocytes. It is not definitively established whether activation of PI 3-kinase is sufficient to trigger the translocation of GLUT4. Work is currently underway to identify the molecules that function downstream from PI 3-kinase, and also to elucidate the role of other phosphorylation substrates (e.g., insulin receptor substrate-2) in mediating the metabolic actions of insulin in adipose tissue.
- Merriman Smith B, Krushinsky A, Kistler J, Donaldson P. Expression patterns for glucose transporters GLUT1 and GLUT3 in the normal rat lens and in models of diabetic cataract. Invest Ophthalmol Vis Sci. 2003;44:3458-66 pubmed..used to detect changes in GLUT1 and -3 transcript levels, and Western blot analysis was used to monitor changes in GLUT3 protein expression levels in diabetic rats... - Li D, Han H. [Effect of progesterone on the expression of GLUT in the brain following hypoxic-ischemia in newborn rats]. Zhongguo Ying Yong Sheng Li Xue Za Zhi. 2008;24:353-5 pubmedTo investigate the expression of GLUT1 and GLUT3 in the hippocampus after cerebral hypoxic-ischemia (HI) in newborn rats and the effect of progesterone (PROG) on them... - Sciullo E, Cardellini G, Baroni M, Torresi P, Buongiorno A, Pozzilli P, et al. Glucose transporter (Glut1, Glut3) mRNA in human placenta of diabetic and non-diabetic pregnancies. Early Pregnancy. 1997;3:172-82 pubmedTransport of glucose into the cell is catalyzed by glucose transporters (Glut). Glut1 and Glut3 are expressed at various levels in many human tissues, including the placenta... - Iasevoli F, Latte G, Avvisati L, Sarappa C, Aloj L, de Bartolomeis A. The expression of genes involved in glucose metabolism is affected by N-methyl-D-aspartate receptor antagonism: a putative link between metabolism and an animal model of psychosis. J Neurosci Res. 2012;90:1756-67 pubmed publisher..Here, we explored the gene expression of hexokinase 1 (Hk1) and glucose transporter 3 (GLUT3) after the administration of a subanesthetic or a subconvulsant dose of ketamine in rats, considered to provide an .. - Maher F, Davies Hill T, Simpson I. Substrate specificity and kinetic parameters of GLUT3 in rat cerebellar granule neurons. Biochem J. 1996;315 ( Pt 3):827-31 pubmed..catalytic-centre activity ("turnover number') and stereospecificity of the neuronal glucose transporter GLUT3 in primary cultured cerebellar granule neurons... 
- Scheepers A, Joost H, Schurmann A. The glucose transporter families SGLT and GLUT: molecular basis of normal and aberrant function. JPEN J Parenter Enteral Nutr. 2004;28:364-71 pubmed..In addition, a malfunction of glucose transporter expression or regulation (GLUT4) appears to contribute to the insulin resistance syndrome. .. - Natalicchio A, De Stefano F, Perrini S, Laviola L, Cignarelli A, Caccioppoli C, et al. Involvement of the p66Shc protein in glucose transport regulation in skeletal muscle myoblasts. Am J Physiol Endocrinol Metab. 2009;296:E228-37 pubmed publisher..GLUT1 and GLUT3 transporter proteins were sevenfold and fourfold more abundant, respectively, and were localized throughout the .. - Nagamatsu S, Sawa H, Kamada K, Nakamichi Y, Yoshimoto K, Hoshino T. Neuron-specific glucose transporter (NSGT): CNS distribution of GLUT3 rat glucose transporter (RGT3) in rat central neurons. FEBS Lett. 1993;334:289-95 pubmed..In the present study we have isolated the GLUT3 (RGT3) cDNA and produced anti RGT3 polyclonal antibody allowing us to investigate the cellular localization and .. - Ganguly A, McKnight R, Raychaudhuri S, Shin B, Ma Z, Moley K, et al. Glucose transporter isoform-3 mutations cause early pregnancy loss and fetal growth restriction. Am J Physiol Endocrinol Metab. 2007;292:E1241-55 pubmedGlucose transporter isoform-3 (GLUT3) is the trophoblastic facilitative glucose transporter... - Santalucia T, Palacin M, Zorzano A. T3 strongly regulates GLUT1 and GLUT3 mRNA in cerebral cortex of hypothyroid rat neonates. Mol Cell Endocrinol. 2006;251:9-16 pubmed..Because of this, we hypothesised that the expression of GLUT1 and GLUT3, glucose transporters expressed in brain cortex, may be altered in congenital hypothyroidism... - Krishnan S, Haddad G. Cloning of glucose transporter-3 (GLUT3) cDNA from rat brain. Life Sci. 1995;56:1193-7 pubmedUsing PCR, glucose transporter-3 (GLUT3) was cloned in its entirety from rat brain. 
The similarity between human, murine and rodent glucose transporter-3 proteins exceeds 90%... - Shin B, Fujikura K, Suzuki T, Tanaka S, Takata K. Glucose transporter GLUT3 in the rat placental barrier: a possible machinery for the transplacental transfer of glucose. Endocrinology. 1997;138:3997-4004 pubmed..in the transplacental transfer of glucose, we investigated the localization of glucose transporters GLUT1 and GLUT3 immunohistochemically in the rat placenta... - Fattoretti P, Bertoni Freddari C, Casoli T, Di Stefano G, Giorgetti G, Solazzi M. Ethanol-induced decrease of the expression of glucose transport protein (Glut3) in the central nervous system as a predisposing condition to apoptosis: the effect of age. Ann N Y Acad Sci. 2003;1010:500-3 pubmedWe measured the effect of chronic ethanol administration on the expression of Glut3 in the cerebellum and hippocampus of adult and old rats. Glut3 expression significantly decreased in aging, in ethanol-treated rats vs... - Wu H, Sha J, Ren B, Wang H, Zhang G, Li M. [Expression of glucose transporter-3 in the cerebral cortex of aging rats]. Nan Fang Yi Ke Da Xue Xue Bao. 2010;30:1950-2 pubmedTo observe the changes in the expression of glucose transporter-3 (GLUT3) in the cerebral cortex of rats during aging and investigate the role of GLUT3 in the aging process of the nervous system... - Langdown M, Sugden M. Enhanced placental GLUT1 and GLUT3 expression in dexamethasone-induced fetal growth retardation. Mol Cell Endocrinol. 2001;185:109-17 pubmed..Two glucose transporters, GLUT1 and GLUT3, are expressed in placenta. In rodent placenta, GLUT1 is replaced by GLUT3 during late gestation... - Hamlin G, Cernak I, Wixey J, Vink R. Increased expression of neuronal glucose transporter 3 but not glial glucose transporter 1 following severe diffuse traumatic brain injury in rats. J Neurotrauma. 
2001;18:1011-8 pubmed..that increased glucose uptake after traumatic brain injury is primarily accounted for by increased neuronal Glut 3 glucose transporter expression and that this increased expression after trauma is part of a neuronal stress .. - Beltrán F, Acuña A, Miró M, Angulo C, Concha I, Castro M. Ascorbic acid-dependent GLUT3 inhibition is a critical step for switching neuronal metabolism. J Cell Physiol. 2011;226:3286-94 pubmed publisher..Since this observation was valid only for cells expressing glucose transporter 3 (GLUT3), we evaluated the importance of this transporter on the inhibitory effect of ascorbic acid on glucose transport... - Yu J, Li J, Zhang S, Xu X, Zheng M, Jiang G, et al. IGF-1 induces hypoxia-inducible factor 1?-mediated GLUT3 expression through PI3K/Akt/mTOR dependent pathways in PC12 cells. Brain Res. 2012;1430:18-24 pubmed publisher..neurons, and the passage of glucose across cell membranes is mainly facilitated by glucose transporter 3 (GLUT3)... - Iwabuchi S, Kawahara K. Extracellular ATP-prinoceptor signaling and AMP-activated protein kinase regulate astrocytic glucose transporter 3 in an in vitro ischemia. Neurochem Int. 2013;63:259-68 pubmed publisher..Under the physiological conditions, cultured astrocytes primarily express glucose transporter1 (GLUT1), and GLUT3 is only detected at extremely low levels... - Wu H, Sha J, Wang H, Ren B, Zhang G, Li M. [Expression of GLUT 3 in different brain regions of aged rats]. Zhejiang Da Xue Xue Bao Yi Xue Ban. 2010;39:43-8 pubmedTo examine the distribution of glucose transport 3 (GLUT 3) in different brain regions of aged rats and to investigate its role in ageing process of the nervous system... - Kelly S, Harrell C, Neigh G. Chronic stress modulates regional cerebral glucose transporter expression in an age-specific and sexually-dimorphic manner. Physiol Behav. 
2014;126:39-49 pubmed publisher..Given the importance of appropriate metabolic balance within the brain, further assessment of the functional implications of life stage and environmentally-induced changes in GLUTs are warranted. .. - Kipmen Korgun D, Bilmen Sarikcioglu S, Altunbas H, Demir R, Korgun E. Type-2 diabetes down-regulates glucose transporter proteins and genes of the human blood leukocytes. Scand J Clin Lab Invest. 2009;69:350-8 pubmed publisher..b>GLUT 3 was decreased in granulocytes, lymphocytes and monocytes from diabetic patients... - Rauch M, Ocampo M, Bohle J, Amthauer R, Yañez A, Rodríguez Gil J, et al. Hexose transporters GLUT1 and GLUT3 are colocalized with hexokinase I in caveolae microdomains of rat spermatogenic cells. J Cell Physiol. 2006;207:397-406 pubmed..We analyzed the expression and function of GLUT1 and GLUT3 in isolated spermatocytes and spermatids... - Zoidis E, Ghirlanda Keller C, Schmid C. Triiodothyronine stimulates glucose transport in bone cells. Endocrine. 2012;41:501-11 pubmed publisher..glucose (2DG) uptake and looked for expression of the high-affinity glucose transporters GLUT1 and GLUT3 by northern and western analysis. T? did not influence the cell number but slightly (1... - Hou W, Xian Y, Zhang L, Lai H, Hou X, Xu Y, et al. Influence of blood glucose on the expression of glucose trans-porter proteins 1 and 3 in the brain of diabetic rats. Chin Med J (Engl). 2007;120:1704-9 pubmed..The downregulation of GLUT(1) and GLUT(3) expression might be the adaptive reaction of the body to prevent excessive glucose entering the cell that may lead to cell damage. .. - Rumsey S, Daruwala R, Al Hasani H, Zarnowski M, Simpson I, Levine M. Dehydroascorbic acid transport by GLUT4 in Xenopus oocytes and isolated rat adipocytes. J Biol Chem. 
2000;275:28246-53 pubmed..Dehydroascorbic acid (DHA), the first stable oxidation product of vitamin C, was transported by GLUT1 and GLUT3 in Xenopus laevis oocytes with transport rates similar to that of 2-deoxyglucose (2-DG), but due to inherent difficulties .. - Rumsey S, Kwon O, Xu G, Burant C, Simpson I, Levine M. Glucose transporter isoforms GLUT1 and GLUT3 transport dehydroascorbic acid. J Biol Chem. 1997;272:18982-9 pubmed..The apparent Km of DHA transport via GLUT1 and GLUT3 was 1.1 +/- 0.2 and 1.7 +/- 0.3 mM, respectively... - Colville C, Seatter M, Jess T, Gould G, Thomas H. Kinetic analysis of the liver-type (GLUT2) and brain-type (GLUT3) glucose transporters in Xenopus oocytes: substrate specificities and effects of transport inhibitors. Biochem J. 1993;290 (Pt 3):701-6 pubmed..We have expressed the human isoforms of the liver-type (GLUT2) and brain-type (GLUT3) facilitative glucose transporters in oocytes from Xenopus laevis via injection of in vitro transcribed mRNA... - Simpson I, Chundu K, Davies Hill T, Honer W, Davies P. Decreased concentrations of GLUT1 and GLUT3 glucose transporters in the brains of patients with Alzheimer's disease. Ann Neurol. 1994;35:546-51 pubmed..We measured the concentrations of two glucose transporters, GLUT1 and GLUT3, in six regions of brains from both control subjects and patients with Alzheimer's disease... - Miranda P, Allaire A, Sosnik J, Visconti P. Localization of low-density detergent-resistant membrane proteins in intact and acrosome-reacted mouse sperm. Biol Reprod. 2009;80:897-904 pubmed publisher..Taking into consideration the requirement of the acrosome reaction for sperm to become fusogenic, these results suggest that membrane raft dynamics may have a role in sperm-egg membrane interaction. .. - Vavaiya K, Paranjape S, Briski K.
Testicular regulation of neuronal glucose and monocarboxylate transporter gene expression profiles in CNS metabolic sensing sites during acute and recurrent insulin-induced hypoglycemia. J Mol Neurosci. 2007;31:37-46 pubmed..In the intact male rat DVC, MCT2, GLUT3, and GLUT4 gene expression was not altered by acute hypoglycemia but was enhanced by RIIH... - Bouche C, Serdy S, Kahn C, Goldfine A. The cellular fate of glucose and its relevance in type 2 diabetes. Endocr Rev. 2004;25:807-30 pubmed..This review is focused on the cellular fate of glucose and relevance to human type 2 diabetes. .. - Yu S, Fan M, Zhao T, Ding A, Wang F. [CoCl2-induced enhancement of glucose transport activity in mediating hypoxic tolerance in cultured hippocampal neurons]. Sheng Li Xue Bao. 2002;54:508-12 pubmed..showed that the 2-deoxy-D-[1-(3)H]glucose uptake rate and the mRNA expressions of glucose transporters (GLUT1 and GLUT3) in the hippocampal neurons were significantly increased after a 24-hour pretreatment with CoCl(2)... - Stuart C, Wen G, Williamson M, Jiang J, Gilkison C, Blackwell S, et al. Altered GLUT1 and GLUT3 gene expression and subcellular redistribution of GLUT4 protein in muscle from patients with acanthosis nigricans and severe insulin resistance. Metabolism. 2001;50:771-7 pubmed..Glucose uptake into muscle cells in the fasting state is mediated primarily by GLUT1 and GLUT3 glucose transporters, whereas postprandial (insulin-stimulated) and exercise-related increments in muscle glucose .. - Leroy C, Pierre K, Simpson I, Pellerin L, Vannucci S, Nehlig A. Temporal changes in mRNA expression of the brain nutrient transporters in the lithium-pilocarpine model of epilepsy in the immature and adult rat. Neurobiol Dis.
2011;43:588-97 pubmed publisher..studies of cerebral metabolic changes, here we explored the expression of transporters for glucose (GLUT1 and GLUT3) and monocarboxylates (MCT1 and MCT2) during and after status epilepticus (SE) induced by lithium-pilocarpine in .. - Deng D, Sun P, Yan C, Ke M, Jiang X, Xiong L, et al. Molecular basis of ligand recognition and transport by glucose transporters. Nature. 2015;526:391-6 pubmed publisher..Using lipidic cubic phase crystallization and microfocus X-ray diffraction, we determined the structure of human GLUT3 in complex with D-glucose at 1.5 Å resolution in an outward-occluded conformation... - Fattoretti P, Bertoni Freddari C, Casoli T, Di Stefano G, Solazzi M, Giorgetti B. Decreased expression of glucose transport protein (Glut3) in aging and vitamin E deficiency. Ann N Y Acad Sci. 2002;973:293-6 pubmed - Sato Y, Ito T, Udaka N, Kanisawa M, Noguchi Y, Cushman S, et al. Immunohistochemical localization of facilitated-diffusion glucose transporters in rat pancreatic islets. Tissue Cell. 1996;28:637-43 pubmed..By these methods, GLUT2 immunostaining was observed on the lateral membranes of pancreatic beta-cells, whereas GLUT3 immunoreaction was predominantly localized in the cytoplasm of beta-cells and was not found in alpha-cells...
http://www.labome.org/gene/rat/glut3-25551.html
As almost 75% of adults experience difficulty digesting lactose, many seniors find it challenging to get enough calcium. If your elderly loved one is lactose intolerant, a lack of dairy in his or her diet may mean he or she has, or will develop, a calcium deficiency. To help ensure that your senior loved one gets enough calcium every day, the senior care experts at Home Care Assistance in Naples suggest incorporating the following foods into his or her diet. 1. Collard Greens One cup of collard greens provides 350 milligrams of bone-strengthening calcium. Collard greens can be a little difficult to eat raw and are best served cooked or lightly steamed. To make a simple side dish, stir washed and dried collard greens in olive oil over medium-high heat with chopped garlic for three to four minutes. Add salt and pepper to taste. For a boost of protein and a more filling dish, add cooked and shredded ham at the end. 2. Canned Sardines and Salmon Eating calcium-rich foods has been shown to help prevent osteoporosis in seniors. Canned fish with bones, such as sardines and salmon, can provide up to 350 milligrams of calcium per serving. As they are also full of omega-3 fatty acids, serving sardines or lox on crackers or bagels is a fast way to incorporate essential nutrients into an easy snack. 3. White Beans Besides supporting bone health, calcium is responsible for the normal clotting of the blood and the conduction of nerve impulses. Most beans have some amount of calcium, but white beans have the most at 240 milligrams per cup. For a tasty way to incorporate white beans into your loved one’s meal plan, have an hourly or 24-hour caregiver in Naples make a simple soup by cooking the beans, veggies, olive oil, and chicken broth over medium-high heat until the ingredients are soft. 4. Soymilk and Almond Milk Non-dairy milks are full of calcium and vitamin B12, helping elderly bodies regulate hormones and enzyme production.
As one cup of soymilk or almond milk provides 200 milligrams of the daily-recommended 1,200 milligrams of calcium for seniors, a bowl of cereal or a simple smoothie can be a healthy choice. For a midafternoon snack, blend one-half cup of soymilk, frozen banana, frozen strawberries, vanilla, and honey in the blender and serve immediately. If you’re looking for ways to help promote a healthy senior diet, turn to Home Care Assistance. Our professional caregivers are expertly trained to help promote healthy senior habits by assisting with grocery shopping, meal preparation, and safely executing exercise routines. We also offer a variety of in-home care services, including dementia, Parkinson’s, and Alzheimer’s care in Naples, for seniors with chronic physical and cognitive conditions. To learn more about how our care services may benefit your elderly loved one, give a Care Manager a call at (239) 449-4701 and schedule a free in-home consultation.
https://www.homecareassistancenaples.com/calcium-rich-dairy-alternatives/
I'm incredibly grateful to have contributed the following publications, university lectures, and international conference presentations, and to have collaborated with some of the top thought leaders in the field, from the United States to Australia. Areas include (a) mindfulness, (b) performance, (c) client-consultant relationships, (d) consultant best practices, (e) athletic injury, (f) teaching sport and performance psychology, and (g) ethics. Please contact me for more information about any item. Mannion, J., & Andersen, M. B. (2016). Interpersonal mindfulness for athletic coaches and other performance professionals. In A. Baltzell (Ed.), Mindfulness and performance. Cambridge, England: Cambridge University Press. Mannion, J., & Andersen, M. (2015). Mindfulness, therapeutic relationships, and neuroscience in applied exercise psychology. In M. B. Andersen & S. Hanrahan (Eds.), Doing exercise psychology. Champaign, IL: Human Kinetics. Andersen, M. B., & Mannion, J. (2011). If you meet the Buddha on the football field – tackle him! In D. Gilbourne & M. B. Andersen (Eds.), Critical essays in applied sport psychology (pp. 173-192). Champaign, IL: Human Kinetics. Baltzell, A., Gardner, F., Haberl, P., Mannion, J., Moore, Z., & Newcomer-Appaneal, R. (2016, September 30). [Working title] Mindful relating: The techniqueless technique. Panelist and lecturer for a featured presentation, 5 in 5: Mindfulness in Practice, at the annual conference of the Association for Applied Sport Psychology (Proceedings), Phoenix, AZ, USA. Mannion, J. (2016, June 28). Good music doesn’t have an expiration date: Mindfulness as an instrument for enhanced performance & well-being. Lecture presented to the Los Angeles County Psychological Association’s (LACPA) Sport & Performance Psychology Special Interest Group, Los Angeles, CA, USA. Mannion, J. (2014, March 10). What kind of car do you drive?: Mindfulness as a vehicle of learning in educational alliances. 
Grant-contingent faculty workshop presented at the Center for Excellence in Teaching & Learning, Fontbonne University, St. Louis, MO, USA. Mannion, J. (2012, October 10). Sport psychology: Past, present, & future. Lecture presented in Essential Concepts in Health & Fitness (HES 119), Department of Human Environmental Sciences, Fontbonne University, St. Louis, MO, USA. Mannion, J. (2012, March 8). Panel of psychology professionals. Panelist for the Psychology Club at Fontbonne University, St. Louis, MO, USA. Mannion, J. (2011, April 19). When it hurts too bad to play: The psychology of athletic injury rehabilitation. Lecture presented in Introduction to Sports Injuries (PRO CR 11), Department of Kinesiology & Athletics, Santa Monica College, Santa Monica, CA, USA, via Skype videoconference. Mannion, J. (2011, April 11). Sport psychology, interpersonal neurobiology, & neuromarketing. Lecture presented in Media Psychology (MC 449), Department of Mass Communications, Southern Illinois University - Edwardsville, IL, USA. Mannion, J. (2010, September 27). The immen$e power of branding psychology: Underpinnings and ethics. Lecture presented in Fundamentals of Advertising (MC 325), Department of Mass Communications, Southern Illinois University - Edwardsville, IL, USA. Mannion, J. (2014, October 16). It's not brain surgery: Using neuroscience to inform practice, educate clients, & expand sport psychology service delivery to executives. Workshop presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 175), Las Vegas, NV, USA. Mannion, J., & Day, C. (2013, October 5). What kind of car do you drive?: The relationship as a vehicle of positive change in collaborative alliances. Workshop presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 152), New Orleans, LA, USA. Steffen, W., Woolsey, C., Mannion, J., Spradley, B., Evans, M. W., Beck, N., & Williams, R. (2013, October 4). 
Using evidence-based mindfulness programs to improve disordered eating: Understanding emotional and binge eating among athletes. Poster presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, pp. 99-100), New Orleans, LA, USA. Andersen, M. B., Tremayne, P., Zizzi, S., & Mannion, J. (2011, September 22). Presence, attunement, and resonance times three: Mindfulness for athletes, practitioners, and supervisors. Workshop presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 96), Honolulu, HI, USA. Mannion, J., & Woolsey, C. (2010, October 28). Teaching sport psychology in interdisciplinary environments: Challenges, opportunities, applications, and outreach. Workshop presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 172), Providence, RI, USA. Divin, A. L., Woolsey, C., Mannion, J., & Hale, W. D. (2010, October 30). Is stress always bad for athletes’ health behaviors? Perception is the key. Poster presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 82), Providence, RI, USA. Mannion, J. (2009, September 17). “I was an athlete, too”: Sport psychologist self-disclosure in the service of the therapeutic relationship. Workshop presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 122), Salt Lake City, UT, USA. Mannion, J. (2008, September 27). Raising healthy child-athletes: The “good-enough” parent and coach. Lecture presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 39), St. Louis, MO, USA. Mannion, J., Blakeley, M., Ahmed, R., Mansour, E., & Mansour, A. (2008, September 27). The torture and diaspora of Iraqi athletes under Saddam Hussein’s regime. Symposium presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, p. 22), St. Louis, MO, USA. Mannion, J.
(2008, September 25). The rise and fall of Uday Hussein as head of Iraqi sport: A psychosocial exploration. Lecture presented at the annual conference of the Association for Applied Sport Psychology (Proceedings, pp. 26-27), St. Louis, MO, USA. Mannion, J. (2002, October 20). Homophobia in sport: A conference conversation. Workshop presented at the national conference of the Australian College of Clinical Psychologists, Australian Institute of Sport, Canberra, ACT, Australia. Abstract published in proceedings. Mannion, J. (2001, October 4). The stress-injury relationship: Anecdotes, research, and interventions. Paper presented at the national conference of the Australian College of Clinical Psychologists, Australian Institute of Sport, Canberra, ACT, Australia. Abstract published in proceedings. Mannion, J. (2001, October 3 – 6). The predictive utility of competitive trait anxiety, social support, and daily hassles for athletic injury. Poster presented at the annual conference of the Association for the Advancement of Applied Sport Psychology, Orlando, FL, USA. Mannion, J. (In progress). There is no ‘I’ in ‘me’: Buddhist psychology concepts of the self and athletic identity. Mannion, J., & Blakeley, M. (In progress). A paper examining psychosocial aspects of the torture and diaspora of Iraqi athletes under Uday Hussein, including issues of post-traumatic stress. Mannion, J. (2007, March). Raising healthy child athletes: The “good-enough” coach and parent. Swim Bike Run St. Louis, 20. Mannion, J. (2006, November). When it hurts too bad to play: The psychology of injury and rehab. Swim Bike Run St. Louis, 10-11. Mannion, J. (2006, September). Performance anxiety and arousal. Swim Bike Run St. Louis, 10-11. Mannion, J. (2006, August). Sport psychology: What it is, what it isn’t, and how it may be helpful. Swim Bike Run St. Louis, 20.
http://www.allworldperformance.com/allworldperf/publications.jsp
Overall purpose of role - The role holder will support the ER and Change strategic delivery model for COO and Functions. - The role holder will advise on the interpretation and application of ER policy, consultation strategies and management of the relationship with elected union and employee representatives for their business area. - The role holder will apply ER judgement to complex ER cases. - The role holder will be required to manage change programmes across COO and Functions. - Support and drive the HR workstream for change programmes such as restructuring, LEE transfers, outsourcing, insourcing and new site set-up across COO and Functions and across our global locations, leveraging in-country HRBP, ER and other specialist HR functions as required. Key Accountabilities Change Management - Develop appropriate consultation/negotiation strategies to facilitate major business change initiatives - Ensure consultation decks are fit for purpose and submitted within required timeframes - Lead consultation with Unite/Employee Forum and provide regular updates throughout the project life cycle - Conduct due diligence including analysis of impacted populations - Support the preparation of project communications, 1:1 collateral and other templates Case Management - Provide advice on high-risk and complex cases including Whistleblowing issues and complaints MI Insight and Analytics - Interpret and analyse data from ER Direct and other sources to provide insights from DC&G trends to key stakeholders Upskilling and Training - Build and maintain specialist knowledge of Employment Law and new Policy and ensure key stakeholders are informed of these changes - Train and develop relevant managers/HR in the methodologies, techniques and best practice of Change Management - Provide specialist advice to line managers on complex matters that arise during the change process.
- Develop and deliver workshops and course support to upskill line managers, Client Aligned HRBPs and Employee Relations Change Advisors Stakeholder Management The role holder will become a trusted subject matter expert, establishing and maintaining ER change stakeholder relationships with the business, other Centres of Excellence and HRBPs Decision Making and Problem Solving The role requires extensive analytical and investigative skills to provide advice on change projects and complex cases. The role holder will need to be aware of the various consultation and reporting cycles and be able to gather and interrogate data for ER reports. A mature, pragmatic approach to problem solving is needed, using all inputs available. Risk and Control Objective Ensure that all activities and duties are carried out in full compliance with regulatory requirements, the Enterprise Wide Risk Management Framework and internal Policies and Policy Standards Person Specification Essential Skills/Basic Qualifications: - Experience of leading large-scale/multiple change programmes and managing the people implications of in/outsourcing, restructuring, offshoring, mergers and acquisitions, disposals and RIFs - Practical generalist experience gained in an HR function advising line managers on a day-to-day basis on complex ER issues in a commercial and pragmatic way - Possess an understanding of issues affecting Employee Relations internally, e.g.
legal restrictions, union issues, etc. - Experience of leading Union/works council consultations - A good understanding of current employment legislation - Project management, change management and stakeholder management skills Desirable skills/Preferred Qualifications: - Experience of delivering change programmes outside the UK - Experience of working in a highly matrixed organisation - Solid appreciation of the importance of recognising cultural and geographic sensitivities - Excellent analytical and data skills, including intermediate use of Excel - Effective communication skills to ensure customers understand our message and the actions/recommendations that should be followed to resolve an issue - Ability to influence stakeholders - Logical decision-making, effective listening, clarity of thought and utmost professionalism The SR Group (UK) Limited is acting as an Employment Agency in relation to this vacancy.
https://www.frazerjones.com.au/job/employee-relations-advisor/
By Margaret Morales & Gretchen Kroeger In my last post I wrote about the Stockholm Convention, the treaty for the international regulation of persistent organic pollutants (POPs). The United States remains one of a handful of countries that has not ratified the treaty. One key reason for this is that several US laws stand in contradiction to parts of the treaty. Specifically, the safety standards employed by the Toxic Substances Control Act (TSCA) and the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) are very different from the safety standard of the Stockholm treaty. Both of these acts employ a cost-benefit analysis to weigh the adverse effects of chemicals, taking into account the financial costs of any potential regulation. TSCA in particular uses what it terms an ‘unreasonable risk’ standard to determine which chemicals require regulation. Often this type of safety standard results in a conservative approach to chemical testing, as testing is usually undertaken only when there is already significant evidence that a chemical poses significant risk to human or environmental health. Many activist groups have spoken out against a safety standard based on ‘unreasonable risk,’ citing the EPA’s inability to ban asbestos under this standard, despite its known health risks. In contrast, the Stockholm Convention uses a ‘health-based’ standard to determine risks from POPs. This standard relies only on health information and does not explicitly weigh this information against other social priorities, such as economic benefits. Under this standard a chemical is only considered safe to use when there is “reasonable certainty of no harm to human health or the environment.” The health-based safety standard also requires attention to all uses and sources of exposure to a chemical, particularly for vulnerable populations such as children, or those who are at risk of disproportionately high exposure. In fact, this past fall, Dr.
Heather Stapleton (Project 2 and the Analytical Chemistry Core) was invited to the UN Environment Program’s Persistent Organic Pollutant Committee’s annual meeting in Italy, where she provided scientific testimony on the debromination of decaBDE, a flame retardant chemical used in electronics and textiles. Debromination of decaBDE to lower-brominated congeners increases the compound’s risks to the environment and human health. The committee subsequently approved a risk profile for decaBDE and plans to draft a risk management evaluation. Which standard we choose to employ when weighing the risks of a chemical brings up some ethical questions. Last year HBCDs were added to the list of POPs regulated under the Stockholm Convention, and the amendment came into effect this past November. HBCDs are a class of brominated flame retardants used in a variety of products, and their impact on human and environmental health has been a topic of hot discussion lately, particularly for their potential to act as endocrine disruptors and neurotoxins. Despite these discussions, HBCD production continues in the United States. Why? There are multiple factors that play into decision-making regarding this chemical. For instance, there are different lenses applied for weighing the risks and benefits of a chemical. HBCD does come with a variety of benefits. Beyond the economic benefits it offers the building industry, and its use in reducing fire hazards, some argue that HBCD may help cut greenhouse gas emissions. This is because of one of its main uses – polystyrene insulation, an insulation favored by green builders for its effectiveness in reducing home energy expenditures. Each of these safety standards represents a different paradigm of how we measure and understand the adverse effects of chemicals.
If you are interested in finding out more about proposed legislation to change the safety standards currently employed by the TSCA and FIFRA, read more about the Safe Chemicals Act of 2013, and the Chemical Safety Improvement Act.
https://sites.nicholas.duke.edu/superfund/safety-standards-and-weighing-risks/
Emotions can be a veritable minefield—they can be our greatest friend or our worst enemy. Some have a way of sneaking up on you when you least expect it whilst others like to mark out a comfortable spot in your psyche and settle in for the long haul. If we are talking about positive emotions such as joy or excitement, we tend to welcome them in with open arms and an open-ended invite. Yet if these emotions fall on the negative side, such as anxiety or anger, they are firmly told their name isn’t on the list and they’re not getting in. The problem with this approach is that it’s impossible to shut out negative emotions yet ride off into the sunset with the positive ones. They just cannot be cherry-picked. These negative nasties are part of our emotional makeup. You can’t outrun them and you can’t hide from them. Instead of being held hostage at their mercy each time they appear, perhaps a shift in perspective is needed. In trying to understand their purpose and learning how to release them healthily, we can develop a better relationship with them so that they show up as overnight guests instead of moving in for good. Changing your emotional behavior is never easy, but here’s how to control your emotions. How to Control Your Emotions Jill and Sarah are best friends. They do everything together, but they are also very different. Jill is constantly strung out; the smallest mishap will send her into a state of frustration, stress, and shouting. Everything around her affects her: the traffic, long queues, the mean colleague. Her mood and happiness are directly influenced daily by what is happening around her. Sarah, on the other hand, doesn’t let small things get to her. She decides how she wants to feel and is constantly much happier than Jill. What is the difference? Your Choice Controlling emotions is very much a question of choice. Do you want to, or not? 
So much has been written about emotions and how to deal with them effectively, yet many people can’t control this area of life. Why? Effectively managing emotions is actually like developing a skill or a habit. It is a way of doing something better, and as humans, we struggle with change the most. Changing the way you usually do something is hard, and it is even more difficult when it comes to emotions. When we are feeling “emotional,” the last thing we want to do is calm down and try to deal with the situation proactively; we most often want to rant about what is upsetting us. If we understand a little more about how our emotions work, we are in a much better position to use this information to our advantage. Learning how to control your emotions can be one of the best skills you will ever develop in your life. Your emotions lead to the actions you take and, therefore, create the life you are experiencing now—every part of it. Our emotional part of the brain, the limbic system, is one of the oldest parts when compared, for example, to our prefrontal cortex, which is our ‘thinking’ part. The average person’s emotional part of the brain is over six billion times more active than the prefrontal cortex. Because our emotional part is so old and is an extremely strong part of the brain, it is understandable that it feels like our emotions occasionally control us. The point is, your emotions will naturally hijack your thinking—this is a given—but there are still ways to deal with this. To keep things simple, let’s look at what you can do to flip this situation around. Ignoring emotions, suppressing them, or not dealing with them will come back to bite you! Stress and anxiety come from suppressed emotions, so if you think that dealing with your emotions by ignoring them is going to work, you are sorely wrong. Here are 7 simple steps to control your emotions effectively. 1. 
Awareness If you are not aware of the times when you are overly emotional or overreacting, how can you try to manage it? It is impossible. It is of vital importance to understand that emotions are simply energy. Thanks to the late great David Hawkins, MD, Ph.D., we have proof that emotions have measurable energy and can either foster or negate actual cell life. If we refuse to deal with them, they find a hiding place deep inside our bodies. Stuffing our feelings down by pretending they don’t exist or lying to ourselves simply prolongs the process. Instead, when you notice a negative emotion coming in, try to actually sit with it for a while. It is no easy feat and can be incredibly intense initially. Notice which part of your body it’s affecting, and then name the emotion. Saying out loud, “I’m feeling anxious right now,” can loosen its grip. 2. Discover the ‘Why’ of Your Emotions Once you have identified how you are feeling, you want to discover why you are feeling it. What is causing this feeling inside you? Of course, there could be a million reasons, and to find out you have to ask yourself, as you would a friend, “What is wrong? What is causing me to feel this way?” Your mind will always look for an answer. Most of the time, simply the way you are thinking about the situation is causing you to feel the way you do. Another huge reason why we feel negative emotions is that our values are not being respected in that moment. Remember: discover the ‘why.’ 3. Ask Yourself, “What Is the Solution?” Once you have discovered why, what can you do to take back control? Sometimes, you might need to change your thoughts about the situation. The next time you’re feeling anxious, depressed, or overwhelmed, try to reframe the situation and look at it from a slightly different angle. You’ll likely end up in a better psychological place if you do. Reframing is one skill that can truly change the way you think.
Your thoughts lead directly to your feelings; so if you are feeling bad, you most likely have a negative thought making you feel that way. If you consider the situation from a different angle, you will begin to feel better immediately. What you focus on expands! Sometimes, understanding why you feel a certain way in the moment can cause your emotions to diminish. This is because understanding always leads to calmness. 4. Choose How You Want to React This is the hardest part. The way we react and manage our emotions is a habit. Haven’t you noticed that people who get stressed out about everything literally freak out at nothing? You almost feel sorry for them. They have created a habit of associating a situation they don’t like with “freaking out.” Their emotions have hijacked them. Learning to listen to your emotions, to identify, understand, and then choose them, isn’t something that you decide to practice twice a week at lunchtime. No, it takes continuous effort and discipline to build this essential skill. 5. Ritualize Your Mornings Have you ever gone to bed angry and woken up with that same anger burning a hole in your pillow? Or simply woken up on the wrong side of the bed, feeling wretched for no apparent reason? The first few moments of the morning can be among the most powerful. Whatever mood we climb out of bed in tends to cling to us all day. That’s why it can be incredibly powerful to have a morning ritual to cleanse the emotional palate. Rituals vary, but the following can be highly effective: Journaling Putting our thoughts out of our heads and onto paper can be incredibly cathartic, and stream-of-consciousness writing can be one of the most powerful ways to journal. Simply write out your thoughts as they stream through you, without review or judgment. Even if you have nothing to say, simply write “I have nothing to say” over and over again until another insight appears or until you fill the page.
Once you’ve filled your set number of pages, don’t review. This isn’t a diary; it’s a tool to purge negative emotions. Meditation Quieting the mind through the breath is simply one of the most powerful tools we have to release negative emotion, yet it is no easy endeavor. Find a quiet spot and sit quietly for ten to fifteen minutes, focusing on your breath or on a mantra (I find inhaling “Let” and exhaling “Go” to be simple yet effective). When we create a meditation practice, our monkey mind learns to settle and we become more in tune with ourselves, in turn leading to mindfulness. Being mindful creates more space between our thoughts so that we can be aware when negative emotions creep in, giving us the opportunity to nip them in the bud before they snowball and take over our day or week. 6. Massage for Relaxation and to Empty the Mind Have you ever intensely worked on something for a long period and then subconsciously reached up to massage the back of your neck? Massage decreases anxiety and pressure and revives both the body and the mind, providing more clarity and focus. Because it benefits both the mind and the body, it promotes a feeling of well-being, thus improving a bad mood. 7. Exorcise Through Exercise As human beings, we are designed to move. Exercise is good for us not only from a physical perspective but also from a mental perspective. The hormones released when our hearts are pumping and our bodies are moving can change our mindset. Some researchers suspect exercise alleviates chronic depression by increasing serotonin (the neurotransmitter targeted by antidepressants) or brain-derived neurotrophic factor (which supports the growth of neurons). Another theory suggests exercise helps by normalizing sleep, which is known to have protective effects on the brain. The very act of going for a walk, a run, or attacking a set of weights forces the mind to focus on the task at hand.
Next time something enrages you, instead of reaching for a bag of chips or numbing out in front of the TV, get physical. Even if it means dancing around your room like a maniac to some gangster rap music (a personal favorite of mine), you'll feel a heck of a lot better and possibly have a laugh at just how ridiculous you look. Final Thoughts Do you control your emotions, or do they control and direct you? It's not easy, and that is why so many people make no effort or give up. But once you can control your emotions, life changes for you in more ways than you ever dreamed possible. Not only will you feel far more empowered and in control in life, but you will be happier and much healthier, as you won't be stressed or weighed down so often. TL;DR Don't have time for the full article? Read this. Controlling emotions is very much a question of choice. Do you want to, or not? Effectively managing emotions is like developing a skill or a habit. To control your emotions effectively, you need self-awareness. Then discover the 'why' of your emotions and ask yourself 'what is the solution?' You should choose how you want to react to your emotions. Some good ways to manage your emotions include ritualizing your mornings, massage for relaxation, and exorcising through exercise. Controlling emotions is not easy, and that is why so many people either make no effort or give up. But once you can control your emotions, life changes for you in more ways than you ever dreamed possible.
https://www.lifehack.org/articles/communication/how-control-your-emotions-effectively.html
As we welcome March, I'm sure the break you took over the holidays seems like it was a long time ago. Everyone is due for a break again, or at least needs to find a way to get some relaxation. It's normal for business owners to feel stress; in fact, it's almost a requirement. But stress can get in the way of your best productivity, so every day you should take a few minutes to just relax. Since there never seems to be enough time in the day, here are a few relaxation techniques that will only take a few minutes each. (Disclaimer: This is not medical advice and should not be taken as such. This is merely a set of techniques that have worked for other people in the past.) 1. Belly Breathing Some people call this belly breathing, and I like that name because it perfectly describes how you want to breathe. According to Scotland's National Health Information Service, "…taking a few slow, regular breaths can help you control anxious thoughts and feelings, and make you feel calmer." To belly breathe, place one hand on your chest and your other hand on your stomach. Inhale slowly through your nose and keep an eye on your hands as you do. You want the hand on your stomach to rise and the one on your chest to stay as still as possible. Exhale slowly through your mouth and you should see the hand on your stomach fall and the one on your chest hold still. Sometimes, belly breathing works best if you're lying down. Do this ten times, about twice a day. 2. Quick Muscle Relaxation Another great quick relaxation technique has to do with your muscles. Sit in a comfortable chair, close your eyes and concentrate on your breathing. Breathe in through your nose and out through your mouth. Then make a fist and squeeze your hand as tightly as you can. Hold it for a few seconds and then slowly open your fingers. You should feel the tension leaving your body. Similarly, you can tense different muscles in your body and then slowly release the tension for the same result. 3.
Three Minute Relaxation Technique This mini relaxation exercise came from the Harvard Health website. While sitting, do a quick body scan for tension. "Relax your facial muscles and allow your jaw to open slightly. Let your shoulders drop and your arms fall to your sides. Allow your hands to loosen so there are spaces between your fingers. Uncross your legs or ankles. Feel your thighs sink into your chair, letting your legs fall comfortably apart. Feel your shins and calves become heavier and your feet grow roots into the floor. Now breathe in slowly and breathe out slowly." 4. Mini Vacation If you don't have time for a full vacation, a mini-vacation will probably do for the moment. Think of a spot where you feel at peace, such as the beach during sunrise or out fishing on a lake. Close your eyes and imagine you're there. What do you see? Hear? Feel? Is there a gentle breeze on your face? Do you hear the gentle lapping of the water? Can you feel the warm sun? 5. Longhand Writing Suffolk University in Boston, MA offered this out-of-the-box idea for students. "Pick an interesting sentence and write it out in longhand very slowly, making sure that every letter looks just the way you want it to look." 6. Slow Down Another good option is to simply slow down. Take the time to relish the moment that you're in. Listen to what's happening around you. What can you feel? Smell? Taste? Slow down and enjoy the moment without thinking about anything other than the moment. WebMD says, "When you spend time in the moment and focus on your senses, you should feel less tense." 7. LOL When you laugh out loud, you're forcing your body to lower its level of cortisol, a hormone linked to stress. Laughing also increases endorphins in your brain, which lightens your mood. Take a few minutes to watch your favorite comedian or cue up The Daily Show recap and laugh as loudly and as long as you want. 8.
Rock Out to Mellow Out If you crank up your favorite karaoke mix and sing along at the top of your lungs, you're bound to let off some steam and feel better. If you have to worry about staying quiet for the people around you, soothing music can also do the trick. According to WebMD, "Research shows that listening to soothing music can lower blood pressure, heart rate, and anxiety." No matter which you choose, you're bound to be in a better mood when you're done. 9. Transformations If you're the kind of person who finds visualization helpful, then transformations might be the best option for you. The Black Dog Institute in Australia says to think of something like tightly twisted ropes and imagine them untwisting. Another good visualization: think of hard, cold wax and imagine it softening and melting. 10. The Four S's Kansas State University says a great way to feel less stressed is to act less stressed. The university suggests following the four S's: Smile, Slacken, Sag, Smooth. Smile and make your smile sparkle. Take a deep breath and then let your jaw slacken. Allow your shoulders to sag. Smooth out your forehead. Hope this post has given you some ideas on how to relax. For some tips on how to boost your energy, be sure to check out this previous post.
https://intentionallyinspirational.com/2020/02/29/10-quick-relaxation-techniques/
By Sasha Pei-Silovo – EM TV Online A recently developed workshop strategy to combat gender-based violence in Papua New Guinea has been launched with the goal of encouraging active stakeholder participation throughout the provinces, at all levels. This is to drive provinces to work towards eliminating gender-based violence by putting preventative measures into effect, enabling [and ensuring] equal access to justice and assistance for victims, and by efficiently and effectively creating monitoring systems against which gender-based violence services and activities can be measured and kept on track. The national gender-based violence workshop initiative is the outcome of a key stakeholder partnership between the Department for Community Development and the Family and Sexual Violence Action Committee (FSVAC) – a sectoral committee of the Consultative Implementation and Monitoring Council (CIMC) funded by the Department of Foreign Affairs and Trade (DFAT) – with the support of the United Nations Development Programme of PNG and the United Nations Gender Task Team. Violence against women in PNG remains an intolerable and exceptionally high challenge: two out of three women have experienced some form of violence in their lives. Gender-based violence and discrimination are obstacles to women's empowerment and development, and they directly affect the realisation and fulfilment of the rights of women and girls to a safe, healthy and violence-free PNG. Gender-based violence continues to have negative impacts on society, hindering the positive growth of the country. Compared to males, females are underrepresented at all levels of government, have less access to education and healthcare, and are often oppressed, degraded and (most times) made to suffer in silence. The national gender-based violence workshop initiative will enable participants to review statistics and collated data.
The workshops will also allow for thorough analysis of intervention methods developed by the government, development and private sector agencies, civil society organisations and other stakeholders involved in the fight against gender-based violence in Papua New Guinea. Capitalising on the progress of women's rights in the country, particularly the passing of the Family Protection Bill in 2013, the nationwide workshops are a step forward for key agencies implementing objectives for addressing gender-based violence. While acknowledging the Family Protection Bill, it is also understood that there is much yet to be done at the community, district, provincial and national levels in ensuring that appropriate measures are taken to implement protection methods and procedures for women in Papua New Guinea. With high rates of violence against women in PNG, the task may seem extremely challenging and disheartening to some, but agencies actively involved in the fight against gender-based violence are determined to achieve positive outcomes at the workshops and to strengthen their reach, embedding the message in the minds, hearts and lives of the people of this nation.
https://emtv.com.pg/gender-based-violence-workshops-launched-nationwide/
As the coronavirus continues to bring the world to its knees, Shakespeare Birthplace Trust researcher Holly Kelsey looks at a yet more deadly disease, the plague – an outbreak of which coincided with a new monarch in 1603. It too caused social isolation, and it affected Stratford-upon-Avon's citizens hugely during Shakespeare's day. The year 1603 in Shakespeare's England was certainly, as the title of Thomas Dekker's bestselling pamphlet wryly put it, The Wonderfull Yeare. On 24th March, shockwaves reverberated through the country at the news that Elizabeth I had died. In the absence of an heir, the title passed to James VI of Scotland. Yet this was to be no easy transition. James had barely got comfortable on his new English throne before a devastating outbreak of plague swept through London and the surrounding countryside. This was to be one of the deadliest instances of plague in England's history, eventually claiming around a quarter of London's population. Among James' first actions as English monarch was to issue a book of orders relating to the plague outbreak, outlining rules and procedures to be followed in an attempt to stop the spread of the disease and to aid those suffering from it. There's a fantastic copy of the Orders for Plague in the Shakespeare Birthplace Trust library. The first half of the text is dedicated to physical orders enforced to try to control the plague in London and the surrounding areas. Houses were 'to be closed up' for six weeks if one of the inhabitants fell ill, and the sick were encouraged to be "restrained from resorting into company of others" for fear of spreading infection. If they did leave the house, they were to mark their clothes so as to warn others of their disease – they could be overseen by watchmen, and breaking these orders could be punished by a spell in the stocks. Moreover, "clothes, bedding and other stuffe as hath been worne and occupied by the infected of this disease" were collected and burnt.
But James also took measures to ensure the sick would not lose everything: he ordered that collections should be made in order to support those who were locked in their houses and to replace their possessions. The focus of the book then shifts to provide an illuminating insight into early modern medicine. In the second half, it prints several preventative and remedial cures recommended by physicians which were designed to be put up in public, ensuring even the poorest members of society had access to them ‘without great charge or cost’. These range from correcting the humours through purging and bloodletting, to herbal remedies. There are also some more intriguing treatments. For example, pregnant women were advised to shield themselves from plague by eating toast covered in vinegar, butter, and cinnamon, whilst the poor, who may not be able to afford vinegar and cinnamon, were told they “may eate bread and butter alone” because butter was seen as a “preservative against the Plague”. Those who were already suffering from sores could try and ease them with a warmed mixture of onions, butter and garlic, or if your cupboard was bare, you could try simply laying “a load of bread to it [the sore] hot as it commeth out of the oven”! Although these cures seem bizarre to us, people believed in them and would try anything which might protect them from this dangerous disease. Even when out and about, people were advised to hold herbs in their hands (the same they were burning to clean the air in their homes, such as rosemary, juniper, bay leaves, frankincense, sage, and lavender), or breathe through a handkerchief dipped in vinegar – an early alternative to a medical face mask! 
Thomas Dekker described the depressing sight of London's streets strewn with ineffective dead herbs, lying alongside the sick and dying: "where all the pauement should in stead of greene rushes, be strewed with blasted Rosemary: withered Hyacinthes, fatall Cipresse and Ewe, thickly mingled with heapes of dead mens bones". Whilst Dekker clearly wrote vividly about "the diseased Citie" of London, another playwright (with whom Dekker may have collaborated on the play Sir Thomas More) was also unable to avoid writing about the pervasive plague. In 1603 Shakespeare's acting company, formerly named The Lord Chamberlain's Men under Elizabeth I, became The King's Men under James I. However, their performances were infrequent – theatres had been closed for almost a year due to fears that plague would spread through the crowds. It was around this time (1603-4) that Shakespeare probably wrote Measure for Measure. The play is set in an unruly Vienna heavily afflicted with disease – perhaps inspired by the disruption Shakespeare witnessed around him in plague-besieged London. Pestilence and the playwright Plague was a frequent and devastating occurrence in England throughout Shakespeare's lifetime. Those who contracted it could suffer from fevers, delirium, and painful plague sores, with a survival rate of just 50 per cent. In 1564, the year Shakespeare was born, plague claimed over 200 people in Stratford-upon-Avon, including four children on his very street. The ominously brief and simple statement "hic incepit pestis" ("here begins the plague") was written in the burial register at Holy Trinity Church on 11th July; Shakespeare had been baptised there less than three months before. As an adult, Shakespeare's world was no less threatened by this horrible disease.
His professional life was clearly influenced by its effects – for example, early in his career in 1592, an outbreak led to the closing of the theatres out of fears that their crowded conditions would cause it to spread. With reduced numbers of performances, Shakespeare spent this time writing poetry: Venus and Adonis and The Rape of Lucrece. Theatres were to close frequently during outbreaks throughout the whole of Shakespeare's career. However, pestilence is not so conspicuous within Shakespeare's works themselves – perhaps out of a desire to provide his audiences with light relief. Where plague does occur, it certainly brings with it destabilising consequences. In Romeo and Juliet (1595), an outbreak delays Friar Lawrence's messenger, meaning Romeo does not receive notice of Juliet's plan, sparking the chain of events that culminates in the deaths of the lovers. The messenger recounts: the searchers of the town, Suspecting that we both were in a house Where the infectious pestilence did reign, Sealed up the doors, and would not let us forth (Romeo and Juliet, Act 5, Scene 3, lines 8-11). The practice of confining people suspected to be afflicted with the plague was common, and was encouraged by orders passed by monarchs at the time (such as those issued by James I, above). Romeo and Juliet might also lay claim to the most famous disease mention in Shakespeare – Mercutio's dying rally of "a plague on both your houses!". Yet in the first quarto version of the play, the word "plague" was instead "pox", referring to the (equally dangerous) disease smallpox. Plague is once more strikingly used as an insult in King Lear (1606), in which Lear laments of his daughter Goneril: thou art a boil, A plague-sore, an embossed carbuncle, In my corrupted blood (King Lear, Act 2, Scene 4, lines 218-20). A different dramatic invocation of plague is used by the eponymous Timon in Timon of Athens.
Here, writing some time before or perhaps at the start of a renewed 1606 outbreak of plague, Shakespeare explicitly nods to its power. In an extended speech in Act 4, scene 3 (lines 109-127), Timon instructs Alcibiades to be like the deadly disease and spare nobody when he ransacks Athens: Be as a planetary plague, when Jove Will o’er some high-viced city hang his poison In the sick air: let not thy sword skip one The image of the plague being a remorseless force attacking a city was not uncommon – as in Dekker’s Wonderfull Yeare, mentioned previously. In a particularly vivid passage, Dekker describes the plague as if it were a soldier mounting “the siege of the Citie” of London: “here the Canons (like their great Bells) roard: the Plague tooke sore paines for a breach; he laid about him cruelly, ere he could get it, but at length he and his tiranous band entred”. Timon goes on to elaborate that Alcibiades should be indiscriminate – the plague does not even take pity on the elderly, the very young, “priests”, and “mothers, maids nor babes”. The persistent presence and threat of this fearsome disease should not be forgotten when considering Shakespeare’s life and works.
https://www.stratford-herald.com/108023-1603-shakespeares-england-terrible-sickness.html
Somewhat delayed, this is a double-barrel summary to which two different #ELTchatters contributed at different times. The first part was kindly contributed by Patrick Andrews, alias @PatrickAndrews, and the second by David Petrie, alias @teflgeek, on his blog TEFLGEEK! Thanks Patrick and David!!! The discussion started with definitions of discourse. @MarisaC stated that it was language at a text rather than a sentence level, and @PatrickAndrews added that discourse also applies to patterns of spoken interaction (e.g. how you tell a story or structure a presentation). @esolcourses suggested that Scott Thornbury's book provides a useful definition. Then the discussion moved to problems, and @dreadnought001 raised the issue of how learners can communicate if they lack sufficient vocabulary, also pointing out the need to know more than 90% of the lexis to be able to understand a text. @MarisaC suggested that perhaps too much focus on form could prevent a focus on discourse, and added that some grammar books and coursebooks only presented language in short chunks. @esolcourses mentioned the problem of some tests, and @barbsaka agreed that many tests did not focus enough on language in context. @PatrickAndrews argued that discourse plays a big role in EAP teaching and also suggested that CAE might have a discourse element in what it tests. @Yohimar thought that digital coursebooks could help, and @harrisonmike suggested the use of authentic materials. @MarisaC emphasised the role of reading for recognition of discourse patterns and gave the link http://t.co/vwAtVxUS for ideas, and also http://t.co/jIvqPf47 for ideas on teaching politeness from a discourse perspective. It was agreed that students do not necessarily need a conscious knowledge of discourse but that teachers do. Recommendations were made for good books, including Cook, McCarthy, Pople and Thornbury. How can we focus language teaching more at discourse level rather than sentence level?
The first #eltchat of 2012 attempted to answer this question! I wasn't actually there and didn't take part in the chat, and I'm still not quite sure how I've ended up writing the summary, except that Marisa_C possesses remarkable powers of persuasion and, as someone who teaches higher levels, this is an area of interest! Hopefully this captures the key points, but I'm not a "discourse specialist", so feel free to point out any errors or omissions. I haven't cited individual contributors, but the transcript is available if you'd like to know who said what. The initial question assumes that discourse works at a higher level than merely the sentence, though the Wikipedia entry relates discourse analysis to "approaches to analyzing written, spoken or signed language use or any significant semiotic event" – which I interpret broadly as meaning "if something attempts to convey meaning, it can be analysed to see how it does so". A more accessible overview of discourse suggests that discourse analysts are concerned with "the construction of meaning throughout a text". (It should be pointed out here that the word "text" is used to mean a linguistic event rather than a written document.) Thus discourse can apply to patterns of interaction, "text" structures, communication events, and language within a text – usually occurring within a context of authentic language use. There are no set "rules" of discourse per se, because discourse examines everything and the rules change depending on the context. Receptively, learners simply may not know enough vocabulary to access texts effectively – to fully understand a text, learners need to be able to recognize 95% of the vocabulary used in it (Laufer, 1989). Additionally, the mechanics of textual cohesion devices like referencing, linking expressions and paragraphing need to be understood. Receptive knowledge of these devices also forms part of language tests like FCE, CAE, CPE and IELTS, and of fields like EAP.
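Lexical coverage of the kind Laufer measures is straightforward to estimate in principle. The sketch below is an illustration only — the function name and the toy word list are mine, not from the chat, and real coverage studies count word families rather than raw lowercased tokens, which changes the numbers considerably:

```python
def lexical_coverage(text_tokens, known_words):
    """Fraction of running words in a text that the learner recognizes."""
    known = sum(1 for token in text_tokens if token.lower() in known_words)
    return known / len(text_tokens)

tokens = "The cat sat on the mat".split()
learner_vocab = {"the", "cat", "on", "mat"}  # hypothetical learner vocabulary
print(lexical_coverage(tokens, learner_vocab))  # 5 of 6 tokens known, ~0.83
```

On Laufer's 95% threshold, a text scoring 0.83 like this toy example would be well below the level needed for full comprehension.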
Often these tests also require learners to demonstrate productive knowledge of these devices in structured, genre-specific writing tasks. While genre is an aspect of discourse, genre familiarity is a separate issue for learners to grapple with. Where learners are preparing for a language test, classes tend to become very test focused, very accuracy focused and very form focused – developing a test dependency that can be difficult to move away from. This may account for the amount of language teaching conducted at the sentence level within test preparation classes, though this is not ideal. It isn't helped by the general trend within published ELT materials for decontextualized, fragmented, sentence-based language presentations. Grammar teaching in particular tends to be conducted at the level of the sentence, examining items in isolation and without reference to a wider context. The natural fluidity of language would seem to militate against this. Problem? Solution! – what a bunch of Hoey! The simplest response to the issue of isolated sentence-based grammar teaching would appear to be simply to teach grammar in a wider context and to make learners aware of functional aspects of language and their use – aspects of Speech Act study (which is only possible in context). This could be facilitated by more use of authentic materials or by use of digital coursebooks (this latter point wasn't fully expanded upon – I'm intrigued and would welcome comments!). The other key suggestion is to move learners from receptive awareness of discourse patterns, for example making them aware of such patterns as they occur in listening and reading tasks, through to productive acts that feature and practice the target discourse structures. This would seem to favour a product approach to writing – exposing learners to a model text before asking them to produce something based on that model.
There is often a reluctance amongst learners to "do writing" in class, but while instruction could take place in class, the actual practice of the writing skill need not. In conclusion – Do Learners need Discourse Analysis? A good question – do learners really need discourse analysis skills, or is it just the teachers who do? There was a general consensus that the main goal is to have learners working with and using "real language", which would seem to take us back to using authentic materials as part of the input process, both to serve for language development and provision of exponents, and to raise awareness of discourse structures and patterns as they arise in the target texts. Teachers therefore need training in discourse analysis so that they can effectively instruct the learners and evaluate published materials more critically. Thus they can help the learners not only to look at language performance but also to reflect on the language they encounter, and to think about aspects of discourse such as audience and purpose – to be aware of the patterns rather than actually conduct a discourse analysis. "Words from Context: of Logic & Connections", Constantinides, 2010. "What percentage of lexis is essential for comprehension?", Laufer, in: C. Lauren & M. Nordman (Eds.), Special Language: From Humans Thinking to Thinking Machines (pp. 69-75). Clevedon, UK: Multilingual Matters. "An Introduction to Text and Discourse Analysis", Pople, Nelson Thornes, 1998. "Coherence & Cohesion", Thornbury, onestopenglish.com (see link). The Communication Spotlight coursebook series (ABAX) was also recommended as containing a focus on discourse features for work with learners. ABAX makes a range of support materials available as free PDF and MP3 downloads, which might be worth investigating.
An apology on behalf of #eltchat – Raquel_EFL appeared to make a large contribution to the chat, with people responding with phrases like "brilliant" and "Good point!!!", but unfortunately, for some reason, these contributions didn't show up in the transcript and I fear have been lost to history…. #eltchat takes place on Twitter every Wednesday at 12 noon and 9.00pm London time. Simply sign in or sign up to Twitter and search for the hashtag #eltchat. For more information, check out the website. Editor's note – Raquel's tweets did not come through on our archiver, I'm afraid… sorry!
http://eltchat.org/wordpress/summary/how-can-we-focus-language-teaching-more-at-discourse-level-rather-than-sentence-level-eltchat-summary-04012012/
I am inspired by a quote from the artist Perle Fine (1905-1988): "I like to light up a canvas with colour; I like to make it shout or whisper; I like to make it spin...or make forms melt softly over the whole picture". I love using line and color as my primary tools of expression. Intense color is one attribute that all of my artwork has in common. I paint in either oil or acrylic, depending on the picture I am painting. My work shows movement and energy. I use color to establish a balance that provides transformation and interface with the energy in my art. My pieces are created in many layers. I work so that something from each layer shows through, increasing the sense of multiple dimensions and depth. A vision or feeling may serve as my inspiration. Whether the finished piece is a highly realistic portrayal or an abstract, I strive to infuse my art with a sense of obscurity and splendor. I have been painting for 25 years and have enjoyed my journey. BIO Bern Merlo is a teacher by formal education and an artist by passion. She has experience painting en plein air with oil and acrylics on paper or canvas. Collectors of Bern's art are returning customers; many have multiple pictures in their collections. Purchased works are in private collections in California and Tennessee. Locally, her work has been purchased by collectors in Lansing, West Bloomfield, and Detroit, to name a few. Bern Merlo works in artist-quality paint. Being an abstract artist is a trip that is traveled every day. A song, a thought, a conversation or a picture can be the impetus for the journey through abstraction.
EXHIBITIONS/SHOWS/COMMISSIONS Group Exhibitions: • Two James Brewery, Detroit, MI • Madonna University, Livonia, MI • University of Michigan-Dearborn, MI • The Scarab Club-Gold Medal Exhibition • Starring Gallery, Northville, MI • Huron Valley Council for the Arts, Highland, MI • Detroit Biennale, MONA, Armada, MI • Chelsea River Gallery/Rackham Graduate School, Ann Arbor, MI (2) • Affirmations: Pittman-Puckett Gallery, Ferndale, MI (2) • Wyandotte Council for the Arts, Wyandotte, MI • The Loft, Mt. Clemens, MI • The Anton Gallery, Mt. Clemens, MI • The Gallery of Royal Oak, Royal Oak, MI • Art Prize, Grand Rapids, MI • Royal Oak Library, Royal Oak, MI Solo Shows • Huron Valley Art Association, January 2017 • Saugatuck Artist Collective, Saugatuck, MI • Aubergine Gallery, Milford, MI • Weber's, Livonia, MI EDUCATION My training in art has been the result of attending workshops with the following artists:
https://embracecreatives.com/artist/bernmerlo/
Air pollution is an environmental health problem endemic in all our cities (10.02.2011). We must distinguish between acute health effects (appearing within a few hours of breathing polluted air) and long-term, or chronic, effects. Acute effects include an increased probability of death or of emergency admission for respiratory and cardiovascular causes for each increase in particle levels (about 2.8% per 10 micrograms/m3, according to the most recent Barcelona study by Perez et al. 2009). This would mean that in a city like Barcelona, with about 50 deaths per day, an increase of 10 micrograms/m3 in particle levels would result in roughly one additional death per day (the average particle level in Barcelona is around 50 micrograms/m3). While these figures cannot be dismissed at the population level, the individual risk is very small and should not cause alarm. For none of the acute effects of air pollution (cardiovascular death, myocardial infarction, exacerbation of COPD, reduced lung function, effects on cardiac function) has it been possible to detect a no-effect threshold. In all cases, any level of exposure carries an increased risk that follows the dose-response curve. This means that every increase in pollution levels produces a greater response, whatever the starting level of pollution. Current legislation sets warning and alert levels at which urgent action must be taken. In health terms, the recommendations of international agencies (for example, http://www.airfobep.org/docs/avisCSHPF1.pdf) for warning or alert situations focus on preparing the most heavily used health services for a potential increase in cases; in general, the plans do not recommend containment measures or changes in the behaviour of the population. These levels have not been reached in recent days, according to data from the air quality networks of the competent authorities.
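The arithmetic behind that estimate can be checked directly. The sketch below uses only the figures quoted above (a ~2.8% rise in daily mortality per 10 micrograms/m3 of particles, applied to ~50 baseline deaths per day); the function name and the linear scaling are illustrative assumptions for this back-of-the-envelope check, not CREAL's actual risk model:

```python
def excess_deaths_per_day(baseline_deaths, increase_ug_m3, risk_per_10ug=0.028):
    """Extra daily deaths implied by a linear risk increase per 10 micrograms/m3."""
    return baseline_deaths * risk_per_10ug * (increase_ug_m3 / 10)

# A 10 microgram/m3 rise in a city with ~50 deaths per day:
print(excess_deaths_per_day(50, 10))  # 1.4, i.e. roughly one additional death per day
```

The same function shows why the individual risk stays small: 1.4 extra deaths are spread over a city of more than a million inhabitants.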
Although it may seem surprising, the health effects of isolated episodes are only a small part of the cumulative effects of exposure over the years, most of which are chronic or long-term and have been shown to reduce life expectancy in the most exposed populations. For CREAL, the most important lesson is the health gain that would occur if pollution were reduced (Report). A reduction of 10 micrograms/m3 in particle levels has recently been shown to lead to an increase of half a year in life expectancy (in US areas where it occurred). This points to the importance of taking measures to reduce urban pollution every day of the year, and not just during episodes of air stagnation. * This is an official communication of CREAL. The center's researchers made no further statements about it, pending further conclusions from scientific studies.
https://www.isglobal.org/en/-/air-pollution-is-an-environmental-health-problem-endemic-in-all-our-cities?inheritRedirect=true
The volume of transactions in the South Korean digital currency market briefly exceeded the daily average transaction amount of the country’s stock market on Sunday. According to data from CoinMarketCap, the combined 24-hour volume of major South Korea-based crypto exchanges UPbit, Bithumb, Coinone, and Korbit was more than $14.6 billion on Sunday. On Friday, the same metric was roughly $14.5 billion on the Korea Composite Stock Price Index, or KOSPI, and $10 billion on the Korean Securities Dealers Automated Quotations, or KOSDAQ.
https://cufeed.com/reported-volume-of-top-south-korean-crypto-exchanges-surpasses-that-of-the-countrys-stock-market-by-cointelegraph/
Netflix dominated Monday as the 2021 Academy Awards nominations were announced, with Mank leading the pack with 10 nominations as female directors made history by earning multiple nominations in the Best Director category for the first time. Key Facts: Directors Emerald Fennell (Promising Young Woman) and Chloe Zhao (Nomadland) were both nominated for Best Director, and Zhao could be the first woman of color in history to take home the award if she wins; Zhao and Minari director Lee Isaac Chung also made history as the most filmmakers of Asian descent ever nominated for directing in a single year, while actor Steven Yeun (Minari) became the first Asian American nominated for Best Actor in a Leading Role—one of a number of actors of color who were nominated this year, with Variety noting the acting categories are the most diverse in the awards’ history. Mank, The Father, Judas and the Black Messiah, Minari, Nomadland, Promising Young Woman, Sound of Metal and The Trial of the Chicago 7 were nominated for Best Picture. Netflix was a big winner in the nominations: In addition to Mank, Da 5 Bloods earned one nomination, The Trial of the Chicago 7 earned six nominations and Ma Rainey’s Black Bottom earned five, including a posthumous leading actor nomination for actor Chadwick Boseman. Other actors to earn nominations include Riz Ahmed (Sound of Metal), Anthony Hopkins (The Father) and Gary Oldman (Mank) for Best Leading Actor—with Ahmed becoming the first Muslim-American actor ever nominated in the category—and Viola Davis (Ma Rainey's Black Bottom), Andra Day (The United States vs. Billie Holiday), Vanessa Kirby (Pieces of a Woman), Frances McDormand (Nomadland) and Carey Mulligan (Promising Young Woman) in the leading actress category.
Among the more notable snubs were Da 5 Bloods and Ma Rainey being shut out of the Best Picture race; Aaron Sorkin and Regina King getting overlooked for Best Director (for The Trial of the Chicago 7 and One Night in Miami, respectively); and Jodie Foster losing out on a Best Supporting Actress nomination for The Mauritanian after winning the Golden Globe for her portrayal. Big Number: 76. That’s the number of nominations women earned this year—honoring 70 women in total—a record for the awards. Surprising Fact: With her nomination Monday, Davis is now the most-nominated Black actress in the awards’ history. Key Background: The Academy Awards ceremony will take place on April 25, after being pushed back by two months amid the Covid-19 pandemic. The ceremony will take place at both the Dolby Theatre in Hollywood, where it’s traditionally held, and at Los Angeles’ Union Station. The awards ceremony comes as movie theaters have largely remained shuttered throughout the past year—including until just this month in major box office hubs New York City and Los Angeles—and the Academy of Motion Picture Arts and Sciences temporarily changed its eligibility rules to allow films that had debuted on streaming rather than requiring a theatrical release. In response to concerns in recent years about the lack of diversity in the iconic awards, the Academy also established new diversity rules for the Oscars in September, which require films to meet certain criteria in order to be considered for Best Picture, though those rules won’t go into effect until 2022. What We Don’t Know: What the ceremony will look like. No host has been announced yet, and it’s unclear what form the two-location ceremony will take. It also remains to be seen how many people will tune into the broadcast, after ratings for the Golden Globes plummeted to a new low for that awards ceremony, with just 6.9 million viewers, down from 18.3 million last year.
Further Reading: Live: ‘Mank’ Leads Oscar Nominations With 10 (Forbes); 2021 Academy Awards Nominations (Oscars.org). I am a New York-based journalist covering breaking news at Forbes. I previously covered politics and news for Vanity Fair and Mic, and as a theater critic I serve as a member of the New York Outer Critics Circle. Follow me on Twitter @alisond64 or get in touch at [email protected].
https://www.forbes.com/sites/alisondurkee/2021/03/15/oscar-nominations-2021-netflix-leads-academy-awards-noms-as-female-directors-make-history/?sh=7ff2ebda1fe3
While radiation detection systems exist that are sensitive to low levels of gamma or neutron radiation, well-shielded radiation sources (for instance, special nuclear materials (SNM) such as uranium or plutonium) are difficult to detect by conventional means. A solution to this problem might be provided by highly penetrating cosmic ray muons. Dense, high-Z materials like uranium, plutonium, and lead scatter muons at larger angles than more common (i.e., lower-Z) materials. Charged particle tracking detectors placed around an object of interest can be used to measure the deflection of the trajectories of naturally occurring muons, and consequently image the contents of the object (similar in concept to a medical CAT scan, but without needing a man-made radiation source). The goal of the CRIPT project was to design, construct, and test cost-effective muon tracking and spectrometer systems capable of detecting, in a practical length of time, the presence of SNM or dense shielding that might conceal radiological sources. This technology can also be used to image actinides in nuclear waste containers and account for spent nuclear fuel. Small-scale muon tracking prototypes were tested in summer 2010; a full-scale prototype system (consisting of approximately 50 square meters of muon detectors) was completed in late 2012. Several institutions and organizations across Canada have been working towards this goal. The Carleton University CRIPT group initially performed detailed simulations of the response of different drift chamber designs to cosmic ray muons, and built a prototype drift chamber detector. We have since created a full-scale prototype of the proposed detector, and have been testing it since late 2012. This project delivered a prototype muon tomography system capable of detecting SNM or dense shielding in air-cargo-sized containers.
The system will be further tested by AECL to characterize the ability of the system to identify the content of nuclear waste.
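The Z-dependence that makes this imaging possible can be sketched with the standard Highland approximation for the RMS multiple-Coulomb-scattering angle. This is a minimal illustration, not CRIPT's reconstruction code: the 3 GeV/c muon momentum and 10 cm thickness are assumed illustrative values, and the radiation lengths are standard tabulated figures.

```python
import math

# Highland approximation for the RMS multiple-scattering angle:
#   theta0 = (13.6 MeV / (beta*c*p)) * sqrt(x/X0) * (1 + 0.038 * ln(x/X0))
# where x/X0 is the thickness in radiation lengths.

X0_CM = {"uranium": 0.317, "lead": 0.561, "iron": 1.76, "water": 36.1}  # tabulated values

def rms_scatter_mrad(material, thickness_cm, p_mev=3000.0, beta=1.0):
    """RMS scattering angle (mrad) for a muon of momentum p_mev traversing thickness_cm."""
    t = thickness_cm / X0_CM[material]  # thickness in radiation lengths
    return 1000 * (13.6 / (beta * p_mev)) * math.sqrt(t) * (1 + 0.038 * math.log(t))

for m in ("uranium", "lead", "iron", "water"):
    print(m, round(rms_scatter_mrad(m, 10.0), 1), "mrad")
```

For a 3 GeV/c muon crossing 10 cm, this gives roughly 29 mrad for uranium versus about 2 mrad for water: the order-of-magnitude contrast that muon tomography detectors exploit.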
http://physics.carleton.ca/cript/about-cript
Between the 1870s and 1900, Africa faced European imperialist aggression, diplomatic pressures, military invasions, and eventual conquest and colonization. At the same time, African societies put up various forms of resistance against the attempt to colonize their countries and impose foreign domination. By the early twentieth century, however, much of Africa, except Ethiopia and Liberia, had been colonized by European powers. The European imperialist push into Africa was motivated by three main factors: economic, political, and social. It developed in the nineteenth century following the collapse of the profitability of the slave trade, its abolition and suppression, as well as the expansion of the European capitalist Industrial Revolution. The imperatives of capitalist industrialization—including the demand for assured sources of raw materials, the search for guaranteed markets and profitable investment outlets—spurred the European scramble and the partition and eventual conquest of Africa. Thus the primary motivation for European intrusion was economic. But other factors played an important role in the process. The political impetus derived from the impact of inter-European power struggles and competition for preeminence. Britain, France, Germany, Belgium, Italy, Portugal, and Spain were competing for power within European power politics. One way to demonstrate national preeminence was through the acquisition of territories around the world, including Africa. The social factor was the third major element. As a result of industrialization, major social problems grew in Europe: unemployment, poverty, homelessness, social displacement from rural areas, and so on. These social problems developed partly because not all people could be absorbed by the new capitalist industries. One way to resolve this problem was to acquire colonies and export this "surplus population."
This led to the establishment of settler colonies in Algeria, Tunisia, South Africa, Namibia, Angola, Mozambique, and central African areas like Zimbabwe and Zambia. Eventually the overriding economic factors led to the colonization of other parts of Africa. Thus it was the interplay of these economic, political, and social factors and forces that led to the scramble for Africa and the frenzied attempts by European commercial, military, and political agents to declare and establish a stake in different parts of the continent through inter-imperialist commercial competition, the declaration of exclusive claims to particular territories for trade, the imposition of tariffs against other European traders, and claims to exclusive control of waterways and commercial routes in different parts of Africa. This scramble was so intense that there were fears it could lead to inter-imperialist conflicts and even wars. To prevent this, the German chancellor Otto von Bismarck convened a diplomatic summit of European powers: the famous Berlin West African Conference (more generally known as the Berlin Conference), held from November 1884 to February 1885. The conference produced a treaty known as the Berlin Act, with provisions to guide the conduct of the European inter-imperialist competition in Africa. This treaty, drawn up without African participation, provided the basis for the subsequent partition, invasion, and colonization of Africa by various European powers.
https://www.mikemcguigan.com/race-to-colonize.html
For most people in the world, the majority of life is spent indoors. In industrialised countries across Europe, the USA and Canada, and even “outdoor-lifestyle” Australia, at least 90% of people's time is estimated to be spent in the indoor environment. Definitions of “indoor” are not consistent, but the term can be taken to cover any place occupied for at least one hour by people of varying health, including the home, office/workplace, school, hospital, and shops; occupation of these spaces dominates daily life in the industrialised world.

Environmental and health protection of the atmospheric environment across the world has focused largely on the outdoor environment, including large-scale climate change considerations and localised urban air quality. Regulation of indoor air quality (IAQ) has developed in parts of the USA in its own right, but is often approached indirectly, for example through workplace health & safety (H&S) regulation in combination with regional air quality plans. Across many countries, it is largely through ensuring H&S compliance for industrial and public indoor spaces that IAQ is assessed and improvement needs are recognised.

Sources of indoor air pollution

Indoor pollutants can usefully be categorised as particulate or gaseous. These pollutants may be generated indoors or can ingress from the outdoor environment. On average, we breathe between 20 and 30 kg of air daily. This is significant relative to other forms of ingestion (food and drink amounting to 3-4 kg daily). Although the human body naturally filters and expels much of the inhaled air without absorbing many of its components, these protective measures are not entirely efficient: particles can enter, lodge and remain within the bronchial tree and alveolar sacs, where toxic components, including suspected carcinogens, can be slowly absorbed, while other irritant or allergenic health effects can occur and harmful gases (e.g. carbon monoxide, radon) can be absorbed into the blood-stream.

Particulate pollutants include those anthropogenically generated, such as cooking smoke, environmental tobacco smoke (ETS) and fossil-fuel combustion products (e.g. particulate carbon PM10 and PM2.5, Figure 1), natural dusts, and a wide range of biological and microbiological sources such as dust mite wastes, pollen, spores, bacteria and viruses. Clean air contains about 10^8 particles/m3, whilst rural air, urban air and tobacco smoke zones contain about 10^9, 10^11 and 10^14 particles/m3 respectively. The health effects of particulates are as wide-ranging as the varied chemical compositions of the particles themselves. Asthma, a condition in which the bronchi spasm, constrict and produce excess mucus, can be triggered by a number of particulate stimuli including ETS, pollen and dust mites. Carbonaceous particulates from diesel combustion and components in ETS are strongly linked with carcinogenic effects. Industrial air pollutants, including lead, mercury and asbestos, all have associated World Health Organisation (WHO) guidelines in recognition of their toxicity.

Gaseous pollutants of IAQ concern also have varied anthropogenic and natural sources. In offices across much of the world, various volatile organic compounds (VOCs) are prevalent, derived from treatments, paints and varnishes applied to soft furnishings and furniture. Some are irritants whilst others are potentially much more harmful, notably those based on aromatic and polyaromatic carbon (benzene) ring structures. Combustion products from commercial and industrial boilers and other appliances can include carbon monoxide if burners are not properly maintained. Ambient (outdoor) air quality can also affect the indoor environment. Important gaseous pollutants generated externally include sulphur dioxide and nitrogen oxides (acidic irritants), aeolian-transported pesticides and tropospheric ozone.
Ozone is a secondary pollutant formed through photochemical reactions involving vehicular emissions; rural environments downwind of urban ozone sources can be severely impacted, as certain ozone-destructive mechanisms based on NOx cycling will not necessarily occur as intensively outside the urban environment. Of the naturally occurring gaseous IAQ pollutants, radon is probably the most closely studied. Generated during the radioactive decay of naturally occurring uranium in rocks, radon gas can migrate into indoor spaces and build up to potentially harmful levels unless adequate abatement or ventilation measures are used. Radon itself decays with a half-life of 3.8 days (emitting alpha particles) and generates particulate daughter products (including polonium, bismuth and lead isotopes) which can be inhaled directly or whilst attached electrostatically to other indoor particulates.

Indoor air exchanges at a rate dependent upon infiltration (seepage of air through gaps, cracks and joints in buildings and floors) and natural and mechanical ventilation. Natural ventilation draughts through windows and doors are a function of pressure gradients caused by indoor-outdoor air temperature differences, and accordingly can vary over time. Although ingress of outdoor pollutants is undesirable, the indoor environment requires a certain degree of exchange to avoid the build-up of pollutants generated in situ. For industry, active air exchange enhancement may prove essential to prevent the build-up of harmful levels of gaseous or particulate contaminants in the presence of large quantities or high concentrations of manufacturing precursors or products.

Strategies to mitigate IAQ problems

Traditionally, infiltration and natural ventilation have been used worldwide to effect a controlled exchange of indoor air and abate IAQ problems.
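The 3.8-day radon half-life mentioned above translates into a simple exponential decay fraction. A minimal sketch (the function name is ours):

```python
# Fraction of a radon sample remaining after a given number of days,
# using the ~3.8-day half-life quoted in the text.

def fraction_remaining(days, half_life_days=3.8):
    return 0.5 ** (days / half_life_days)

print(round(fraction_remaining(3.8), 3))   # 0.5  (one half-life)
print(round(fraction_remaining(7.6), 3))   # 0.25 (two half-lives)
```

The short half-life means indoor radon levels track the ongoing supply from the ground: without continued ingress, the gas itself decays away within weeks, but its particulate daughters remain a concern.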
However, in recent years in the industrialised world, conflicting energy efficiency considerations have resulted in an increase in building “tightness”, which adversely impacts infiltration mechanisms and can discourage extensive use of natural ventilation. Concurrently, the scientific health community has developed a greater understanding of the potential health impacts of poor IAQ, and policy makers have responded. One strategy to abate IAQ problems has been to minimise the sources of harmful substances where possible. For instance, fire-retarding and stain-resistant formulations have been applied to furnishings for both the commercial and domestic markets for many years. Awareness of the potential harm of many of these products has resulted in improved, lower-emitting and less harmful formulations being introduced. Benzene, formaldehyde, phenolic, polyaromatic and chlorinated compounds within formulations are carefully controlled, as their toxic or carcinogenic hazards have been scrutinised. Industrially, advances in health & safety atmospheric control, including COSHH and similar hazard assessments, together with better availability of and access to material safety data sheets, have improved user understanding and workplace risk assessment with respect to air quality. In Europe, the recently introduced REACH regulations (Registration, Evaluation, Authorisation and Restriction of Chemicals, 2007) ensure that harmful chemicals included in manufactured products and formulations brought in from outside the region are also regulated. Appliance combustion efficiencies are routinely covered by legislation to limit human exposure to carbon monoxide. Some countries have detailed IAQ objectives, such as period-defined maximum allowable concentrations for some of the most harmful substances, often adopting or exceeding WHO guidelines, although the method of legal application varies. For example, IAQ in the USA relies on initiatives from individual states (e.g.
Washington, California) using the United States Environmental Protection Agency (USEPA) and other guideline limits, whilst Australia has adopted National Health and Medical Research Council (1993) limits, and in Canada residential occupants are protected by Health & Welfare Canada (1987) guidelines. Elsewhere, in the UK, local air quality objectives and health & safety legislation form the major IAQ enforcement routes. Ingress of poor-quality outside air is clearly undesirable. In Europe, for example, much attention has focused on local air quality in urban areas. In the UK, local authorities are responsible for identifying areas with unacceptable air quality (based primarily on ambient NOx, ozone and PM10 concentrations) and implementing action plans in defined zones, which may include improved traffic management to reduce vehicular emissions. It is not uncommon for the worst local air quality problems to be around hospitals, shopping areas or public transport hubs, where large numbers of vehicles move slowly or idle whilst stationary. Geological mapping has been widely used to identify “at risk” areas for radon. Government schemes allow widespread ambient radiation testing across “risk” regions, and where trigger concentrations are indicated, householders can arrange for detailed radon assessment and install ventilation systems to abate the problem. Different governments' assessments of safe IAQ are demonstrated by the differing radon limits set by the USA, Canada and Australia (4 pCi/L (ca. 150 Bq/m3), 200 Bq/m3 and 800 Bq/m3 respectively). Pollutant source control, user awareness, local air quality controls and health & safety legislation are important strategies for controlling IAQ across many countries. Where problems still exist, improved mechanical ventilation is often the solution, and specific air quality problems can be reduced by installing active air treatment within heating, ventilation and air conditioning (HVAC) systems.
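The differing national radon limits quoted above can be put on a common footing with a one-line unit conversion: 1 pCi/L = 37 Bq/m3 exactly (since 1 Ci = 3.7e10 Bq and 1 m3 = 1000 L). A minimal sketch:

```python
# Convert radon concentration from picocuries per litre to becquerels
# per cubic metre: 1 pCi/L = 37 Bq/m3 exactly.

def pci_per_litre_to_bq_per_m3(pci_l):
    return pci_l * 37.0

print(pci_per_litre_to_bq_per_m3(4))  # 148.0 -- the US 4 pCi/L action level (~150 Bq/m3)
```

On this common scale, the US action level (about 148 Bq/m3) is stricter than Canada's 200 Bq/m3 and much stricter than Australia's 800 Bq/m3.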
The importance of maintaining good-quality indoor air is recognised particularly in the commercial and educational sectors (to minimise sickness absence and maintain productivity) and in the medical care sector (for example, to combat airborne infection). Particle removal can be achieved through mechanical air filtration (including the range of HEPA, high-efficiency particulate air, filters, Figure 2) or by means of electronic air cleaners (including electrostatic precipitators). Filtration is achieved through a combination of physical mechanisms including straining, impaction, interception and diffusion. The process can be enhanced by electrostatic separation, through the incorporation of permanently charged plastic materials in filters. Electronic air cleaners operate differently, applying high voltages to charge dust, which is then attracted to oppositely charged plates on the cleaner. Such air cleaners can, however, generate ozone and airborne charged ultrafine particles, which raise their own IAQ concerns, so their applications need to be carefully considered. Gas (including odour) removal is not generally effective using standard particle filtration devices. Although some manufacturers have developed layered filter systems incorporating activated carbon to remove gases concurrently with some success, separate gas filtration systems will generally be more effective in sensitive applications. A range of thermal (ceramic) and PTFE/polypropylene gas filters is commercially available for specialised industrial applications such as chemical storage tank vent filtration and pathogen removal (legionella, etc.). Pathogen destruction can be achieved using UV lamps, with UV germicidal irradiation (UVGI) forming a supplementary option to filtration. However, irradiation time-doses may not be sufficient in many cases for the necessary effectiveness.
A promising approach to IAQ improvement is organismal and VOC destruction through photocatalytic oxidation (PCO). Here, pollutants are adsorbed onto a surface containing a titanium dioxide catalyst (possibly modified, for example with tungsten oxide). UV-C light activates the catalyst, which breaks organic pollutants down, nominally into carbon dioxide and water. Whilst the process is undeniably effective on a macro scale, some concerns remain as to whether PCO generates small quantities of undesirable by-products (e.g. formaldehyde), and the USEPA suggests that domestic PCO use may not treat the full range of indoor pollutants, although manufacturers are confident of their products' effectiveness and safety.

Filtration systems for IAQ

Mechanical filters come in a range of forms. Simple flat (panel) filters do not generally improve IAQ but serve to protect HVAC equipment. IAQ-effective filtration is achieved through more efficient pleated filters and extended-surface filters (which increase the depth and surface area of the filter), whilst electrostatically enhanced systems offer high performance in particle filtration. Pleated filters are generally manufactured from synthetic or cotton/polyester-blended fibres and, depending upon specific design, typically achieve MERV ratings in the range 6 to 13, whilst higher-efficiency mechanical filters achieve MERV 14-16. For the higher-efficiency particle filters, alternative rating systems to MERV can prove useful for discriminating performance. The atmospheric dust-spot test can be useful in determining the capture of particles as small as 0.3 μm. Under this test, many panel systems rate <5%, whilst a 40%-efficiency-rated system will capture pollen and dust well and a 90%-rated system is effective for ETS and combustion product removal.
For systems exceeding 98% efficiency on this test, further comparative refinement is possible using the finer-medium DOP smoke penetration test. Though effective at removing particles on passage through the filter (better than 80% of <2.5 μm fine particles for certain higher-efficiency filters), IAQ improvements after particle filtration can still be limited, as large particles tend to settle quickly within the indoor environment. These do not necessarily reach filter systems and can be repeatedly disturbed and dispersed into the indoor atmosphere. Furthermore, filter efficiency can vary over time. Many filters improve in capability upon use, as entrapped material reduces porosity, and the capture of fine particles can improve with age to a certain extent. In contrast, the effectiveness of electrostatically enhanced filters diminishes with time, as charged filter surfaces become neutralised by attached counter-charged particles. For example, in studies carried out by the EU (REC06, air filters for better IAQ), tested filters dropped from 80% to 20% removal efficiency after just a few weeks in operation. The excessive build-up of filtered material is undesirable, since the associated pressure drops across filters can impair effectiveness. The energy consumption of operating filters is a major cost in the overall life-cycle analysis (typically 60-80% of the energy use, including manufacture, maintenance, use and disposal), an important factor when energy prices are high. Excessive pressure loss can lead to large increases in energy consumption, where energy used (kWh) = (air flow (m3/s) x operating time (h) x mean pressure loss (Pa)) / (1000 x fan efficiency, expressed as a fraction). HEPA systems (MERV 17-20) may be required in specific circumstances. Hospitals (and other locations with sensitive occupants, such as children) may opt for HEPA systems to minimise the chance of pathogen transmission, whilst some specialised industries (e.g. nuclear) may need particularly effective emission abatement.
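The fan-energy relationship can be sketched as follows, taking air flow in m3/s and fan efficiency as a fraction (the common form of this equation); the numerical inputs are illustrative values, not figures from the article.

```python
# Fan energy consumption for an air filtration system:
#   energy (kWh) = flow (m3/s) * operating time (h) * mean pressure loss (Pa)
#                  / (1000 * fan efficiency)
# since flow (m3/s) * pressure (Pa) = power in watts.

def fan_energy_kwh(flow_m3_s, hours, pressure_loss_pa, fan_efficiency):
    return flow_m3_s * hours * pressure_loss_pa / (1000 * fan_efficiency)

# Illustrative: a 1 m3/s air handler running 6000 h/year against a
# 150 Pa mean pressure loss, with a 60%-efficient fan.
print(fan_energy_kwh(1.0, 6000, 150, 0.6))  # 1500.0 kWh/year
```

The linear dependence on mean pressure loss is why a clogged filter, which can double its pressure drop, directly inflates running costs.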
HEPA systems often use specialised filter materials. Patented minipleat designs offer exceptionally high surface areas, whilst other designs can combine high-quality particle removal (>99.9% of 0.12 μm particles is achievable) with specific gas treatment technologies, and manufacturers continue to develop a wide range of advanced filter materials. Other manufacturers produce biodegradable (e.g. polylactyl) material designs for air filtration based around PPE (face masks, etc.). Gas filtration can be effective in odour control and in the removal of important irritants from indoor air. Most development in this sector has focused on dry filtration through adsorptive media (Figure 3). Activated carbon media are effective for general-purpose treatment, including sulphidic compounds, many of which (hydrogen sulphide, mercaptans, etc.) cause odour problems for a variety of industries, whilst other advanced materials or blended sorbents can be more effective for VOC, ammonia or oxidative treatment (e.g. permanganate-impregnated alumina). Most media permanently adsorb pollutants, so a sufficiently slow air flow and humidity controls may be necessary to avoid gas break-through, and such media have a finite lifetime before becoming spent. Other media have been developed which chemically react with pollutants to generate less harmful products.

Conclusions

Most of us spend most of our time indoors: in homes, schools, the workplace, shopping malls or hospitals. The air we breathe can be degraded by a wide variety of contaminants: natural, synthetic, biological and inorganic. Governments across the world have begun to address IAQ problems through reduced use of harmful pollutants, improvement strategies for outdoor air quality, and improved recognition of radon-affected areas.
Legislative controls are implemented in a variety of ways, both direct and indirect. Energy efficiency measures have tended to improve building “tightness” and reduce air exchange via infiltration and natural ventilation. Mechanical ventilation has therefore been widely adopted to avoid harmful IAQ through pollutant build-up. Filtration of the air can remove both particles and gases, though few systems can do both effectively at the same time. Particle filters will improve IAQ only if they are of pleated or extended-area design, whilst the use of advanced media, including electrostatic enhancement, can improve filtration to a potentially very high degree (MERV 16) even for non-HEPA designs. Against filtration efficiency, energy costs, maintenance requirements, installation logistics and system noise need to be considered in choosing the best system for an application. Although other methods of particle destruction, including UVGI and PCO, are being developed and becoming available, air filtration is likely to remain the method of choice for IAQ improvement in future years. Gas filtration, primarily using dry scrubbing through granulated media, is optimally achieved through separate treatment systems.

Contact: Anthony Bennett is Technical Director at Clarity Authoring. Contact via www.clarityauthoring.com.
https://www.filtsep.com/content/features/strategies-and-technologies-controlling-indoor-air-quality
California Vehicle Code 22352—Prima Facie Speed Limit

There are certain speed limits California drivers must obey under particular circumstances or conditions when there is no posted speed limit. If you were traveling at a speed greater than the prima facie speed limit, which may be 15 miles per hour or 25 miles per hour depending on the location, an officer may cite you under this code section for speeding. The pertinent parts of CVC Section 22352 can be summarized as:

15 Miles per Hour

- You may not travel more than 15 miles per hour in the last 100 feet when approaching a railway crossing if the view is not clear for at least 400 feet in both directions. This only applies if there is no flagman and no clearly visible mechanical or electrical crossing signal. You may be unable to clearly see a crossing because of weather, lighting conditions, vegetation or parked vehicles that obstruct your view. This rule does not apply if the crossing has a clearly visible signal.
- In the last 100 feet before an intersection, you may not travel more than 15 miles per hour if you have no clear view of the intersection and all of the crossing highways, unless the intersection is controlled by an official traffic signal or equipped with stop signs or yield right-of-way signs. These are blind intersections, where no stop sign is present at a corner and you are unable to see 100 feet in either direction; your view may be blocked by vegetation or other vehicles. You are required to slow down when approaching such an intersection or you can be cited.
- You may only travel 15 miles per hour in an alleyway unless otherwise posted.

25 Miles per Hour

1. You may not travel more than 25 miles per hour on any highway, other than a state highway, in a business district or residence district, unless a local authority clearly posts a different limit.
2. Your speed may not exceed 25 miles per hour when approaching or passing a school building or grounds during school hours, including noon recess, and after school during the hours when students are leaving. Schools should have clearly visible posted “SCHOOL” warning signs, so that there is no argument about why your speed was greater than 25 miles per hour. This rule also applies when passing school grounds that are not separated from the highway by a fence or other physical barrier. School zones should have posted signs notifying drivers of the approaching educational facility.
3. Your speed may not exceed 25 miles per hour when passing a senior facility or senior center located on a street other than a state highway. These facilities will usually have a posted “SENIOR” warning sign.

Defenses to CVC 22352

Are there defenses to CVC 22352? There are always defenses to any traffic code violation, but it usually takes a traffic ticket expert from Ticket Snipers to obtain a dismissal. For example:

- You were traveling at the prima facie speed. Unless the officer had a radar gun on you, was pacing you, or was using VASCAR or laser (which present a whole different set of defenses), he or she was estimating your speed. If you were slowing when approaching a blind uncontrolled intersection or uncontrolled railway crossing, the officer would only be guessing at what your true speed was in the last 100 feet.
- You had a clear view of the intersection, highway crossing or railroad tracks for at least 400 feet as you approached.
- The school or senior facility or center was separated from the highway by a fence or other barrier.
- There were no children present and school was not in session when you were cited.
- There are errors on the ticket.
- Challenges to the method by which the officer detected your speed.

Consequences for Violating CVC 22352

Similar to other speeding violations, violation of the prima facie speed limit carries a tiered penalty schedule.
- 1-15 mph over the limit – $237 and up
- 16-25 mph over the limit – $367 and up
- 26+ mph over the limit – $489 and up

Regardless of the speed, a violation under this vehicle code will add a point to your driving record. If you receive a speeding citation, reach out to our knowledgeable team to get a better understanding of your options. You can also submit your citation for a free ticket review.
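The tiered schedule above is a simple threshold lookup. As an illustration only (the dollar figures are the base amounts quoted in this article; actual fines include added fees, vary by county, and change over time), it can be sketched as:

```python
def base_fine(mph_over):
    """Return the approximate base fine for a prima facie speed violation.

    The tiers and dollar amounts mirror the schedule quoted in this article;
    actual fines include added fees, vary by county, and change over time.
    """
    if mph_over <= 0:
        return 0          # no violation
    if mph_over <= 15:
        return 237        # 1-15 mph over the limit
    if mph_over <= 25:
        return 367        # 16-25 mph over the limit
    return 489            # 26+ mph over the limit

print(base_fine(12))  # 237
```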
https://www.ticketsnipers.com/ticket/cvc-22352-prima-facie-speed-limit/
About Responsible Investing

Responsible Investing is an umbrella term that refers to investors integrating non-financial elements, related to environmental, social, and governance (ESG) concerns, into their assessment of firm performance. Some investors, typically high-net-worth families and foundations, engage in what the Rockefeller Foundation calls "impact investing". Impact investors seek social or environmental impact in addition to some form of financial return, often making loans and taking equity positions in social entrepreneurs. This website highlights two investing studies Dr. McKnight is involved with: one ongoing study, Responsible Investing, and one archival study, Impact Investing.

Responsible Investing Research Team

Brent McKnight is an Associate Professor of Strategic Management at the DeGroote School of Business, McMaster University. Dr. McKnight studies how firms and organizations address complex societal problems. Currently, this interest has led him to undertake research in the area of responsible investing at the retail advisory level.

Benson Honig is the Teresa Cascioli Chair in Entrepreneurial Leadership, DeGroote School of Business, McMaster University. His research interests include business planning, nascent entrepreneurship, transnational entrepreneurship, ethics in scholarship, immigration and social entrepreneurship, social capital, and entrepreneurship in transition environments. He has published in leading academic journals (over 100 peer-reviewed articles) and serves on ten editorial boards.

Kevin Veenstra is an Assistant Professor of Accounting at the DeGroote School of Business, McMaster University. Dr. Veenstra studies the influence of social norms and personality traits on an assortment of phenomena, including career trajectories and corporate financial performance. His interest is driven by his fascination with how individual and group characteristics influence decision making and related outcomes.

Anjali Chawla is a Ph.D. student of organizational theory at the DeGroote School of Business, McMaster University. Her research interests include sustainability and social entrepreneurship.
https://responsibleinvesting.degroote.mcmaster.ca/
One of our clients in the mining sector is actively sourcing a Drill Rig Operator. The ideal candidate will be responsible for drilling blast holes to a blast design pattern using a mechanized drill. The responsibility will also include the drilling of service holes as well as the installation of welded mesh support with roof bolts/split sets by means of the drill rig, in accordance with a predetermined support pattern.

Key Responsibilities:
- Contribute to a culture of zero harm by adhering to all requirements laid down by legislation and by health, safety, environmental and security policies and procedures, and by the application of specific instructions to your own work area and task completion.
- Take over/hand over the shift from the off-going/on-coming drill rig operator, attend to any abnormalities first, and then report to the supervisor to receive further instructions.
- Perform visual inspections of equipment before and during operation according to departmental procedure to minimize breakdowns and operational losses through the timeous reporting of defects/abnormalities to the Supervisor.
- Operate equipment according to relevant mining requirements and Standard Operating Procedures to achieve daily drilling targets.
- Report and record relevant operational data as required by departmental procedures to compile production reports.
- Conduct efficient housekeeping of equipment and work area in accordance with departmental requirements.
- Perform all activities according to instructions, procedures and mining needs.
- Participate in cost-saving initiatives and adhere to financial procedures
- Assist the Engineering personnel with breakdown maintenance as well as planned maintenance activities
- Accountable for own workspace and activities

Minimum experience & qualifications required

Qualifications
- Grade 12 / N3 / equivalent NQF Level 4 qualification
- Competent B certificate

Experience required
- Minimum of 3 years' practical experience operating a double-boom drill rig in a trackless underground mining environment
- Sound knowledge and experience of:
https://jobs.eduspotsa.online/job/drill-rig-operator-2/
Construction workers often have to work around dangerous pieces of machinery and equipment, including atop scaffolding or ladders. They also have to use dangerous tools regularly, including jackhammers, drills, and saws, all of which can malfunction and lead to worker injuries. Injuries sustained in a construction accident can also be extremely debilitating and can prevent you from working for a long time. Construction workers can suffer broken bones, traumatic head and brain injuries, spinal cord and back injuries (including paralysis injuries), soft tissue injuries, and in some cases, fatal injuries that lead to a construction worker's untimely death.

Possible Claims for Construction Injuries

If you work in the construction industry and suffer injuries in an accident that occurs while you are working, you should assert a legal claim for workers' compensation benefits in your state. This is true regardless of what caused your accident and who was at fault. Potential benefits typically include medical coverage, compensation for a portion of your lost wages, and various types of disability benefits, among others. In many states that utilize a workers' compensation system, workers cannot sue their employers directly for injuries that they suffer while they are working on the job. Instead, they must utilize the workers' compensation system to seek benefits.

In addition to making a claim for workers' compensation benefits following a construction accident, however, you may assert one or more third-party claims against an at-fault party not related to your employer. For example, you could sue the manufacturer of a piece of construction equipment that malfunctioned and caused you to suffer an injury while working on the job. When it comes to workers' compensation and third-party personal injury cases, you will oftentimes be dealing with an insurance company that is not on your side, and that will do everything possible to protect its own interests and save itself money.
In workers' compensation cases, for example, some insurance companies might go so far as to claim that you faked your injuries and/or that your injuries were pre-existing (i.e., that you did not suffer your injuries while you were working on the job). Therefore, if you sustained injuries in a construction accident while on the job—and while you were working within the scope of your employment—you should consider retaining an experienced construction accident attorney near you to represent you in your legal matter. Your lawyer can determine if you are eligible to assert a workers' compensation claim against your employer and/or a third-party claim against another responsible party. If you are eligible, your lawyer can make the necessary claim or file the necessary lawsuit on your behalf and help you pursue the compensation that you deserve for your injuries.

Types of Construction Accidents That Can Lead to Serious Injuries

Numerous construction accidents can lead to serious injuries and other damages. Some of the most common accidents include:

- Construction equipment and machinery malfunctions – Construction workers often must work around equipment and machinery that are both dangerous and heavy. This equipment can also sometimes malfunction, causing a construction worker to sustain serious injuries or disfigurement. Workers may also suffer a crush injury when something goes wrong with a heavy piece of machinery.
- Construction vehicle collisions – Construction sites utilize various vehicles, including diggers, dump trucks, cement mixers, and other vehicles. Construction site vehicles, just like vehicles utilized on roadways, can be involved in an accident. The driver of the vehicle could negligently bring about a collision with another vehicle, a worker, or a stationary object at the construction site, causing serious and sometimes fatal injuries in the process.
- Electrocution accidents – When construction workers are around electrical wiring—such as those working on a house or another building being constructed from scratch—something can go wrong, resulting in a serious accident. In some instances, these accidents result from faulty wiring or careless workers. These electrocution accidents can also bring about serious injuries—most especially, burn injuries.
- Falls – Fall accidents are also extremely common on construction sites. Construction workers often have to work from heights, such as on top of buildings, scaffolding, and ladders. In addition, slippery surfaces abound on construction sites, and a worker could slip and fall to the ground. If a worker falls from a significant height, he or she could suffer serious injuries, such as broken bones and spinal cord injuries. In some instances, a fall accident could even lead to the accident victim's premature death.

If you suffered injuries in one of these accidents at a construction site, then you may be eligible to file a workers' compensation claim for benefits. Also, under certain circumstances, you might be eligible to file a third-party claim against an at-fault individual or entity. An experienced construction accident lawyer near you can investigate the circumstances of your accident and determine your legal options. If you are eligible to file a workers' compensation claim and/or a lawsuit, your attorney will help you pursue the full benefits and monetary compensation that you deserve for the injuries you suffered at your construction job.

Eligibility to File a Workers' Compensation Claim After a Construction Accident

Eligibility to file a workers' compensation claim after a construction accident varies from jurisdiction to jurisdiction. In most states, the workers' compensation system is a no-fault system.
This means that you can pursue and recover workers' compensation benefits without regard to who caused your on-the-job injury. To qualify, you must have suffered an injury or illness while you were working on the job—and while you were engaged in activities within the scope of your employment. A worker ordinarily works "within the scope of employment" when he or she carries out a particular job responsibility or duty. In addition, to be eligible to receive workers' compensation benefits, you must be working as an employee when you suffer your on-the-job illness or injury, rather than as an independent contractor. In some cases, it can be difficult to tell whether an individual is truly an employee as opposed to an independent contractor. Generally speaking, employees have set hours, set responsibilities, and a set pay structure. Independent contractors, on the other hand, are usually free to set their own schedules and come and go as they see fit.

Potential Workers' Compensation Benefits for Construction Injuries

Individuals who suffer one or more physical injuries while working at a construction site may recover various types of workers' compensation benefits. First of all, an injured worker could file a claim for all of the medical bills that he or she incurred for treatment received following a construction accident. Medical treatment includes doctor visits, medical procedures (such as surgeries and injections), and physical therapy that become necessary due to the construction accident injury or injuries. In some states, injured construction workers are free to select their own medical providers, while in other areas, these workers must see a physician recommended by their employer's insurance company. Injured workers may also be eligible for reimbursement for the travel costs they incurred going to and from medical appointments.
In addition to receiving these medical benefits, an injured construction worker could recover a portion of his or her lost wages resulting from the construction accident. Lost wage benefits are usually a portion of the worker's average weekly wage, as recorded over a specific time.

Also, in some construction accidents, workers suffer injuries to such an extent that a healthcare provider determines that the injury or injuries are permanent—according to American Medical Association guidelines. Healthcare providers often share their findings in the form of a disability rating. For example, a healthcare provider might determine that a worker suffered a ten percent permanent impairment to his or her back as a result of the workplace accident. If the injured construction worker suffered a permanent disability, then he or she may receive a permanency award. These awards are sometimes spread out over time (such as weekly or bi-weekly), or they may involve a lump-sum payment.

Moreover, in some instances, a construction worker might sustain injuries so severe that he or she cannot return to his or her former job. Some injured workers might even have to switch to a new line of work altogether. When that is the case, the injured construction worker may be eligible for vocational rehabilitation and training services.

If you suffered injuries in a construction accident, a knowledgeable construction accident attorney in your area can assist you with filing a claim for workers' compensation benefits with your employer's insurance company and pursue the various benefits that you deserve. It is always best to have a construction accident attorney evaluate and act on all of your options.
Suing a Third Party Following a Construction Site Accident

In most jurisdictions throughout the country, injured workers generally cannot file a personal injury claim or lawsuit against their employer directly for injuries they sustained in a construction site accident—they file for workers' compensation instead. That, of course, is not as easy as it sounds, and often requires an experienced lawyer to help resolve in your favor. However, depending on the circumstances of the accident, an injured worker may file a claim or lawsuit against a third party who caused or contributed to the construction site accident and the resulting injuries. Depending on how your construction site accident and injuries occurred, you might pursue a third-party claim against any of the following individuals or entities:

- Others who were present on the construction site – On construction sites, various individuals, including project managers, project supervisors, and general contractors, are responsible for supervising the work that occurs on-site—and for making sure that the construction site is properly maintained and in a safe condition for workers. If your construction site accident occurred because one or more of these individuals was negligent under the circumstances, then you might assert a third-party claim or lawsuit against the responsible person or persons.
- Manufacturers of defective construction equipment or parts – The manufacturers of construction equipment and parts have a duty to ensure that their equipment and parts are properly designed and tested before being made available for purchase. When a defect results from improper manufacturing or design, you could file a claim or lawsuit against the negligent manufacturer, developer, or another party in the chain of product distribution.
- Negligent motor vehicle operators – In some instances, construction workers are involved in motor vehicle accidents caused by a negligent driver. The other driver may have disregarded the rules of the road or may have been distracted while behind the wheel. If another driver’s negligence resulted in an accident that caused you to suffer an injury while you were on the job, then you could assert a third-party claim or file a lawsuit against the negligent motor vehicle operator. As part of a third-party case, you may be eligible to pursue and recover various categories of damages for the injuries you suffered in your construction accident. For example, you could pursue compensation for all of your medical treatment costs, lost wages, inconvenience, and pain and suffering that resulted from your injuries. Also, when it comes to third-party personal injury cases, most states have a statute of limitations that applies. The statute of limitations establishes a deadline by which you must file a lawsuit against a third party. In some jurisdictions, this deadline may be as short as one year. If the defendant is a government agency, you may only have months to file your claim. Therefore, if you want to file a third-party claim arising from your construction accident, you should speak with a knowledgeable construction accident attorney near you as soon as possible. If you are eligible to file such a claim or lawsuit, your lawyer will ensure that it is filed within the statutory deadline. He or she can then help you calculate and pursue the money damages that you deserve for your injuries.
https://braunslaw.com/blog/should-i-hire-construction-accident-attorney/
Our client is a private equity backed upstream E&P business with an ambitious growth plan. We are supporting them to source a Lead Asset Accountant to provide day-to-day support to the finance function and JV support on a portfolio of operated and non-operated assets.

In this role you will be part of a small team responsible for evaluating and executing the company's strategy. The position provides broad exposure to corporate strategy related workstreams, including analysing and prioritizing key strategic workstreams, co-ordination of materials for the board and executive leadership team, ensuring delivery of key corporate milestones, and special projects. Based in London, the role will work closely with the Head of Strategy Execution and Risk and will also support other members of the Executive Leadership Team when required.

Key Responsibilities
• Lead strategy analysis, peer benchmarking etc.
• Utilise the corporate model to review a range of strategic growth alternatives (both organic and inorganic) to assess organizational and financial implications
• Coordinate presentations – executive summaries, corporate presentations, Board materials
• Assist with any special projects, e.g. supporting M&A related work
• Support senior executives on ad-hoc projects, analysis and presentations

To be considered for the role you will have the following:
• Strong analytical skills (essential)
• Able to take complex problems and succinctly present conclusions with supporting data/analysis
• Team player who is also comfortable working independently
• Strong communicator – can use language effectively (verbal or written) to gather information and facilitate the exchange of ideas
• Creative thinker – able to consider a range of solutions
• Organised and methodical approach to work, with an eye for detail
• Significant and proven relevant experience – ideally in a finance/strategy role within an E&P business (essential)
https://jobs.hutcheonmearns.co.uk/job/strategy-execution-lead-6-month-contract-2484.aspx
Over recent years the field of Science of Learning has grown dramatically. Unfortunately, despite claims that this work will greatly impact education, very little research makes it into teacher practice. Although the reasons for this are varied, a primary concern is the lack of a proper translation framework. From the Laboratory to the Classroom aims to consolidate information from many different research disciplines and correlate learning principles with known classroom practices in order to establish explanatory foundations for successful strategies that can be implemented in the classroom. It combines theoretical research with the diverse and dynamic classroom environment to deliver original, effective and specific teaching and learning strategies and address questions concerning what possible mechanisms are at play as people learn.

Divided into five sections, chapters cover:

- A Framework for Organizing and Translating Science of Learning Research
- Motivation and Attention as Foundations for Student Learning
- Memory and Metamemory Considerations in the Instruction of Human Beings
- Science of Learning in Digital Learning Environments
- Educational Approaches for Students Experiencing Learning Difficulties and Developmental Characteristics of Gifted Children
- Brain, Behaviour and Classroom Practice
- Forging Research/Practice Relationships via Laboratory Schools

This fascinating text gathers an international team of expert scientists, teachers, and administrators to present a coherent framework for the vital translation of laboratory research for educational practice. Applying the Science of Learning framework to a number of different educational domains, it will be an essential guide for any student or researcher in education, educational psychology, neuropsychology, educational technology and the emergent field of neuroeducation.

Book Details
- Copyright: 2017 selection and editorial matter, Jared Cooney Horvath, Jason M. Lodge and John Hattie
- Book Quality:
- ISBN-13: 9781317271918
- Publisher: Taylor and Francis
- Date of Addition: 2019-04-21T14:00:17Z
- Language: English
- Categories: Education, Nonfiction, Psychology
- Usage Restrictions: This is a copyrighted book.

Choosing a Book Format

EPUB is the standard publishing format used by many e-book readers including iBooks, Easy Reader, VoiceDream Reader, etc. This is the most popular and widely used format. DAISY format is used by GoRead, Read2Go and most Kurzweil devices. Audio (MP3) format is used by audio-only devices, such as iPod. Braille format is used by Braille output devices. DAISY Audio format works on DAISY-compatible players such as Victor Reader Stream. Accessible Word format can be unzipped and opened in any tool that supports .docx files.
https://india.bookshare.org/en/bookshare/book/2530985
Alaska Is the Last Frontier. But for How Long? This article was originally published in Extra Newsfeed. JUNEAU, ALASKA—Every few years, budget concerns become the talk of the news cycle: whether or not Congress is going to raise taxes or cut taxes — increase services or take them away. Social security, health insurance programs, and various public services always seem to be on the chopping block, with Democrats striving to keep entitlements and spend for the “good of the people” and Republicans looking to cut government services and privatize what they can. I could go on and on about the politics of it all, but we’re all sick and tired of hearing it. Aren’t we? The national debt has ballooned to epic proportions, and unless either party makes sweeping changes to large entitlement programs, we aren’t going to change much. The national debt concerns me, and it always has. But these days, my concern is considerably heightened because it seems like treasured national institutions — our public lands — are at risk to solve the problem of the debt crisis. The most recent public lands issue centers on the Arctic National Wildlife Refuge (ANWR) — a remote place, unlike Yosemite or Yellowstone. You can’t drive there, and few people have ever laid eyes on this piece of very public land. Congress is currently debating whether or not to open up this large swath of federal land to exploratory oil drilling. The phrase “drilling for oil” has been synonymous with the Alaskan economy ever since the discovery of the Prudhoe Bay oil field in the late 1960s, providing a massive boost to the state’s coffers and shaping Alaska as we know it today. From salmon fishing to logging, Alaskans make their money off the land. Natural resources are taken from the vast expanse of ocean, tundra, and forest, and are commoditized and sold for profit. There’s nothing wrong with this, as long as there are enough resources saved for subsequent years — as long as natural resources are managed sustainably.
Yet Alaska is also one of the last places in the country that has remained relatively wild, and free of human development. Today, we think of Alaska as the Last Frontier — a place where caribou roam free and citizens can homestead the land. It’s a state where the original pioneer spirit of America is still alive and well, with Alaskans taming the land around them in order to make a life for themselves and their families. Yet a hundred years ago, in the days before Los Angeles was a big metropolitan city, California was the Last Frontier. Two hundred years ago, it was anything west of the Mississippi. Four hundred years ago, the island of Manhattan was covered in trees, streams, hills, and wildlife. To Europeans living in the Old World, all of America was the Last Frontier, a land of bountiful plenty just waiting to be conquered. In 2017, Alaska is the Last Frontier, but that’s just for a snapshot in time. If we continue to conquer the natural world around us, where will the Last Frontier be in another hundred years? We keep pushing the boundaries of wilderness further and further away from us, and soon there won’t be anywhere left to go. National Wildlife Refuges were created for the express purpose of keeping wilderness wild. There is merit in this. The ramifications of living in a world without any biological diversity are unknown, because it’s never been done. Mother Earth will adapt and improvise, and the fittest will survive. But will humans make the cut? As long as we’re a rich country full of resources both natural and manufactured, we’ll have a debt problem. Yet searching for solutions to these short-term problems by opening up our wilderness for immediate economic gain is foolhardy. Drilling for oil in the Arctic National Wildlife Refuge will cause irreversible damage to one of the last frontiers of America, all to obtain a trivial amount of a finite, non-renewable resource. It’s 2017, and Alaska is the Last Frontier. Let’s try to keep it that way.
https://www.mkoller.com/writing2/2017/12/1/alaska-is-the-last-frontier-but-for-how-long
DR Congo has several potentialities to develop its industrial sector, namely:

- Abundant and cheap labor;
- Diversity of raw materials (wood, oil, minerals, quartz, etc.), conducive to the development of the national industrial fabric;
- An overall market of over 70 million potential consumers, extendable to the subregional level;
- Existence of 80 million hectares of arable land conducive to the development of agribusiness;
- Existence of a significant market share in the various industrial sectors;
- The implementation of the Inga 3 power plant and of the Kakobola, Zongo II and Katende plants, a major asset to boost the volume of existing production and promote the attraction of new investments in the country.

II. Achievements

- Establishment of the pilot Special Economic Zone of Maluku;
- Launch of the growth pole project at Kimpese, 130 km east of Matadi (Bas-Congo). This is a World Bank project designed to finance the agricultural sector and road infrastructure in the DRC. Its cost is estimated at over 110 million US dollars for the whole country, of which 48 million is for farmers in the province of Bas-Congo. It is about helping farmers to increase agricultural production.
- Launch of the activities of the pilot agro-industrial park of BUKANGA LONZO.

III. Vision

Industry is one of the Government’s priority sectors, like agriculture, infrastructure, energy, health, education, housing, hydrocarbons, etc. In its vision, the Government adopted a development approach called "self-centered and integrated," consisting in a balanced deployment of economic and industrial growth poles. This approach relies on the ability to develop, based on the comparative and competitive advantages of each economic area, processing and manufacturing industries, especially in the agro-industrial sectors, the enhancement of natural resources, and structuring or industrializing industries.
The development strategy of the industrial sector is mainly focused on the creation of Special Economic Zones (SEZ), agro-industrial parks and growth poles.

IV. Legal framework

Law No. 14/022 of 7 July 2014 establishing a system of special economic zones (SEZ) in the Democratic Republic of Congo (DRC). This Law has the following objectives:

- Improve the legal and institutional framework to attract and safeguard national and foreign private investment in order to promote the country’s development;
- Simplify administrative procedures to further improve the business climate and attract investment;
- Strengthen dispute settlement mechanisms related to investments;
- Provide a business environment that is incentivizing, transparent and consistent, in order to encourage national and foreign private investment which will generate growth and employment and increase competition in the Democratic Republic of Congo;
- Set the rules of organization and functioning of special economic zones, their missions and their limits;
- Determine the supervision powers of the Agency for Special Economic Zones, including its exclusive and private powers;
- Clarify the rules applicable to companies that may operate in special economic zones, except with respect to the tax and customs arrangements to be set out in the Finance Act.

Law No. 14/023 of 7 July 2014 setting the rules on the terms and conditions of rescuing ailing industrial companies. The provisions of this Act apply to any industrial enterprise in difficulty owing to exogenous factors and subject to a procedure of either preventive settlement or receivership.
An industrial company in difficulty is eligible for the benefits under Section 9 of this Act, subject to the prerequisite of collective proceedings of preventive settlement or bankruptcy, in order to:

- Ensure direct and indirect employment;
- Promote local raw materials;
- Ensure fiscal contributions to the revenue of the central government and of provincial, regional and local entities;
- Ensure the maintenance of socio-economic impacts on the local and national environment.

Any industrial company in difficulty wishing to benefit from the advantages provided for in Article 9 of this Act shall, to this end, submit a grant application to the National Agency for Investment Promotion at the opening of, or after the opening of, the insolvency proceedings, and at the latest before the preventive or remedial concordat.
http://www.generaff.com/newsite/en/index.php/ct-menu-item-13/ct-menu-item-14/ct-menu-item-22
151B. Applied Numerical Methods. (4) Lecture, three hours; discussion, one hour. Requisite: course 151A. Introduction to numerical methods with emphasis on algorithms, analysis of algorithms, and computer implementation. Numerical solution of ordinary differential equations. Iterative solution of linear systems. Computation of least squares approximations. Discrete Fourier approximation and the fast Fourier transform. MATLAB programming. Letter grading.

Assignments

Homework assignments in the course consist of both theoretical and computational work. The computational work is completed using MATLAB.

General Information

Math 151AB is the main course sequence in numerical analysis, important for all of the applied mathematics majors. Mathematics majors who graduate and go into industry often find Math 151AB to be the most useful course for their work. Math 151A is offered each term, and Math 151B is offered Winter and Spring.

Textbook: R. Burden and J. Faires, Numerical Analysis, 10th Ed., Brooks/Cole.

AS: The topics of stiffness and of absolute stability are not well presented in Burden and Faires. Other textbooks should be consulted.
DLS: The matrix form of the discrete least squares problem is not presented in Burden and Faires. Other textbooks should be consulted.
Continuous and discrete trigonometric polynomial approximation.
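The course covers iterative solution of linear systems; a minimal sketch of one classical method, Jacobi iteration, illustrates the idea. The course itself uses MATLAB, but the sketch below is plain Python, and the small 2x2 test system is an invented example, not from the course materials:

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration.

    A is given as a list of rows; convergence is guaranteed when A is
    strictly diagonally dominant, as in the toy example below.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        # Each component of the new iterate uses only the previous iterate.
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Toy diagonally dominant system with exact solution x = y = 1.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]
x = jacobi(A, b)
print(x)  # approximately [1.0, 1.0]
```

Unlike Gauss-Seidel, Jacobi never reads freshly updated components within a sweep, which makes each sweep trivially parallel at the cost of slower convergence.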
http://ftp.math.ucla.edu/ugrad/courses/math/151B
FIELD OF THE INVENTION The present invention relates to apparatus to be applied to the body of a novice swim student to enhance confidence and provide a teaching aid for the instructor. BACKGROUND Most beginning swim students, and especially young children, find that being placed in a prone swimming position in a body of water is an intimidating, if not frightening, experience, and not one adapted to inspire confidence. Even with the underbody hand support of an instructor, many beginners are prone to panic when asked to reach and pull or to do the crawl and scissor kick at the same time. As the inspiration for the present invention, it has been discovered that beginners immediately relax and become receptive to teaching commands once the student senses a lifting type of support at the small of the back. The prior art has seen many different configurations of apparatus designed either to assist beginning swimmers or to provide instruction to more advanced swimmers. U.S. Pat. No. 7,185,598 to Lan et al., for a Swim Training and Buoyancy Assist Device, is one illustration of the prior art, in which the distal end of a tether is attached to a belt around the swimmer's waist. The tether is anchored to the side of a swimming pool, restraining the swimmer from moving more than a short distance in one direction from the anchor. While possibly providing apparent buoyancy or lifting, the device would not prevent a beginning swimmer from sinking to a depth that would produce panic. Similar apparatus have been developed by others but suffer from the same disadvantage: the device is primarily useful to a swimmer who has at least some ability to cope with the water and execute at least rudimentary swimming strokes without fear.
Accordingly, it is the primary object of the present invention to provide an easily donned and non-hampering harness to be worn by a beginning swim student that will provide a swim instructor with means to build confidence and reduce the student's fear of the water. Another object of the invention is to provide a harness for the beginning swimmer that includes a handle by which an instructor can easily provide a lifting force to the lower back of a prone swim student to keep the student from being immersed in the water to an extent that will cause panic and fear. Another and further object of the invention is to combine the function of supporting the swimmer by lifting the handle on the swimmer's harness with the added function of moving the swimmer's legs in a manner that will encourage the leg kicking that is required of a prone swimmer. Other and further objects, features and advantages of the present invention will become apparent on a reading of the following description of the preferred form of the invention taken in conjunction with the attached drawings. The swim training harness of the present invention includes a waist belt with an operating handle, two leg bands and a pair of straps interconnecting the operating handle and the respective leg bands. The operating handle serves two purposes. It acts first as the point of application of the lifting force. Second, when rotated to a limited degree, the operating handle mechanism will put tension on one of the connecting straps, causing the leg to which the respective leg band is attached to be raised, simulating the intentional movement of the leg in the required kicking motion. Limited rotation of the handle in the opposite direction will cause the other leg to be raised. Referring first to FIG. 1, the harness of the present invention is worn by a beginning swim student.
The primary components of the harness comprise a belt adapted to be secured around the waist of the student, a lever arm assembly carried by the belt at a position above the lumbar spine of the student, a handle operably connected to the lever assembly, a pair of leg bands that encircle the student's thighs, and straps that interconnect the leg bands and the lever assembly. As seen in FIG. 1, the handle is grasped by a swim instructor to provide a lifting force at the waist that will be sufficient to put the student at ease without feeling that he is going to sink. The lever arm assembly is shown in FIG. 3 as being attached to the belt in a position such that, when worn by the student, the lever assembly will be over the student's lumbar spine. The lever arm assembly comprises an elliptically shaped disk that is rotatably mounted at its center on the base portion of the lever arm assembly housing. The long axis of the elliptical disk acts as a lever arm, the distal ends of which are attached to the respective proximal ends of the two connecting straps. While the lever arm of the preferred embodiment takes the shape of an elliptical disk, the lever arm can be a more simplistic shape, such as, for example, a simple elongated arm pivoted at its center with the connecting straps connected to the distal ends of the arm. The leg bands are constructed of flexible material that accommodates being wrapped around the thigh portion of a human leg. Hook and loop attachment material secures each leg band around the leg. The posterior side of each leg band is provided at its lower edge with a ring to which the respective distal ends of the straps are secured. In vertical alignment with this ring, the upper edge of the posterior side of each leg band is provided with a second ring through which the respective connecting strap is trained for sliding movement.
Each of the connecting straps is provided with a length adjusting buckle intermediate the strap's proximal and distal ends. Looking further at the lever arm assembly, it is seen in FIG. 3 that the underside of the elliptical disc is provided with two pairs of fences, the respective members of which are angularly related to one another by 90°. The apex of each fence angle is located at the center of rotation of the disc. Attached to the base of the housing are two spaced apart upwardly projecting pins that are located below the long axis of the disc when the disc is in a neutral position, as shown in FIG. 3. The handle is secured to a shaft that is perpendicular to the plane of the disc. The shaft projects from the surface of the disc a sufficient distance to engage a bore in the bottom of the handle and hold the handle at a sufficient distance from the top cover of the housing to permit the handle to be fully grasped by the hand and fingers of an instructor. The belt of the harness may be constructed of a suitable webbing material with length adjusting buckles. The webbing may be covered in whole or in part by a rubber sleeve to prohibit undesired bending or twisting of the webbing and to enhance the comfort of the harness. In operation, once the instructor has stabilized the student by holding vertical pressure on the handle, the instructor can then proceed to introduce the kicking maneuvers that are critical to swimming technique. By rotating the handle 45°, the attached lever arm (the disc in the preferred embodiment) will rotate, causing one of the connecting straps to pull on its leg band, raising the leg to which that band is attached, while relaxing tension on the other connecting strap, allowing the leg to which that strap is connected to lower deeper into the water. By reversing the direction of rotation of the handle by 90°, the opposite effect is achieved.
As the student gets the feeling of how the legs should flutter to perform the proper kicking action, the instructor may increase the frequency of rotation and reverse rotation to enhance the kicking maneuver, all the while keeping upward lifting pressure on the handle to steady the student and improve his confidence in breathing and otherwise dealing with a water environment. The handle is restrained from rotation beyond 45° from its neutral position by the engagement of the disc fences with the stop pins. DESCRIPTION OF THE DRAWINGS FIG. 1 is a perspective view of a beginning swimmer being supported in the water by an instructor whose hand is grasping the handle of the harness of the present invention, which is being worn by the swim student. FIG. 2 is a posterior view of a swim student wearing the assist harness of the present invention. FIG. 3 is a top view of the harness of the present invention with the handle removed for clarity in illustrating the lever arm structure. FIG. 4 is a side view of the lever arm assembly housing, showing the attachment of the operating handle to the lever arm. FIG. 5 is an anterior view of the harness leg bands before being closed around and attached to the legs of the swim student.
Inventory levels define the state of the housing market: it is either a Buyers' Market, a Balanced Market, or a Sellers' Market. To determine which market we are in, ask yourself, "How many months of housing inventory do we have?" Buyers' Market: when there are more than seven months of inventory. Sellers' Market: when there are less than five months of inventory. In the last three years, we have been flirting with a balanced market and are getting closer to that reality. Balanced markets occur during the transition between a Buyers' Market and a Sellers' Market. So, if we are in a market in a transitional state, what does that mean for our clients and our business? The aha: Real Estate is a Cyclical Business. Where You are Today is Not Always Where You Will be Tomorrow. Gary Keller stated at Family Reunion 2015 that real estate goes in cycles, and while we are in a good period now, agents should prepare for a slowdown in sales at some point. As Keller points out in his bestselling book, SHIFT: How Top Real Estate Agents Tackle Tough Times, there is a natural buildup of listing inventory from January through April and an offsetting decline in inventory from May through October. If this is not a Buyers' or a Sellers' Market, how do your clients know if it is a good time to buy or sell? That depends. When you present this information to your clients, sellers may be inclined to wait for inventory to drop so they can sell at a more competitive price, and buyers may want to wait for inventory to increase so they have more selection and price negotiation power. However, both buyers and sellers need to look at other factors, in addition to inventory, before they decide to move or wait. The decision to buy or sell is a very important one for your clients, and they should never be forced to make a decision that doesn't sit right with them.
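The thresholds above lend themselves to a quick calculation. The sketch below is a minimal illustration in Python, using made-up listing and sales figures; the classification cutoffs (more than seven months, fewer than five) come from the definitions above.

```python
# Classify a housing market by months of inventory, using the thresholds
# from the text: more than seven months = Buyers' Market, fewer than five
# = Sellers' Market, in between = Balanced Market. The listing and sales
# figures below are made-up illustrations, not real market data.

def months_of_inventory(active_listings: float, monthly_sales: float) -> float:
    """How long current listings would last at the current sales pace."""
    return active_listings / monthly_sales

def classify(months: float) -> str:
    if months > 7:
        return "Buyers' Market"
    if months < 5:
        return "Sellers' Market"
    return "Balanced Market"

moi = months_of_inventory(active_listings=1200, monthly_sales=200)
print(moi, classify(moi))  # 6.0 Balanced Market
```

The same two numbers — active listings and the monthly sales pace — are all an agent needs to track the market's position in the cycle from month to month.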
As their real estate agent, your greatest value is acting as their local economist of choice by presenting them with the most current market facts and analysis to help them make the most informed decision. In October 2014, Keller Williams initiated a new five-part blog series, Market ahas. The quarterly series is designed to help our associates better serve as their clients' local economists. The topics covered each quarter are mortgage rates, unemployment, price change, inventory, and GDP. This is the fourth article in the series for Q4 2014. Read more Market ahas here!
https://blog.kw.com/2015/03/16/balanced-market/
Abstract: There has been considerable research regarding teacher identity and the challenges involved with the transition from teacher to teacher educator, yet there is little existing literature that addresses how to manage or mitigate these challenges. In my research, I describe the influences on my identification process and use the concept of self-compassion as a tool to support the transition from teacher to teacher educator. Self-compassion “involves offering nonjudgmental understanding to one’s pain, inadequacies and failures, so that one’s experience is seen as part of the larger human experience” (Neff, 2003b, p. 87). Data gathering and analysis were informed by self-study methodology and consisted of personal reflections and conversations with a critical friend. From the ensuing analysis, I developed three themes. First, the challenges I faced included differentiating pedagogy, returning to academia, and my perspectives of others’ perceptions. Each of these factors influenced my identification process during my transition. Second, through conversations with a critical friend, we came to see my product-over-process mindset, which prompted a new way of conceptualizing and acting upon these challenges. Lastly, supportive others and engaging with self-study provided a means to better understand how I practiced self-compassion and shifted my mindset about challenges in new ways. Implications for this research may resonate with others who undergo similar transitions; the findings may also be informative to others making personal and professional transitions in a broader context. Related items:
- Becoming a Queer Teacher: Perceptions of Queer Teacher Candidates in Initial Teacher Education Programs. Fleet, Courtenay; Department of Graduate and Undergraduate Studies in Education. This study used narrative inquiry to explore the experiences of queer teacher candidates during their Initial Teacher Education Programs (ITEP) in Ontario. The study sought to further investigate: (a) stories teacher candidates tell about being queer in ITEPs; (b) how queer teacher candidates respond to social bias and stereotypes in the learning community; and (c) if and how queer teacher candidates’ narratives can inform teacher education reform. Through interviews and lettered correspondence, the participants and I share stories of being queer in ITEPs. The study examined our stories using Clandinin and Connelly’s (2000) 3 commonplaces of temporality, sociality, and place, as well as Ciuffetelli Parker’s (2013, 2014) 3-R narrative elements of narrative reveal, narrative revelation, and narrative reformation. Four themes emerged: the complexity of the queer teacher candidates’ experience; the separation of personal and professional identity; silencing; and shame. These poignant narratives contribute to the literature by providing a context for teacher education programs and researchers to reconsider teacher education reform. - Grade nine teachers' perception of how teachers, parents, peers, administrators, and community members affect the development of the grade nine student. Hofer, Diane Elizabeth; Department of Graduate and Undergraduate Studies in Education (Brock University, 1997-05-21). The purpose of this study was to examine grade nine teachers' perception of how teachers, parents, peers, administrators, and community members influence the overall development of grade nine students. Ten grade nine teachers (four male and six female) participated in the study, which consisted of the completion of a one-hour, tape-recorded interview.
The central findings were as follows: 1) the grade nine student has evolved; 2) peers have an important impact on the four developmental areas (physical, emotional, social, and academic) of the grade nine student; and 3) the role of the grade nine teacher appears to have dramatically changed over the last seventeen years. Suggestions and recommendations for future research in this field are based on findings related to the enhancement of the secondary school experience for the grade nine adolescent. - Exploring Teacher Candidates’ Perception of Practicum Placement Within Concurrent Teacher Education in Two Ontario Universities. Dusto, Claire; Department of Graduate and Undergraduate Studies in Education (Brock University, 2014-01-27). In this study, teacher candidates’ perceptions of their concurrent education program at two Ontario universities were examined, with specific emphasis on how the programs utilized practicum placements, to determine the effectiveness in preparing teacher candidates to teach. This research also strove to uncover the best ways to optimize concurrent teacher education through practicum placements. A questionnaire and interviews were used to uncover teacher candidates’ perceptions at one teacher education program that used full integration of practicum and one that used minimal integration of practicum. The findings revealed that teacher candidates were generally more satisfied with the overall program when there was full integration of practicum. There were statistically significant differences found between the two concurrent programs with regard to practicum time and preparedness and context of the practicum, and a highly significant difference found for theory-practice divide. There was also a statistically significant difference (p < .05) observed between the teacher candidates at each university in terms of their beliefs about the need for improvement of their program.
Some of the improvements that participants believed could be made to their respective programs included having (a) exceptional mentor teachers and teacher educators, (b) longer placements with a balance of observation and practicum teaching, (c) clear expectations and evaluations of practicum placement, and (d) more distinct connections between theory and practice made within the programs.
https://dr.library.brocku.ca/handle/10464/16603
Basements can provide additional storage space as well as a place for work or sleep. But a basement may be pretty useless if it is damp and cold, especially during the cold season. It may be a good time to invest some effort and money and make your basement a warm, cozy place to stay during winter. Read on to learn our recommendations on how to keep your basement warm. When you are dealing with basement heating, it is important to address two fundamental problems. The first is how to keep the warm air inside, and the second is how to add heat more efficiently. Although insulation, basement heaters, and new heating systems are all options, there are some simple basement heating options that can be implemented immediately. The first is to conserve your body heat: layer up in winter and wear thick clothes. The second is to rely on natural sunlight to warm your basement; keeping your blinds open during daylight hours is the cheapest and most efficient way to heat the room. After you’ve tried these easy tricks, take a look at our recommendations on how to make your basement warm in the winter. Tips on how to keep your basement warm this winter It is important that you insulate your basement in order to minimize heat loss. Here are six ways to increase the insulation in your basement and keep the warmth inside. Find Cold Spots Finding cold spots is the first step in reducing heat loss in your basement.
A thermal camera can be used to scan the entire room, floor to ceiling. If you find areas in your basement that are significantly colder than others, you can begin to seal or add insulation. Additional insulation is an option to keep warm air inside and cold air out The key to your home’s comfort and energy efficiency is a properly insulated basement. While the earth itself can be great natural insulation, it is not enough to keep your basement warm. There are many types of insulation available for your basement walls. If your basement is not finished and has stone or concrete walls visible, framing the basement with insulation and drywall will help to retain heat better. For concrete walls, fiberglass insulation is preferred. You may also be able to add spray foam insulation to your basement walls. The rim joists and headers in your flooring and roofing systems are what separate your basement from the main floor. They are typically positioned against the outer walls of your house and can be exposed to the outdoors. With no insulation, they allow the warm air rising from the basement to escape and cold air to come in from the outside. Additional spray foam insulation can be added as a sealant. It is important to remove all trim around basement windows. To add insulation, you could also install a subfloor over the sleeper strips. Your house’s air ducts are responsible for channeling air around it. These ducts should be replaced with well-insulated ones or, at the very least, wrapped in gap-free insulation. Seal cracks and leaks in foundations, floors, or walls Heat loss from your basement can be caused by air leaks around the windows, doors, rim joists, seams, and joints between the floors, walls, and ceiling. Seal any cracks with the correct sealant for your material. Foam sealants can be painted, while silicone sealants are more flexible and can withstand the shrinking and expanding of the wood around your windows.
Your existing sealant will eventually wear out, so be sure to renew the sealant around your doors, windows, and other edges every five years. Make your basement’s existing windows or walkout doors more energy-efficient Glass windows can lose a lot of heat, especially if they are single-glazed. Double-glazed windows will provide better insulation. To improve insulation, this technique uses two parallel panes of glass with air or gas in between them. Double-glazed windows can also be fitted with a reflective tint on one side to increase heat retention. Add thick floor rugs or carpets Basements often have tiled or cement floors that are very cold. They also cause heat loss. To reduce heat loss, make sure you cover the entire floor if there are no flooding or moisture problems. You don’t have to buy wall-to-wall carpeting if you are looking for cheap ways to heat your room. However, placing a throw rug on areas that are frequently used will help keep your feet warm. For the best insulation, add floor rugs that have thicker piles and a higher stitch count. Use heavy curtains or cellular shades Heating a basement room can be as simple as layering thick curtains on your windows. Cellular shades can be a better option than heavy curtains. They are more effective at reducing heat loss, by up to 80% according to energy.gov. Cellular shades are honeycomb-shaped, with air pockets for insulation. Heating Options to Make your Basement Warmer Once you have insulated your basement and reduced heat loss, it is time to start heating it. This will make your basement more cost-effective and efficient. These heat-adding options will add warmth to your basement at different price points. Install additional heat vents As you consider the convenience and future savings, tapping into your home’s existing ductwork to expand your central air system into the basement could be a smart move.
Although adding heat vents to your home can be more expensive upfront than purchasing a space heater, it will have a lower impact on your monthly electricity bills over the long term. To determine whether it is possible to heat your basement this way, as well as how to install it, consult an HVAC professional. Install a heated floor If you don’t mind paying a little more for installation and on your electricity bill, a basement floor heating system can be a great choice. There are two options for floor heating: electric radiant heat, where an electric wire is placed beneath the basement floor to supply low-level heat to concrete, stone, or tiled flooring, and hydronic heat, where hot water circulates in pipes underneath the floor to heat it. While both are expensive, they offer great benefits. The heat will rise naturally and fill your basement. Install a basement heating system There are two other options for basement heating that homeowners love: electric baseboard heaters or wood pellet stove fireplaces. Baseboard heaters, which are low-to-the-ground convection space heaters, are ideal for heating single rooms. One limitation of baseboard heaters is that you cannot place your couch or any other heavy furniture against the heater’s wall. Fireplace stoves that burn wood or pellets can be used to heat larger basements, but they require ventilation through chimneys. This installation is expensive, but the benefits will last for many years. Use a Space Heater You can heat your basement with a space heater without spending a lot of money on electricity. Space heaters are easy to install and can be used by anyone. There are two precautions you should take with space heaters: use them only in small rooms or for short periods, and follow the manufacturer’s instructions to unplug the heater when you leave the room to avoid fire hazards.
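The payoff of the insulation steps above can be put in rough numbers with the standard steady-state conduction formula Q = U × A × ΔT. The sketch below is illustrative only: the U-values, wall area, and temperature difference are assumptions for the sake of the example, not measurements for any particular basement.

```python
# Rough steady-state heat-loss estimate for a basement surface using the
# standard conduction formula Q = U * A * dT (watts). The U-values and
# dimensions below are illustrative assumptions, not measurements; they
# show why lowering U (adding insulation) is the biggest lever.

def heat_loss_watts(u_value: float, area_m2: float, delta_t: float) -> float:
    """U-value in W/(m^2*K), area in m^2, temperature difference in K."""
    return u_value * area_m2 * delta_t

wall_area = 40.0   # m^2 of basement wall (assumed)
delta_t = 15.0     # e.g. 20 C inside vs 5 C outside/ground (assumed)

bare_concrete = heat_loss_watts(3.0, wall_area, delta_t)  # uninsulated (assumed U)
insulated = heat_loss_watts(0.5, wall_area, delta_t)      # framed + fiberglass (assumed U)
print(bare_concrete, insulated)  # 1800.0 300.0
```

Under these assumed numbers, insulating the wall cuts the continuous heat loss roughly sixfold, which is why sealing and insulating come before any decision about heaters.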
https://besthvacguys.com/energy-efficiency-tips-on-how-to-keep-basement-warm/
Q: Limit theorem of extreme values What does it mean for the cumulative distribution function of a standardized variable to tend to a limiting value? This is the main part of the problem I don't understand well. Below is the problem I am trying to solve, and my current progress in doing so: Problem Let $U_1,\dots,U_n$ be independent uniform random variables on $[0,1]$, and let $U_{(n)}$ be the maximum. Find the cdf of $U_{(n)}$ and of a standardized $U_{(n)}$, and show that the cdf of the standardized variable tends to a limiting value. Solution so far The density function of a uniform random variable on $[0,1]$ is $f(x) = 1$ and the cdf is $F(x) = x$ for $x \in [0,1]$, so for $U_{(n)}$, the density function is \begin{equation*} f_n(x) = \frac{n!}{(n-1)!(n-n)!}f(x)F^{n-1}(x)[1-F(x)]^{n-n} = nx^{n-1}, \end{equation*} and the cdf is \begin{equation*} F_n(x) = \int_0^x nt^{n-1}\;dt = x^n. \end{equation*} Let $S_n = \frac{U_{(n)} - E(U_{(n)})}{\sqrt{Var(U_{(n)})}}$ be the standardized variable of $U_{(n)}$. \begin{equation} E(U_{(n)}) = \int_0^1xnx^{n-1}\;dx = \frac{n}{n+1} \end{equation} \begin{equation} E(U_{(n)}^2) = \int_0^1x^2nx^{n-1}\;dx = \frac{n}{n+2} \end{equation} \begin{equation} Var(U_{(n)}) = E(U_{(n)}^2)- E(U_{(n)})^2 = \frac{n}{n+2}-\frac{n^2}{(n+1)^2} = \frac{n}{(n+1)^2(n+2)} \end{equation} \begin{equation} S_n = -\sqrt{n(n+2)} + \frac{(n+1)\sqrt{n+2}}{\sqrt{n}}U_{(n)} \end{equation} \begin{equation} E(S_n) = 0, \qquad Var(S_n) = 1 \end{equation} Idea? Should I be finding the moment generating function of $U_{(n)}$ and then use it to find the moment generating function of $S_n$, $M_{S_n}(t)$, then take $\lim_{n\rightarrow\infty}M_{S_n}(t)$ and see if the result matches any of the moment generating functions for the most common distributions? A: Let $\mu_n$ be the mean of $S_n$, and $\sigma_n$ its standard deviation. Then $\Pr(S_n \le s)=\Pr(U_{(n)}\le \sigma_n s+\mu_n)$. This is $(\sigma_n s+\mu_n)^n$. Now we calculate.
A little manipulation shows that $$(\sigma_n s+\mu_n)^n=\left(\frac{n}{n+1}\right)^n \left(1+\frac{s}{\sqrt{n}\sqrt{n+2}}\right)^n.$$ The term $\sqrt{n}\sqrt{n+2}$ behaves essentially like $n$, more precisely like $n+1$, but it doesn't matter. The limit is $e^{-1}e^s$. Added: Please note the comment by Stephen Herschkorn that the limit of the cdf is given by the above expression only for a certain range of values of $s$, since $\Pr(U_{(n)}\le \sigma_n s+\mu_n)=(\sigma_n s+\mu_n)^n$ only when $0\le \sigma_n s+\mu_n\le 1$.
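A quick numerical check of this limit: evaluating $(\sigma_n s+\mu_n)^n$ for growing $n$ at a fixed $s$ should approach $e^{s-1}$. The sketch below (in Python, with an arbitrary choice of $s=0.5$) computes this directly from the formulas for $\mu_n$ and $\sigma_n$ derived above.

```python
# Numerical check of the limit derived above: for fixed s (with s < 1),
# (sigma_n * s + mu_n)^n should approach e^(s-1) as n grows, where
# mu_n = n/(n+1) and sigma_n = sqrt(n) / ((n+1) * sqrt(n+2)).
import math

def cdf_standardized_max(s: float, n: int) -> float:
    mu = n / (n + 1)
    sigma = math.sqrt(n) / ((n + 1) * math.sqrt(n + 2))
    x = sigma * s + mu
    # Pr(U_(n) <= x) = x^n only when 0 <= x <= 1; clamp outside that range.
    if x < 0:
        return 0.0
    if x > 1:
        return 1.0
    return x ** n

s = 0.5
for n in (10, 100, 10_000):
    print(n, cdf_standardized_max(s, n))
print("limit:", math.exp(s - 1))  # e^(s-1)
```

The clamping in the code mirrors the caveat in the note above: the expression $(\sigma_n s+\mu_n)^n$ is only the cdf when $0\le \sigma_n s+\mu_n\le 1$.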
Carly Kessler: Carly Kessler is an associate in the New York office of Robins Kaplan LLP. Ms. Kessler attended law school at the Benjamin N. Cardozo School of Law, and earned her Bachelor of Science degree at the S.I. Newhouse School of Public Communications at Syracuse University, where she graduated magna cum laude. - Miles Finn: Brings 15 years of practical engineering experience and 5 years of patent litigation support to his position as an attorney in the firm's patent litigation department. After earning a Bachelor's degree in physics (Franklin and Marshall College, magna cum laude) and a Ph.D. in physics (University of Minnesota), he worked as a scientist and ... - Kelsey Thorkelson: Kelsey Thorkelson is a trial attorney who focuses her practice on intellectual property litigation in the health and life sciences industry. Ms. Thorkelson has significant experience representing pharmaceutical companies in Hatch-Waxman cases, and has also handled patent litigation matters involving other complex technologies, including semiconductor fabrication and mobile communications. Ms. Thorkelson understands the industry challenges ... - Andrew Hedden: Andrew Hedden focuses his practice on representing clients in intellectual property litigation. He has represented clients in a wide range of technologies, including consumer electronics, medical devices, and e-commerce. Mr. Hedden has represented clients in a wide range of jurisdictions, including the District of Minnesota, the Eastern District of Texas, the District of Delaware, the ... - Michael Kolcun: Michael Kolcun primarily focuses his practice on business and intellectual property litigation, representing clients ranging from large corporations to start-ups, as well as individuals.
Michael is a former in-house counsel to a Fortune 500 company and litigator at a class action law firm, where he gained extensive experience in appellate practice, health care litigation, and ... - Annie Huang: Annie Huang is a results-oriented attorney with a core belief that the law is a tool to drive business and a passion for protecting intellectual property rights (patents, trademarks, copyrights and trade secrets) through litigation, counseling and licensing. She partners with her clients in developing creative and effective solutions for complex legal problems to help her ... - Li Zhu: Li’s practice focuses on complex patent and commercial litigation. Li represents both plaintiffs and defendants, and advises clients in a variety of markets and technical fields, including computer technology, biotechnology, medical devices, DNA sequencing, and food products. Li has experience identifying and implementing strategies to clearly and persuasively communicate complicated legal and technical issues to ... - David Martinez: Mr. Martinez represents a wide range of U.S. and international clients in all forms of dispute resolution, with emphasis in the areas of intellectual property, business, antitrust and class action litigation. Mr. Martinez helps his clients resolve complex business and IP disputes, as well as protect and monetize their intellectual property and manage risk. - Jeff Hovden: My primary area of focus is generic pharmaceutical patent cases under the Hatch-Waxman Act and within the biotechnology industry. I have litigated numerous patent infringement and invalidity actions, including many cases involving top-selling worldwide prescription drugs for both generic and name-brand pharmaceutical clients.
https://blog.robinskaplantrialattorneys.com/tag/practice_area/intellectual-property-and-technology-litigation/
Action potential at the heart The basis of the electrical excitation of the heart is the so-called action potential. It is a temporally limited change of the electrical voltage across the cell membrane, which ends in a muscle action, in this case the heartbeat. With a duration of about 200 to 400 milliseconds, depending on the heart rate (the number of heartbeats per minute), the action potential at the heart is longer than that of a skeletal muscle or nerve cell. This protects the heart from overexcitation. Starting from a resting potential, a baseline voltage of about minus 90 millivolts across the membranes of the cells, the action potential at the heart goes through four phases of excitation. Different ion channels work together to change the electrical voltage across the cell membrane. These are mostly transport proteins that sit in the cell membrane and carry various tiny charged particles (ions) across it. This alters the electrical voltage at the cell and thus creates the action potential at the heart. In the first phase, the so-called depolarization phase, the transport capacity for positively charged sodium particles increases. These flow into the interior of the cells and raise the voltage from about minus 90 millivolts to plus 30 millivolts. This shift of the electrical charge into the positive range opens specific calcium channels in the heart, resulting in an influx of calcium particles into the heart cells. This second phase is the long plateau phase typical of the heart. Here the excitation is sustained, which prevents, among other things, the entry of additional superfluous action potentials. It ensures the controlled pumping performance of the heart and protects against cardiac arrhythmia. In the third phase, the repolarization phase, the electrical voltage slowly returns towards the resting potential of minus 90 millivolts.
In an energy-consuming process, the sodium ions that flowed in are actively transported back out of the cell across the cell membrane, against the concentration gradient, and the potassium ions that flowed out are transported back in. This process continues until the original resting potential is restored. The cell is then ready for a new action potential.

Action potential at the sinus node

The origin of the excitation of the cardiac action potential lies in the so-called sinus node. It is located in the right atrium of the heart near the junction of the superior vena cava, which carries blood from the upper body back to the heart. The sinus node consists of modified muscle cells that generate the action potentials necessary for excitation; they thus form the natural pacemaker of our heart. These are rapidly excitable cells with a natural frequency of about 60 to 80 beats per minute, which can be registered in the form of a pulse. From there, the resulting action potential travels along specific anatomical structures to trigger a contraction, a heartbeat, in the working muscles of the heart. The number of beats per minute adapts to the load on the human body. The sympathetic nervous system, a part of the autonomic nervous system activated primarily under increasing load, increases the rate of incoming action potentials. If its counterpart, the parasympathetic nervous system, is activated, which plays a role especially in the body's resting phases, the number of action potentials reaching the heart is reduced and the heartbeat slows down. Drugs and the body's own hormones, such as adrenaline, also influence this system.
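The four phases described above can be sketched as a simple piecewise voltage curve. This is only an illustrative model, not a physiological simulation: the linear ramps and the exact phase boundaries are rough assumptions built from the figures in the text (resting potential about minus 90 millivolts, peak about plus 30 millivolts, total duration of a few hundred milliseconds).

```python
def membrane_voltage(t_ms):
    """Very simplified piecewise sketch of a cardiac action potential.

    Phase durations and voltages are illustrative values taken from the
    text above; real action potentials are smooth, continuous curves.
    """
    REST, PEAK = -90.0, 30.0
    if t_ms < 0:           # before excitation: resting potential
        return REST
    if t_ms < 2:           # phase 1: rapid sodium-driven depolarization
        return REST + (PEAK - REST) * (t_ms / 2)
    if t_ms < 200:         # phase 2: calcium-sustained plateau (slow decline)
        return PEAK - 20.0 * ((t_ms - 2) / 198)
    if t_ms < 300:         # phase 3: potassium-driven repolarization
        plateau_end = PEAK - 20.0
        return plateau_end + (REST - plateau_end) * ((t_ms - 200) / 100)
    return REST            # phase 4: back at rest, ready for a new cycle
```

Evaluating the function at a few times reproduces the narrative: `membrane_voltage(-5)` gives the resting potential of -90.0 mV, `membrane_voltage(2)` the +30.0 mV peak, and by `membrane_voltage(400)` the cell is back at -90.0 mV.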
https://sanomedica.com/action-potential-at-the-heart-action-potential/
Social security disability can be a complicated process, requiring several steps to prove you are disabled. The complication ultimately begins with how social security defines disability. Social security defines disability as "the inability to do any substantial gainful activity by reason of any medically determinable physical or mental impairment which can be expected to result in death or which has lasted or can be expected to last for a continuous period of not less than 12 months." To prove you are disabled under this definition, social security will evaluate your disability using a five-step sequential evaluation process. That is a fancy way of saying they will review your claim using 5 steps. Part of that process requires you to first submit an application for benefits. Simply put, you must apply for the benefits so that social security can then determine whether you are disabled. An application can be done online, in person, or over the phone. We recommend completing your application online for two reasons. First, it gives you the opportunity to start and stop the application at your leisure. Second, and more importantly, when you submit the application online you receive a receipt showing that you applied for the benefits at a certain date and time. You would be surprised, or maybe you wouldn't, how often social security misplaces an individual's application for benefits. Having a receipt of the application makes proving the date you filed all the easier. In some cases, the filing date can mean the difference in when your Medicare health insurance starts or the amount of back pay you would receive. All very important things. Once an application has been submitted, social security will begin the five-step evaluation process. The first step in the process is determining whether you are performing "substantial gainful activity," as termed in the definition.
Substantial gainful activity is generally work where your earnings exceed a set monthly amount. That set monthly amount changes from year to year. In 2021, the monthly SGA amount is $1,310. If you are working and earning more than $1,310 per month in gross earnings, you are technically ineligible to receive disability benefits, as the definition requires an individual to be unable to engage in substantial gainful activity. It should be noted that if you are earning less than SGA, that must be due to a medical condition, not your own volition. We will discuss this further in step 2. Step 2 of the five-step process requires proof that your income is limited due to a severe physical and/or mental impairment. If your inability to work is related to anything but a medical condition, then you will be denied at step 2 of the evaluation process. For your claim for disability benefits to continue, you must show you suffer from a severe physical or mental condition that poses more than a minimal impact upon your ability to engage in substantial gainful activity. The severity of your condition is proven through the use of medical records. Social security will need medical proof not only of a medical diagnosis but also treatment notes documenting how your diagnosis limits you or what symptoms your condition causes. It is not enough to just be diagnosed with a medical condition. That medical diagnosis must impact your ability to work in some way. If you are found not to be engaging in substantial gainful activity and your medical conditions are deemed severe, then social security will review your claim under step three to determine if you meet or equal a listed condition. The social security listings, generally speaking, are a list of conditions that social security has deemed so severe that if you have the medical evidence to support the listing, you will be found disabled at step three of the evaluation process.
The Listings comprise a number of medical conditions and are broken down by category, such as musculoskeletal, respiratory, cardiovascular system, mental disorders, etc. There are a total of 14 categories of listings containing hundreds of conditions. It is quite difficult to prove you meet or equal a listing. In fact, most people are not found disabled based upon a listing but at steps four and five of the process. If you are not found disabled based upon a listed condition, social security will then determine your residual functional capacity before reviewing your claim under steps four and five of the process. In determining your residual functional capacity, social security will review your medical records to evaluate your physical and mental capabilities; essentially, it determines what you are functionally capable of doing in spite of your medical conditions. Once your residual functional capacity is determined, social security will then determine if you are capable of performing your past relevant work under step four of the process. Not all work qualifies as past relevant work. For your work to qualify as past relevant work, you must have worked the position long enough to learn it, earned SGA-level earnings, and performed the work within the past fifteen years. If your past work does not meet all of these criteria, then social security will not consider whether your residual functional capacity allows you to perform that work. Assuming social security determines your residual functional capacity precludes you from performing your past relevant work, they will then evaluate your case to see if there is any other work you may be able to perform that exists in significant numbers in the national economy. Social security has not specifically defined what counts as "significant numbers." The general rule is that if there are more than 12,000 jobs that may be performed with your functional abilities, then you will not be found disabled.
Of course, there are always exceptions to every rule. One of the main exceptions at step five of the process has to do with your age. If you are over a certain age, social security will use the medical-vocational guidelines to assess your ability to perform other work. If all of this sounds confusing, that is because it is. While social security's intent was not to create a complicated, convoluted process, the application process is certainly not easy. At the LaBovick Law Group we know exactly how to navigate the application process so as to put forth the best legal argument to win you disability benefits. Call us today at (561) 625-8400 for a free consultation.
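For readers who think in code, the order of the five steps can be condensed into a short sketch. This is a hypothetical illustration of the sequence described in this article, not legal software: the boolean inputs stand in for determinations that in reality require medical evidence and vocational analysis, and the $1,310 figure is the 2021 SGA amount quoted above.

```python
SGA_MONTHLY_2021 = 1310  # 2021 substantial-gainful-activity threshold

def evaluate_claim(monthly_earnings, has_severe_impairment,
                   meets_listing, can_do_past_work, other_jobs_available):
    """Sketch of the five-step sequential evaluation described above."""
    # Step 1: earnings above SGA end the claim.
    if monthly_earnings > SGA_MONTHLY_2021:
        return "denied: engaging in substantial gainful activity"
    # Step 2: a severe, medically documented impairment is required.
    if not has_severe_impairment:
        return "denied: no severe impairment"
    # Step 3: meeting or equaling a listed condition wins outright.
    if meets_listing:
        return "approved: meets or equals a listing"
    # Step 4: ability to perform past relevant work ends the claim.
    if can_do_past_work:
        return "denied: can perform past relevant work"
    # Step 5: other work existing in significant numbers ends the claim.
    if other_jobs_available:
        return "denied: other work exists in significant numbers"
    return "approved: cannot adjust to other work"
```

Note how the ordering matters: a claimant earning above SGA is denied at step 1 regardless of how severe their impairment is, which mirrors the sequential nature of the real process.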
https://www.labovick.com/blog/5-step-evaluation/
Are clones property, or human beings? What rights do clones have? And what rights do others have over clones? Cloning is something that I think we need to discuss more. Healthcare professionals are constantly challenged. Saving lives and making lives better is not an easy task. The idea of cloning humans raises so many questions. Why should humans be cloned? What purpose does cloning humans serve? Why would someone want to produce individuals, entities, or populations identical to the parent or original organism from which they were obtained or derived? Cloning can be useful and improve some people's lives, but there are still so many unanswered questions about it that I hope the experiments would be limited to plants, although I have read about experiments using animals. The idea of saving stem cells from umbilical cords can be useful and ethical, but how many people can afford to store the stem cells once they are harvested? Cloning organs this way is probably the most ethical way to clone organs, and it is certainly better for the patient receiving identical body parts. Are scientists considering cloning people for their organs? That is scary! I would hate to know that the only reason for me being on earth is so that someone could harvest my organs. Legal and ethical issues arising from the Human Genome Project, discussed at the Indiana University School of Law-Bloomington in 2001, included the methods used to clone, whether or not cloning is feasible, and property rights issues. There are two methods used in cloning: blastomere separation, which involves splitting an embryo soon after fertilization, and somatic cell nuclear transfer (SCNT), the technique used to clone Dolly, in which the nucleus of an egg cell is removed and replaced with the nucleus from a somatic cell. Both methods are capable of creating clones.
Back in the early 2000s, when lawmakers were drafting legislation to ban cloning, a former ethicist for the NIH, John Fletcher, commented, "the reasons for opposing this are not easy to argue". We are now twenty years down the road and scientists are cloning organs. Therapeutic cloning, also known as organ cloning, is the process of creating new human organs and tissues, never newborn babies, from the cultivation of stem cells. The resultant organ has a gene structure identical to the recipient's, so there is theoretically little chance of rejection. Cloning plays an important role in the development of stem cell research, because embryonic stem cells transplanted into a patient would be a genetic match for that patient and carry no risk of rejection, unlike xenotransplantation, the transplantation of organs grown in animals, which has a higher rejection rate. We have a moral obligation to cure diseases when we can, and to save lives when we can, but it is still unclear what rights a clone has and who owns the rights to a tissue. Could a clone ever be developed for the well-being of the clone? Who owns the clones? Courts already recognize a property interest in living material. Several different parties may potentially claim cloned organs or tissues: the DNA/tissue donor, the clone, and the scientist who developed the cloned tissue/organ or transgenic animal. Property law governing renewable and nonrenewable body parts addresses ethical and moral questions. The DNA donor has an interest in his or her own DNA and any claim over the organs of the clone. In Moore v. Regents of the University of California, the court determined Moore had a cause of action for breach of fiduciary duty, but the court declined to find a cause of action for conversion. To bring an action for conversion, Moore had to "establish an actual interference with his ownership or right of possession".
In other words, he had to have retained ownership of his cells after they had been removed. The court in Cornelio v. Stamford Hospital followed a similar line of reasoning. However, other cases indicate that there may be a property interest in cryopreserved pre-zygotes, and Hecht v. Superior Court involved property rights concerning sperm. The patentability of living things is addressed in Diamond v. Chakrabarty, where the respondent filed a patent application for a genetically engineered bacterium capable of degrading oil, something no naturally occurring bacterium is known to do. The patent examiner rejected the claim on the grounds that "micro-organisms are 'products of nature,' and… as living things they are not patentable subject matter." But the Court of Customs and Patent Appeals reversed that decision, and the Supreme Court affirmed, finding the bacterium patentable subject matter under § 101 of the Patent Act. Patent US 6,211,429, granted to the University of Missouri on April 3, 2001 by the United States Patent and Trademark Office (PTO), was written so broadly that it appears to include human cloning and products of cloning in its protection. Ethical public policy concerning humans either conceived or created through science entails: a right to autonomy, i.e., that his or her bodily integrity must not be invaded or compromised by others; no person or entity has the right to enslave, own, or control any human being, regardless of stage of biological development; any organism that is genetically human is a human being; a cloned embryo is distinct and separate from the person donating the genetic material, and therefore is a unique being protected in law; and no person or institution has the right to control or profit from any process designed to clone a human being. Minding Hearts is building advocacy and peer support groups in each state. The groups are created to raise awareness, educate, and advocate for those that might not otherwise be heard.
We are here for encouragement, education, and support. We cannot give legal advice, but we can try to point you in the right direction with your case. Links to legal services are listed with their states. Please share and let's grow our groups. We are here to support families and develop resources that maintain family integrity. We look forward to your support. If you would rather become active by donating, then visit the donation page. Related Articles: BioEthics of using Stem Cells, Cloning, Genetic Enhancements, Brain Machine Interfaces, and rapidly growing technological advances in science. References: Collins, F. S. (2006). The language of God: A scientist presents evidence for belief. New York, NY: Free Press. ISBN: 9781416542742. Cunningham, Comstock (2002). The Right to Patent a Human Being: Fact, Fiction, or Future Possibility? The Center for Bioethics and Human Dignity, https://cbhd.org/content/right-patent-human-being-fact-fiction-or-future-possibility. Hilmert, Laura J. (2001). Cloning Human Organs: Potential Sources and Property Implications. J.D., Indiana University School of Law-Bloomington 2001; B.S. Biology, Indiana University, 1998. Retrieved from http://ilj.law.indiana.edu/articles/77/77_2_Hilmert.pdf. Pozgar, G. D. (2019). NVPMD: Legal aspects of healthcare administration. 13E-Liberty Custom. Burlington, MA: Jones & Bartlett. ISBN: 9781284170931.
https://mindinghearts.com/2021/08/28/ethics-regarding-cloning/
Photo by Ian Lamont.

An Editorial

The reason I fell in love with my Blackberry was very simple: Internet access, specifically for train schedules and street directions. I seem to have no sense of direction whatsoever, so when I tell you that I was lost in the world before my Blackberry came along, it's not a riff off a cheesy love song but an unfortunately true statement. My family members in particular have mocked me for my issues trying to get from point A to point B, and while I prefer the term "directionally challenged," I knew that little pink smartphone would never judge me. Ours was a healthy user-device relationship: little BB would help me get where I needed to go, and I would make sure I kept it charged at all times. Imagine my horror, then, when I began to see the first signs of deterioration in my beloved smartphone. It came on slowly; literally, the phone took longer and longer to perform simple functions, and the screen often froze or, worse, a spinning hourglass would appear until I had to resort to taking the battery out and restarting my phone. I tried every treatment I knew, from leaving the battery out overnight to deleting e-mails from the handset to free up memory space, but made no progress. Finally, there was no more denying it, and the time had come to say goodbye to my old friend, my directional confidante: my Blackberry smartphone. Peace out, pink Crackberry; hello, white iPhone 4! Sure, I was sad to part with my first smartphone, but that period of mourning was over as quickly as it had begun. Between the phone's sleek design and multitude of applications (my cherished GPS directions included), I had been seduced by the latest and greatest phone Apple had to offer.* Yet recent news reports suggest that I'm not the only one jumping the good ship Blackberry for the shores of Apple. Even if you don't understand business-ese, the numbers don't look good for RIM, the company behind the popular business smartphone, Blackberry.
The company that once held the top share of the smartphone market has now been surpassed by Android (Google), with the iPhone (Apple) not far behind. What happened? "What happened to the Blackberry was the iPhone," says Leander Kahney, Editor and Publisher at CultofMac.com. "The Blackberry was a one-trick pony, really. It was more of a single-function device; so was the iPod, because it just played music, and the Blackberry just did email." Just email, however, was enough for the corporate companies that Kahney says were the majority of consumers and major clients for the company, RIM. "Companies used to buy them by the truckload and give them to their workers," Kahney explains, "but their workers just started buying iPhones themselves for their personal use. They found the iPhone worked just as well for company e-mails, so they just left the Blackberry at home." As consumers and execs started making the switch to the iPhone, Blackberry developers knew they had to find a way to compete with Apple if they wanted to keep their hold on corporate clients. Given the popularity of Apple's iPad and the variety of other tablets offered by competing smartphone companies, RIM decided to throw its hat in the ring and introduced its own tablet, the Playbook. Similar to the iPad, the Playbook promised users "the world's first professional grade tablet." The Blackberry tablet did not take off the way RIM hoped it would, and Kahney says they should have come up with a better product rather than bite off Apple's moves. "The Playbook just showed that they had no idea what they were doing. They made an iPad clone when they should have made a business device." To be fair, Kahney says that the potential was there for RIM to one-up the iPad, if only they had played to their strengths.
"The only security issue is that companies are worried workers will lose the iPad, and Blackberry could have offered a product that solved that issue, perhaps with some sort of corporate tracking component," he suggests. Most Apple products require a trip to the IT department if you want your iPad to work within your office's software interface, whereas Blackberry devices come readily equipped to support the corporate infrastructure. "They should have gone to those corporations and asked 'What do you need from a tablet to be corporate friendly?' The fact that they didn't do that shows where they went wrong. They had 'iPad envy,'" says Kahney. If Kahney speaks from the "Team Apple" perspective, then what do other smartphone experts have to say about the state of RIM? BTR also spoke with Phil Nickinson, smartphone expert and editor of AndroidCentral.com, who agrees that Blackberry's strong suit remains in the corporate world and feels RIM's failed Playbook is no reason to call it a game for Blackberry products. "The Blackberry is certainly not dead. That company has produced some pretty compelling devices, and their sheer entrenchment in enterprises is what they have working to their advantage. The fact is, all that Blackberry stuff is still out there, and it's not as though millions of handhelds are just going to disappear." What Nickinson and Kahney both recognize is RIM's influence within the corporate culture; companies appreciate how Blackberry products are easily integrated with their office infrastructure. As Kahney mentioned before, anyone looking to use their Apple product with their office's interface will have to go to their IT department, whereas a Blackberry user already knows that their handheld or tablet was designed to work with their office's system with minimal adjustments, if any.
It doesn't matter that the average twenty-something smartphone user with no sense of direction has made the switch from Blackberry to iPhone, because my phone is for personal use and not from a corporate account. However, as Kahney pointed out before, corporate users who switched to the iPhone for their business use did so because they originally bought it for personal use. The dilemma for RIM and Blackberry products is really a catch-22. Due to the extreme popularity of the iPhone, iPad, and other Apple products, RIM naturally must come out with a product that competes with new touchscreen smartphones and tablets. However, the Playbook resembles the competition rather than challenges it, and now RIM risks losing consumers who valued their corporate-friendly products. I'll be the first to admit that everyone has a moment when they don't know which way to turn, and I believe that RIM simply lost itself in a scramble to appeal to Apple elitists. It may be too soon to pronounce the death of Blackberry, but RIM should scrap their playbook (literally) and refocus on what they do best if they want to compete with iPhones. *Note: statement no longer valid after "sometime in the beginning of October". Reference: http://iphone5release.org/apple%E2%80%99s-ios-5-being-released-to-assemblers-around-sept-23/.
https://www.btrtoday.com/read/thursday-small-screen-week/
Date: This toolkit is designed to support women-led civil society organisations (CSOs) in the Arab States region to strengthen their capacity to deliver remote services to survivors of violence against women and girls (VAWG). Etat des lieux des inégalités et de la discrimination à l'encontre des femmes & des filles dans la législation tunisienne (State of play of inequalities and discrimination against women and girls in Tunisian legislation) Date: The Office of the United Nations High Commissioner for Human Rights in Tunisia, UN Women Tunisia, and the Centre de recherche, d'études, de documentation et d'information sur la femme (CREDIF) jointly carried out a study on the state of inequalities and discrimination against women and girls in Tunisian legislation in 2016, which was updated in 2021. Date: This newsletter captures the latest updates on the implementation of the Women's Peace and Humanitarian Fund (WPHF) in Lebanon. The newsletter includes the key highlights and achievements of the WPHF programme partners towards enhancing women's participation in the Beirut Port Explosion's response and recovery process. Date: Palestine's Second National Action Plan for the implementation of United Nations Security Council Resolution 1325 (UNSCR 1325) on Women, Peace and Security and Subsequent Resolutions follows the first National Action Plan adopted in 2016, recognizing the disproportionate impact of conflict and occupation on Palestinian women and girls and their vital participation in peacebuilding. Date: This policy brief highlights the various ways COVID-19 has affected women's health in Tunisia. During the first phase of the epidemic, many health services were postponed while hospital staff devoted themselves to preparing health services to tackle COVID-19.
COVID-19 and the care economy: Immediate action and structural transformation for a gender-responsive recovery Date: This brief presents emerging evidence on the impact of the global COVID-19 pandemic on the care economy. Complementing a separate UN Women brief on COVID-19 and economic recovery, this brief highlights key measures needed to address the increase in unpaid care work as a result of the pandemic, ensure adequate compensation and decent working conditions for paid care workers, and enable the participation of paid and unpaid caregivers in the policy decisions that affect them. Addressing the economic fallout of COVID-19: Pathways and policy options for a gender-responsive recovery Date: This brief presents emerging evidence of the COVID-19 pandemic’s impact on women’s economic empowerment. Complementing a separate UN Women policy brief on “COVID-19 and the care economy”, it considers the immediate gendered economic impacts, including widening socioeconomic divides and shifting national and international priorities for the allocation of resources, as well as the long-term implications for women’s employment and livelihoods. Date: Developed jointly by REACH and UN Women, this report identifies, discusses, and analyses the range of enabling factors and challenges faced by working Jordanian and Syrian refugee women. Generously supported by the European Union, through the EU Regional Trust Fund in response to the Syria Crisis, the EU MADAD Fund, as well as the Governments of France and Japan. Date: This year’s regular resources report analyses how UN Women mobilized its core contributions in 2019 to fulfil its normative, coordination, and operational activities mandate, in order to improve the lives of women and girls worldwide. Through the presentation of tangible results, this report presents case studies at the global, regional, and country levels to showcase the impact that regular resources have in the countries where UN Women is present. 
Date: This rapid assessment examines how the impacts of COVID-19 are threatening women’s ability to access justice. The assessment reflects challenges faced by women and girls of diverse backgrounds and socio-economic groups, including those experiencing overlapping disadvantages and those facing amplified challenges in humanitarian settings. Cross-regional and local experiences are highlighted, and quantitative data is utilized where available. Date: This brief explores the implications for the provision of essential services for women and girls who have experienced violence during the COVID-19 pandemic. It provides recommendations for governments, civil society, and international organizations that are seeking to improve the quality of and access to coordinated health, police and justice, and social services for all women and girls during the crisis and provides examples of promising practices to date. Date: Across every sphere, from health to the economy, security to social protection, the impacts of COVID-19 are exacerbated for women and girls simply by virtue of their sex. This policy brief explores how women and girls’ lives are changing in the face of COVID-19, and outlines suggested priority measures to accompany both the immediate response and longer-term recovery efforts. Family-friendly policies and other good workplace practices in the context of COVID-19: Key steps employers can take Date: With many businesses struggling to survive as a consequence of the coronavirus disease (COVID-19) outbreak, loss of jobs and income and rising working poverty are a reality for many workers. This document offers (interim) recommendations for employers to mitigate the negative consequences stemming from COVID-19. 
The empowerment of women and girls with disabilities: Towards full and effective participation and gender equality Date: This publication addresses the key areas of engagement for UN Women to strengthen its work on the empowerment of women and girls with disabilities. Date: This week in Beirut, UN Women and the United Nations Economic and Social Commission for Western Asia launched a regional project to estimate the cost of violence against women as an instrument for policy reform and advocacy. Date: The Global Guidance, developed by UN Women and UNESCO, provides key information to governments, policy-makers, teachers, practitioners and civil society who wish to take concrete action against school-related gender-based violence. It introduces approaches, methodologies, tools and resources that have shown positive results in preventing and responding to gender-based violence in school settings. It should contribute to further promoting the generation of knowledge, evidence and standards of response to this pervasive problem.
https://arabstates.unwomen.org/en/digital-library/publications?Topic=4d1f5200c780495c9767199979ee8010&f%5B0%5D=subject_area_publications%3A1305&f%5B1%5D=subject_area_publications%3A1320&f%5B2%5D=subject_area_publications%3A1335&f%5B3%5D=subject_area_publications%3A1414
Falling is one of the most common accidents with potentially irreversible consequences, especially for special groups such as the elderly or disabled. One approach to this issue is early detection of the falling event. Towards the goal of early fall detection, we have worked on distinguishing and monitoring basic human activities such as walking and running. Since we plan to implement the system mostly for seniors and the disabled, simplicity of use becomes very important. We have successfully implemented an algorithm that does not require the acceleration sensor (the smartphone itself in our application) to be fixed in a specific position, whereas most previous research requires the sensor to be fixed in a certain orientation. This algorithm reviews data from the accelerometer to determine whether a user has taken a step and keeps track of the total number of steps. In testing, the algorithm was more accurate than a commercial pedometer when comparing outputs to the actual number of steps taken by the user.
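A minimal sketch of the orientation-independent idea described in the abstract: instead of relying on any single axis, use the magnitude of the 3-axis acceleration vector, so the phone's position in a pocket or bag does not matter. The threshold and refractory-gap values below are illustrative assumptions, not the parameters of the paper's actual algorithm.

```python
import math

def count_steps(samples, threshold=11.0, min_gap=5):
    """Orientation-independent step counter sketch.

    samples: iterable of (ax, ay, az) accelerometer readings in m/s^2.
    threshold: magnitude (just above 1 g) that marks a step impact.
    min_gap: minimum number of samples between counted steps,
             to avoid double-counting a single impact.
    """
    steps = 0
    last_step = -min_gap  # allow a step at the very first sample
    for i, (ax, ay, az) in enumerate(samples):
        # Magnitude is invariant to how the phone is oriented.
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

For example, a trace of quiet standing (magnitude near 9.8 m/s²) with three brief spikes above the threshold would yield a count of 3; real implementations typically add filtering and adaptive thresholds on top of this basic scheme.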
https://scholars.cmich.edu/en/publications/towards-the-run-and-walk-activity-classification-through-step-det-3
Behavioral therapy helps patients identify unhealthy behaviors and eliminate them for their overall well-being. Clinicians work with the client with the understanding that all behaviors are learned and, as such, unhealthy behaviors can be changed. Who can benefit from Behavioral Counseling? Behavioral therapy can help individuals of all ages with various diagnoses such as anxiety, depression, PTSD, ADHD and many more. Common Types of Behavioral Counseling: - Cognitive Behavioral Therapy: Clinicians provide a holistic approach by looking at behaviors and the thoughts or beliefs that influence the outcome of our actions. - Play Therapy: Clinicians communicate with children in their common language of play. Clinicians practice responding to various situations in a more positive manner, reinforcing positive behavior and discouraging negative behavior. Schedule a call with a member of our team. Whether you need help for yourself, a child in your care, or for a personal relationship, we are here to help. Here is what you need to do: - Answer a few questions about yourself and how we can help you. - Discuss insurance coverage. - Book your session with a Licensed Professional Counselor.
https://www.vcounselors.com/behavioral
All copyrightable text and graphics, the selection, arrangement, and presentation of all materials (including information in the public domain), and the overall design of this web page are ©2012 Waller & Co. All rights reserved. Permission is granted to download and print materials from this website for the purpose of viewing, reading, and retaining for reference. Any other copying, distribution, retransmission, or modification of information or materials on this site, whether in electronic or hard copy form, without the express prior written permission of Waller & Co., is strictly prohibited. Policy Concerning Infringement Claims and Repeat Infringers Waller & Co. will process and investigate proper notices of alleged copyright or other intellectual property infringement related to material on its websites or servers and will respond appropriately, following the guidelines of the Online Copyright Infringement Liability Limitation Act and other applicable intellectual property laws. Under appropriate circumstances, Waller & Co. will act expeditiously to remove or disable access to the material that is claimed to be infringing or claimed to be the subject of infringing activity. Waller & Co. will, following appropriate investigation, terminate or disable access by repeat infringers. Disclaimer The materials available at this web site are for informational purposes only and not for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any particular issue or problem. Use of and access to this Web site or any of the e-mail links contained within the site do not create an attorney-client relationship between Waller & Co. and the user or browser. The opinions expressed at or through this site are the opinions of the individual author and may not reflect the opinions of the firm or any individual attorney. Contact Us To request more information about Waller & Co., contact us at (312) 606-9100.
https://wallerco.com/legal-disclaimer/
Journalism is a public profession that plays a critical role in helping democracy develop and function. As a common carrier of public discussion, it provides a public forum for criticism and compromise. It also carries with it a responsibility to improve the quality of debate by providing verified information, intellectual rigor and objective reporting. These are essential to the practice of journalism, as discussed by Bill Kovach and Tom Rosenstiel in their book The Elements of Journalism (2007). I greatly admire journalists. They, so to speak, carry the words and images of the world on their shoulders. They influence the way we see the world. So good journalism is far more than simple fact reporting. The profession requires that journalists be expert informants, watchdogs and storytellers, as well as good writers. They are expected to do their job with professionalism, integrity and transparency. Their sense of responsibility should compel them to convey information objectively, accurately, fairly, impartially, and with proportionality. Unfortunately, bad journalism is all too common when it comes to Africa in the U.S. mainstream press. Articles routinely either fail to report important news about Africa or report it shallowly, inaccurately, unfairly, or, more often than not, with a slanted focus. This leaves the average American sorely and dangerously uninformed and Africans sorely and egregiously misrepresented. The latest example of how anything Africa is poorly reported is the Washington Post’s March 16 article on the U.S. administration’s stance on global health assistance. The article is very disconcerting. You can read this article in three different ways, but whichever way you read it, it is incongruent and one more example of careless journalistic writing, careless editing and careless publishing about Africa. I truly expect more of the Washington Post.
If you read it vertically, from headline to featured photograph and its caption to the featured attention-grabbing quote, the resulting conclusion is: bad actors support reproductive rights. The U.S. is building a global antiabortion coalition, and African women line up for … abortion? to join the coalition? No. To vote in “Yola, Nigeria, last year” (the presidential elections were on February 23, 2019!), a country that “supports family-planning” and is among “other bad actors”. Message: a veiled criticism of African women’s prerogative to reproductive rights. If you read it horizontally, by focusing on key words, the resulting conclusion is: a report on a “key women’s conference” in New York and how the U.S. is tying its foreign aid for women’s empowerment to traditional family-centered values through an “ideological colonization” that condemns abortion, wants to ensure women continue contributing human and social capital through unpaid care and domestic work and backs nontraditional allies with dismal human rights records. If you read it word-for-word, the resulting conclusion is: a report on two opposing views on family planning worldwide - those against it and for restricted access to reproductive health and rights and those in favor of it and against “gag rules”. My intention is not to pick on the Washington Post. But I am a subscriber and read it every day in print form. For the most part, the coverage and reporting exemplify good journalism. But when it comes to covering Africa, I am very disappointed and more than a bit irked. As a “boots-on-the-ground” expert in economic and human development in West Africa, I have had more than one Letter to the Editor (“What spurs this human low”, “Do business with Africa – not in Africa”) published in the Washington Post and have posted more than one article (“Shine an Illuminating Light on Africa”, “The Hottest Business Growth Bandwagon”, “An Im#BalanceforWorse”).
In all of them, I express my dismay with how anything Africa is so tendentiously covered in the mainstream American press and then try to provide a more balanced viewpoint or some constructive criticism. The article on the Trump administration’s global health coalition could have been a great and informative story about the importance of global health coalitions and how opposing stances on family planning work for or against progress or stagnation in countries like Nigeria. This West African economic and social behemoth has a high fertility rate that is fueling underdevelopment. According to the Nigerian Population Commission, the total fertility rate is 5.5, an average of roughly 6 children born per woman, with a 3% annual population growth rate. Even assuming a decline in the total fertility rate to 3.7, with a population of more than 195 million, by 2050 the country is expected to grow to over 440 million…in a country roughly the same size as Texas. Instead, the article in the Post is one more example of how incongruent and careless journalism has become the bane of Africa. I recognize that if an article gets a detail wrong it doesn’t mean the fundamental facts are wrong. Journalists work hard, and many of the reasons for reader frustration or disgust are beyond the control of reporters. But presenting fundamental elements of a story (text, photo + caption and attention-grabbing quote) in a careless and sloppy manner not only reflects poorly on the journalists and their publication. It also effectively lowers trust levels and makes readers wary of future content and ready to seize on every misstep. The overriding objective of a journalist should be to empower readers and viewers to make the best possible decisions about their lives, their communities, their country and government. It is also his/her job to inform, educate, and guide about what is happening in the world at large. A great story is one that covers something that is important and relevant to its readers.
It requires using verification, to ensure it is newsworthy and compelling, as well as reliable storytelling to make it interesting…without distorting it in any way, shape or form. Good journalism can change the lives of citizens in both small and big ways. Bad journalism can diminish the value of their lives. And that is the crux of the U.S. mainstream media’s failure when reporting on anything Africa: even if it involves conscious, systematic verification to produce a “functional truth”, it evidences some deeply ingrained problems: poor or incomplete verification, emphasis on conflict and negative news, lack of context, failure to connect the facts to relevance or misleading connections, perpetuating stereotypes, and even invasion of privacy. The end product: a constantly skewed picture of Africa. In a world assailed with constant, instant and ubiquitous information and news, there is a growing need for readers to not only defend their right to “real” vs. “fake” news. We also have a responsibility to assess the strengths and weaknesses of a news story, and to provide constructive criticism that can help professionals in the field be more sensitive to the experiences and viewpoints of people and organizations outside the traditional professional community of journalists. That’s what Letters to the Editor are for. Some letters I send and get published; others I send and are not published; other times I know they won’t get published and opt to practice my responsibility by posting an article on my web site and on LinkedIn. This is one of those times: sent but not published and then developed and posted as a more detailed article.
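As an aside, the population projection quoted earlier is easy to sanity-check with simple compound growth. A minimal sketch, treating the article's 195 million starting population and 3% annual growth rate as given, and a roughly 31-year horizon to 2050 as an assumption:

```python
# Sanity check of the compound-growth arithmetic behind the Nigeria
# projection discussed above. The 195 million population and 3% annual
# growth rate come from the article; the 31-year horizon to 2050 is an
# assumed round figure.

def project_population(current: float, annual_growth: float, years: int) -> float:
    """Project a population forward under constant compound growth."""
    return current * (1 + annual_growth) ** years

projected = project_population(195e6, 0.03, 31)
print(f"Projected 2050 population: {projected / 1e6:.0f} million")
```

At a constant 3% the figure comes out well above 440 million, which is consistent with the article's more conservative projection that assumes a declining fertility rate.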
https://www.upboostllc.com/blog/the-bane-of-africa-incongruent-and-careless-journalism
Penny: We talk a lot about change today but as Chris Adam, Director of Strategic AM Pty Ltd, Brisbane, Queensland, points out in this post, change has always been with us – and we have survived. But consider our reactions to the changes that have occurred within the last 100 years. Have we really done much better than just blunder through? And, can we do better now? Indeed, “how” do we ‘rethink the future’? 1. The Inevitability of Change We need only reflect on the last century to recognise the inevitability of change: - The second decade of the 20th century was defined by “the war to end all wars”, which brought destruction and dislocation on a scale never before experienced; - The 1920s were a decade of social and economic change finishing with the most disruptive stock market crash of all time; - The 1930s yielded the depression with further social and economic dislocation on a grand scale; - The 1940s were defined by WWII, with death, destruction and dislocation on a scale that dwarfed our previous efforts; - The 1950s were a massive social and economic challenge as the world rebuilt and millions immigrated; - The 1960s sexual revolution; - The 1970s energy crises; - The 1980s “greed is good” decade; - The 1990s “dot com bubble”. My point is not to deny that current forces are indeed a major challenge, but to demonstrate that change is inevitable and, as a species, we have a proven capability to respond to such challenges. 2. How do we “Re Think” the Future? We tend to see change as a threat and prefer it didn’t occur, so how do we address this challenge and avoid a “head in the sand” attitude? We need to take the future seriously, but predicting the future is problematic. With the possible exception of WWII, practically no-one forecast the massive changes that characterised each decade of the 20th century.
Current predictions that IT will lead to massive unemployment as machines take over echo the claims of the 1990s, when it was forecast that traditional retail was “dead” because people would abandon shopping (as an inefficient and limited way to obtain what we want and need). 3. So, if we can’t predict the future, how do we “take the future seriously”? Ideas? Penny: Today’s post is by Hein Aucamp, Director, WA Integrated Asset Management, and member of our Perth City Chapter, WA. He considers what infrastructure decision making today might gain from what he has learnt from years in the volatile area of Information Technology. I spent many years developing Engineering and mapping related software. Rapid change, which is becoming apparent in infrastructure, has always been a feature of Information Technology, where the knowledge half-life is said to be 2 years. I trust that Infrastructure Decision Making will never become as volatile as that, but it will certainly be less stable than it has been in the past. Here are my suggestions of translatable lessons. One can cope much more easily with a volatile discipline by dividing it into two broad categories: first, timeless principles; second, current methods to achieve the timeless principles. We need to be able to distinguish between them, to hold tightly onto the timeless principles, and also to be lightly invested in what is temporarily useful. Timeless Principles To get our thinking started, I suggest that these are some of the timeless principles: • Infrastructure must solve a human need. This sounds as if it is stating the obvious, but think about a situation where infrastructure solves a narrow need but creates a broader pernicious problem: environmental damage, economic hardship for generations, etc.
The timeless principle here is a healthy understanding of the service needs we are trying to solve, and the trade-offs involved in our proposed solutions – and indeed a calculation about whether in some cases it would be better to live with the problems. • Infrastructure creates a long-term responsibility, with unforeseen obligations that might emerge only over time – arising through legislation, reporting requirements, or safety issues. Think for example of the difficulties of managing asbestos, which was once very much in favour. Once built, society will tend to want to renew infrastructure. • Infrastructure will always have a political aspect. Public infrastructure requires funding by governments and will be attractive during electioneering. Suggestion: an independent central banking system has immunised monetary policy from most aspects of political influence except commentary. Are some lessons for infrastructure possible here? I am of course not suggesting a command economy, but perhaps a broad consistent policy climate could be created – analogous to the way objective renewal programs are supposed to stop influential people jockeying to have their roads resealed first. Current Methods On the other hand, here are some of the temporary, current methods at our disposal to deal with the timeless principles. We know they are capable of adjustment, because they have been adjusted in the past. • Funding mechanisms. • Contrasting promises during elections. • Technology. • Legislation. • Best practices. Comment? Do you agree with this division? With the ‘timeless principles’? How can we apply these lessons from Information Technology? What other areas might we draw upon for valuable lessons? Welcome to 2018 and our January theme – “Get Ready for Change”. Who is this for? To get ready for change we need to take it seriously. Over the years, as professionals, we have studied and built up an enormous wealth of practical experience.
This experience has served us well, and much of it will continue to serve us well. But some won’t. Sorting out what was valid from what is still valid requires us to challenge practically everything. This not only takes effort, it takes courage. What we know becomes part of who we are, so that challenging what we know seems as if we are challenging, even disowning, ourselves. Moreover, for those who are brave enough to challenge themselves, there is still the problem of disenfranchising friends and colleagues who are content to stay with the way things have always been. For those who want to take up this challenge, to move ahead and ready themselves for the issues and opportunities to come, but who want to do so without offending colleagues and friends – try out your ideas here! This is a ‘safe zone’. Only those who are seriously thinking about the future comment here. Not everyone agrees, but everyone is prepared to think, to put forth their ideas, challenge those of others, and to be challenged in turn. Re-thinking Challenging what we know is an exercise in re-thinking. Here are some of the issues that we will be addressing: - Re-thinking the aim of infrastructure investment (e.g. what do we mean by ‘public value’ and how do we create it?) - Re-thinking the players – who should be involved in the decisions? How do we involve more points of view? How do we ‘think forward’? (e.g. Citizens’ juries) - Re-thinking the decision processes (tracking progress) - Re-thinking the way we measure (beyond the dollar?) What else? What are the issues that you consider we should be looking at in our attempts to get ready for change? This is the last post for this month, and for our current theme on Adaptability in Infrastructure, but it is clearly not the end of the issues that we need to consider on this subject and so we will return to it later in 2018. Some of the issues remaining include: Adaptability v Flexibility? Are they the same?
Mark Neaseby suggests that it is dangerous for us to think of these terms as interchangeable. Flexibility has connotations of deciding now what may occur in the future and ‘building it in’. I think of the technical college in the UK that, allowing for a potential future need for super tall machines, built one of its instructional areas with extra high ceilings. The need never eventuated, but the cost of heating that building in the cold UK winters greatly increased their operating costs and reduced the use that could be made of that space. Adaptability does not mean that we decide NOW what will be needed in the future, but rather that we design so that WHEN we know what is required, we are easily able to change. Adapting v Adaptability? Adaptability is a functional characteristic, and one that is going to be essential for future infrastructure design. Adapting is an action, and there is going to be considerable need for this as well, as we cope with infrastructure that is no longer suitable for its designed task, or where its designed task no longer exists. Questions now include: - What are the other key questions/ issues that you would like to discuss? - What examples of successful Adaptation or Adaptability in design do you know of? - Please share with us, who is working in this area and what are they doing. Wishing everyone a most prosperous and exciting 2018! Please rejoin us January 9 as we now take a short seasonal break. ‘Sure, the infrastructure is overdesigned/excessive at present but, with growth, it will be adequate in the longer term’. Thirty years ago, this was the accepted theory underlying much building construction, particularly in electricity generation. When you have grown up with a particular mindset, it is hard to shift it. Even when all the evidence points in the other direction. At my first presentation on infrastructure renewal in 1987 I had a keen audience and question time went for over an hour. 
Then one fellow asked, ‘With the technology we now have, surely we could make our buildings last for 300 years?’ My response brought question time to a sudden close. What I said was: ‘My goodness! Why would we want to?’ Peter Erkelens, of the Eindhoven University of Technology, is one professional who has been considering this problem for more than a decade now, and he has developed the ‘lifespan approach’. His view is that we should design and select the components and their connections in such a way that they function in accordance with the intended lifespan. He suggests we think in terms of the relationship between the economic (or functional) lifespan and the technical lifespan and that we look at three scenarios: A. Economic life span < Technical life span. The components of this infrastructure should be re-usable and/or recyclable. B. Economic life span = Technical life span. The components should be recoverable and then recyclable. C. Economic life span > Technical life span. The components of the infrastructure should be replaceable and recyclable. Peter Erkelens argues that ‘the design efforts should be such, that the resulting products are sustainable. This requires thinking about environmental effects and should include options for re-use, replacement and recycling’. To see detailed ideas about how this concept can be applied to buildings, both offices and housing, see Peter Erkelens’s work. Hein Aucamp, Director, WA Integrated Asset Management, and member of our Perth City Chapter, WA, continues his exploration of adaptable infrastructure. The linear scarring of a redundant road and the tons of aggregate mean that the disposal cost is significant. So we tend to resist discussing that such a situation would benefit from adaptation; our problems begin in earnest when people agree with us and want to know how to do it. I can offer a tentative suggestion in another direction, but applications may prove elusive.
The most adaptable infrastructure is that which supports the services with the greatest degrees of freedom. For example, compare air transport with road transport. Consider air transport. The physical infrastructure to connect Australian States by air is a small length of runways, some sophisticated buildings, and an expensive fit-out of smart electronic equipment with highly trained personnel. Every aspect is reconfigurable by procedures and routing schedules except for the landing and take-off points: and even these can be by-passed. On the other hand, the physical infrastructure to connect Australian States by road is a byzantine spider’s web of expensive roads and bridges of varying degrees of repair. Compared to air travel, nothing is reconfigurable by policy except speeds or standards when reconstruction is done. It is not always possible to transport goods and services by air rather than road; we will probably always need both. But my tentative suggestion is that where possible, choose services with the greatest degrees of freedom, and build the infrastructure to support those services. Provide virtual library services. Choose wireless or satellite services rather than cable services. I would really appreciate comments that could improve my line of thought. Hein Aucamp, Director, WA Integrated Asset Management, and member of the Perth City Chapter, WA, is our guest blogger this week. In this, the first of his two posts, he looks at what adaptable infrastructure means. One of the earliest examples of adaptable infrastructure was in the 1940s. Guy Maunsell assisted the war effort by designing his famous Maunsell Forts, which were floating concrete structures, sunk in strategic positions. Their deployment was similar to offshore oil platforms. They were adaptable in their deployment. One outside Britain’s territorial waters eventually became famous after being proclaimed as a principality – similar to the Principality of Hutt River in WA.
But although military infrastructure is definitely adaptable from one perspective, it is also prohibitively expensive to imitate in peace time. For infrastructure to be adaptable in a meaningful way, we don’t merely mean that adaptation can be done to it; the adaptation process must be relatively cheap and easy. For example, if our so-called adaptation process involves demolishing and rebuilding 50% of an item, we could (with at least equal honesty) describe our modification as demolition, disposal of economic value, and reconstruction. Adaptability of infrastructure can apply to several different aspects of its responsiveness to changing needs. Ideally, we would require rapid, easy, and cheap ways to achieve the following: - Adaptability in deployment to allow efficient construction as needs become apparent. - Adaptability in function to allow efficient change in use for either a range of foreseen needs or for emerging needs. - Adaptability in recovery to allow it to be moved or recycled and re-used when it unexpectedly emerges that it is no longer necessary – or that better alternatives have appeared. QUESTION Do we have any examples of infrastructure that is adaptable to this extent? We have modularisation which allows rapid building and extension of structures that follow a pattern. And we have some ability with roads and buildings to make rapid operational adjustments to accommodate a multipurpose environment. But true adaptability is hard to achieve. The main problem is that when the need reduces or disappears, the infrastructure resists modification without loss of economic value – which is a polite way of saying we have to demolish it and pay to reinstate the environment. Next: Adaptability in Recovery. A recent study by Reza Taheriattar at the University of NSW describes a mathematical approach to assessing adaptability. He distinguishes between ‘designed in adaptation’ (i.e.
guessing at the likely future changes that will take place and allowing for them in the design) and ‘fortuitous adaptation’ (i.e. guessing at them and calculating the cost of adjusting for them later). [Note: I do not use the word ‘guessing’ disparagingly! Guesses, like assumptions, are extremely important, but we have to be careful we don’t mistake them for fact.] Reza demonstrates a financial analysis model and an options analysis model, but perhaps the most interesting is his social/environmental analysis, where he includes non-measurables by the use of fuzzy numbers and qualitative analysis. He doesn’t give any detail of how these figures are obtained, but the general idea is promising. The example he gives is of rock seawalls, where he compares a ‘designed in adaptation’ approach (building a primary layer of larger armour units and building a parapet wall with a stronger foundation) with a ‘fortuitous adaptation’ approach (later adding bigger armour units on the seawall face and then strengthening the parapet wall foundation). His general conclusion is that his method provides ‘an easy to use method for financial valuation of investment in adaptable infrastructure’ and that ‘Life cycle costing could indicate whether infrastructure adaptability is sustainable and whether inclusion of environmental/social criteria enhances viability.’ As with any such academic study, it is subject to further research and development, and the engineers amongst you may like to refer to his presentation. All of us, however, may want to consider whether a study which compares two approaches to ‘known’ events (‘known’ in the sense that you are able to put costs and timing on the outcomes) is really an answer to the problem of dealing with unknown events, where ‘adaptation’ may be more a question of how to efficiently and effectively leave your options open. Your Thoughts?
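To give a feel for the kind of comparison such a financial valuation makes, here is a minimal sketch contrasting a 'designed-in' upfront premium with a deferred 'fortuitous' upgrade on discounted cost alone. All dollar figures, the discount rate and the upgrade timing are hypothetical, and this deliberately ignores the options analysis and fuzzy social/environmental scoring in Taheriattar's actual method:

```python
# Hypothetical comparison of 'designed-in' vs 'fortuitous' adaptation on
# discounted cost alone. All figures are invented for illustration; this
# is a simplification, not Taheriattar's model.

def present_value(cost: float, year: int, discount_rate: float) -> float:
    """Discount a future cost back to today's dollars."""
    return cost / (1 + discount_rate) ** year

# Designed-in: pay a premium now for larger armour units and a
# stronger parapet foundation.
designed_in_cost = 1_200_000

# Fortuitous: build cheaper now; pay for strengthening in year 15
# if conditions demand it.
fortuitous_cost = 1_000_000 + present_value(600_000, year=15, discount_rate=0.05)

cheaper = "designed-in" if designed_in_cost < fortuitous_cost else "fortuitous"
print(f"Designed-in: ${designed_in_cost:,.0f}")
print(f"Fortuitous:  ${fortuitous_cost:,.0f}")
print(f"Cheaper on discounted cost: {cheaper}")
```

With these particular numbers the designed-in option wins, but the ranking flips at a higher discount rate or a later upgrade year, which is exactly the sensitivity a life-cycle costing exercise is meant to surface.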
“Renewal is so Yesterday” (December 5) argued that, in a changing world, we cannot afford to think of renewal as replacing what we currently have, perhaps with ‘better’ or ‘higher’ quality which, when you think about it, is what we are currently doing. We need to focus instead on ensuring that any infrastructure we build now is ‘future friendly’, meaning it can be adapted as needs and opportunities change. However, new infrastructure – while it often occupies most of our thoughts and media column inches – is at any time really only a very tiny percent of our total infrastructure stock. How do we defer the need to renew – and thus avoid committing ourselves to another 20, 40 or more years of 20th century assets? The answer is obvious: we need to put more effort into maintenance. Well-maintained assets last! So, if we are to maximise our chances to benefit from the changes that are coming – and avoid adding to the mounds of redundant, or stranded, assets that change has already brought about – then we need to focus on extending the lives of our existing assets by better maintenance. It may not be as glamorous as renewal, but better maintenance is key to succeeding in a changing world. It does what nothing else can – it gives us the chance to learn more about the future before we build for it. Agree/Disagree? Counter arguments welcome! Special Note: The Asset Management Council is running a special webinar on Tuesday 19 December – “Life extension of a gas powered generator”, the award-winning presentation by Mark McKenzie and Giuliano Cangelosi. Find out more and sign up here. The strength of concrete – but with adaptability! Interblocs are large-scale concrete lego blocks, and just like lego you can construct – and then reconstruct.
I was fascinated with the ability of interblocs to adjust as needs change over time, but when I spoke with Jack Bright at the IPWEA Conference in Perth last August, he was more interested to tell me about their environmental and cost aspects, and – not for the first time – I noted how good ideas tend to address not one, but multiple issues. 1. Avoiding Waste NZ and international evidence indicates that 2.5-3% of all ready mix concrete will end up as surplus to requirements. This is perfectly good concrete; however, because of the perishable nature of concrete it ends up wasted. Traditional approaches to dealing with this waste are to dry it, crush it, and use it as recycled concrete aggregate (which has a high embedded energy content), or to send it to landfill. Jack explained that the Interbloc system was part of a larger sustainability initiative called Envirocon, a product stewardship scheme for the ready mix concrete batching waste streams. Envirocon have developed technology to analyse each unique mix of concrete and put it to sleep for up to 72 hrs, which allows the aggregation of wet surplus concrete for transport back to a central processing plant where it is upcycled into precast concrete products. The major benefit here is the reuse of a substantial waste stream with minimal extra processing. There are also a number of indirect benefits: an estimated 1.3 million km of truck movements are eliminated by enabling trucks to return direct to the plant; new jobs; compounding economic growth; etc. 2. Re-Use and the Circular System The design principles behind the precast concrete products also lend themselves to this idea of a truly circular system. Both Interbloc and Stonebloc are modular wall systems which deliver greater efficiencies in the building process. While acting as a permanent structure when assembled, the blocks can be easily disassembled, reconfigured to suit changing requirements, and reused at the end of the structure’s life.
In New Zealand, where Interbloc Systems have been in use for over ten years, there is even a buy-back guarantee.
http://talkinginfrastructure.com/
Ansel Adams (1902 – 1984) was an American photographer best known for his black-and-white landscape photographs of the American West. With more than half a century of camera work behind him, Ansel Adams stands as one of America’s greatest landscape photographers. His career is punctuated with countless elegant, handsomely composed, and technically flawless photographs of magnificent natural landscapes. No contemporary photographer equaled the lifetime contributions of Ansel Adams in bringing public recognition to the art of photography, and none taught the techniques of black-and-white photography so widely. His strength as an artist is largely attributed to his tireless investigation of the methods of photography, developing a careful darkroom technique of exposure and development which he called the Zone System. Striking photographs of Yosemite and the surrounding Sierra Nevada, capturing the elusive visual myth and mood of these wild places, became the wellspring of Ansel Adams’s consciousness and brought him widespread popular acclaim. His intimate understanding of, as well as passion for conservation of, this pristine wilderness gave Ansel Adams the energy and tenacity needed to bring subjects to life for a wider public. Adams developed the Zone System as a way to determine proper exposure and adjust the contrast of the final print. The resulting clarity and depth characterized his photographs and the work of those to whom he taught the system. Adams primarily used large-format cameras despite their size, weight, setup time, and film cost, because their high resolution helped ensure sharpness in his images.
https://monovisions.com/ansel-adams/
Before kicking off a new digital project, you need a thorough project brief, which can only come together accurately (and effectively) once you’re prepared to answer a few basic questions that you’re probably already familiar with: who, what, why, when, and how. When our team at Modern Tribe approaches a new project of any size, we tackle project definition through two major components: Gathering Requirements: • Who are the users? • What do they need? • Why do they need it? Building a Roadmap: • How will we build it? • When will we have it done? On the surface, these critical process milestones are closely intertwined. But each step requires independent consideration, allowing us to think about goals and objectives first, without getting mired in logistics and stuck on the realities of budgets, timelines, and technical limitations. First, we get clarity on what success looks like. Then we focus on how to bring that vision to reality. You’re unlikely to be the only decision maker in your organization, of course. There’s more likely a whole team of people waiting to offer input on what these priorities look like and where the biggest needs reside. Soliciting this input and subsequently organizing the resulting information in a cohesive way can be a time-consuming—and quite frankly, exhausting—endeavour. By creating some structure around this process, you can turn hours of wandering brainstorms into a short series of workshops that will ultimately tie your project scope back to your organization’s most important objectives. This can be especially helpful when you’re faced with a large, heavily invested team that may have a wide variety of technical aptitude, comfort, and confidence. Getting People Engaged “Requirements definition” has a tendency to sound pretty technical; many key business leads who could otherwise offer valuable early input instinctively end up opting out of this highly strategic early step because of misperceptions. 
Project leads also have a tendency to want to keep a small, efficient team, and might not want to invite too many opinions. But if stakeholders aren’t engaged early on, we almost inevitably hit some major roadblocks, “mystery voices,” and unanticipated requirements that could otherwise have been captured early in the process with the right engagement tools. Think of “requirements definition” as a fancy way to say “goal setting.” We’re building alignment on the priorities for the project and establishing success criteria. From the most straightforward website to a more complex digital project, that particular conversation is super relevant to the technical and non-technical alike.

Fueling User Stories

We’re not just building a product—we’re building a solution. Solutions solve something. So let’s start by understanding what challenges we’re trying to solve. Give your departments or business units the required space to talk about their core business objectives and the unique users and goals that drive success in each of those areas. User stories are a great way to document this information. A classic user story looks like this: As a <user>, I want <some goal> so that <some reason>. But there’s often some additional context that’s helpful to capture at the ideation stage. Practically speaking, here’s a framework you can use for the initial discussion, whether you’re whiteboarding in person or brainstorming remotely through a shared spreadsheet.

Adding Context

For each department or business unit, create a thorough list of key business functions. Establish user types, goals, and desired outcomes for each one. Then, based on your current system or process, determine how satisfied the team is with the status quo. Discuss if this function should be retained from the way you operate today, rebuilt in a similar fashion on the new platform, or reimagined completely. Remember to conduct this exercise for internal users and external users.
Evaluating from an internal perspective gives us the opportunity to identify ways to streamline workflows or reduce the strain on your valuable human resources. The goal of this initial session is to go wide and shallow, ensuring that you capture all the key objectives at a high level without letting the conversation get too detailed in any one requirement. You’ll get a bird’s-eye view of the landscape and be able to identify the best areas to spend your creative energy moving forward.

Prioritizing Needs

For each of these user stories, ask your stakeholders to prioritize using three ratings: 1) Must have, 2) Should have, 3) Nice to have. This allows you to put aside the questions of budget and timeline while capturing the importance level of each. When you get to the estimating or roadmapping stage, you’ll have the data you need to make important decisions about what stays and what goes. If you find yourself with a group of all “must haves,” try asking them to rank their requests in priority order instead.

Brainstorming with a Mission

At the end of your session, be sure to revisit the future-state categories you assigned. For anything that’s marked as “retain” or “refine,” you’ll probably be able to capture any details or requests in that initial session. For requirements that need to be reimagined, set aside separate follow-up brainstorms with specific subject matter experts to dive a little deeper on the goals for these particular features:

- What are the challenges with the current method?
- Are there specific platforms or technologies that should be integrated or considered?
- Is there an off-the-shelf solution that meets our needs, or are there solutions that we have explored and ruled out?

Moving these detailed conversations to separate, focused sessions helps you streamline the conversation without causing fatigue for the rest of the group.
Your stakeholders will be grateful that you’re using their time efficiently, and you’ll make a strong start in building trust in your core project team.

Putting it All Together

Once each department or team has created their list of requirements, compare across teams and look for places where there are shared goals or similar requests. The consolidated items can go in a general requirements section, with separate sections for unique requirements from each department. Once you have that curated list, you’ll have the critical data to inform your project scope—whether you’re tackling this project internally or preparing to craft an RFP. Either way, the energy you invest at this early stage will give you a solid understanding of the challenges your new platform needs to solve and opportunities you can create for your users.

Download Our Project Definition Worksheets Here

Get your workshop plans in motion with some pre-fab worksheets that you can customize and distribute to your team to guide the session. We’ve even written some sample background content and exercise instructions in case you want to send them out ahead of time as some homework. Just add your contact info below, and we’ll email the template over to you so you can get rolling. Have questions? Get in touch with us at [email protected].
https://tri.be/blog/requirements-definition/
Introduction
============

Environmental conditions dramatically structure soil bacterial communities; however, only a few environmental variables such as pH, salinity, and C substrate quality and quantity are known to drive community assemblages ([@B64]; [@B40]; [@B8]). Borrowing from foundations in plant community ecology: after light and water, nutrient additions dramatically alter species abundance within communities, often allowing certain species to dominate and biodiversity to decline ([@B14]; [@B15]). For example, in a grassland, even low continual additions of N reduced species richness in just 2 years and depressed the number of species for as long as 20 years ([@B24]). Classically, soil bacterial metabolism and growth are limited first by water availability, then by the quality and quantity of C substrates, and finally by nutrient concentrations. Yet our understanding of the role of major nutrients, such as N and P, remains incomplete, even though changes in nutrient availability shape the responses of specific bacterial species or species interactions within soil communities ([@B66]; [@B79]). Unlike those of plants, bacterial responses to resource constraints are complicated by interactions within consortia or communities where functionally disparate taxa (e.g., decomposers, nitrifiers, and methanogens) potentially dictate the form and availability of specific C substrates and nutrients necessary for other bacteria to become metabolically active and grow. In most soil communities such interactions are further complicated because soil organic C substrates are extremely numerous and diverse, containing both labile and more recalcitrant sources structuring the availability of N and P ([@B73]; [@B52]). Furthermore, the C, N, and P requirements of bacterial biomass differ among species and ecosystems, and are not homeostatic through time ([@B25]; [@B49]).
Ecological stoichiometry is a unifying body of theory in ecology predicting relationships between the organismal biochemistry of plants, invertebrates, and microorganisms and the availability and recycling of nutrient elements in the environment ([@B36]). Ecological stoichiometry may also help identify the resource requirements of bacterial taxa and the conditions allowing certain bacteria to become metabolically active. Ecological stoichiometric theory was developed in aquatic ecosystems, but it is universally valid, and over the last decades it has also been successfully applied to terrestrial systems ([@B80]; [@B81]; [@B25]; [@B7]). Soil (186:13:1) and soil microbial (60:7:1) C:N:P stoichiometry, like Redfield ratios for planktonic biomass (C:N:P = 106:16:1) ([@B80]), are well constrained across multiple biomes ([@B25]), offering considerable utility in understanding bacterial resource limitations and constraints on biogeochemical processes ([@B92]; [@B97]). The C:N:P stoichiometry of plant residues, soil organic matter, and bacterial biomass influences litter decomposition rates ([@B6]; [@B100]), N and P mineralization rates ([@B69]), and the C-use efficiency of bacteria, determining metabolic activity and trace gas flux ([@B59]). Community structure is intimately connected to C:N:P ratios ([@B37]). Specifically, soil C:N:P stoichiometry sheds light on the potential for N and P availability to influence bacterial community structure. For example, higher N:P ratios in afforested soils of the Loess Plateau in China reflected P deficiencies among bacteria, leading to lower diversity but a higher abundance of Proteobacteria, Acidobacteria, and Nitrospirae ([@B82]). Further, a decrease in soil C:P ratios caused Gram-positive bacterial biomass to increase by 22% and the abundance of arbuscular mycorrhizal fungi to increase by 46% in a pasture following slash-and-burn agriculture in the South Ecuadorian Andes ([@B93]).
Taken together, investigating soil C:N, C:P, and N:P ratios is instrumental in identifying patterns of ecological coherence among responding bacteria under varying resource conditions. Soil ecosystems of the McMurdo Dry Valleys, Eastern Antarctica, are a model system for investigating stoichiometric controls over soil communities and ecosystem processes ([@B11]). The extreme environment limits biota to cryptogamic vegetation, a few taxa of metazoan invertebrates, and microbially dominated food webs. Phylum-level bacterial diversity in Antarctic soils is surprisingly high considering the environmental extremes and dearth of resources, i.e., organic matter and available nutrients ([@B22]; [@B61]). However, these soils host comparatively low diversity at the family or genus level relative to other biomes ([@B39]), attenuating the nearly limitless possibilities of links between C:N:P stoichiometry and communities potentially present in high-diversity ecosystems. Further, soil C:N:P ratios in most systems are necessarily complicated by plant residues with multiple different stoichiometric ratios that may mask links between C sources and the release rates of N and P ([@B37]). In contrast, due to the absence of vascular plants, Antarctic Dry Valley soils have some of the lowest soil organic matter concentrations on Earth ([@B20]; [@B57]), with much of the soil organic matter being "legacy" C accumulated over thousands of years by cryptoendolithic bacteria, paleolake deposition, and minor inputs of contemporary algae and cyanobacteria from lakes, intermittent streams, and saturated zones ([@B20]). The concentrations of inorganic N and P are relatively low, with N entering the system via atmospheric deposition ([@B67]), endolithic and hypolithic cyanobacterial N~2~ fixation over millions of years ([@B28]), and dust, while P enters soils through mineral weathering ([@B12]; [@B51]).
Both N and P concentrations vary among soils occurring on glacial tills with distinct exposure ages and mineralogy ([@B12]). Accompanying nutrient limitations, water is a universal resource essential for polar bacteria. In the McMurdo Dry Valleys, low precipitation inputs and sublimation and ablation processes ([@B42]) cause dehydration stress and limit substrate diffusion to bacterial cells ([@B91]). Water additions to Antarctic soils do increase bacterial growth rates, elevate soil respiration, and decrease soil community diversity ([@B87]; [@B19]). Therefore, water, in addition to nutrients, may have direct and indirect effects on community composition. In this study, we explored the effects of long-term, coupled resource and water additions on bacterial species responses and ecosystem processes in a cold desert of Antarctica. After treating soils with six different resource additions, including combinations of water, C as mannitol, N as equal parts NH~4~^+^ and NO~3~^−^, and P as Na~3~PO~4~, annually over 6 years in the field, we evaluated shifts in bacterial community composition metrics such as richness, alpha diversity and evenness, taxa co-occurrence patterns, and soil respiration. Based on polar desert resource conditions, our initial soil C:N:P ratio (mean = 167:8:1, *n* = 8; more initial soil chemistry data are provided in the first section of the "Materials and Methods"), and the modal C:N:P ratio of soils (186:13:1, [@B25]), we hypothesized that C, N, CN, and CP additions would alleviate resource limitations and provide organic C and inorganic N for a subset of the community to exploit, while P would not be limiting and adding more inorganic P would not alter community composition.
Materials and Methods {#s1}
=====================

Study Site and Initial Soil Chemistry
-------------------------------------

Our study was conducted in a polar desert of the McMurdo Dry Valleys (76°30′ -- 78°00′S, 160°00′ -- 165°00′E) at the McMurdo Long Term Ecological Research (LTER) site in Antarctica. The site was located in the Lake Fryxell basin (77°36.5′S, 163°14.9′E) of Taylor Valley, on Ross Sea drift soils (late Quaternary) ([@B17]). Dominant soil microflora include multiple species of algae from the divisions Chlorophyta and Heterokontophyta; microfauna comprising nematodes, tardigrades, and rotifers; and cyanobacteria, including such species as *Leptolyngbya frigida* and *Nostoc commune*, in aquatic and terrestrial habitats ([@B3]). The basin experiences fewer than 50 days on which average temperatures exceed 0°C within the summer months of December, January, and February, when the mean annual temperature is -4.21°C ± 0.80 SD (*n* = 24). Soils receive less than 10 cm yr^−1^ of effective mean annual precipitation falling as snow ([@B33]). Soils are Typic Haploturbels with a shallow surface layer (≈0--10 cm depth), which experiences continual cryoturbation, and a perennial permafrost layer (≈30--300 cm depth) ([@B16]; [@B17]). All soils are poorly developed silty loams with an average pH of 9.69 ± 0.12 SEM (*n* = 8) and an electrical conductivity of 258 ± 115 μS cm^−1^ (*n* = 8). Initial soil chemistry demonstrated that soils were generally extremely low in organic C (organic C = 0.03% ± 0.003, total soil C = 0.13% ± 0.01, *n* = 8) and possessed relatively high soil P (2.35 ± 0.18 μg g^−1^ soil, *n* = 8) but low levels of soil N (0.003% ± 0.0004, *n* = 8).
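Mass-based measurements like these (percent C and N; extractable P in μg g^−1^ soil) underlie the molar C:N:P ratios used throughout the paper. A minimal Python sketch of that unit conversion, using hypothetical inputs rather than the study's measurements:

```python
# Convert mass-based soil measurements to a molar C:N:P ratio
# normalized to P = 1 (the convention behind ratios like 186:13:1).
# Inputs below are hypothetical -- not the study's data.
ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g per mole

def molar_cnp(total_c_pct, total_n_pct, extractable_p_ug_g):
    """Return (C, N, P) molar proportions relative to P = 1.

    total_c_pct, total_n_pct: g element per 100 g dry soil.
    extractable_p_ug_g: micrograms P per g dry soil.
    """
    c_umol = total_c_pct / 100 * 1e6 / ATOMIC_MASS["C"]  # µmol C per g soil
    n_umol = total_n_pct / 100 * 1e6 / ATOMIC_MASS["N"]  # µmol N per g soil
    p_umol = extractable_p_ug_g / ATOMIC_MASS["P"]       # µmol P per g soil
    return c_umol / p_umol, n_umol / p_umol, 1.0

c, n, p = molar_cnp(total_c_pct=0.05, total_n_pct=0.004, extractable_p_ug_g=5.0)
print(f"C:N:P ≈ {c:.0f}:{n:.0f}:{p:.0f}")
```

Which elemental pools are used matters: the paper builds its ratios from total C and N but extractable (not total) P, so the resulting ratios are not directly comparable to ratios built from total P.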
Stoichiometry and Water Long-Term Manipulations
-----------------------------------------------

To gain insights into the resource controls on microbial community assembly, we conducted a 6-year field stoichiometry experiment (austral summer field seasons 2006 -- 2007 to 2011 -- 2012) altering the stoichiometry of major nutrients (i.e., C, N, and P) and water availability. The experiment was a randomized block design with plots (1 m × 1 m) consisting of six treatments and an un-amended control: water only (W); C as mannitol and water (C); N as equal concentrations of NH~4~^+^ and NO~3~^−^ with water (N); P as Na~3~PO~4~ and water (P); C, N, and water (CN); C, P, and water (CP); and the untreated control (U). The C additions, as mannitol, mirrored organic C inputs from contemporary algae and cyanobacteria. The N and P additions closely followed the Redfield ratio (106:16:1) to mimic new biomass from photosynthetic organisms entering organic matter-impoverished soils ([@B50]). Annually, all nutrients were delivered as aqueous solutions to bring the soils to field capacity, with concentrations of 15.3 g C m^−2^, 2.69 g N m^−2^ as NH~4~NO~3~, 0.37 g P m^−2^ as Na~3~PO~4~·12H~2~O, and 12.7 L H~2~O m^−2^ of water. For more information on the treatments and treatment application in the field see [@B9]. The present study comprises an analysis of the Fryxell basin site only.

Soil C:N:P and Chemistry
------------------------

To measure post-amendment changes in C:N:P stoichiometry we calculated C:N:P ratios from total C and N, and extractable P, and measured soil organic C and inorganic N. Soils were collected from all treatments (5 nutrient additions with water, 1 water addition, and a control × 8 replicates = 56) with a plastic scoop to a soil depth of 10 cm (approximately 500 g), sieved (2 mm sieve), and frozen until processing. Total C and N were measured on an Elantech Flash EA 1200 (CE Elantech, NJ, United States).
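The areal addition rates above can be checked against the Redfield target by converting each dose to moles; a quick calculation in Python:

```python
# Annual areal addition rates taken from the treatment description above.
C_G, N_G, P_G = 15.3, 2.69, 0.37               # g element added per m^2
MOLAR_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g per mole

c_mol = C_G / MOLAR_MASS["C"]                  # mol C per m^2
n_mol = N_G / MOLAR_MASS["N"]                  # mol N per m^2
p_mol = P_G / MOLAR_MASS["P"]                  # mol P per m^2
print(f"addition C:N:P ≈ {c_mol / p_mol:.0f}:{n_mol / p_mol:.0f}:1")
```

The molar C:P and N:P of the additions work out to roughly 107 and 16, consistent with the Redfield ratio (106:16:1) the treatments were designed to follow.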
Extractable soil P, as phosphate, was measured in 10 g soil with 0.5 M NaHCO~3~ (1:5 w/v) at pH 8.5, acidified with 3 mL of 6 N HCl, and analyzed on a Lachat Autoanalyzer ([@B12]). We measured dissolved organic C on a Shimadzu TOC-5000A (Shimadzu Corporation, Columbia, MD, United States). Inorganic N (μg N-NH~4~^+^ g soil^−1^, μg N-NO~3~^−^ g soil^−1^) was evaluated from 20 g of soil extracted with 2 M KCl (1:2.5 w/v), passed through a Whatman \#1 filter, and measured on a Lachat Quikchem 8500 (Lachat Instruments, Loveland, CO, United States). We tested for the effects of the additions on our response variables and soil C:N:P ratios using one-way ANOVA and Tukey's HSD test to identify significant differences among the treatments in R ([@B78]). For stoichiometric analyses, data were converted into molar ratios.

Bacterial Community Responses to Changes in Soil C:N:P and Water
----------------------------------------------------------------

After maintaining the treatments for more than half a decade, we characterized soil communities in treatment soils using barcoded sequencing of the 16S rRNA gene. Soils were collected from three randomly selected replicates in all treatments (5 nutrient additions with water, 1 water addition, and a control × 3 replicates = 21) to a depth of 10 cm using sterile plastic scoops. All soils were transported from the field in an insulated chest, sieved to 2 mm, and stepped down to −20°C over 24 h. Nucleic acids were extracted from 1.5 g of soil using a PowerSoil DNA Isolation Kit (Mo Bio Corporation, Carlsbad, CA, United States). We PCR amplified the V4 region of the 16S rRNA gene using the bacteria-specific primer set 515F and 806R with unique 12-nt error-correcting Golay barcodes ([@B21]; [@B2]). The thermal cycle conditions consisted of an initial denaturing step at 94°C for 3 min followed by 35 cycles of denaturing at 94°C for 45 s, annealing at 50°C for 30 s, and amplifying at 72°C for 90 s.
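The one-way ANOVA used for the treatment comparisons partitions variance into between-treatment and within-treatment components; the study ran this in R, but a dependency-free Python sketch (with invented replicate values) shows the mechanics:

```python
# One-way ANOVA from first principles: F is the ratio of the
# between-treatment to the within-treatment mean squares.
def one_way_anova(*groups):
    k = len(groups)                          # number of treatments
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(map(sum, groups)) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical NH4+ concentrations (µg N g^-1) for three treatments,
# standing in for the study's eight field replicates per treatment:
control = [0.05, 0.10, 0.08, 0.12, 0.06, 0.09, 0.07, 0.11]
n_added = [15.0, 18.2, 16.5, 19.1, 14.8, 17.3, 16.0, 18.8]
water   = [0.20, 0.31, 0.25, 0.28, 0.22, 0.30, 0.26, 0.24]

f, df_b, df_w = one_way_anova(control, n_added, water)
print(f"F({df_b}, {df_w}) = {f:.1f}")
```

The F statistic would then be referred to the F distribution for a P-value (e.g., `scipy.stats.f.sf(f, df_b, df_w)`), with significant ANOVAs followed by Tukey's HSD for pairwise contrasts, as in the paper.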
After purifying (Agencourt AMPure XP PCR Purification, Beckman Coulter Inc., Brea, CA, United States) and pooling PCR amplicons at approximately equimolar concentrations, samples were sequenced at the Brigham Young University DNA Sequencing Center^[1](#fn01){ref-type="fn"}^ using a 454 Life Sciences Genome Sequencer FLX (Roche, Branford, CT, United States). All sequences were trimmed and cleaned using mothur (v. 1.31.2^[2](#fn02){ref-type="fn"}^; [@B86]). After removing barcodes and primers, we eliminated sequences that were \<250 bp in length or possessed homopolymers longer than 8 bp. We then denoised the sequences with AmpliconNoise ([@B77]), removed chimeras using UCHIME ([@B35]), and eliminated chloroplast, mitochondrial, archaeal, and eukaryotic 16S rRNA gene sequences based on reference sequences from the Ribosomal Database Project ([@B27]). We aligned the sequences against the SILVA database ([@B76]) with the SEED aligner, created operational taxonomic units (OTUs) based on uncorrected pairwise distances at 97% sequence similarity, and determined the phylogenetic identity of OTUs using the SILVA database. To assess the effects of resources on soil bacterial communities, we first visualized differences in community composition using Principal Coordinates Analysis (PCoA) based on a Bray--Curtis distance matrix with the 'vegan' package in R ([@B78]). We then quantified the effects of the different resource treatments \[i.e., W, C, single nutrients N and P, and combined C and nutrients (CN and CP)\] on community composition using permutational multivariate analysis of variance (PERMANOVA) ([@B5]) performed with the *adonis* function in the vegan package in R ([@B72]).
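The ordination step just described (Bray--Curtis dissimilarity followed by PCoA) was run with vegan in R; a minimal NumPy sketch on a toy sample × OTU matrix (illustrative counts only) shows the same two steps:

```python
import numpy as np

def bray_curtis(a, b):
    """Bray–Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

def pcoa(d):
    """Classical PCoA: eigendecompose the double-centered squared
    distance matrix and return sample coordinates."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    g = -0.5 * j @ (d ** 2) @ j              # Gower's matrix
    vals, vecs = np.linalg.eigh(g)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-10                      # drop null/negative axes
    return vecs[:, keep] * np.sqrt(vals[keep])

# Toy sample × OTU count matrix (4 samples, 5 OTUs) -- illustrative only.
counts = np.array([[10, 0, 3, 1, 0],
                   [8, 1, 4, 0, 0],
                   [0, 12, 0, 5, 2],
                   [1, 10, 1, 6, 3]], dtype=float)
n_samp = counts.shape[0]
d = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n_samp)]
              for i in range(n_samp)])
coords = pcoa(d)
print("axis 1 coordinates:", np.round(coords[:, 0], 2))
```

Because Bray--Curtis distances are not guaranteed to be Euclidean, the Gower matrix can have negative eigenvalues; dropping them (as above) is the standard pragmatic choice, and the retained axes are what a PCoA plot displays.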
Second, we quantified and constructed 95% confidence intervals for estimated richness as the total number of OTUs, alpha diversity as the inverse Shannon index, and taxa evenness using Pielou's evenness, based on 1,000 iterations of 900 randomly resampled sequences from each replicate ([@B70]). Last, we calculated the relative recovery of nine phyla and three subclasses to identify differences in the distribution of major taxonomic groups (recovery ≥ 1.0%) due to the resource additions. Taxonomic trends of 24 families (recovery ≥ 1.0% in at least one replicate) were visualized in a heat map with hierarchical clustering using the *heatmap* function in the 'gplots' package in R ([@B96]).

Bacterial Community Network Models
----------------------------------

To assess changes within communities at the OTU level, we created network co-occurrence models for combinations of resources based on maximal information coefficient (MIC) analysis. We calculated all possible linear and non-linear associations between OTUs using the *mic* function in the 'minerva' package in R ([@B41]), which belongs to a class of maximal information-based non-parametric exploration statistics for identifying and classifying relationships ([@B83]). The nodes in the networks represented individual OTUs at 97% identity, while edges corresponded to valid or significant co-occurrence connections that occurred in at least 75% of all samples and had a MIC that was both \>0.7 and statistically significant (*P*-value = 0.01) ([@B10]). This filtering facilitated the determination of the OTUs interacting within the treatments and removed poorly represented OTUs, reducing network complexity ([@B10]). We described the networks through a series of topological parameters: mean path length, mean degree, mean clustering coefficient, density, and modularity ([@B43]).
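The topological parameters just listed are simple functions of the network's edge list; a pure-Python sketch computing them for a hypothetical five-OTU co-occurrence network (modularity, which requires community detection, is omitted):

```python
from itertools import combinations
from collections import deque

# Hypothetical co-occurrence network: nodes are OTUs, edges are the
# associations that survived filtering (the study kept MIC > 0.7).
edges = {("otu1", "otu2"), ("otu1", "otu3"), ("otu2", "otu3"),
         ("otu3", "otu4"), ("otu4", "otu5")}
nodes = sorted({v for e in edges for v in e})
adj = {v: set() for v in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

n, m = len(nodes), len(edges)
mean_degree = 2 * m / n                  # each edge touches two nodes
density = 2 * m / (n * (n - 1))          # realized vs. possible edges

def clustering(v):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def hop_counts(src):
    """Breadth-first search: shortest hop counts from src."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

mean_clustering = sum(clustering(v) for v in nodes) / n
paths = [d for v in nodes for u, d in hop_counts(v).items() if u != v]
mean_path = sum(paths) / len(paths)
print(f"degree={mean_degree:.2f} density={density:.2f} "
      f"clustering={mean_clustering:.2f} path={mean_path:.2f}")
```

On real data these quantities would come straight from igraph (as in the paper); the point here is only that each parameter reduces to counting edges, triangles, or shortest paths.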
Network graphs in the *graphml* format were generated using the 'igraph' package in R ([@B29]) and visualized with the interactive platform Gephi (v. 0.8.2-beta) ([@B13]). To identify the taxonomy of bacteria within the networks, we classified nodes at the order rank. We calculated the node number as the total number of nodes within each of the nineteen orders comprising the networks, and the relative recovery of nodes as the summation of the mean relative recoveries of the nodes within a given order from the P and CP or the U and W communities.

Soil Respiration
----------------

To investigate the links between microbial communities and ecosystem processes, we measured soil respiration (μmol C-CO~2~ m^−2^ soil s^−1^) in all resource treatments (5 nutrient additions with water, 1 water addition, and a control × 8 replicates = 56). Within 1 week of the resource additions, soil CO~2~ flux in the field was evaluated using a LI-COR 8100 (LI-COR Biosciences, Lincoln, NE, United States) with a 10-cm diameter PVC ring inserted 2 cm into the soil at least 1 h prior to measurement ([@B9]).

Results
=======

N and P Additions Altered Soil C:N:P
------------------------------------

Resource additions clearly altered soil C:N:P, leading to increases in N and P availability. Following multiple years of N additions, C:N ratios were lower and N:P ratios were higher in N and CN than in P and CP soils (**Table [1](#T1){ref-type="table"}**). The shifts in ratios were highlighted by inorganic NH~4~^+^ (one-way ANOVA, *df* = 6, *F* = 39.1, *P* \< 0.001) and NO~3~^−^ (one-way ANOVA, *df* = 6, *F* = 14.9, *P* \< 0.001) concentrations being at least forty-four and nine times higher, respectively, in N and CN than in all other soil treatments (**Table [2](#T2){ref-type="table"}**). Accompanying soil P additions, C:P ratios were lower in P and CP soils than in all other treatments besides CN (**Table [1](#T1){ref-type="table"}**).
Similarly, extractable P increased more than 53.0% in P and CP relative to W, C, and N (one-way ANOVA, *df* = 6, *F* = 6.26, *P* \< 0.001, **Table [2](#T2){ref-type="table"}**). Conversely, the additions of C had no apparent effect on total soil C, only slightly increasing SOC in the C compared to the P and W treatments (one-way ANOVA, *df* = 6, *F* = 3.80, *P* = 0.004, **Table [2](#T2){ref-type="table"}**). The C:N:P ratios for each treatment are listed in **Table [1](#T1){ref-type="table"}**.

###### Molar C:N:P ratios of soils following 6 years of six resource additions in the Lake Fryxell basin of Taylor Valley, Antarctica.

       C:N:P      C:N              C:P              N:P
U      182:7:1    28.2 ± 2.69 ab   182 ± 29.7 ab    6.85 ± 1.01 bcd
W      247:8:1    31.6 ± 5.76 ab   247 ± 45.4 a     8.28 ± 0.768 bcd
C      218:8:1    30.0 ± 4.82 ab   218 ± 21.0 ab    8.43 ± 1.31 bc
N      223:16:1   14.4 ± 1.17 b    223 ± 30.8 a     15.5 ± 1.89 a
P      90:3:1     33.0 ± 4.9 a     90.3 ± 4.65 c    3.02 ± 0.400 d
CN     165:12:1   14.4 ± 1.68 b    165 ± 26.5 abc   11.9 ± 1.85 ab
CP     108:3:1    37.8 ± 6.00 a    108 ± 5.93 bc    3.47 ± 0.675 cd

Treatment abbreviations: un-amended control (U); water only (W); C as mannitol and water (C); N as equal concentrations of NH~4~^+^ and NO~3~^−^ with water (N); P as Na~3~PO~4~ and water (P); C, N, and water (CN); C, P, and water (CP). Data are mean (±SEM, *n* = 8) with different letters indicating differences among treatments (*P* \< 0.05) from ANOVA and Tukey's HSD.

###### Soil chemistry following 6 years of resource additions.
                               U                  W                 C                  N                  P                 CN                CP
pH                             9.7 ± 0.15         9.9 ± 0.05        9.9 ± 0.12         9.5 ± 0.20         9.8 ± 0.10        9.6 ± 0.17        9.8 ± 0.12
EC (μS cm^−1^)                 220 ± 42.1         183 ± 45.7        224 ± 40.9         331 ± 146          220 ± 42.0        301 ± 97.4        266 ± 73.8
Total C (%)                    0.14 ± 0.01        0.12 ± 0.01       0.14 ± 0.01        0.14 ± 0.02        0.13 ± 0.01       0.15 ± 0.01       0.16 ± 0.02
Total N (%)                    0.004 ± 0.001 bc   0.004 ± 0.001 c   0.005 ± 0.001 bc   0.008 ± 0.001 ab   0.004 ± 0.001 c   0.01 ± 0.001 a    0.004 ± 0.001 bc
P (μg P g soil^−1^)            3.6 ± 0.86 ab      2.2 ± 0.37 b      2.7 ± 0.39 b       2.5 ± 0.23 b       5.6 ± 0.67 a      4.2 ± 0.74 ab     5.8 ± 0.54 a
SOC (%)                        0.03 ± 0.004 ab    0.03 ± 0.004 b    0.06 ± 0.008 a     0.03 ± 0.004 ab    0.03 ± 0.004 b    0.04 ± 0.007 ab   0.05 ± 0.005 ab
N-NH~4~^+^ (μg N g soil^−1^)   0.08 ± 0.09 b      0.27 ± 0.05 b     0.04 ± 0.01 b      17 ± 1.6 a         0.29 ± 0.10 b     13 ± 2.6 a        0.05 ± 0.01 b
N-NO~3~^−^ (μg N g soil^−1^)   0.64 ± 0.30 c      1.1 ± 0.57 c      0.09 ± 0.03 c      22 ± 4.2 a         0.77 ± 0.26 c     10 ± 2.5 b        1.1 ± 1.1 c

For treatment abbreviations see Table 1. Data are mean (±SEM, *n* = 8) with different letters indicating significant differences among treatments (*P* \< 0.05) based on ANOVA and Tukey's HSD.

CN Reduced Evenness and Diversity but N Alone Enhanced Richness and Diversity
-----------------------------------------------------------------------------

Bacterial evenness and diversity were reduced following CN additions, while diversity and richness were enhanced by N additions (**Figure [1](#F1){ref-type="fig"}**). Specifically, the addition of CN dramatically depressed taxa evenness by at least 20.1% relative to all other treatments, and alpha diversity by at least 24.5% relative to the C, N, and CP treatments. In contrast, N additions stimulated OTU richness by 48.4 and 38.9% relative to bacterial communities in C and P soils, respectively. Diversity also increased by more than 13% in N compared to the C, P, and CN treatments.
In general, all resource additions reduced the variability surrounding richness and diversity metrics. All community inferences were based on the recovery of 138,458 quality sequences and 1,450 unique OTUs, with samples possessing an average sequencing coverage of 98.4% ± 0.21 (mean and SEM). All sequence data were submitted to NCBI and are available as BioProject [PRJNA476992](PRJNA476992).

![Only the addition of CN and N altered bacterial OTU richness **(A)**, diversity **(B)**, or evenness **(C)**. Treatments include: carbon as mannitol (C), nitrogen as equal concentrations NH~4~^+^ and NO~3~^-^ (N), phosphorus (P), C and N (CN), C and P (CP), water only (W), and an un-amended control (U). Values are means (*n* = 3) shown with accompanying 95% confidence intervals based on 16S rDNA community libraries (97% similarity cut-off).](fmicb-09-01401-g001){#F1}

All Nutrient Additions Created Distinct Communities, Especially CN
------------------------------------------------------------------

The CN treatment dramatically influenced bacterial communities, most notably through the separation of communities along axis one, which explained 31.6% of the variation among communities (**Figure [2](#F2){ref-type="fig"}**). Further, the addition of any nutrient (i.e., C, N, P, and CP) reduced the variability among communities compared to the untreated control and water-only addition along axis two, which explained 19.6% of the variation. PERMANOVA results supported these interpretations, as communities were distinct among treatments (*F* = 3.5, *R*^2^ = 0.60, *P* \< 0.001, *df* = 6).

![The addition of resources contributed to shifts in bacterial composition, with the most dramatic change occurring in CN soils. Treatment abbreviations include: C as mannitol (C), N as NH~4~^+^ and NO~3~^-^ (N), P as Na~3~PO~4~ (P), C and N (CN), C and P (CP), water only (W), and an un-amended control (U).
The multivariate ordination was generated using principal coordinates analysis (PCoA) on a sample × OTU matrix of 16S rDNA community libraries (97% similarity cut-off).](fmicb-09-01401-g002){#F2}

Family-Specific Responses to Nutrients
--------------------------------------

All resource additions, except water, promoted taxonomic shifts in bacterial OTU abundance in 11 families across five phyla (**Figure [3](#F3){ref-type="fig"}**). The most pronounced increase in relative recovery occurred in the Micrococcaceae (Actinobacteria) in CN soils, where a single bacterium, an *Arthrobacter* species, was relatively rare (0.06% ± 0.05) in untreated soils but constituted 47% ± 5.6 of the community in CN-amended soils. The only other bloom occurred in the Trueperaceae (Deinococcus-Thermus), where one OTU was classified as intermediate (0.42% ± 0.09) in untreated soils but became abundant (9.6% ± 3.8) in CP soils. In general, Actinobacteria were abundant in all soils and accounted for at least 36% of the community composition in all treatments (Supplementary Figure [S1](#SM1){ref-type="supplementary-material"}), but not all Actinobacteria responded positively to CN. For example, three Actinobacteria families, the Solirubrobacteraceae, Solirubrobacterales unclassified, and Rubrobacteriaceae, decreased from 2.1- to 4.9-fold in CN compared to all other treatments (**Figure [3](#F3){ref-type="fig"}**). CN additions also stimulated the Xanthomonadaceae (2.7% ± 1.2, Gammaproteobacteria) and Sphingobacteriaceae (1.8% ± 0.48, Bacteroidetes). With the addition of N, the Nitrosomonadaceae (Betaproteobacteria) increased in recovery 5.2-times, allowing the N treatment to have the highest recovery of Betaproteobacteria (3.1% ± 0.17). Annual N additions also enhanced the recovery of the Rhodobacteraceae (0.51% ± 0.16, Alphaproteobacteria) and Gemmatimonadaceae (2.7% ± 0.27, Gemmatimonadetes) by at least 1.8-times relative to all other treatments.
Both CP and P additions enhanced the recovery of the Chitinophagaceae (Bacteroidetes) and Spartobacteria unclassified (Verrucomicrobia) in particular. The recovery of these families was at least 1.5-times higher in P and CP than in all other soils and caused the recovery of Bacteroidetes to increase upward of 100% (*P* = 6.3% ± 2.2, CP = 6.8% ± 1.8) in both treatments.

![All resources, except water, promoted different taxonomic shifts. Heat map showing the distribution of OTUs for fourteen families that contributed ≥0.5% to the total recovery of communities. Treatment abbreviations are described in **Figure [1](#F1){ref-type="fig"}**. Values are based on means with hierarchical clustering of resource treatments (bottom) and family (left).](fmicb-09-01401-g003){#F3}

Deinococcus-Thermus were present in all soils (Supplementary Figure [S1](#SM1){ref-type="supplementary-material"}), but the addition of CP, C, and P stimulated the recovery of these taxa, causing the Deinococcaceae to range from 5.3 to 11% (**Figure [3](#F3){ref-type="fig"}**). The addition of C alone promoted different families, with the Sphingomonadaceae (Alphaproteobacteria) and the Geodermatophilaceae (Actinobacteria) increasing upward of 3.0- and 39-times, respectively, but combined these two taxa accounted for less than 4.1% of the community in C-amended soils.

Network Community Modeling
--------------------------

Nutrient additions disrupted interactions among community assemblages. Because more than three samples are required to create reliable community network models with MIC, we created network models only for two combined treatments (U and W, and P and CP) that were relatively similar (PERMANOVA: U and W, *F* = 5.8, *R*^2^ = 0.59, *P* = 0.1, *df* = 1; P and CP, *F* = 1.2, *R*^2^ = 0.22, *P* = 0.5, *df* = 1).
After years of P and CP additions, multiple aspects of the community broke down relative to the untreated and water-only soils (**Figure [4](#F4){ref-type="fig"}** and **Table [3](#T3){ref-type="table"}**). For example, the numbers of significant nodes (taxa) and edges (connections between taxa) were 51 and 73% lower, respectively, in the P and CP networks. The mean degree (number of connections per node to other nodes) declined twofold from the U and W to the P and CP networks, and mean path length (number of nodes needed to link any one node to any other in the network) decreased from 3.5 in the U and W to 2.8 in the P and CP models. Within the two networks, specific orders were favored, and their nodes were often major contributors to the recovery of the community. For example, the Trueperaceae represented 3 nodes in the combined P and CP network and 7.7% relative recovery in the P and CP communities, but only 1 node in the U and W network and 0.61% of the recovery in the U and W communities. In contrast, the Phycisphaerae (unclassified; 10 nodes, 2.1% relative recovery), Intrasporangiaceae (2 nodes, 0.13% relative recovery), and Xanthomonadaceae (2 nodes, 0.55% relative recovery) were present in the U and W network but completely absent from the P and CP network. The Spartobacteria (unclassified), Micrococcaceae, and Chitinophagaceae had similar numbers of nodes in all models but contributed substantially more in abundance in the P and CP communities: 14, 3.2, and 3.1%, respectively. The Solirubrobacteriaceae and Solirubrobacterales (unclassified) consistently contributed to both models, with 14 and 10 nodes in the P and CP and the U and W networks, respectively, and comprised no less than 20% of the relative recovery from either community. ###### Community network model characteristics for soil bacteria in P-amended and control soils.
| Network topographical parameters | U and W   | P and CP |
|----------------------------------|-----------|----------|
| Nodes, edges (\#)                | 113, 1321 | 75, 433  |
| Mean path length                 | 2.8       | 3.5      |
| Mean degree                      | 23        | 11.5     |
| Mean clustering coefficient      | 0.84      | 0.74     |
| Density                          | 0.21      | 0.16     |
| Modularity                       | 0.54      | 0.51     |

| Network taxonomy (order) | Node \# (U and W) | Relative recovery of nodes (%) (U and W) | Node \# (P and CP) | Relative recovery of nodes (%) (P and CP) |
|--------------------------|-------------------|------------------------------------------|--------------------|-------------------------------------------|
| Acidobacteriaceae | 10 | 14 | 7 | 13 |
| Intrasporangiaceae | 2 | 0.13 | 0 | 0 |
| Micrococcaceae | 1 | 0.06 | 1 | 3.2 |
| Nocardioidaceae | 1 | 0.10 | 0 | 0 |
| Rubrobacteriaceae | 4 | 1.0 | 3 | 0.83 |
| Conexibacteraceae | 1 | 0.40 | 1 | 0.16 |
| Solirubrobacteriaceae | 2 | 1.3 | 4 | 2.0 |
| Solirubrobacterales Unclassified | 8 | 22 | 10 | 18 |
| Chitinophagaceae | 5 | 0.32 | 4 | 3.1 |
| Sphingobacteriaceae | 0 | 0 | 1 | 0.90 |
| Caldilineaceae | 1 | 0.28 | 1 | 1.5 |
| Trueperaceae | 1 | 0.51 | 3 | 7.7 |
| Gemmatimonadaceae | 1 | 0.65 | 1 | 0.16 |
| Phycisphaerae unclassified | 10 | 2.1 | 0 | 0 |
| Sphingomonadaceae | 1 | 0.98 | 2 | 1.8 |
| Alcaligenaceae | 1 | 0.12 | 1 | 0.14 |
| Oxalobacteraceae | 1 | 0.18 | 1 | 0.50 |
| Xanthomonadaceae | 2 | 0.55 | 0 | 0 |
| Spartobacteria unclassified | 4 | 7.3 | 4 | 14 |

For treatment abbreviations see Table 1, and for an explanation of the network topographical parameters see Network Community Modeling in the Results section. For network taxonomy, node \# is the total number of nodes (OTUs) in a given order for a network. The relative recovery of nodes is the summation of the mean relative recoveries of the nodes within the order from the P and CP or U and W communities. ![Community interactions among bacterial species deteriorated with the addition of P and CP. The two network models are based on OTUs from two 16S rDNA community libraries of Control and W, and P and CP treatments.](fmicb-09-01401-g004){#F4} CN Elevated Soil Respiration ---------------------------- Only the CN resource addition elevated soil respiration 1 week following the nutrient additions.
As a result, we observed a 114--234% increase in soil respiration in CN soils compared to all other treatments (one-way ANOVA, *df* = 6, *F* = 11.6, *P* \< 0.001, **Figure [5](#F5){ref-type="fig"}**). The soil treatments exhibited no differences in temperature at the time of sampling (one-way ANOVA, *df* = 6, *F* = 1.46, *P* = 0.24). The mean temperature of all treatments was 7.2°C ± 0.32 SD (*n* = 56). ![Soil respiration dramatically increased following CN additions. Values are means ± SEM (*n* = 8) with letters indicating differences (*P* \< 0.05) based on a one-way ANOVA and Tukey's HSD test.](fmicb-09-01401-g005){#F5} Discussion ========== Stoichiometric shifts of C:N:P in the coldest and driest soils on Earth ([@B94]) alleviated resource limitations, created species-specific bacterial responses, and altered ecosystem processes. Our long-term coupled resource additions dramatically altered soil C:N:P, leading to increases in inorganic N and P availability but only a slight increase in soil organic C content, which was presumably consumed by bacteria. As hypothesized, C, N, CN, and CP additions created unique communities relative to untreated soils, with CN and N having the most pronounced effects on bacterial species responses. We found that the alleviation of a C and N co-limitation facilitated the dominance of an *Arthrobacter* species (family Micrococcaceae) that ultimately elevated soil respiration, and that shifts in C:N ratios may remove nutrient constraints on bacteria, enhancing species richness and diversity. Contrary to our hypothesis, the addition of P, even to our relatively P-rich soils ([@B4]; [@B12]), helped create unique communities for all single and coupled resource additions.
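The respiration comparison above rests on a one-way ANOVA, whose F statistic is simply the ratio of between-group to within-group mean squares. A self-contained sketch, with made-up flux values rather than the measured data:

```python
def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs. the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: observations vs. their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative flux values only (not the study's measurements).
cn = [9.1, 8.7, 9.5]
control = [4.0, 4.3, 3.8]
water = [4.1, 3.9, 4.4]
F = one_way_anova_F([cn, control, water])
```

A large F relative to its degrees of freedom is what licenses the post hoc Tukey HSD comparisons used to letter the bars in Figure 5.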
Colimitation of CN Facilitates Species Dominance and Enhanced Respiration ------------------------------------------------------------------------- The colimitation of organic C, N, and/or P is common in marine and freshwater systems, where the relative abundance of photoautotrophs to organoheterotrophs is often influenced by two or more nutrients ([@B18]; [@B55]; [@B23]; [@B84]). In soils, co-limitation exists but is harder to identify due to the high levels of bacterial diversity and the wide variety of resource substrates for species to exploit, from extremely labile C substrates to recalcitrant soil organic matter. Over the 6 years of the present study, as CN limitations were eased (e.g., C:N decreased and N:P increased in CN-amended soils), an *Arthrobacter* species (family Micrococcaceae, Actinobacteria) went from being rare (0.06% ± 0.05) to dominant (47% ± 5.6). *Arthrobacter* species are common psychrotrophs found in Adelie penguin guano ([@B99]) and Antarctic epilithic lichens ([@B88]), seemingly capitalizing on localized nutrient-rich penguin feces in an otherwise nutrient-poor landscape. *Arthrobacter* strains can respond quickly to changes in nutrient conditions by breaking dormancy and growing within an hour of the removal of starvation stress ([@B103]). Compared to temperate *Arthrobacter* species, Antarctic *Arthrobacter* possess lower metabolic versatility ([@B34]) but genes similar to those of many psychrophilic/psychrotolerant species (e.g., cold-active hydrolytic enzymes; sigma factors; signal transduction pathways; the carotenoid biosynthesis pathway; and genes induced by cold-shock, oxidative, and osmotic stresses) ([@B34]; [@B89]). Further, our unclassified *Arthrobacter* OTU falls within a genus whose members readily decompose almost any algal or cyanobacterial bioproduct, from cyanotoxins ([@B60]) to cellobiose, the final derivative of cellulose utilization ([@B85]).
Thus, the *Arthrobacter* we recovered is likely a well-adapted psychrophile poised to exploit common bacterial- and algal-derived C sources when N is available. The functional consequences of *Arthrobacter* dominance were easily distinguishable. Often the functional consequences of soil bacterial community change are exceptionally difficult to discern due to functional redundancy ([@B98]; [@B68]) and the large fraction of bacterial diversity that is dormant or metabolically inactive at any given time ([@B62]). In contrast, only in soils where *Arthrobacter* achieved dominance did soil respiration dramatically increase (114--234%). Our results are consistent with the findings of [@B54], who showed that the addition of glucose, glycine, and ammonium stimulated the mineralization of lacustrine detritus and soil organic matter across different geomorphically defined landscapes in Garwood Valley, Antarctica. Even though the link between our dominant species and respiration is implied rather than explicit, dominant or abundant bacteria often contribute proportionally to universal soil processes such as respiration ([@B74]). Thus, *Arthrobacter* most likely exploited mannitol and ammonium or nitrate to a greater extent than other species to become more metabolically active. Bacterial competition for essential resources may loosely be classified into two categories: scramblers and contesters ([@B53]). Scramble (exploitation) competition involves the rapid utilization of resources without direct interaction with other bacteria, while contest (interference) competition involves direct antagonistic interactions between competitors. While both scramblers and contesters occur in most soils, the effects of competition for resources on bacterial taxa are often only implied ([@B102]; [@B45]), and potential interactions among limiting nutrients are often neglected ([@B38]).
In our CN-enriched soils, *Arthrobacter* is most likely a scrambler, better suited to capitalize on emerging resources. Its rise to dominance resulted in a decline in bacterial evenness and diversity while allowing for the persistence of rare taxa, as evidenced by the similar species richness exhibited across the different resource treatments. Inorganic N Opened New Bacterial Niches for Rare Taxa ----------------------------------------------------- The removal of ammonium limitations opened new niches for once-rare taxa to exploit. With the immense elevation of ammonium levels following N additions, bacterial richness increased upward of 48% in comparison to soils that received only C or P additions. Higher levels of ammonium increased bacterial richness directly and indirectly, potentially by stimulating nitrifying bacteria relying on ammonium and nitrite. We found that ammonium additions enhanced the abundance of two nitrite-oxidizing *Nitrospira* species from the family Nitrosomonadaceae. Antarctic *Nitrospira* species may also contain amoA sequences ([@B65]) and participate in complete nitrification ([@B30]). Our findings are consistent with the suggestion that the extreme abiotic severity of the McMurdo Dry Valley soil habitat drives the presence of ammonia-oxidizing bacteria (AOB) like *Nitrospira* ([@B46]; [@B65]). Only after the soils became considerably less harsh with N additions were AOB found. Presumably the increase in nitrite triggered certain species to increase in abundance. For example, six Gemmatimonadaceae species with the ability to reduce nitrite to nitric oxide via the NirK gene (clade II) ([@B32]) increased in abundance at least 1.8-fold under N additions relative to all other treatments. Also, four Rhodobacteraceae taxa with a high affinity for ammonium, characterized by high transcript abundance for ammonium transporters ([@B75]), increased in abundance as N limitations were lifted.
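The richness, evenness, and diversity responses discussed in this section are standard summaries of per-sample OTU counts: richness as the number of observed OTUs, Shannon H′, and Pielou's evenness. A minimal sketch with illustrative counts, not the study's data:

```python
import math

def richness(counts):
    """Number of OTUs observed at least once."""
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    """Shannon diversity H' = -sum p_i ln p_i over observed OTUs."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def evenness(counts):
    """Pielou's J = H' / ln(richness); 1.0 means a perfectly even community."""
    s = richness(counts)
    return shannon(counts) / math.log(s) if s > 1 else 0.0

even = [25, 25, 25, 25]   # balanced toy community
skewed = [97, 1, 1, 1]    # one dominant OTU (cf. Arthrobacter in CN soils)
```

A community taken over by a single OTU keeps much of its richness while its Shannon diversity and evenness collapse, which is exactly the CN pattern reported here.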
Many of these taxa responding to ammonium and possibly nitrite were rare ([@B90]), with an abundance in untreated soils \<0.1% of the total recovery ([@B1]). Thus, the enhanced availability of reduced forms of N following N-amendments stimulated microbial biodiversity in Antarctic soils. CP and P Deconstructed Species Assemblages ------------------------------------------ Lower soil C:P following P additions disrupted community co-occurrence patterns and facilitated new nutrient-related interactions among taxa. Even an increase in soil P, a nutrient that was not predicted to be limiting based on the initial soil C:N:P ratios, influenced species interactions. Excess soil P disrupted potential interactions among community assemblages, as evidenced by more than 50% of the network (i.e., significant species and interactions) disappearing and 14 taxa from three families (i.e., Phycisphaerae unclassified, Intrasporangiaceae, and Xanthomonadaceae) vanishing from the co-occurrence network. However, as P limitations were alleviated, a more diffuse and less connected network, potentially centered on P availability, emerged. For example, two new Trueperaceae species (*Deinococcus*) were incorporated into the model, and the collective abundance of the three Trueperaceae species was 13-times higher in P-amended soils. Members of the Trueperaceae family are remarkably resistant to ionizing radiation and able to grow under multiple extreme conditions, including alkaline, moderately saline, and high-temperature environments ([@B56]). Deinococcus-Thermus taxa in general possess a remarkable number of genes encoding catabolic enzymes, including phosphatases ([@B31]), suggesting that access to P is potentially linked to radiation resistance and/or helps boost the survival of Trueperaceae in extreme soils.
Additionally, multiple taxa shared between our two networks \[i.e., Spartobacteria unclassified (Verrucomicrobia), Micrococcaceae (Actinobacteria), and Chitinophagaceae (Bacteroidetes)\] increased in abundance under P additions. The abundance of taxa within these families may track the availability of P in soils and water ([@B58]; [@B95]). Even with P additions, communities seemed to remain partially reliant on Solirubrobacteriaceae taxa. Members of the Solirubrobacteriaceae family may enhance the weathering of volcanic rocks, which are common in soils occurring on Ross Sea Till ([@B26]). Despite P additions, Solirubrobacteriaceae taxa were integral to all models, constituting upward of 10% of species and 20% of the recovery, suggesting that mineral weathering is essential to enhance micronutrient availability under both high- and low-nutrient conditions. As soils transitioned from a nutrient-poor to a nutrient-rich state, excess soil P potentially disrupted interactions among bacterial taxa. For example, the addition of water and organic matter in McMurdo Dry Valley soils caused certain bacteria (members of the Actinobacteria, Proteobacteria, and Firmicutes) to become active, demonstrating a potential taxonomic shift from species adapted to dry oligotrophic conditions to those adapted to moist copiotrophic conditions ([@B19]). Thus, future climate-driven changes that ameliorate the current stoichiometric imbalances of the dry valley soils ([@B71]; [@B47]) may deconstruct current bacterial communities and reorganize them into communities dominated by more copiotrophic taxa. Co-occurrence networks do provide insights into potential interactions among taxa within a community ([@B45]; [@B44]), but to fully understand interactions among bacteria a more direct approach is needed.
Annual Water Additions Failed to Elicit Bacterial Response ---------------------------------------------------------- Water is necessary for imbalances in stoichiometric nutrient ratios to influence communities; however, our one-time water addition alone was not enough to create lasting effects on bacterial community structure. Frequent water additions do influence bacterial activity across the Dry Valleys ([@B87]; [@B19]), where soil moisture is ephemeral and extremely patchy. Surface hydrogeological features, such as water tracks in soils ([@B63]), the lateral margins of streams and lakes ([@B101]), and discontinuous patches of soil moisture that often form in the same locations due to wind sheltering and microtopography ([@B48]), enhance bacterial metabolic activity and alter species distributions. Conclusion ========== Stoichiometric additions of C, N, and P reduced resource limitations, created species-specific bacterial responses, and in one case altered a fundamental ecosystem process. The most dramatic effects of changes in ecosystem stoichiometry occurred with C and N additions in our initially N-limited soils. C as mannitol and N as equimolar concentrations of ammonium and nitrate induced an almost twofold reduction in C:N ratios; caused bacterial evenness and diversity to decline; allowed one rare Micrococcaceae, an *Arthrobacter* species, to dominate community abundance; and elevated soil respiration by 136% compared to untreated soils. N additions alone also reduced C:N ratios and, in contrast to CN additions, increased species richness and diversity by at least 48 and 13%, respectively, compared to soils receiving a single resource (C or P), and enhanced the abundance of rare taxa dependent on N for metabolism and growth. The addition of P, lowering C:P to levels well below the ratio necessary for balanced microbial growth, also influenced soil microbial communities.
Based on community co-occurrence networks, lower C:P ratios in soils following P and CP additions reduced the number of taxa interacting with one another by 51% and the number of interactions among taxa by 73% relative to untreated and watered soils. Our results suggest that the alleviation of C and N co-limitation facilitated the dominance of a single species, ultimately altering ecosystem processes; that reduced forms of inorganic N opened multiple niches for bacteria to exploit; and that excess soil P disrupted interactions within communities. Author Contributions ==================== ZA, SS, BB, DW, JB, RV, and BA conducted the experiments. ZA, SS, BB, DW, JB, MM, NG, RV, AB, and BA analyzed and interpreted the data. ZA, SS, DW, JB, NG, MM, RV, AB, and BA helped to write and review the manuscript. ZA agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Conflict of Interest Statement ============================== The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. **Funding.** Our research was supported by National Science Foundation Office of Polar Programs grants to the McMurdo Long-Term Ecological Research Program (ANT-0423595 and OPP-1115245). We thank Ashley Shaw for providing valuable comments that improved the paper. <http://dnac.byu.edu/> <https://www.mothur.org/wiki/454_SOP> Supplementary Material ====================== The Supplementary Material for this article can be found online at: <https://www.frontiersin.org/articles/10.3389/fmicb.2018.01401/full#supplementary-material>
[^1]: Edited by: Thulani Peter Makhalanyane, University of Pretoria, South Africa [^2]: Reviewed by: David Anthony Pearce, Northumbria University, United Kingdom; Richard Allen White III, RAW Molecular Systems (RMS) LLC, United States [^3]: This article was submitted to Extreme Microbiology, a section of the journal Frontiers in Microbiology
The next issue of the Journal of Tourism Challenges and Trends explores new trends in the anthropology of tourism. Guest Editor: Dr. Tamás Régi, Associate Professor, Tourism Department, Kodolányi János University of Applied Sciences, Hungary – – – – – One can no longer question that tourism is not an unconnected social phenomenon but an integral part of many people’s life experience. It is probably pointless to ask where tourism starts and where it ends, as many people engage in different types of tourism (virtual, bodily, imaginative, etc.) almost every day. Categories such as host–guest and local–traveller often can no longer provide a constructive frame for understanding people’s mobile experiences. Anthropologists were among the first to try to understand this process, and anthropology still offers some of the most powerful methods and methodologies for understanding tourism and tourists. However, as sociologists, geographers, philosophers, historians and scholars from various other disciplines have chosen tourism as their subject of study, clear disciplinary borders have long seemed to diminish. The question then emerges: What distinguishes the anthropology of tourism from other tourism-related disciplines? The Journal of Tourism Challenges and Trends (JTCT) calls for original papers that address the following questions: – What actually is tourism anthropology? What distinguishes tourism anthropology from other tourism-related disciplines? – What is the relationship between tourism anthropology and tourism studies? – How are current (and old) anthropological theories used in understanding the phenomenon of tourism? – Are classical ethnographic/anthropological research methods still valid and useful ways of collecting information about tourism? – How are the ideas of major anthropologists (Arjun Appadurai, Tim Ingold, Michael Taussig, Alfred Gell, Victor Turner, the Comaroffs, etc.), who did not publish directly on tourism, used in the field of tourism anthropology?
– What are the current formulations of classical tourism anthropological theories from Nelson Graburn, Edward Bruner, Jeremy Boissevain, Erik Cohen, etc.? – How do researchers bridge the interdisciplinary concepts of tourism studies with classical anthropological thinking? How can anthropologists handle the everyday life of mobile subjects and observe the role of machines, computers and travel devices in contemporary societies? – What are the emerging schools, fields and ideas in the current anthropology of tourism? To explore these questions, the JTCT expects original theoretical papers that address current issues in tourism anthropology. The JTCT also expects original case studies in which the authors discuss their anthropological/ethnographic field data. There are no geographical or case-study restrictions, but contributions should contain a strong theoretical interpretation. Timeline: Final submission: 1st of July 2013 Notification of authors: 1st of September 2013 Publishing date: December 2013.
https://journaltct.ro/tag/anthropology-of-tourism/
Updated: June 25, 2019 If you’re a newcomer to greenhouse horticulture, you may be wondering where to start. There’s a vast amount of information available about greenhouse gardening, and it can be a little overwhelming to find trustworthy sources of information. Well, we know your pain and have listed here, in five steps, the essential info you really need to get started. 1. Gather some supplies. You’ll need a few crucial supplies to get started in your new greenhouse gardening hobby, including a greenhouse, a yard hose, some plants, planting medium and planting pots. A pair of gardening gloves, a hand trowel and a cooling fan may also be beneficial in the garden greenhouse, but aren’t essential. You may opt to gather fertilizers (compost is ideal), insecticides, gardening books and gravel to help with drainage problems you may encounter in the greenhouse. 2. Pick your crops. When you have gathered all your supplies, you’ll need to decide what you want to grow. Will you choose fresh vegetables, fruit trees and bushes, flowering plants, ornamental plants, or exotics? The selection is nearly limitless and is strictly a matter of individual choice. If you’re a true gardening rookie, consider growing tomatoes, lettuce or berries, because they are often simple choices. Flowers like begonias, petunias, bush roses, and daylilies also make great choices for the beginning greenhouse gardener. 3. Ready yourself to plant. Before starting to plant your seeds or seedlings, you must make certain your pots, equipment and soil are sterilized adequately. A blend of diluted bleach and water, sprayed on tools and pots, then rinsed off with water and permitted to air dry, should be sufficient. Potting soil purchased from a gardening supply store is already sterilized and should be kept in its original bag and sealed after each use.
If you’ve planned to make your own potting soil, you should follow directions meticulously when baking your soil to make certain bacteria and pests are eliminated before use. 4. Begin planting! Start by filling your pot roughly halfway with potting soil. Before pulling your seedling from its container, gently squeeze the sides to loosen the root ball. Gently remove excess dirt from the root ball and place the seedling upright in the new pot. With one hand holding the seedling in place, carefully add planting medium around the plant until the roots are covered and the plant is stable enough to stand vertical without your support. Water carefully and thoroughly, and move the pot straight into its new home in your garden greenhouse. 5. Other tips. You’ll need to be sure your garden greenhouse panels are clean enough to let sunlight in, and that the temperature in the greenhouse is managed. Adhere to the planting guidance furnished with your seeds or seedlings to establish a set temperature (tropical plants do best between 60 and 85 degrees Fahrenheit; most vegetables and fruits like temperatures between 45 and 85 degrees Fahrenheit) and a watering routine for your plants. Always keep in mind that it’s best to water your flowers early in the morning or later in the evening. Be sure not to overwater your plants! If the soil is damp to the touch, leave them alone. If necessary, you can add fertilizer to your plants and flowers or use fresh compost every 6 to 8 weeks. To make the most of your greenhouse adventure, it’s wise to get together with other gardeners to exchange tips and tricks. You can even keep a gardening journal detailed with snapshots of your progress to help you learn from your mistakes and successes in the greenhouse. Starting a greenhouse gardening hobby can be extremely fulfilling, so you’ll want to enjoy your experience fully!
https://www.greenhomegnome.com/greenhouse-gardening-essentials-5-steps/
Applicant Name: Madapathage Nethum Thedas Dinujitha Title of Artwork: Wild Life Category: Drawing Artwork Description Importance of Wildlife Wildlife plays an important role in balancing the environment. Wildlife provides stability to different processes of nature. Wildlife and nature have been largely associated with humans for emotional and social reasons. The importance of wildlife can be categorized as ecological, economic and investigatory importance, as well as the conservation of biological diversity. Animals have also been highly useful to us in providing food, clothing and sources of income. Our life is almost impossible without the support of wildlife. We are also a part of wildlife, helping to maintain the ecological balance on earth. It plays a very crucial role in our life.
https://www.ozone2climate.org/usercontent/0370c833-5bce-495a-bd36-8f97eabd6905
Sebastian Dacey is a visual artist who originates from the United Kingdom, like other artists such as Edward Seago, Bernard Cohen, Rebecca Moss, Nptada, and Boo Ritson. Sebastian Dacey in private collections There are two collectors in possession of art by Sebastian Dacey at Artland: K and KT, the latter of whom also has works by other artists including Willy Spiller, Ruth Asawa, and Asger Dybvad Larsen. Historical Context of United Kingdom Britain has been an important hub for artistic production for centuries. While it accumulated vast wealth from colonisation and the rise of its Empire, it was also exposed to the cultural influences of other countries and continents. For much of the modern era, Britain was significantly overshadowed by the importance of its European neighbours on one side and of the United States on the other. But towards the end of the 19th century, Britain became an important centre in the development of the avant-garde. This includes the Arts and Crafts Movement, a cutting-edge movement paving the way for artist-led organisations, groups and co-operatives that would later develop into a template of sorts for bohemian artist movements of the twentieth century. Important movements in British modernism include, for example, Vorticism, as well as the artists associated with the Bloomsbury group. Some significant British artists of the modern and contemporary period include Stanley Spencer, Francis Bacon, Lucian Freud, Leon Kossoff, Frank Auerbach, David Hockney, Bridget Riley, Paula Rego - and in more recent years the YBA generation led by Damien Hirst, Tracey Emin, Marc Quinn, Chris Ofili and others.
https://www.artland.com/artists/sebastian-dacey-57036f
Image data guide

An image data pipeline for machine learning is critical for performance during training and inference. You also need to know the formats and “shapes” of the images that your framework of choice requires. Additionally, you can further encode images in optimized formats that will speed up your ML processes. The following guide covers how you can preprocess images using SageMaker’s built-in image processing or for PyTorch or TensorFlow training. To get started, run the following notebooks in order. There are four phases:

- Download data
- Structure data
- Preprocess (choose one of SageMaker built-in, PyTorch, or TensorFlow)
- Train (choose one of SageMaker built-in, PyTorch, or TensorFlow)

Download your image data

First, download the data.

Structure your image data

Now you structure the data before the next phase, which is framework-specific.

Preprocessing

For preprocessing, you have several options. This guide covers SageMaker’s built-in option and options for PyTorch or TensorFlow. Choose one of the following notebooks and run it before going to the training step for the preprocessing option you chose.

- with SageMaker built-in
- with PyTorch
- with TensorFlow

Training on image data

Now that you have preprocessed your image data, choose the corresponding notebook to train with.
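The "structure data" phase typically amounts to arranging files into a `split/class/filename` directory layout that framework loaders (e.g., ImageFolder-style readers) understand. Below is a stdlib-only sketch of a deterministic per-class train/validation split; the layout, file names, and 0.2 validation fraction are illustrative assumptions, not SageMaker requirements:

```python
import random
from pathlib import Path

def split_by_class(labeled_paths, val_frac=0.2, seed=0):
    """Map (path, label) pairs to 'train/<label>/...' or 'val/<label>/...' targets."""
    rng = random.Random(seed)  # seeded for a reproducible split
    by_label = {}
    for path, label in labeled_paths:
        by_label.setdefault(label, []).append(path)
    plan = {}
    for label, paths in by_label.items():
        paths = sorted(paths)
        rng.shuffle(paths)
        n_val = max(1, int(len(paths) * val_frac))  # at least one val example
        for i, p in enumerate(paths):
            split = "val" if i < n_val else "train"
            plan[p] = str(Path(split) / label / Path(p).name)
    return plan

# Toy file list; in practice these would be the downloaded image paths.
files = [(f"raw/cat_{i}.jpg", "cat") for i in range(5)] + \
        [(f"raw/dog_{i}.jpg", "dog") for i in range(5)]
plan = split_by_class(files)
```

Executing the plan then reduces to copying each source file to its target path before pointing the framework's dataset loader at the `train/` and `val/` roots.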
https://sagemaker-examples.readthedocs.io/en/latest/prep_data/image_data_guide/index.html
# Chocolate Chocolate is a food made from roasted and ground cacao seed kernels that is available as a liquid, solid, or paste, either on its own or as a flavoring agent in other foods. Cacao has been consumed in some form since at least the Olmec civilization (19th-11th century BCE), and the majority of Mesoamerican people – including the Maya and Aztecs – made chocolate beverages. The seeds of the cacao tree have an intense bitter taste and must be fermented to develop the flavor. After fermentation, the seeds are dried, cleaned, and roasted. The shell is removed to produce cocoa nibs, which are then ground to cocoa mass, unadulterated chocolate in rough form. Once the cocoa mass is liquefied by heating, it is called chocolate liquor. The liquor may also be cooled and processed into its two components: cocoa solids and cocoa butter. Baking chocolate, also called bitter chocolate, contains cocoa solids and cocoa butter in varying proportions, without any added sugar. Powdered baking cocoa, which contains more fiber than cocoa butter, can be processed with alkali to produce Dutch cocoa. Much of the chocolate consumed today is in the form of sweet chocolate, a combination of cocoa solids, cocoa butter or added vegetable oils, and sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids. Chocolate is one of the most popular food types and flavors in the world, and many foodstuffs involving chocolate exist, particularly desserts, including cakes, pudding, mousse, chocolate brownies, and chocolate chip cookies. Many candies are filled with or coated with sweetened chocolate. Chocolate bars, either made of solid chocolate or other ingredients coated in chocolate, are eaten as snacks.
Gifts of chocolate molded into different shapes (such as eggs, hearts, coins) are traditional on certain Western holidays, including Christmas, Easter, Valentine's Day, and Hanukkah. Chocolate is also used in cold and hot beverages, such as chocolate milk and hot chocolate, and in some alcoholic drinks, such as creme de cacao. Although cocoa originated in the Americas, West African countries, particularly Côte d'Ivoire and Ghana, are the leading producers of cocoa in the 21st century, accounting for some 60% of the world cocoa supply. With some two million children involved in the farming of cocoa in West Africa, child slavery and trafficking associated with the cocoa trade remain major concerns. A 2018 report argued that international attempts to improve conditions for children were doomed to failure because of persistent poverty, absence of schools, increasing world cocoa demand, more intensive farming of cocoa, and continued exploitation of child labor. ## History ### Mesoamerican usage Chocolate has been prepared as a drink for nearly all of its history. For example, one vessel found at an Olmec archaeological site on the Gulf Coast of Veracruz, Mexico, dates chocolate's preparation by pre-Olmec peoples as early as 1750 BC. On the Pacific coast of Chiapas, Mexico, a Mokaya archaeological site provides evidence of cocoa beverages dating even earlier, to 1900 BC. The residues and the kind of vessel in which they were found indicate the initial use of cocoa was not simply as a beverage; the white pulp around the cocoa beans was likely used as a source of fermentable sugars for an alcoholic drink. An early Classic-period (460–480 AD) Maya tomb from the site of Rio Azul had vessels bearing the Maya glyph for cocoa with residue of a chocolate drink, suggesting the Maya were drinking chocolate around 400 AD. Documents in Maya hieroglyphs stated chocolate was used for ceremonial purposes in addition to everyday life.
The Maya grew cacao trees in their backyards and used the cocoa seeds the trees produced to make a frothy, bitter drink. By the 15th century, the Aztecs had gained control of a large part of Mesoamerica and had adopted cocoa into their culture. They associated chocolate with Quetzalcoatl, who, according to one legend, was cast away by the other gods for sharing chocolate with humans, and identified its extrication from the pod with the removal of the human heart in sacrifice. In contrast to the Maya, who liked their chocolate warm, the Aztecs drank it cold, seasoning it with a broad variety of additives, including the petals of the Cymbopetalum penduliflorum tree, chile pepper, allspice, vanilla, and honey.

The Aztecs were unable to grow cocoa themselves, as their home in the Mexican highlands was unsuitable for it, so chocolate was a luxury imported into the empire. Those who lived in areas ruled by the Aztecs were required to offer cocoa seeds in payment of the tax they deemed "tribute". Cocoa beans were often used as currency. For example, the Aztecs used a system in which one turkey cost 100 cocoa beans and one fresh avocado was worth three beans.

The Maya and Aztecs associated cocoa with human sacrifice, and chocolate drinks specifically with sacrificial human blood. The Spanish royal chronicler Gonzalo Fernández de Oviedo y Valdés described a chocolate drink he had seen in Nicaragua in 1528, mixed with achiote: "because those people are fond of drinking human blood, to make this beverage seem like blood, they add a little achiote, so that it then turns red. ... and part of that foam is left on the lips and around the mouth, and when it is red for having achiote, it seems a horrific thing, because it seems like blood itself."

### European adaptation

Until the 16th century, no European had ever heard of the popular drink from the Central American peoples.
Christopher Columbus and his son Ferdinand encountered the cocoa bean on Columbus's fourth voyage to the Americas on 15 August 1502, when he and his crew stole a large native canoe that proved to contain cocoa beans among other goods for trade. Spanish conquistador Hernán Cortés may have been the first European to encounter chocolate as a drink, as the frothy beverage was part of the after-dinner routine of Montezuma. José de Acosta, a Spanish Jesuit missionary who lived in Peru and then Mexico in the later 16th century, wrote of its growing influence on the Spaniards:

Although bananas are more profitable, cocoa is more highly esteemed in Mexico. . . Cocoa is a smaller fruit than almonds and thicker, which toasted do not taste bad. It is so prized among the Indians and even among Spaniards. . . because since it is a dried fruit it can be stored for a long time without deterioration, and they bring ships loaded with them from the province of Guatemala. . . It also serves as currency, because with five cocoas you can buy one thing, with thirty another, and with a hundred something else, without there being contradiction; and they give these cocoas as alms to the poor who beg for them. The principal product of this cocoa is a concoction which they make that they call "chocolate," which is a crazy thing treasured in that land, and those who are not accustomed are disgusted by it, because it has a foam on top and a bubbling like that of feces, which certainly takes a lot to put up with. Anyway, it is the prized beverage which the Indians offer to nobles who come to or pass through their lands; and the Spaniards, especially Spanish women born in those lands, die for black chocolate. This aforementioned chocolate is said to be made in various forms and temperaments, hot, cold, and lukewarm. They are wont to use spices and much chili; they also make it into a paste, and it is said that it is a medicine to treat coughs, the stomach, and colds.
Whatever may be the case, in fact those who have not been reared on this opinion are not appetized by it.

While Columbus had taken cocoa beans with him back to Spain, chocolate made no impact until Spanish friars introduced it to the Spanish court. After the Spanish conquest of the Aztecs, chocolate was imported to Europe. There, it quickly became a court favorite. It was still served as a beverage, but the Spanish added sugar, as well as honey (the original sweetener used by the Aztecs for chocolate), to counteract the natural bitterness. Vanilla, another indigenous American introduction, was also a popular additive, with pepper and other spices sometimes used to give the illusion of a more potent vanilla flavor. These spices tended to unsettle the European constitution; the Encyclopédie states, "The pleasant scent and sublime taste it imparts to chocolate have made it highly recommended; but a long experience having shown that it could potentially upset one's stomach", which is why chocolate without vanilla was sometimes referred to as "healthy chocolate".

By 1602, chocolate had made its way from Spain to Austria. By 1662, Pope Alexander VII had declared that religious fasts were not broken by consuming chocolate drinks. Within about a hundred years, chocolate established a foothold throughout Europe.

The new craze for chocolate brought with it a thriving slave market, as between the early 1600s and late 1800s the laborious and slow processing of the cocoa bean was manual. Cocoa plantations spread as the English, Dutch, and French colonized and planted. With the depletion of Mesoamerican workers, largely due to disease, cocoa production was often the work of poor wage laborers and African slaves. Wind-powered and horse-drawn mills were used to speed production, augmenting human labor. Heating the working areas of the table-mill, an innovation that emerged in France in 1732, also assisted in extraction.
#### Solid chocolate

New processes that sped the production of chocolate emerged early in the Industrial Revolution. In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. A few years thereafter, in 1828, he created a press to remove about half the natural fat (cocoa butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate. Known as "Dutch cocoa", this machine-pressed chocolate was instrumental in the transformation of chocolate to its solid form when, in 1847, English chocolatier Joseph Fry discovered a way to make chocolate moldable by mixing cocoa powder and sugar with melted cocoa butter. His chocolate factory, Fry's of Bristol, England, subsequently began mass-producing chocolate bars; Fry's Chocolate Cream, launched in 1866, became very popular.

Milk had sometimes been used as an addition to chocolate beverages since the mid-17th century, but in 1875 Swiss chocolatier Daniel Peter invented milk chocolate by mixing a powdered milk developed by Henri Nestlé with the liquor. In 1879, the texture and taste of chocolate were further improved when Rodolphe Lindt invented the conching machine.

Besides Nestlé, several notable chocolate companies had their start in the late 19th and early 20th centuries. Rowntree's of York set up and began producing chocolate in 1862, after buying out the Tuke family business. Cadbury was manufacturing boxed chocolates in England by 1868. Manufacturing their first Easter egg in 1875, Cadbury created the modern chocolate Easter egg after developing a pure cocoa butter that could easily be molded into smooth shapes. In 1893, Milton S. Hershey purchased chocolate processing equipment at the World's Columbian Exposition in Chicago and soon began the career of Hershey's chocolates with chocolate-coated caramels.
### Introduction to the United States

The Baker Chocolate Company, which makes Baker's Chocolate, is the oldest producer of chocolate in the United States. In 1765, Dr. James Baker and John Hannon founded the company in Boston. Using cocoa beans from the West Indies, the pair built their chocolate business, which is still in operation. White chocolate was first introduced to the U.S. in 1946 by Frederick E. Hebert of Hebert Candies in Shrewsbury, Massachusetts, near Boston, after he had tasted "white coat" candies while traveling in Europe.

### Etymology

Cocoa, pronounced by the Olmecs as kakawa, dates to 1000 BC or earlier. The word "chocolate" entered the English language from Spanish in about 1600. The word entered Spanish from the word chocolātl in Nahuatl, the language of the Aztecs. The origin of the Nahuatl word is uncertain, as it does not appear in any early Nahuatl source, where the word for chocolate drink is cacahuatl, "cocoa water". It is possible that the Spaniards coined the word (perhaps in order to avoid caca, a vulgar Spanish word for "faeces") by combining the Yucatec Mayan word chocol, "hot", with the Nahuatl word atl, "water". A widely cited proposal, that the word derives from an unattested xocolatl meaning "bitter drink", is unsupported: the change from x- to ch- is unexplained, as is the -l-. Another proposed etymology derives it from the word chicolatl, meaning "beaten drink", which may derive from the word for the frothing stick, chicoli. Other scholars reject all these proposals, considering the origin of the first element of the name to be unknown. The term "chocolatier", for a chocolate confection maker, is attested from 1888.

## Types

Several types of chocolate can be distinguished. Pure, unsweetened chocolate, often called "baking chocolate", contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, which combines chocolate with sugar.
### By cocoa content

#### Raw chocolate

Raw chocolate is chocolate produced primarily from unroasted cocoa beans.

#### Dark

Dark chocolate is produced by adding fat and sugar to the cocoa mixture. The U.S. Food and Drug Administration calls this "sweet chocolate", and requires a 15% concentration of chocolate liquor. European rules specify a minimum of 35% cocoa solids. A higher amount of cocoa solids indicates more bitterness. Semisweet chocolate is dark chocolate with low sugar content. Bittersweet chocolate is chocolate liquor to which some sugar (typically a third), more cocoa butter, and vanilla are added. It has less sugar and more liquor than semisweet chocolate, but the two are interchangeable in baking. Properly stored, it keeps for up to two years. As of 2017, there is no high-quality evidence that dark chocolate affects blood pressure significantly or provides other health benefits.

#### Milk

Milk chocolate is sweet chocolate that also contains milk powder or condensed milk. In the UK and Ireland, milk chocolate must contain a minimum of 20% total dry cocoa solids; in the rest of the European Union, the minimum is 25%.

#### White

White chocolate, although similar in texture to milk and dark chocolate, does not contain the cocoa solids that impart a dark color. In 2002, the US Food and Drug Administration established a standard for white chocolate as the "common or usual name of products made from cocoa fat (i.e., cocoa butter), milk solids, nutritive carbohydrate sweeteners, and other safe and suitable ingredients, but containing no nonfat cocoa solids".

### By application

#### Baking chocolate

Baking chocolate, or cooking chocolate, is chocolate intended to be used for baking and in sweet foods, and may or may not be sweetened. Dark chocolate, milk chocolate, and white chocolate are all produced and marketed as baking chocolate.
However, lower-quality baking chocolate may be less flavorful than higher-quality chocolate, and may have a different mouthfeel. Poorly tempered or untempered chocolate may have whitish spots on the dark chocolate part, called chocolate bloom; this indicates that sugar or fat has separated due to poor storage. It is not toxic and can be safely consumed.

#### Modeling chocolate

Modeling chocolate is a chocolate paste made by melting chocolate and combining it with corn syrup, glucose syrup, or golden syrup.

## Production

Roughly two-thirds of the entire world's cocoa is produced in West Africa, with 43% sourced from Côte d'Ivoire, where, as of 2007, child labor was a common practice to obtain the product. According to the World Cocoa Foundation, in 2007 some 50 million people around the world depended on cocoa as a source of livelihood. As of 2007 in the UK, most chocolatiers purchased their chocolate from chocolate makers, to melt, mold, and package to their own design. According to the WCF's 2012 report, the Ivory Coast is the largest producer of cocoa in the world.

The two main occupations associated with creating chocolate candy are chocolate maker and chocolatier. Chocolate makers use harvested cocoa beans and other ingredients to produce couverture chocolate (covering). Chocolatiers use the finished couverture to make chocolate candies (bars, truffles, etc.).

Production costs can be decreased by reducing cocoa solids content or by substituting cocoa butter with another fat. Cocoa growers object to allowing the resulting food to be called "chocolate", due to the risk of lower demand for their crops.

### Genome

The sequencing in 2010 of the genome of the cacao tree may allow yields to be improved.
Due to concerns about global warming effects on lowland climate in the narrow band of latitudes where cocoa is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated and the University of California, Berkeley, conducted genomic research in 2017–18 to improve the survivability of cacao plants in hot climates.

### Cacao varieties

Chocolate is made from cocoa beans, the dried and fermented seeds of the cacao tree (Theobroma cacao), a small, 4–8 m (15–26 ft) tall evergreen tree native to the deep tropical region of the Americas. Recent genetic studies suggest the most common genotype of the plant originated in the Amazon basin and was gradually transported by humans throughout South and Central America. Early forms of another genotype have also been found in what is now Venezuela. The scientific name, Theobroma, means "food of the gods". The fruit, called a cocoa pod, is ovoid, 15–30 cm (6–12 in) long and 8–10 cm (3–4 in) wide, ripening yellow to orange, and weighing about 500 g (1.1 lb) when ripe.

Cacao trees are small understory trees that need rich, well-drained soils. They naturally grow within 20° of either side of the equator because they need about 2,000 mm of rainfall a year and temperatures in the range of 21 to 32 °C (70 to 90 °F). Cacao trees cannot tolerate a temperature lower than 15 °C (59 °F). The three main varieties of cocoa beans used in chocolate are criollo, forastero, and trinitario.

### Processing

Cocoa pods are harvested by cutting them from the tree using a machete, or by knocking them off the tree using a stick. It is important to harvest the pods when they are fully ripe, because if the pod is unripe, the beans will have a low cocoa butter content, or low sugar content, reducing the ultimate flavor.

#### Microbial fermentation

The beans (which are sterile within their pods) and their surrounding pulp are removed from the pods and placed in piles or bins to ferment.
Micro-organisms, present naturally in the environment, ferment the pectin-containing material. Yeasts produce ethanol, lactic acid bacteria produce lactic acid, and acetic acid bacteria produce acetic acid. In some cocoa-producing regions, an association between filamentous fungi and bacteria (called "cocobiota") acts to produce metabolites beneficial to human health when consumed. The fermentation process, which takes up to seven days, also produces several flavor precursors that eventually provide the chocolate taste.

After fermentation, the beans must be dried to prevent mold growth. Climate and weather permitting, this is done by spreading the beans out in the sun for five to seven days. In some growing regions (for example, Tobago), the dried beans are then polished for sale by "dancing the cocoa": spreading the beans onto a floor, adding oil or water, and shuffling the beans against each other using bare feet.

The dried beans are then transported to a chocolate manufacturing facility. The beans are cleaned (removing twigs, stones, and other debris), roasted, and graded. Next, the shell of each bean is removed to extract the nib. The nibs are ground and liquefied, resulting in pure chocolate liquor. The liquor can be further processed into cocoa solids and cocoa butter.

#### Moist incubation

The beans are dried without fermentation. The nibs are removed and hydrated in an acidic solution. They are then heated for 72 hours and dried again. Gas chromatography/mass spectrometry showed that the incubated chocolate had higher levels of Strecker aldehydes and lower levels of pyrazines.

### Blending

Chocolate liquor is blended with cocoa butter in varying quantities to make different types of chocolate or couverture.
The basic blends of ingredients for the various types of chocolate (in order of highest quantity of cocoa liquor first) are:

- Dark chocolate: sugar, cocoa butter, cocoa liquor, and (sometimes) vanilla
- Milk chocolate: sugar, cocoa butter, cocoa liquor, milk or milk powder, and vanilla
- White chocolate: sugar, cocoa butter, milk or milk powder, and vanilla

Usually, an emulsifying agent, such as soy lecithin, is added, though a few manufacturers prefer to exclude this ingredient for purity reasons and to remain GMO-free, sometimes at the cost of a perfectly smooth texture. Some manufacturers are now using PGPR, an artificial emulsifier derived from castor oil that allows them to reduce the amount of cocoa butter while maintaining the same mouthfeel. The texture is also heavily influenced by processing, specifically conching (see below). More expensive chocolate tends to be processed longer and thus has a smoother texture and mouthfeel, regardless of whether emulsifying agents are added.

Different manufacturers develop their own "signature" blends based on the above formulas, but varying proportions of the different constituents are used. The finest plain dark chocolate couverture contains at least 70% cocoa (both solids and butter), whereas milk chocolate usually contains up to 50%. High-quality white chocolate couverture contains only about 35% cocoa butter.

Producers of high-quality, small-batch chocolate argue that mass production produces bad-quality chocolate. Some mass-produced chocolate contains much less cocoa (as low as 7% in many cases) and fats other than cocoa butter. Vegetable oils and artificial vanilla flavor are often used in cheaper chocolate to mask poorly fermented and/or roasted beans.
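The cocoa figures quoted above (at least 70% for fine dark couverture, about 35% cocoa butter for white) are combined mass fractions of cocoa solids and cocoa butter in the blend. A minimal sketch of that arithmetic; the function name and the recipe figures are hypothetical, chosen only to land in the 70% dark-couverture range:

```python
def cocoa_percentage(recipe):
    """Labeled cocoa %: combined mass fraction of cocoa solids and cocoa butter.

    `recipe` maps ingredient names to masses in any consistent unit.
    """
    total = sum(recipe.values())
    cocoa = recipe.get("cocoa_solids", 0) + recipe.get("cocoa_butter", 0)
    return 100 * cocoa / total

# Hypothetical 70% dark couverture blend, in grams per 100 g of chocolate
dark = {"cocoa_solids": 45, "cocoa_butter": 25, "sugar": 29.5, "vanilla": 0.5}
print(round(cocoa_percentage(dark)))  # 70
```

Note that white chocolate scores on cocoa butter alone, since it contains no cocoa solids.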
In 2007, the Chocolate Manufacturers Association in the United States, whose members include Hershey, Nestlé, and Archer Daniels Midland, lobbied the Food and Drug Administration (FDA) to change the legal definition of chocolate to let them substitute partially hydrogenated vegetable oils for cocoa butter, in addition to using artificial sweeteners and milk substitutes. Currently, the FDA does not allow a product to be referred to as "chocolate" if the product contains any of these ingredients. In the EU, a product can be sold as chocolate if it contains up to 5% vegetable oil, and must be labeled as "family milk chocolate" rather than "milk chocolate" if it contains 20% milk.

According to Canadian Food and Drug Regulations, a "chocolate product" is a food product that is sourced from at least one "cocoa product" and contains at least one of the following: "chocolate, bittersweet chocolate, semi-sweet chocolate, dark chocolate, sweet chocolate, milk chocolate, or white chocolate". A "cocoa product" is defined as a food product that is sourced from cocoa beans and contains "cocoa nibs, cocoa liquor, cocoa mass, unsweetened chocolate, bitter chocolate, chocolate liquor, cocoa, low-fat cocoa, cocoa powder, or low-fat cocoa powder".

### Conching

The penultimate process is called conching. A conche is a container filled with metal beads, which act as grinders. The refined and blended chocolate mass is kept in a liquid state by frictional heat. Chocolate before conching has an uneven and gritty texture. The conching process produces cocoa and sugar particles smaller than the tongue can detect (typically around 20 μm) and reduces rough edges, hence the smooth feel in the mouth. The length of the conching process determines the final smoothness and quality of the chocolate. High-quality chocolate is conched for about 72 hours, lesser grades for about four to six hours.
After the process is complete, the chocolate mass is stored in tanks heated to about 45 to 50 °C (113 to 122 °F) until final processing.

### Tempering

The final process is called tempering. Uncontrolled crystallization of cocoa butter typically results in crystals of varying size, some or all large enough to be seen with the naked eye. This causes the surface of the chocolate to appear mottled and matte, and causes the chocolate to crumble rather than snap when broken. The uniform sheen and crisp bite of properly processed chocolate are the results of consistently small cocoa butter crystals produced by the tempering process.

The fats in cocoa butter can crystallize in six different forms (polymorphous crystallization). The primary purpose of tempering is to assure that only the best form is present. The six different crystal forms have different properties. As a solid piece of chocolate, the cocoa butter fat particles are in a crystalline rigid structure that gives the chocolate its solid appearance. Once heated, the crystals of the polymorphic cocoa butter can break apart from the rigid structure and allow the chocolate to obtain a more fluid consistency as the temperature increases – the melting process. When the heat is removed, the cocoa butter crystals become rigid again and come closer together, allowing the chocolate to solidify. The temperature at which the crystals obtain enough energy to break apart from their rigid conformation depends on the milk fat content in the chocolate and the shape of the fat molecules, as well as the form of the cocoa butterfat. Chocolate with a higher fat content will melt at a lower temperature.

Making chocolate considered "good" is about forming as many type V crystals as possible. This provides the best appearance and texture and creates the most stable crystals, so the texture and appearance will not degrade over time. To accomplish this, the temperature is carefully manipulated during the crystallization.
Generally, the chocolate is first heated to 45 °C (113 °F) to melt all six forms of crystals. Next, the chocolate is cooled to about 27 °C (81 °F), which will allow crystal types IV and V to form. At this temperature, the chocolate is agitated to create many small crystal "seeds" which will serve as nuclei to create small crystals in the chocolate. The chocolate is then heated to about 31 °C (88 °F) to eliminate any type IV crystals, leaving just type V. After this point, any excessive heating of the chocolate will destroy the temper and this process will have to be repeated.

Other methods of chocolate tempering are used as well. The most common variant is introducing already tempered, solid "seed" chocolate. The temper of chocolate can be measured with a chocolate temper meter to ensure accuracy and consistency. A sample cup is filled with the chocolate and placed in the unit, which then displays or prints the results.

Two classic ways of manually tempering chocolate are:

- Working the molten chocolate on a heat-absorbing surface, such as a stone slab, until thickening indicates the presence of sufficient crystal "seeds"; the chocolate is then gently warmed to working temperature.
- Stirring solid chocolate into molten chocolate to "inoculate" the liquid chocolate with crystals (this method uses the already formed crystals of the solid chocolate to "seed" the molten chocolate).

Chocolate tempering machines (or temperers) with computer controls can be used for producing consistently tempered chocolate. In particular, continuous tempering machines are used in large-volume applications. Various methods and apparatuses for continuous flow tempering exist. In general, molten chocolate coming in at 40–50 °C is cooled in heat exchangers to crystallization temperatures of about 26–30 °C, passed through a tempering column consisting of spinning plates to induce shear, then warmed slightly to re-melt undesirable crystal formations.
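The three-stage melt–cool–rewarm schedule described above can be summarized as data. This sketch only records the temperatures quoted in the text for dark chocolate; the names and structure are illustrative, not a controller for real equipment:

```python
# Illustrative tempering stages for dark chocolate, using the
# temperatures given in the text: (stage name, target °C, purpose).
TEMPERING_STAGES = [
    ("melt", 45, "melt all six crystal forms"),
    ("cool", 27, "let type IV and V crystals form while agitating"),
    ("rewarm", 31, "melt out type IV crystals, leaving only type V"),
]

def schedule_summary(stages):
    """Render each tempering stage as a one-line instruction."""
    return [f"{name}: bring chocolate to {temp} C to {purpose}"
            for name, temp, purpose in stages]

for step in schedule_summary(TEMPERING_STAGES):
    print(step)
```

The key point the data structure captures is the ordering: the rewarm target (31 °C) sits between the type IV and type V melting points, which is why it removes the unwanted form while preserving type V.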
### Shaping

Chocolate is molded in different shapes for different uses:

- Chocolate bars (tablets) are rectangular blocks of chocolate meant to be broken down to cubes (or other predefined shapes), which can then be used for consumption, cooking, and baking. The term is also used for combination bars, which are a type of candy bar.
- Chocolate chips are small pieces of chocolate, usually drop-like, which are meant for decoration and baking.
- Pistoles, callets, and fèves are small, coin-like or bean-like pieces of chocolate meant for baking and patisserie applications (also see Pistole (coin) and Fève (trinket)).
- Chocolate blocks are large, cuboid chunks of chocolate meant for professional use and further processing.
- Other, more specialized shapes for chocolate include sticks, curls, and hollow semi-spheres.

### Storage

Chocolate is very sensitive to temperature and humidity. Ideal storage temperatures are between 15 and 17 °C (59 and 63 °F), with a relative humidity of less than 50%. If refrigerated or frozen without containment, chocolate can absorb enough moisture to cause a whitish discoloration, the result of fat or sugar crystals rising to the surface. Various types of "blooming" effects can occur if chocolate is stored or served improperly. Chocolate bloom is caused by storage temperature fluctuating or exceeding 24 °C (75 °F), while sugar bloom is caused by temperature below 15 °C (59 °F) or excess humidity. To distinguish between the two types of bloom, one can rub the surface of the chocolate lightly: if the bloom disappears, it is fat bloom. Moving chocolate between temperature extremes can result in an oily texture. Although visually unappealing, chocolate suffering from bloom is safe for consumption and its taste is unaffected. Bloom can be reversed by retempering the chocolate or using it for any purpose that requires melting the chocolate. Chocolate is generally stored away from other foods, as it can absorb different aromas.
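The storage thresholds above translate into a simple condition check. The function below is only a sketch: its name is invented, and treating relative humidity of 50% or more as "excess humidity" is an assumption read from the ideal-range figure in the text:

```python
def storage_condition(temp_c, rel_humidity_pct):
    """Classify chocolate storage conditions using the thresholds quoted above.

    Assumption: relative humidity >= 50% counts as "excess humidity".
    """
    if temp_c > 24:
        return "fat bloom risk"    # storage above 24 °C
    if temp_c < 15 or rel_humidity_pct >= 50:
        return "sugar bloom risk"  # below 15 °C or excess humidity
    if 15 <= temp_c <= 17:
        return "ideal"             # 15-17 °C with humidity under 50%
    return "acceptable"            # safe, but above the ideal band

print(storage_condition(16, 40))  # ideal
print(storage_condition(28, 40))  # fat bloom risk
```

A refrigerator (roughly 4 °C) lands in the sugar-bloom branch, matching the warning above about refrigerating chocolate without containment.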
Ideally, chocolates are packed or wrapped and placed in proper storage with the correct humidity and temperature. Additionally, chocolate is frequently stored in a dark place or protected from light by wrapping paper. The glossy shine, snap, aroma, texture, and taste of the chocolate can show its quality and whether it was stored well.

## Composition

### Nutrition

One hundred grams of milk chocolate supplies 540 calories. It is 59% carbohydrates (52% as sugar and 3% as dietary fiber), 30% fat, and 8% protein (table). Approximately 65% of the fat in milk chocolate is saturated, mainly palmitic acid and stearic acid, while the predominant unsaturated fat is oleic acid (table). A 100-gram serving of milk chocolate is an excellent source (over 19% of the Daily Value, DV) of riboflavin, vitamin B12, and the dietary minerals manganese, phosphorus, and zinc. Chocolate is a good source (10–19% DV) of calcium, magnesium, and iron.

### Effects on health

Chocolate may be a factor for heartburn in some people because one of its constituents, theobromine, may affect the esophageal sphincter muscle in a way that permits stomach acids to enter the esophagus. Theobromine poisoning is an overdosage reaction to the bitter alkaloid, which happens more frequently in domestic animals than humans. However, daily intake of 50–100 g cocoa (0.8–1.5 g theobromine) by humans has been associated with sweating, trembling, and severe headache. Chocolate contains alkaloids such as theobromine and phenethylamine, which have physiological effects in humans, but the presence of theobromine renders it toxic to some animals, including dogs and cats.

According to a 2005 study, the average lead concentration of cocoa beans is ≤ 0.5 ng/g, which is one of the lowest reported values for a natural food. However, during cultivation and production, chocolate may absorb lead from the environment (such as from atmospheric emissions of leaded gasoline, which is still being used in Nigeria).
Reports from 2014 indicate that "chocolate might be a significant source" of lead ingestion for children if consumption is high (with dark chocolate containing higher amounts), and "one 10 g cube of dark chocolate may contain as much as 20% of the daily lead oral limit."

Chocolate and cocoa contain moderate to high amounts of oxalate, which may increase the risk of kidney stones. A few studies have documented allergic reactions from chocolate in children. Other research has shown that dark chocolate can aggravate acne in men who are prone to it. Research has also shown that consuming dark chocolate does not substantially affect blood pressure. Chocolate and cocoa are under preliminary research to determine if consumption affects the risk of certain cardiovascular diseases or cognitive abilities.

One tablespoonful (5 grams) of dry unsweetened cocoa powder has 12.1 mg of caffeine, and a 25-g single serving of dark chocolate has 22.4 mg of caffeine. Although a single 7 oz. serving of coffee may contain 80–175 mg, studies have shown psychoactive effects from caffeine doses as low as 9 mg, and a dose as low as 12.5 mg was shown to have effects on cognitive performance.

### Phytochemicals

No particular compound within chocolate has been associated with any specific health outcome; it is more likely that its observed, overall beneficial effect in humans "may owe something more to the whole than to any given part". However, the polyphenol epicatechin is regarded as potentially significant. Cocoa solids are a source of flavonoids and alkaloids, such as theobromine, phenethylamine, and caffeine.

## Labeling

Some manufacturers provide the percentage of chocolate in a finished chocolate confection as a label quoting percentage of "cocoa" or "cacao". This refers to the combined percentage of both cocoa solids and cocoa butter in the bar, not just the percentage of cocoa solids.
The Belgian AMBAO certification mark indicates that no non-cocoa vegetable fats have been used in making the chocolate. A long-standing dispute between Britain on the one hand and Belgium and France on the other, over British use of vegetable fats in chocolate, ended in 2000 with the adoption of new standards that permitted the use of up to five percent vegetable fats in clearly labelled products. This British style of chocolate has sometimes been pejoratively referred to as "vegelate". Chocolates that are organic or fair trade certified carry labels accordingly.

In the United States, some large chocolate manufacturers lobbied the federal government to permit confections containing cheaper hydrogenated vegetable oil in place of cocoa butter to be sold as "chocolate". In June 2007, in response to consumer concern about the proposal, the FDA reiterated "Cacao fat, as one of the signature characteristics of the product, will remain a principal component of standardized chocolate."

## Industry

Chocolate, prevalent throughout the world, is a steadily growing, US$50 billion-a-year worldwide business. Europe accounts for 45% of the world's chocolate revenue, and the US spent $20 billion on chocolate in 2013. Big Chocolate is the grouping of major international chocolate companies in Europe and the U.S. The U.S. companies Mars and Hershey's alone generated $13 billion a year in chocolate sales and accounted for two-thirds of U.S. production in 2004. Despite the expanding reach of the chocolate industry internationally, cocoa farmers and labourers in the Ivory Coast are often unaware of the uses of the beans; the high cost of chocolate products in the Ivory Coast makes them inaccessible to the majority of the population, who do not know what chocolate tastes like.

### Manufacturers

Chocolate manufacturers produce a range of products from chocolate bars to fudge.
Large manufacturers of chocolate products include Cadbury (the world's largest confectionery manufacturer), Ferrero, Guylian, The Hershey Company, Lindt & Sprüngli, Mars, Incorporated, Milka, Neuhaus and Suchard. Guylian is best known for its chocolate sea shells; Cadbury for its Dairy Milk and Creme Egg. The Hershey Company, the largest chocolate manufacturer in North America, produces the Hershey Bar and Hershey's Kisses. Mars, Incorporated, a large privately owned U.S. corporation, produces Mars Bar, Milky Way, M&M's, Twix, and Snickers. Lindt is known for its truffle balls and gold foil-wrapped Easter bunnies.

Food conglomerates Nestlé SA and Kraft Foods both have chocolate brands. Nestlé acquired Rowntree's in 1988 and now markets chocolates under its own brand, including Smarties (a chocolate candy) and Kit Kat (a chocolate bar); Kraft Foods, through its 1990 acquisition of Jacobs Suchard, now owns Milka and Suchard. In February 2010, Kraft also acquired British-based Cadbury; Fry's, Trebor Basset and the fair trade brand Green & Black's also belong to the group.

### Child labor in cocoa harvesting

The widespread use of children in cocoa production is controversial, not only because of concerns about child labor and exploitation, but also because up to 12,000 of the 200,000 children working in the Ivory Coast, the world's biggest producer of cocoa, may be victims of trafficking or slavery. Most attention on this subject has focused on West Africa, which collectively supplies 69 percent of the world's cocoa, and the Ivory Coast in particular, which supplies 35 percent. Thirty percent of children under age 15 in sub-Saharan Africa are child laborers, mostly in agricultural activities including cocoa farming. Major chocolate producers, such as Nestlé, buy cocoa at commodities exchanges where Ivorian cocoa is mixed with other cocoa.
In 2009, Salvation Army International Development (SAID) UK stated that 12,000 children had been trafficked on cocoa farms in the Ivory Coast of Africa, where half of the world's chocolate is made. SAID UK states that it is these child slaves who are likely to be working in "harsh and abusive" conditions for the production of chocolate, and an increasing number of health-food and anti-slavery organisations are highlighting and campaigning against the use of trafficking in the chocolate industry. As of 2017, approximately 2.1 million children in Ghana and Côte d'Ivoire were involved in farming cocoa, carrying heavy loads, clearing forests, and being exposed to pesticides.

According to Sona Ebai, the former secretary-general of the Alliance of Cocoa Producing Countries: "I think child labor cannot be just the responsibility of industry to solve. I think it's the proverbial all-hands-on-deck: government, civil society, the private sector. And there, you need leadership."

A 3-year pilot program reported in 2018 – conducted by Nestlé with 26,000 farmers mostly located in Côte d'Ivoire – observed a 51% decrease in the number of children doing hazardous jobs in cocoa farming. The US Department of Labor formed the Child Labor Cocoa Coordinating Group as a public-private partnership with the governments of Ghana and Côte d'Ivoire to address child labor practices in the cocoa industry. The International Cocoa Initiative, involving major cocoa manufacturers, established the Child Labor Monitoring and Remediation System, intended to monitor thousands of farms in Ghana and Côte d'Ivoire for child labor conditions, but the program reached less than 20% of the child laborers. Despite these efforts, goals to reduce child labor in West Africa by 70% before 2020 were frustrated by persistent poverty, absence of schools, expansion of cocoa farmland, and increased demand for cocoa.
In April 2018, the Cocoa Barometer report stated: "Not a single company or government is anywhere near reaching the sector-wide objective of the elimination of child labor, and not even near their commitments of a 70% reduction of child labor by 2020".

### Fair trade

In the 2000s, some chocolate producers began to engage in fair trade initiatives to address concerns about the marginalization of cocoa laborers in developing countries. Traditionally, Africa and other developing countries received low prices for their exported commodities such as cocoa, which caused poverty to abound. Fairtrade seeks to establish a system of direct trade from developing countries to counteract this unfair system. One solution for fair labor practices is for farmers to become part of an agricultural cooperative. Cooperatives pay farmers a fair price for their cocoa, so farmers have enough money for food, clothes, and school fees. One of the main tenets of fair trade is that farmers receive a fair price, but this does not mean that the larger amount of money paid for fair trade cocoa goes directly to the farmers. The effectiveness of fair trade has been questioned. In a 2014 article, The Economist stated that workers on fair trade farms have a lower standard of living than those on similar farms outside the fair trade system.

## Usage and consumption

### Bars

Chocolate is sold in chocolate bars, which come in dark chocolate, milk chocolate and white chocolate varieties. Some bars that are mostly chocolate have other ingredients blended into the chocolate, such as nuts, raisins, or crisped rice. Chocolate is also used as an ingredient in a huge variety of bars, which typically contain various confectionery ingredients (e.g., nougat, wafers, caramel, nuts) coated in chocolate.

### Coating and filling

Chocolate is used as a flavouring product in many desserts, such as chocolate cakes, chocolate brownies, chocolate mousse and chocolate chip cookies.
Numerous types of candy and snacks contain chocolate, either as a filling (e.g., M&M's) or as a coating (e.g., chocolate-coated raisins or chocolate-coated peanuts).

### Beverages

Some non-alcoholic beverages contain chocolate, such as chocolate milk, hot chocolate, chocolate milkshakes and tejate. Some alcoholic liqueurs are flavoured with chocolate, such as chocolate liqueur and creme de cacao. Chocolate is a popular flavour of ice cream and pudding, and chocolate sauce is commonly added as a topping on ice cream sundaes. The caffè mocha is an espresso beverage containing chocolate.

## Popular culture

### Religious and cultural links

Chocolate is associated with festivals such as Easter, when moulded chocolate rabbits and eggs are traditionally given in Christian communities, and Hanukkah, when chocolate coins are given in Jewish communities. Chocolate hearts and chocolate in heart-shaped boxes are popular on Valentine's Day and are often presented along with flowers and a greeting card. In 1868, Cadbury created a decorated box of chocolates in the shape of a heart for Valentine's Day, and boxes of filled chocolates quickly became associated with the holiday. Chocolate is an acceptable gift on other holidays and on occasions such as birthdays. Many confectioners make holiday-specific chocolate candies; chocolate Easter eggs or rabbits and Santa Claus figures are two examples. Such confections can be solid, hollow, or filled with sweets or fondant.

### Books and film

Chocolate has been the center of several successful book and film adaptations. In 1964, Roald Dahl published a children's novel titled Charlie and the Chocolate Factory. The novel centers on a poor boy named Charlie Bucket who takes a tour through the greatest chocolate factory in the world, owned by the eccentric Willy Wonka. Two film adaptations of the novel were produced: Willy Wonka & the Chocolate Factory (1971) and Charlie and the Chocolate Factory (2005).
A third adaptation, an origin prequel film titled Wonka, is scheduled for release in 2023. Like Water for Chocolate, a 1989 love story by novelist Laura Esquivel, was adapted to film in 1992. Chocolat, a 1999 novel by Joanne Harris, was adapted into a film of the same name, released a year later.
https://en.wikipedia.org/wiki/Chocolate#Classification
In this three-part pop-up course, Dr. Beeta Baghoolizadeh will discuss the making and erasing of race and racial markers in Iran during the nineteenth and twentieth centuries. Despite centuries of enslaving Central Asian, Caucasian, South Asian, and East African peoples within Iran, the term slave came to evoke the image of enslaved East Africans in the Iranian imaginary by the late nineteenth century. Blackness, in turn, became synonymous with enslavement, despite the many examples of free East Africans in Iran. This pop-up course charts the changing meanings of Blackness during the nineteenth and twentieth centuries through the lens of slavery and abolition in three parts: the changing geographies of enslavement, the advent of the camera and memorialization projects in Iran, and finally, the framing of blackface minstrelsy in the absence of slavery.

Thursday, September 17, 12:15–1:15 pm: A Language of Color: Race and Racialization in Nineteenth-Century Persian
Thursday, September 24, 12:15–1:15 pm: Photographing and Seeing Race, 1840–1906
Thursday, October 1, 12:15–1:15 pm: The Afterlife of Slavery in Blackface Theater, 1940s–1979

A Zoom link will be circulated the morning of each session.
https://eblasts.bgcdml.net/2018_ap_events/f20-wcth-baghoolizadeh/
Fully furnished off-grid style hunting cabin that sleeps 8-11 people. Includes a mud room, full bath, living area, propane heat stove, a kitchen with propane cook stove/oven and sleeping quarters. Entire property is surrounded by Keeney Swamp State Forest, which covers a total of 2,408 acres and also borders the 708 acre Keeney Swamp Wildlife Management Area. Excellent area for hunting, trapping and hiking.
Size: 1.9 Acres
Price: $28,900
Town: Birdsall
County: Allegany

40 acres bordering State Forest in German NY
465 N. End Road
Five Streams State Forest is 6,353 acres of land dotted by streams, ponds and seasonal roads. This 40 acres sits between the state land and private land. The state land begins with a large pond accessible from the acreage. This property comes with a newer driveway, electric, a shed and a large metal cover-all to park a camper/RV. This is all on a year-round maintained road.
Size: 40 Acres
Price: $59,900
Town: German
County: Chenango

149 acres Commercial Blueberry Farm with Chinese Chestnut Trees in Humphrey NY
Button Road
Great investment opportunity! Consists of 5 acres of fenced-in commercial blueberry patch with yearly income. Blueberry bushes produce 15-20 pounds per bush per season. These blueberry bushes are well-maintained with a trickle irrigation system. Lower 5 acre field consists of Chinese chestnuts, a quarter acre of cranberries and more blueberry bushes. Remaining land is mostly hardwoods with some pine trees.
Size: 149.2 Acres
Price: $244,900
Town: Humphrey
County: Cattaraugus

21 acres Hunting Land in Solon NY
Wildman Road, bordering State Land
What a nice piece of ground just off of Rt 81. There is some topography leading down to the streams that join on the property. There are even Brook Trout in the creek. Some hemlock hollows along the stream bed, then a rise through the woods to a State land boundary. Should building a home or cabin be on your agenda, there are a couple of nice building sites up along the road frontage.
Size: 21.12 Acres
Price: $44,900
Town: Solon
County: Cortland

30 acres Woodlands in Naples NY
7879 State Route 245, bordering State Land
Just over a mile outside the village of Naples, this beautifully wooded and rugged property is located at Conklin's Gully and borders High Tor Wildlife Management Area (6,800 acres). There is an old driveway on the property near the Conklin Gully parking lot that leads up to a great building site. Very private with great views. A perfect spot for a home or camp.
Size: 30 Acres
Price: $84,900
Town: Naples
County: Ontario

5 acres Cabin with Storage Shed in Florence NY
Houlahan Road, bordering State Forest
A North Country Paradise! Secluded land and cabin nestled deep in the woodlands with all the comforts, including a small kitchen, bathroom, living area and comfortable (stand-up) sleeping loft. Borders 14,000 acres of state forest full of ponds, streams and trails. Experience all the Tug Hill Region has to offer as you are minutes from the Redfield Reservoir, Mad River and Salmon River.
Size: 5.62 Acres
Price: $54,900
Town: Florence
County: Oneida

87 acres House with Farm Fields and Woods in Summerhill NY
13430 State Route 90
This home has 1,200 sq. ft. with 2 bedrooms and 1 bath. An attached carport and a small barn to store your equipment. 87 acres with approximately 15 acres of fields; the south end of the property is mostly wooded with some standing marketable timber. Fields have been fallow for a few years but have been planted in corn, buckwheat and some potatoes in the past. Summer Hill State Forest is located directly across the road.
Size: 87.9 Acres
Price: $179,900
Town: Summerhill
County: Cayuga

38 acres Hunting Land in Ward NY
Waugh Brook Road, bordering State Forest
Nice mixture of pine trees, apple trees, open meadow, and hardwoods. Perfect area and location for a camp or house. Very accessible and great for hunting, hiking and riding ATVs. Abundance of wildlife sign throughout the property.
Some tree stands on the property. Enough open land for food plots. The south and east sides of the property border State Forest, which gives you more recreational opportunities.
Size: 38 Acres
Price: $49,900
Town: Ward
County: Allegany

135 acres House with Airplane Hangar and Airstrip in Frewsburg NY
11791 Bone Run Road
135 acres with a beautiful 4 bedroom, 2.5 bath house, airplane hangar, and airstrip in Frewsburg NY bordering South Valley State Forest. The house has been totally renovated, including new hardwood and tile flooring. Large covered patio with electric, perfect for family events. Approximately 1,850 foot airstrip with an insulated airplane hangar with a hydraulic bi-folding door for easy access. Near the Allegheny Reservoir.
Size: 135 Acres
Price: $549,900
Town: Frewsburg
County: Cattaraugus

21 acres Hunting Land and Building Lot in Friendship NY
5492 Pigtail Road
This 21.9 acres consists of mostly hardwoods with some pine trees scattered throughout. Roughly 5 acres of open field to build your cabin or house. Land lays mostly flat with some slope towards the back of the property, with 2 different creeks running through. 950 ft. of road frontage makes accessing the property easy and convenient. Across the street from State land for more recreational opportunities.
Size: 21.9 Acres
Price: $39,900
Town: Friendship
County: Allegany

120 acres Hunting Land on County Road 16 in West Almond NY, bordering State Land
120 acres in West Almond NY perfect for the sportsman! Excellent hunting property! This land borders a significant amount of State land. There are endless deer and other wildlife trails and signs throughout. Several big bucks have been taken in the area. The property also borders a large hay field. Miles of beautiful views, and there is a year-round flowing stream.
Size: 120.7 Acres
Price: $119,900
Town: West Almond
County: Allegany

43 acres Log Cabin in German NY
774 Skillman Hoffman Road, bordering State Forest
Well-maintained custom off-grid log cabin on 43 acres bordering NY State land on 2 sides. The 24 x 30 log cabin has 3 bedrooms, a nice living room with a large fireplace, a separate wood stove and an eat-in kitchen. Clever plumbing pumps water out of a shallow well into a holding tank, then through a portable on-demand hot water heater for showers/rinsing. Use the outhouse for a bathroom. Propane runs the stove and refrigerator. Sits on a poured concrete pad.
Size: 43 Acres
Price: $129,900
Town: German
County: Chenango

Beautiful Home on 11 acres in Bainbridge NY
442 Hohreiter Road, bordering State Forest
This beautiful ranch home with full walkout basement is a great find! Nestled high in the hills of Chenango County and completely surrounded by South Hill State Forest. You really cannot get much better than this as far as privacy goes. Includes a solid 2 car garage with poured concrete walls, and an outbuilding that was used as a maple sugar shack. Very private setting with an exceptionally well-built home!
Size: 11.9 Acres
Price: $274,900
Town: Guilford
County: Chenango

16 acres Cabin and Barn in Richford NY
Robinson Hollow Road, bordering State Forest on 3 Sides
Fully-insulated 18 x 30 cabin with 1 bedroom, full bath, a small loft and all amenities. The cabin is fully wired and has solar panels with a battery bank and a gas generator for electricity. A full septic system was recently installed. Water is supplied by a spring-fed well. The barn on the property is a well-constructed 24 x 50 with 16 foot ceilings. A unique property that would make someone a great place to get away from it all!
Size: 16.99 Acres
Price: $89,900
Town: Richford
County: Tioga

175 Acres Former Dairy Farm in Bovina NY
834 Crescent Valley Road
The 175+- acres has frontage on 2 town roads and includes some tillable acreage, pasture, hidden meadows, hayfields and plenty of woodland for hiking and hunting whitetail deer, turkey and the occasional black bear. Plus, the land borders a New York State Forest for additional land to hike and hunt.
Size: 174.91 Acres
Price: $398,900
Town: Bovina
County: Delaware

7 acres Cottage with Fields and Woods in Afton NY
132 Edwards Lane
This attractive little recreational get-away has the potential to be a year-round residence. The 24 x 24, 1 3/4 story, 1,368 sq. ft. cottage is well-built. Other structures on the site include a 10 x 14 workshop storage building and an 8 x 10 lawn tractor and tool shed. This site is quiet and private, located at the end of a dead end road, but not remote, as the small town of Afton is just 5 minutes away.
Size: 7.35 Acres
Price: $79,900
Town: Afton
County: Chenango

12 acres Building Lot bordering Dryden Lake in the Finger Lakes Region, Dryden NY
Beautiful 12 acre building lot in the Finger Lakes Region, ready for your new custom home with great views! This lot with mature evergreen trees is part of the former Dryden Lake Golf Course bordering the DEC-owned Dryden Lake. Includes a nice pond for you to stock and a wood storage shed for your equipment and tools. Enough acreage to drive golf balls, hike and hunt on your own land, or have a farmette.
Size: 12.23 Acres
Price: $148,900
Town: Dryden
County: Tompkins

19 acres Hunting Land with Camp in Virgil NY
Snyder Hill Road, bordering State Forest
Great hunting acreage with a nice camp that has a wood stove and a place to cook! Combine that with the picnic table and there it is! The owner has a nice water hole dug nearby as well. The 13 x 13 camp has a nice 6 ft. deck off the front to enjoy the property. Many woods roads throughout for walking, ATVs and hunting.
See deer, turkeys and the occasional grouse. NY State land borders two sides of the parcel.
Size: 19.97 Acres
Price: $58,900
Town: Virgil
County: Cortland

768 acres Adirondacks NY with Pond, River and Timber for Investment
A unique opportunity to own affordable large Adirondack acreage in Northern New York State. A few minutes from Paul Smith's College and a few miles from the Village of Saranac Lake and Lake Placid. Consists of 4 adjoining tax parcels surrounding Mud Pond and a couple of other ponds; also included is 1/2 mile+- of the North Branch of the Saranac River. Almost 2 miles of road frontage, which would allow for a potential subdivision.
Size: 768.24 Acres
Price: $498,900
Town: Franklin
County: Franklin

30 acre Lake in Homer NY with Frontage on the W. Branch Tioughnioga River
Own your own lake for excellent fishing! The lake is approx. 30 acres, with frontage on the W. Branch Tioughnioga River. Total acreage is 36+- acres with a gated, deeded right-of-way off NYS Route 11 leading to the private lake. Water flows into the lake from the north from a NYS-owned pond and flows out into the Tioughnioga River, where you have over 3,000 ft. of frontage to do more fishing. A great opportunity to own your own lake, camp out, fish and enjoy nature!
https://nylandquest.redbarnportal.com/property-category/bordering-state-land/
Washington, May 28 (IANS) In March 2016, NASA will send a unique Mars lander to explore the Red Planet’s deep interior to find clues about how all rocky planets, including Earth, formed and evolved.

The lander, called InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport), is about the size of a car and will be the first mission devoted to understanding the interior structure of the Red Planet. The current testing will help ensure that InSight can operate and survive deep space travel and the harsh conditions of the Martian surface. The spacecraft will lift off from Vandenberg Air Force Base in California and land on Mars about six months later, the US space agency said.

“Today, our robotic scientific explorers are paving the way, making great progress on the journey to Mars. Together, humans and robotics will pioneer Mars and the solar system,” said Jim Green, director of NASA’s Planetary Science Division.

The technical capabilities and knowledge gained from InSight and other Mars missions are crucial to NASA’s journey to Mars, which includes sending astronauts to the Red Planet in the 2030s. During the environmental testing phase at Lockheed Martin’s Space Systems facility in Denver, the lander will be exposed to extreme temperatures, vacuum conditions of nearly zero air pressure simulating interplanetary space, and a battery of other tests over the next seven months.

The first will be a thermal vacuum test in the spacecraft’s “cruise” configuration, which will be used during its seven-month journey to Mars. Other tests include vibrations simulating launch and checks for electronic interference between different parts of the spacecraft. The testing phase concludes with a second thermal vacuum test in which the spacecraft is exposed to the temperatures and atmospheric pressures it will experience as it operates on the Martian surface.
https://asianjournal.ca/nasa-testing-new-mars-lander-for-2016-landing/
Changes in serum lipid profiles caused by three regimens of interferon-free direct-acting antivirals for patients infected with hepatitis C virus

Serum low-density lipoprotein cholesterol (LDL-C) increases during treatment of chronic hepatitis C (CHC) with interferon-free direct-acting antivirals (DAAs). We sought to compare the changes in serum lipid profiles caused by three regimens. A total of 216 CHC patients were enrolled. Among 170 patients infected with hepatitis C virus (HCV) genotype 1b, 85 received daclatasvir plus asunaprevir (DCV/ASV) and 85 received sofosbuvir plus ledipasvir (SOF/LDV). Forty-six patients infected with HCV genotype 2 received sofosbuvir plus ribavirin (SOF/RBV). Serum total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol, and triglyceride were measured at baseline and at 4, 8, 12 (for all regimens), and 24 weeks (for DCV/ASV) during treatment (4w, 8w, 12w, and 24w, respectively) and at 12 and 24 weeks after treatment (p12w and p24w, respectively). In 69 (81.2%) patients who received DCV/ASV and achieved a sustained virologic response at 24 weeks after the end of treatment (SVR24), TC and LDL-C increased significantly from baseline to p24w. In 84 (98.8%) patients treated with SOF/LDV who achieved SVR24, TC and LDL-C increased significantly from baseline to 8w, and TC decreased significantly from 8w to p12w. The 45 (97.8%) patients who received SOF/RBV and achieved SVR24 showed no significant changes. At 12w, TC and LDL-C increased to a greater degree in patients receiving SOF/LDV than in those receiving DCV/ASV or SOF/RBV. During treatment with DAAs, the serum lipid profile may reflect not only recovery from the disruption of lipid metabolism induced by HCV, but also the pharmacological effects of the DAAs. Further investigation is needed to elucidate the effect of DAAs on serum lipid profiles.
You must provide medical documentation of your specific disability in order to verify accommodation eligibility under Section 504 of the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990. Documentation needs to be current in order to reflect present functioning. Additional documentation may be requested based upon your current status and your request(s). All documentation is confidential and will be kept on file at the disability services office.

You are responsible for making an appointment with a disability counselor to discuss your documentation and eligibility for accommodations, which are determined each semester. The disability services office will make the final determination of eligibility for accommodations. Disability services cannot review an accommodation request until documentation is complete. Please contact disability services if you have any questions.

Physical disabilities may include but are not limited to: mobility impairment, chemical sensitivity, cancer, and AIDS. Neurologists or other medical specialists can provide information regarding diagnosis and treatment of individuals with physical disabilities.

*Please Note: Documentation guidelines are to be viewed as recommendations for qualified clinicians and students when submitting documentation. Documentation is reviewed on an individualized basis, and accommodations are made in coordination with disability services and the student.

Recommended documentation should include:

- The most recent date of assessment (or reassessment), if the condition is progressive or if the student expects a change in condition.
- A specific diagnosis of a disability.
- A review of pertinent history and the date of first diagnosis.
- A description of the current symptoms, fluctuating conditions/symptoms and prognosis.
- A summary of the assessment procedures used to make the diagnosis.
- Medical information which may be important in a college/university environment, including any medication, whether there are side effects of the prescribed medication, and whether there are crisis episodes associated with the disability.
- Substantial limitations to the student's major life activities.
- Suggestions of reasonable accommodations that might be appropriate at the postsecondary level are encouraged.
- These recommendations should be supported by the diagnosis. Please note that disability services will make the final determination of eligibility for accommodations.
- The name, address, phone number, title, and credentials of the specialist.
https://ask.salemstate.edu/kb/physical-disabilities-documentation-guidelines
Here at =mc we encourage every participant we meet on our programmes to get in touch if they have specific issues they want to follow up on. From this we hear some common problems, issues, challenges, and worries. In this regular feature we share some of those challenges, and our advice for dealing with them.

This time, the issue comes from a Supporter Care Manager in a mid-sized UK mental health charity. The =mc consultants offer advice on a difficult issue which you may well identify with: handling performance when someone is having a tough time.

Hello =mc. I really hope you can help me. I work in a mental health charity. I manage a small team in Supporter Care and we are very mindful of the mental health challenges in our work. However, as a small team we also rely on each other to get the job done. If any one person is off work, for whatever reason, it has a huge impact. Managing holidays is a bit of a nightmare, but we deal with it.

The challenge I’m having right now is that one of my team members is not really pulling their weight. They just don’t churn out the same level of output as the others, and I’m constantly finding errors in their work. This means I have to supervise them much more closely than the others, and re-do lots of their work, which is a huge drain on my time. They also have social anxiety disorder, which impacts how they work with the rest of the team.

I’m nervous, however, about how best to deal with this, as they are having a horrible time personally. They’ve had a medical issue, and the treatment has been pretty rough. The pandemic has made it a bit more difficult, as they basically have to go through hospital appointments, operations, and recovery alone. So, on the one hand I really want to help them, but on the other hand the team are under pressure as a result. What am I supposed to do?

Charlie

Dear Supporter Care Manager, I feel for you.
This is a very tricky situation, and handling performance issues with care is a real skill, especially in these circumstances. Management is so often a balancing act between delivering strategic outcomes and the emotional labour that goes into supporting people to do their best work. There is plenty of nuance here that we don’t really know, so if you want to chat it through in more detail, please call us.

In the meantime, it would help to step back a bit and reflect on what you have done so far. Have you been clear in your expectations around the output and standard of work you are looking for? Constantly correcting errors without giving this person feedback on what you expect them to do differently is going to create dependency on you. It may be difficult, but if you don’t set boundaries, you will be in this position for a long time to come. Start with a common error, and see if you can help them to learn in the first instance. Build from there. Show them what good looks like, and create a plan of how to get there.

Laura

Hi there Supporter Care Manager, thanks for getting in touch. Charlie is absolutely right: you need to set standards clearly for everyone in the team, including this person. It may be that you also need to consider their specific needs when you set a learning plan, and consider what is a reasonable expectation for them at this time.

I recommend taking a person-centred approach. Tell them where the problems are and the impact this has, and ask for their input. Be direct, but also show that you want to help. What do they think would help them get to an error-free result? What level of output can they reasonably deliver at this time, given everything that is going on? What are the coping and support mechanisms they are using, or have used before, that could help now? How could work support or enable those? Consider things like reduced hours, flexible hours, and time out for appointments with their counsellor or therapy teams if that’s useful.
It also sounds like you may have a resourcing issue. In our sector resources are always under pressure, but we have a duty of care to our people to ensure they can work safely. If holidays are a problem, then there is no flexibility for the rest of life’s challenges. When this happens, you need to make a change: either bring in more resources (temporary or otherwise) or, if that isn’t possible, review workstreams and see what can be reduced. I’m not suggesting you chuck money at the problem; there may not be any spare cash in any organisation right now. However, could you be more creative in how you deliver work, or push back on less urgent/important areas for now? Building in some flex or slack will help take the pressure off the whole team.

Philly

Dear Manager, I too have been in this situation. A really powerful tool I have found useful is coaching or mentoring by someone outside of the team. If your colleague is up for it, talking to another person will give them a safe space in which they can explore ideas about how to balance work and health, how to get the help they need, and how they can address the challenges they have in their work. This will be a confidential process and has the benefit of putting the power to do things differently back into their hands. It saves you time, of course, and also encourages this person to find their own solutions.

I wonder if they may need some training as well, on the outputs you refer to. Do you have an experienced team member who could hold some one-to-one sessions to show this person their own techniques and get them up to speed? That way you can be the manager, setting out expectations and monitoring results. The colleague can be the mentor, showing and encouraging learning. And a coach could be a confidante, someone to talk things through with. You may find after a short time they are more confident, and improvements come through.
Yvette

Hello Manager, I agree with everything said already, and see this as a staged approach: start with feedback, ask them what they need, review resources, and offer coaching and training support. Then keep checking how it is going and what else could help. I also hope you are getting some support for yourself with this. Please be assured, you are not the only manager who will have to support people whilst also managing a busy workload. You have done the right thing in asking for help; trying to go it alone on this one will be stressful. Laura has suggested getting some additional resources if you can. Another useful resource could be other managers or your HR team. Find out what help your organisation can provide for you and what other people have done in this situation that might help you; share experiences and talk it through. As a mental health charity, I'm sure you will have plenty of guides and experts who you can discuss this with in confidence. You may even have an Employee Assistance Programme or Occupational Health service, both of which can give you practical advice on how to support someone with mental and physical health challenges. See if you have a Capability Policy or similar. Whilst this is a formal process, don't think of it as a negative one. It will help you identify specific actions you can take to create an improvement plan for your colleague. You don't need to use it now, but you can consider how you might use it in the future if things don't improve with the earlier interventions. I really do wish you all the best with this; your colleague is having a tough time, and every bit of help for them will be important. Equally important is that you get support too. If you'd like to explore ways of handling situations like this, contact us online or call 020 7978 1516 to discuss similar challenges and how we might be able to help.
Finally, if you’re facing a challenge you’d like some advice on in the next issue of the Safe Space, email us on [email protected]. While we can’t promise to publish all the requests we receive, we will offer advice by email as a minimum.
https://www.managementcentre.co.uk/blog/safe-space-8-poor-performance-and-personal-problems-what-should-i-do/
A record 658 candidates submitted their candidacies this week ahead of the May 30 parliamentary election, many of them banking on popular discontent with the political establishment and traditional parties in a bid to win one of the legislature's 56 seats. Of those, only seven are independent; the rest belong to 15 parties and formations, including hunters and animal welfare activists. It is the biggest number of parties, groups and candidates ever to contest parliamentary elections: this year's 658 candidates are up from 488 in 2016.

Observers consider it reasonable, when big parties go through a crisis, for small formations and independents to try to capitalise, believing they have something different to offer, although most of them do not. Some are opportunistic formations seeking to take advantage of people's discontent with traditional parties, which have been tarnished by corruption to varying degrees. "In most cases, the formation of these groups is mostly characterised by spontaneous or sentimental motives, overlooking the needs and requirements that could warrant a lasting formation that can successfully contest elections," said political analyst Christophoros Christophorou. A sustainable scheme requires long-term effort, huge energy reserves, plenty of human resources and money, along with a communication and promotional campaign. "Most parties are critically short on these," he said.

While nothing has come close to this year's number of new alliances and parties, Christophorou said the emergence of new parties is not a new phenomenon in the island's political history. Back in the early 1940s, when Akel was founded, another party, the conservative KEK, followed, along with numerous politically affiliated trade unions. And in February 1970, four new parties emerged over two to three days, after Takis Evdokas had founded the pro-Enosis Democratic National Party (DEK) in 1969.
Diko defectors founded Nedipa in 1980, led by Alecos Michaelides, who later became foreign minister, and the Union of the Centre (EK), led by Tassos Papadopoulos. Both unsuccessfully contested the 1981 elections, together with the Pancyprian Renewal Party. Former president George Vasiliou also formed his own party, the Movement of Free Democrats, after losing the 1993 presidential election, while the adoption of the proportional electoral system encouraged the formation of more parties in 1996: the Green Party and New Horizons.

The political scene started to change, however, as Cyprus moved into the 21st century. "People are becoming more demanding; trust in parties declined sharply and received a big blow because of the stance leaderships adopted in the Annan Plan referendum of April 2004," Christophorou said. Disagreements within Disy led to the foundation of four new parties: Evrodi, Evroko, the Movement of Free Citizens, and a formation that successfully contested the first Cyprus seats in the European Parliament in 2004. Other groups have emerged since 2000; in total, 14 have contested elections from 2001 to 2016, most of them unsuccessfully.

Christophorou said mistrust in traditional parties has many causes, in particular their failure to produce proposals and offer a vision for the future. "Phenomena of corruption have worsened the situation," Christophorou said, with the public blaming traditional parties. He said it appeared that a section of voters who felt alienated were seeking political shelter: a new party that could cover them ideologically and respond to their demands. There is a party created by hunters, for example, an older one formed by animal activists, and one formed by Varosha residents who are disappointed by the political leadership's handling of the Cyprus problem. While increased interest in participating in public affairs is positive, there are doubts over the new groups' chances of success.
"Appearing on the eve of elections offers absolutely no guarantee of success," Christophorou said. Most of the 14 parties that appeared between 2000 and 2016 have since disappeared. On the other hand, formations of people getting together to contest the election without clear ideological or other bonds may succeed, but nothing guarantees they will last. To survive, any organisation needs to convince people of its raison d'etre: that its existence serves something and responds to society's needs. Small parties have a better chance of winning a seat or seats today than ten years ago, though this is curbed by the rise of the entry threshold to 3.6 per cent in 2015; sustaining an organisation, however, requires much more than the will to exist, Christophorou said.

Nektarios Partasides, the chairman of the sociologists' association, says the political system is going through a deep crisis in which substance has been lost and politics has become a spectacle. Parties have become public relations companies using catchy slogans that are "no different from marketing ads", and candidates on billboards resemble the glamour photos hosted by lifestyle magazines, he told the Sunday Mail. There is no investment in long-term planning and objectives, and people will not hear from candidates how they plan on fixing the economy, tackling geostrategic issues, and resolving problems in the education system, if elected. "We don't see this. We see slogans. There is no substance," Partasides said. It has reached the point where all that counts is a candidate's potential to be elected, not the quality of their personal characteristics. Even all the changes, the creation of smaller parties that disappear after a while, mergers and defections, ultimately aim at making a party sellable to voters. People, on the other hand, easily decide or agree to stand as candidates because they do not feel the weight of the role they have to play; there is no sense of accountability or responsibility.
“It shows an inability to establish general political principles to serve general good,” Partasides said. He says involving people in decision making could make things better. Living in the digital age makes it easier for people to participate.
https://cyprus-mail.com/2021/05/16/the-election-of-discontent/
What Is the Number One Cause of Cancer? Cancer is actually a complex group of individual diseases, each with different causes and effects. At the core of causing cancer are changes that occur in the genetic material within cells, the DNA. These changes, or mutations, lead to abnormal, uncontrolled cell proliferation. Cancer mutations can be caused by a variety of situations and exposures. Causes of cancer include cancer-causing chemicals, called carcinogens, such as tobacco smoke, chemotherapies, asbestos, benzene, vinyl chloride, and many more, as well as environmental factors, such as ultraviolet light, pollution, and hepatitis B. For many forms of cancer, genetic inheritance plays a significant role in causing the disease because of inherited gene mutations. For more information, read our full medical article about cancer signs, symptoms, treatment, and prognosis.
https://www.medicinenet.com/what_is_the_number_one_cause_of_cancer/ask.htm
There are several challenges in imparting new information and inculcating new practices in resource-constrained and rural communities, frequently in the domain of global health, but also in the realms of education, agriculture, financial literacy, and repair, among others. Done right, using video-based learning to educate populations can be scalable and encourage local participation at the same time.

Apps to connect rural students with educational resources

Turn the Bus is a 501(c)(3) US nonprofit organization located in the Greater Seattle area, focused on delivering quality education via smartphones to low-income communities in India. We believe that we can leverage digital technology to bridge the gap between high-income and low-income populations by delivering education that enables underprivileged people in the remotest parts of the world to lift themselves out of poverty.
https://x4i.org/social-challenge-idea/quality-education+reduced-inequality/exploring-video-based-education-options-rural-communities/c-mhfle7szi
TECHNICAL FIELD

The present invention relates generally to cleaning surfaces with a stream of particles, and relates more specifically to a method and system that effectively propels particles comprising solid carbon dioxide ("CO₂") pellets at high speeds onto a coated surface to remove the coating in a rapid but environmentally benign manner.

BACKGROUND OF THE INVENTION

Many types of articles require cleaning or removal of coatings. Removing hardened coatings such as paint from a surface can be difficult, time consuming, and costly. Moreover, certain removal methods generate significant amounts of environmentally hazardous wastes. One example is the removal of paint from the surface of aircraft. Military aircraft especially have need for periodic removal of worn or damaged paint. Military aircraft are painted for purposes of camouflage, structural integrity, reduction of radar reflectivity, etc. Once an aircraft is painted, it must be periodically stripped and repainted as new paints become available or as old paint coats become worn or damaged. Known paint stripping methods are time consuming and expensive. Because known paint removal processes take so much time, the aircraft is either out of commission for a lengthy period of time or must be flown with a compromised coating.

Three methods are currently in use for removing paint from the surface of aircraft. A first method uses chemical solvents, typically petroleum-based, to remove the paint from the aircraft surface. The chemical method produces approximately 17,000 pounds of hazardous liquid waste byproducts for a military fighter aircraft such as an F-15. Use of chemical solvents requires protective clothing and respirators during the stripping process, and procedures for collecting and storing the hazardous waste byproducts. Such measures significantly increase the cost of the process and are a threat to the quality of the environment.
A second known paint removal method involves blasting with media such as plastic pellets. The plastic pellets are blasted with high pressure air onto the surface to be cleaned. The impact of the pellets causes the physical removal of the paint by abrasion. As with chemical solvents, use of plastic pellet blasting requires the use of protective clothing and respirators. This process produces significant amounts of waste in the form of plastic pellets mixed in with paint chips. The collection and disposal of the plastic pellets and paint chips is time consuming.

A third paint removal method physically removes paint from the surface of an aircraft using media comprising particles of solid CO₂ instead of plastic. The carbon dioxide or "dry ice" is formed into solid pellets and propelled at high speed onto the surface of the aircraft. The impact of the dry ice pellets chips the paint off the aircraft surface. At ambient temperatures and pressures, the CO₂ pellets evaporate into the atmosphere through a process known as sublimation. After the CO₂ pellets sublimate, the only remaining byproducts of this paint removal process are dried paint chips. A CO₂ pellet removal system produces about 240 pounds of dried paint chips for an F-15 aircraft. In comparison to the chemical stripping process, the CO₂ process produces 70 times less waste. Advantageously, the dried paint chips are easier to gather up and confine, and the process does not produce environmentally hazardous byproducts that must be separately handled and disposed of. Moreover, methods of paint removal involving use of CO₂ pellets are considered environmentally benign since the CO₂ is usually obtained as a byproduct of other industrial processes, and no new CO₂ is introduced into the atmosphere.

Present CO₂ paint removal systems typically include a liquid CO₂ storage tank, a pelletizer that converts CO₂ from liquid into solid pellet form, a feeder that feeds the CO₂ pellets into a stream of high pressure air, a conduit or hose connected to the feeder, and a nozzle. One example of a paint removal system and method that employs CO₂ pellets as particulate media is described in U.S. Pat. No. 5,109,636 to Lloyd et al., entitled "Particle Blast Cleaning Apparatus and Method".

Known CO₂ paint removal systems have certain drawbacks. For example, most known systems are inefficient since only a small portion of the CO₂ pellets produced by the pelletizer are ultimately ejected from the nozzle. Some of the inefficiency is believed due to the temperature difference between the nozzle and the point at which the pellets are introduced into the pressurized air stream, which can be several degrees. Moreover, the temperature of the pressurized air stream into which the pellets are introduced is typically many degrees warmer than the temperature of the CO₂ pellets. The present inventors have discovered that only about 18% of the pellets actually produced by the pelletizer in known prior art systems ultimately reach the aircraft surface. It is believed that a substantial portion of the CO₂ pellets sublimate prior to exiting the blast nozzle. The number of pellets that actually impact the surface to be cleaned affects the overall efficiency of the system and the rate of cleaning.

Known CO₂ paint removal systems also cause undesirable backside paint removal under certain conditions. Aircraft surfaces having a thickness of 0.040 inches or less sometimes experience partial removal of paint from the opposite or "backside" of the surface.
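As a quick arithmetic check of the waste figures quoted above (all numbers are taken directly from the text; the snippet only verifies the ratio):

```python
# Waste figures quoted for stripping one F-15
chemical_waste_lb = 17_000  # hazardous liquid waste, chemical solvent method
co2_waste_lb = 240          # dried paint chips, CO2 pellet method

ratio = chemical_waste_lb / co2_waste_lb
print(f"chemical process produces ~{ratio:.0f}x the waste")
```

17,000 lb against 240 lb works out to roughly 71 to 1, consistent with the "70 times less" figure claimed.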
The problem seems to occur most often when larger size pellets are used (greater than 0.080 inches in diameter), which apparently impact the surface with such force that a localized deflection is created that passes through the metal to the back side of the surface, where it contributes to removal of paint from that surface. Accordingly, there is a need for an improved CO₂ pellet cleaning system that is more efficient in terms of pellet survival and provides better cleaning ability than prior art systems. In this regard, there is a need for an improved CO₂ pellet cleaning method and system that reduces sublimation of the CO₂ pellets before they are propelled from the nozzle.

SUMMARY OF THE INVENTION

As will be seen, the present invention overcomes these and other disadvantages associated with prior art particle blast cleaning removal methods and systems. Stated generally, the present invention includes a means for providing CO₂ particles as media for blast cleaning, a delivery system for delivering the particles against a surface to be cleaned, and a means for controlling the temperature in the delivery system so as to prolong the life of the CO₂ particles during passage through the delivery system. More particularly described, the present invention comprises means for forming liquid CO₂ into solid pellet form, a CO₂ pellet collecting means, means for introducing the CO₂ pellets into a stream of high pressure air, a delivery conduit, a nozzle, and a means for controlling the temperature of the high pressure air within the system at a predetermined level, preferably at or below the liquid phase temperature of CO₂ pellets. The delivery conduit connects the introducing means to the nozzle and enables the CO₂ pellets and high pressure air to travel therethrough.
The controlling means maintains the temperature of the high pressure air within the system to eliminate melting and sublimation within the system and deliver substantially all of the CO₂ pellets to the nozzle. Because virtually all of the CO₂ pellets introduced into the delivery system survive the passage through the delivery system with the use of the present invention, increased efficiencies are possible in terms of surface area cleaned per unit time.

Another aspect of the present invention involves an improved method of cleaning a surface utilizing CO₂ pellets. The method includes the steps of forming liquid CO₂ into solid pellet form, introducing the CO₂ pellets into a stream of high pressure air, causing the pellets to travel through a conduit of a predetermined length, controlling the temperature of the high pressure air within the delivery system at or below a predetermined temperature, and propelling the CO₂ pellets from a nozzle so that the pellets impact a surface to be cleaned, which impact cleans the surface. The step of controlling the temperature in the system results in substantially improved survival of the pellets as they travel from the pelletizer to the nozzle outlet. Using a CO₂ pellet cleaning system constructed in accordance with the invention, substantially all of the pellets actually produced in the pelletizer survive the travel through the conduit and are propelled through the nozzle and against a surface to be cleaned. This results from cooling and maintaining the temperature within the system below the sublimation point of CO₂ (i.e., below -109° F.). This increase in efficiency has resulted in an eight-fold decrease in the strip time of a typical aircraft, for example, a decrease from 136 hours to only 17 hours.
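The efficiency figures above can be sanity-checked with simple arithmetic (all inputs from the text): an 18% survival rate means a prior-art pelletizer must produce roughly 5.6 pounds of pellets for every pound that reaches the surface, and 136 hours down to 17 hours is exactly the eight-fold decrease claimed:

```python
# Pellet survival in prior art systems (from the text)
survival_fraction = 0.18
pelletizer_overhead = 1 / survival_fraction  # lb produced per lb delivered

# Strip-time improvement (from the text)
prior_hours, improved_hours = 136, 17
speedup = prior_hours / improved_hours

print(f"{pelletizer_overhead:.1f} lb produced per lb delivered; "
      f"{speedup:.0f}x faster strip")
```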
As a further example, use of the present invention for removing paint from F-15 fighter planes could enable savings of $3 million per year over prior art chemical de-painting processes, and $2 million per year over existing CO₂ paint removal systems.

Accordingly, it is an object of the present invention to provide an improved CO₂ particle cleaning method and system. It is a further object of the present invention to provide an improved CO₂ pellet cleaning method and system that substantially reduces melting and sublimation of CO₂ pellets within the system, prior to ejection of the pellets from a nozzle against a surface to be cleaned. It is a further object of the present invention to provide an improved CO₂ pellet cleaning method and system having increased cleaning efficiency due to increased rates of pellet ejection from cleaning nozzles. It is yet a further object of the present invention to provide an improved CO₂ pellet cleaning method and system that maintains the benefits of known CO₂ cleaning systems with minimal, non-hazardous waste byproducts. Other objects, features and advantages of the present invention will become apparent upon reading the following specification when taken in conjunction with the drawings and the appended claims.

BRIEF DESCRIPTION OF THE DRAWING FIGURES

FIG. 1 is an isometric view of a CO₂ particle cleaning system as applied to the removal of paint from a painted military aircraft.
FIG. 2 is a schematic drawing of a CO₂ particle cleaning system constructed in accordance with the preferred embodiment of the present invention.
FIG. 3 is an isometric view of the CO₂ particle cleaning system components.
FIG. 4 is a cross sectional view of the hopper and feeder in operation.
FIG. 5 is a partial cross sectional view of the hopper and feeder taken across the line 5--5 in FIG. 4.
FIG. 6 is an isometric view of the high pressure air nozzle mounted on a robotic arm.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring now to the drawings, in which like numerals indicate like elements throughout the several views, FIG. 1 illustrates a plurality of identically constructed CO₂ particle media cleaning systems 10a, 10b, 10c constructed in accordance with the preferred embodiment of the present invention, as applied to the cleaning of an article, in this case a military aircraft 11. Each system 10a, 10b, 10c is shown used in conjunction with a robot 17a, 17b, 17c that is configured to support and move a delivery system comprising a robot arm 44a, 44b, 44c supporting a nozzle 38a, 38b, 38c along a path adjacent to the aircraft 11, to remove paint from the aircraft in an automated fashion. Each nozzle 38a, 38b, 38c receives CO₂ pellets and pressurized air via a feed hose 36a, 36b, 36c connected to its respective system 10. Although a plurality of systems 10a, 10b, 10c are shown, a single system 10 will be described. Moreover, while the present invention is shown in conjunction with the cleaning of an aircraft in an automated fashion, it will be understood that the invention has applicability to other types of article cleaning, automated or manual, and is not limited to aircraft paint stripping.

For an application such as aircraft paint removal, a preferred configuration is to provide one or more robots such as the robots 17a, b, c positioned to assume right side, left side, and rear positions, respectively, adjacent the aircraft. Each robot 17 is equipped with at least one selectively positionable robot arm 44 that movably supports at least one nozzle 38 and a feed hose 36 connected to a cleaning system 10. Programming allows each of the side robots to cover approximately 40% of the skin surface. The rear robot may cover the remaining 20%. Although multiple robots are shown, it will be appreciated that one or more robots may be programmed to cover some or all of the aircraft, as desired.
A schematic diagram of a typical preferred cleaning system 10 is shown in FIG. 2, and an isometric view of the components within the system is shown in FIG. 3. Referring first to FIG. 2, the preferred system 10 includes a liquid CO₂ storage tank 12. The liquid CO₂ storage tank supplies a CO₂ pelletizer 14 with liquid CO₂ through a liquid CO₂ supply line 13. The pelletizer 14 converts liquid CO₂ into solid CO₂ media comprising pellets 16. The preferred CO₂ pelletizer 14 is a model 65-200, manufactured by Cold Jet, Inc., of Milford, Ohio, having a manufacturing capacity of 600 pounds of pellets per hour. The pellets are preferably cylindrical in shape and preferably about 0.040 inches in diameter. As will be understood by those skilled in the art, pellets of the preferred size are formed by use of a die having a plurality of openings sized at 0.040 inches. Carbon dioxide "snow" is forced through the die to form pellets of the indicated diameter, and cut into pellets having a length of about 0.25 inches internally of the preferred pelletizer. Further details concerning pellet formation may be obtained from the manufacturer of the preferred pelletizer. When using 0.040 inch diameter pellets, the problem of backside paint removal is reduced because the pellets are small enough that, upon impact, they do not strike the surface with sufficient force to damage or remove paint on the opposite side of the surface.

After the pellets 16 are formed by the pelletizer 14, they are fed into a hopper 18, where a supply of pellets accumulates. The hopper 18 provides a continuous and steady supply of pellets to a high pressure rotary feeder 20 that is operative to inject the pellets into a pressurized air stream, as will be described. Air pressurized to about 300 pounds per square inch ("psi") is supplied to the rotary feeder 20 by an air compressor 22.
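For a sense of scale, the pelletizer's 600 pound-per-hour capacity and the 0.040 × 0.25 inch pellet geometry imply a very high pellet count. A rough estimate follows; note the solid-CO₂ density is an assumed textbook value (published figures vary roughly 87 to 100 lb/ft³), not a number from the patent:

```python
import math

PELLET_DIA_IN = 0.040           # pellet diameter, from the text
PELLET_LEN_IN = 0.25            # pellet length, from the text
FEED_RATE_LB_HR = 600           # pelletizer capacity, from the text
DRY_ICE_DENSITY_LB_FT3 = 97.5   # assumed density of solid CO2 (not from the patent)

# Mass of one cylindrical pellet
volume_in3 = math.pi * (PELLET_DIA_IN / 2) ** 2 * PELLET_LEN_IN
mass_lb = volume_in3 * DRY_ICE_DENSITY_LB_FT3 / 12**3  # 1728 cubic inches per cubic foot

pellets_per_sec = FEED_RATE_LB_HR / mass_lb / 3600
print(f"~{pellets_per_sec:,.0f} pellets per second")
```

Under these assumptions the pelletizer produces on the order of 9,000 pellets every second, which helps explain why even modest sublimation losses along the delivery path cost so much throughput.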
As will be known to those skilled in the art, air compressors typically create an increase in the temperature of air from ambient as a result of the compression process. Typically, air compressors of the nature suitable for use in the present invention will provide pressurized air at about 100° F., depending upon factors such as ambient humidity and temperature. An air dryer 24 receives the high pressure air from the air compressor 22 and cools the pressurized air to no more than about 50° F. Preferably, the discharge dew point of the air discharged from the dryer 24 is -40° F. The cooler, high pressure air at approximately 50° F. is then provided from the air dryer 24 into a mixing chamber 26.

The mixing chamber 26 is a piping system including an expansion and contraction, valving, pressure gauges, and automated controls. As illustrated in FIG. 3, the preferred mixing chamber 26 is made of 1-1/4 inch diameter schedule 40, 316 stainless steel pipe, with a 45° leg 27 for introducing the liquid N₂. The leg 27 is welded to a straight section of the mixing chamber 26, forming a 45° lateral. A 1 inch needle valve 31 is attached to the straight section prior to attachment of the leg 27 to control the warm or compressed air volume. A 1 inch cryogenic valve 48 is attached to the leg 27. Warm air enters through the needle valve 31 and mixes with liquid N₂ entering through the valve 48 in leg 27. A pressure gauge (not illustrated) with feedback to the air compressor is also provided to monitor pressure in the system, to ensure that the preferred pressure of 300 psig is maintained.

Referring again to FIG. 2, the system 10 further includes a supply tank 28 for storing a supply of liquid nitrogen. The liquid nitrogen tank preferably stores at least 9000 gallons of liquid N₂, at a working pressure of 400 psig, with tank pressure regulation.
The supply tank 28 is connected to the mixing chamber 26 by means of a vacuum jacketed hose 29, which is connected to a cryogenic globe control valve 48. The liquid nitrogen is injected at the mixing chamber 26 into the air fed to the rotary feeder to control pellet sublimation, as will be described. The injection of liquid nitrogen supercools the high pressure air to at least -120° F., which is below the sublimation point of solid carbon dioxide (-109.33° F.). From the mixing chamber 26, the cooled high pressure air is provided via an air feed line 25 to the rotary feeder 20. At the rotary feeder, the pellets are injected into the supercooled air fed in via the line 25. From the rotary feeder, the combined pressurized air and pellets are fed into a delivery hose 36, at the end of which is connected a nozzle 38. The pellets are ejected from the nozzle 38 and directed against a surface to be cleaned.

In accordance with another aspect of the invention, the temperature of the system is monitored at several locations to generate feedback control signals for controlling the injection of liquid N₂ into the pressurized air stream. These control signals are utilized to control the setting of the control valve 48 and thus the amount of liquid N₂ that is introduced into the air line 25. Preferably, at least two temperature sensors 46a, 46b are employed to measure and control the temperature in the delivery system, with a first sensor 46a mounted to monitor the temperature at the point of connection of the delivery hose 36 to the nozzle 38, and a second sensor 46b mounted to read the temperature in the rotary feeder at the point of ejection of the air and pellets into the delivery hose 36. Each sensor 46 provides a signal proportional to the temperature, for feedback control.
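The passage above states that injecting liquid nitrogen supercools the roughly 50° F. dryer air to at least -120° F. A back-of-the-envelope energy balance suggests how much liquid nitrogen that takes per pound of air. This is a sketch only: the specific-heat and latent-heat values are textbook approximations, not figures from the patent, and real losses would push the requirement higher:

```python
# Approximate property values (assumptions, not from the patent text)
CP_AIR = 0.24      # BTU/(lb*degF), specific heat of air
CP_N2_GAS = 0.25   # BTU/(lb*degF), specific heat of nitrogen gas
LN2_LATENT = 85.6  # BTU/lb, latent heat of vaporization of nitrogen

T_AIR_IN = 50.0    # degF, air leaving the dryer (from the text)
T_TARGET = -120.0  # degF, target supercooled temperature (from the text)
T_LN2 = -320.0     # degF, approximate boiling point of liquid nitrogen

heat_to_remove = CP_AIR * (T_AIR_IN - T_TARGET)                # BTU per lb of air
heat_per_lb_ln2 = LN2_LATENT + CP_N2_GAS * (T_TARGET - T_LN2)  # vaporize, then warm to target
ln2_ratio = heat_to_remove / heat_per_lb_ln2
print(f"~{ln2_ratio:.2f} lb of liquid N2 per lb of air")
```

Under these assumptions, cooling each pound of compressed air consumes roughly 0.3 lb of liquid nitrogen, which is consistent with the patent's emphasis on a large (9000 gallon) supply tank and on avoiding excessive nitrogen use.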
Signals from the sensors 46a, 46b, denominated CONTROL SIGNAL 1 and CONTROL SIGNAL 2, are provided to a temperature controller 60, which is responsive to the control signals to provide a control signal on line 61 denominated VALVE CONTROL. This signal controls the opening and closing of the valve 48 to admit more liquid nitrogen if the temperature is higher (warmer) than a predetermined first set point, or less liquid nitrogen if the temperature is lower (colder) than a predetermined second set point. The first and second set points define a range of temperatures within which the temperature is preferably maintained, to prolong pellet life.

As shown in more detail in FIGS. 4 and 5, the hopper 18 includes a body 19 and a chute 21. The chute 21 is connected to a feeder 20. The pellets 16 are collected in the body 19 of the hopper 18 and are forced by gravity downwardly into the chute 21 and thence into the feeder 20. The preferred hopper is a model 65-200 series 30 feeder assembly manufactured by Cold Jet, Inc. of Loveland, Ohio. Details of the preferred hopper are available in the literature supplied by the manufacturer. The preferred hopper accommodates up to 15 pounds of 0.040 inch pellets and provides a continuous flow to the feeder 20 at a maximum rate of at least 750 pounds per hour. As will be understood by those skilled in the art, properly sized fill, discharge, and purge openings are provided in the preferred hopper, and media level is sensed with a sensing means (not shown) to ensure continuous fill from the pelletizer. As seen in FIG. 4, the preferred hopper 18 further includes a stir rod 50 that extends into the center of the body 19, for agitating the pellets contained therein and reducing occurrences of media bridges with their resultant voids. A pneumatic vibrator 52 is mounted to the exterior of the hopper 18.
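The two-set-point valve logic described above for temperature controller 60 is, in software terms, bang-bang control with hysteresis. A minimal sketch follows, assuming set points of -120° F. and -150° F. (the range quoted elsewhere in the text for sensor 46a); the function name and signature are illustrative only, not from the patent:

```python
def valve_command(temp_f: float, currently_open: bool,
                  open_above: float = -120.0,
                  close_below: float = -150.0) -> bool:
    """Hysteresis (bang-bang) control of a liquid-nitrogen valve.

    Opens when the sensed temperature drifts warmer than the first set
    point, closes when it drops colder than the second, and otherwise
    holds its last state so the valve does not chatter.
    """
    if temp_f > open_above:
        return True        # too warm: admit more liquid nitrogen
    if temp_f < close_below:
        return False       # too cold: stop wasting liquid nitrogen
    return currently_open  # inside the dead band: hold last state
```

The dead band between the two set points is what keeps the valve from cycling rapidly around a single threshold while still holding the line temperature in the range that prolongs pellet life.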
The vibrator preferably has a variable frequency of between about 30 Hz and about 140 Hz, with an applied force of about 20 lbs., which is considered sufficient to agitate the 0.040 inch pellets for movement without excessive pellet damage. When the vibrator 52 is activated, it causes the stir rod 50 to vibrate and reduces the tendency of the CO₂ pellets to stick together. The rod 50 is slowly rotated by a motor (not shown) to further enhance the free flow of pellets. The pellets are also prevented from sticking together by slightly pressurizing the hopper with carbon dioxide in gaseous form. The gaseous carbon dioxide also reduces moisture in the hopper by preventing the intrusion of ambient air, which might contribute to undesirable pellet adhesion.

Still referring to FIG. 4, the rotary feeder 20 comprises means for introducing the pellets into a stream of pressurized air. The feeder includes a housing 23 for receiving the chute 21 of the hopper 18. The housing defines a passageway 54 that leads to a rotor lock 34. The rotor lock 34 includes a plurality of TEFLON®-lined cavities 32 that are recessed from the perimeter of the rotor lock. The rotor lock is rotated at a predetermined but controllable rate by a motor (not illustrated) to control the rate of pellet injection into the pressurized air. As the rotor lock 34 rotates, each of a plurality of cavities 32 successively moves into position atop the rotor lock below the chute 21 and fills with pellets 16. The pellets 16 within the cavities 32 rotate clockwise until the cavity reaches a pellet discharge point at an outlet 39, positioned underneath the rotor lock 34. At the outlet 39, the pellets drop from the cavity 32 into a passageway 37 below the outlet 39. The passageway 37 connects the pellet discharge outlet 39 to a channel 35 and feeds the pellets into an air stream passing through the channel. The channel 35 includes a threaded connector 41 that receives a mating connector on the air line 25.
The channel 35 leads to an outlet connector 43 that is connected to the delivery hose 36. As shown in FIGS. 4 and 5, an air jet 45 is positioned within the passageway 37 to direct pressurized air into a cavity 32 of the rotor lock 34 as the cavity moves into position to empty its contents into the passageway. The pressurized air is continuously directed into the cavity to help to empty the cavity of pellets and direct the flow of pellets into the channel 35. A T-connector or tap 33 is provided on the air line 25 to provide a source of air for the air jet 45. It should be understood that the feeder 20 is preferably constructed with components that withstand the extreme cold of the liquid nitrogen. For example, conventional CO.sub.2 feeders may experience failure of rubber "O"-ring type pressure seals because of the cold. Seal materials capable of withstanding the liquid N.sub.2 are preferred. Since seal geometry, mating surfaces, and surface finish contribute to seal life, attention should be paid to these aspects of the feeder as well. Referring now to FIG. 6, the delivery hose 36 directs the flow of pressurized air, including the CO.sub.2 pellets and injected liquid nitrogen, to the high pressure air nozzle 38. The delivery hose 36, which also should be capable of withstanding extreme cold, is preferably a corrugated flex hose such as a type 1502-20-3000 manufactured by Pacific Flex, Inc., Rancho Cucamonga, Calif. Preferably, hose length should be maintained at less than 35 feet so that the temperature remains sufficiently low during the travel of the pellets from the feeder 20 to the nozzle 38. The preferred nozzle 38 comprises a wide, flat body 40 having an elongate, rectangular outlet approximately 0.140 inch thick. The nozzle body 40 is preferably about 5 inches wide and made of aluminum. The nozzle may be constructed as described in U.S. Pat. No.
5,050,805 to provide a supersonic output flow, or may be a four inch wide model 2E0073 manufactured by Cold Jet, Inc. Internally of the nozzle 38, a converging-diverging shape is used to accelerate the gas. Preferably, the nozzle delivers 0.040 inch pellets at a maximum rate of 750 pounds per hour, with an average pellet speed at the target surface of 750 to 850 feet per second. A uniform pellet distribution is preferred to avoid local substrate damage. The nozzle preferably supports a minimum average paint removal rate of 0.48 square feet per minute. The nozzle 38 is preferably mounted on the arm 44 of a robot for automatically removing paint from the surface of an aircraft, as shown in FIGS. 1 and 6. The mechanics and control systems for moving the nozzle 38 along a path disposed adjacent to the surface to be cleaned are not considered a part of the present invention. In order to control the temperature of the high pressure air throughout the system, temperature sensors 46 are provided at various locations in the system. Two sensors 46a, 46b are provided in the preferred embodiment, although other sensors can be provided as needed. As shown in FIG. 6, one sensor 46a is located at the point of connection of the delivery hose 36 to the nozzle 38. The first sensor 46a provides a control signal CONTROL SIGNAL 1 (FIG. 2), which varies in proportion to temperature variations between -120° F. and -150° F. It is preferred to maintain the temperature generally close to -120° F. to avoid excessive use of liquid nitrogen. As shown in FIG. 4, a second sensor 46b is provided after the outlet connector 43 leading from the channel 35 of the feeder 20. The second sensor 46b provides a control signal CONTROL SIGNAL 2 (FIG. 2), which varies in proportion to temperature variations between -110° F. and -120° F.
The temperature sensors 46 measure the temperature of the high pressure air and are utilized by the controller 60 to adjust the cryogenic valve 48 located between the liquid nitrogen storage tank 28 and the mixing chamber 26. The preferred temperature sensors 46 are both model 3856K57 "T" type thermocouples, available from McMaster-Carr, Atlanta, Ga. Details of the sensors are available in the literature supplied by the manufacturer. The sensors 46 provide a voltage that varies with temperature. As shown in FIG. 2, voltages from the sensors are provided as CONTROL SIGNAL 1 and CONTROL SIGNAL 2 to the controller 60. The preferred controller 60 is a programmable electronic temperature controller such as a Model CN-2010 manufactured by Omega Engineering, Inc., Stamford, Conn., an APC Seriplex System with thermocouple input, analog output, and on/off control, or the like. The controller is preferably programmed to provide a control signal, VALVE CONTROL, on line 61, that causes the valve 48 to open or close to maintain the temperature at or near the desired level. The preferred control valve 48 is a 1 inch cryogenic valve, model 49295K73 available from McMaster-Carr, or a Parker cryogenic analog control valve. Although automatic control of the valve 48 by an electronic controller 60 is preferred, it should be understood that the present invention also contemplates practice of a method for prolonging the life of the pellets by manually monitoring the temperature at the various locations, such as with a +100° to -320° F. cryogenic thermometer connected to the thermocouples 46, and manually adjusting the setting of the valve 48 to maintain the temperature within the operative range for maximum pellet life. Either control signal from sensors 46a, 46b can cause the valve to open to its maximum (open) position, for maximum inflow of liquid N.sub.2, when the temperature from its respective sensor reads a temperature at the maximum of the indicated range.
Likewise, either control signal from sensors 46a, 46b can cause the valve to close to its minimum (closed) position, for minimum inflow of liquid N.sub.2, when the temperature from its respective sensor reads a temperature at the minimum of the indicated range. Thus, when either sensor detects that the temperature being read is above the desired temperature, that is, the temperature at which pellet sublimation will occur, it will provide a control signal that opens the valve 48 to permit introduction of additional liquid N.sub.2. In this manner, the amount of N.sub.2 is controlled to minimize sublimation of pellets during their travel from the point of injection into the pressurized air, to the point of introduction into the nozzle 38. Preferred rates of introduction of liquid N.sub.2 in the preferred embodiment of the present invention range from about 2 gallons per minute (gpm) to a maximum of about 4 gpm. Usage of liquid N.sub.2 in this range has been found to provide satisfactory operation of the present invention, with virtually 100% survival of 0.040 inch pellets delivered from the feeder 20 to the nozzle 38 utilizing a 35 foot delivery hose 36. Regarding the nozzle 38, those skilled in the art will appreciate that present commercial blasting systems use one of two presently available types of nozzles--single hose and dual hose. The differences in these two approaches stem from the type of pellet delivery system used. A single hose nozzle receives pressurized gas and dry ice media at one location through a single hose. Hence, the pellets are introduced upstream of the nozzle. On the other hand, the dual hose nozzle receives high pressure gas through one hose and a second hose delivers the media at relatively low pressure. In this latter arrangement, the media is introduced at the nozzle.
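The two-sensor, two-set-point valve behavior described above can be sketched in code. This is a minimal illustrative sketch, not the patent's actual controller (the preferred embodiment uses an off-the-shelf programmable controller); the function name, the dictionary layout, and the assumption that the valve opening varies proportionally between the end points of each sensor's range are hypothetical.

```python
# Illustrative sketch of the two-set-point valve control: either sensor
# reading at the top (warm end) of its range drives the valve fully open
# for maximum liquid N2 inflow; at the bottom (cold end), fully closed.
# Set-point ranges in degrees F, taken from the text (sensor 46a at the
# nozzle, sensor 46b at the feeder outlet).
SET_POINTS = {
    "CONTROL_SIGNAL_1": (-150.0, -120.0),  # sensor 46a
    "CONTROL_SIGNAL_2": (-120.0, -110.0),  # sensor 46b
}

def valve_command(readings):
    """Return a valve opening in [0.0, 1.0] (closed .. fully open).

    Each reading is normalized to its sensor's range and clamped; the
    warmest sensor, relative to its own range, determines the command.
    """
    demand = 0.0
    for name, temp_f in readings.items():
        low, high = SET_POINTS[name]
        frac = (temp_f - low) / (high - low)  # 0 at cold end, 1 at warm end
        frac = max(0.0, min(1.0, frac))       # clamp to the valve's travel
        demand = max(demand, frac)            # most demanding sensor wins
    return demand

# Example: nozzle sensor warm (-121 F), feeder sensor mid-range (-115 F);
# the nozzle sensor dominates and the valve opens nearly fully.
cmd = valve_command({"CONTROL_SIGNAL_1": -121.0, "CONTROL_SIGNAL_2": -115.0})
```

The proportional interpolation between the end points is one plausible reading of "varies in proportion to temperature variations"; the patent equally contemplates simple on/off control at the set points.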
It will be understood that the present invention finds utility with both types of blasting systems, with the liquid N.sub.2 injection for a single hose system being described herein as the preferred embodiment. For a dual hose system, the present invention preferably involves temperature monitoring and injection of N.sub.2 for the pressurized air. It has been observed that single hose nozzle designs have difficulty operating continuously and reliably at high media rates (above about 350 pounds of CO.sub.2 pellets per hour) due to a tendency of the nozzle to clog and reduce the velocity of the pellets. However, with the injection of liquid nitrogen to cool the compressed air in accordance with the present invention, this problem is substantially eliminated. Since mixing liquid nitrogen with compressed air to lower the temperature allows a single hose nozzle design to operate using high media rates (above about 650 pounds of CO.sub.2 per hour), it is possible to increase the width of single hose nozzles to above the conventional three inches. Nozzles having increased outlet widths, for example 5 inches, allow an increase in paint removal rates by increasing the surface area cleaned per unit time. It has been further observed that increased media rates with increased width nozzles may under certain circumstances result in a slight decrease in the impact speed of the pellets. It may be that the increase in media flow rates increases the friction inside the nozzle, which in turn decreases the mean velocity of the gas, producing a lower pellet speed. Accordingly, increased nozzle width may require increased air pressures and other nozzle modifications.

Operation

When the CO.sub.2 pelletizer 14 is activated, it receives liquid CO.sub.2 from the liquid CO.sub.2 storage tank 12 through the CO.sub.2 supply line 13. Once the CO.sub.2 pelletizer 14 begins producing pellets 16, the pellets are transferred into the hopper 18.
The hopper 18 receives a volume of CO.sub.2 pellets 16 into the body 19. The vibrator 52, acting through the stir rod 50, shakes the pellets 16 and prevents them from sticking together as they pass through the hopper 18. The CO.sub.2 pellets pass from the hopper 18, through the chute 21, and into the cavities 32 of the rotor lock 34. Each cavity 32 receives a small volume of pellets from the chute 21, and delivers them into the passageway 37 of the feeder 20. The air jet 45 directs pressurized air into the cavity 32 at the pellet discharge point 39, which causes the cavity to empty itself of pellets 16. The pellets 16 travel down the passageway 37 and into the channel 35 of the feeder 20. The pellets 16 and high pressure air flow from the feeder 20, through the delivery hose 36, and into the high pressure air nozzle 38. The robot arm 44 orients the nozzle 38 in a particular direction. The nozzle 38 propels the pellets 16 onto a surface to be cleaned. The impact of the pellets 16 on the surface causes any coatings on the surface, such as paint or dirt, to separate from the surface and fall off. Once the pellets 16 are exposed to ambient temperature and pressure, they quickly sublimate, leaving only the paint particles as waste. During operation, the temperature sensors 46 continually measure the temperature of the high pressure air at various points, at the beginning and end of the delivery hose 36 in the preferred embodiment. (However, it will be understood that temperature monitoring and control can occur at various points in the CO.sub.2 pellet forming, accumulating, feeding, and delivery systems.) If the temperature at any of the sensors 46 exceeds -120° F., the controller 60 causes the valve 48 to open to allow additional liquid nitrogen to flow into the mixing chamber 26 and further cool the high pressure air.
Conversely, if the temperature sensors 46 measure a temperature less than -130° F., the controller 60 causes the valve 48 to decrease the amount of liquid nitrogen injected into the mixing chamber 26. The preferred range of temperature for the high pressure air in this system is between -130° F. and -120° F., which is below the sublimation point of carbon dioxide (about -109° F.), thereby prolonging the lives of the pellets as they travel along the delivery hose 36. It is further contemplated that the pellets 16 may be made from other elements or substances that are capable of sublimation, for example argon, and that other cryogenic materials, for example liquid helium, may be utilized for cooling. It is noted, however, that such other substances and cryogenic materials may not be as safe for the environment as CO.sub.2 and N.sub.2. It will be appreciated that the embodiment discussed above is the preferred embodiment, falling within the scope of the appended claims, and that various alternative embodiments are contemplated. While the focus of this application has been principally on paint removal applications, it is anticipated that the present invention is equally applicable to the cleaning and/or removal of any matter from any surface. By way of example but not limitation, the present invention is equally applicable to the removal of epoxy resin coatings from electronic circuit boards.

GOVERNMENT RIGHTS

This invention was made with Government support under Contract Nos. F09603-90-G-0016/Q601 and F09603-87-G-0695/Q601 awarded by the Department of Defense (U.S. Air Force). The Government has certain rights in the invention.
Funding remains key for zoo rehabilitation

Work has started to modernise the Guyana Zoological Park, which currently functions as a shelter for injured animals or a holding facility at best, but securing funding for the transformation remains crucial. The park has been in a prolonged state of decline, but its resuscitation is one of the pillars of the Three Parks Initiative, a collaboration between the Ministry of Natural Resources and the Environment (MNRE) and the Protected Areas Commission (PAC). With the help of a Seattle-based firm that specializes in zoo design, PJA Architects & Landscape Architects, a Guyana Zoological Park Master Plan has been drafted, and this envisages the revolutionising of the zoo experience by removing the cages and creating nature-oriented mini-eco-systems in their place. The plan is also intended to increase visits and generate revenue, which will in turn contribute to the modernisation of the entire zoo. According to Protected Areas Officer Annalise Bayney, who is also the Officer-in-Charge of the Three Parks Initiative, the zoo in its present state is not a zoo. She said that the zoo currently functions as a shelter for injured animals or a holding facility at best. She related that the animals in the zoo were not captured for the purpose of being there, but were either donated or left at the doorstep of the facility. She said that the injured animals treated by the zoo are often released into the wild, because the zoo administration understands that its current facilities are outdated and inadequate. The presence of macaws in the trees at the zoo is an example of animals that the zoo has nurtured back to health. She explained that the facilities the zoo now houses were built in the 1950s, before there were any sort of guidelines to be followed for zoo husbandry. She stated that at this point in time, the zoo does not in fact conform to international standards of minimum enclosure size.
Bayney does, however, believe that in five years’ time, with secured funding, the master plan will be implemented and the current dilapidated zoo will instead provide visitors with a natural world existing within the city limits. “The new vision for the zoo unveiled to the Guyanese public will feature the four major eco-systems in Guyana – Coastal Wetlands, Savannah, Mountain Highlands and Lowland Rainforests,” the plan states. It proposes certain main attractions for the eco-systems: the Coastal Wetlands would feature the Black Caiman, the Savannah’s main exhibit would be the Giant River Otter, and the Mountain Highlands would feature the Jaguar. Also, the Rainforest eco-system will be the home of the Harpy Eagle, while a fifth exhibit—World Treasures—would boast international animals obtained through zoo exchange programmes. The plan aims to reinvent the Guyanese zoo experience by creating exhibits which allow animals to be viewed in areas resembling their natural habitats. According to the concept plan, the zoo will be extended to what is known as the service road and this extension will provide large enclosures for the international animal exhibit. In addition, the islands within the compound would feature raised platforms and bridges for exhibits such as ‘Monkey Island,’ which will allow visitors to see the animals at eye level in their natural habitat. The zoo would also be extended further east, to the section which is usually barricaded, to feature the zoo’s priority exhibit: an aviary. Bayney explained that this aviary will attempt to replicate the life of the birds in the forest by using a bio-mesh to contain the animals as opposed to cages. She stated that while it will be one of the larger exhibits, it will require the least amount of infrastructural changes, and as such will be completed first.
Bayney also spoke about the proposed improvement of the current nature school as well as the inclusion of a petting zoo, which is already under construction and should be completed within a few months. For all of this to be accomplished, Bayney explained that many of the present structures have to be demolished and rebuilt. She asserted that while there are a few enclosures which can be rehabilitated, it is not possible to do so for most. She also maintained that all of the new structures will have to be raised at least three feet off the ground to prevent flooding. She stated that the estimated project budget is US$14 million. And while the government has contributed to recap the zoo entrance and repaint the murals, there is still a lot more to be done. According to Bayney this funding will have to be sourced from the private sector and she noted that the administration has already engaged a few companies. She stated that the Beharry Group of Companies has been very responsive to the initiative, while noting that it funded the concept plan. When questioned about the current upkeep of the zoo or lack thereof, Bayney stated that with the limited facilities and resources, the zoo is doing the best that it possibly can. She said that it is important to remember that the structures are inadequate and the animals are either old or unhealthy. Bayney related that there is in fact an animal currently residing in the zoo which has been there since the opening of the facility in 1952—the Harpy Eagle, which is, to date, the oldest living eagle of its kind in captivity. She noted that the concept plan also includes new veterinary facilities and proposes to implement new water filtration systems, which will help to keep the water in enclosures as clean as possible. This will be a change from the usually green and stagnant water found in the cages. 
The Zoological Park, which was established in 1952, is the only recognized facility in the country which houses animals for the purpose of zoo husbandry. According to statistics provided by the administration, the park, which is centrally located in the city, “is visited by an average of over 120,000 Guyanese and international tourists every year.”
http://www.stabroeknews.com/2014/news/stories/06/01/funding-remains-key-zoo-rehabilitation/
Farm Credit Financial Partners is directed and controlled by a 10-member Board of Directors. Each Board member is in the executive leadership team of one of our partner Farm Credit Agricultural Credit Associations. Elections are held annually for governance positions.

Board Responsibility

The Board is responsible for overseeing and interacting with senior leadership. Through their experience and knowledge, Board members exercise sound judgment on key aspects of our business, including strategic planning, leadership development and succession, and operating performance.

Audit Committee

A 5-member subset of the Board serves as the Audit Committee to provide oversight responsibility for the integrity of the company’s financial statements, compliance with legal and regulatory requirements, and the performance of the internal and external audit functions. The Audit Committee has authority to conduct or authorize investigations into any matters within its scope of responsibility. For additional information on the roles and responsibilities of the Audit Committee, you can review the Audit Committee Charter or contact the Chief Audit Officer at 413-271-8830.

Standards of Conduct

At Farm Credit Financial Partners, our commitment to ethical behavior and integrity is at the core of our operating model and is reflected in everything we do. By pledging to follow ethical practices, we reaffirm our commitment to serve the long-term interests of our partners. Every action and business decision is driven by these values. We have adopted Standards of Conduct which define expected actions and behavior. All Directors, employees, and agents are expected to conduct themselves in a professional manner and avoid conflicts of interest and the appearance of any impropriety. To learn more about Farm Credit Financial Partners’ Standards of Conduct, see the links below:
https://www.financialpartners.com/who-we-are/governance
It’s pretty safe to assume that a person has a set of values he or she lives by. When asked, it’s probably easy to identify a handful of your own core values. But, if pressed, how easy would it be to name all the values that define who you are? Do you think your children could name your top three? Heck, go ahead and ask them; it might be eye-opening to find out what they think! More often than not, we assume not only that we understand what our values are without really needing to think about it, but also that our children and others know what those values are too. Just as we teach our children vocabulary to learn and understand new concepts, so too do we need to teach the vocabulary of personal and family values. So, what are values? They are the words, ideas, and standards we live by that define our person, our family. Values may be taught to us through family tradition, including examples such as integrity, honor, bravery, or education. We also learn values from our culture: togetherness, independence, or tradition. Values can come from spiritual means: faith, selflessness, compassion, or forgiveness; or from friends: trust, honesty, and respect. We even have values we learn from our favorite sports team: loyalty, pride, and sportsmanship! Our personal values define who we are. Ideally, they serve as guideposts on life’s curvy roads. Take a moment, and ask yourself the following questions. Do you notice any patterns, such as consistent values you rely on, or perhaps values you were not fully aware you held?
– How do you choose to spend your free time?
– How did you decide on your job/career?
– How do you choose your close friends?
– What are the most important factors you consider when faced with tough choices?
– How do you react/respond when someone wrongs you?
– If someone described you in three words, what three words would you hope they choose?
– Look through your bookshelves; do the titles reflect consistent values?
Defining and internalizing a set of personal values is an important part of developing an identity. Values offer a baseline for decision-making, problem solving, and choosing the people with whom you surround yourself. Values help define our families, and help mold our children. As parents, the family values we establish serve as the foundation for the personal values our children adopt. Throughout their lives, we can teach our children the things we value. As small children, it may be through books, songs, or play. In grade school, it may be through drawings or games. For teens, keeping a “values business card” may serve as an important reminder.

Activity

Break out the paper, crayons, paint, and markers! With the younger kiddos, make a family crest. Include symbols of the values your family stands for. Sometimes it’s helpful to have an initial discussion to generate a “values word bank.” This can get the ball rolling on choosing those that matter most to each of you. With older children, making values “business cards” may be more developmentally appropriate. Make a business card (there are some great templates in most word processing programs) and have each family member write down his or her own personal values. Then, encourage everyone to always keep this business card on them (maybe in a backpack, wallet, car visor, or iPhone case) to remind them of what he or she stands for.
https://thewillowpartnership.com/forms
Clinical and epidemiological studies have consistently observed the heterogeneous symptomatology and course of geriatric depression. Given the importance of genetic and environmental risk factors, aging processes, neurodegenerative and cerebrovascular disease processes, and medical comorbidity, the integration of basic and clinical neuroscience research approaches is critical for the understanding of the variability in illness course, as well as the development of prevention and intervention strategies that are more effective. These considerations were the impetus for a workshop, sponsored by the Geriatrics Research Branch in the Division of Adult Translational Research and Treatment Development of the National Institute of Mental Health that was held on September 7-8, 2005. The primary goal of the workshop was to bring together investigators in geriatric psychiatry research with researchers in specific topic areas outside of geriatric mental health to identify priority areas to advance translational research in geriatric depression. As described in this report, the workshop focused on a discussion of the development and application of integrative approaches combining genetics and neuroimaging methods to understand such complex issues as treatment response variability, the role of medical comorbidity in depression, and the potential overlap between depression and dementia. Future directions for integrative research were identified. Understanding the nature of geriatric depression requires the application of translational research and interdisciplinary research approaches. Geriatric depression could serve as a model for translational research integrating basic and clinical neuroscience approaches that would have implications for the study of other neuropsychiatric disorders.
https://jhu.pure.elsevier.com/en/publications/translational-research-in-late-life-mood-disorders-implications-f-4
Recent work building open-domain chatbots has demonstrated that increasing model size improves performance. On the other hand, latency and connectivity considerations dictate moving digital assistants onto the device. Giving a digital assistant like Siri, Alexa, or Google Assistant the ability to discuss just about anything leads to the need to reduce the chatbot model size such that it fits on the user's device. We demonstrate that low-parameter models can simultaneously retain their general-knowledge conversational abilities while improving in a specific domain. Additionally, we propose a generic framework that accounts for variety in question types, tracks reference throughout multi-turn conversations, and removes inconsistent and potentially toxic responses. Our framework seamlessly transitions between chatting and performing transactional tasks, which will ultimately make interactions with digital assistants more human-like. We evaluate our framework on 1 internal and 4 public benchmark datasets using both automatic (Perplexity) and human (SSA - Sensibleness and Specificity Average) evaluation metrics and establish comparable performance while reducing model parameters by 90%.

*=Equal Contributions

Apple attended Interspeech 2019, the world's largest conference on the science and technology of spoken language processing, held in Graz, Austria from September 15th to 19th.
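For readers unfamiliar with the automatic metric named above: perplexity is the exponential of the average negative log-likelihood the model assigns to each token, so lower is better. A minimal sketch, in which the function and the per-token probabilities are illustrative and not drawn from the paper:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence, given the probability the model
    assigned to each observed token."""
    # Average negative log-likelihood per token...
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    # ...exponentiated back out of log space.
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token in a sequence
# has perplexity 4: it is "as confused" as a uniform choice among 4 tokens.
ppl = perplexity([0.25, 0.25, 0.25, 0.25])
```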
https://machinelearning.apple.com/research/low-resource-adaptation
In the first place, a thorough work-up is needed to come to an accurate diagnosis using bladder cancer diagnostic tests. Once the family physician has noticed blood in the urine and there is a suspicion of bladder cancer, a quick referral to a urologist should be made. The specialist will then do a number of tests aimed at determining where the blood is coming from. In the case of bladder cancer, the specialist will also want to determine the extent of it. Here is a list of tests that likely will be done:

Diagnostic tests for bladder cancer
– urine cytology
– cystoscopy
– bimanual examination
– adequate biopsy of cancer and punch biopsy of other sites
– CT scan of entire urinary tract
– bone scan
– chest X-rays
– ultrasonography or MRI scan
– CT scan of abdomen and pelvis
– genetic tests

Explanation of what these tests are designed to do

The urine cytology helps the urologist to know whether or not there are cancer cells present in the urinary tract. Also, after successful removal of one lesion, it might indicate whether additional lesions remain in the urinary tract. Using cystoscopy, the specialist can look into the bladder and do procedures such as biopsies and removal of smaller cancers or polyps. In addition, bimanual examination, rectal examination, and all the other tests such as CT scans, bone scans, MRI scans, and ultrasonography will tell the urologist how widespread the cancer is. This is important to know, as the final outcome (“prognosis”) is determined by the extent of the disease (“stage”) at the time the cancer was initially diagnosed. As treatments vary for different stages of cancer, it makes sense to carefully investigate first at what stage the bladder cancer is.

Other pathological tests

There is another test that the pathologist may consider doing.
This is a genetic test looking for chromosomal abnormalities in the cancer tissue. For instance, the authors of Ref. 3 showed that there is a link between chromosome 9 anomalies and the recurrence of superficial bladder cancer.

Patience before bladder cancer treatment leads to better results

Patients are often annoyed or angry after a cancer diagnosis, and unfortunately the physician must order a few more tests prior to treatment. This is understandable from the point of view of the patient, who wants to get on with it and have the cancer treated. From the urologist's and oncologist's point of view, however, one has to know what kind of cancer it is (low grade or high grade) and how far it has spread: is it in the pelvic lymph glands, in the lymph glands beside the aorta (called "paraaortic lymph nodes"), or are there even lung metastases? In one case there might have to be surgery first, followed by radiotherapy. In another case the urologist would likely decide to use combination chemotherapy first, followed by radiotherapy. These decisions have far-reaching implications for long-term survival and cannot be made lightly. The physician can offer the patient the best results by first collecting all of the evidence about the cancer and then developing a strategy of therapy.

References
1. DeVita VT Jr, et al. (eds.). Cancer: Principles & Practice of Oncology. 5th edition, volume 1. Lippincott-Raven, Philadelphia, PA, 1997.
2. DeVita VT Jr, et al. Cancer: Principles & Practice of Oncology. 4th edition. J.B. Lippincott Co., Philadelphia, 1993. Chapter 34, page 1054 (data from text arranged as a table).
3. Simoneau M, et al. Oncogene 2000 Dec;19(54):6317-6323.
4. Dalbagni G, Herr HW. Urol Clin North Am 2000 Feb;27(1):137-146.
5. Herr HW. J Clin Oncol 2001 Jan 1;19(1):89-93.
6. Corral DA, Logothetis CJ. World J Urol 1997;15(2):139-143.
7. Dodd PM, et al. J Clin Oncol 1999 Aug;17(8):2546-2552.
8. Sears B. The Age-Free Zone. Regan Books, Harper Collins, 2000.
https://nethealthbook.com/cancer-overview/bladder-cancer/bladder-cancer-diagnostic-tests/
William Turner, one of the most eminent designers of the 20th century, first produced Roslyn as a wallpaper design in 1910. It was later manufactured as a printed fabric from the 1920s. Originally known as The Cleves, it was part of the Sanderson range until 1974, making it one of the company's most enduring designs. Once again it is available as both a fabric and a wallpaper. Whilst we make every effort to ensure that the colours shown on our website are as close to the actual fabric or wallcovering as possible, colour variations do occur, as screens and monitors will have different colour calibration. This means that the actual product may look different from that shown on your screen or monitor. We would advise that you request a sample of your chosen fabric or wallpaper prior to placing an order to ensure satisfaction. Shades can sometimes vary slightly between samples and batches, so if the colour match is critical, please ask us to send a 'stock cutting' of the current batch when placing your order.
https://www.newenglandinteriors.co.uk/wallpaper-by-new-england-c594/sanderson-wallpaper-c621/vintage-c712/sanderson-roslyn-wallpaper-neutral-gold-p14954
When Hurricane Hugo hit the coast of South Carolina in September of 1989, the Francis Marion National Forest (Francis Marion) suffered a devastating blow. Sixty percent of its pine trees sustained moderate or heavy damage, and its bottomland hardwood trees fared even worse: 43 percent were broken and 43 percent were uprooted. At that time, national forest planners recognized that they needed to update their 1985 Land and Resource Management Plan to help the forest recover. By 1996, they had developed a plan that would assist ecosystems in becoming more resistant to the effects of hurricanes, drought, and insect outbreaks. The plan featured a specific focus on the recovery of the red-cockaded woodpecker and the restoration of longleaf pine forests. More than 25 years after the devastation from Hurricane Hugo, the health of the woodpecker population and the longleaf pine forests has improved. However, threats posed by a growing population in the region, increasing urban development, and encroaching nonnative invasive plant species have been joined by a new and even more alarming threat: climate change. Environmental scientists project that rising temperatures and sea levels in the area will produce more floods and droughts, more severe tropical storm systems, and a general increase in climate variability. This combination of social and environmental changes will result in another new management context for the Francis Marion. Managing ecosystems and human communities so both are resilient to the impacts of climate change will play a key role in the forest's next management plan. Working with staff at the U.S. Forest Service Eastern Forest Environmental Threat Assessment Center (EFETAC), forest managers for the Francis Marion used the Template for Assessing Climate Change Impacts and Management Options (TACCIMO) web tool to guide development of the 2016 version of their forest plan.
TACCIMO provides vital climate information in an easy-to-use format, enabling users to quickly find location-specific climate studies through quotations from peer-reviewed literature and GIS-based climate change projections. With access to information through TACCIMO, the team developed a three-phase plan to integrate climate change into their next forest management plan. In the first assessment phase, forest managers used TACCIMO to facilitate a comprehensive review of existing information relevant to conditions and trends of the forest. The TACCIMO analysis revealed some troubling trends for the forest: rising sea levels and more severe storm surges threaten to drastically alter the composition of tidal marshes on the Francis Marion, and greater overall climate variability may lead to more wildfires and allow invasive species to infest the area. These findings were used to develop a “need to change” document, which serves as a transition from the assessment to the forest plan development phase. For the second phase now in progress, the planning team is developing adaptive management strategies that respond not only to a changing climate, but also to the spread of nonnative invasive species and increasing urban development. These strategies form the basis of a monitoring and evaluation plan, and include multi-party monitoring at the local and regional levels. The strategies also encourage cooperation among partners to meet common goals. The final plan for the forest will integrate the concept of adaptive management as a way to dynamically respond to changes that result from our changing climate. The plan aims to lessen the potential impacts of climate change by promoting native and resilient ecosystems, providing more carbon sinks, reducing existing forest stresses, and collaborating with partners to monitor and respond to climate-related changes in forest health. Adapted from an article on the U.S. Climate Resilience Toolkit website. 
For more information, email Emrys Treasure at [email protected].
https://www.srs.fs.usda.gov/compass/2015/06/25/the-future-of-the-francis-marions-coastal-forests/
Washing hands is one of the many collective actions that can contain coronavirus Covid-19. The spread of the coronavirus could be slowed using findings from behavioural science which show that collective actions succeed, according to a new research paper from Ireland's Economic and Social Research Institute (ESRI). The conclusion in the paper, called Using Behavioural Science to Help Fight the Coronavirus, is based on more than 100 scientific research papers reviewed over the past week by the institute's Behavioural Research Unit. The review covers seven topics: hand washing, face touching, isolation, collective action, avoiding undesirable behaviours, crisis communication, and risk perception. Some clear conclusions emerge from the evidence. It says simple, cheap behavioural interventions can make large differences. "For instance, many more people use hand sanitisers when they are placed with colourful signs in unmissable locations, such as the centre of entrance halls, the middle of lift lobby areas, or immediately facing doors. All organisations can do this," it said. The researchers collected evidence on the psychological impact of isolation and how to help people to cope. "The implication is that we need more official support, perhaps resourcing a dedicated phoneline. People can be helped to plan isolation. They do better by staying in contact with others and keeping up a routine. This is important to ensure that those who need to self-isolate are supported and not deterred from doing so," it said. Findings from behavioural science also suggest how to encourage people to act in the public interest and to avoid undesirable behaviours like panic buying or xenophobic responses. Professor Pete Lunn, head of the ESRI's Behavioural Research Unit, said: "Some of the most important findings concern collective action – we are all in this together.
The evidence shows that public-spirited behaviour is much more likely when there is frequent communication of how we can best help each other and strong group identity, not only nationally but also in smaller groups like workplaces, schools and local communities. Polite social disapproval for those who don't comply is important too," he said. The evidence covered in the paper contains lessons regarding crisis communication and perceptions of risk. Speed, honesty and credibility are important in official communication, but so is empathy. It is best if the communication of risks is coupled with actions people can take to reduce them.

Summary of research

This paper summarises useful evidence from behavioural science for fighting the COVID-19 outbreak. It is based on an extensive literature search of relevant behavioural interventions and studies of crises. The findings aim to be useful not only to government and public authorities, but to organisations, workplaces and households. Seven issues are covered:

(1) Evidence on handwashing shows that education and information are not enough. Placing hand sanitisers and colourful signage in central locations (e.g. directly beyond doors, canteen entrances, the middle of entrance halls and lift lobbies) increases use substantially. All organisations and public buildings could adopt this cheap and effective practice.

(2) By contrast, we lack direct evidence on reducing face touching. Articulating new norms of acceptable behaviour (as for sneezing and coughing) and keeping tissues within arm's reach could help.

(3) Isolation is likely to cause some distress and mental health problems, requiring additional services. Preparedness, through activating social networks, making concrete isolation plans, and becoming familiar with the process, helps. These supports are important, as some people may try to avoid necessary isolation.
(4) Public-spirited behaviour is most likely when there is clear and frequent communication, strong group identity, and social disapproval for those who don't comply. This has implications for language, leadership and day-to-day social interaction.

(5) Authorities often overestimate the risk of panic, but undesirable behaviours to watch out for are panic buying of key supplies and xenophobic responses. Communicating the social unacceptability of both could be part of a collective strategy.

(6) Evidence links crisis communication to behaviour change. As well as speed, honesty and credibility, effective communication involves empathy and promoting useful individual actions and decisions. Using multiple platforms and tailoring messages to subgroups are beneficial too.

(7) Risk perceptions are easily biased. Highlighting single cases or using emotive language will increase bias. Risk is probably best communicated through numbers, with ranges to describe uncertainty, emphasising that numbers in the middle are more likely. Stating a maximum, e.g. "up to X thousand", will bias public perception.

A final section discusses possibilities for combining these insights, the need for simplicity, the role of the media, and possibilities for rapid pretesting.
https://www.leinsterexpress.ie/news/coronavirus/525178/we-are-all-in-this-together-collective-action-can-contain-coronavirus-covid-19.html
Definitions, Semantics and a Common Syntax

Every discussion of eco-terrorism faces the recurrent difficulty of defining precisely what is being described. Terms such as "eco-terrorism," "environmental terrorism," and "ecotage" are used almost interchangeably, generating confusion and misunderstanding, and sub-definitions such as "impact eco-terror" and "ideological eco-terror" further complicate the lexicon. An excellent example of this confusion is found within the Encyclopedia Britannica, where ecological terrorism and environmental terrorism are lumped together, with a definition that includes both terroristic crimes utilizing or targeting environmental resources, and the use of terror for the sake of protecting the environment. Additionally, the terms environmental terror and eco-terror are often used interchangeably, though they represent very different practices. Effort must be made to distinguish exactly which actions of concern are being addressed. While the term "eco-terrorism" has been well recognized in a number of arenas [1,16], it nonetheless may be most accurate to use "violent, radical, direct action environmentalism" to describe the behaviors in question. The unwieldiness of that descriptor, however, calls for a shorter label. For the purposes of brevity and consistency, we shall use the term "eco-terrorism" to refer to: "the use or threatened use of violence of a criminal nature against innocent victims or property by an environmentally oriented, subnational group for environmental-political reasons, or aimed at an audience beyond the target, often of a symbolic nature".
Even within the confines of this term and definition, however, we must face several related and ongoing issues, including (1) recognizing that the term "terrorist" is laden with connotations that have the risk of being inappropriately applied to the broader non-violent environmental movement; (2) discerning who in fact qualifies as an eco-terrorist; and (3) determining the nature or role of violence in the definition of eco-terror. Each merits a brief discussion before moving on to consider eco-terror and globalism. The observation that the mere term "terrorism" carries with it social and legal connotations certainly gives pause to any careless application of the term. The illegitimacy or criminal presumption that attaches to the term may taint those to whom the term is applied, regardless of the actual activities they undertake. This concern is particularly important in considering the chilling effect that labeling otherwise legitimate forms of social protest as eco-terrorism might cause. A uniform definition of eco-terrorism is therefore highly desirable to minimize any inappropriate usages, and to prevent inadvertent or intentional corruption of the term. The definition provided by the Federal Bureau of Investigation in the U.S. is certainly open to semantic or philosophical debate. It does, however, provide a consistent reference for discussing the subject, and by virtue of its adoption by federal law enforcement, it provides an official meaning for both opponents and apologists of eco-terroristic behavior. Eco-terror in a broader anti-globalism context clearly implies more than a single definition of terrorism, and in a number of nations, eco-terrorism is not specially defined. There is, of course, no pressing need for a globally accepted definition of eco-terror, provided each country has addressed the issue in some manner.
It is more important that official definitions be developed, and that these respective national legal definitions be well articulated and consistently applied. Anything less risks enveloping broader environmental activism within the specific, criminalized confines of eco-terror. For illustration, while the European Union has taken note of the threat posed by eco-terror, the depth of that attention does not reflect the priority that law enforcement in the United States has placed on this crime, nor the problems inherent in defining eco-terroristic actions. Related to the matter of eco-terror definitions is the question of what behavior qualifies one as an eco-terrorist. While any definition, such as the federal definition in the U.S., may imply what actions are terroristic, the articulation of the crime does not specifically suggest what is being prohibited. For example, in 1999 the FBI noted that it had become "increasingly difficult to differentiate acts of terrorism from acts of vandalism". The FBI's narrow interpretation of the definition of terrorism under existing federal law allowed, in the agency's opinion, a number of crimes that might well have been committed by eco-terrorists to instead be classified as extremist vandalism. The FBI notes that it does not serve as the lead investigative body for such vandalism cases. The difference between this depiction in 1999 and the statements made before the U.S. Congress in 2002 on behalf of the agency signals that the FBI has broadened its approach. The definition expressed in 2002 is, not surprisingly, similar to the definition of domestic terrorism espoused in the 2001 Patriot Act.* In any anti-globalism setting, therefore, it is important to understand the limitations of existing laws and policies. Regardless of the criminality or social undesirability of certain acts, law enforcement agencies may lack the specific authority to effectively investigate and prosecute these crimes.
Somewhat tangential to the relationship between the definition of an act of eco-terror and law enforcement efforts is a more philosophical debate over who is committing acts of terror, or who actually qualifies as an eco-terrorist. On one hand, Arnold suggests that the anti-globalist and anarchist ranting of Theodore Kaczynski (the Unabomber), along with his familiarity with "green" activism, categorizes him as an eco-terrorist. With his string of deadly targeted mail bombs, Kaczynski was certainly a terrorist, but what additional criteria might be necessary to label him an eco-terrorist? Even with the federal definition in the U.S., it is necessary to delve into the motives of the Unabomber before one can proclaim him an eco-terrorist. At the other end of the spectrum are those who would argue that the various environmental policies of the U.S. government or the actions of corporate animal researchers would qualify as eco-terroristic. Embedded within this semantic exercise is a fundamental moral question: is the violence intended to harm or terrorize a populace, or is it merely incidental to another activity? In other words, can the criminal efforts of the Unabomber in any way be likened to the activities of an animal researcher? If an easy distinction between the two is apparent, then arguing that researchers are eco-terrorists becomes more difficult. Certainly, public perceptions of the differences between the two have led to the prevailing legal structure and standards, wherein the former is a crime, and the latter, practiced within bounds, is not. Finally, the concept of violence is a recurring problem in defining eco-terror. Direct action environmentalism has a long relationship with various forms of violence. When Edward Abbey's character Doc objects to some of the proposed actions of the Monkey Wrench Gang: "'All this violence,' Doc said. 'We are a law-abiding people,'" he is chastised by Hayduke: "'What's more American than violence?'" Hayduke wanted to know.
"'Violence, it's as American as pizza pie.'" While Abbey's Hayduke may have had little difficulty with his version of violence, there exists an apparent intellectual, though not legal, distinction between violence directed towards inanimate objects such as property and violence directed at persons. This distinction forms the foundation upon which direct action environmentalism maintains its claims to non-violence. The perceived difference is premised upon the assertion that so long as persons are neither targeted nor harmed, the related direct actions are not "violent." Actions such as arson, tree-spiking, vandalism of machinery or buildings, and other property-focused crimes would be considered non-violent under this paradigm, whereas mail bombs, letters containing razor blades, and direct assaults on individuals would qualify as violent. Direct action environmentalists, in many cases, claim to eschew the latter while embracing the former. Interestingly, property crimes do not necessarily enjoy any particular relief under criminal law. For example, in the United States, arson is categorized as severely as the offenses of rape, assault, and manslaughter. Kumar notes that "there is no real distinction between violence to people and violence to property." It is a very thin line indeed, relying upon a distinction between the targets, rather than the acts themselves. It is important to recognize that the concept of violence may be interpreted far more broadly than direct action environmentalists may foresee or desire. As one commenter notes: "Violence in any form can inspire terror in its victims and in those indirectly affected by the violence". It is further troubling that violence against property targets has been recognized as a potential precursor to escalated violence such as kidnappings and murder [17,26], and that we may indeed expect an increase in environmentally motivated attacks against specific individuals.
With individuals reporting attacks against their properties and harassment of their families [28-30], and with the attempted use of toxic compounds during protests, direct action environmentalism may indeed be precariously poised to escalate. The violence dichotomy can become further confused by its role in anti-globalism direct actions. In protests such as Seattle's World Trade Organization (WTO) protest in 1999, the ensuing street violence resulted in arrests, curfews, and the deployment of National Guard troops to restore order. While the protesters did not directly assault individuals, and while the majority of the protesters may not have engaged in the most destructive acts [4,32], it is nonetheless difficult to excuse the chaos caused as non-violent, particularly when juxtaposed with the tradition of non-violent civil disobedience displayed by such causes as the civil rights movement in the U.S. Some would argue that the violence inherent in such anti-globalist protests is merely a convenient excuse for public rioting. The "Battle in Seattle" nonetheless continues to stir debate as a prime example demonstrating the boundary between non-violent and violent protests in the name of anti-globalism.
https://www.briangwilliams.us/globalization-environment/definitions-semantics-and-a-common-syntax.html
Fairfield First Selectman’s Response to Questions on Hiring of Communications Consultant At the end of this past Wednesday’s Board of Selectmen meeting, there were some inaccurate statements and several false allegations regarding the communications consultant hired by the Town this past summer. It is unfortunate that these statements were made without any background or research. I would like to take this opportunity to provide residents with background and circumstances regarding this hiring. Let’s look at the facts: In early August, I was notified of a possible arrest of a town employee. I realized this would be a traumatic experience for our Town and our residents would have significant concerns along with a number of questions. It would be important to fully explain the facts and circumstances of the situation. The Town does not have a communications department and I felt professional expertise was required to address this properly and expeditiously. It was and always has been my intent to provide complete and accurate information to our residents. Hiring a communications professional in this type of situation is following best practices. To that end, I reached out to Christopher Gidez who has a background in crisis communication as well as considerable experience in managing communications on complex environmental issues. We agreed on a $3,000 retainer. It was not anticipated that there would be an extended need for Mr. Gidez’s services. However, on the same day that two Town employees were arrested, the Health Department was notified that the Connecticut Department of Energy and Environmental Protection (DEEP) was coming to Gould Manor Park to test for contaminants. In order to be prepared to address any public health or safety concerns, the Town immediately hired a Licensed Environmental Professional, Tighe & Bond. 
Additionally, since the testing performed at Gould Manor by DEEP was a result of concerns over the use of fill from the aggregate pile at the reclamation yard, I asked our Health Department to search for any additional Town sites where the aggregate fill may have been used. Due to the seriousness and complexity of the testing of various Town sites, I asked the communications consultant to stay on to ensure that the Town provided communication that was factual, transparent, coordinated and timely during this time of anxiety and uncertainty over the health and safety of our residents. Over the last two months, the Town has put in place an extensive communications plan. We set up a webpage, www.fairfieldct.org/filluseissues, that includes complete test results, a video from the State Department of Public Health, a “scorecard” so residents can track the status of all parks and fields and a list of third party resources to help residents better understand the relative risk of the materials identified by the testing. We have provided frequent email updates. We also set up a special email address so residents can direct specific questions and concerns to the Health Department. All the communications have been to ensure our residents have access to complete and accurate information. My three priorities continue to be Public Safety, Accountability and Transparency/Communications. Our message is clear and consistent. I am proud of how our Town employees have risen to this challenge, and I am grateful for the input we have received from Town residents by way of questions, suggestions and concerns. This has guided our communications. It is inappropriate for anyone to mischaracterize these communications as anything other than factual and public safety oriented. It is my responsibility, and the Town’s responsibility, to communicate fully and regularly as the facts have unfolded over the past two months. Mr. Gidez has been an excellent and very professional resource to the Town. 
It is time to stop playing politics. It is time to stop trying to scare our residents. It is the obligation of every Town employee, elected official and volunteer appointee to focus on doing what is best for our residents and for our Town.
The present invention relates to telecommunications and can be used in single- and multi-span simplex and (half-)duplex analog and digital systems of local and long-distance communication through coaxial and symmetrical physical pairs of metallic wires, fiber optics, metallic waveguides and other cables for the transmission of data, telephone, television, internet, and other coded/modulated signals. A method of simplex and/or (half-)duplex electrical communication through coaxial and symmetrical pairs of metallic wires, fiber optics, metallic waveguides and other cables, in the spans of which the echo-reflections of signals are suppressed at the ends of the pair/waveguide/cable, is universally known and world-wide accepted (CCITT Recommendations G.332(e) and G.333(e); and also CCIR Recommendation 567-2, XVI Plenary Assembly, V.12, Geneva, 1986). For this purpose the output/input resistance of the equipment connected with them is made equal to their rated wave resistance. There are known devices for telecommunication realizing this well-known method, in which during transmission the signal is compressed in frequency spectrum, converted/(re-)coded in time and/or level/amplitude, frequency or phase, and/or various types of modulation/manipulation, (pre-)equalization, etc., are introduced, for example in Digital Subscriber Loop technologies A, H, S, V, or xDSL, etc. (ATG's Communications and Networking Technology Guide series, copyright 1997 by the Applied Technology Group; and the DSL Source Book, Plain Answers About Digital Subscriber Line Opportunities, winner of the 1997 Excellence Award, Society for Technical Communication (STC), Second Edition, copyright 1998, Paradyne Corporation).
This processing uses more efficiently or more completely the amplitude range, protection from interference/noise, bandwidth, energy and other capabilities of existing telecommunication devices, and as a result their carrying capacity, transmission distance/span length, volume of services, profitability, etc., are increased. However, in the devices of all known telecommunication systems, during suppression of signal reflections at the end of the pair/waveguide/cable, signal energy and protection from interference/noise are uselessly wasted at each span, and additional system distortions are introduced. The losses are so great that if they were brought into use, either the above-listed processing of the signal would not be needed, or it would be several times more efficient. Devices for telecommunication through coaxial and symmetrical pairs of metallic wires are generally known in which, in the spans, at the end of the pair connected to the output of a transmitter and/or input of a receiver formed as a current generator and/or a voltage amplifier, a matching resistor equal to the rated wave resistance of the pair is connected in parallel to the pair, and at the output of a transmitter and/or input of a receiver formed as a voltage generator and/or a current amplifier, a matching resistor is connected in series with the pair (Chu-Sun Yen, Crawford R. D. Distribution and equalization of signal on coaxial cables used in 10 Mbit/s baseband local area networks. IEEE Trans., 1983, Vol. COM-31, No. 10, pp. 1181-1186). In the devices of existing well-known systems of telecommunication through coaxial and symmetrical pairs of metallic wires, at each end of each physical pair, signal energy is uselessly wasted on matching resistors to suppress the echo of signals from the ends of the pair, beyond what is necessary and sufficient for transmission with practically ideal accuracy (0.01-0.001).
At the resistor which matches the head (input end) of the pair, in spans of existing and generally known telecommunication systems, half the signal energy is uselessly wasted during transmission. Only in the particular case of pair attenuation up to 10-15 dB is, on average, about 1/10th of the energy uselessly lost during transmission. At the resistor which matches the pair end during reception, with pair attenuation over 20-30 dB, all signal energy is uselessly lost, and only in the particular case of attenuation of 10-30 dB is just a part of the energy lost. As a result, at the end of each physical pair of each span, the received signal is uselessly weakened 1-2 times with pair attenuation up to 10-15 dB, 2-4 times with attenuation of 10-30 dB, and 4 times or more with attenuation of 20-30 dB. Accordingly, the efficiency of signal energy at each span is reduced 1-4 times, 4-16 times, and 16 times respectively; the received signal is additionally distorted and reduced, and the protection of the signal from noises/interferences/influences is lowered, while the pair itself has only its known attenuation. Even greater energy is lost at each span in transmitting equipment for the realization of its practically zero or infinite output resistance, formed as a voltage/current generator of the signal, for needlessly stable suppression of echo from the ends of the pair by its matching resistor, beyond what is practically necessary and sufficient. The known devices for telecommunication through fiber optics, metallic waveguides and other cables possess the same disadvantages. As a result, for any given frequency band and transmission distance, the universally known, existing, world-wide accepted systems of telecommunication are excessively expensive and complicated. They need a great quantity of equipment and a large number of spans and repeater/regenerative stations.
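The energy accounting above follows from elementary transmission-line relations. The sketch below is illustrative only; the 75-ohm impedance and the source values are arbitrary examples, not figures from the patent. It computes the voltage reflection coefficient for matched, open-circuit ("idle running") and short-circuit terminations, and shows why a matched series source resistor dissipates half the signal power:

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at a termination.

    Gamma = (Z_L - Z_0) / (Z_L + Z_0): a matched load gives 0 (echo suppressed),
    an open circuit gives +1 and a short circuit gives -1 (full reflection).
    """
    if z_load == float("inf"):  # idle running (open circuit)
        return 1.0
    return (z_load - z0) / (z_load + z0)


def source_power_split(v_source, r_source, r_line):
    """Power dissipated in a series source resistor vs. delivered to the line.

    With a matched source (r_source == r_line) the resistor and the line carry
    the same current and equal resistances, so half the power is lost in the
    source resistor -- the loss described in the text above.
    """
    i = v_source / (r_source + r_line)
    return i ** 2 * r_source, i ** 2 * r_line


z0 = 75.0  # ohms, a typical coaxial characteristic impedance (example value)

print(reflection_coefficient(75.0, z0))          # matched load: no echo
print(reflection_coefficient(float("inf"), z0))  # open end: full reflection
print(reflection_coefficient(0.0, z0))           # shorted end: full inverted reflection

lost, delivered = source_power_split(1.0, z0, z0)
print(lost == delivered)  # matched source: half the energy heats the resistor
```

Under these idealized assumptions the trade-off the patent describes is visible directly: suppressing the echo (Gamma = 0) requires the matched resistors that dissipate part of the signal energy, while the open- or short-circuit modes the invention proposes reflect the signal fully instead.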
To increase the frequency band, transmission speed, carrying capacity and/or transmission distance/span length, it is necessary to carry out very expensive and/or lengthy laying of new lines and/or replacement of existing wires/waveguides/cables, of equipment and of its mounting places. This makes transmission systems and networks more complicated, hinders their development and requires much time. Accordingly, it is an object of the present invention to increase the frequency band, transmission speed, carrying capacity and/or span length, and/or to reduce the cable cross-section, the quantity of equipment and the number of spans, and/or to simplify and cheapen them and the telecommunication itself, by releasing/bringing into use the hidden signal energy that is now uselessly lost, weakened inside the spans by the attenuation and distortion of the connected existing pair/waveguide/cable, and/or by converting that energy into additional amplification, equalization and interference/noise immunity/protection of the signal. In keeping with these objects, and with others which will become apparent hereinafter, in each span separately, at either or at both end/s of the pair/waveguide/cable, the signal is fully reflected. For this purpose, in spans with a coaxial or symmetrical pair of metallic wires, at either end or, in any combination, at both ends of the pair, an open-circuit (idle-running) mode is introduced and the equipment connected to the pair is formed as a current generator and/or voltage amplifier; or a short-circuit mode is introduced and the equipment connected to the pair is formed as a voltage generator and/or current amplifier.
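The open-circuit ("idle-running") and short-circuit modes just described both produce full reflection, which the standard transmission-line identity makes explicit: the voltage reflection coefficient at a termination is Γ = (Z_L − Z_0)/(Z_L + Z_0), equal to +1 for an open end, −1 for a shorted end, and 0 for a matched one. The sketch below is illustrative only; the 50-ohm rated impedance is an assumed value.

```python
def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient at a termination of impedance z_load."""
    if z_load == float("inf"):  # open circuit: the idle-running mode
        return 1.0
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(float("inf")))  # 1.0: full reflection, same polarity
print(reflection_coefficient(0.0))           # -1.0: full reflection, inverted
print(reflection_coefficient(50.0))          # 0.0: matched, no reflection
```

This is why the invention pairs the open-circuit mode with current-generator equipment and the short-circuit mode with voltage-generator equipment: each mode reflects the entire signal instead of dissipating it in a matching resistor.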
Analogously, in span/s with fiber optics, metallic waveguides or other cables, the signal is fully reflected at either or both ends. When, with decreasing frequency, the attenuation of the pair/waveguide/cable along its length between its ends becomes insufficient to suppress the introduced echo to the required/rated accuracy/error ratio of transmission, then at those frequencies, at the same ends of the pair/waveguide/cable of the same spans, the introduced full reflection is replaced by a partial reflection. In each span separately, the output/input resistance of the connected equipment is shifted toward the wave impedance so as to weaken the echo introduced at either/both end/s of the pair/waveguide/cable beyond the weakening provided by the attenuation along the pair between its ends, so that in sum the minimum suppression required and sufficient for the given/rated accuracy/error ratio of transmission is provided. Moreover, at the same single end or, separately and in any combination, at both ends of the pair/waveguide/cable of a span, the signal is amplified, equalized and protected from interference/noise by an (auto-)transformer and/or resonant circuit, which is controlled by, and acts proportionally to, the signal reflection introduced at the given end. Alternatively, the operations listed above are performed in the way listed above over a pair/waveguide/cable length shorter than a quarter wavelength of the upper frequency limit of the transmitted signal. To achieve the same objective, in each span separately, cable-end connectors-equalizers are introduced between the coaxial or symmetrical pair and the transmitter-receiver equipment connected to it. At this end of the pair, for duplex connection, the transmitter and receiver of the signal are formed as a current generator and a voltage amplifier, or as a voltage generator and a current amplifier.
At the end of the pair connected to equipment formed as a current generator and/or voltage amplifier, an opening equalizing circuit is introduced into the cable-end connector-equalizer in series with the matching resistor. It is formed as series inductors, part of which are shunted by resistors and/or capacitors. In addition, a low-frequency matching circuit is introduced and connected in series with them; it is formed as a series of resistors shunted by capacitors and/or R-C circuits, and is equal to the increment of the wave impedance of the pair at low frequencies. And/or a resonant circuit is introduced, the inductor/s of which are connected in series with the wires of the pair end, while the capacitor of the circuit is connected in parallel with the series-connected matching resistor, the introduced opening equalizing circuit and the low-frequency matching circuit. Or an equalizing (auto-)transformer is introduced, whose primary winding is connected in parallel with the introduced opening equalizing circuit and whose secondary winding is connected, in accordance with the primary winding, in series with the connected equipment. Or the matching resistor is formed as a potentiometer, one leg of which is connected in series between the interconnected pair and equipment. Or a capacitor is introduced and connected in parallel with the matching resistor/potentiometer, or in parallel with it and the low-frequency matching circuit connected in series with it; in addition, either the opening equalizing circuit contains a resistor, shunted by an inductor of inductance L2, formed equal to the rated wave impedance R of the pair, with the capacitance of the introduced capacitor equal to L2/R²; or, together with the (auto-)transformer, a resistor equal to the rated wave impedance R is introduced and connected in series with the opening equalizing circuit, together they shunt the primary winding of the (auto-)transformer of inductance LT, and the capacitance of the introduced capacitor is LT/R².
At the end of the pair connected to equipment formed as a voltage generator and/or current amplifier, a shorting equalizing circuit is introduced into the cable-end connector-equalizer and connected in parallel with the matching resistor; it contains, connected in parallel, parallel and/or series resonant circuits, capacitors, and series and/or integrated ladder R-C networks. Or a low-frequency matching circuit is introduced and connected in series with them, or it is connected in series with the matching resistor and together they are shunted by the introduced shorting equalizing circuit. Or a resonant circuit is introduced, the inductor/s of which are connected in series with the wires of the pair end, while the capacitor of the circuit is connected between the wires of the pair end. Or an equalizing (auto-)transformer and a resistor equal to the rated wave impedance R are introduced; the secondary winding of the (auto-)transformer, of inductance LT, is shunted by the introduced resistor and connected in series with the matching resistor, while the primary winding, in accordance with the secondary winding, is connected through the shorting equalizing circuit in parallel with the connected equipment; in addition, a capacitor of capacitance C = LT/R² is introduced and connected in parallel with the matching resistor. Or the matching resistor, equal to the rated wave impedance R, is formed as a matching resistive divider introduced in parallel with the connected equipment, the resistance of the parallel legs of the divider being equal to R, with the wires of the pair connected directly, or through introduced elements, to the cross-leg of the divider, while the length-leg is shunted by the shorting equalizing circuit, or directly by a capacitor C when the (auto-)transformer is introduced.
Or, for controlling the equalization, the introduced matching resistive divider is formed as a row of weighting resistors whose parallel-connection resistance equals R, the resistance of each being greater than R and inversely proportional to the relative step of change of its division, and a switch is provided from one leg to the other. Or an (auto-)transformer is introduced into the cable-end connector-equalizer and switched on at either or both end/s and/or between branches. Or resonating capacitor/s are introduced into the cable-end connector-equalizer and connected to a winding/s of the introduced (auto-)transformer/s and/or to the transmitter-receiver. In a cable-end connector-equalizer of coaxial design, the matching resistor/potentiometer/divider and other elements are connected in the circuit of the inner wire, while in a symmetrical design they are connected symmetrically into the circuit of each wire and their identical cross branches are connected in series; or, in addition, the pairs of identical elements so formed are replaced with one equivalent element. For the purpose of immediate improvement of telecommunication systems and their equipment, the introduced cable-end connector-equalizer is connected to the improved equipment through a one-way/two-way repeater/equalizer/regenerator/converter, with one output/input connected to the cable-end connector-equalizer in correspondence with the type of that connector-equalizer, while the other input/output corresponds to the resistance of the improved equipment; or two cable-end connectors-equalizers are connected by a one-way/two-way repeater/equalizer/regenerator/converter, each of its outputs/inputs connected to one of the two cable-end connectors-equalizers in correspondence with its type, while one of the cable-end connectors-equalizers is connected to the improved equipment.
To improve the efficiency of equipment formed as a current generator/voltage amplifier, the length-leg of the potentiometer connected between the equipment and the pair end is shunted by the shorting equalizing circuit; for equipment formed as a voltage generator/current amplifier, one opening equalizing circuit is connected in series with the resistor which shunts the secondary winding of the (auto-)transformer, and/or another such circuit is connected in series with the cross-leg of the matching resistive divider. The (auto-)transformer and/or inductor/s of the cable-end connector-equalizer are implemented with taps. The novel features which are considered characteristic of the present invention are set forth in particular in the appended claims. The invention itself, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
Dangerous dairy: Bangladesh milk adulteration report delayed despite ‘large majority’ of samples tainted The situation has been ongoing since February this year, when the Bangladesh Food Safety Authority (BFSA) and Bangladesh Standards and Testing Institution (BSTI) were first ordered to submit a report and explain ‘why their inaction and failure in preventing adulteration and taking appropriate legal steps against it should not be declared illegal’. This report was submitted earlier this month, in which the BFSA reported that a large majority of the 190 samples of raw milk, packaged milk, curds and fodder were found to contain adulterants such as bacteria, aflatoxin, lead, tetracycline, mould and more. The BFSA also said that a 16-member committee had been formed to ‘find those behind the adulteration’, but none of these perpetrators were identified in the report. Following this, Justice Md Nazrul Islam Talukder and Justice KM Hafizul Alam set the deadline for submitting a detailed report on the milk adulterators for May 15 this year, but both agencies failed to meet this and instead filed a petition asking for more time. The deadline was then extended to June 23 this year, but the report must ‘include the name of companies involved in adulteration of milk, dairy products and fodders’, according to Dhaka Tribune. “How will the nation be built if the health of its future generation is not sound?” the court asked. “No one will be allowed to toy with people’s lives, [and those] responsible for milk adulteration will have to face punishment,” said the High Court. Dr Shahnila Ferdous, head of the National Food Safety Laboratory (NFSL) and the individual who first revealed the lab’s findings on high levels of contamination in the country’s milk and milk products, has also been asked to appear in court on June 21 and describe sampling methods.
In her initial report, Dr Ferdous revealed that 96% of raw milk samples tested were microbiologically contaminated, while 27 of 96 samples contained dangerous components such as lead or aflatoxins. Up to 80% of packaged milk and 51% of curd samples contained microbial contaminants, and 100% of all cow feed was found to be contaminated by pollutants such as pesticides, tetracycline, ciproxin and others. “Heavy metals found in milk basically came from cattle feeds, while pesticide is present because of its excessive use in grass and other agricultural feeds,” Ferdous told local media. “This happens mostly due to the lack of awareness among farmers and feed manufacturers.” On May 21, Dr Ferdous submitted a report to the High Court listing the relevant adulterators, in which 30 curd-producing companies, 30 fodder companies, 31 packet milk companies, 96 raw cow milk producers, and several individuals were named. BSTI and BFSA were purportedly berated by the court for not being able to do the same sooner. “If the NFSL could do it, why couldn't you? [You] are not conducting any research,” said the bench. The High Court had also requested the country’s Anti-Corruption Commission to take action; however, the commission’s lawyer Syed Mamun Mahbub told the court that the commission could not take action against those responsible until the BSTI and the BFSA submitted ‘reports [of the perpetrators] to the commission’. War on adulteration In addition to dairy, the High Court has also urged the Bangladeshi government to declare a ‘war on food adulteration’. “We urge the government and the head of the government (Prime Minister Sheikh Hasina) to declare a war on food adulteration. If necessary, an emergency should be declared over the issue of food safety like what was done in the crackdown on drugs,” said Justice Sheikh Hassan Arif.
Justice Arif made the comment while passing orders together with Justice Razik-Al-Jalil on the banning of 52 food products in the country earlier this month, which the BSTI had identified as adulterated or substandard. “There is no scope of compromise or negligence when the issue is food safety. Every regulatory agency should work in coordination with each other to overcome this problem,” he added. “Food adulteration must not continue. Though the court is not supposed to prioritise the list of jobs of the executive, food safety should be on top.” Amongst the 52 banned products were mustard oil, potato chips, drinking water, salt, turmeric powder, ghee and more, while affected brands included Rupchanda, Sun, Duncan Modhumoti and Baghabari, amongst others. The court ordered BSTI, BFSA and the Directorate of National Consumer Rights Protection (DNCRP) to remove the products from the market and also to submit a progress report on May 23. According to Dhaka Tribune, when the BSTI released its report revealing the 52 compromised food items earlier this month, the Conscious Consumer Society (CCS) had sent a legal notice to various governmental agencies on May 6 urging immediate action, but received no response. This was later brought to the attention of the High Court, which summoned BSTI and BFSA officers for explanation, culminating in the banning of the compromised food products.
https://www.dairyreporter.com/Article/2019/05/27/Dangerous-dairy-Bangladesh-milk-adulteration-report-delayed-despite-large-majority-of-samples-tainted
- Mughal Gardens, Paradise on Earth (September 15, 2020): The Mughal regime is a well-documented era and the cultural achievements of this period are an important landmark in South Asian art history. While the awe-inspiring buildings, tombs, and mausoleums of this period are renowned for their grand...
- Eid Al-Adha — Seven Beautiful Mosques Around the World (July 30, 2020): Eid al-Adha is observed between the 10th and the 13th days of the Islamic month of Dhu al-Hijjah. This is the Eid of the sacrifice, as it commemorates Ibrahim’s sacrifice or qurbani of his own son, Ishmael to...
- Hagia Sophia Becomes a Mosque – What You Need to Know (July 23, 2020): According to recent news, the President of Turkey has converted Istanbul’s famous Hagia Sophia back into a mosque. His decision is highly controversial, given Hagia Sophia’s reputation as a major monument of human history. Here is the historical,...
- Home and Migration in the Artworks of Zarina Hashmi (July 1, 2020): Zarina Hashmi (16 July 1937 – 25 April 2020) was an Indian-American artist and printmaker based in New York City. She was known by her professional name, Zarina. Her works include drawings, prints, and sculptures. She used abstract...
- Painting of the Week: Tughra of Sultan Süleiman the Magnificent (April 19, 2020): The Ottoman Empire was an exotic realm filled with sheikhs, silks, and saffron. During the 16th and 17th centuries, it reached the height of its powers and extended into several continents, crossing nations and languages. It encompassed the...
- Nur Jahan – the Light of the World (April 4, 2020): Mehr-un-Nissa was an influential woman of the Mughal era, well-known for her great beauty, unmatched intelligence, and volatile temper. As a woman in the 1600s, the tale of her rise to power and prominence is unique and noteworthy. After...
- Turquoise: Art, Culture and Myth of a Gemstone (July 27, 2019): The bright blue-green turquoise has been valued and revered for centuries by all the cultures around the world that had access to it. Several original sources of the stone are now long gone, having been mined to remove...
https://www.dailyartmagazine.com/category/stuff/eastern-world/islamic-art/
While energy efficiency, waste management and transportation are important, a UK report has shown that 57% of the carbon emissions from healthcare derive from the manufacturing and procurement of pharmaceuticals and medical equipment. Similarly, an Australian study estimated that procurement accounted for 61.6% of the carbon footprint of a satellite dialysis unit, in contrast to 18.6% for electricity usage, 8.8% for patient and staff travel and 7.6% for water usage. Because of this, meaningful efforts to reduce the renal sector's carbon footprint must address procurement. What can you do? Some suggestions include:
- Work with the person/department responsible for procurement in your facility to incorporate environmental sustainability weightings in procurement decisions
- Voice your opinions about issues relating to procurement: Can recycled paper be used? Do suppliers retrieve packaging and pallets after delivery?
- Pressure industry groups to present transparent environmental impact data regarding product manufacture, end-of-life disposal, and organisational environmental performance. If appropriately weighted, these criteria would help stimulate competition and send a clear message that environmental sustainability is a renal community priority.
- Encourage industry to drive product development away from hazardous materials (e.g. PVC).
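One concrete way to act on the first suggestion is to fold an explicit sustainability weighting into the tender-scoring calculation. The sketch below is purely illustrative: the criterion names, weights and scores are invented for the example and would need to match a facility's own procurement policy.

```python
def score_tender(scores, weights):
    """Weighted tender score; scores and weights are keyed by criterion name."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_weight

# Hypothetical weighting scheme giving environmental criteria a 20% share.
weights = {"price": 0.4, "quality": 0.3, "service": 0.1, "sustainability": 0.2}

supplier_a = {"price": 9, "quality": 8, "service": 7, "sustainability": 3}
supplier_b = {"price": 8, "quality": 8, "service": 7, "sustainability": 9}

print(round(score_tender(supplier_a, weights), 2))  # 7.3
print(round(score_tender(supplier_b, weights), 2))  # 8.1: B wins on sustainability
```

With no sustainability weighting, supplier A's lower price would win; once the criterion carries real weight, the greener supplier B comes out ahead, which is exactly the competitive signal the text describes.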
https://www.greendialysis.org/healthcare-professionals-and-health-authorities/other-potentials/procurement
A one-trial inhibitory avoidance task to zebrafish: rapid acquisition of an NMDA-dependent long-term memory. The behavioral tasks currently available for evaluating learning and memory mechanisms in zebrafish (Danio rerio) involve long training sessions, frequently spanning multiple days, and are based on shuttle-box or active-avoidance protocols, preventing a detailed analysis of the cellular and molecular time-dependent processes involved in memory acquisition and consolidation. In order to explore zebrafish's potential contribution to the characterization of the molecular machinery underlying learning and memory, rapidly acquired and reliable paradigms are necessary. In this study we present a rapid and effective learning protocol based on single-trial inhibitory avoidance in zebrafish. In a simple apparatus, adult animals learned to refrain from swimming from a white into a dark compartment in order to avoid an electric shock, during a single-trial training session that lasted less than 2 min. The resulting memory is robust, long-lasting and sensitive to the NMDA-receptor antagonist MK-801 given in the tank water immediately after training. Experiments aiming to further characterize the events underlying memory formation, retrieval or extinction, as well as those looking for cognitive profiling of mutants, neurotoxicological studies and disease models, may benefit from this task, which, together with complementary strategies available for zebrafish, may significantly improve our current knowledge of learning and memory mechanisms.
Newspaper editors clear up false Mae Young death reports Editors at a South Carolina newspaper have issued a retraction after publishing a premature report suggesting female wrestler Mae Young had died. The ring queen, a pioneer for women in the male-dominated sport, is seriously ill and has been in hospice care in South Carolina since last month (Dec13), but columnist Mike Mooneyham claimed she had been taken off life support and passed away on Thursday (09Jan14). The news was picked up by various media outlets, prompting wrestling promoter Dixie Carter and fighter Samoa Joe to pay tribute online, but now the reporter has apologised for the misinformation. A statement released by newspaper bosses later on Thursday reads: “The Post and Courier incorrectly reported today that legendary women’s pro wrestling star Mae Young had died at the age of 90. That report is incorrect. Young has been gravely ill.”
http://www.hollywood.com/general/newspaper-editors-clear-up-false-mae-young-death-reports-59851039/
Ankle sprains are far too common in many sports, especially soccer. The most common type of ankle sprain is a lateral ankle sprain, meaning it occurs on the outside of the ankle. Some people confuse the terms ‘sprain’ and ‘strain’. A sprain is an injury to a ligament, which connects bone to bone; a strain is an injury to a muscle. Both types of injuries can be prevented with proper strengthening. A TheraBand is a thin, flat rubber band that can be used for many different exercises, and specifically for ankle strengthening to prevent ankle sprains. There are four primary ranges of motion at the ankle joint: inversion (moving your toes in), eversion (moving your toes outwards), dorsiflexion (toes to your nose), and plantarflexion (pointing your toes, like pressing a gas pedal). The most common mechanism of injury (MOI) for a lateral ankle sprain is typically a combination of plantarflexion and inversion, which injures the ligaments on the outside of your ankle. When the muscles surrounding the ankle are strong, in some cases a lateral ankle sprain can be avoided. Using a TheraBand provides resistance that builds up the muscles in your ankle. Attaching a TheraBand around the base of a chair or table and making a loop for your foot is the best way to use these bands for strengthening. Doing 2 to 3 sets of 12 repetitions of each range of motion, two to three times a week, will put you in a great position to prevent ankle sprains.
When You’ve Sustained an Ankle Injury, Here is a Guideline for Rehab Protocol:
Progression is as follows
- Once swelling and pain are manageable, you may begin increasing range of motion
- Once range of motion is equal bilaterally (compared to the other ankle), you may begin strengthening exercises
- Once strength is equal bilaterally and there is no compensation (limping, favoring the good side, etc.), the athlete may begin progressing their way back to their sport
Goals
Decrease swelling
- RICE (Rest, Ice, Compression, Elevation)
- Ice: 20 minutes at least 3 times/day
- Compression: Ace wrap starting at the toes and going up to mid-calf, worn at all times except during icing, showering and sleep
- Elevation: Keep the ankle raised above the heart. Do “pumps” (flexing and extending the foot forwards and backwards) for 20 minutes while highly elevated, ideally at night, though this can be performed multiple times throughout the day. Do not ice directly before or after the “pumps”
Decrease Pain
- Natural solutions (contact me for more info)
- NSAIDs for no longer than 7-10 days
Increase Range of Motion/Mobility
- ABCs: actively moving the ankle in each direction: forward, backward, and side to side
- Wall touches 2×20
- Swinging leg 80x
Increase Strength
- Band exercises 2×10 in each direction, slow and controlled; only move the ankle, not the hip, for the side-to-side exercises
- Calf raises (going on tippy toes) 2×10, slow and controlled
- Toe raises: lean against a wall and, slowly and with control, lift your toes so that you are on your heels. Repeat 2×10
Suggested Equipment & Braces
Very important! Preventative (NOT a brace): wear as directed once the ankle injury has subsided to prevent recurrent injury: Pro Taco Ankle
The below ankle braces should ideally only be worn during the strengthening/return-to-play part of the injury. Wearing them for too long can decrease mobility in the ankle, causing other injuries up the kinetic chain to occur.
- Don Joy Ankle Pod (specifically made for soccer cleats)
- Strap Brace
- DICK’S Brace
Find out more about the CRYSC Sports Medicine Program by visiting our website. For any questions or concerns related to ankle injuries/exercises, sports medicine, or athletic training, please reach out to the CRYSC Head Athletic Trainer, Julie Graves, at [email protected].
https://dev.rapidsyouthsoccer.org/news/sports-medicine/thera-band-ankle-exercises/
Asymptotics of Hitchin’s metric on the Hitchin section Abstract. We consider Hitchin’s hyperkähler metric on the moduli space of degree zero -Higgs bundles over a compact Riemann surface. It has been conjectured that, when one goes to infinity along a generic ray in , converges to an explicit “semiflat” metric , with an exponential rate of convergence. We show that this is indeed the case for the restriction of to the tangent bundle of the Hitchin section . 1. Introduction 1.1. Summary Fix a compact Riemann surface . In Hitchin studied the moduli space of degree zero -Higgs bundles on , and showed in particular that admits a canonically defined hyperkähler metric . In [6, 8] a new conjectural construction of was given. The full conjecture is complicated to state (see for a review), but one of its consequences is a concrete picture of the generic asymptotics of , as follows. The non-compact space is fibered over the space of holomorphic quadratic differentials on . We consider a path to infinity in , lying over a generic ray , where has only simple zeroes. Along such a path, the prediction is that |(1.1)| where is the semiflat metric, given by a simple explicit formula (see §3.1), and is any constant with , where is the length of the shortest saddle connection in the metric (see §3.2). Very recently Mazzeo-Swoboda-Weiss-Witt have shown that, along a generic ray, the difference does decay at least polynomially in . This work motivated us to wonder whether one could show directly that the decay is actually exponential. In this paper we show that this is indeed the case for the restriction of to the tangent bundle of a certain embedded copy of inside , the Hitchin section: (1.1) holds there for any . (Unfortunately, we miss the conjectured sharp constant by a factor of .) The precise statement is given in Theorem 1 below. 1.2. The strategy Points of correspond to holomorphic quadratic differentials on .
Since these form a linear space, tangent vectors to likewise correspond to holomorphic quadratic differentials . Given , both and arise as integrals over (which can be found in (4.23) and (4.24) below). The integrand in is completely explicit, while the integrand in depends on the solutions of two elliptic scalar PDEs on the surface . To prove (1.1) for some given , we need to show that these two integrals agree up to . To do this, we let denote the -distance from to the closest zero of , and divide the surface into two regions, as illustrated in Figure 1: - The “far” region . In this region we can show that the integrands agree to order : indeed, we show that the difference of the integrands decays as for any . This part of our analysis contains no big surprises, and is closely parallel to the analysis carried out by Mazzeo-Swoboda-Weiss-Witt in the more general setup of arbitrary -Higgs bundles in . (However, because we restrict to the Hitchin section , our job is somewhat simpler: we only have to deal with scalar PDEs, and use more-or-less standard techniques. The specific estimates we use in this part are built on the work of Minsky in .) - The “near” region . This region looks more difficult because our estimates do not show that is close to zero here. The happy surprise—which was really the reason for writing this paper—is that when , turns out to be close to an exact form that we can control, as follows. For any , is a disjoint union of disks centered on the zeros of . On each we show that , for a -form which has the same decay property as , namely . Thus is exponentially small on the boundary of , and Stokes’s theorem gives . Combining these contributions we obtain the desired estimate . 1.3. Outline We carry out the strategy described above as follows. In §§2-3 we set up the background and notation, and state our main result precisely, as Theorem 1. In §4 we derive integral expressions for the restrictions of and to .
In §§5-7 we develop the main PDE estimates we use to derive exponential decay. In §§8-9 we construct the -forms which we use in the “near” region. In §10 we put all this together to complete the proof of the main theorem. 1.4. Origin in experiment This work was initially inspired by computer experiments (using programs developed by the authors, and building on work of the first author and Wolf in ) that seemed to show exponential decay of in certain cases, despite the lack of an exponentially decaying bound on the integrand near the zeros of . While these experiments were conducted in a slightly different setting—namely, meromorphic Higgs bundles on with a single pole—all of the essential features and challenges are present in both cases. The experimental results therefore suggested that some “cancellation” would occur in . Further investigation of the integrand in this region led to the results of §§8-9 below, and thus to the main theorem. The experimental counterpart of this work is ongoing and will be the subject of a forthcoming paper and software release. 1.5. Outlook It would be very desirable to understand how to extend Theorem 1 to Higgs bundles of higher rank, say -Higgs bundles. There is a conjecture very similar to (1.1) in that case, but instead of the shortest saddle connection, it involves the lightest finite web as defined in . While the analysis of should extend to this case using methods similar to those of , it is not clear how our approach to should be generalized. Similarly, one would like to extend Theorem 1 to work on the full instead of only . The analysis of has already been done on the full in , so again the issue is whether the analysis of can be extended. In another direction, it would be desirable to improve Theorem 1 to show that the exponential estimate holds for all instead of just .
However, this might require a new method; in our computation we meet several different corrections which are naively of the same order ; one would need to find some mechanism by which these different corrections can cancel one another. 1.6. Acknowledgements The authors thank Rafe Mazzeo, Jan Swoboda, Hartmut Weiss, and Michael Wolf for helpful discussions related to this work, and also thank the anonymous referee for a careful reading and helpful comments and corrections. The authors also gratefully acknowledge support from the U.S. National Science Foundation through individual grants DMS 1709877 (DD), DMS 1711692 (AN), and through the GEAR Network (DMS 1107452, 1107263, 1107367, “RNMS: GEometric structures And Representation varieties”) which supported a conference where some of this work was conducted. 2. Background 2.1. Higgs bundles Recall that a stable -Higgs bundle over of degree zero is a pair where
- is a rank holomorphic vector bundle over , equipped with a trivialization of ,
- is a traceless holomorphic section of ,
- all -invariant subbundles of have negative degree.
There is a (coarse) moduli space parameterizing stable -Higgs bundles over of degree zero [9, 10]. 2.2. Harmonic metrics For each stable -Higgs bundle of degree zero, it is shown in that there is a distinguished unit-determinant Hermitian metric on , the harmonic metric. The metric is determined by solving an elliptic PDE: letting denote the Chern connection in , with curvature , and letting with the -adjoint of , we require |(2.1)| In this equation both and depend on . 2.3. The hyperkähler metric Now we recall Hitchin’s hyperkähler metric on the moduli space . A beautiful description of this metric was given by Hitchin in in terms of an infinite-dimensional hyperkähler quotient. In this paper we will not use the hyperkähler structure; all we need is a practical recipe for computing the metric. In this section we review that recipe.
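The display equation (2.1) defining the harmonic metric is lost in this extraction. For orientation only, a standard form of Hitchin's self-duality equation consistent with the definitions just given (the curvature of the Chern connection and the adjoint of the Higgs field with respect to the metric) would read as follows; the precise normalization in the paper may differ.

```latex
% Hedged reconstruction of (2.1): the harmonic-metric (Hitchin) equation.
% F_h is the curvature of the Chern connection determined by h, and
% \varphi^{*_h} denotes the h-adjoint of the Higgs field \varphi.
F_h + [\varphi, \varphi^{*_h}] = 0
```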
Let be tangent to an arc in , and lift this arc to a family of Higgs bundles , equipped with harmonic metrics . Identify all the with a fixed -bundle . Then we have a family of unitary connections on and 1-forms which for all satisfy (2.1). For brevity, let and denote these objects at . Differentiating at we obtain a pair of -forms |(2.2)| Given we define a nonnegative density on by |(2.3)| Here is a local conformal coordinate on . In coordinate-independent terms, the density corresponds (using the orientation of ) to the -form , where denotes the Hodge star operator on -forms. Now we equip with the metric |(2.4)| Let be the linearized gauge map, defined by |(2.5)| We consider the orthogonal decomposition of relative to the image of , |(2.6)| with and . Hitchin’s hyperkähler metric is |(2.7)| 2.4. The Hitchin section Fix a spin structure on the compact Riemann surface . The spin structure determines a holomorphic line bundle equipped with an isomorphism , and thus a rank holomorphic vector bundle |(2.8)| This bundle has which is canonically trivial. Let be the space of holomorphic quadratic differentials on , |(2.9)| For each there is a corresponding Higgs field, |(2.10)| The Higgs bundles are all stable, and thus determine a map . The image is an embedded submanifold, the Hitchin section. (More precisely, there are Hitchin sections, corresponding to the equivalence classes of spin structures on ; all of our discussion applies to any of them.) Moreover, is a holomorphic map, with respect to the complex structure on induced from its realization as moduli space of Higgs bundles (which is the complex structure denoted in ). Thus is a complex submanifold of . From now on, by abuse of notation, we identify with . Our interest in this paper is in the restriction of the hyperkähler metric from the full to . This restriction is a Kähler metric on , which we will also denote . 3. Metric estimate 3.1.
The semiflat metric Let be the locus of quadratic differentials with only simple zeros, which is an open and dense set. On we define an explicit Kähler metric as follows. A tangent vector to can be represented by a quadratic differential . We define |(3.1)| Note that the integrand on the right hand side is a smooth density on . The condition that implies that this integral is convergent. We remark that is a “(rigid) special Kähler” metric on in the sense of . It does not extend to a Riemannian metric on the full . 3.2. Threshold and radius Any nonzero quadratic differential induces a flat metric on , which is smooth except for conical singularities at the zeros of . From now on we always use this metric to define geodesics and lengths on , unless a different metric is explicitly referenced. A saddle connection of is a geodesic segment on which begins and ends on zeros of (not necessarily two distinct zeros), and which has no zeros of in its interior. We define the threshold by |(3.2)| Then is continuous and has the homogeneity property |(3.3)| The threshold measures the distance “between zeros” of (including the possibility of a segment between a zero and itself). In what follows it will also be important to consider the distance from an arbitrary point to the zeros of . We define the radius function of by |(3.4)| The main technical estimates that are used in the proof of 1 are all phrased in terms of bounds on various functions on in terms of the radius. 3.3. The estimate Now we can state the main result of this paper: Theorem 1. If , and is any holomorphic quadratic differential on , then for any we have |(3.5)| as , where denotes any norm on the vector space . Having fixed such a norm, the implicit multiplicative constant in (3.5) can be taken to depend only on , , and the genus of . 4. Coordinate computations 4.1. 
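(Aside on §3.2: the flat |q|-lengths entering the threshold (3.2) can be computed by elementary quadrature. The sketch below is not from the paper; the quadratic differential q = (z² − 1) dz² on a coordinate chart is a hypothetical example with two simple zeros, and the straight segment joining them is a candidate saddle connection.)

```python
import numpy as np

# Sketch (not the authors' code): the |q|-length of a path is the line
# integral of |q(z)|^{1/2} |dz|.  Here q = (z^2 - 1) dz^2 is a hypothetical
# local example with simple zeros at z = -1 and z = 1, and we measure the
# straight segment joining the two zeros.
def q_length(sqrt_abs_q, z0, z1, n=20001):
    """|q|-metric length of the straight segment from z0 to z1 (trapezoid rule)."""
    t = np.linspace(0.0, 1.0, n)
    z = z0 + t * (z1 - z0)
    vals = sqrt_abs_q(z) * abs(z1 - z0)
    return float(np.sum(vals[1:] + vals[:-1]) * 0.5 * (t[1] - t[0]))

length = q_length(lambda z: np.sqrt(np.abs(z ** 2 - 1)), -1.0, 1.0)
# exact value: the integral of sqrt(1 - x^2) over [-1, 1], i.e. pi/2
```

The exact length here is π/2, and the quadrature reproduces it; this is the kind of quantity over which the threshold (3.2) minimizes.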
Self-duality equation and variation in coordinates To set the stage for the proof of 1 we start by deriving local coordinate expressions for the self-duality equation (2.1) at a point , and for its first variation in the direction of representing . In a local conformal coordinate on we write for a holomorphic function . Let denote a local section of satisfying ; there are two such local sections, the choice of which will not matter in the sequel. Using the local trivialization of given by the frame , which we call the holomorphic gauge, we can write |(4.1)| |(4.2)| This diagonal form for reflects that the splitting is orthogonal for the harmonic metric in this case [9, Theorem 11.2]. Then (2.1) reduces to a scalar equation for , |(4.3)| where is the flat Laplacian. In more invariant terms, (4.3) is an equation for the globally defined metric on . For Higgs bundles of this type, the Hermitian metric , the Kähler metric , and the (local) scalar function all contain equivalent information. In most of what follows we work with , which unlike and is a coordinate-dependent quantity: Under a conformal change of coordinates it transforms as . We refer to objects with this transformation property as log densities. Note that the difference of two log densities is a function. Also, if is a quadratic differential, then is a log density. When considering the density on which corresponds to the unique harmonic metric on the Higgs bundle associated to , we sometimes write to emphasize its dependence on , and to distinguish it from other local solutions to (4.3) on domains in or in the plane that we consider. Next, we consider a variation expressed locally as . Differentiating (4.3) we find that the corresponding first order variation , describing the infinitesimal change in , satisfies the inhomogeneous linear equation |(4.4)| Unlike , is a well-defined global function on (independent of the coordinate ). Since the operator is negative definite, (4.4) uniquely determines . 4.2. 
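(Aside on (4.3): a toy numerical illustration. The sketch below is not the authors' code; both the constant 4 in the equation and the local model P(z) = z are assumptions made for illustration, since the paper's displayed formulas are not reproduced here. It relaxes a scalar self-duality-type equation toward its solution, starting from the leading approximation u ≈ ½ log |P| that is accurate away from the zeros of P.)

```python
import numpy as np

# Illustrative sketch: solve a scalar self-duality-type equation
#     Laplacian(u) = 4 * (e^{2u} - |P|^2 * e^{-2u})
# by damped Jacobi relaxation.  The constant 4 is an assumed normalization,
# and P(z) = z is a hypothetical local model with one simple zero.
def relax(n=64, L=4.0, iters=2000, eps=1e-6):
    xs = np.linspace(-L, L, n)
    h = xs[1] - xs[0]
    X, Y = np.meshgrid(xs, xs)
    absP = np.maximum(np.hypot(X, Y), eps)   # |P| for P(z) = z, floored to avoid log(0)
    u = 0.5 * np.log(absP)                   # leading approximation and boundary data
    tau = 0.1 * h ** 2 / 4.0                 # small explicit step (stable)
    for _ in range(iters):
        lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
               - 4.0 * u[1:-1, 1:-1]) / h ** 2
        rhs = 4.0 * (np.exp(2 * u[1:-1, 1:-1])
                     - absP[1:-1, 1:-1] ** 2 * np.exp(-2 * u[1:-1, 1:-1]))
        u[1:-1, 1:-1] += tau * (lap - rhs)   # flow toward Laplacian(u) = rhs
    return u, lap - rhs                      # solution and last-step residual

u, res = relax()
# the residual, concentrated near the zero of P, shrinks under the flow
```

The design choice here mirrors the structure of the analysis: the initial guess ½ log |P| already satisfies the equation away from the zero, so the relaxation only has to correct the solution in a neighborhood of the zero.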
Unitary gauge In preparation for calculating the inner product of variations it is more convenient to work in unitary gauge, expressing the Higgs field and connection relative to the frame ; then (4.1)-(4.2) become |(4.5)| with infinitesimal variations given by |(4.6a)| |(4.6b)| which of course gives a corresponding expression for . 4.3. Orthogonal decomposition Let be obtained from a solution of the self-duality equation (2.1). Define the linear map by |(4.7)| A variation is -orthogonal to the image of the linearized gauge map if and only if it satisfies . We say that such a variation is in gauge. For a general variation , the orthogonal decomposition of (2.6) is given by |(4.8)| where satisfies . For the specific variation obtained in (4.6) we find that , and a straightforward calculation yields |(4.9)| The computation of therefore reduces to solving |(4.10)| for . Equation (4.10) implies in particular that is diagonal and traceless; thus we may write |(4.11)| After so doing, (4.10) becomes a scalar equation for , |(4.12)| We note the striking similarity between (4.12) and (4.4); in fact, replacing and in (4.12) gives exactly (4.4). This suggests that we combine (the metric variation) and (the infinitesimal gauge transformation to put the tangent vector in gauge) into the single complex function |(4.13)| which we call the complex variation, which then satisfies the inhomogeneous linear equation |(4.14)| As with (4.4) above, when working on the entire compact surface the equation (4.14) uniquely determines the complex function . We write for this unique global solution determined by when it is necessary to distinguish it from other local solutions of the same equation. 4.4. Calculating the norm Using the calculations above we can now determine an explicit integral expression for in terms of , , , and . The first step is to calculate in unitary gauge. 
We find where |(4.15)| Now is orthogonal to , hence the hyperkähler norm of the associated tangent vector to the moduli space is |(4.16)| Now we need only to substitute the expressions for from (4.6) and from (4.15) and simplify. Two observations will be useful in doing this. First, if where is expressed in unitary gauge as for a matrix-valued function, then |(4.17)| Second, if , with a scalar -form, then we have |(4.18)| Using (4.17) to simplify and (4.18) to simplify , we find (abusing notation, we often write integrals over with the integrand expressed in a local coordinate and frame for ) |(4.19)| where . Proceeding similarly for using (4.15), we have |(4.20)| Subtracting (4.20) from (4.19) we obtain |(4.21)| Next we integrate by parts on to replace and by and respectively, and substitute for and using the differential equations (4.3) and (4.12). A few terms cancel and we are left with |(4.22)| or more compactly, |(4.23)| As a reassuring consistency check, note that is indeed a Hermitian metric, i.e. : one sees this easily from (4.23), since changing leads to and . The same is not true of (4.19) by itself: it holds only once we subtract the pure gauge part (4.20). To sum up the results of this section, and restate the formula (3.1) for in the same local coordinates, we have: Theorem 2. The goal of the next three sections is to gain some control over the integral expressions (4.23) and (4.24) by studying the behavior of the functions and . We will see that these functions are well-approximated by |(4.25)| at points that are not too close to the zeros of . It is easy to check that substituting these approximations directly into (4.23) yields exactly the semiflat integral (4.24). Bounding the difference thus reduces to understanding the error in the approximations. 5. Exponential decay principle We now develop a criterion for solutions to certain elliptic PDE on regions in the plane to decay exponentially fast as we move away from the boundary of the region.
The method is standard—combining the maximum principle with the known behavior of the eigenfunctions of the Laplacian—and the results in this section are surely not new. A similar method was used in , for example, to derive the exponential decay results for (4.3) that we will generalize in §6. Theorem 3. Let be a disk in , and for let denote the distance to the boundary of this disk. Suppose that satisfies |(5.1)| where , , and suppose that for every there exists a constant such that obeys the exponential decay condition |(5.2)| Then, for any , there exist constants and , such that obeys the exponential decay condition |(5.3)| where . Moreover, given any , can be chosen to be equal to . The proof will rely on the following maximum principle. Lemma 4. Let be a bounded region in , and let with . Suppose satisfies |(5.4)| and that satisfies |(5.5)| where are functions such that |(5.6)| If on , then on . Proof. First we claim , or equivalently that on . By compactness of , the function achieves its minimum at a point , and it suffices to show that the minimum value is nonnegative. If then this is true by the hypothesis that on . If then because is everywhere nonnegative. Thus the remaining case is that and is an interior local minimum of , hence . Then we find Since this shows as required. To complete the proof we must also show that . However, this follows by applying the argument above to the function , which satisfies , where . Since on and , the necessary hypotheses still hold in this case. ∎ In the proof of 3 we will use 4 to reduce to the case where is constant and where and are both radially symmetric eigenfunctions of the Laplacian. In preparation for doing this, we recall the properties of those eigenfunctions and relate them to the exponential decay behavior under consideration. The modified Bessel function of the first kind is the unique positive, even, smooth function on such that |(5.7)| and . 
Thus the function is the solution to the Dirichlet problem for on the disk with unit boundary values. The function satisfies (see e.g. [1, Section 9.7.1]) |(5.8)| where means that as . It follows that, if a function satisfies an exponential decay condition |(5.9)| for some , then for any we have |(5.10)| for some linear in . Conversely, if we have (5.10) and then we get (5.9) for some linear in . (Adjusting the constants when converting between exponential and Bessel bounds is necessary due to the factor in the expansion of , with the relevant observation being that for all whereas of course .) Proof of Theorem 3. Suppose we are given constants . The function obeys |(5.11)| and thus |(5.12)| for some linear in . For satisfying (5.1) and satisfying (5.12) we will show that |(5.13)| for some . Once this is achieved we can pass back from (5.13) to the desired (5.3) using the exponential bound on discussed above. Moreover, since depends linearly on , we can choose in (5.3), at the cost of possibly rescaling . Define |(5.14)| for a constant . Note that . We will determine a value of so that Lemma 4 can be applied to and on . Specifically, we must ensure that:
(i) on and
(ii) on ,
so that, in the notation of Lemma 4, we can take and . First we consider (i). The function is constant on and equal to . Since , it suffices to choose |(5.15)| Now we turn to (ii). We have |(5.16)| which we have written in this way to emphasize that . With the given bound (5.12) on , the desired inequality (ii) follows if |(5.17)| or equivalently |(5.18)| 6. Estimates for the density As in Section 4 above, let be the solution of the self-duality equation (4.3) on the compact Riemann surface for a given holomorphic quadratic differential expressed locally as . It was shown by Minsky in (see also [14, Lemma 2.2]) that is approximated by up to an error that decays exponentially in the distance from the zeros of .
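The Bessel asymptotics (5.8) can be checked numerically. The following sketch (using SciPy; not part of the paper) compares I₀(x) against the leading term of the classical expansion I₀(x) ~ (eˣ/√(2πx))(1 + 1/(8x) + ...):

```python
import numpy as np
from scipy.special import iv

# Sketch (not part of the paper): check the classical large-x expansion of the
# modified Bessel function of the first kind,
#     I_0(x) ~ (e^x / sqrt(2*pi*x)) * (1 + 1/(8x) + ...),
# by comparing scipy's I_0 against the leading term.
x = np.array([5.0, 10.0, 20.0, 40.0])
ratio = iv(0, x) * np.sqrt(2.0 * np.pi * x) / np.exp(x)
errors = np.abs(ratio - 1.0)
# ratio approaches 1 as x grows; the relative error is roughly 1/(8x)
```

This 1/√x factor in the expansion is exactly why constants must be adjusted when converting between exponential bounds and Bessel bounds, as noted above.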
Building on Minsky’s results (and following a similar outline to [3, Section 5.4]), we establish the following estimate which gives a slightly faster exponential decay rate: Theorem 5. Fix and assume . For any , there exist constants and such that the density satisfies |(6.1)| for all with . The constants and can be taken to depend only on and the topological type of . Furthermore, under the same hypotheses we have the estimate |(6.2)| where and denote, respectively, the gradient and the norm of a tangent vector with respect to the metric . To prove this, we will first establish some rough bounds on . These will allow us to apply 3 to the equation satisfied by . 6.1. Rough bounds Let be the Poincaré metric on of constant (Gaussian) curvature . In general, the Gaussian curvature of a metric is given by (see e.g. [2, Section 1.5]); therefore, the equation satisfied by the Poincaré metric becomes |(6.3)| which is equation (4.3) with . Now for the solution of (4.3) associated to a general quadratic differential , we have the following lower bounds in terms of and : Lemma 6. We have everywhere on , and on .
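The curvature computation behind (6.3) can be illustrated concretely. The sketch below (not from the paper) checks by finite differences that λ = −log(1 − |z|²) on the unit disk gives a conformal metric e^{2λ}|dz|² of constant Gaussian curvature K = −e^{−2λ}Δλ = −4; the curvature normalization −4 is an assumption of this sketch, and the paper's convention may differ by a constant factor.

```python
import numpy as np

# Sketch (not the authors' code): verify numerically that the Poincare-type
# metric e^{2*lam} |dz|^2 on the unit disk, with lam = -log(1 - |z|^2), has
# constant Gaussian curvature K = -e^{-2*lam} * (flat Laplacian of lam) = -4.
# (The normalization, curvature -4, is assumed here for illustration.)
n, L = 201, 0.4                      # grid over [-0.4, 0.4]^2, inside the disk
xs = np.linspace(-L, L, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
lam = -np.log(1.0 - X ** 2 - Y ** 2)

# five-point finite-difference Laplacian on interior points
lap = (lam[2:, 1:-1] + lam[:-2, 1:-1] +
       lam[1:-1, 2:] + lam[1:-1, :-2] - 4.0 * lam[1:-1, 1:-1]) / h ** 2
K = -np.exp(-2.0 * lam[1:-1, 1:-1]) * lap
# K is approximately -4 at every interior grid point
```

The same curvature formula K = −e^{−2λ}Δλ is what turns the constant-curvature condition into the Liouville-type equation (6.3), the special case of (4.3) referred to in the text.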