Twice in recent sermons (02/22/2009 on Mark 11:26 and 02/01/2009 on Mark 9:43-50) I have used the phrase ‘earliest and best Greek texts’ to explain why I was not including in my exegesis of a passage a particular phrase that is found in the King James Version. I’m aware that not everyone agrees with this judgment, and I thought I would share some thoughts on why I do this, along with some links to significant articles on the subject.
I am not, in this blog, going to touch on the translation of the Greek texts into English. My position is that all English translations have strengths and weaknesses, especially in their contemporary readability, their literalness, and their accuracy in capturing the original thought. Because all translations have strengths and weaknesses, my practice is to use many translations in studying and even in preaching.
Background
Up until the Renaissance and the Reformation, most Bibles in the world were either Latin (mostly a version called the Vulgate) or Greek. The Latin Bible was used in the Roman Catholic Church and the Greek Bible in the Orthodox churches.
The Renaissance brought a new interest in classical languages, and the printing press brought new availability of all kinds of written material. Thus classical scholars, beginning with Erasmus, were called on to prepare manuscripts of the Greek New Testament for publication. Though Erasmus used only a few easily obtained copies of the Greek text used by the Orthodox churches, and though he included a few nearly unknown readings in his product, the printed version of his work became the basis of most Bible translations for the next four centuries. Slightly revised, it was published in vast quantities in 1550 and again in 1624.
It was this Greek text, along with a number of other individual Greek manuscripts, that became the basis for Tyndale’s translation of the Bible into English, and also for the 1611 translation of the Authorized Version (the King James Version). This Greek text has come to be known as the Textus Receptus (received text) or the Traditional text. I’m going to call it the TR in the rest of this article.
Text Types
In the 1800s the accuracy of this text was called into question. A number of very early manuscripts of the Greek New Testament had been found, and none of them agreed with the TR in every detail. In fact no two handwritten copies of the Greek New Testament agree in every detail; there are slight differences even between two copies made by the same copyist. If you look at all the Greek manuscripts available, you will find something like 300,000 textual differences among them. However, almost all of these differences are very small – an accent, a punctuation mark, or the shortening or lengthening of a word.
However, some of the differences between the early manuscripts and the TR seemed to these scholars significant enough to pursue. As they did so they found that these oldest manuscripts could be divided up by their similarities. One group of these was called the Alexandrian text type, as many examples were found in and around Egypt. Another was called the Western type, found in Italy and other western Mediterranean countries. A third type was called Byzantine, found in the Greek Orthodox countries of the eastern Mediterranean. This was the text type on which Erasmus’ work and the TR were based. However, the Byzantine texts as an identifiable group did not begin to appear until several centuries after the birth of Christ.
A famous chart of this data represents, by the width of each bar, the number of existing manuscripts of each text type from each century.
Based on this data, scholars began to believe that the versions of the Greek text that most closely approximated the original documents (all of which have been lost) were those of the Alexandrian and Western types. This was later supported by the discovery of numerous copies of the Scriptures on papyrus. Only one such papyrus was known at the time this theory was established, but over a hundred are now known, and most are of the Alexandrian type.
The Critical Text
Therefore, for the last 150 years or so, scholars have been refining what is called the Critical Text, in which competing readings are resolved on the basis of several criteria, including: (1) external evidence: older readings are, in general, to be preferred to newer readings; and (2) internal evidence: the reading that best explains the accidental (or occasionally intentional) creation of the other readings through known habits of copyists is to be preferred. For example, where the earliest manuscripts lack a phrase and its later insertion is easily explained as a copyist’s harmonization with a parallel passage, the shorter reading is preferred.
Though there is a subjective element to some of these decisions, the process itself is fairly transparent and open, and can be reproduced by independent, objective investigators most of the time. Two early users of this process, Westcott and Hort, have been accused of a strong bias in favor of the Alexandrian text type, but 140 years later any such bias seems to have been pretty thoroughly removed from the Critical Text. In fact, hundreds of readings commonly found in the Byzantine text type are also found in some older manuscripts, seem to have better internal evidence, and are thus included in the Critical Text.
The Current Situation
All of this has been documented and debated in great detail. Some feel that the Textus Receptus is the best Greek text to work from because it was what was provided by God to the great scholars of the Reformation and for some hundreds of years before that. Others feel that the Critical Text, reflecting the earliest and best-explained readings, is closest to the inerrant originals penned by the authors. That is my position.
Finally, a newer position has emerged which holds that while the TR is a great text, it never really represented the majority of the manuscripts in use before the invention of printing. Its proponents advocate a third text called the Majority Text (MT). Since some of the readings in the TR are supported by only a few of the existing manuscripts, I would lean more toward the Majority Text than the Textus Receptus if forced to choose.
These three Greek texts have all been used to translate the New Testament. In fact, most translators and translation teams, while starting from one of these foundations, work hard to think through the textual issues in each verse as they do their translations. Here are some well-known translations and the text each is based on:
King James Version – Textus Receptus
New King James Version – Textus Receptus, with Majority Text and Critical Text differences noted in the margins
Revised Standard Version – Critical Text
New American Standard Version – Critical Text
New International Version – Critical Text
English Standard Version – Critical Text
Closing Thoughts:
I cannot emphasize too strongly the fact that all three Greek texts are essentially in agreement at almost every point. The new Majority Text differs from the Textus Receptus at about 2,000 places, most of them differing by only a single word or spelling. Remember, that’s 2,000 out of 300,000 variants. The Critical Text differs from the TR at about 6,500 places – still agreeing 98 percent of the time.
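To check that arithmetic with the article’s own round numbers: 6,500 disagreements out of roughly 300,000 points of variation is about 2.2 percent, so the two texts agree at roughly 97.8 percent of the variation points, which rounds to the 98 percent figure above; the Majority Text’s 2,000 differences amount to well under 1 percent.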
Admittedly, a few of these disagreements are over whole verses or even blocks of text (like Mark 16:9-20). But it is the conclusion of scholars on both sides of the issue that no major doctrine or Christian practice is threatened by the use of one text over the other. Our understanding of the Christian faith and how to live the Christian life does not depend on which of these three similar Greek texts we use.
Therefore, on the one hand, I think that sincere Christians can disagree on this issue without having it be a stumbling block between us. If you disagree with the choices those around you make in selecting a translation, I strongly urge you to ‘agree to disagree’ in a way that does not threaten Christian fellowship or community.
On the other hand, I do have an opinion – that the Critical Text is closer to the original words of Scripture’s authors in many of the places where it disagrees with the TR or the MT. I don’t accept this blindly, but when I am about to preach a text with significant variation among the English translations, one of the things I look at is the basis for the decisions made in the Critical Text. On rare occasions I disagree with their thinking and use a different reading.
There are two ways you will see this reflected in my preaching. First, when we use Scripture at Trinity that I have provided (readings, preaching, Bible Immersion Camp, Bible studies, etc.), I will pretty much always use one of the translations made from the Critical Text – often the New International Version or the English Standard Version, but occasionally the New American Standard Version. Once in a while I don’t like any of the translations enough to use it unaltered, in which case I create the DeGray Standard Version (DSV) for that Scripture passage. Often I will mark these (DSV) in the bulletin or on the screen.
The second way this is reflected is where we started this essay: once in a while I will make the comment that the ‘earliest and best manuscripts’ support a particular reading (or the absence of a verse!). Now, I hope, you know what that means.
References:
There are countless sites on the web arguing every aspect of this discussion, and countless books. I offer links to just three that I have found helpful:
The Majority Text and the Original Text: Are They the Same? by Daniel Wallace at Dallas Theological Seminary. This article, while addressing a particular issue, touches in some detail on most of what I have said above.
English Guide to the Various Readings of the Greek New Testament and The Majority Text Compared to the Received Text, both on Bible-researcher.com, give complete lists of all the variants between the three texts – a most useful tool.
Finally, I would recommend the book ‘The King James Version Debate’ by Don Carson (Amazon link), which goes through this material clearly but in some detail. | https://trinityfellowship.net/2009/02/27/some-thoughts-on-the-earliest-and-best-texts/ |
The field of psycholinguistics, and the application of psycholinguistic theory to advertising and marketing communication, has become a topic of great prominence in consumer behavior research. Psycholinguistic Phenomena in Marketing Communications is the first book to address the growing research in this area. This timely volume brings together research by current scholars and demonstrates the diversity of the field in both its topics and its methodological approaches. It examines brand names and their semantic and sound-based impact; sentence structure and research in marketing communication; advertising narratives that evoke emotional responses; the effects of empathy responses on advertising; and the role of language and images in the creation of advertising. The book includes authors from a variety of fields, including mass communication, marketing, social psychology, linguistics, and neuropsychology. A range of perspectives is discussed, from qualitative text analysis to controlled psychological experimentation. Psycholinguistic Phenomena in Marketing Communications is intended for students and scholars in numerous disciplines, such as advertising, marketing, social psychology, sociology, and linguistics. It is also suitable for graduate courses in these disciplines.
Publisher: Lawrence Erlbaum Associates Inc
ISBN: 9780805856903
| https://www.waterstones.com/book/psycholinguistic-phenomena-in-marketing-communications/tina-m-lowrey/9780805856903 |
Episode 21: Spirituality, Social Justice, and Social Activism.
In a world marked increasingly by insensitivity and intolerance, faith and spirituality can serve as the motivational energy that animates the pursuit of a more socially just society and propel us toward social action and advocacy. In this episode, William James College President Dr. Nicholas Covino is joined by Dr. Nicholas Rowe, Ph.D., Dean of Student Engagement and Associate Professor of History and Peace Studies at Gordon College, and Rabbi Victor Reinstein, co-founder of Nehar Shalom Community Synagogue in Jamaica Plain with his wife Mieke and a congregational rabbi for over 30 years, for a discussion on spirituality, social justice, and social activism.
Ep 20: Nurturing Resilience in Children and Families
Dr. Robert Brooks, a leader in child psychology and parenting, former director of the department of psychology and of psychology training at McLean Hospital, and a member of the faculty at Harvard Medical School, joins Dr. Stan Berman, vice president for academic affairs and associate professor in the department of clinical psychology at William James, for a conversation exploring topics including resiliency, mindset, Brooks’ Islands of Competence model, parent-child communication, and childhood anxiety.
"Conversations with William James College" is a monthly podcast series produced by William James College in Newton, Massachusetts. William James educates professionals to bring psychological theory and skills to businesses and organizations, health care systems, correctional facilities, community mental health centers, schools and consulting rooms. www.williamjames.edu
Ep 19: Asian American Mental Health
Society is becoming more globalized than ever, and the field of mental health is struggling to keep up. Minority groups face countless obstacles in accessing mental health services. And when they are able to access these services, they’re often met with a therapist who doesn’t understand their culture – be it language, values, or other intangible aspects of their life experience. William James College President Dr. Nicholas Covino speaks with Dr. Jean Lau Chin, Professor at Adelphi University in New York, about the need for cultural competency in our increasingly global, interconnected world. Dr. Chin served as keynote speaker for a conference held at William James titled, “Integrative and Holistic Approaches to Mental Health Care for Asians.”
Ep 18: Examining LGBTQ Issues in Education
Lesbian, Gay, Bisexual and Transgender Pride Month is celebrated each year in June to honor the history of the LGBTQ community and to recognize the impact LGBTQ individuals have had throughout the world. Although steps have been taken towards establishing equality, research suggests that the discrimination LGBTQ individuals face, linked to the societal stigma this community still endures today, is associated with various mental health issues such as high rates of psychiatric disorders, substance abuse and suicide. This podcast explores specifically the stigmas faced by the LGBTQ community within education and the impact of these stigmas on mental health.
Ep. 17: Prevent Promote
In March of 2017 Governor Baker authorized a Special Legislative Commission to study prevention science and evidence-based strategies to promote mental wellness and prevent mental illness. For 12 months, this 26-member Commission worked with national and local experts and recently issued its report. In this podcast, Margaret Hannah, Executive Director of the Freedman Center at William James College, who was appointed to the Commission, discusses some of the report’s findings. | https://podcasts.apple.com/us/podcast/conversations-with-william-james-college/id1056297977?mt=2&ign-mpt=uo%3D4 |
Fe(II)/Cu(II) interaction on goethite stimulated by the iron-reducing bacterium Aeromonas hydrophila HS01 under anaerobic conditions.
Copper is a trace element essential for living creatures, but copper content in soil must be controlled because excess copper is toxic. The physical, chemical, and biological behavior of Cu in soil correlates significantly with the Fe(II)/Cu(II) interaction in soil. Of particular interest to the current study is the Fe(II)/Cu(II) interaction on goethite under anaerobic conditions as stimulated by HS01, a dissimilatory iron-reducing (DIR) microbe. The following four treatments were designed: HS01 with α-FeOOH and Cu(II) (T1), HS01 with α-FeOOH (T2), HS01 with Cu(II) (T3), and α-FeOOH with Cu(II) (T4). HS01 alone had a negligible impact on copper species transformation (T3), whereas the presence of α-FeOOH significantly enhanced copper aging through the DIR effect (T1). Moreover, the vigorous reaction between adsorbed Fe(II) and Cu(II) lowered the concentration of active Fe(II) species (T1), further inhibiting reactions between Fe(II) and iron (hydr)oxides and slowing the phase transformation of iron (hydr)oxides (T1). From this study, the effects of the Fe(II)/Cu(II) interaction on goethite under anaerobic conditions stimulated by HS01 are presented in three aspects: (1) the acceleration of copper aging, (2) the reductive transformation of copper, and (3) the inhibition of the phase transformation of iron (hydr)oxides.
Causal reasoning (from a chapter in the Oxford Handbook of Cognitive Psychology, pp. 733-752)
Causal reasoning is an aspect of learning, reasoning, and decision-making that involves the cognitive ability to discover relationships between causal relata, learn and understand these causal relationships, and make use of this causal knowledge in prediction, explanation, decision-making, and reasoning in terms of counterfactuals. We plan actions and solve problems using knowledge about cause-effect relations, and we reason about causality daily: scientists try to find out what causes cancer or heart disease, and in the legal system, before liability or punishment is imposed, jurors are required to determine who caused an accident or a death.
The critical philosophical analysis of causation traces back to David Hume, who argued that our impression of causation is merely an illusion derived from observed associations between event pairs. One kind of information we use to assess causality is covariation, gathered by watching causes and effects as they repeatedly occur: a cause is something that increases the probability of an effect above its usual probability. Covariation alone, however, is not proof; correlation does not prove causation, so when evaluating whether something is a cause of an effect it is important to control for alternative causes. To understand the relation between smoking and lung cancer, we do not simply compare the number of people who smoke and get lung cancer with the number who do not; we ask whether smoking increases the chances of getting lung cancer once alternative factors that merely covary with smoking, such as coffee drinking, are held constant.
A second source of evidence is the perception of causality. In Michotte’s (1963) famous demonstrations of phenomenal causality, Object A moves toward a stationary Object B, contacts it, and B moves off at the same or a slightly lesser speed. Observers typically describe this scenario as one in which the movement of Object B is caused by Object A (launching); nobody describes it as a case of Object B stopping Object A, although this would be a legitimate description. Force theories (e.g., Talmy, 1988; Wolff, 2007) build on such intuitions: people evaluate configurations of forces attached to affectors and patients, which may vary in direction and degree, with respect to an endstate, and this analysis distinguishes concepts such as cause, enable, and prevent.
Causal model theories, notably causal Bayes nets (Pearl, 2000; Spirtes et al., 1993), represent causal structure as graphs in which arrows serve as "mechanism placeholders." One key difference between causal models and purely associative or probabilistic models is that causal models support inferences about the consequences of actions: they distinguish observing an event from intervening on it. They also capture the structural signatures of different models; in a causal chain, for example, the final effect is correlated with the initial cause and the intermediate cause, but becomes independent of the initial cause when the intermediate cause is kept constant. Causal induction thus has two components: learning about the structure of causal models, and learning about causal strength and other quantitative parameters. Structure can be learned by constraint-based algorithms that exploit statistical dependencies within a set of variables, or by framing the task in terms of Bayesian inference over candidate models, with hierarchical models having the further advantage of being able to generalize to new contexts.
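The covariation view sketched above is commonly formalized as the contingency ΔP = P(e|c) − P(e|¬c), and Cheng’s (1997) power PC theory, which the chapter cites, estimates the strength of a generative cause as ΔP / (1 − P(e|¬c)). The following minimal Python sketch illustrates both quantities; the function names and the example contingency table are illustrative assumptions, not taken from the chapter.

    def delta_p(a, b, c, d):
        # 2x2 contingency table:
        #   a: cause present, effect present   b: cause present, effect absent
        #   c: cause absent,  effect present   d: cause absent,  effect absent
        p_e_given_c = a / (a + b)
        p_e_given_not_c = c / (c + d)
        return p_e_given_c - p_e_given_not_c

    def causal_power(a, b, c, d):
        # Cheng's (1997) generative causal power: deltaP / (1 - P(e|not-c)).
        p_e_given_not_c = c / (c + d)
        return delta_p(a, b, c, d) / (1.0 - p_e_given_not_c)

    # Example: the effect occurs in 16 of 20 cases with the candidate cause
    # present and in 8 of 20 cases with it absent.
    print(delta_p(16, 4, 8, 12))       # 0.4
    print(causal_power(16, 4, 8, 12))  # 0.4 / 0.6 = 0.666...

On this analysis the candidate cause raises the probability of the effect by .4, but its estimated power is higher (.67), because the effect’s base rate leaves only limited room for the cause to display its influence.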
| https://primenewswire.com/site/0wg3hldb/viewtopic.php?a20ba2=causal-reasoning-pdf |
What is the status of metabolic theory one century after Pütter invented the von Bertalanffy growth curve?
Michael Kearney
University of Melbourne
Abstract
Growth models are a fundamental aspect of metabolic theory but remain controversial. It is a century since the first theoretical model of growth was put forward by Pütter. His insights were deep, but his model ended up being attributed to von Bertalanffy and his ideas were largely forgotten. Here I review Pütter’s ideas and trace their influence on existing theoretical models of growth and other aspects of metabolism, including those of von Bertalanffy, Dynamic Energy Budget (DEB) theory, the Gill-Oxygen Limitation Theory, and the Ontogenetic Growth Model (OGM). I then synthesise, compare and critique the ideas of the two most comprehensive theories, DEB and the OGM, in relation to Pütter’s original ideas, and discuss how these theories have been used to explain ‘macrometabolic’ patterns, including the scaling of respiration, the temperature-size rule (first modelled by Pütter), and the connection to life history. Although theoretical work on growth and metabolism has generally proceeded in an uncoordinated and disconnected fashion, significant progress has been made, built upon the original and fundamental insights of Pütter. What we need now is a coordinated empirical research program to test the existing ideas and motivate new theoretical directions. | https://authorea.com/doi/full/10.22541/au.158653213.37638444 |
What it’s about: While subjected to the horrors of World War II Germany, young Liesel finds solace by stealing books and sharing them with others. In the basement of her home, a Jewish refugee is being sheltered by her adoptive parents.
Genre: Drama
Rating: 12A
Director: Brian Percival
Starring: Sophie Nélisse, Geoffrey Rush, Emily Watson
(Note: I haven’t read the book so there won’t be any comparisons in this review.)
Review: Despite all the hype around this film, I hadn't really been that bothered to watch it until now and I tell you what, I was kicking myself afterwards because it was one of the most enjoyable, eye-opening and life-affirming films I have seen in a very long time. Set in Nazi Germany, The Book Thief follows young Liesel, taken in by an adoptive family, who finds solace in stealing books and sharing their stories, despite the fact that she can't read at the beginning. Narrated by Death, the story follows the lives of Liesel, her adoptive parents, her best friend Rudy and Max, the Jewish refugee they are keeping hidden away in their basement.
I just loved every aspect of this story and film. I find Nazi Germany absolutely fascinating, and seeing how it affected a young girl who wasn't really sure what was going on was a different way to approach the subject. It wasn't sugar-coated for the audience's sake either, which I very much appreciated, because often we overlook how gruesome that period in time actually was. I loved all the characters in this film – there wasn't a single one I didn't warm to – even Rosa, Liesel's adoptive mother, who she describes as being a thunderstorm. The relationship that formed between Liesel and Hans, her adoptive dad, was absolutely beautiful and so touching to watch. Even the first time she is introduced to them and is reluctant to get out of the car, he's the one that coaxes her inside by referring to her as 'Your Majesty', and throughout the entire film he continues to treat her like a Queen. Hans is played by none other than Geoffrey Rush, who transforms remarkably into this soft and gentle role.
Max, the Jewish refugee they are hiding in their basement, was such a prominent figure for Liesel – their friendship blossomed throughout the film and they learnt so much from each other. The conditions he had to suffer just for being a Jew were disgusting, and although the family were doing their best for him, it was difficult to comprehend that things like that actually did happen. The ending was completely unpredictable but predictable at the same time. Death tells us what's going to happen before it actually does, which, although it gives us a little time to prepare, doesn't soften the blow of the catastrophe – and I think the same is true of our own lives as well.
The whole idea of 'The Book Thief' I thought was absolutely beautiful, because being a book-lover myself, I can see why Liesel finds solace and joy in these books even when the world around her is falling apart. The books saved her, and you don't have to have been in Nazi Germany to understand that. The Book Thief is a poignant story which is dramatic and very real. It will shock you one minute and pull on your heart strings the next – it really does cover every sensation. It'll make you realise that even though things aren't perfect, you have to make the most of what is given to you and "when life robs you, sometimes you have to rob it back".
Forestry Programs: Hearing, Ninety-second Congress, Second Session, on S. 3105 ...
United States. Congress. Senate. Committee on Agriculture and Forestry. Subcommittee on Environment, Soil Conservation, and Forestry
U.S. Government Printing Office, 1972 - 57 pages
Popular passages
Page 4 - Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, That notwithstanding any other provision of law, the...
Page 5 - Commodities purchased under the authority of section 32 of the Act of August 24, 1935 (49 Stat. 774), as amended, may be donated by the Secretary to schools, in accordance with the needs as determined by local school authorities, for utilization in the school-lunch program under this Act as well as to other schools carrying out nonprofit school-lunch programs and institutions authorized to receive such commodities.
Page 4 - SEC. 2. Moneys transferred to the National Forest Reforestation Fund under the provisions of this Act shall be available to the Secretary of Agriculture, for expenditure upon appropriation, for the purpose of supplementing programs of tree planting and seeding of national forest lands determined by the Secretary to be in need of reforestation. Such moneys shall be available until expended, and shall be provided without prejudice to appropriations or funds available from other sources for the same...
Page 56 - August 24, 1935, an amount equal to 30 per centum of the gross receipts from duties collected under the customs laws on fishery products (including fish, shellfish, mollusks...
Page 47 - Stat. 1212), may be donated by the Secretary to schools, in accordance with the needs as determined by local school authorities, for utilization in their feeding programs under this Act.
Page 2 - Code, such rules and regulations as may be necessary to carry out the provisions of this Act. (7) The Board is authorized to use the services of the employees of the Department of Agriculture and of the committees established under section 8(b) of the Soil Conservation and Domestic Allotment Act, as amended, in the performance of all of its duties and responsibilities provided for herein.
Page 27 - INTRODUCTION Mr. Chairman: It is again a pleasure to appear before this committee to testify in support of the Operation and Maintenance, Army (OMA) appropriation request for FY 1980.
Page 27 - Mr. Chairman. I would like to introduce for the record a statement by Senator Leahy.
Instead of thinking of cancer as a bunch of cells that have gone rogue, many scientists and doctors have started to view it more as an organ, with a bunch of different cells working together to provide a space for cancer cells to be able to freely and rapidly grow and metastasize.
As stated above, one major focus of this environment is the immune cells that comprise this space (note: the immune system and its functions are incredibly complex; we will not address that in detail here for the sake of everyone's sanity). In response to the immune system's role in cancer, immunotherapy has been a rapidly growing field for new cancer therapies. So not only does the immune system play an important role in cancer pathology, but it is also an increasingly common target for therapeutics.
So, the question now is, how do scientists study the immune system and its role in cancer?
One main tool that is used is a process called flow cytometry. A flow cytometer is a device used to sort and identify cells, most commonly by tagging them with fluorescently labeled antibodies specific to cell antigens. Speaking from experience, I can tell you that there are some difficulties and limitations that come with this process. A major one is the limit on the number of fluorescent antibodies that can be used to help identify cell populations, generally maxing out at about 15 or so antibodies. The more antibodies you include in your panel, the more likely you are to have less defined populations due to a "bleeding" effect, where fluorescent antibodies with similar emission wavelengths blur the boundaries between populations.
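To make the "bleeding" effect concrete, here is a small illustrative Python sketch; the two dyes and the 15% and 8% spillover fractions are invented for illustration. Each fluorophore leaks part of its signal into the neighboring channel, and standard compensation multiplies the measured signals by the inverse of the spillover matrix:

    import numpy as np

    # True per-cell signal for two fluorophores (rows = cells, columns = dyes).
    true_signal = np.array([[1000.0, 0.0],
                            [0.0, 800.0]])

    # Spillover matrix: 15% of dye A's emission is picked up in dye B's
    # channel, and 8% of dye B's emission is picked up in dye A's channel.
    spillover = np.array([[1.00, 0.15],
                          [0.08, 1.00]])

    measured = true_signal @ spillover                 # mixed, "bleeding" signal
    compensated = measured @ np.linalg.inv(spillover)  # recovers the true signal

The more channels with overlapping wavelengths a panel has, the larger the off-diagonal terms become and the noisier compensation gets; the discrete heavy-metal mass channels described below are how CyTOF avoids this.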
As such, fluorescence-mediated flow cytometry is limited in the specificity and number of immune cell populations it can resolve in the tumor microenvironment.
This is where the CyTOF machine improves our ability to study the immune system in cancer.
Instead of using fluorescent antibodies, this flow cytometry machine uses antibodies tagged with heavy-metal isotopes and utilizes a process like mass spectrometry to analyze single cells based on the ions that are collected, from which populations can be identified. Furthermore, because it uses mass spectrometry, which measures the mass-to-charge ratio of ions, the number of antibodies that can be included in the analysis easily exceeds the number used in fluorescent flow cytometry, estimated at more than 100 antibodies. Thus, CyTOF analysis can provide greater sensitivity for the detection of immune cell populations and a greater number of parameters to further identify and define cell populations, and it eliminates the "bleeding" effect that introduces errors when using fluorescent antibodies.
In summary, the CyTOF machine allows scientists to identify specific and unique immune cell populations present in the tumor microenvironment. The identification of these populations will increase our understanding of how the immune system provides support and protection for cancer to progress and metastasize, and will additionally provide targets that can be isolated for the development of new therapies and treatments.
Description: The Honors Global Competencies Certificate of Achievement provides an interdisciplinary and systemic approach in order to prepare students for the highly diverse, technologically-rich, and multilingual global society in which we live. The Certificate offers students the opportunity to gain a global perspective through completion of coursework in intercultural competencies, communication skills, and technology skills. This certificate helps students to transfer to four-year institutions in concert with the Honors designation. It prepares students for study and work throughout the world in professional fields such as international studies, intercultural studies, language studies, international business, international law, political science, comparative literature, environmental studies, history, technology, social sciences, humanities, teaching, and more.
For the San Diego Mesa College Associate Degree one of the core six Program Learning Outcomes is Global Awareness, “the ability to articulate similarities and contrasts among cultures, times and environments, demonstrating understanding of cultural pluralism and knowledge of global issues.” This proposal for an Honors Global Competencies Certificate of Achievement connects the college’s vision and values of diversity and the student learning outcome of Global Awareness.
Contemplation and assessment of the interconnectedness of cultures and nations through time.
Exploration of world ecologies and technologies.
Analysis of economic, political, and social systems.
Exposure to an array of world customs, religions, and literature through campus activities and speaker series.
Recognition, anticipation, and management of change.
Program Emphasis: The Honors Global Competencies certificate has an international emphasis.
Career Options: The Honors Global Competencies certificate might lead to careers in the following areas: international relations, international business, politics, international law, technology professions, teaching, translating, travel and tourism, and intercultural communications, among others.
The Honors Global Competencies Certificate offers you the opportunity to gain a global perspective through completion of coursework in intercultural competencies, communication skills, technology skills, and coping skills.
*This certificate is offered through the Honors Programs at City, Mesa, and Miramar Colleges. All coursework except for foreign language must be done as an honors class or as an honors contract. *A Certificate of Performance is a departmental award that does not appear on the student’s transcript. All courses must be completed within the San Diego Community College District.
The Honors Global Competencies Certificate offers students the opportunity to gain a global perspective through completion of coursework in intercultural competencies, communication skills, and technology skills.
Explain the interconnectedness of cultures and nations through time.
Explore world ecologies and technologies.
Analyze economic, political, and social systems.
Study world languages, customs, and religions.
Recognize, anticipate, and manage change.
for nature conservation and tourism. However, it faces one of the most serious problems on our planet: wildlife trafficking, motivated by scientific purposes or by the commercialization of skins or live animals.
The "Los Jaguares" Rescue Center volunteer program aims to create a space for the protection of endangered species, responsible tourism, education, and community development through the training, inclusion, and full participation of all local stakeholders and tourists in these spaces.
Los Jaguares Rescue Center is an animal sanctuary located in Macas, Morona-Santiago province, in Ecuador. This rescue center was created by Enrique, a former veterinarian.
The purpose of the center is to save wild animals, such as jaguars, pumas or margays.
Those animals live in the Amazonian forest and are often hunted because they are seen as dangerous. The locals and the indigenous communities that live in the forest are afraid that those animals could attack them, though the animals won't do so unless they are attacked first.
Some wild animals are also hunted for their fur, or to be sold as pets in Asia or the Middle East. The center also rescues smaller animals, such as monkeys or coatis, whose meat is very much appreciated by local populations.
The following is a detailed list of the daily activities related to jaguar and other feline care and management, although depending on the volunteer's time and length of stay, the activities can be diversified or extended with complementary activities.
Preparation of feline diets.
Schedule: 6:30 to 7:30 am.
This is a vital part; every morning it is necessary to go to the local market for freshly butchered meat for the felines.
Breakfast.
Schedule: 8:00 to 8:30 am
Feeding of felines:
Schedule: 8:30 to 10:30 a.m.
Meat is cut and weighed, and each feline is fed between 1 kg and 1.5 kg.
Once a week veterinary monitoring and feeding with live prey.
Schedule: 10:30 to 11:00 a.m.
A careful observation of the stress and mood of all felines is carried out, in addition to feeding them with live prey.
Once-a-month maintenance of enclosures and environmental enrichment.
Schedule: 10:30 a.m. to 12:00 p.m.
Intense cleaning is carried out to collect animal remains and repair the enclosures; due to the humid conditions, the structures deteriorate rapidly.
Jaguar Rescue Center
Volunteer Expectations
- Is passionate about the environment and community relations.
- Has a positive and open mind
- Is self-sufficient and open to communication
- Lets the director know of any allergies
THE PROGRAM INCLUDES
- FOOD
- INTERNET
- PRIVATE BATHROOM
- LODGING
- LAUNDRY
- LUNCH
- SOCIAL AREAS
NO ALCOHOLIC DRINKS
The volunteer contribution helps host the volunteer in the community and goes directly toward materials needed for project activities.
COST:
- 1 PAX/DAY: $14
- 1 PAX/WEEK: $100
- 1 PAX/MONTH: $194
Volunteer placement deposit
A $200 deposit will be made to EEV in advance so we can confirm your volunteer placement.
Includes
- 24/7 emergency contact and support from EEV staff, administration costs, communication costs with volunteers and travel costs for program inspection
- Airport pickup and first night's accommodation in Quito
- Orientation in Quito
- Work Reference / Volunteer Certificate
Price Does Not Include
- Transportation to the center
- Protection measures (masks)
Contact
For any questions or interest in volunteering, please fill out the contact form with your information!
I expanded Mondrian's two-dimensional paintings into a three-dimensional world. In this "game", players may generate paintings in Mondrian's style and manipulate these paintings as they wish. Thanks to the "flattened" rendering of the orthographic camera, a new painting is revealed each time players rotate the camera.
Controls:
Mondrianess slider controls the maximum number of color blocks that will be generated.
Cubousity slider controls the maximum size of each color block (a sketch of the generation step these sliders drive follows the controls list).
Space bar to generate color blocks according to the slider parameters.
“WASD” to rotate the canvas.
Arrow keys and left/right bracket to move camera.
“R” to reset the canvas.
“C” to switch between orthographic and perspective view.
“F” to switch on/off the light attached to the main camera.
“Z” to launch a camera projectile into the canvas.
"1" to load flat-shaded canvas, "2" to load canvas lit by directional light.
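As a rough sketch of what the two sliders drive, the generation step might look like the following Python snippet (a hypothetical reconstruction, not the project's actual source; the function and parameter names are assumptions):

    import random

    PALETTE = ["red", "yellow", "blue", "white", "black"]  # Mondrian-like colors

    def generate_blocks(mondrianess, cubousity, canvas=10.0):
        # Up to `mondrianess` axis-aligned blocks, each edge at most
        # `cubousity` (assumed <= canvas), placed inside a cubic canvas.
        blocks = []
        for _ in range(random.randint(1, mondrianess)):
            size = [random.uniform(0.5, cubousity) for _ in range(3)]
            position = [random.uniform(0.0, canvas - s) for s in size]
            blocks.append({"position": position, "size": size,
                           "color": random.choice(PALETTE)})
        return blocks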
The project's design aim was to create a perfect harmony between the existing green site and the future industrial building. The building's objective was to cross the river "La Mauldre" and to create a very strong link to the trees and nature, as if the building were inhabiting nature: "Living in the Canopy". The building was designed with a bioclimatic orientation. It is very well insulated and constructed with a wooden timber frame. The roofs are vegetated, and the upper roof has nests for birds and hives for bees. A pedestrian educational pathway was designed for schoolchildren to raise awareness of environmental issues and biodiversity restoration in an industrial plant.
HQE® targets:
Target 1: harmonious relation between building and its environment
Quality of outdoor spaces for users:
• The aim is to propose a treatment of the facades of the building using recyclable or natural materials, such as steel, gabion or wood.
• Views of natural areas: green roofs, the green wall and the river "Mauldre".
• Re-enhancement of the site’s fauna and flora.
Target 2: choice of integrated products and building materials
• Metal cladding, wooden cladding, green wall, cross-laminated timber (CLT) roof, wood-structure walls, steel piles, and recycled insulation made from hemp and recycled cellulose.
Target 4: energy management
• Bioclimatic greenhouse
• Solar thermal panels
• Thermal insulation in wood fiber panels
• Water-to-water heat pump
Target 5: water management
• Permeable green roofs are installed on top of each building.
• Hollow-core slabs covered by plants are used on the roads.
• Rainwater is managed by plants and infiltrated into the soil.
Targets 8 and 9: hygrothermal and acoustic comfort
• Thermal and acoustic comfort is guaranteed by the green roof and wood fiber insulation.
With technology advancing at an unparalleled speed and scale, the process by which educational institutions develop and adopt new curricula no longer moves fast enough to prepare young people for the future of work. The curriculum development and implementation processes often take years and may be decided without the input of local or regional employers. They’re so time-consuming that even cutting-edge skills and information can be outdated when the new curriculum is adopted. What’s more, some educational institutions still focus on rote memorization and test performance, instead of on experiential learning, soft-skills acquisition and changes in mindset.
The time is ripe for a new curriculum-adoption process, one that marries employers’ needs with student learning in real time, operating at the speed of technology. Here’s how:
1. Partner with employers
ManpowerGroup’s recent global survey of 40,000 employers in dozens of countries found that 45% can’t find candidates with the skills they need. In Latin America, 50% of employers say they can’t find employees with suitable skills, at the same time as two out of five young people in the region are neither in school nor in the workforce. In Kenya, curricula that remain out of step with the needs of employers is threatening to undermine the country’s burgeoning petroleum industry. And in the UK, 90% of employers struggle to find employees with the right skill set, and two-thirds of employers believe the problem will either remain the same or get worse over the next three to five years.
As a result, some companies are hiring employees who lack the requisite skill set but show an aptitude for learning, and then training them for the job, essentially bypassing public education systems. Companies that circumvent established education systems in favor of their own only exacerbate the issue. Instead, employers and educators need to closely align.
Of course, high schools and colleges are more than a training ground for work; they prepare young people for life. The Atlantic, the New York Times and Forbes all recently articulated the value of a liberal arts education. Likewise, developing good citizens has long been a pillar of educational institutions across the globe, helping youth learn to serve their communities and become active global citizens. And a teacher’s ability to inculcate empathy and respect through cross-cultural learning opportunities has never been more relevant and necessary. Clearly, curricula cannot be driven entirely by employers’ needs.
However, we do young people a disservice by not working with local and regional employers to understand their skills gaps. Plugging these employment gaps not only benefits students’ long-term job prospects, but also immunizes communities against higher rates of youth unemployment and underemployment and the subsequent risks and consequences. In The Global Risks Report 2018, the World Economic Forum ranks youth unemployment as one of three global risks that are most likely to lead to social exclusion, destabilized economies and polarized politics, all leading to more regional and global migration. Youth “joblessness remains alarmingly high in some countries and regions,” according to the report, and “even where job creation has picked up since the crisis, concerns are rising about the growing prevalence of low-quality employment and the rise of the gig economy.”
2. Teach skills that change mindsets
It’s not only employers who need certain skills in their workforce to succeed; students need them for their long-term success. Here’s how we described the phenomenon in a G20 policy paper we recently co-authored with The Brookings Institution:
The Organisation for Economic Co-operation and Development and the WEF Closing the Skills Gap Project argue that the convergence of globalization, digitalization and demographic changes has reshaped the skills required for future work. Informality and a move away from a long-term manufacturing labor force mean that young people and school systems must be equipped to adapt to changes in the labor market to take advantage of opportunities. This means a move away from schools teaching specific knowledge for tasks, to helping children and youths learn how to learn – giving them the capabilities to continually acquire new knowledge and work with others.
In “Skills for a Changing World”, Brookings undertook a scan of 102 countries to ascertain the breadth of skills that were included within their policies and curriculum. They found that the majority of countries intended to include a wide range of skills, but this range narrowed when examining actual documentation and curriculum. The most popular skills were communication, creativity, critical thinking and problem-solving. They conclude that the need to focus on a wider range of skills has existed for some time, at least in the rhetoric of education systems. But more needs to be done to ensure this translates into classroom practices.
3. Opt for experience-based learning
Although books and lectures have a place in the classroom, for students to be competitive in the workplace, the current ratio needs to flip – from students mostly working alone at their desks, to collaborating with small teams on real-world issues in which students have a stake.
In JA’s learning-by-doing programs, we get young people out of their seats and into the boardroom, as they build companies from the ground up. They develop a product they are capable of producing and create a business around it, serving as the company’s leadership, production staff and salesforce. As they learn and do, they multiply their self-efficacy – the knowledge that they have the skills to succeed and will eventually do so, in spite of disappointments and failures that crop up along the way. Students roll up their shirt sleeves and dive directly into the world of business, often creating a product that has societal benefits beyond profitability, all while learning critical entrepreneurship and employment skills.
In the same way, JA Job Shadow gives students access to role models at a 1 to 1 ratio, allowing them the attention of an executive for a full day in a way that a classroom never can. In addition to giving students experience and a growing support network, job shadowing helps them develop the self-efficacy that comes from meeting role models who provide positive encouragement and serve as living examples of what’s possible.
4. Rethink curriculum refresh rates
Does the following sound familiar? You work at a secondary school or university, and it’s time for another curriculum update. A committee begins a process to evaluate and revise the skills and knowledge they believe students will need over the next 10 to 15 years. A year or two later, the school begins rolling out new and updated courses and textbooks. But by the time students take their first classes under the new protocol, several years have passed since the committee first formed, and the new curriculum already shows signs of being outdated.
There is another way:
Improve your refresh rate. If you’re old enough to remember flickering television screens, you’re already familiar with refresh rate, which refers to the number of times a TV or computer screen is refreshed, per second. Likewise, with curricula updates, the higher the refresh rate, the more often you’re reviewing and revising your curriculum. Instead of refreshing every 10 to 15 years, curriculum reviews need to happen continuously, with significant updates executed every five years, at most.
Take a lean-manufacturing approach. One of the tenets of lean manufacturing is that one never stops trying to improve processes, no matter how much progress is made and how well efficiencies are achieved. Whether applied to an assembly-line production rate, a hospital’s emergency room wait time, or the line for a ride at Disneyland or KidZania, the goal is to halve the time caused by delays via process improvements that come from the question, “How can we do this better?” Once new processes are introduced to achieve those time reductions, a new goal is set to halve them again, asking, “Okay, now how can we do this better?” Good teachers do this as a matter of habit, but policymakers and educational curriculum committees meet too infrequently to be agile enough to continuously modernize what is taught. One way of remaining agile is to take a page from Ontario’s playbook: the Canadian province focuses on teaching overall objectives, trusting its professional teachers to choose specific objectives that can help achieve them.
Combined, these two simple-to-understand (but difficult-to-execute) approaches have the potential to modernize curriculum development. If schools are continuously refreshing their curricula needs (for example, by creating a system by which teachers, local employers, parents and students can suggest curricular updates in real time and have them evaluated and, possibly, implemented within days or weeks) while also challenging themselves to halve their adoption timeline, and halve it again, schools can begin to update their curricula at the speed of technology – and of society. For schools to keep pace with changing technology, the best investment of resources is the creation of curricular documents that support the idea of teachers as experts in teaching and learning and that focus on skill and competency development. As you may expect, this approach necessitates a greater investment in professional development or requires a lengthier teacher-education program.
5. Push the technology envelope
Students aren’t simply observers of the technology of the future, plodding their way across an ever-changing tech landscape over which they have no control. Instead, young people play a critical role in the direction in which technology develops. Even if the code that students learn, the virtual reality they experience, and the 3D printed objects they design today are hopelessly out of date by the time they enter the workforce, the technology they’re exposed to today teaches them to be unafraid of new developments, to embrace the learning curve of each new advancement, and then to harness it to create the Next Big Thing. Don’t let the perfect interfere with the good: any technology-based learning sets young people up for future technological success.
One generation ago, we used dial-up to connect to Netscape. Amazon was a small online bookstore. Facebook didn't exist, and neither did Uber, WeChat, M-Pesa, Airbnb, or the iPhone. No one answered if you asked Alexa how to spell "achievement", and cutting-edge technology in entertainment meant six-CD changers and expensive DVD players. Perhaps more importantly, cancer and AIDS had much higher rates of mortality, and predicting diseases with genetic testing was the subject of science fiction. So just imagine what technological and humanitarian advances today's youth can deliver a generation from now if they're trained not only to be consumers of technology, but also the creators, improvers, and extenders of it. To do so, educators need to adapt new curricula at the speed of technology, developing skills and competencies that cannot be readily replaced by computers.
This month Sh'ma explores metaphor — those resonant images such as the Golden Calf, ladders to heaven, angels and demons, Jacob's wrestling, and a warrior God — or a nurturing mother God — that surface throughout our literature, liturgy, and conversations. We know that much lies beneath these powerful images: some are harsh, confusing, and difficult.
Metaphoric language and imagery are not simple, but they enrich our relationship with text: they invite us to dig more deeply, to ask sharper questions, and to imagine a fuller range of meanings. These essays touch on metaphoric meaning not only in Jewish texts, but also in the Qur'an and the hotly contested poetry of T.S. Eliot.
Several essays — as well as the Roundtable — remind us of the ways in which seemingly mundane metaphoric imagery can evoke rich, interesting, and surprisingly complex responses.
The Andaman and Nicobar Islands (one of the seven union territories of India, a group of islands where the Bay of Bengal meets the Andaman Sea) are famous among travelers and wanderlust-stricken souls, and are also known as a honeymoon destination for their scenic, unspoiled beauty. But it is difficult to travel there, so here is good news for all: the railway authorities are now planning a train route in the Andaman and Nicobar Islands for the first time.
Let's take a look at the scenic beauty of Andaman and Nicobar Islands and know more about the soon-to-start rail route.
Connecting the Andaman and Nicobar Islands, the 240-km railway line will have bridges and stations along the coast. It will connect Port Blair with Diglipur, becoming the first rail line to put the archipelago (the island group) on the country's rail map. Presently, Port Blair and Diglipur are linked by a 350-km bus service that takes over 14 hours and a ship that takes around 24 hours. There are only two ways to reach Port Blair, the capital of the Andaman & Nicobar Islands: by air or by sea.
Talking about the budget, the railway line will be built at a cost of Rs 2,413.68 crore with a negative rate of return on investment of -9.64%, according to tripoto.com. Though a rail line is considered commercially viable only if the rate of return is at least a positive 12%, the Railways has approved the plan due to its "uniqueness and strategic importance", as per documents reviewed by The Indian Express. The Planning and Finance directorates of the Railways said that the project "is unique, away from the mainland, and has tourism potential."
Lieutenant Governor of Andaman and Nicobar Islands Jagdish Mukhi told The Indian Express that "As soon as the line is commissioned, tourism will see a jump from the current 4.5 lakh visitors a year to around 6 lakh a year, as per our estimates. So even though the railway survey shows a negative return, our assessment is otherwise. However, we have agreed to share the operational losses, if any."
He added, "These are just two of the main attractions. Lakhs of tourists take great pains to reach there from Port Blair every year. With the railway line in place, that part essentially becomes a part of the capital, beneficial to tourists, local residents, and the defense forces."
Above all, this is great news for travelers who want to visit the Union Territory.
Swiss archaeologists have discovered an abandoned Bronze Age settlement at the bottom of Lake Lucerne. It is located near the city of the same name in the central part of the country. The ruins are estimated to be 3,000 years old, according to Heritage Daily.
Lake Lucerne covers 114 square kilometers, and its depth in some places reaches 434 meters. Due to natural conditions, as well as human activity, the water level in the lake has risen by five meters in the relatively recent past, covering many of the archaeological sites that had survived in the region.
The bottom of the lake is covered with a dense layer of sediments, which significantly complicates research. But a few years ago, the sediment began to be excavated for the laying of a pipeline, and a team of underwater archaeologists began to explore the discovered areas.
In March 2020, the excavator lifted numerous wooden piles from the water. Scientists came to the conclusion that the piles had been worked by human hands and were probably once part of houses. In addition, a large number of ceramic fragments were brought to the surface.
Scientists sent the artifacts for radiocarbon dating and determined that the settlement dates back to the year 1000 BC. It is the earliest known human settlement in the Lucerne area.
Earlier it was reported that a flooded city had been found under the waters of the Black Sea. It also dates back to the Bronze Age.
Central government funding for existing flood infrastructure has only increased by £3m since 2009/10, according to the latest analysis from public sector procurement specialist the Scape Group.
The report comes at a time when the number of homes under threat from flooding in England is forecast to double to five million in less than 50 years, with Scape claiming that funding for flood defences urgently needs to increase by 45%.
The research finds that, while total expenditure has increased in real terms from £802m in 2009/10 to £870m in 2018/19, the majority of the £64m increase has been in capital spending, while revenue spending, which goes towards staff and office costs, as well as the vital maintenance of existing assets, has fluctuated from a low of £272m in 2013/14 to a high of £344m in 2017/18.
Even funding for capital projects, which has increased to £453m in 2018/19, equates to only a £34m annual real terms increase in central government funding since 2009/10. The research reveals that in England there is an urgent need to increase funding over the next ten years, with Scape Group recommending a 45% increase to allow sufficient prevention and protection and address the rising threat from flooding and coastal erosion.
Following their analysis, Scape Group has made a series of recommendations for the construction industry, policymakers and local government to help implement more efficient and effective ways of addressing the impact climate change is having on our inland and coastal flood defences:
Mark Robinson, Scape Group chief executive, said: “The data shows a limited real term increase over the last decade and we urgently need the amount of funding for flood protection to increase. We also need to be thinking critically about how we work together more effectively. Harnessing the knowledge and expertise of our experts and collaborating to operate across boundaries to deliver essential infrastructure needs to be a priority.
“It is especially concerning to see that revenue expenditure has barely risen over the last ten years, with real term growth of just £3m. A lot of our water infrastructure is from the Victorian era, it is hundreds of years old and desperately needs to be maintained and upgraded, but we are in the difficult, almost impossible situation of having competing pressures on the limited resources we have at our disposal.”
The report, A Climate Emergency: Flood Defences for the Future, also looks at regional variations in contributions towards flood and coastal erosion risk management.
The analysis found that, despite the area consistently experiencing extreme weather and flash flooding, spending in Yorkshire and the Humber has decreased, falling in the last few years by £7m, from a high of £21.7m in 2016/17 to £14.91m in 2018/19.
Local levy contributions to the Environment Agency from local authorities also vary across the country, with London consistently making the largest contribution at over £5.9m for each of the last four years; a similar contribution to that of the west Midlands, north east and Yorkshire and the Humber combined.
“In the 21st century, an increasing number of households are going to be living in areas at high risk of flooding, due to new homes being built on flood plains and the rapid erosion of our coastlines. In less than 50 years, the number of homes under threat in England is forecast to double to five million. Climate change is one of the greatest threats facing the country today. It is one of the greatest challenges of our time and it needs our immediate and consistent attention,” said Robinson.
Dean Banks, chief executive officer of Balfour Beatty, said: “The construction and infrastructure industry mitigate flood risk by building defences and implementing resilience measures. And when extreme weather hits, we are a critical part of the response: getting the roads, buildings, bridges and other affected infrastructure back to work to ensure that communities can recover as quickly as possible.
“But there is more to do. Engaging the construction and infrastructure industry earlier and proactively before flooding happens can help reduce the risk and make the clean-up run more smoothly. We also need more partnership working between local authorities, and a more strategic, longer-term funding approach for flood and coastal risk management. The price of flooding to local communities and to the wider economy far outstrips the cost of building and maintaining effective flood defences and resilience measures.”
Click here to download the full Scape Group report, A Climate Emergency: Flood Defences for the Future.
We study the many nations and cultures that make up Arizona today, with a focus on recent O’odham history. State and tribal governments and tribal sovereignty are covered. Students will read articles and textbook chapters, discuss with classmates, use maps, analyze photographs and art, examine primary sources (including the Tohono O’odham Constitution and laws), and research and write an essay in class. Students will use the disciplines of archaeology, geography, economics and government to get a better understanding of Arizona and O’odham history from varied points of view.
The legacy of British colonialism is covered as we study the culture and responses of various Native nations. The textbook A People's History of the United States is used throughout the year as a launching point from which to study different events. Primary sources such as laws, treaties, journal entries, photos and videos will be studied, allowing students to develop their skills as historians and critics. The themes of change and activism are explored, culminating in a major writing project where students research an activist from the 19th century and write from their point of view. The course ends with a study of United States government and current events.
The history of the world is studied through the lens of colonialism, focusing on its meaning and process and on the responses of indigenous peoples around the world. Geography and economics are emphasized in order to develop a deeper understanding of the world today. The histories of Europe, Latin America, Africa and the Middle East are central to the class. Finally, WWII and the 20th century are studied in depth, with the class ending with a problem-solving and planning project focused on a current issue.
The Earthworks M23 23kHz Omnidirectional Measurement Microphone is ideally suited for acoustical measurements, including loudspeaker design and quality control, sound system setup and troubleshooting, room acoustics, and any other application where an accurate free-field measurement microphone is required.
The M23 is known for its reliable performance, delivering unparalleled audio results at an affordable cost. It has a flat frequency response that extends from 9Hz to 23kHz, an exceptionally consistent omnidirectional polar response, a 138dB SPL rating without distortion, and no handling noise.
Immune to most temperature and atmospheric fluctuations, the M23 delivers reliable and repeatable results in any environment.
This security update is rated Critical for Internet Explorer 6 on Windows clients, and for Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9; and Important for Internet Explorer 6 on Windows servers. For more information, see the subsection, Affected and Non-Affected Software, in this section.
Yes. In addition to the changes that are listed in the Vulnerability Information section of this bulletin, this update includes a defense-in-depth update to the Internet Explorer XSS Filter and defense-in-depth updates to help improve memory protection in Internet Explorer.
One of the defense-in-depth updates adds the ability for users to configure their systems to block cross-domain drag-and-drop actions in Internet Explorer. For more information on this added functionality, see Microsoft Knowledge Base Article 2581921.
What is defense-in-depth? In information security, defense-in-depth refers to an approach in which multiple layers of defense are in place to help prevent attackers from compromising the security of a network or system.
Yes, this update addresses a Protected Mode bypass issue, publicly referenced as CVE-2011-1347.
A remote code execution vulnerability exists in the way that Internet Explorer accesses an object that may have been corrupted due to a race condition. The vulnerability may corrupt memory in such a way that an attacker could execute arbitrary code in the context of the logged-on user.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1257.
In a Web-based attack scenario, an attacker could host a specially crafted Web site that is designed to exploit this vulnerability through Internet Explorer, and then convince a user to view the Web site and perform a series of clicks in different Internet Explorer windows. An attacker could also embed an ActiveX control marked "safe for initialization" in an application or Microsoft Office document that hosts the IE rendering engine. The attacker could also take advantage of compromised Web sites and Web sites that accept or host user-provided content or advertisements. These Web sites could contain specially crafted content that could exploit this vulnerability. In all cases, however, an attacker would have no way to force users to view the attacker-controlled content. Instead, an attacker would have to convince users to take action, typically by clicking a link in an e-mail message or in an Instant Messenger message that takes users to the attacker's Web site, or by opening an attachment sent through e-mail.
When Internet Explorer attempts to access an object that may have been corrupted due to a race condition, it may corrupt memory in such a way that an attacker could execute arbitrary code in the context of the logged-on user.
An attacker could host a specially crafted Web site that is designed to exploit this vulnerability through Internet Explorer, and then convince a user to view the Web site and perform a series of clicks in different Internet Explorer windows. An attacker could also embed an ActiveX control marked "safe for initialization" in an application or Microsoft Office document that hosts the IE rendering engine. The attacker could also take advantage of compromised Web sites and Web sites that accept or host user-provided content or advertisements. These Web sites could contain specially crafted content that could exploit this vulnerability. In all cases, however, an attacker would have no way to force users to view the attacker-controlled content. Instead, an attacker would have to convince users to take action, typically by clicking a link in an e-mail message or in an Instant Messenger message that takes users to the attacker's Web site, or by opening an attachment sent through e-mail.
An information disclosure vulnerability exists in Internet Explorer. An attacker could exploit the vulnerability by constructing a specially crafted Web page disguised as legitimate content. An attacker who successfully exploited this vulnerability could view content from another domain or Internet Explorer zone.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1960.
During certain processes, Internet Explorer incorrectly allows attackers to access and read content from different domains.
This vulnerability requires that a user visit and perform actions on a Web site for any malicious action to occur. Therefore, any systems where Internet Explorer is used frequently, such as workstations or terminal servers, are at the most risk from this vulnerability.
A remote code execution vulnerability exists in the way that Internet Explorer uses the telnet URI handler. The handler may be used in such a way that an attacker could execute arbitrary code in the context of the logged-on user.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1961.
When Internet Explorer attempts to invoke the application associated with the telnet URI handler, a custom binary may be loaded leading to execution of arbitrary code in the context of the logged-on user.
The update addresses the vulnerability by modifying the way the telnet handler executes the associated application.
An information disclosure vulnerability exists in Internet Explorer that could allow script to gain access to information in another domain or Internet Explorer zone. An attacker could exploit the vulnerability by inserting specially crafted strings in to a Web site, resulting in information disclosure when a user viewed the Web site. An attacker who successfully exploited this vulnerability could view content from another domain or Internet Explorer zone.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1962.
This is an information disclosure vulnerability. An attacker who exploited the vulnerability, when a user views a Web page and changes its charset, could view content from the local computer or a browser window in a domain or Internet Explorer zone other than the domain or zone of the attacker's Web page.
During certain processes, Internet Explorer incorrectly handles certain character sequences, leading Web sites to implement inactive filtering.
An attacker could insert strings in to a Web site that are designed to exploit this vulnerability through Internet Explorer and convince a user to view the Web site. However, an attacker would have no way to force users to visit these Web sites. Instead, an attacker would have to convince users to visit the Web site, typically by getting them to click a link in an e-mail message or in an Instant Messenger message that takes users to the Web site with malicious content.
Yes. This vulnerability has been publicly disclosed. It has been assigned Common Vulnerability and Exposure number CVE-2011-1962.
Yes. This security update addresses the vulnerability that potentially could be exploited by using the published proof of concept code. The vulnerability that has been addressed has been assigned Common Vulnerability and Exposure number CVE-2011-1962.
A remote code execution vulnerability exists in the way that Internet Explorer accesses an object that has not been correctly initialized or has been deleted. The vulnerability may corrupt memory in such a way that an attacker could execute arbitrary code in the context of the logged-on user.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1963.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-1964.
An information disclosure vulnerability exists in Internet Explorer. An attacker could exploit the vulnerability by constructing a specially crafted Web page that could allow information disclosure if a user viewed the Web page and performed a drag-and-drop operation. An attacker who successfully exploited this vulnerability could gain access to cookie files stored in the local machine.
To view this vulnerability as a standard entry in the Common Vulnerabilities and Exposures list, see CVE-2011-2383.
In a Web-based attack scenario, an attacker could host a Web site that contains a Web page that is used to exploit this vulnerability. In addition, compromised Web sites and Web sites that accept or host user-provided content or advertisements could contain specially crafted content that could exploit this vulnerability. In all cases, however, an attacker would have no way to force users to visit these Web sites and perform a drag-and-drop operation. Instead, an attacker would have to convince users to visit the Web site, typically by getting them to click a link in an e-mail message or Instant Messenger message that takes users to the attacker's Web site, and then convince the user to perform a drag-and-drop operation.
To rename the Cookies folder, perform the following steps for each user.
Part 1 - Create a new folder for cookies and use a random name for the folder.
Exit all Internet Explorer-related windows and processes.
Create a new Cookies folder, and use a random eight-character folder name for the new folder. Use a mixture of letters, numbers, and other characters for the folder name. You cannot use the following characters in the folder name because these characters are reserved by the operating system: \ / ? : * " > < |. At a command prompt, type the following command, where Random_Folder_Name is the eight-character random name: Mkdir Random_Folder_Name-Cookies, and then press Enter.
Part 2 - Copy the current cookie files from the default Cookies folder.
Run the following command from a command prompt, where Random_Folder_Name is the eight-character folder name that you created in Part 1 above.
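The command itself is missing from this copy of the bulletin. Assuming the pre-Vista default cookie location under the user profile and the folder created in Part 1, the command would take a form like:

    copy "%USERPROFILE%\Cookies\*.*" "%USERPROFILE%\Random_Folder_Name-Cookies"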
Part 3 - Edit the registry entry that determines the Cookies folder.
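The registry command is likewise missing here. On affected systems the Cookies folder location is defined by the Cookies value under the User Shell Folders key, so the edit would take roughly the following form (a hedged reconstruction; substitute the actual profile path and the random folder name):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Cookies /t REG_EXPAND_SZ /d "C:\Documents and Settings\<username>\Random_Folder_Name-Cookies" /f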
Press Enter, and then restart the computer. This command releases any files in the original Cookies folder that were being used.
Note If you receive an error message that states that the index.dat file cannot be deleted, you can safely ignore the error.
This is an information disclosure vulnerability. An attacker who exploited the vulnerability when a user views a Web page and performs a drag-and-drop operation could gain access to cookie files stored in the local machine.
Internet Explorer incorrectly restricts access to cookie files stored in the local machine.
An attacker who successfully exploited this vulnerability could gain access to cookie files stored in the local machine.
An information disclosure vulnerability exists in Internet Explorer that could allow an attacker to gain access to cookie files stored in the local machine. An attacker could exploit the vulnerability by constructing a specially crafted Web page that could allow information disclosure if a user viewed the Web page and performed a drag-and-drop operation. An attacker who successfully exploited this vulnerability could gain access to cookie files stored in the local machine.
This vulnerability requires that a user be logged on, visiting a Web site and perform a drag-and-drop operation for any malicious action to occur. Therefore, any systems where Internet Explorer is used frequently, such as workstations or terminal servers, are at the most risk from this vulnerability.
The update addresses the vulnerability by modifying the way that Internet Explorer accesses files stored in the local machine and manages cookie files. This includes a change in the way that Internet Explorer sets file names for cookie files to help make cookie file names less predictable.
Yes. This vulnerability has been publicly disclosed. It has been assigned Common Vulnerability and Exposure number CVE-2011-2383.
Yes. This security update addresses the vulnerability that potentially could be exploited by using the published proof of concept code. The vulnerability that has been addressed has been assigned Common Vulnerability and Exposure number CVE-2011-2383.
Curriculum – Geography
In Geography, we intend to equip pupils with knowledge about diverse places, people, resources and natural and human environments, together with a deep understanding of the Earth’s key physical and human processes. We hope to inspire pupils with a curiosity and fascination about the world and its people that will remain with them for the rest of their lives.
In the EYFS at Cassiobury Infants, we aim to ensure that all pupils:
- explore and respond to different natural phenomena in their setting and on trips
- draw information from a simple map
- understand that some places are special to members of their community
- explore the natural world around them
- recognise some environments that are different to the one in which they live
- understand the effects of changing seasons on the natural world around them
- know that there are different countries in the world and talk about the differences they have experienced or seen in photos
KS1:
In Key Stage One, we look at the key geographical skills as set out in the National Curriculum (2014); locational knowledge, place knowledge, human and physical geography and geographical skills and field work.
Locational Knowledge
Pupils will learn about the North and South Poles, the Equator, and the four compass points: N, S, E, W.
- Global – name and locate the world’s seven continents and five oceans
- UK – name, locate and identify characteristics of the four countries and capital cities of the United Kingdom and its surrounding seas
Place Knowledge
Pupils will learn geographical similarities and differences of a small area of the United Kingdom, and a small area in a contrasting non-European country.
- Locality – Pupils will focus on the area in which they live their everyday lives. This encompasses where the children live, go to school, places they visit such as a local park, shops or place of worship.
Human and physical geography
Pupils will learn to identify seasonal and daily weather patterns in the United Kingdom and the location of hot and cold areas of the world in relation to the Equator and the North and South Poles.
- Key Geographical vocabulary – Pupils will learn what physical and human features are and use the vocabulary to name and label features
- Physical features include: beach, cliff, coast, forest, hill, mountain, sea, ocean, river, soil, valley, vegetation, season and weather
- Human features include: city, town, village, factory, farm, house, office, port, harbour and shop
Geographical skills and fieldwork
Pupils will learn to identify places using maps, atlases, globes, aerial images and plan perspectives; to make maps and devise basic symbols; to carry out fieldwork; and to use geographical vocabulary.
- The importance of maps – use world maps, atlases and globes to locate the United Kingdom and its countries. Pupils will also identify countries, continents and oceans studied in Year 1 and 2. They will also devise a simple map and construct a basic key.
- Applying geographical skills – use simple geographical directions to describe locations, and use aerial photographs and plan perspectives to recognise landmarks and basic human and physical features.
- Using a compass – find directions (North, South, East and West) and use locational and directional language eg. Near and far, left and right, to describe location of features and routes on a map. | https://cassioburyinfants.herts.sch.uk/curriculum-geography/ |
Election Day is upon us. At this point, you’ve heard the evidence. You’ve heard lots of arguments from all sides. You’ve seen commercials and watched speeches. Most of you have already made up your minds. Some of you have already voted. For the rest of you, here is a closing argument as to why you should select Donald Trump for president.
Let’s start by dismissing the views of the sad cadre of pundits known as the “NeverTrumpers.” These people claim to be conservative, and insist that Hillary Clinton would be a terrible president. Yet they contend that since Donald Trump is also unfit to serve in the White House, the only principled option is either to not vote for president at all, or to vote for a third-party candidate who has no chance of winning. This argument need not delay us for long, as it simply ignores the realities of politics.
By its nature, politics is a messy, uncomfortable business that requires compromise and coalition-building in order to succeed. If the NeverTrumpers aren’t willing to work with their fellow conservatives to stop Hillary Clinton, then they have effectively rendered themselves irrelevant to this election, and proven themselves to be untrustworthy going forward.
With the NeverTrumpers out of the way, we now turn to the main event: Donald Trump or Hillary Clinton. Secretary Clinton has been at the highest level of American politics for the last 25 years. She played a significant role in her husband’s administration, she served eight years as a senator from New York, and she served four years as secretary of state under President Obama.
Normally, at this stage of a political career, the candidate will have a long list of triumphs and accomplishments to invoke in his or her favor. If nothing else, one would expect Secretary Clinton to claim the accomplishments of her husband and President Obama as reasons to vote for her. She has not done this, for the most part, and instead her campaign has relied almost entirely on personal attacks against Donald Trump.
We have been told, for months, that whatever you may think of his ideas, Trump is a moral leper, a madman, a danger to the very future of the republic. Trump is too risky, goes the argument, and thus we must stick with Secretary Clinton.
But what are the facts? It is true that Trump sometimes loses his temper — much like Andrew Jackson did. It is true that Trump has had a rather complicated personal life — much like John F. Kennedy did. It is true that Trump can sometimes be highly critical of his opponents — much like Harry Truman was. But the Trump critics ignore the fact that a President Trump will face very significant checks and balances. A Speaker Ryan, or a Majority Leader McConnell, will not be rubber stamps for President Trump. A federal court system full of liberal judges and GOP moderates will not hesitate to intervene if they believe Trump has overstepped his authority. A federal bureaucracy largely staffed with left-wing Democrats will resist Trump at every turn.
Our increasingly partisan media will attack Trump every single day. Under these circumstances, the idea that a President Trump could, without the support of the people and against the united opposition of much of official Washington, undermine or otherwise subvert our democratic system is simply not credible.
In fact, Hillary Clinton presents a much greater threat in this regard. As we’ve already seen, the press is no check to Hillary Clinton — if anything, they see their job as protecting her from her enemies. Once Hillary has five liberals on the Supreme Court — and she will have five liberals on the Supreme Court — the court system will be little more than her footstool.
As commander-in-chief, she will inherit sweeping power to launch military actions — power that she has already shown a willingness to use. As the nation’s chief law enforcement official, she will decide who is and isn’t prosecuted. As the nation’s chief executive, she will decide how the laws are interpreted.
She will need to get money from Congress — but President Obama has shown that Congress will cave if she threatens a government shutdown. In other words, once she gets past this election, there will effectively be no check on her power. In this regard, she is a much bigger threat than Trump.
That’s not all. The main reason Hillary Clinton has spent the last few months railing against Trump — and ignoring her own record — is that the events of the last 25 years prove that her administration will be a failure. All of the policies in which she believes — globalization, open borders, an aggressive U.S. military presence in the Middle East, using the courts to promote social liberalism — have not only been tried, they have mostly been U.S. law since the 1990s.
Looking around, we see the results. Since Bill Clinton urged Congress to support China’s entry into the WTO in 2000, we have lost more than 5 million manufacturing jobs. We have run up more than $12 trillion in national debt. We have had a disastrous financial crisis, followed by a meager recovery. We have seen a chasm of inequality open between the few at the very top, and the vast majority of Americans, who are falling behind.
As the U.S. economy sputters, we are losing our ability to influence foreign affairs. As Americans lose faith in their future, social unrest continues to grow.
Here’s the bottom line: If Hillary Clinton is elected president, then the problems we face now will worsen and fester over the next four years. American workers will continue to struggle. Our country will grow weaker. China and our other enemies will grow stronger. Crime will continue to rise. Riots like we saw in Ferguson will be more common. The world will become more dangerous and unstable. Millions more Americans will give up on our system. And all of these events will be exacerbated by Secretary Clinton’s unique habit of fighting investigators.
If you think the election of 2016 has been painful, then you really shouldn’t want to see what this country looks like after four years of a Hillary Clinton presidency.
By contrast, Trump is merely suggesting a return to common sense and the rule of law. With respect to trade, he argues that the U.S. government should use its leverage to get the best possible deals for American workers — just as it always did until Bill Clinton persuaded Congress that we should join the WTO. With respect to immigration, Trump’s primary argument is that we should enforce the law — and isn’t that supposed to be the president’s job?
With respect to foreign policy, Trump argues that we should be careful and prudent in the use of military force, and should no longer make promises we can’t afford to keep — a policy much closer to traditional American thinking than the pie-in-the-sky dreams that have resulted in so many recent disasters around the world.
With respect to domestic policy, Trump argues for a combination of tax and regulatory changes — such as the repeal of Obamacare — that will put more money in American pockets, and give new encouragement for economic activities.
None of these policies are outside of any American mainstream — indeed, for most of American history, they were regarded as simple common sense. The true radicals in this election are the ones who want to continue policies that have already failed, not those who want to correct the mistakes of the last 25 years.
In short, the major risk in this election is not that we try something different with Donald Trump. The major risk — and it is major indeed — is that we continue on the failed and increasingly dangerous path that we are on.
If Hillary Clinton is defeated, we can reverse the disastrous policies of the last few decades, and take a new approach that will focus on the needs and interests of the American people. That’s what this election is all about. That is why you should vote for Donald Trump.
The occasion was a belated thank-you to my team for their tireless efforts that helped make APPFGH5 a success (see 10.228 Tavu Snapper).
I had been looking for an opportunity for a second visit to Purple Yam (see most recently 8.226 Hipon Sinigang), both to reassess the food and to get better photos.
The food was excellent. Each dish individually was well conceived, executed with pristinely fresh ingredients, perfectly seasoned, beautifully plated. As a spread, the dishes provided a balance of complementary flavors and contrasting textures, a mix of varied proteins and vegs and carbs. And, very importantly, the final three dishes were served family style, in ample quantity, leaving each of us exactly as full as we wanted to be. Without question, the best Filipino meal that I have had the privilege to experience.
Famously, I am not much for desserts, but I took a nibble just for a taste then couldn’t resist going all in, twice.
Confirms my prior observation about this establishment: “what traditional Filipino food could be if it paid more attention to quality ingredients and got back to basics” – not that any of this was basic. | https://givemethisday.com/2019/10/15/10-283-fried-mixed-grains/ |
Getting Back to Nature
Chances are you heard it when you were a child, someone telling you to get up and go outside for a while. Whether from a well-meaning parent or a teacher, you probably connected that idea with exercise and the health benefits of spending time out of the house.
But did you know that there are many mental and emotional health benefits in getting back to nature?
1 – You Feel Less Stress
Spending time in green spaces, whatever their kind, has been shown to lower your blood pressure. Your heart beats more slowly, and even your breathing becomes steadier and more natural. In short, your body relaxes, and the effects of stress fade away.
The good news? The effects are long-lasting. Studies have shown that spending time in a forest on the weekend helps lower stress levels for up to 7 days following the visit.
There’s something about trees and grass, flowers and nature in all its glory that satisfies some deep part of our soul. Even NASA has done studies about how to introduce nature into space so that astronauts can stay calm and focused when spending long periods in isolation.
What is it about green spaces that are so irresistible?
1 – Like Plants, We Require the Sun! | https://leilarhoden.info/category/getting-back-to-nature/ |
Geographical Range: Madagascar, with an introduced population near Nairobi, Kenya.
Habitat: Warm and humid coastal lowlands, but prefers drier forests.
Location in the Zoo: Herpetarium.
Oustalet's Chameleons are very large. Most of that size is length: total lengths of up to 70 cm are commonly reported. The prehensile tail can grow up to 1.5 times the body's length, and the tongue can be as long as both combined. The casque found on top of the head resembles a shield and changes colors on its front and back sides. The toes grow in groups of two and three: the front feet have two on the outside and three on the inside, while the hind feet are the opposite. Hatchlings typically weigh 0.8 grams and already have a small version of the casque. Females are typically smaller and more brightly colored than the males.
To begin with, reptiles are cold-blooded, so they rely on the sun's heat to keep their blood flowing freely. This allows a reptile to eat far less than a mammal of the same weight, a useful advantage when competing with mammals for food.
Next is the skin of the chameleon, which can be divided into four highly specialized layers. The outer layer is a constant cell factory, always repairing broken cells. The scales of the outer layer contain no pores, further economizing the chameleon's use of scarce resources. The next layer, under the scales, contains simple yellow pigments -- hardly special compared to the rest. The third layer contains a magnificent system of particles, each measuring less than .00004 cm, small enough to scatter white light. The fourth layer contains cells, each filled with countless particles of a pigment named melanin. Hormones cause the melanin particles to either attract or repel one another. When fully attracted to one another, the particles clump so tightly that they are barely visible and absorb little light, so the yellow of the second layer dominates -- you see yellow. When fully repelled from one another, they spread to occupy the entire cell, absorb the majority of incoming light, and turn the skin very dark. Together, the bottom three layers work to produce all of the colors seen on a chameleon, but the attracting and repelling behavior of the fourth layer drives the entire "color changing" phenomenon.
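To put that particle size in perspective (a unit conversion added here, not part of the original page):

\[ 0.00004\ \text{cm} = 4 \times 10^{-7}\ \text{m} = 400\ \text{nm} \]

That is right at the blue end of the visible spectrum (roughly 380-750 nm), which is exactly the scale at which particles scatter visible light.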
Chameleons are arboreal reptiles: they spend most of their lives above ground, in trees and bushes. Because of this, the feet and tail of chameleons are highly specialized for climbing and balance. All four feet are zygodactyl, meaning the toes grow in groups on opposite sides of one another, perfect for grasping around a branch. The tail acts like a fifth limb in two ways: it is prehensile, so the chameleon can use it to grasp a branch of nearly any size, and it is heavy, so it can also be used as a counterbalance.
The tongue of a chameleon is covered with a sticky liquid. It can travel at a speed of 13 miles per hour, and the mechanism propelling it accelerates it to 20 feet per second in just 20 milliseconds. The principle that accelerates the tongue works in the same way as a bow and arrow. Such an amazing hunting tool allows the chameleon to catch insects that are much faster than it is. As you can see in the picture, the tongue ends in a suction cup.
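Taking the quoted figures at face value, the implied acceleration is easy to check; this is a back-of-the-envelope calculation from the numbers above, not a measured value:

\[ a = \frac{\Delta v}{\Delta t} = \frac{20\ \text{ft/s}}{0.020\ \text{s}} = 1000\ \text{ft/s}^2 \approx 305\ \text{m/s}^2 \approx 31\,g \]

For comparison, 13 miles per hour is about 19 feet per second, so the two speed figures quoted above are consistent with each other.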
The speed of the tongue, along with the suction cup at the end, makes the chameleon a successful hunter. And surprisingly, because of the complex camouflage system, this chameleon is also a successful prey.
And finally, there is the rudimentary third eye of the chameleon. Found between the developed pair is another eye, not as sensitive as the others, but still useful for detecting shades.
The Fort Worth Zoo recently lost their Oustalet's chameleon, so instead I will describe some of the species' behaviors.
Exercise is very important to a chameleon, but too much movement can reveal it to predators. This is why chameleons can spend great amounts of time motionless or moving very slowly. Movement, to a chameleon, is like a dance. In order to move to better feeding spots without being eaten, the chameleon mimics the motions of the leaves. By moving in slow, swaying motions amongst the leaves, a chameleon can easily be passed up by predators as tree matter.
Sexual maturity is reached in 6-12 months. Males have been known to show off a blue color during courtship. The picture above shows the chameleons in a state of agitation; the white colors are possibly caused by fear. The female may lay a single clutch of 60 eggs, but some are known to lay two clutches in captivity.
Salary Grade:
Actual offers will be determined by the candidate’s creditable years of experience in conjunction with internal equity considerations and based on the organization’s current compensation practices.
Job Code: 693
Job Summary
Works within the vision, mission and philosophy of the agency, under direct supervision, provides case management support and services that ensure the emotional and physical safety of clients.
Essential Duties
1. Provides initial and ongoing assessment of client strengths and needs. Develops, reviews, updates and ensures implementation of strength based service plans for each client including safety plans, independent living skills and legal issues.
2. Completes all documentation in compliance with agency and regulatory requirements.
3. Meets with clients in the home and community settings, developing a helping relationship and ensuring needed supports and services are provided.
4. Provides training in independent living skills and parenting skills (if applicable) to clients.
5. Supports client to facilitate a successful placement and ensure compliance with agency and regulatory compliance.
6. Works to achieve permanency; builds and supports family and community connections and natural supports, at request of client.
7. Assists client to develop and acquire future housing prior to discharge.
8. Maintains ongoing communication and provides pertinent information to and coordinates with authorized representatives and other team members, participating in meetings, such as child/family teams.
9. Is available for occasional after-hours emergencies.
10. Meets productivity and quality expectations and other performance goals as defined.
11. Provides referrals to other community agencies, as client needs dictate.
12. Monitors clients' housing conditions to ensure safety and cleanliness.
13. Performs other responsibilities, as assigned. May vary by site and program needs to support specific department/business needs.
Other Duties and Responsibilities
1. In certain circumstances, may be required to drive client to appointments.
2. Physical interactions with transition-aged youth, including but not limited to carrying furniture (not more than 40 lbs), demonstrating how to make a meal, or demonstrating how to assemble a household item.
3. Position-specific duties and responsibilities may vary depending upon program.
4. Performs other related responsibilities, as assigned, to support specific department/business needs.
Qualifications
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. All employees are responsible for keeping job requirements up to date. This can include but are not limited to certifications, licensure, and maintaining a clear criminal record.
Education Requirements
• Bachelor's degree (B.A. or B.S.) in Social Work or Social Welfare; or in Psychology, Counseling, Child Development, or Education with an emphasis on Counseling.
Experience Requirements
• A related degree can be considered with two years' experience in either public- or private-sector social services, or two years' experience with transition-aged youth, ages 16 to 24.
Highly Desirable Skills
• Speak, read, and/or write another language highly desirable.
• Life experience in the areas that are taught to program clients, such as money management, career and job guidance, consumer skills, acquiring housing, home management and transportation.
Other Specific Requirements
• Must pass Department of Justice (DOJ), Federal Bureau of Investigations (FBI), and Child Abuse Index Check (CAIC) background clearance.
• Valid California Driver's license with two years of experience driving and clean driving record may be required.
Work Environment/Conditions
Reasonable accommodations may be made to enable qualified individuals with disabilities to perform the essential functions. While performing the duties of this job, the employee may be required to use hands to handle or feel objects, tools or controls; reach with hands and arms; and taste or smell. Specific vision abilities required by the job include close vision, distance vision, color vision, peripheral vision, depth perception and the ability to adjust focus.
In addition, this position requires sitting, standing, walking, climbing, and kneeling.
Employee Statement of Understanding
I have read and understand the job description for my position. I am able to perform all of the essential functions of this position.
I agree to successfully complete all required training indicated for this position. I agree to keep all of my required certifications and licenses current and up to date and provide proper documentation of the same to the Human Resources Department. I agree to comply with all Agency policies and procedures. I agree to comply with the agency compliance plan and all laws, rules, regulations and standards of conduct relating to my position. As an employee, I understand my duty to report any suspected violations of the law or the standards of conduct to my immediate supervisor. As an employee, I will strive to uphold the mission and vision of the organization. All employees are required to adhere to the values in all their interactions with youth and families and fellow employees. | https://phf.tbe.taleo.net/phf03/ats/careers/v2/viewRequisition?org=EMQ&cws=37&rid=3597 |
Scrum Master (part-time)
We are Baltic Assist – the Lithuanian outsourcing company, experts in finding creative ways to expand the local job market and provide new, exciting opportunities for the best talent. Built on core values like transparency and innovation, our team is constantly working on making BA the top choice when it comes to your career.
As a Scrum Master, you will be responsible for managing projects and working with the CTO, Tech Leads and developers (both onshore/offshore) to turn the vision into reality with an on-time and quality delivery. It is a part-time (20 hours/week) position.
The position is open to Ukrainian refugees and the company offers to reimburse accommodation costs in Lithuania for up to 6 months and provide any necessary assistance.
What you'll do:
selecting techniques and life cycle models based on the context of the project;
establishing team structures and a collaborative working environment;
communicating with stakeholders and maintaining awareness of business needs and priorities;
using visual techniques for project tracking and reporting;
timeboxing and incremental deliveries;
defining deliverables, milestones and dependencies;
applying change control and risk management processes;
acquiring the necessary resources and skills;
agreeing constraints of cost, timescales, quality and scope;
reviewing experiences and learning from current and previous projects;
ensuring that projects are formally closed and reviewed.
Role responsibilities:
Takes full responsibility for the definition, approach, facilitation, and satisfactory completion of medium/large-scale projects;
Provides effective leadership to the project team;
Adopts appropriate project management methods and tools;
Manages the change control process and assesses and manages risks. Ensures that realistic project plans are maintained and delivers regular and accurate communication to stakeholders;
Ensures project and product quality reviews occur on schedule and according to procedure. Ensures that project deliverables are completed within agreed cost, timescale, and resource budgets, and are formally accepted, by appropriate stakeholders;
Monitors costs, times, quality and resources used and takes action where performance deviates from agreed tolerances.
Knowledge & skills we are interested in:
3 or more years of experience in a relevant role, Bachelor's degree would be an advantage;
CSM, CSPO, PMI - ACP or Agile coach certification would be preferred;
Managing projects by defining and achieving project outcomes and meeting project key performance metric targets;
Utilizing and applying into projects knowledge of IT implementation and maintenance, learning initiatives, operations reporting, metrics, marketing and communications;
Exploring new technologies & assembling and managing product teams by working closely with engineering team and stakeholders;
Providing valuable stakeholder feedback to address and taking action to meet the company’s strategic goals;
Managing/coaching small multidisciplinary product teams and collaborating with designers, business users, and developers to innovate and drive product development;
Demonstrating extensive program and project management skills and ability to self-direct work;
Demonstrating written and verbal communication skills;
Demonstrating analytical skills;
Demonstrating background in liaising with and coordinating with stakeholders across multiple groups, including senior stakeholders;
Managing multiple concurrent projects at once;
Demonstrating experience in a broad range of technologies and appreciation for end to end application development;
Having familiarity with SAFe/Agile, JIRA, and Confluence or similar tools.
Understanding of Customer, Product, and Pricing Analysis;
Demonstrating familiarity with Business and Operations Analysis within marketing, operations, or risk analysis using quantitative techniques;
Understanding of agile/scrum and proven success coaching and mentoring a team through an agile transformation;
Utilizing agile frameworks such as Kanban, XP, scaled frameworks;
Demonstrating excellent facilitation, conflict mediation, and situational awareness skills including the ability to exercise mature judgement in delicate situations;
Demonstrating a history of and proven experience making tough decisions and delivering hard messages as required to get the job done;
Demonstrating some knowledge of test-driven development;
Organizing scrum teams (leading daily scrum, weekly scrum of scrum to compile status updates, removing roadblocks for the team);
Documenting, tracking, reporting metrics using Jira;
Configuring JIRA, and managing and customizing the reports;
Leading and facilitating integration, solution, and UAT testing;
Facilitating releases (project management work, release notes, promotions to higher environments);
Creating, monitoring, and sharing team performance metrics, including preparing related progress reports.
What we offer:
Fixed monthly remuneration from 1500 EUR to 1700 EUR gross, based on your competencies, skills, and experience;
Flexible working arrangements to help you succeed in your career while balancing personal needs;
Work in an engaging international environment;
Excellent conditions to support your professional and personal development;
Open and friendly company culture.
Salary:
1500 - 1700 €/mon. Gross for part-time (20 hours/week) schedule
About the company:
Baltic Assist is one of the fastest-growing international outsourcing companies in the Baltics. We are outsourcing top-level employees to "hottest" startups and innovative large enterprises from Scandinavian countries, Western Europe, the USA, and others. Here in Baltic Assist we genuinely believe that Lithuanians are top-notch and results-driven specialists. Therefore, we seek to open up opportunities to be employed in innovative and well-known international companies.
Note:
Only the chosen candidates will be contacted.
Recommend This Job To A Friend
If you have a friend, acquaintance, or colleague, who would be a good fit for this position, please leave us their details! | https://balticassist.com/works/scrum-master-part-time/ |
Lumosity's study used data from more than 3 million people in the U.S. between the ages of 18 and 75 who played brain-training games measuring performance across five areas: memory, processing speed, flexibility, attention, and problem solving.
Stevens Point ranked 38th in the Under 35 age group, 25th in the area of Flexibility, 4th in the Memory category, 14th in Problem Solving, and 40th in the Complete Overall Score Rankings.
An area’s overall score was based on the median overall score for users in the geographic area. For the separate cognitive area lists, rankings were based on the median scaled score for each Core-based Statistical Area for each cognitive area.
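Restated compactly (the symbols here are ours for illustration, not notation from Lumosity's report): for a geographic area \(A\) whose users \(u \in A\) have scores \(s_u\),

\[ \text{AreaScore}(A) = \operatorname{median}\{\, s_u : u \in A \,\} \]

Areas are then ranked by this median, both overall and within each cognitive area.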
Core-based Statistical Areas and Combined Statistical Areas were defined by the US Office of Management and Budget based on US Census data.
Lumosity says, "Economists and urban researchers tend to analyze the collective intelligence of cities based on socioeconomic variables like income and education levels. Lumosity published its first Smartest Cities rankings in 2012 based on our own database of users’ performance on cognitive training exercises. The 2013 rankings are based on data from nearly three times as many people, with over 3 million users included in the study. We have also made some adjustments to our methods that we hope will improve the validity and reliability of our results." | https://www.stevenspoint.com/CivicAlerts.aspx?AID=1385&ARC=2645 |
Nature of spelling errors in a Thai conduction aphasic.
A Thai conduction aphasic's performance on a written confrontation naming task is reported. Analysis of his spelling errors indicated that errors rarely violated Thai phonotactic constraints; consonant substitutions were phonologically similar to the target stimuli; longer stimuli were more likely to be in error; distribution of errors was the same across consonants, vowels, and tones; and distribution of error types varied between segmentals (consonants, vowels) and suprasegmentals (tones). Error patterns were similar to those observed in oral reading and repetition. The pattern of impaired writing performance is discussed in relation to a functional model of the spelling process, and it is hypothesized to reflect primarily a functional lesion to the phonological buffer.
INVENTORS OPPOSE REP. ISSA AS CHAIRMAN OF KEY HOUSE SUBCOMMITTEE
Inventors to protest, deliver petition to House GOP Steering Committee members.
(WASHINGTON, DC) – Hundreds of inventors announced today their staunch opposition to Rep. Darrell Issa (R-CA) being elevated to chairman of the House of Representatives Committee on the Judiciary’s Subcommittee on the Courts, Intellectual Property and the Internet (IP Subcommittee) due to his extreme anti-inventor positions and his clear bias toward Big Tech.
“The patent system is broken, and as an unabashed advocate for Big Tech infringers, Rep. Issa has done nothing but damage innovation,” US Inventor founder and inventor Paul Morinville said. “For more than a decade, the IP Subcommittee has trashed intellectual property rights granted under the US Constitution and it is time for new leadership. Rep. Issa is beholden to Big Tech, he does their bidding, and he is not that leader.”
“We hear a loud cry now from Congress about how Big Tech colluded with the government to trample First Amendment rights on social media, but one of those same voices is taking huge donations from Big Tech to gut the Constitutional rights of inventors,” inventor Josh Malone said. “To us, that stinks of hypocrisy.” US Patent and Trademark Office (USPTO) rules supported by Big Tech and Issa forced Malone to spend millions to defend his invention, Bunch of Balloons, the number one toy of the summer for several years.
As the House Republican Steering Committee meets this week to select chairmen of key committees and subcommittees, inventors have traveled to Washington to protest Issa’s likely appointment to chair the vital panel, which changes from Democrat to Republican leadership in the GOP-majority 118th Congress. Meanwhile, Issa is attending the World Economic Forum in Davos, Switzerland with the globalists and Big Tech infringers who he has allied with to gut the US patent system.
US Inventor members will protest in front of the Capitol Hill Club 8:30-9:30am Tuesday and Wednesday, January 17 and 18, as Republicans meet informally inside. They will also deliver a petition opposing Issa signed by more than 600 inventors representing 7,000-plus patents to House Republican Steering Committee members meeting this week in the Capitol complex to choose committee leadership.
ABOUT US INVENTOR
US Inventor, a 501(c)(4) non-profit corporation, was founded in 2015 to advocate for inventor rights and changes to the U.S. Patent System to protect inventors. Since its founding, the group has organized thousands of inventors to fight the rank infringers of Big Tech who donate millions to lawmakers and manipulate legislation and the USPTO to destroy the rights granted to the nation’s innovators by the Framers of the US Constitution. | https://usinventor.org/opposeissa/ |
1. Mia Petljak, Ludmil B. Alexandrov, Jonathan S. Brammeld, Stacey Price, David C. Wedge, Sebastian Grossmann, Kevin J. Dawson, Young Seok Ju, Francesco Iorio, Jose M.C. Tubio, Ching Chiek Koh, Ilias Georgakopoulos-Soares, Bernardo Rodríguez-Martín, Burçak Otlu, Sarah O'Meara, Adam P. Butler, Andrew Menzies, Shriram G. Bhosle, Michael R. Stratton. 《Cell》 2019, 176(6):1282-1294.e20
2. Furney SJ, Turajlic S, Fenwick K, Lambros MB, MacKay A, Ricken G, Mitsopoulos C, Kozarewa I, Hakas J, Zvelebil M, Lord CJ, Ashworth A, Reis-Filho JS, Herlyn M, Murata H, Marais R. 《Pigment cell & melanoma research》 2012, 25(4):488-492
Acral melanoma is a rare melanoma subtype with distinct epidemiological, clinical and genetic features. To determine if acral melanoma cell lines are representative of this melanoma subtype, six lines were analysed by whole-exome sequencing and array comparative genomic hybridisation. We demonstrate that the cell lines display a mutation rate that is comparable to that of published primary and metastatic acral melanomas and observe a mutational signature suggestive of UV-induced mutagenesis in two of the cell lines. Mutations were identified in oncogenes and tumour suppressors previously linked to melanoma including BRAF, NRAS, KIT, PTEN and TP53, in cancer genes not previously linked to melanoma and in genes linked to DNA repair such as BRCA1 and BRCA2. Our findings provide strong circumstantial evidence to suggest that acral melanoma cell lines and acral tumours share genetic features in common and that these cells are therefore valuable tools to investigate the biology of this aggressive melanoma subtype. Data are available at: http://rock.icr.ac.uk/collaborations/Furney_et_al_2012/.
3. P. Raghuraman. R516Q mutation in Melanoma differentiation-associated protein 5 (MDA5) and its pathogenic role towards rare Singleton-Merten syndrome; a signature associated molecular dynamics study. 《Journal of biomolecular structure & dynamics》 2019, 37(3):750-765
Singleton-Merten syndrome is a critical and rare multifactorial disorder closely linked to the R516Q mutation in the MDA5 protein and associated with an enhanced interferon response in affected individuals. In the present study, we provide key evidence on the R516Q mutation and its connection to sequence-structural dysfunction of the MDA5 protein. Among the various mutations, we found R516Q to be the most pathogenic, based on the mutational signature Q-A-[RE]-G-R-[GA]-R-A-[ED]-[DE]-S-[ST]-Y-[TSAV]-L-V designed in our work. Further, we derived a distant ortholog for this mutational signature, from which we identified 343 intra-residue interactions that fall communally in the positions required to maintain the structural and functional integration of the protein architecture. This identification helped us understand the critical role of hot spots in residual aggregation that holds the native folding conformation in the functional region. In addition, long-range molecular dynamics simulation demarcated the residual dependencies of conformational transition in distinct regions (L29 360-370 α18, α19 380-410 L31, α21 430-480 L33-α22-L35 and α24 510-520 L38) occurring upon the R516Q mutation. Together, our results emphasise the dislocation of functional hot spots Pro229, Arg414, Val498, Met510, Ala513, Gly515 and Arg516 in the MDA5 protein, which are important for interior structural packing and fold arrangements. In a nutshell, our findings are consistent with other experimental reports and will have potential implications for immune-therapeutic advances against the rare Singleton-Merten syndrome.
5. Genomic instability in Multiple Myeloma - relevance for Clinical Outcome and Efficacy of Therapy
Genomic instability is a driving force in the natural history of blood cancers including multiple myeloma, an incurable neoplasm of immunoglobulin-producing plasma cells that reside in the hematopoietic bone marrow. Long recognized manifestations of genomic instability in myeloma at the cytogenetic level include abnormal chromosome numbers (aneuploidy) caused by trisomy of odd-numbered chromosomes; recurrent oncogene-activating chromosomal translocations that involve immunoglobulin loci; and large-scale amplifications, inversions, and insertions/deletions (indels). Catastrophic genetic rearrangements that either shatter and illegitimately reassemble a single chromosome (chromothripsis) or lead to disordered segmental rearrangements of multiple chromosomes (chromoplexy) also occur. Genomic instability at the nucleotide level results in base substitution mutations and small indels that affect both the coding and non-coding genome. Distinctive signatures of somatic mutations that can be attributed to defects in DNA repair pathways, the DNA damage response or aberrant activity of mutator genes including members of the APOBEC family have been identified. Here we review recent findings on genomic stability control in myeloma that are not only relevant for myeloma development and progression, but also underpin disease relapse and acquisition of drug resistance in patients with myeloma.
6. Clare E. Weeden, Marie-Liesse Asselin-Labat. 《Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease》 2018, 1864(1):89-101
Maintenance of genomic integrity in tissue-specific stem cells is critical for tissue homeostasis and the prevention of deleterious diseases such as cancer. Stem cells are subject to DNA damage induced by endogenous replication mishaps or exposure to exogenous agents. The type of DNA lesion and the cell cycle stage will invoke different DNA repair mechanisms depending on the intrinsic DNA repair machinery of a cell. Inappropriate DNA repair in stem cells can lead to cell death, or to the formation and accumulation of genetic alterations that can be transmitted to daughter cells and so is linked to cancer formation. DNA mutational signatures that are associated with DNA repair deficiencies or exposure to carcinogenic agents have been described in cancer. Here we review the most recent findings on DNA repair pathways activated in epithelial tissue stem and progenitor cells and their implications for cancer mutational signatures. We discuss how deep knowledge of early molecular events leading to carcinogenesis provides insights into DNA repair mechanisms operating in tumours and how these could be exploited therapeutically.
7. Julia Zischewski, Lucia Perez, Ludovic Bassié, Riad Nadi, Giobbe Forni, Sarah Boyd Lade, Erika Soto, Xin Jin, Vicente Medina, Gemma Villorbina, Pilar Muñoz, Gemma Farré, Rainer Fischer, Richard M. Twyman, Teresa Capell, Paul Christou, Stefan Schillberg. 《Plant biotechnology journal》 2016, 14(12):2203-2216
The CRISPR/Cas9 system and related RNA-guided endonucleases can introduce double-strand breaks (DSBs) at specific sites in the genome, allowing the generation of targeted mutations in one or more genes as well as more complex genomic rearrangements. Modifications of the canonical CRISPR/Cas9 system from Streptococcus pyogenes and the introduction of related systems from other bacteria have increased the diversity of genomic sites that can be targeted, providing greater control over the resolution of DSBs, the targeting efficiency (frequency of on-target mutations), the targeting accuracy (likelihood of off-target mutations) and the type of mutations that are induced. Although much is now known about the principles of CRISPR/Cas9 genome editing, the likelihood of different outcomes is species-dependent and there have been few comparative studies looking at the basis of such diversity. Here we critically analyse the activity of CRISPR/Cas9 and related systems in different plant species and compare the outcomes in animals and microbes to draw broad conclusions about the design principles required for effective genome editing in different organisms. These principles will be important for the commercial development of crops, farm animals, animal disease models and novel microbial strains using CRISPR/Cas9 and other genome-editing tools.
| http://biosci.alljournals.cn/search.aspx?subject=biological_science&major=&orderby=referenced&field=key_word&q=mutation+signatures&encoding=utf8
This guide was written in plain language, aimed at regular everyday readers. No fancy legalese, so dive in and enjoy the read.
It won’t give you all the answers but will show you why a company processes personal data, what they process, and with whom they share it.
The EU's General Data Protection Regulation (GDPR), on the other hand, does not explicitly require one. Still, it requires businesses to provide users with information about their data operations under the transparency and accountability principles.
Unlike the CCPA, it does not spell out exactly what information must be provided to users. Still, it is sprinkled with many requirements about what a business must disclose to users to satisfy transparency and accountability.
You have to tell your users who you are—no need to get into too much detail or write too much prose. Providing your business name, address, and state or country of incorporation would be enough.
If you run a business or a simple blog as an individual, your name, location, or email address would be enough.
Categories of personal data can be a person’s name, alias, home address, email address, ZIP code, phone number, ID number, passport number, Social Security Number, etc.
Personal data under the GDPR also includes health data, information about private life, IP address, political views, religious views, or any other information that could be directly or indirectly linked to an individual. Therefore, anything can be a category of personal data as long as it can by itself or in combination with other data identify a person.
Here you should describe the methods of data processing. In most cases, data is processed either by automated means (cookies, analytics scripts, and similar tools) or manually.
Transparency under the GDPR means you must disclose to users why you need their data processed. Purposes of processing may include providing products or services, marketing and advertising, analytics, and tailoring the website to user preferences.
You likely use third-party tools to collect and process data, such as Google Analytics, Facebook Pixel, Hotjar, Mailchimp, and others. To process your users’ data with these tools, you must disclose that personal data to them.
GDPR calls users data subjects. When you collect the personal data of a user, they become your data subject.
Data controllers (the businesses that collect data and have it processed on their behalf) owe data subjects certain rights. These rights include the right to be informed of the processing, the right to have data deleted, the right to object to processing, and so on.
If you must comply with multiple data protection laws at once, then you have to list all the rights that each statute grants to data subjects.
For example, compliance with the CCPA requires providing information on sales of personal information. That requirement is unique to the CCPA and is not found in the GDPR, LGPD, PIPEDA, or other laws.
So, if you need to comply with the CCPA alongside other laws, you need to add the elements specific to each law on top of the common ones.
In most cases, providing an email address would be enough. Some businesses may also offer a contact form, a phone number, or any other means for exercising these rights.
Data transfers to third countries are arguably the trickiest issue for businesses that must comply with the GDPR. Transfers within the Union and to countries with an EU adequacy decision are unrestricted, but any other transfer requires additional transfer tools and possibly supplementary protection measures.
No matter how and where you handle personal data, users have the right to know whether it is transferred to third countries and, if so, where it is being sent.
If you knowingly collect and process children’s data, that must be included in this document.
If you have a Data Protection Officer or legal representative in the EU, their name and contact information go here. Otherwise, any means of contact with you would be enough to include in this section.
Take a look at our guide listing all GDPR fines to get a picture of the consequences of not following the GDPR.
We assume that you never bother reading privacy policies and always accept cookies.
Companies that are serious about GDPR compliance and compliance with any other data protection law have comprehensive privacy policies.
Such websites are rare, though. Most online businesses collect lots of data, including data they are unaware they collect and process.
If you notice a bunch of social media widgets on a website, that’s usually a sign of data collection.
If you are not sure what the website you visit does about your personal information, scan it for free on WebCookies.org and get the answers you need.
The scan report will also tell you with whom they share your data. An online business can’t do everything alone, so they outsource many processes to third parties, i.e., SAAS companies who manage some operations on their behalf.
In many cases, outsourcing involves sharing of users’ data. For example, sharing the IP address with Google Analytics, email addresses with Mailchimp, and so on.
It is written in plain language, is easy to navigate, and easy to understand. It signals that the company wants to be transparent with the users.
The section on data processing purposes unveils the motives behind personal data processing. Businesses must tell users what makes them want to collect and process data.
The most common purposes for data processing include, but are not limited to:
Provide you with products or services. They sell something, and you must provide the data they need to deliver the product or service, such as your name, email address, home address, or postal code. The execution of a contract is a lawful basis for data processing under the GDPR and doesn't require additional consent.
Marketing/Advertising Purposes. When a business collects and processes personal data for marketing purposes, they target customers based on the data they share with third-party services.
Examples of such services are social networks. They all provide advertisers with tracking pixels. These pixels track the web pages you visit online, match that activity with the data you have shared with them through your social media profile, and serve your profile as a potential buyer to the business.
Using cookies or a pixel that can match you with more data points counts as processing data for marketing purposes, because the information is used to market products and services.
Analytics purposes. Virtually every website on the internet uses some analytics tool, such as Google Analytics, Plausible, Mixpanel, and others. Some of them collect personal data; others do not.
Check out which analytics tool they share data within the section where they disclose the third-party tools they use.
Preferences. Businesses may collect your personal information to adjust the website to your preferences and improve your user experience. This may include accessibility adjustments, language, and others.
These are usually useful cookies that make the user’s life easier, but they collect personal data anyway, so consent is required before using them.
These are the most common processing purposes but not the only ones. Different business activities lead to additional processing purposes, so it is impossible to include them here. However, most of them belong to these categories.
Shopify, for example, uses more descriptive language to describe its purposes.
Instead of analytics, they say “providing reporting and analytics” and “testing out features and additional services.”
Instead of executing a contract, they say “answering questions or providing other types of support” (which is part of the execution of an agreement).
Marketing purposes include “assisting with marketing, advertising, and other communications.”
Having read this, you can understand that they monitor the usage of their website because, like many other large companies, they take user experience seriously and don’t hesitate to use personal data to figure out what a specific user wants from the website.
Also, you could understand that they use tracking tools to serve you with ads with tailored messages that are likely to interest you.
Finally, they have a purpose that serves their legitimate interest (fraud prevention) and some purposes specific to their business (helping merchants find and use apps in the app store).
The next you should check out is what the business needs to fulfill these processing purposes.
Fulfilling each purpose requires the processing of a certain category of personal data. So, now you need to see how data processing types relate to the purposes.
If the business collects your email address to send you a newsletter, then such a category of data relates to the purpose. Without the email address, the business could not ship you the newsletter.
If an app requires access to the photos on your smartphone to provide you with image-editing services, then that data is adequate for the purpose. But if the app requests your geolocation data just to add filters to photos, this is an obvious red flag. That app doesn't need to know where you are at any given moment. It may use the data for something else or sell it for money.
See how Shopify solves the transparency requirement in relation to categories of data:
This table explains what categories of personal data they collect and how they use it.
Some businesses are not as transparent as Shopify, but it doesn’t mean they are not compliant. If you doubt their privacy practices, you can submit a data subject request and have your questions answered.
GDPR forbids businesses from exporting personal data to countries where data protection falls below EU levels unless they have a lawful basis for doing so and, where necessary, implement additional data security measures.
However, you can understand whether the data is being transferred outside of the European Union or not by having a look at the third parties to whom they disclose information.
This image shows some of the third parties they use for data processing. Many of them (and all of those on the image) are headquartered in the United States, making them subject to the US laws and may mean that the data is being transferred to the US. That makes things tricky regarding the GDPR because such transfer requires supplementary protective measures.
GDPR allows businesses to process data only if they have a lawful basis. The lawful bases listed in Article 6 of the GDPR are: the data subject's consent; performance of a contract; compliance with a legal obligation; protection of someone's vital interests; performance of a task carried out in the public interest or in the exercise of official authority; and the controller's legitimate interests.
The two most common lawful bases are the explicit consent and the execution (performing) of a contract.
Businesses usually obtain consent by using a cookie banner that appears on arrival allowing the user to accept or refuse the cookies by clicking on a button.
You need to ensure that the consent you give is freely given, specific, informed, and unambiguous, and that it is as easy to withdraw as it was to give.
Let’s imagine that an online business has collected your email address to deliver a pdf on a subject that interests you. You gave them your email; they sent you the PDF. They also asked if they could send you their weekly newsletter with marketing offers. You ticked the checkbox.
Now they have your email address. You have the PDF and their marketing materials.
They collected and processed your data for executing a contract (sending the pdf) and marketing purposes (mailing the promo newsletter). They do not use the email address for anything else. They use Mailerlite, which is a Lithuanian company with servers in the EU.
This means they have a good purpose for processing the email address, have a lawful basis for doing so, and do not transfer data outside of Europe. That’s compliant with the GDPR and nice privacy practice.
If they upload your email address to the Facebook Lookalike Audience tool and transfer your data to the US… well, that would violate the GDPR and many other similar data protection laws.
If you sign up for Shopify, they will monitor your behavior with Hotjar to see how you use the website and make improvements when they gather enough information about that. They have a useful purpose, a third-party tool to execute on purpose and collect information on your behavior – that is all aligned and a valid privacy practice as long as they obtain your consent for collecting personal information.
If you cannot determine yourself, reach out to a professional. | https://www.privacyaffairs.com/privacy-practices/ |
Socio-economics is the study of the relationship between economic activity and social life. It is a multidisciplinary field that draws on theories and methods from sociology and economics, among other disciplines, in the service of human dignity. Socioeconomists focus on the social impacts and political activities that affect economic change, or the causes that impact a society. The goal of socio-economic study is to bring about improvement in the socioeconomic development environment… Give your opinion or discuss.
Thursday, February 11, 2010
Presidential Proclamation - National African American History Month
The White HouseOffice of the Press SecretaryFor Immediate ReleaseFebruary 01, 2010Presidential Proclamation -- National African American History MonthA PROCLAMATIONIn the centuries since African Americans first arrived on our shores, they have known the bitterness of slavery and oppression, the hope of progress, and the triumph of the American Dream. African American history is an essential thread of the American narrative that traces our Nation's enduring struggle to perfect itself. Each February, we recognize African American History Month as a moment to reflect upon how far we have come as a Nation, and what challenges remain. This year's theme, "The History of Black Economic Empowerment," calls upon us to honor the African Americans who overcame injustice and inequality to achieve financial independence and the security of self empowerment that comes with it.Nearly 100 years after the Civil War, African Americans still faced daunting challenges and indignities. Widespread racial prejudice inhibited their opportunities, and institutional discrimination such as black codes and Jim Crow laws denied them full citizenship rights. Despite these seemingly impossible barriers, pioneering African Americans blazed trails for themselves and their children. They became skilled workers and professionals. They purchased land, and a new generation of black entrepreneurs founded banks, educational institutions, newspapers, hospitals, and businesses of all kinds.This month, we recognize the courage and tenacity of so many hard-working Americans whose legacies are woven into the fabric of our Nation. We are heirs to their extraordinary progress. Racial prejudice is no longer the steepest barrier to opportunity for most African Americans, yet substantial obstacles remain in the remnants of past discrimination. Structural inequalities -- from disparities in education and health care to the vicious cycle of poverty -- still pose enormous hurdles for black communities across America.Overcoming today's challenges will require the same dedication and sense of urgency that enabled past generations of African Americans to rise above the injustices of their time. That is why my Administration is laying a new foundation for long-term economic growth that helps more than just a privileged few. We are working hard to give small businesses much-needed credit, to slash tax breaks for companies that ship jobs overseas, and to give those same breaks to companies that create jobs here at home. We are also reinvesting in our schools and making college more affordable, because a world class education is our country's best roadmap to prosperity.These initiatives will expand opportunities for African Americans, and for all Americans, but parents and community leaders must also be partners in this effort. We must push our children to reach for the full measure of their potential, just as the innovators who succeeded in previous generations pushed their children to achieve something greater. In the volumes of black history, much remains unwritten. Let us add our own chapter, full of progress and ambition, so that our children's children will know that we, too, did our part to erase an unjust past and build a brighter future.NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim February 2010 as National African American History Month. 
I call upon public officials, educators, librarians, and all the people of the United States to observe this month with appropriate programs, ceremonies, and activities.

IN WITNESS WHEREOF, I have hereunto set my hand this first day of February, in the year of our Lord two thousand ten, and of the Independence of the United States of America the two hundred and thirty-fourth.

BARACK OBAMA
Photo: Michelle Obama at the White House
About Me
As Project Director for Confederation Council Foundation for Africa Inc., I aim to provide a platform that opens doors for development around the emerging entrepreneurship agenda among new American migrants. We provide sustainable settlement support on the path to self-sufficiency. Under my leadership, the Foundation intends to fulfil the Immigration Settlement Program and the Millennium Development Goals Strategic Plan of Action. Our mission is to network for start-up businesses, create jobs for talented and skilled individuals, and foster business investments locally and abroad. We engage in straightforward economic and educational networking and empowerment, partnering with the federal government, civic societies, and faith-based and non-profit projects under a one-stop-service plan. The purpose is to encourage more women and youth, who are the engine of change, towards achieving sustainable policy enactment and socio-economic demands, in order to curtail the growing influx of African migration and eventually reduce unwarranted relocation and the suffering of many people.
Next week, several more planes are expected to arrive in Ukraine with medical supplies needed to fight Covid-19.
This was underlined by President Volodymyr Zelensky at a regular meeting with representatives of the Cabinet of Ministers and the main ministries responsible for measures to prevent the spread of the Covid-19 coronavirus infection, the press service of the head of state reported.
“The second aircraft has already brought personal protective equipment, disinfection supplies and necessary medical equipment today. Manufacturers have huge queues for the things necessary to overcome the infection. But for the Ukrainian government they make it a priority, so we don’t have to wait weeks for purchases. Next week, we are awaiting the arrival of several more aircraft,” said Zelensky.
Meeting participants also discussed the readiness of health facilities to receive Covid-19 patients and the equipping of new facilities to monitor those with suspected infection. The President again noted the need to provide the population with disinfectants and protective equipment.
“Every pharmacy should have masks. And not at prices inflated dozens of times, but at a normal and affordable price. And it is important that our doctors are protected, because today they are in fact at the forefront. The National Guard must be protected and, certainly, our soldiers,” said the head of state.
In addition, changes to the state budget were discussed, because in the context of anti-crisis measures, some funding will have to be redirected to prevent the spread of coronavirus infection.
See also: Second plane arrived from China with protective gear for doctors and security officials
It was noted that the meeting was held remotely. It was attended by leaders and representatives of the government, the police, the General Staff, and the National Security and Defense Council.
As noted, a second aircraft from China arrived today at Boryspil Airport, delivering protective equipment for doctors, the police and the military.
In addition, 100,000 PCR tests will arrive on a separate flight overnight.
At the same time, Prime Minister Denys Shmyhal said the government had completed preliminary work on changes to the state budget to create the Stabilization Fund to fight the coronavirus and support people and businesses.
London, Ontario - Adequate nutrition and the provision of specific nutrients are essential for optimal brain and visual development in infancy. Both docosahexaenoic acid (DHA) and the carotenoid lutein are highly concentrated in the brain...
HEALTH ODYSSEY - 88th Canadian Paediatric Society Annual Conference
Solving Common Feeding Difficulties in Paediatrics: A Practical Approach
Québec City, Québec / June 15-18, 2011
Québec City – Feeding difficulties in infants and children are extremely common and physicians need to identify specific types of feeding difficulties in order to counsel parents accordingly. Once identified, resolution of the...
HEALTH ODYSSEY - PEDIATRIC NUTRITION
A Practical Evaluation of the Benefits and Risks of Soy-based Formulas
April 2011
Soy-based formulas have a long history of successful use in the management of cow’s milk allergies. That said, several organizations, including the Canadian Paediatric Society, recommend that parents use extensively hydrolyzed...
HEALTH ODYSSEY - Third International Summit on the Identification and Management of Children with Feeding Difficulties
Current Trends in Identification and Management of Feeding Difficulties in Children
Miami, Florida / April 30-May 1, 2011
Many children with feeding difficulties may actually fall in the normal range of eating behaviours; however, the same presentation may also imply significant medical problems. Causes of feeding difficulties range from picky eating to autism...
HEALTH ODYSSEY - PEDIATRIC NUTRITION
Nucleotide-fortified Formulas Can Boost Neonate Immunity, Gastrointestinal Tract Maturation
January 2011
Human breast milk is the gold standard by which all other sources of infant nutrition are compared. Amongst its many advantages, it offers considerable immune benefits to the newborn baby’s immature immune system. To meet the...
HEALTH ODYSSEY - PRENATAL NUTRITION
Impact of Maternal Nutrition on Fetal Development
December 2010
Nutrition is perhaps the most influential non-genetic factor in fetal development.1 Maternal body composition, nutritional stores, diet, and ability to deliver nutrients through the placenta determines nutrient availability for...
HEALTH ODYSSEY - American Academy of Pediatrics Clinical Report on
The Effects of Early Dietary Interventions on the Development of Atopic Diseases: Updated Advice for Health Care Providers
Editorial: Timothy Vander Leek, MD, FRCPC Assistant Professor of Pediatrics, University of Alberta, Edmonton, Alberta Ten years ago, the American Academy of Pediatrics (AAP) issued recommendations that addressed dietary management for...
HEALTH ODYSSEY - Pediatric Nutrition
Identification and Management of Feeding Difficulties in Children
May 2010
Worldwide, between 30% and 60% of parents believe their children are not eating appropriately. The spectrum of feeding difficulties is broad, ranging from picky eating to autism. Organic disease, infantile anorexia, food allergies, food...
HEALTH ODYSSEY - Diabetes Management Circle
Blood Glucose Self-Monitoring in Diabetes: Identifying and Dismantling Barriers to Adherence
May 2010
It is clearly important to demonstrate to patients with diabetes that good adherence to self-monitoring of blood glucose (SMBG) favourably affects diabetes control. Among the more recent efforts documenting the relationship between...
HEALTH ODYSSEY
Feeding Difficulties in Infants and Young Children: Tailor Interventions to Match Child Behaviours
June 2009
Editorial: Glenn Berall, MD, FRCPC Chief of Pediatrics, North York General Hospital, Assistant Professor of Paediatrics, Division of Gastroenterology, Hepatology and Nutrition, University of Toronto, Toronto, Ontario Feeding difficulties...
Superintendent Kamar Samuels (District 13) has announced the implementation of the International Baccalaureate program in 8 public elementary and 4 public middle schools in District 13 in the next two years. The International Baccalaureate (IB) is a global leader in international education—developing inquiring, knowledgeable, confident, and caring young people. Participating elementary schools are PS 20 in Fort Greene, PS 287 in Downtown Brooklyn, PS 9 in Prospect Heights, PS 56 in Clinton Hill and PS 282 in Park Slope. Middle schools rolling out this new approach to teaching and learning are MS 113 in Fort Greene and MS 301 in Bed Stuy amongst others.
What exactly can we see in terms of implementation in elementary and middle school?
Elementary and middle school teachers will use an inquiry approach to teaching and learning that centers students and fosters intellectual exploration and engagement. Students will be encouraged to achieve multilingualism and cultural openness, both essential skills for 21st century leadership. And learning will take place primarily through a project-based curriculum designed to center cultural responsiveness, celebrate the whole child, highlight individual students’ talents, and challenge our young people to stretch themselves intellectually and develop into the leaders and thinkers of the next generation. Schools will take a global perspective through culturally diverse books; transdisciplinary projects; increased independence through a mastery approach to learning and assessment; and increased student voice through discussion, collaboration and presentation.
What is the timeline for full implementation?
All the above-listed District 13 schools are currently approved IB Candidate Schools, having had their applications reviewed and accepted by the International Baccalaureate Organization. The Department of Education expects candidate schools to work this year and next to develop and fine-tune the necessary deliverables and milestones for accreditation by the end of the 2022-23 school year. The staff in all 12 schools will be undergoing IB training this year, and next year schools will be implementing IB practices, preparing their written applications for final approval, and planning their site visits by a representative of the IBO, who will observe classes and interview staff as part of the final approval process. There is no hard deadline; however, the plan is for a majority of schools to become official IB schools by the 2022-23 academic year. In addition to the common features with the elementary program (the IB Primary Years Program), middle school teachers will collaborate across content areas to develop and implement a project-based curriculum that is less siloed and exposes students to the richness of real-world problem solving. Students in the IB Middle Years Program will work on a culminating 8th grade project through which they can display their creativity, expertise, and communication skills.
What about languages in these IB programs?
More than just “adding a foreign language,” we aim to center the appreciation for and acquisition of language as a core value of our work with students as we prepare them for life and work in a global economy. We believe multilingualism and cultural openness will be essential skills for the leaders of the 21st century. Even more importantly, we know that mastering an additional language enhances the brain’s function; multilingual/multiliterate people consistently demonstrate stronger executive function, critical thinking, and creative problem-solving skills. The development of a schoolwide Language Acquisition Policy is a requirement for accreditation, and the thoughtful incorporation of foreign language into each school’s curriculum is a cornerstone of the IB program. Students will have exposure to foreign language through a world-language approach in elementary schools that do not offer the District’s Dual Language programs. District IB Candidate schools with Dual Language programs include PS 3, PS 9 (Spanish) and PS 20 (French).
Do you see this as a replacement for G&T (Gifted & Talented), or is it a tool to have more families opt into D13 public schools?
Inquiry-based learning is active learning that starts with posing questions, problems or scenarios, rather than presenting established facts or portraying a “smooth path” to knowledge. It is the antithesis of rote learning which focuses on the regurgitation of facts, and instead seeks to cultivate the minds of young people to generate their own questions and devise their own, uniquely brilliant solutions to relevant, interesting, real-world problems. It is the learning approach of choice among the nation’s most elite private schools, and we aim to bring that same culture of intellectual and creative exploration to all students of District 13. An inquiry-based approach honors the complex work of learning. It prioritizes the knowledge and experience that students bring to the classroom and promotes active problem solving, communication, and the shared construction of new ideas. Upon completion of our IB programs, D13 graduates will be self-sufficient, engaged learners, capable of developing their own complex inquiries and demonstrating impressive skills in self-management and critical thinking. Importantly, implementing the International Baccalaureate program in District 13 schools is an equity issue for us. Our shared values of equity and cultural responsiveness are best infused into this program which is a tool employed to raise the instructional standards in all schools. The International Baccalaureate Program has incorporated high standards for all students and is accepted as delivering a high quality education throughout the world.
Which are the 8 elementary schools?
- 13K003
- 13K009
- 13K020
- 13K056
- 13K093
- 13K270
- 13K282
- 13K287
Which are the 4 middle schools?
An asset bubble occurs when the price of an asset rises far above its fundamental value within a short period of time, and later declines. It commonly occurs when many investors take a common interest in a particular asset class such as stocks, commodities, or housing, and invest heavily, over-inflating the price until the market realizes the asset has become massively overpriced. Massive selloffs follow, and the price corrects back toward its true value in the economy. Former Federal Reserve Chairman Alan Greenspan characterized such episodes as “irrational exuberance”.
Causes Of Asset Bubbles
A) Low interest rates
B) Demand-pull inflation
C) Supply shortages
When banks lend money at low interest rates, investors borrow heavily to invest in assets such as Treasury bonds. The extra money circulating in the economy drives down yields on those bonds, which prompts investors to seek other opportunities.
Demand-pull inflation occurs when demand for a commodity exceeds the available supply in the market: a large number of investors chase a limited pool of assets, such as shares, and the economy cannot meet their demand. This causes prices to rise, leading to over-inflation.
Supply shortages lead to asset bubbles when investors have the impression that a certain asset will soon be unavailable in the market, prompting them to buy more of the asset before it is depleted.
The collapse of an asset bubble causes stagnation in the previously accelerated economy, especially when the market declines, and the economy cannot return to where it was before (at least not immediately). This leads to the loss of billions of dollars and the exit of investors as well.
Historical Examples Of Asset/Economic Bubbles
- The Dutch Tulip Bubble – This happened in the 1630s, when the price of tulips skyrocketed to 20 times their normal price within four months, from November 1636 to February 1637, and then plunged 99% within a month (a worked calculation follows this list). Economics professor Earl A. Thompson stated that the price of some tulip bulbs was greater than that of some luxury homes.
- The South Sea Bubble – The South Sea Company was incorporated in 1711 with an interest in trade with the Spanish colonies of South America, for which the British Government assured it of a monopoly. Investors expected the company to excel like the East India Trading Company, and so they bought The South Sea Company’s shares. The share price increased eightfold within six months in 1720, from £128 in January to £1,050 in June, before the company collapsed, causing an economic disruption.
- The Dot-Com Bubble – This happened in the 1990s, when the internet was first widely adopted and dot-com companies reaped huge gains as soon as they went public. The NASDAQ composite index rose from below 500 in January 1990 to a peak of over 5,000 in March 2000. It did not last: the index dropped by 80% by October 2002, contributing to a US recession.
- The US Housing Bubble – When the NASDAQ bubble burst, many investors turned to real estate, thinking it was a safe asset class. According to a report from the US Bureau of Labor Statistics, housing prices rose at a very high rate, nearly doubling between 1996 and 2006, with two-thirds of that increase occurring from 2002 to 2006. Prices levelled off in 2006 and then declined, erasing about one-third of US house values by 2009. The adverse effect of this peak and collapse was a global economic contraction.
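Percentage moves of this size compound in a way that is easy to misread, so it is worth working through the tulip figures quoted above as a quick check:

$$P_{\text{peak}} = 20\,P_{0}, \qquad P_{\text{crash}} = P_{\text{peak}} \times (1 - 0.99) = 0.2\,P_{0}$$

A bulb that had risen to 20 times its normal price therefore ended the crash at roughly one fifth of its pre-bubble value.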
Bitcoin And The Bubble
Bitcoin prices rose tremendously throughout 2017 – interpreted by some as an asset bubble, and by others as the currency simply finding its true value amid growing acceptance and adoption. Prices sometimes decline and later return to their trend.
Kenneth Rogoff, an economics professor at Harvard University, states plainly that Bitcoin’s price bubble will burst in the long run. He bases his argument on the fact that there has been a 600% increase in price over one year and a 1,600% increase over two years. He further notes that, at the moment, a single unit of the virtual currency is worth three times an ounce of gold. He even predicts that the price may rise higher in the next few years. However, this will not last long, as competitors are entering the market, and the entry of other cryptocurrencies will have a great impact on Bitcoin prices.
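Percentage increases of this size read more naturally as multiples; converting the figures quoted above:

$$+600\% \;\Rightarrow\; 1 + 6 = 7\times, \qquad +1600\% \;\Rightarrow\; 1 + 16 = 17\times$$

That is, a sevenfold price rise over one year and a seventeenfold rise over two.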
The Japanese and US governments are putting in place measures to regulate the use of Bitcoin, due to the tax evasion and crime that occurred earlier when criminals laundered money through Bitcoin. Despite competition, the price of Bitcoin rose, which suggests the chance of a Bitcoin bubble bursting in the short run is slim (if there is a bubble at all). The government of China recently restricted Bitcoin exchanges in the country, likely because they posed a political threat to monetary policy.
It is estimated by some that by 2022, the cryptocurrency market could be worth $5 trillion. Currently, there are around 1,193 cryptocurrencies, with a combined value of around $168,946,341,105.
Bitcoin will continue trading, underpinned by its tremendous growth and light regulation. Although any existing bubble may not burst any time soon, the rise of Ethereum, currently growing at 300%, may have a great impact on Bitcoin. Banks in Asia and Europe are also on the verge of introducing digital currencies, which would offer consumers a stable digital currency.
Some experts argue that Bitcoin is in a bubble because the currency’s price exceeds its intrinsic value, and the rate at which the currency is actually used lags its price. Bitcoin’s price has attracted a large pool of investors, and this has led to inflation. However, chances are that later on there could be a major sell-off when investors lose interest and withdraw their investments – which could result in a bubble burst.
Nathan Martin, a writer at the Economic Edge, states that people who buy Bitcoin for speculative purposes, rather than for its original purpose, are more likely to judge Bitcoin’s price as being in a bubble. He further argues that, compared to other assets, Bitcoin is a better store of value and will attract greater numbers of people over time. Also, Bitcoin is not controlled by a central bank, so restrictions such as derivatives and exchange-traded funds cannot be imposed to manipulate the market as happens in stocks, bonds, and real estate, and even in gold and silver.
Bitcoins are stored in cyber ‘vaults’, and the chances of the system being hacked are minimal. Exclusive access to stored bitcoins is given to their owners. Overall, cryptocurrencies are digital currencies with a solid foundation, although they have a long way to go before they can replace conventional payment methods in global commerce.
Note: DinarDirham is not an investment or financial advisor, and should not be taken as such. The purpose of this article is entertainment and information only, and it may not be entirely accurate; please do your own research. Any actions taken by a reader of this post are their sole responsibility. It is generally good advice never to invest more than you are willing to lose.
Project Update from the Curator
08 April 2020
In terms of physical exhibitions, The Place I Call Home project is now over. The evaluation and dissemination of the project content and stories will continue, despite the current global crisis. We will continue to post new content on the website, and our partner at Maraya Art Centre in Sharjah is launching an online photo challenge, as well as video content about the exhibition there and a 3D virtual tour. Additionally, Ffotogallery has now produced a legacy catalogue for The Place I Call Home, bringing together in a single volume content from the seven individual publications which have been distributed free to exhibition visitors, partners, participating artists and young people.
The project’s aim was to create new work exploring shared narratives of people from the Gulf living in the UK and British people living in the Gulf states, and to open up a conversation about identity, culture and future possibilities with audiences. I believe we achieved that well, and together we pulled off a remarkable project in occasionally challenging circumstances.
The resulting exhibition was creative and vibrant, not purely adhering to documentary forms and modalities. It shared visual depictions of cultures, history and heritage and forged new and enriched working relationships between the British Council, Ffotogallery, and the participating artists and arts, cultural and educational institutions.
The exhibition and accompanying programmes facilitated new creative partnerships with other organisations such as Dar Al-Hekma University in Jeddah and Bahrain University, the American University in Kuwait and British Schools in Oman and Bahrain. These are significant partnerships and form the basis for potential work in the future.
Another impact was the Creative Internship scheme – The Place I Call Home led to the recruitment of a Young Curators Group in Derby, UK; three Creative Interns across Kuwait, Qatar, Saudi Arabia and Edinburgh; and five in Bahrain, who assisted with invigilation and de-installations and generated new creative content subsequently shared online through web and social media platforms.
The social media and web platforms remained ‘live’ throughout and had additional content added after the exhibition finished in each country.
Online reach and engagement to date have been impressive, particularly in the context of a photographic exhibition:
38,250 visitors were recorded on the website (www.theplaceicallhome.org) between July 2019 and March 2020
57% of website visitor traffic was from outside the UK with around 43% in the GCC region
75,625 “engagements” were generated, in the form of likes, retweets and comments, across social media platforms such as Facebook, Twitter, Instagram.
There have also been 75,625 views/followers across Ffotogallery’s social media platforms, with total organic reach being 1,356,000.
Given the high profile and exposure the project attained, it has provided an important platform for UK and GCC-based artists and interns, both in terms of the wider international exposure of their work and the opportunities to make new work and to become involved in public programmes.
The project is also timely, not only in terms of shedding light on UK-Gulf relationships, but also in creating opportunities for the work of GCC artists to be exposed beyond the country they are resident in. Over the preceding decade there have been very few solo exhibitions and no group exhibitions in Europe which have featured photographers and lens-based artists from across the GCC region.
As I reflect on the project at home, I have so many good memories and stories, which I will endeavour to share in the coming months and years.
I would like to thank everybody involved in the project, in whatever capacity, for contributing to its success and believing in the value of it. As galleries, theatres, concert halls, libraries, universities, schools and places of worship remain closed around the world, we should take heart from what we have achieved together over the last two years, and what is possible in the future when we finally emerge from the shadow of the current global pandemic.
How parents can help their young children develop healthy social skills
As the new year dawns, parents likely turn their thoughts to their child and new beginnings they may experience as they enter an early childhood education and care centre or preschool. Naturally, it’s a time of reflection on the previous year, and excitement about the possibilities for the new year to come.
Parents might reflect on friendships their child makes in the coming year. Making friends is not always instinctive for a young child. Learning to make friends is part of the social development curriculum in early childhood.
Social development skills are just as important as cognitive skills when learning. In recent studies, positive social skills are highlighted as key predictors for better outcomes in adulthood. It’s important for parents to be aware of ways to ensure positive social development skills in their young child.
Parents can begin by looking for interpersonal skills, such as empathy, listening and communication. These will help your child transition into the next stage of their educational journey.
Making friends through the stages of play
There is a range of research about stages of play a young child engages in when they’re learning to make friends. According to brain development research, a young child begins to develop pathways in their brain for social skills from birth.
According to research, there are six stages of play with associated social skills. These are assessed in the early childhood curriculum. The commonly cited stages, and the skills associated with each, are approximate and to be used as a guide only:

- Unoccupied play: seemingly random movement in young babies that lays the groundwork for later play
- Solitary play: playing alone, building concentration and self-direction
- Onlooker play: watching other children play, learning through observing and listening
- Parallel play: playing alongside others without direct interaction, becoming comfortable in company
- Associative play: beginning to interact with others during play, practising sharing and communication
- Cooperative play: playing with others towards a shared goal, using turn taking, negotiation and empathy.
Understanding some of these key indicators of the social skills required for play will help you consider your child’s ability. Take time to observe your young child’s social interactions in a range of settings. Watch them at home, with family and friends, as well as in their preschool or early childhood education and care centre. This may help you determine if your child is engaging socially during play to make friends.
What’s next?
When a child moves from one educational setting to another, we call this movement a transition. Positive social development skills are an asset for your child during this time. Educators at both educational settings will work in partnership with you, and each other, to make sure the transition is as smooth as possible.
Essentially there are some key indicators which will help children during transitions: self-care, separating from parents, growing independence, and readiness to learn. As parents you can:
- familiarise your child with the new environment
- engage in active listening as your child expresses their thoughts and feelings about starting in a new learning environment
- ensure children start the new year with all required equipment recommended by the centre or school
- arrange to meet other people starting in the new year, and practice turn taking, listening, asking questions and asking for help before the new year begins.
This will support development of social skills for your young child and help them make new friends more readily. | |
Dark blue color represents the center of each cell. Brown-red color shows Uc.416+A. The top panel shows cancerous stomach cells with high levels of Uc.416+A, but none is detected in healthy cells in the bottom panel.
Researchers at Hiroshima University have opened the door to finding a new class of cancer-causing genetic variations.
Using a combination of pre-existing electronic databases and their own experiments with cancerous and healthy cells, researchers linked stomach (gastric) and prostate cancer to a specific type of DNA called transcribed-ultraconserved regions (T-UCRs). This approach will likely reveal more links between T-UCRs and other cancers in the future.
Modern research studies, like this one led by Professor Wataru Yasui, the Dean of the Institute and Graduate School of Biomedical & Health Sciences at Hiroshima University, are enhancing traditional understandings of cancer genetics.
The human genome is made of lots of DNA, but only some of that DNA forms the genes that become the proteins that make up our bodies. When the human genome was first mapped, sometimes DNA that wasn't part of a gene was referred to as "junk DNA." However, scientists soon realized that DNA that is not part of a gene is often important for controlling how genes make proteins inside the cell.
One type of DNA important for controlling genes is called a Transcribed-Ultra Conserved Region (T-UCR). These stretches of DNA are believed to be extremely important for gene regulation because all 481 of them are identical in humans, mice, and rats. This evolutionary conservation often indicates that the DNA has an essential function.
Based on previous research, Prof. Yasui's team chose to focus on certain T-UCRs already believed to be important for either stomach (gastric) or prostate cancer. They measured how cancer cells grew in a dish after drug treatments. Based on their knowledge of how the drugs work inside the cell and their careful observations of how the cells behaved, the scientists could deduce how the T-UCRs influence cancer growth. More interestingly, the research team could also begin to understand how the T-UCRs themselves are controlled by other parts of DNA.
Several T-UCRs are regulated by the number of extra chemical groups attached to specific points of other DNA called promoter regions. The presence of these groups on DNA forms a code referred to as epigenetic methylation. Too much methylation on the promoter regions that control the T-UCRs means the T-UCRs are unable to control the excessive cell growth that causes cancer.
Prof. Yasui's team identified multiple T-UCRs controlled by the epigenetic methylation of their promoter regions. One of those T-UCRs, called Uc.158+A, had never before been described by any cancer scientists.
However, the researchers also found one T-UCR, called Uc.416+A, that was not affected by drugs that alter epigenetic methylation, hinting at a different style of regulation. Prof. Yasui's team then searched through two different electronic databases and found another molecule that they predicted would interact with Uc.416+A.
After another set of cellular experiments, the researchers discovered relationships between Uc.416+A and multiple other molecules inside the cell. Understanding how these molecules and Uc.416+A influence each other allowed Prof. Yasui's team to identify a potential regulatory pathway of stomach (gastric) cancer. Two of those molecules are already known by cancer scientists to be important in controlling how fast cells multiply. However, Hiroshima University researchers are the first to connect Uc.416+A to this regulatory pathway.
Applying the approach of searching multiple scientific databases and performing cellular experiments should allow researchers to identify T-UCRs relevant in other cancers and will hopefully lead to increased cancer therapies and preventions. Prof. Yasui plans to continue this work at Hiroshima University and his team is currently investigating methods for measuring T-UCR levels in blood samples as a potential future cancer diagnostic test.
The above post is reprinted from materials provided by Hiroshima University. Note: Materials may be edited for content and length.
Disclaimer: DoveMed is not responsible for the adapted accuracy of news releases posted to DoveMed by contributing universities and institutions.
Primary Resource:
Goto, K., Ishikawa, S., Honma, R., Tanimoto, K., Sakamoto, N., Sentani, K., ... & Yasui, W. (2015). The transcribed-ultraconserved regions in prostate and gastric cancer: DNA hypermethylation and microRNA-associated regulation. Oncogene.
AVA’s training equality and diversity policy
AVA complies with all legal obligations under The Equality Act 2010. AVA, comprising the Board of Trustees and its employees, is committed to providing equality of opportunities.
How do we ensure attendees to any of our training courses, workshops or events are treated equally and have equal opportunity to participate fully?
Delegates are asked upon booking their place on a training/event whether they have any special requirements. This may include medical requirements, access needs, mobility issues etc. When a requirement is declared, AVA takes all reasonable measures necessary to ensure that the learner is accommodated.
AVA’s training and events venues are always accessible to individuals with mobility or access needs.
AVA ensures provision can be made for learners of faith whether dietary provision, providing prayer provision or ensuring dates do not prohibitively coincide with major religious festivals.
Reasonable adjustments will be made by AVA for learners who have physical or learning disabilities, including the production of large-print materials, interpreters, hearing loop provision, etc.
Learners of any sex, sexual orientation or gender reassignment status are all welcome on AVA’s courses.
All AVA trainers are asked to take responsibility for the promotion of respect, equality and diversity in the delivery of all training sessions as well as encouraging delegates to do the same with each other. It is essential to challenge behaviours and opinions where necessary whilst maintaining an open and informal environment.
Trainers are asked to set up group agreements at the beginning of training; ensuring that they are providing a safe and secure space for delegates.
How do we ensure the content of training courses, workshops or events incorporate diversity?
AVA is aware that for many of our learners English is not their first language, so our training courses aim to use language that is as simple as possible and refrain from using English-language colloquialisms.
As our learners require no formal educational qualifications to attend our courses, we are aware that literacy levels can vary. Our training courses aim to use language that is as simple as possible and to provide clear instructions using a variety of methods. Where possible, written material will be discussed or expressed verbally in the training.
AVA ensures that our examples and case studies used in our courses demonstrate the diversity of clients that practitioners work with.
AVA ensures that any directories we provide include a wide range of organisations that provide support and advice to diverse groups.
How do we ensure we plan and design our assessed courses to give every learner an equal opportunity to pass the course?
AVA staff ensure various learning styles are incorporated into our training courses.
AVA uses differentiation techniques where possible; providing different learners with different tasks to best suit them whilst still meeting the same criteria.
AVA’s resubmissions policy allows learners to resubmit work up to three times (excluding the original) without it compromising the assessment.
Course support sessions are provided so that learners can raise issues with their tutor, throughout the duration of the course.
Assessment tasks are flexible and thus can be changed to accommodate a learner’s needs, e.g. a learner could complete all assessments orally.
What safeguards does AVA have in place to ensure the above actions are taking place?
AVA provides an evaluation form and diversity monitoring form for all its courses, events and workshops to identify any issues with the provision of equal opportunities and reaching a diversity of individuals.
Trainers and assessors are
- measured on their awareness of equality and diversity issues within recruitment procedures
- bound by AVA’s equality and diversity policy and non-adherence to the policy is a disciplinary offence
- involved in work which requires knowledge of equality and diversity issues and legislation change on a regular basis
- invited to attend an annual workshop on equalities and diversity issues held internally by AVA.
AVA has an organisational equality & diversity policy that includes sections on
- Adhering to legislation
- Recruitment
- Treatment of employees
- Our service
- Services we support
- Trainers and delegates
All assessed courses are internally verified to ensure that
- Learning materials include diverse groups and diverse learning styles
- Assessment tasks incorporate differentiation
- Learners have been assessed equally
- Any issues on equality and diversity have been recorded and actioned
- Our complaints policy is accessible
- Delegates have been encouraged to treat each other equally
AVA reserves the right not to provide services to clients who act in ways which contravene this policy and infringe the rights of others.
AVA’s service users who wish to complain about the operation of this policy are asked to request a Complaints Record Form from the Training & Events Coordinator.
AVA’s assessed courses are accredited by Open College Network (London Region).
Received: 12 September 2003 / Accepted: 3 November 2003
Deep far-infrared (FIR) imaging data obtained with ISOPHOT at , , and detected the thermal emission from cold dust in the northern shell region of NGC 5128 (Centaurus A), where previously neutral hydrogen and molecular gas have been found. A somewhat extended FIR emission region is present in both the and map, while only an upper flux limit could be derived from the data. The FIR spectral energy distribution can be reconciled with a modified blackbody spectrum with very cold dust color temperatures and emissivity indices in the range K and , respectively, where the data favor the low temperature end. A representative value for the associated dust mass is , which together with the HI gas mass gives a gas-to-dust ratio of ≈300, close to the average values of normal inactive spiral galaxies. This value, in conjunction with the atomic to molecular gas mass ratio typical for a spiral galaxy, indicates that the interstellar medium (ISM) from the inner part of a captured disk galaxy is likely the origin of the outlying gas and dust. These observations are in agreement with recent theoretical considerations that in galaxy interactions leading to stellar shell structures the less dissipative clumpy component of the ISM from the captured galaxy can lead to gaseous shells. Alternatively, the outlying gas and dust could be a rotating ring structure resulting from an interaction or even late infall of tidal material of a merger in the distant past. With all three components (atomic gas, molecular gas, dust) of the ISM present in the northern shell region, local star formation may account for the chains of young blue stars surrounding the region to the east and north. The dust cloud may also be involved in the disruption of the large scale radio jet before entering the brighter region of the northern radio lobe.
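For readers unfamiliar with the model behind such fits, a modified blackbody describes optically thin dust emission as S_ν ∝ ν^β B_ν(T_d). Below is a minimal Python sketch of that spectrum; the temperature, emissivity index and wavelengths are placeholder values for illustration, not the fitted results of this paper.

```python
import numpy as np

H = 6.626e-34    # Planck constant (J s)
K_B = 1.381e-23  # Boltzmann constant (J/K)
C = 2.998e8      # speed of light (m/s)

def planck_nu(nu_hz, t_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return (2.0 * H * nu_hz**3 / C**2) / np.expm1(H * nu_hz / (K_B * t_k))

def modified_blackbody(nu_hz, t_k, beta, scale=1.0):
    """Optically thin dust spectrum: S_nu = scale * nu^beta * B_nu(T)."""
    return scale * nu_hz**beta * planck_nu(nu_hz, t_k)

# Placeholder values: very cold dust (T_d = 15 K, beta = 2),
# evaluated at far-infrared wavelengths of 100, 150 and 200 microns.
nu = C / np.array([100e-6, 150e-6, 200e-6])
print(modified_blackbody(nu, 15.0, 2.0))
```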
Key words: galaxies: individual: NGC 5128 / galaxies: elliptical & lenticular, cD / galaxies: intergalactic medium / infrared: general / infrared: galaxies
© ESO, 2004
The terms RMR (Resting Metabolic Rate) and BMR (Basal Metabolic Rate) are often used interchangeably. But do they really mean the same thing? Are they measured the same way? What are they trying to measure?
Metabolic rate represents the number of calories needed to fuel ventilation, blood circulation, and temperature regulation. Calories are also required to digest and absorb consumed food and fuel the activities of daily life. Or put another way, metabolic rate is an estimate of how many calories you would burn if you were to do nothing but rest for 24 hours. It represents the minimum amount of energy required to keep your body functioning.
BMR is synonymous with Basal Energy Expenditure or BEE. BMR measurements are typically taken in a darkened room upon waking after eight hours of sleep, 12 hours of fasting to ensure that the digestive system is inactive, and with the subject resting in a reclined position.
RMR is synonymous with Resting Energy Expenditure or REE. RMR measurements are typically taken under less restricted conditions than BMR and do not require that the subject spend the night sleeping in the test facility prior to testing.
Both BMR and RMR are measured by gas analysis through either direct or indirect calorimetry, although a rough estimate of RMR can be obtained from an equation using age, sex, height, and weight: the Mifflin-St. Jeor equation. You can also find the equation online at: calculate your daily caloric needs.
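As a quick illustration, here is a minimal Python sketch of that estimation. The coefficients are those of the published Mifflin-St. Jeor equation; the example subject is hypothetical.

```python
def mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Estimate resting metabolic rate (kcal/day) with the Mifflin-St. Jeor equation."""
    # Common terms: 10 kcal per kg of weight, 6.25 kcal per cm of height,
    # minus 5 kcal per year of age.
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years
    # Sex-specific constant: +5 for males, -161 for females.
    return base + 5.0 if sex == "male" else base - 161.0

# Hypothetical subject: a 35-year-old male, 80 kg, 180 cm tall.
print(mifflin_st_jeor(80, 180, 35, "male"))  # 1755.0 kcal/day
```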
So, are RMR and BMR the same? Mostly, except that BMR measurements are going to be slightly more accurate. However, both play a role in the process of weight loss and weight maintenance.
Information About Cats
Information about cats wouldn’t be complete without asking some basic questions. Before I do though you might like to read about animal trivia, which has some nice cat-stats.
What is the origin of the domestic cat?
Wildcats comprise 5 subspecies (see below). Of these 5 subspecies, recent analysis of DNA tells us that the domestic cat is a domesticated form of the Near Eastern wildcat (Felis silvestris lybica).
The earliest evidence of domestication was found in a 9,500 year old grave in Cyprus, where a cat was buried next to a human. The wild cat had “come in from the cold” to eat rodents infesting the grain stores of humans and a mutually beneficial arrangement began.
Scientific Classification:
The domestic cat was named Felis catus by Carolus Linnaeus in his Systema Naturae of 1758. Johann Christian Daniel von Schreber named the Wildcat Felis silvestris in 1775.
The International Commission on Zoological Nomenclature confirmed in 2003 Felis silvestris for the wildcat and Felis silvestris catus for the domestic cat.
What is the current position regarding the domestic cat?
Information about cats briefly looks at humans’ impact on the cat. As an example, the wildcat has almost been eradicated from the UK by humans. The wildcat found sanctuary in Scotland, where there are now 400 left (as of 2007). The number of domestic cats in the UK is estimated to be about 7.7 million. There are an estimated 100 million in the Western world and 500 million worldwide.
Once the wildcat became domesticated, things changed for the domestic cat over the intervening 9,500 years. The appearance of things matters greatly to humans, who also like to classify things and possess them.
The result is that humans have bred cat types that they like the look of (to increase their numbers) and have created other cat breeds that they find attractive (and commercially viable).
Which are the major the cat registries?
There are 3 major cat registries, 2 in North America and one in the UK. The Cat Fanciers’ Association (CFA) is currently the world’s largest registry of pedigreed cats. It recognises 39 breeds.
The International Cat Association (TICA) is the 2nd largest in North America. The Governing Council of the Cat Fancy (GCCF) is the primary governing body of the Cat Fancy in the United Kingdom. (The Federation Internationale Feline is a 4th and is “a leading international cat fancier society”. It is based in Luxembourg).
The popularity of breeds differs from country to country and the number of registrations not only depends on popularity of a breed but whether the registry will register the breed. However, the charts below give a clear indication as to the popularity of a breed.
What is the current position regarding the major cat registries?
Overview:
The CFA is declining in numbers registered – why? Probably due to registration policy: it does not recognise breeds with wild blood (e.g. the Bengal), which TICA accepts, so TICA is growing in numbers registered. The CFA also insists upon mandatory inspections of high-volume breeders, and it is heavily reliant on Persian breed registrations, which are declining.
TICA has almost doubled in size over the past 10 years, probably due to a liberal registration policy that includes supporting new breeds. Cat breeders will like this as they seek new business and new breeds.
The GCCF is doing fine and heading towards being the dominant cat registry.
The decline in kitten registrations with the CFA is compensated for by increased registrations with TICA. Analysis of information about cats provided by the main registries allowed the following charts to be made.
2004 registrations by Registry
(I am researching more recent figures)
CFA registrations
2004 (I am still looking for more recent figures)
You can see a list of the CFA breeds by clicking on this link.
GCCF registrations
for 2006 (most recently available)
TICA registrations
for 2005 (most recently available as far as I can see).
Information about cats relied on the following reference sources:
- The Mammal Society
- The Scottish Wildcat Association
- The Cat Fanciers Association
- Governing Council of the Cat Fancy.
- The Journal Science, through a Times article
- The Federation Internationale Feline
There’s a lot more information about cats on the other pages.
Registered nurses Nancy Halstead and Emilie Gordon are using their skills to support technology that links Waterloo Region patients to specialists in other cities.
Emilie and Nancy care for patients who visit GRH to undergo telemedicine appointments. When a specialist is not available locally, Emilie and Nancy will bring the patient to GRH, arrange specialized videoconferencing technology and provide clinical support such as conducting physical exams as appointments take place.
They also use a specialized camera which provides magnification and detailed close up views of wounds or anatomy based on a physician’s needs.
Through technology provided via the Ontario Telemedicine Network (OTN), patients are connecting with specialists all over the province. Emilie and Nancy are active partners in helping patients become comfortable with these types of appointments, and making sure they get the care they need.
What have been your main areas of expertise in nursing care?
Emilie: I have been a nurse for 15 years, all here at Grand River Hospital. I started out in the emergency department, moved to the post-anesthetic care unit and most recently have been working in the intensive care unit and as member of the critical care response team.
Nancy: I have also been a nurse here at Grand River Hospital for 15 years. I started out in the medicine program before moving to the stroke program where I worked as a floor nurse for 10 years and then resource nurse for two years. Since 2015, I have been working in the secondary stroke prevention and telemedicine clinics.
What drew you to telemedicine?
Emilie: We like the ability to use technology to help patients achieve their health care goals. Telemedicine is able to remove barriers to care such as, geographical distance.
Nancy: Technology is being used more and more in health care and telemedicine is the next evolution in delivering patient-centered care.
Who would need to access telemedicine at GRH?
Emilie: Patients who live in a rural community, who would otherwise have to travel long distances to see a specialist, are able to connect virtually with their care providers. This results in improved access to care and more efficient care delivery, and promotes collaboration between providers.
What do patients expect when they hear they have a telemedicine appointment?
Nancy: Patients can expect that they will still see their health care provider but virtually. With the capabilities of teleconferencing technology, patients can see and hear their provider much the same way they would if they were seeing them face to face.
As nurses, we help conduct physical exams and assessments depending on the needs of the consultant on the other end of the call.
How do you think nurses make telemedicine work better for patients?
Emilie: Nurses are able to reach patients in remote areas or who have mobility needs. We are able to monitor the patient’s condition and interact with them just as we would if we saw them face-to-face.
Our scope of practice doesn’t change with telemedicine. We act as advocates and a liaison between the patient and the health care provider. We are able to replicate the traditional face-to-face visit.
After a patient has used a telemedicine appointment, how do they describe their experience?
Nancy: After a brief discussion about what to expect, and a few minutes into the appointment, patients are very satisfied with their telemedicine visit. They appreciate the visit because it saves them from potentially day-long excursions to a larger centre for an hour-long appointment. Their questions and concerns are addressed, and any follow-up can be done locally.
How has telemedicine changed over the years?
Emilie: Telemedicine has evolved to keep pace with emerging technologies. Care providers can connect to their patients using their own laptops and even connect with their smartphones. It’s not just physicians using telemedicine… nurses, nurse practitioners, social workers and dietitians are able to connect with patients.
How do you see telemedicine changing in the years to come?
Nancy: I think we will see a change in the way we deliver care to patients. It’s very beneficial if patients can potentially see their health care provider in either their local family health team or in the comfort of their own home.
"Nematic Polar Anchoring Strength Measured by Electric Field Techniques" by Yuriy A. Nastishin, R. D. Polak et al.
We analyze the high-electric-field technique designed by Yokoyama and van Sprang [J. Appl. Phys. 57, 4520 (1985)] to determine the polar anchoring coefficient W of a nematic liquid crystal at a solid substrate. The technique implies simultaneous measurement of the optical phase retardation and capacitance as functions of the applied voltage well above the threshold of the Frederiks transition. We develop a generalized model that allows for the determination of W for tilted director orientation. Furthermore, the model results in a new high-field technique (referred to as the RV technique), based on the measurement of retardation versus applied voltage. W is determined from a simple linear fit over a well-specified voltage window. No capacitance measurements are needed to determine W when the dielectric constants of the liquid crystal are known. We analyze the validity of the Yokoyama–van Sprang (YvS) and RV techniques and show that experimental data in real cells often do not follow the theoretical curves. The reason is that the director distribution is inhomogeneous in the plane of the bounding plates, while the theory assumes that the director is not distorted in this plane. This discrepancy can greatly modify the fitted value of 1/W, and even change its sign, thus making the determination of W meaningless. We suggest a protocol that allows one to check if the cell can be used to measure W by the YvS or RV techniques. The protocol establishes new criteria that were absent in the original YvS procedure. The results are compared with other data on W, obtained by a threshold-field technique for the same nematic-substrate pair.
Copyright 1999 American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics. The following article appeared in J. Appl. Phys. 86, 4199 (1999) and may be found at http://dx.doi.org/10.1063/1.371347. | https://digitalcommons.kent.edu/cpippubs/109/ |
This paper shows how the cross-equation restrictions implied by dynamic rational expectations models can be used to resolve the aliasing identification problem. Using a continuous time, linear-quadratic optimization environment, this paper describes how the resulting restrictions are sufficient to identify the parameters of the underlying continuous time process when it is known that the true continuous time process has a rational spectral density matrix.
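For readers unfamiliar with the aliasing problem, it is easy to demonstrate numerically why discrete observations alone cannot pin down a continuous time process: frequencies that differ by a multiple of 2π divided by the sampling interval are observationally equivalent. A minimal sketch (illustrative only, not from the paper):

```python
import numpy as np

dt = 0.25                      # sampling interval
w1 = 2.0                       # one continuous-time frequency (rad per unit time)
w2 = w1 + 2 * np.pi / dt       # an observationally equivalent alias
t = np.arange(50) * dt         # discrete sampling times

# The two processes agree at every sample point, so discrete data alone
# cannot distinguish them -- hence the need for identifying restrictions.
print(np.allclose(np.sin(w1 * t), np.sin(w2 * t)))   # True
```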
Commodity money is modeled as one or two of the capital goods in a one-consumption good and one or two capital-good, overlapping generations model. Among the topics addressed using versions of the model are (i) the nature of the inefficiency of commodity money; (ii) the validity of quantity-theory predictions for commodity money systems; (iii) the circumstances under which one commodity emerges naturally as the commodity money; (iv) the role of inside money (money backed by private debt) in commodity money systems; and (v) the circumstances under which a government can choose the commodity to serve as the commodity money.
This paper surveys recent issues in macroeconomics from the viewpoint of dynamic economic theory. The need to look beyond demand and supply curves and the insights that come from doing so are emphasized. Examples of issues in debt management and fiscal policy are analyzed.
This paper proposes a method for estimating the parameters of continuous time, stochastic rational expectations models from discrete time observations. The method is important since various heuristic procedures for deducing the implications for discrete time data of continuous time models, such as replacing derivatives with first differences, can sometimes give rise to very misleading conclusions about parameters. Our proposal is to express the restrictions imposed by the rational expectations model on the continuous time process generating the observable variables. Then the likelihood function of a discrete time sample of observations from this process is obtained. Parameter estimates are computed by maximizing the likelihood function with respect to the free parameters of the continuous time model.
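As an illustration of the general idea (not the paper's model), consider estimating a scalar continuous time Ornstein–Uhlenbeck process from discrete observations. Its exact discrete time implication is an AR(1) with coefficient exp(-κΔ), so the likelihood of the sampled data can be maximized directly in the continuous time parameters; naively replacing the derivative with a first difference would instead bias κ. A hedged sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Exact discrete-time implication of the continuous-time OU process
#   dX = -kappa * X dt + sigma dW,
# sampled every dt:  X[t+1] = phi * X[t] + eps,  phi = exp(-kappa * dt),
#   eps ~ N(0, sigma^2 * (1 - phi^2) / (2 * kappa)).

def neg_log_lik(params, x, dt):
    kappa, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    phi = np.exp(-kappa * dt)
    var = sigma**2 * (1 - phi**2) / (2 * kappa)
    resid = x[1:] - phi * x[:-1]
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a sample path, then recover the continuous-time parameters.
rng = np.random.default_rng(0)
kappa_true, sigma_true, dt, n = 0.5, 1.0, 0.25, 4000
phi = np.exp(-kappa_true * dt)
sd = np.sqrt(sigma_true**2 * (1 - phi**2) / (2 * kappa_true))
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sd * rng.standard_normal()

res = minimize(neg_log_lik, x0=[1.0, 0.5], args=(x, dt), method="Nelder-Mead")
print(res.x)  # should be close to (kappa, sigma) = (0.5, 1.0)
```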
This paper reconsiders the aliasing problem of identifying the parameters of a continuous time stochastic process from discrete time data. It analyzes the extent to which restricting attention to processes with rational spectral density matrices reduces the number of observationally equivalent models. It focuses on rational specifications of spectral density matrices since rational parameterizations are commonly employed in the analysis of the time series data. | https://researchdatabase.minneapolisfed.org/catalog?f%5Bcreator_sim%5D%5B%5D=Sargent%2C+Thomas+J.&f%5Bresource_type_sim%5D%5B%5D=Research+Paper&locale=en&page=2&per_page=10&sort=date_created_ssi+desc&view=gallery |
For fans of the 1977 landmark science fiction film Star Wars, few if any scenes are as iconic or as breathtaking as Luke Skywalker gazing into the Tatooine evening sky at a pair of alien suns setting on the horizon.
Now, using a newly developed mathematical framework, a team of researchers from New York University Abu Dhabi and the University of Washington has shown that, contrary to previous thinking, binary star systems (systems with two stars orbiting each other) just like the one that captured audiences’ imaginations a long, long time ago may indeed be home to habitable, Earth-like planets after all.
Background: A New Hope
Published in the journal Frontiers in Astronomy and Space Sciences, the researchers analyzed nine binary star systems previously identified by the Kepler mission as hosting giant planets at least as large as Neptune.
“Life is most likely to evolve on planets located within their system’s Habitable Zone, just like Earth,” said study co-author Dr. Nikolaos Georgakarakos, a research associate from the Division of Science at New York University Abu Dhabi, in the press release announcing the results.
Of course, many of the over 4,000 already-identified exoplanets orbit a single star, including a group of rocky, Earth-like planets orbiting the star Trappist-1. Four of those appear to lie within their host star’s habitable zone, where liquid water could theoretically exist on the planet’s surface, already making them targets for future NASA missions.
Still, regardless of star system type, most of the alien worlds we have discovered are giants, meaning they likely have little to no chance of harboring life, at least not life as we know it.
Therefore, given the many challenges already present in searching for a theoretical “second Earth,” the folks who undertake these types of studies have mostly avoided searching for planets around binary star systems like the one Luke Skywalker called home. Instead, they focused on systems with only one star like our own solar system, where we know at least one planet harbors life.
Analysis: Can Binary Star Systems Support Life?
Several factors, including the effects of two stars’ worth of gravity and radiation, led to this determination. However, the study authors note, “binary systems are common, estimated to represent between half and three-quarters of all star systems.”
It was this surplus of star systems previously deemed unlikely to harbor habitable planets, specifically binary systems previously determined to have at least one giant planet in orbit, that led the research team to take another look, ultimately zeroing in on nine systems identified by Kepler as fitting this particular type.
Once those candidates were selected, the team had to consider each star’s type, mass, luminosity, and most notably, stellar radiation and gravity.
“Some of the challenges in assessing habitability in binary star systems arises from the fact that one has to account for two sources of radiation, possibly of different spectral type,” the study explains. “The second star provides an additional source of radiation, and more importantly, it is also a source of gravitational perturbations for the planetary orbit.”
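To see why the second star matters, note that the total instellation a planet receives is simply the sum of each star's flux at the planet's distance. The toy sketch below uses made-up luminosities, distances, and approximate habitable-zone flux limits; real models, including this study's, also weight each star's flux by its spectral type and track the time-varying binary and planetary orbits.

```python
import numpy as np

# Hypothetical two-star instellation check (all numbers illustrative).
L1, L2 = 0.85, 0.20   # stellar luminosities in solar units (assumed)
d1, d2 = 1.10, 1.25   # planet-star distances in AU at some instant (assumed)

# Combined instellation in Earth units (Earth receives S = 1 from the Sun).
S = L1 / d1**2 + L2 / d2**2

# Rough habitable-zone flux limits for a Sun-like star; published estimates
# vary, and they shift with stellar spectral type.
S_inner, S_outer = 1.1, 0.35
print(S, S_outer <= S <= S_inner)
```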
Additional elements were also factored into the team’s habitability calculation, including the gravitational effects from the system’s giant planet, any theoretically habitable planet’s orbital eccentricity and orbital period, and even something the researchers termed a planet’s “climate inertia,” defined as “the speed at which the atmosphere responds to changes in irradiation.”
Once the team had accounted for all of these variables and applied them to the data from the nine Kepler binary star systems with known giant planets, the results indicated that five of those nine systems could support habitable zones friendly to extraterrestrial life.
“By applying this methodology to Kepler-16, Kepler-34, Kepler-35, Kepler-38, Kepler-64, Kepler-413, Kepler-453, Kepler-1647, and Kepler-1661,” the study explains, “we demonstrate that the presence of the known giant planets in the majority of those systems does not preclude the existence of potentially habitable worlds.”
Specifically, the team concludes of the five worthiest candidates, “Kepler-34, Kepler-35, Kepler-38, Kepler-64, and Kepler-413 seemed more promising [for habitability], with Kepler-38 being the best candidate in this respect.”
Outlook: Binary Star Systems Will Be All The Rage
Of course, as one of the study’s co-authors points out, astronomers and astrobiologists still know so little about potential life outside our planet that even such would-be Jedi Knights may not necessarily be restricted to living within our definition of a habitable zone.
“There is the possibility that life exists outside the habitable zone or on moons orbiting the giant planets themselves,” says co-author Dr. Siegfried Eggl from the University of Washington, while also noting that “that may be less desirable real-estate for us.”
As with all exoplanet research, more data will be needed to determine if any potentially habitable planets exist in any of the five most favorable Kepler star systems. But until then, the researchers behind this study feel they have clearly proven binary star systems are not inherently uninhabitable, even if they already have a giant planet in their orbit, and therefore make viable targets in the search for another world like our own. | https://thedebrief.org/five-new-potentially-habitable-zones-within-tatooine-like-binary-star-systems/ |
Some of the findings from our recent research studies are posted below. The results have been summarized and are intended to provide the general public with some basic information about our research interests and how well-being, stress and health are related. More comprehensive and detailed results can be found in the published articles listed under publications and in the conference posters produced by PSYCH 492 classes.
Between-Person Versus Within-Person Change
It is well-known that different people will score differently on various measures of well-being, which include:
- ​autonomy
- competence
- personal growth
- purpose
- relatedness
- self-acceptance
- engagement
- vitality
- positive and negative affect
- life satisfaction
- However, these scores also change within individuals over time, influenced by daily events and experiences
- Most research on well-being asks one global question at one period in time, but this may not yield the most accurate measurement of a person’s level of well-being. It is more precise to ask several questions on several different occasions to see their true level
- Measuring well-being on the individual level also shows us how well-being can change on a daily basis, and the types of factors that are associated with this change. Identifying such factors enables us to understand how to help people maximize well-being in their lives and experience optimum quality of life. A simple way to separate these two levels of variation is sketched below
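A common first step in this kind of daily-diary analysis is to split each person's scores into a between-person part (their own average across days) and a within-person part (each day's deviation from that average). A minimal sketch with hypothetical data, not drawn from the studies above:

```python
import pandas as pd

# Hypothetical daily-diary data: one well-being score per person per day.
df = pd.DataFrame({
    "person": ["a", "a", "a", "b", "b", "b"],
    "day": [1, 2, 3, 1, 2, 3],
    "wellbeing": [3.0, 4.0, 5.0, 6.0, 6.5, 7.0],
})

# Between-person component: each person's average across days.
df["person_mean"] = df.groupby("person")["wellbeing"].transform("mean")
# Within-person component: today's deviation from one's own average.
df["daily_dev"] = df["wellbeing"] - df["person_mean"]
print(df)
```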
Factors Affecting Well-Being
A. Personality
- ​The five-factor model of personality includes the dimensions of openness to experience, conscientiousness, extraversion, agreeableness and neuroticism. Personality, as measured by this model, is considered a stable trait with little variation over the lifespan
- ​Research studies with University of Victoria undergraduate students showed an association between some personality traits and well-being; for example, people who score highly on neuroticism are more likely than others to have lower well-being scores
- ​One's feelings of negative affect, life satisfaction, self-acceptance and autonomy on a given day are influenced by the previous day’s levels and experiences. However, personality can moderate this effect – people who score highly on agreeableness are quicker to recover from negative affect than others (a sketch of this kind of moderation analysis follows Figure 1).
Figure 1. Agreeableness moderates the effect of yesterday’s negative affect on today’s negative affect. That is, people who score highly on agreeableness are less likely to have two days in a row with a poor mood than individuals who have lower agreeableness.
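Analytically, a buffering effect like this is usually tested with an interaction term in a lagged regression. The sketch below uses simulated data with assumed variable names; a real daily-diary analysis would instead use multilevel models with days nested within persons, which is not shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
agree = rng.normal(0.0, 1.0, n)    # trait agreeableness (standardized, simulated)
na_lag = rng.normal(0.0, 1.0, n)   # yesterday's negative affect (simulated)
# Today's negative affect: carryover from yesterday weakens as agreeableness rises.
na = 0.5 * na_lag - 0.3 * na_lag * agree + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({"na": na, "na_lag": na_lag, "agree": agree})
model = smf.ols("na ~ na_lag * agree", data=df).fit()
print(model.params)  # the na_lag:agree coefficient should be near -0.3
```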
​B. Stress
- Stress is known to negatively affect several aspects of cognition by creating cognitive interference, or intrusive, off-task thoughts that interfere with normal task-oriented thinking
- People who report that they have many stressors and/or high stress severity are more likely to also report low positive affect and high negative affect
- Work with University of Victoria students showed a negative relationship between multiple aspects of well-being and stress severity
- Such research demonstrates the importance of monitoring and coping with stress to increase positive affect and feelings of well-being. Health care staff and caregivers can help older adults find effective coping strategies to maintain or improve quality of life
Figure 2. Results showed that participants who reported high stress severity scored highly on cognitive interference and negative affect (poor mood).
​C. Lifestyle
- A variety of lifestyle and health determinants are likely to account for individual differences and within-person variation in affect and well-being
- In a daily survey study, University of Victoria undergraduate students were asked questions about sleep, physical activity, nutrition, time spent outdoors, physical ailments and health behaviours (e.g. smoking, alcohol and caffeine intake)
- Each of these lifestyle determinants was found to be important, which demonstrates both the complexity of well-being as well as how it can change on a daily basis. It is very important for older individuals to monitor their sleep habits, amount of exercise and nutrition to maintain well-being
​Figure 3. Our results showed that individuals who had exercised on a given day scored higher on several aspects of well-being (including positive affect, vitality, life satisfaction and engagement) than people who had not exercised that day.
​D. Social Support Networks
- The quality and quantity of social contacts and perceived social support network affects one’s perceived level of well-being
- Research with University of Victoria undergraduate students has shown that people who spent time with family or friends, had a hug or kiss, helped someone or volunteered scored higher on aspects of well-being
- Such results are in accordance with similar work and emphasize the importance of continued social involvement and meaningful social contact across the lifespan
​Factors Affecting Cognition
​A. Stress
- Research has shown that stress can negatively affect cognitive abilities such as processing speed and working memory
- These results are consistent with theories that postulate stress-related cognitive interference competes for attentional resources
- Research with undergraduate students at the University of Victoria showed poorer performance on cognitive tests on days when students reported heightened stress severity and number of stressors
Figure 4. With higher reported stressor severity (left), participants’ reaction times on the Multi-Source Interference Task were higher, indicating that high perceived stress competes for cognitive resources. Participants who reported more stressors (right) also had higher reaction times on the Multi-Source Interference Task than participants who reported fewer stressors. This is also an indication that feeling stressed can negatively impact cognitive performance.
B. Lifestyle
- Positive health behaviours and social support networks may act as moderators against the negative impacts of stress on cognition
- High stress is associated with increased consumption of high-fat snacks, caffeine and cigarettes, as well as with decreased physical activity and consumption of vegetables
- Low perceived social networks and emotional support is associated with increased smoking, physical inactivity, weight gain and alcohol consumption
- Research with University of Victoria undergraduate students showed that quality of sleep and use of social support were significant predictors of the number of stressors reported, stress severity and perceived stress.
- Results from this study inform us as to how stress can be better managed over time. Use of social support networks and quality sleep habits seem to provide individuals with a buffer against the negative effects of stress on affect and cognitive performance over time.
​Figure 5. Participants who reported using a social support network were more likely to report more stressors (left) and higher stress severity (right) than people who did not use a social support network. This indicates that people use social support as a coping mechanism against high perceived stress. However, the high degree of variation in these ratings indicates that social support use is not the only factor that influences the effects of stress.
​C. Gait
- Change in gait is associated with physical brain changes in older adults
- With increased age, cognitive processes are compromised when multi-tasking or dividing attention, such as walking while performing a cognitive task, because the two tasks compete for the same limited pool of cognitive resources
- Research with University of Victoria undergraduate students showed better cognitive performance in individuals who had a faster gait, longer step length and wider step width than those with a slower and narrower walk
- Both gait and cognitive performance were negatively affected by increased stress and positively affected by hours of sleep
Figure 6. Relationship between Multi-Source Interference Task reaction times and normalized gait velocity (a), step length (b) and step width (c). Faster, longer and wider walks were associated with better dual-tasking cognitive performance, suggesting gait may act as a protective buffer against the cognitive changes associated with higher age. | https://www.ilifespan.org/?q=results |
This summer, Nicole Von Wilczur ’18 had an ideal harmony of on-campus jobs. She split her week between helping Professor of Psychology Samuel Putnam with his research on toddler temperament, and working at the Children’s Center, where she often had to moderate the temperaments of toddlers.
Temperament is, most basically, “all of the intrinsic characteristics of personality,” Von Wilczur explained. It includes such attributes as the propensity to be introverted or extroverted, and is “something that stays with you basically from the time you’re born until you’re an adult,” she added.
Temperament is actually one of the most stable aspects of personality, and, for the most part, you can observe an individual’s temperament as a toddler and expect similar behavior when that same child is 16 or 30. However, Putnam believes that while temperament is certainly inherent and genetic, it is also influenced by parenting style and how parents nurture and raise their children. Temperament does differ culturally, which might have a lot to do with cross-cultural differences in parenting styles.
Although temperament is foundational to our personalities, it is a relatively under-studied part of psychology, according to Von Wilczur.
Von Wilczur’s research with Putnam explores how parenting styles affect toddler temperament. “We’re looking at whether different parenting styles lead to differences in child’s temperament, and how that differs across the nation and how that differs across the world,” she said. With collaborators ranging from Washington state to Germany, the research will comprise a cross-cultural analysis.
A big part of Von Wilczur’s research this summer was searching for subjects in the surrounding area. She discovered that the biweekly Brunswick Farmer’s Market was a propitious place to find subjects. “We would bring little play sets — we had a slide and a sandbox — and we would bring it over to the Brunswick mall,” she described. Kids would come over and play in the sets, and Von Wilczur and her research partners, Carly Lappas ’17 and Hannah Broos ’17, would ask the parents to fill out a survey about their child and their parenting style.
Nearly half of those who responded to surveys were then willing to allow Putnam and his research assistants into their homes for an observation of their parenting.
Doing this research has changed the way Von Wilczur interprets parents’ behavior when they pick their kids up at the Children’s Center. “Whether they [the parents] are stern, or their child’s throwing a tantrum and they’re kind of standing back and letting them do it,” Von Wilczur said she’s more cognizant of connecting this parental behavior to the temperament of the children. Although the evidence is anecdotal, in the Children’s Center she said she’s observed that the more relaxed the parent, the more unruly the children.
As the summer comes to a close, Von Wilczur said her team has finished collecting surveys and has begun to analyze results and compare them with the results of similar experiments around the world. The hope is that this kind of research can give us a “greater understanding of the role that geographic region and related variables play in influencing differences in infant, child and adult temperament…We are hopeful that the results will be of interest not only to developmental psychologists, but scholars in personality and sociology as well,” said Von Wilczur. In that way, Von Wilczur adds, she thinks this research can “help parents figure out how they would like to change their behavior or not…to change their child’s behavior.”
Von Wilczur has known for a while that she wants to work with kids in the future, and both of her summer jobs have further solidified that aim. While she is still deciding between a psychology or a neuroscience major, “I really want to look at mental health and behavior,” in children, she said. For her, this summer was “the perfect opportunity to get started with that [research] and see what my future could look like,” she said. | http://community.bowdoin.edu/news/2015/08/nicole-von-wilczur-18-researches-the-parental-roots-of-human-temperament/ |
Listen, Look, and Do!
Publisher: Key Education Publishing
ISBN: 1602688923
Research has shown that visual and auditory discrimination and memory skills should be taught and that practice is required to improve these skills. The engaging activities found in Listen, Look, and Do! will provide young and special learners with meaningful skill practice. A wide variety of activities are included: stories and rhymes, puzzles, pictures to color, cut-and-paste activities, games, sequencing activities, hidden pictures, what's-wrong pictures, and listening games. | http://perambara.org/books/visual-discrimination-grades-2-8/
Compelling communication of humanitarian work can create empathy towards a cause, advance an organisation’s advocacy and fundraising efforts, and defy common narratives about marginalised communities. But higher visibility of an organisation’s work comes with tremendous responsibility towards the people it serves.
As humanitarian organisations, communication is vital to document our work, highlight our successful initiatives, and promote our values and causes. And it is common for us to use engaging storytelling techniques covering the individual stories of people who have been uprooted by disaster and violence, which can defy stereotypes and provide a more nuanced narrative of how impact occurs on the ground.
Indeed, according to UNHCR’s communications strategy paper ‘with global displacement at its highest level on record, there is a temptation to emphasise numbers in order to command media space. However, sociological studies indicate that people are much more likely to take notice, empathise and become motivated to help if they are also presented with individual stories’.
However, careful ethical consideration is a prerequisite to field communication in a humanitarian aid context that must be practised throughout the planning, creation, and distribution phases of communication materials. Especially so in countries like Lebanon, where the government and mainstream media have an aggressive attitude towards refugees, and where newspaper headlines calling Syrians ‘dark-skinned invaders of the capital’, or blaming air pollution on the influx of refugees, are the daily norm.
Positive representation of refugees and constructive portrayal of their interactions with their host communities can tell a powerful alternative story to counter some of the common media narratives about displacement. As Vanessa Pupavac argues, ‘representing refugees in a sick role may have been inspired by compassion, however… the capacity of the sick to determine their own interests is problematised. The exaggeration of refugees’ incapacity has dangerous consequences, which helps legitimise decisions being taken away from refugees’.
Therefore, when planning a communication activity, it is necessary to consider the objective of creating communications material, and the potential impact it may have on the people it portrays. As well as first obtaining these people’s full, free and informed consent, it is important to consider the message we want to convey, and whether that message preserves the dignity of the people it involves. While it is important to highlight the challenges a population faces, stories and images that show people in miserable or destitute conditions to provoke emotions and donations are exploitative and harmful. This is because they violate people’s dignity and reinforce crude stereotypes of marginalised groups.
That is not to say that communications work is always straightforward. For example, sometimes the people Basmeh & Zeitooneh would like to feature in our stories decide to revoke their consent, at times even after we have produced the story. A far greater challenge is when the cultural context in a certain community may not allow the women to be photographed. It is vital to understand social and cultural contexts and to respect associated norms while keeping the interests and perspectives of communities, particularly the most marginalised within them, at the forefront of our communication outputs.
Communications expert Jennifer Lentfer, who has worked with Oxfam, UNICEF and Thousand Currents, writes:
We can redefine our roles as communicators to tell compelling stories without trivializing people’s lives. We can promote a more nuanced narrative of how lasting, transformational change really happens. I believe this also offers more respect to our supporter base, when we can acknowledge their ability to learn and grow, rather than treating them in transactional, manipulative, we-just-want-your-money ways. At this stage in history, it is more important than ever for nonprofits to reflect our full humanity.
Ultimately, humanitarian communications can and should preserve people’s agency and dignity. Not portraying people as passive recipients of aid, and crediting communities involved in relief and longer-term development efforts, is critical to this goal.
Haifa Yassine is Communication and Marketing Manager at Basmeh & Zeitooneh and a member of the Protracted Displacement Economies (PDE) team. PDE is a project funded by UK Research and Innovation through the Global Challenges Research Fund (grant reference number ES/T004509/1). | https://www.displacementeconomies.org/positive-portrayal-of-marginalised-communities-in-humanitarian-aid-communications/ |
House tours and lifestyle
An artist’s collective recreated London landmarks in five paper sculptures
Artists from the Paper Artist Collective were given a brief to create paper models of London landmarks for a private event. The results are colourful, offbeat, and animated.
1. Maria Cruz’s Oxo Tower
Cruz is a paper artist and set designer. She chose the Oxo Tower because she wanted to build a less recognised landmark to make her own. The Oxo Tower forms part of a former meat stock cold store, and there’s an urban legend that you can trade oxo cube wrappers (of cooking stock) for a free cocktail at the tower’s restaurant/bar.
Process: The whole process took me 3 days. I started with a little sketch and later I used the computer to create templates to speed up the process. After printing the designs I hand cut the shapes with a blade and a ruler. I built a couple rough (functional) versions first to ensure the light structure could fit inside the building before starting the final piece.
Most challenging part of the brief: The brief gave artists a lot of freedom to create their own work. The most difficult part for me was simplifying all the details of the building and adding the light structure, which meant figuring out how to fit everything inside the piece.
Favourite part of the brief: Having the chance to get involved in a project with other artists around the world. It was really cool to see how each person interpreted the landmarks according to their points of view and styles.
2. Samantha Quinn’s Big Ben (St. Stephen’s Tower)
Samantha Quinn chose the tower as something tall and distinctive, and hers is one of my favourites: it’s such a wonderful reimagining of the tower’s colours, and the design is so intricate too.
Process: I began with lots of visual research. I examined lots of photographs to create rough sketches of the overall composition. Using my sketches I began to visualise my design digitally in Illustrator. Working digitally helped me to work out scale and select colours easily. Each layer of the design was then printed on to very thin paper and lightly spray mounted on to the reverse of the ‘colorplan’ stock. I opted for a vibrant colour palette of greens and pinks. Every element was cut by hand and carefully glued together. As the landmark has four faces each tiny piece had to be repeated.
Most challenging part of the brief: Sticking down all the tiny, fiddly pieces; some pieces were so small that the glue had to be applied using a pin head.
Favourite part of the brief: Having complete freedom from the wonderful client G.F Smith, it is very rare that there are so few limitations in a brief so it was a pleasure to explore their fabulous Colorplan range.
3. The Palm House at Kew Gardens
Process: I drew and designed all the shapes needed for each part, working from reference images of the Palm House, trees and plants at Kew Gardens. Then I printed out my templates at the right size to cut from, and sent the petal templates to the cutting machine so it could produce the amount I needed. After hand-shaping all the petals I started to build each tree and the plant bed. The Palm House itself is made of glass, like a greenhouse, so I used acrylic as the centre and glued the papercuts of the frame to either side.
Most challenging part of the brief: The hardest part was the scale, the palm house is long but not very tall so it ended up being quite a miniature piece to work on which made it fiddly.
Favourite part of the brief: I enjoyed making the trees the most, I usually work in 2D so bringing together trees that would stand and be filled with petals like the cherry blossom was time consuming but I loved how they ended up.
4. Julianna Szabo’s St Paul’s Cathedral
Julianna chose St Paul’s Cathedral because of all the interesting details she could apply to a 3D paper model.
Process: I started with a deep research into each building, trying to find detailed photographs from all sides and as many angles as possible. I also looked for plan and elevation drawings that provided me the correct proportions. Then I deconstructed the buildings into blocks which I designed one-by-one on paper. After all the measurements and designs were done I made the building blocks out of the appropriate coloured paper without any decorations, only the basic shapes with the windows/doors cut out. When I reached the point when the building took shape, I started adding the tiny details to complete the blocks. I glued together the whole building only after all the elements were finished.
Most challenging part of the brief: I made the design to be able to add as much details in the given size as possible!
Favourite part of the brief: I love making 3D buildings so I enjoyed this project from beginning to end. The most rewarding part is when I can see what I imagined in my head at the beginning, taking shape at the end.
Copyright Statement
All images are either original content from the editorial team, or from an organisation that has authorised use of their images.
This includes (as of 11th April 2017) Stadshem Fastighetsmäkleri, Fantastic Frank Fastighetsmäkleri & Fantastic Frank Immobilienagentur, Svenskt Tenn, Bolaget Fastighetsförmedling, Entrance Fastighetsmäkleri, Historiska Hem, deVOL Kitchens, Urban Spaces, Nooks, and The Modern House.
STUDENT PHOTO CONTEST:
Show us the future of the West
Contest judges Emma Powell, photography faculty; Stephen Weaver, Geology technical director; and Jennifer Coombes, CC Communications staff photographer, announced the winners on EARTH DAY, April 22, 2020.
Expert Panel Picks:
1st Place
Spring Creek Fire
Photo by Austin Halpern '20
"Austin Halpern's Spring Creek Fire is a striking image that tells the story of wildfires in the west. It illustrates destruction as well as the green hope of recovery. It is very well composed and utilizes great light making it a beautiful intimate landscape image that tells a story."
-- Stephen Weaver, Technical Director, Geology
By Austin Halpern
This image was taken in the fall of 2018, just a couple of months after the devastating Spring Creek Fire outside of La Veta, Colorado. 2018 was one of the most destructive fire seasons in Colorado history: five of the 20 largest wildfires in the state’s history were recorded in 2018 alone. While the fire damage is horrific, what strikes me about this image is the regrowth, the bit of green that symbolizes hope amidst the sea of blackness.
2nd Place
Cloth
Photo by Deming Haines '21
By Deming Haines
This is an in-camera photograph (not photoshopped) of a cloth thrown up in the air in front of Pikes Peak.
Humans have control over what happens to beautiful landscapes and wildlife. We can use our power to destroy or be stewards of our earth. Our state of mind is in flux just like this cloth floating above Pikes Peak. Even though climate change seems unstoppable, humans can adapt just like the cloth can take a new form according to the breeze.
This photograph to me symbolizes the decisions that humans are currently faced with. We can choose to be a solid mass and come crashing down upon the beauty we take for granted, or we can take a new form, one that listens to our surroundings, one that is encouraged to change, and one that coexists.
For as long as I can remember, I have loved everything that nature has to offer, from the tiniest insects, to the largest mountains. I constantly find ways to interact with nature through my love of photography and abstraction. I believe that in order to spread awareness for nature, we must put our situation into perspective. I hope this photograph does just that: show the suspense, the uncertainty, and the beauty that lie before us. But above all, I hope it shows that we can change.
3rd Place
Runoff Collection Pond
Photo by Annabel Driussi '20
"The team of judges chose Annabel's image because it exemplifies the complexities of our current situation in the West. She perfectly composed an image in nature that is negatively impacted by human decisions. Despite the circumstances, or perhaps in spite of the circumstances, the composition draws us in with the unnatural colors contrasted against the shoreline where trash colorfully decorates the ground. The simplicity of this unnatural nature should tell us that this is our trouble. This is our work."
-- Jennifer Coombes, CC Communications
By Annabel Driussi '20
Concern over water pollution has risen in recent years, such that 85% of Colorado voters polled in 2020 rate this as a serious issue. In 2019, Governor Polis signed bill HB19-1113 (Protect Water Quality Adverse Mining Impacts), taking small steps to minimize mining companies' damage to water supplies. Fascinatingly, support for this bill was primarily framed as a public health concern over clean drinking water, and only secondarily upon the effects of mine tailings upon local wildlife. Legal efforts are being put into effect. But is current litigation enough to counter the effects of almost 200 years of mining history in the state?
Both of these pictures were taken at Leadville, at an abandoned silver mine outside the city. In the first photo, young coniferous plants struggle to reclaim land occupied by twisted metal refuse.
People's Choice Popular Vote Winner
Zion
Photo by Noah Hirshorn '20
By Noah Hirshorn
Zion National Park in southwestern Utah is one of the national parks most plagued by overcrowding. Taken on a rainy day in March 2019, fog inhibits the view of a valley that is often the site of heavy traffic and tourists attempting to visit the colossal rock walls. While the designation of a national park ensures that the land will be preserved, the ramifications of increased tourism may very well threaten some of the most beautiful landscapes in the country. While visiting national parks, it is crucial for visitors to abide by leave no trace principles in order to ensure future generations can experience the same wonders.
View all the 2020 Conservation in the West student photo contest entries.
2020 CONTEST GUIDELINES
Conservation in the West 2020 Poll voters want a more aggressive agenda for protecting public lands and the "outdoor way of life" from energy development. A call for more aggressive action to protect air, land, and water in response to climate-induced impacts is important to 69% of the voters who self-describe as conservationists. Loss of wildlife habitat is again identified as an extremely serious problem among voters. Nearly half of the voters in the eight Rocky Mountain states consider an elected official's support for conservation issues a primary factor in deciding whom to vote for.
"Support for conservation on public lands has remained consistent and strong over the decade-long history of our poll," said Corina McKendry, Director of the State of the Rockies Project and an Associate Professor of Political Science at Colorado College. "The urgency and demand for action behind those feelings is now intensifying as voters in the West increasingly believe their lands and lifestyles are coming under attack from the impacts of climate change and energy development."
TO ENTER:
The State of the Rockies project invites students to submit up to three photos taken in the Rocky Mountain West, each with a description, that remind us of the importance of conservation efforts given recent significant evidence of climate change and the Trump administration's continued efforts to tap the nation's public lands for natural resources.
Please submit your entry to [email protected] by midnight April 8, 2020.
Submitted photos should address areas of concern for wildlife, outdoor recreation, shifting climate patterns and processes, public lands, and water availability. Finalists will be determined by a juried panel. Finalists' photos will be posted on social media outlets during Earthweek, April 18-22, 2020.
Winners will be determined by a jury of expert faculty and staff photographers: Emma Powell, photography faculty; Stephen Weaver, Geology technical director; and Jennifer Coombes, CC Communications staff photographer. The popular vote will be determined via vote-by-text. Photos will be posted on social media with a link to follow to cast a vote for the most compelling photo and description -- the most popular photo wins. Follow the vote-by-text instructions (details to follow).
VIEW the 2019 student photography submissions. | https://www.coloradocollege.edu/other/stateoftherockies/conservation-in-the-west-poll-photo-contest/2020-photo-contest.html |
The U.S. Securities and Exchange Commission (SEC) regulates companies who offer securities for sale to the public and those who sell and trade securities and offer advice to investors. One of the SEC’s goals is to maintain consumer confidence in the fairness and integrity of the stock market. As a result, the SEC focuses on illegal insider trading in its enforcement priorities.
If a person is charged with this crime, there can be serious consequences. Illegal insider trading refers to buying or selling stocks on the basis of information that has not been made public.
Nonpublic information can include capital investment plans, negotiations concerning acquisitions, new contracts, financial projections and significant changes in control or change in management, among other information. Insider trading can also include providing tips about this information.
Examples
There are several examples of when illegal insider trading can occur. These include when corporate officers, directors and employees trade the corporation’s stocks after learning about confidential information.
It can also include employees of banking and brokerage firms who trade stocks based on information they received as a result of providing services to a corporation, government employees who learn of confidential information through their employment or other people who misappropriate or take advantage of confidential information from family, friends and others.
Penalties
Insider trading convictions can result in prison time and/or monetary penalties. In addition, the accused can suffer reputational harm, face diminished future employment prospects, and encounter other challenges.
An experienced attorney at Kammen & Moudy can provide representation to the accused and review the circumstances of the situation. | https://www.kammenlaw.com/blog/2021/04/understanding-illegal-insider-trading/ |
A community-led global network of citizens and organizations defending and strengthening civic space.
Innovative network of more than 200 civil society organizations in six regional, connected hubs in Africa, the Middle East and North Africa, Central Asia, East Asia, South Asia, and Latin America and the Caribbean.
Currently in 84 countries across the Global South.
Inspired by ideas, methods and technologies from across sectors, network members work together on advocacy, research, network building, education and training, digital literacy and technology development.
Innovation for Change (I4C) was established as a response to widespread and worsening trends affecting civil society, including increasing restrictions on civic space and attacks on civil society organizations. I4C is a global network of people and organizations who want to connect, partner and learn together to defend and strengthen civic space and overcome restrictions to our basic freedoms of assembly, association, and speech.
Provide innovation grants to quickly provide funding to kick start ideas emerging from I4C network members.
Support the Hubs in learning and experimenting to develop a network-wide innovation pipeline of strategies and activities to strengthen and defend civic space.
Design and curate a Global Innovation Helper Hub – innovationforchange.net – that provides the tools and services offered to the entire Hub network.
Connect an expanding group of private sector entrepreneurs, ICT and digital security experts, private donors and leaders in innovation with I4C members to reinforce community building and contribute to the long-term sustainability of the network.
Provide advocacy and campaign support for the network, including access to latest methods and technologies.
I4C Africa – known as Hub Afrique – is currently working to protect, strengthen and expand civic space in Africa by focusing on the greatest challenges facing the continent: good governance, transparency, and accountability. Hub Afrique has launched WE-Protect and WE-Account, which includes a continent-wide influence mapping of organizations working on the Hub’s priority themes. I4C members can access the map through the Innovation Lab on africa.innovationforchange.net to help inform their work. In 2018, Hub Afrique launched a social innovation challenge, which will select three innovative ideas to address transparency, accountability and natural resources management across Africa.
The Central Asia Hub is working to implement innovative ways to combat state-imposed restrictions faced by many civil society groups operating in the region. The Central Asia Hub has hosted several networking and skills-sharing events to increase partnerships between local government officials and civil society organizations, including the 2017 Innovation Lab and Innovation Awards attended by more than 150 activists, civil society organizations, and government officials in the region. Advancing more partnerships with civil society and government is a priority for 2018.
In 2017, the East Asia Hub hosted capacity-building events for their network members in Thailand, South Korea, the Philippines, Singapore, Malaysia and Hong Kong. Designed to ensure activists had a safe and secure environment – both in-person and virtually – the East Asia Hub hosted four inter-regional community exchanges, an innovation fair and showcase, and four digital security trainings.
The I4C LAC Hub community provides ongoing opportunities for civil society leaders, technologists, social entrepreneurs, and academics to co-create solutions to the region’s most complex social, political, and economic problems. The LAC Hub’s sharing economy platform, Comunidas.org, is a notable solution that will expand across the region in 2018 to connect more organizations with each other to advance the effectiveness of the entire social sector in LAC. Plans to scale Comunidas.org to the Middle East and North Africa Hub are currently underway.
In 2017, the LAC Hub hosted an Innovation Lab in Buenos Aires and an event at the 2017 International Civil Society Week. During both events, innovative topics like the sharing economy, bitcoin and blockchain were the center of discussion as activists and entrepreneurs dove into the details of how to use modern innovations to strengthen civil society in the region.
The MENA hub works to foster coordination, collaboration and sharing across civil society, focusing on building tools to share expertise, knowledge and resources in the region. MENA hub members are developing and piloting a partnership model between civil society organizations and social entrepreneurs focusing on sustainability, and they will also be working with technologists to develop an open source accountability software. This year, Hub members plan to integrate the sharing economy platform developed by I4C LAC, Comunidas.org, into the MENA Hub network, making the tool available to its members in Arabic by mid-2018.
The South Asia Hub provides an inclusive space to design ideas for creating an enabling environment for civil society to operate in closed spaces where government restrictions impede the ability of civil society organizations to work together and achieve their missions. In response to the increasingly restrictive environment and shrinking funding, the South Asia Hub has launched a series of innovative funding trainings to equip civil society organizations with the tools necessary to access untapped financial resources in the region.
Calls on institutional investors to increase their allocation of capital towards the SDGs, and to drive the demand for a diverse set of SDG investments ranging from sovereign bonds to corporate bonds and equity.
ENCORE (Exploring Natural Capital Opportunities, Risks and Exposure) is a web-based tool that will help global banks, investors and insurance firms assess the risks that environmental degradation poses for financial institutions.
Drylands: Desolate, scorched, uninhabitable? Scientists say otherwise
Big Facts is a resource of the most up-to-date and robust facts relevant to the nexus of climate change, agriculture and food security. It is intended to provide a credible and reliable platform for fact checking amid the range of claims that appear in reports, advocacy materials and other sources.
The World Economic and Social Survey 2016 contributes to the debate on the implementation challenges of the 2030 Agenda for Sustainable Development. In addressing the specific challenge of building resilience to climate change, the Survey focuses attention on the population groups and communities…
UN Environment's Inquiry into the Design of a Sustainable Financial System (UNEP Inquiry) has released 'The Financial System We Need: From Momentum to Transformations'. This report reveals a doubling in policy actions over the past five years to align the global financial system with sustainable…
2016 Asia and the Pacific Regional Overview of Food Insecurity – Investing in a Zero Hunger Generation – the first post-Millennium Development Goals (MDGs) report of its kind. The report outlines declining progress towards defeating hunger in Asia and the Pacific, noting that nearly one-in-three…
The Food and Agriculture Organization of the UN (FAO) launched the report titled, ‘Asia and the Pacific Regional Overview of Food Insecurity: Investing in a Zero Hunger Generation,’ at the 5th Global Forum of Leaders for Agricultural Science and Technology (GLAST-2016). GLAST-2016 addressed the…
Climate change is having devastating impacts on communities’ lives, livelihoods and food security across South Asia. Its consequences are so severe that it is increasingly contributing to migration, and this incidence is likely to escalate much more in the years to come as climate change impacts…
Land Degradation Neutrality: Will Africa Achieve It? Institutional Solutions to Land Degradation and Restoration in Africa, chapter 5 from the recently published: "Climate Change and Multi-Dimensional Sustainability in African Agriculture"
The past decade saw steady growth in bilateral Official Development Assistance (ODA) to support the objectives of the UN Convention on Biological Diversity, reaching USD 8.7 billion per year in 2014-15. Yet, biodiversity accounts for only 6% of total bilateral ODA, compared with 20% for climate. | https://knowledge.unccd.int/search?f%5B0%5D=topic%3A1300&f%5B1%5D=topic%3A1577&f%5B2%5D=topic%3A1623&f%5B3%5D=topic%3A1747&f%5B4%5D=topic%3A2166&f%5B5%5D=type%3Apublications&%3Bf%5B1%5D=topic%3A1595 |
scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria and methods used for this purpose.
Natural science
Branches of physical science
Physics
Branches of physics
Astronomy – study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae), the physics, chemistry, and evolution of such objects, and phenomena that originate outside the atmosphere of Earth, including supernovae explosions, gamma ray bursts, and cosmic microwave background radiation.
Branches of astronomy
Chemistry
Branches of chemistry
Earth science
Branches of Earth science
History of physical science
History of physics
History of acoustics
History of soil physics – history of the study of soil physical properties and processes.
History of astrophysics
History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft.
History of astrometry
History of atmospheric physics – history of the study of the application of physics to the atmosphere.
History of atomic, molecular, and optical physics
History of medical physics – history of the application of physics concepts, theories and methods to medicine.
History of neurophysics – history of the branch of biophysics dealing with the nervous system.
History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics.
History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists.
History of condensed matter physics
History of biomechanics
History of nuclear physics
History of chemistry
History of analytical chemistry
History of cosmochemistry
History of atmospheric chemistry
History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation.
History of bioinorganic chemistry
History of computational chemistry
History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry.
History of molecular mechanics – history of the use of Newtonian mechanics to model molecular systems.
History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors.
History of flow chemistry – history of chemistry in which a reaction is run in a continuously flowing stream rather than in batch production.
History of geochemistry
History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions
History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology
History of ocean chemistry – history of the study of the chemistry of marine environments including the influences of different variables
History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth
History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth
History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds.
History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes and nuclear properties.
History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable).
History of organic chemistry
History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials.
History of organometallic chemistry
History of chemical kinetics – history of the study of rates of chemical processes.
History of chemical thermodynamics
History of phytochemistry – history of the study of phytochemicals in the strict sense of the word.
History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules.
History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids
Multidisciplinary fields involving chemistry
History of chemical biology – history of the scientific discipline
spanning the fields of chemistry and biology that involves the
application of chemical techniques and tools, often compounds produced
through synthetic chemistry, to the study and manipulation of
biological systems.
History of chemical engineering
History of earth science – history of the all-embracing term for the
sciences related to the planet Earth.
History of atmospheric sciences – history of the umbrella term for the study of the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems.
History of climatology
History of meteorology
History of atmospheric chemistry
History of biogeography
History of ecology
History of freshwater science
History of environmental chemistry – history of the scientific study of the
chemical and biochemical phenomena that occur in natural places.
History of environmental soil science – history of the study of the
interaction of humans with the pedosphere as well as critical aspects of
the biosphere, the lithosphere, the hydrosphere, and the atmosphere.
History of environmental geology – history of the applied science, like
hydrogeology, concerned with the practical application of the principles
of geology to the solving of environmental problems.
History of toxicology
History of geodesy
History of planetary geology – history of the planetary science discipline concerned with the geology of the celestial bodies such as the planets and their moons, asteroids, comets, and meteorites.
History of geomorphology
General principles of the physical sciences
Basic principles of physics
Describing the nature, measurement, and quantification of bodies and their motion, dynamics, etc.
Newton's laws of motion
Mass, force and weight
Momentum and conservation of energy
Gravity, theories of gravity
Energy, work, and their relationship
Motion, position, and energy
Different forms of Energy, their interconversion and the inevitable
loss of energy in the form of heat (Thermodynamics)
Kinetic molecular theory
Phases of matter and phase transitions
Temperature
The principles of waves and sound
The principles of electricity, magnetism, and electromagnetism
The principles, sources, and properties of light
Basic principles of astronomy
Astronomy – science of celestial bodies and their interactions in space. Its studies include the following:
The life and characteristics of stars and galaxies
Origins of the universe.
(Note: Astronomy should not be confused with astrology, which
assumes that people's destiny and human
affairs in general correlate to the apparent positions of astronomical
objects in the sky – although the two fields share a common origin,
they are quite different; astronomers embrace the scientific method,
while astrologers do not.)
Basic principles of chemistry
Chemistry – "the central science" in the partial ordering of the sciences proposed by Balaban and Klein.
Physical chemistry
Chemical thermodynamics
Reaction kinetics
Molecular structure
Quantum chemistry
Spectroscopy
Theoretical chemistry
Electron
Computational chemistry
Mathematical chemistry
Cheminformatics
Nuclear chemistry
The nature of the atomic nucleus
Characterization of radioactive decay
Nuclear reactions
Organic chemistry
Organic compounds
Organic reaction
Functional groups
Organic synthesis
Inorganic chemistry
Inorganic compounds
Crystal structure
Coordination chemistry
Solid-state chemistry
Biochemistry
Analytical chemistry
Instrumental analysis
Electroanalytical method
Wet chemistry
Electrochemistry
Redox
Materials chemistry
Basic principles of earth science
The water cycle and the process of transpiration
Freshwater
Oceanography
Weathering
Agrophysics
Soil science
Pedogenesis
Soil fertility
Earth's tectonic structure
Geomorphology
Physical geography
Seismology: stress, strain, and earthquakes
Characteristics of mountains and volcanoes
Characteristics and formation of fossils
Atmospheric sciences
Atmospheric pressure
Meteorology, weather, climatology, and climate
Hydrology, clouds and precipitation
Air masses and weather fronts
Major storms: thunderstorms, tornadoes, and hurricanes
Major climate groups
Speleology
Notable physical scientists
List of physicists
List of astronomers
List of chemists
List of Russian earth scientists
See also
Outline of science
Outline of natural science
Outline of physical science
Outline of earth science
Outline of formal science
Outline of social science
Outline of applied science
Notes
^ The term 'universe' is defined as everything that physically exists: the entirety of space and time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term 'universe' may also be used in slightly different contextual senses, denoting concepts such as the cosmos or the philosophical world.
References
^ Wilson, Edward O. (1998). Consilience: The Unity of Knowledge (1st
ed.). New York, NY: Vintage Books. pp. 49–71.
ISBN 0-679-45077-7.
^ "... modern science is a discovery as well as an invention. It
was a discovery that nature generally acts regularly enough to be
described by laws and even by mathematics; and required invention to
devise the techniques, abstractions, apparatus, and organization for
exhibiting the regularities and securing their law-like
descriptions." —p.vii, J. L. Heilbron, (2003, editor-in-chief).
The Oxford Companion to the History of Modern Science. New York:
Oxford University Press. ISBN 0-19-511229-6.
^ "science".
Merriam-Webster
Works cited
Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures
on Physics. 1. ISBN 0-201-02116-1.
Holzner, S. (2006). | http://theinfolist.com/php/SummaryGet.php?FindGo=Physical_science |
See, for example, the plate of plantswoman and book collector Mary Helen Wingate Lloyd (1868-1934). Lloyd was a talented, knowledgeable gardener whose club, The Gardeners, was a founding club of The Garden Club of America. For many years, she shared her gardening expertise through the columns she contributed to The Garden Club of America’s Bulletin. She was also an avid book collector, who built a collection of European and American imprints that reflected the history of gardening, from the 16th to 20th centuries. After her death, her family bequeathed her gardening library to PHS, in recognition of her longtime role as member and chair of PHS’s Library Committee.
Lloyd’s bookplate was designed by Dorothy Sturgis Harding, an artist who created bookplates for, among others, Eleanor Roosevelt. Filled with pleasing floral and vegetative motifs, the plate shows us Lloyd’s extensive library, with bookcases flanking a mantel and cozy fire, and a glimpse of her celebrated garden at her “Allgates” estate in Haverford, Pennsylvania.
Learn more about bookplates with botanical and horticultural associations, as well as other signs of book ownership, at the Rare Book Talk on Bookplates on Thursday, November 1 in the PHS Town Hall. Space is limited and registration is required.
Thursday, November 1, 2018 at 3 – 4 p.m. | https://phsonline.org/blog/the-art-of-bookplates |
We will stimulate innovation across the highways sector. Successful innovation requires engagement across our business, from fundamental research through to the effective implementation of standards.
Highways England initiates research to unlock new knowledge to transform the strategic road network.
Our research fits into three timelines:
- 2020-2025 Short-term activities – for successful delivery of the next Road Investment Strategy
- 2025-2035 Medium-term activities – for informing and supporting future Road Investment Strategies
- 2035-2050 Long-term activities – for pursuing our future aspirations leading up to 2050 and beyond.
Research & academia
What we plan to do:
- Distinguish Highways England as an active participant in research
- Find new ways to solve our current and future challenges
- Strengthen our relationships with research institutions, academia, infrastructure operators and industry.
- Collaborate with academic experts and specialists to align research with our strategic goals
- Attract and develop new talent to the sector
Our innovation programmes are a key mechanism by which Highways England develops and implements new products, services and processes to improve the way we operate, maintain, and improve the strategic road network in England.
What we do:
- Set direction and our approach to drive our innovation culture
- Manage and deliver our Innovation Programme of targeted portfolios and projects
- Develop and manage partnerships with our Operational Directorates, our supply chain, Innovate UK, Industry, Catapults, i3P and overseas roads administrations.
- Develop and refresh Highways England’s Innovation, Technology and Research Strategy, ensuring that it meets future business needs.
- Manage our Research and Innovation Challenges portfolio of projects.
- Manage and publish the results of our research and innovation projects
We act as a centre of excellence that supports colleagues and suppliers across all areas of our business at both strategic and project levels.
Each region has two expert Lean practitioners; some of these also have a strategic lead for a business area. The expert practitioners develop the capability of staff and delivery partners in the use of Lean continuous improvement techniques, enabling them to deliver continuous improvement.
Our aims include:
- Developing the capability of staff and delivery partners in the use of Lean continuous improvement techniques to support the delivery of the company’s Key Performance Indicators
- Supporting staff and delivery partners continuously improving performance in safety, customer satisfaction and efficiency
- Supporting delivery teams to drive improvement, demonstrated by an increase in productivity
Further Information
- Lean Support to Highways England 2015 – 2020
- For examples of Lean improvements, view our Highways England Lean Tracker
- Contact us regarding Lean [email protected]
We will build our ‘technical standards enterprise system’ to accelerate standards development and enable agile ways of working with innovation at the heart of the workface.
The Technical Assurance and Governance Group will continue to provide support and guidance to our internal and external customers by enabling faster publication of the Design Manual for Roads and Bridges and the updating of the Manual of Contract Documents for Highway Works specifications. In addition, we will continue to support the enhancement, quality, timeliness and processing of departures. We will continue to support development of the National Highway Sector Schemes that set out the competency levels for operatives working on the UK road network. | https://highwaysengland.co.uk/innovation-hub/our-approach/ |
By DEB GORDON and ROSEMARIE DAY
With the long-awaited inauguration day behind us, America is finally getting something we desperately need: an elected woman in the White House.
On the heels of chaos and violence at the Capitol and after four years of the Trump Administration, we are ready for strong female leadership in the executive branch to help put the country on the right course. In fact, it is long overdue.
Kamala Harris didn’t just need our votes to make history as America’s first female Vice President. To be successful, she’ll need every ounce of our ongoing support as she steels herself against direct threats to her life and faces the challenge, along with President-elect Biden, of healing a deeply fractured nation.
Female leaders around the world have modeled that strong leadership through 2020’s most difficult times. Women have led some of the most effective pandemic responses worldwide. Countries led by women leaders had six times fewer confirmed COVID-19 deaths — and fewer days with confirmed deaths — than countries led by men. New Zealand, Taiwan, Germany, and Iceland — all led by women — are among the coronavirus management success stories.
These women acknowledged the threat from coronavirus rather than underplaying it. They were decisive, and used data and science to drive their decision-making. They took a long-view when designing their response, prioritizing long-term well-being over short-term economic pain. They listened to outside voices to ensure they had the best possible input and solutions for their countries. And they showed empathy. Having a female leader became a symbol of inclusive, open-minded, effective leadership. | https://thehealthcareblog.com/blog/tag/politics/ |
Sonora, CA – The drought has created a high fire danger in the Stanislaus National Forest prompting fire restrictions in Moderate Hazard Areas. Those bans will go into effect at noon on Friday, June 27.
Forest Spokesperson Veronica Garcia says, “We are actually earlier than last year. Last year the restrictions started on August 8 and now it’s about a month earlier. It’s the driest year on record and that is why we are taking this precaution.”
Here is the Forest Service’s list of restrictions on the Groveland, Mi-Wok, Summit, and Calaveras Ranger Districts of the Stanislaus National Forest:
- Campfires: Building, maintaining, attending or using a fire, campfire, (including briquette type barbecue), or stove fire is prohibited, except within developed recreation sites. Persons with a valid California Campfire Permit may use a portable stove or lantern that uses gas, jellied petroleum, or pressurized liquid fuel.
- Smoking: Smoking is prohibited, except within an enclosed vehicle or building, a developed recreation site, or while stopped in an area at least three feet in diameter that is barren or cleared of all flammable material.
- Welding: Operating acetylene or other torch with an open flame is prohibited, except by permit.
- Explosives: Using an explosive is prohibited, except by permit. | https://www.mymotherlode.com/news/local/213835/moderate-fire-restrictions-forest.html |
1. The earliest known paintings that were done in oils date back to the 7th century AD. These paintings were Buddhist murals that were discovered in caves in western Afghanistan. Oil paint didn’t become widespread for use in art works until the 15th century, when it became popular throughout Europe. Jan van Eyck, a 15th-century Flemish painter, is widely believed to have invented it, though in reality he did not invent it; rather, he developed and refined it.
2. Oil paint is credited with revolutionising art. One of its key properties is that it’s very slow to dry. It gave artists a lot more time to work on their paintings and it allowed them to correct any mistakes they might have made. Oil paints allowed for artists’ creativity to flourish more because artists could devote more time to each painting. Many of the most widely praised paintings were done in oils.
3. For a few centuries artists had to store their oil paints in animal bladders. This was because the paint tube wasn’t invented until 1841, by John Goffe Rand, an American painter. Before the tube was invented, artists had to mix their paints themselves before painting: they would grind up the pigment, then carefully mix in the binder and thinner.
4. The most basic type of oil paint is made up of ground-up pigment, a binder and a thinner, which is usually turpentine. For the binder there are lots of different substances that can be used, including linseed oil, walnut oil and poppy seed oil; each of these gives the paint different effects and has different drying times.
5. There are modern versions of oil paint that can dry a lot more quickly than the standard version. The way that it dries is not by evaporation, but by oxidation, the process where substances gain oxygen. It is generally accepted that the typical painting done in oils will be dry to touch after about two weeks, though it can take six months to a year before the painting’s actually dry enough to be varnished. | https://www.takeartcollection.com/2017/11/10/6-facts-about-oil-paint/ |
Someone recently sent me pictures of the campus of Harding University in Searcy, Arkansas (my alma mater), showing it practically buried in beautiful spring flowers. Yes, spring is creeping northward, and soon multicolored flowers and rich green grass and leaves will replace our landscape’s current predominant shades of gray, white and brown.
Exactly when seeds will germinate, plants will put out new leaves, and the buds of trees will open can’t be predicted with any accuracy, because all of these activities are dependent on environmental conditions that change from year to year (although a new Australian discovery may someday allow us to control when plants flower–more on that later!).
The study of the relationship between climate or seasons and recurring biological activity such as the flowering of plants is called phenology (not to be confused with phrenology, which is the study of bumps on your head). Many gardeners take careful note of when flowers appear and trees put out new leaves each year. This data is now being tracked around the world to determine if climate change is beginning to alter the life cycles of plants and animals.
Three main factors, whose importance differs from species to species, govern spring activity in plants: length of day, temperature, and moisture. Seeds, for instance, generally germinate when moisture penetrates their seed coat–but moisture alone isn’t enough. If it were, seeds, which are produced in the late summer and fall, would germinate in the autumn–just in time to be killed by frost. Many plants in the temperate zones get around that problem by producing a chemical called abscisic acid in the late summer and early fall that keeps the seeds dormant. Over the winter, enzymes in the seed degrade the abscisic acid so that the seed is ready to sprout in the spring. Even then, however, the seed won’t sprout unless conditions are right: some seeds can remain dormant for years, even (in the case of the lotus plant) for centuries.
Once water penetrates the seed coat of a no-longer-dormant seed, it begins to soften the hard, dry tissues inside, and causes the seed to swell up. This splits open the coat, allowing more water to enter. The water activates chemicals inside the seed which trigger a series of biochemical events culminating in the sprouting of the plant.
Activity which is governed by the length of day is called photoperiodism. Day length is particularly important to flowering plants; plants of the same species all need to flower at the same time if there is to be any cross-pollination. Of course, the plants don’t have little internal stopwatches; instead, they contain chemicals called phytochromes. When a phytochrome molecule is exposed to certain frequencies of light for a sufficient period of time, it converts to a different form of phytochrome, which signals the plant’s cells to change their activity–to start putting out flowers, opening up buds, etc. The days have to be a certain length, which varies from plant to plant, before this process kicks in.
Plants are also sensitive to temperature. All flowering plants require a certain amount of rest–days when they are unable to grow–referred to as chilling units. (There’s seldom a shortage of chilling units in Saskatchewan.) They also require a certain number of heating units, days when the temperature is above about 8 degrees Celsius. Until the heating requirement is met, the plant will remain dormant. The higher the temperature, however, the faster the plant will bloom. So a cold period followed by a sudden extended warm period can result in what seems like a veritable explosion of leaves and flowers.
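This accumulation of heating units is, in essence, the "growing degree day" arithmetic used in agriculture. As a rough illustration (a hypothetical sketch, not from the original column), assuming daily mean temperatures in Celsius and the roughly 8-degree base threshold mentioned above:

```python
# Hypothetical sketch of heating-unit (growing degree day) accumulation.
# Assumes daily mean temperatures in Celsius and the ~8 C base threshold
# described above; real phenological models are considerably richer.

BASE_TEMP_C = 8.0

def heating_units(daily_mean_temps_c, base=BASE_TEMP_C):
    """Sum each day's excess over the base temperature (cold days add zero)."""
    return sum(max(0.0, t - base) for t in daily_mean_temps_c)

cold_week = [2.0, 3.5, 1.0, 4.0, 5.0, 6.0, 7.5]        # plant stays dormant
warm_week = [12.0, 15.0, 18.0, 20.0, 19.0, 17.0, 16.0]

print(heating_units(cold_week))   # 0.0
print(heating_units(warm_week))   # 61.0 -- a sudden burst of accumulation
```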
The flip side of that is that a cold snap at the wrong moment can have serious effects on flowering plants. We may someday be able to avoid that, however, thanks to the recent discovery by Australian scientists of the Flowering Locus C (FLC) gene. This “master gene” suppresses flowering when it is switched on. Once it collects enough signals from within the plant that day length, moisture and especially temperature requirements have all been met, it turns off, allowing flowering to proceed.
Learning to control this gene could provide more control over year-round production of some crops, including wheat and canola; allow ranchers to prevent the flowering of pasture grasses; allow horticulturalists to produce fruit, vegetables and cut flowers all year in response to market demands and grow bigger vegetables by preventing them from going to seed; and could even help hay fever sufferers by allowing us to prevent the flowering of species whose pollen people are most allergic to.
Alas, however, while we may someday be able to force plants to flower, or prevent them from doing so, there’s no discovery on the horizon that will allow us to force spring weather to set in on schedule. | https://edwardwillett.com/2002/04/spring-again/ |
Learning and improvement
There are several strands to our learning and improvement programme, all of which are explained below:
Quality Assurance Audits
HSAB audits and reviews cases through its ongoing quality assurance programme and through in-depth reviews of cases of significant concern, through which additional learning will be identified. Where this work identifies necessary improvements, the findings are placed onto our Audit Tracker with appropriate actions. These are then monitored by the Board's Performance, Audit and Quality Assurance sub-group.
Safeguarding Adults Reviews
We have completed one SAR to date, and the findings are contained within this 7-minute learning resource:
A Safeguarding Adults Review (SAR) is held when an adult dies as a result of abuse or neglect, whether known or suspected, and there is concern that partner agencies could have worked more effectively to protect the adult.
How a SAR should be undertaken
Statutory guidance (para. 14.135-14.144) outlines principles that should underpin all SARs, how the SAR should be undertaken (including the skills and experience needed of those undertaking a SAR), what the SAR should aim to achieve and suggested reasonable timescales. When setting up a SAR the SAB should consider how the process can dovetail with any other relevant investigations that are running parallel, including child Serious Case Reviews and Domestic Homicide Reviews.
Statutory Duty
- Where the SAB decides not to implement an action it must state the reason for that decision in the Annual Report (Schedule 2, 4(1)).
- Each SAB member must cooperate in & contribute to the carrying out of a review with a view to identifying lessons to be learnt and applying those lessons in future cases (Care Act 44(5))
Findings from SARS
SAR reports should provide a sound analysis of what happened, why and what action needs to be taken. The report should be written in plain English and contain findings of practical value to organisations & professionals. Findings from any SAR should be included in the SAB Annual Report, including what actions have been taken (or intend to be taken) in relation to the findings.
Learning Lessons from Safeguarding Adults Reviews
Please click on the link below to see the HSAB procedure for Safeguarding Adults Reviews:
External reviews of our effectiveness and self evaluation
Peer Challenge Follow Up Visit Response Dec 2016
Peer challenge review action plan
Practitioner Forums
These multi-agency events are held throughout the year and are based on subjects that practitioners have requested further knowledge of. On 19th May 2016 we held a session covering Information Sharing. | https://herefordshiresafeguardingboards.org.uk/herefordshire-safeguarding-adults-board/for-professionals/learning-and-improvement/ |
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates. | https://pubs.er.usgs.gov/publication/70169448 |
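The MCP estimator in the abstract above is simply the convex hull of the location fixes, which is why its area can only grow as locations are added. A minimal, hypothetical sketch (assuming planar projected coordinates in metres; the data here are simulated, not the study's):

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_km2(xy_m):
    """Area of the minimum convex polygon around projected fixes.

    For 2-D points, scipy's ConvexHull stores the enclosed area in
    `volume` (`area` would give the hull's perimeter instead).
    """
    return ConvexHull(xy_m).volume / 1e6  # m^2 -> km^2

# Simulated fixes only -- not the Kenai Peninsula data.
rng = np.random.default_rng(0)
fixes = rng.normal(scale=5_000.0, size=(300, 2))  # ~5 km spread

# MCP area grows (asymptotically) with sample size, as the abstract notes.
for n in (15, 60, 300):
    print(n, round(mcp_area_km2(fixes[:n]), 1), "km^2")
```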
A Populist-Authoritarian Nexus
Over the past decade or more, authoritarian powers have formed loose coalitions to counter the influence of the United States and its democratic allies. Initially, they focused on neutralizing efforts at the United Nations and other transnational bodies to enforce global standards on democracy and human rights. They also worked to mobilize support for fellow dictators facing domestic or international pressure, like Syria’s Assad.
More recently, however, the authoritarian regimes have reached out to sympathetic parties, movements, and political figures from democracies in Europe and elsewhere. Marine Le Pen, the leader of France’s National Front, frequently praises Vladimir Putin, has received financial assistance from Russian sources, and has called for France to align with Russia as a counterweight to the United States. Populist politicians in the Netherlands, Britain, Italy, and Austria meet regularly with Russian officials, criticize the sanctions imposed by the EU after the Kremlin’s invasion of Ukraine, and support Russia’s interests in votes at the European Parliament.
This affection for authoritarians like Putin probably represents a minority view in Europe. Polls still show that Europeans regard Russia as repressive and dangerous. But many have come to have doubts about certain core values that underpin the European idea. They are increasingly inclined to question the economic and social benefits of European integration and democratic solidarity in general. They tend to regard sovereign states rather than supranational entities as best equipped to address problems like economic inequality and displacement, surging rates of immigration, and humanitarian crises. And they are less likely to support a foreign policy that requires their nation to assist others for the greater good.
For all of these reasons, citizens of democracies may look to Putin, Xi, and other authoritarian rulers as proof that nation-states can and should buck international commitments and do what they must to protect their own interests. Partnering with such leaders is equated with an embrace of hard-nosed national opportunism.
History shows that this strategy leads to ruin. When universal values and international law are cast aside, global affairs are governed by force. Small-state nationalists who admire foreign dictators today could find their countries subjugated by the same leaders tomorrow. Worse still, they could simply be trampled amid the lawless competition of great powers.
Orphaned Democrats
Citizens in many vulnerable democracies, such as Taiwan and the Baltic states, are alert to these threats. Others in places like Hong Kong, Tunisia, and Ukraine understand that the survival of their freedoms depends on international democratic solidarity. Protesters, activists, refugees, and besieged civilians around the world rely on the promise of international aid and advocacy backed by democratic governments.
The question is whether the United States and Europe will ignore their own long-term interests and retreat from their responsibilities as global leaders. If they do, Russia, China, Iran, and their ilk can be expected to fill the void.
Countries to Watch
The following countries are among those that may be approaching important turning points in their democratic trajectory, and deserve special scrutiny during the coming year.
Czech Republic: October 2017 elections will see the rise or defeat of the populist and nationalist ANO party, which has been compared to the ruling parties in Hungary and Poland.
Denmark: The parliament is considering a series of bills that, if adopted, would further restrict immigrant and refugee rights and damage Denmark’s reputation for liberal values.
Ecuador: Voters will elect a successor to President Rafael Correa, whose crackdowns on political opposition, critical journalists, demonstrators, and NGOs have led to a steady decline in freedom during his tenure.
Iraq: As the battle to retake territory from Islamic State militants continues, the weak and fragmented government will face the challenge of reintegrating the Sunni minority population into the national system and containing the power of Shiite militias.
Kyrgyzstan: The term of President Almazbek Atambayev expires in late 2017, but recently approved constitutional amendments could pave the way for him to retain power by shifting to the prime minister’s seat.
Philippines: After his extrajudicial war on drugs claimed thousands of lives in 2016, President Rodrigo Duterte may continue his extreme policies with strong parliamentary backing.
South Africa: A weakened African National Congress will choose a new leader in 2017, and state institutions could be drawn into intraparty rivalries ahead of the ANC conference, testing the strength of the country’s democracy.
Tanzania: The next year will be a test of President John Magufuli’s authoritarian tendencies, which have already emerged through the government’s use of the Cybercrimes Act against critics and the passage of a new media law late in the year.
United States: Donald Trump’s unorthodox presidential campaign left open questions about the incoming administration’s approach to civil liberties and the role of the United States in the world.
Zimbabwe: Politicians and officials in the ruling ZANU-PF party will continue to jockey for position to succeed aging president Robert Mugabe against a backdrop of burgeoning popular protests and increasing economic woes.
The False Promise of Strongman Rule
However much they may appreciate the benefits of their own systems, observers in democracies sometimes watch with envy or admiration as foreign strongmen smash through obstacles to implement their desired policies.
But events in three key countries in 2016 illustrated once again that these bold enterprises often founder due to the very lack of checks and balances that initially seemed so advantageous.
Egypt
Egyptian president Abdel Fattah al-Sisi, who seized power in a 2013 coup, has been praised by some democratic politicians—especially those on the right—for toppling an unpopular Islamist incumbent and ruthlessly cracking down on both the former president’s peaceful supporters and an armed insurgency led by the Islamic State militant group. Sisi is held up as a promising partner in the fight against Islamist terrorism.
A closer look at his performance reveals not just a feckless and thuggish security apparatus that has failed to quell the insurgency, but also a pattern of corruption and economic mismanagement that is bringing Egypt to its knees. The ongoing violence and political repression have crippled the vital tourism industry. Billions of dollars in aid from the Persian Gulf monarchies have been wasted, partly on megaprojects of dubious value that enrich regime cronies. And in 2016 the government began implementing austerity measures in exchange for an emergency bailout from the International Monetary Fund, driving up prices for food staples and angering an already desperate population.
Venezuela
Former Venezuelan president Hugo Chávez earned foreign admirers—in his case on the political left—by nationalizing private industries, taking on the moneyed classes behind the country’s conservative political establishment, and redistributing wealth to the poor through a variety of housing, education, and social programs. He also denounced U.S. “imperialism” and used his country’s oil wealth to support likeminded governments across the region.
By 2016, the regime Chávez built, now in the hands of his chosen successor, Nicolás Maduro, was facing economic and political collapse. The national oil company had been hollowed out by corruption, political projects, and neglect under Chávez, long before the arrival of low global oil prices. The currency, weakened by the world’s highest inflation rates, made it difficult to import basic goods including food and medical supplies, leading to chronic shortages and repeated riots during the year. And Maduro, relying in part on the regime’s control of the courts, responded to an opposition victory in recent legislative elections by stripping the legislature of meaningful power and blocking a presidential recall referendum, effectively cutting off the only route to an orderly change of leadership.
Ethiopia
Ethiopia, ruled since 1991 by the authoritarian Ethiopian People’s Revolutionary Democratic Front (EPRDF), has long been a darling of democratic donors, who portray it as a haven of economic progress and stability in an insecure region. They effectively argue that the regime’s vigorous suppression of political dissent and media freedom is excusable given its proven ability to carry out ambitious development projects and deliver impressive rates of macroeconomic growth year after year.
However, protests that began in late 2015—in response to a controversial development project that would have expanded the capital into neighboring regions—grew throughout 2016. The security forces used deadly force, and demonstrators raised accumulated grievances including ethnic discrimination and long-standing exclusion from the political process. As many as 1,000 people may have been killed, and more than 11,000 were detained under a state of emergency declared in October. The protests were supported by many members of Ethiopia’s two largest ethnic groups, and there was a genuine risk at year’s end that the unrest could begin to unravel the EPRDF’s accomplishments in the economic and security spheres.
Breakdown of the Political Mainstream
One of the main casualties of the nationalist and populist wave that rolled over the world’s democracies in 2016 was the de facto two-party system, a traditional division of the political spectrum into two mainstream parties or coalitions of the center-right and center-left, which has long ensured stable government and a strong opposition in much of the free world.
Left in its place were dominant ruling parties with few checks on their power, fragmented parliaments with no governing majority, or an infusion of radical factions whose core constituencies gave them little incentive to moderate or compromise in the public interest.
Spain was without a fully functioning government for much of the year because major gains by two new parties, Podemos and Ciudadanos, denied a majority to both establishment parties—the conservative People’s Party and the center-left Socialist Party—and none of the four were able to form a coalition.
In Britain, the ruling Conservative Party effectively co-opted the positions of the upstart UK Independence Party as a result of the Brexit referendum, and took a more populist and nationalist direction under Prime Minister Theresa May. Meanwhile, the main opposition Labour Party’s shift to the left under leader Jeremy Corbyn caused internal rifts and appeared to dim Labour’s national election prospects, which were already badly damaged by the rise of the pro-independence Scottish National Party. The changes served to cement the Conservatives’ political dominance for the foreseeable future.
Germany’s ruling Christian Democrats, led by Chancellor Angela Merkel, were challenged from the right by the populist Alternative for Germany party, which gained ground in subnational elections. Right-wing nationalist factions continued a multiyear march from the fringe to the heart of governing coalitions elsewhere in Northern Europe.
The French Socialist Party was widely considered a lost cause as the country prepared for the 2017 presidential election, and the deeply unpopular Socialist incumbent, François Hollande, announced that he would not seek a second term. The election was expected to be a contest between hard-line conservative François Fillon and Marine Le Pen of the far-right National Front.
Even in the United States, home to the world’s most entrenched two-party system, challengers with minimal ties to their respective parties—Bernard Sanders and Donald Trump—contributed to major intraparty fractures during the presidential primary campaign. Trump’s eventual victory appeared likely to transform the Republican Party’s policy orthodoxy, though it remained unclear whether this would ultimately weaken or strengthen the Republicans’ hold on power.
Referendums and Democratic Fragility
A constant refrain among democracy advocates is that “democracy is more than just elections.” A truly democratic system includes a variety of other checks and balances that ensure freedom and resilience over time, such as a free press, independent courts, legal protections for minorities, a robust opposition, and unfettered civil society groups.
Referendums represent a radical reduction of democracy to its most skeletal form: majority rule. Too often, they are called in order to circumvent some obstacle thrown up by political or legal institutions—a failure by elected officials to reach consensus, for example, or a constitutional barrier that powerful actors find inconvenient. Whatever the intent, such referendums are an end run around the structures and safeguards of democracy.
The prominence of consequential referendums in 2016 could therefore be interpreted as another sign that global democracy is in distress.
Britain’s referendum on whether to leave the European Union—organized by Prime Minister David Cameron largely as a means of papering over deep rifts in his Conservative Party—has left the public sharply divided, and the government is still struggling to agree on a strategy to implement the outcome. In Italy, Prime Minister Matteo Renzi was forced to resign after voters rejected his political reform plans, as the debate shifted from the merits of the proposals to Renzi’s own popularity.
Colombian president Juan Manuel Santos put his peace agreement with the FARC rebel group to a referendum, hoping to end a decades-old civil war and overcome bitter opposition from conservatives. After the measure failed by a narrow margin, however, he made a number of revisions to broaden consensus and then passed the agreement through the legislature, effectively returning to the more adaptive, give-and-take methods of representative democracy.
Among the year’s other referendums were several examples in less democratic countries, which typically involved an incumbent leader seeking to extend his own power beyond constitutional limits. Azerbaijan’s Ilham Aliyev strengthened his authoritarian grip on the presidency through 29 constitutional amendments that won more than 90 percent approval in a tightly controlled plebiscite.
By contrast, popular Bolivian president Evo Morales lost a referendum that would have allowed him to seek a fourth term in office, underscoring the fact that many voters still value the checks and balances of democracy, even when it means limiting their own choices.
TheSkyNet, a community computing project, has gone online, with people encouraged to contribute spare CPU cycles to help radio astronomers process data.
Sponsored by the International Centre for Radio Astronomy Research (ICRAR), the Curtin University and the WA government’s Department of Commerce, the project will use thousands of PCs to form a distributed computing engine to scan data from telescopes and search for sources of radiation at radio wavelengths that could be coming from stars, galaxies and other objects throughout the universe.
ICRAR director, Professor Peter Quinn, said that theSkyNet project will raise awareness of the Square Kilometre Array (SKA) radio telescope project and complement the data processing of supercomputing facilities.
“TheSkyNet aims to complement the work already being done by creating a citizen science computing resource that astronomers can tap into and process data in ways and for purposes that otherwise might not be possible,” Professor Quinn said.
ICRAR’s Outreach and Education Manager, Pete Wheeler, recently told Computerworld Australia that the project will aid in the data processing of the forthcoming Square Kilometre Array (SKA) telescope.
“We will be running data sets on [SkyNet] ... so the researchers, as they ramp up to deal with bigger and bigger cubes of data, can overcome some of the challenges they need in order to start processing things like ASKAP [Australian Kilometre Array Pathfinder] data in the future,” he said.
The SKA will be the world’s largest telescope when it is complete and it will cost around $2.5 billion, which will be shared between institutions in 20 countries (including Australia, New Zealand, and countries in North and South America, Europe, Africa and Asia).
The SKA will have up to 50 times the sensitivity and 10,000 times the survey speed of current radio telescopes.
People can join TheSkyNet project at www.theskynet.org. | https://www.techworld.com.au/article/401259/theskynet_comes_alive/?utm_medium=rss&utm_source=tagfeed |
Every service related business wants to maximize the hours the staff bills or provides services to customers. When they are working, the business gets to bill for time and generate revenue. Maximum revenue means the best chance at creating the highest profit.
But how many hours can one staff member bill in a year? Well, if there are 40 hours in a week and 52 weeks in a year, then the answer is 2,080. But we know this isn’t realistic. So what is the maximum number of hours a staff member can bill for in one year?
The easiest way to answer this is work backwards from the maximum total time to the actual available time given the normal issues most employees and employers deal with in business. So, we’ll start out with 2,080 hours.
Holidays
This is the first set of adjustments all small businesses have to address. There are ten traditional holidays in a year as follows:
- New Year’s Day
- Martin Luther King Day
- President’s Day
- Memorial Day
- July 4th
- Labor Day
- Columbus Day
- Veteran’s Day
- Thanksgiving
- Christmas
Now this doesn’t mean you are required to provide them as holidays, but they are traditional days off for most employees. So let’s assume that as an owner of a business you have elected to provide eight of these days. That is 64 hours of unavailable time for your employee.
Religious & Traditional Holidays
No matter where you live or what religion you practice, these days exist. Most employees enjoy Christmas Eve off, the Friday after Thanksgiving, Good Friday, and some local holiday. In the south, they have a Lee-Jackson Holiday. In the north, it is customary to have St. Patrick’s Day off. So every region in our nation deals with this issue. So for the sake of this article, let’s assume the employer provides three of these days per year. This equates to 24 hours.
Vacation/Sick
Most employers provide two weeks of vacation per year plus five days of sick leave, which adds up to as much as 120 hours. In reality, most employees are not sick for the full five days, so for the purpose of calculating availability we will count 100 hours per year as unavailable.
So where do we stand right now? The formula is this:

Maximum Time Available:               2,080
Holidays (8 Days * 8 Hrs.):             (64)
Religious/Traditional (3 Days):         (24)
Vacation/Sick:                         (100)
Physical Attendance Availability:     1,892 Hours per Year
Each employee should physically be available to the employer for 1,892 hours per year. But this doesn’t mean (s)he will actually work that number of hours for you. Assuming there is enough work and efficiency is perfect in your business, will the employee actually work and bill for 1,892 hours? The answer is no.
There are other forces at work that take away from availability. There are training, administration, and tooling requirements that have to be met.
Training
In my profession, we are required to attend no less than 40 hours of training per year. In the professional world, it is known as Continuing Professional Education. For others, the time required is different. For most licensed types of businesses (professional and occupational type licenses) the training ranges from 20 hours to 60 hours per year. For others, you can figure on about 20 hours per year. If your employees are relatively young, they need more training. If your industry utilizes technology, then given the constant change in technology you could be looking at 60 hours per year. So for this article, a conservative 25 hours per year is used.
Administration
Typically the human resources director will interact with the staff several times per year. You have payroll documentation issues, employee evaluations and benefits education. So for a conservative number, this article uses 8 hours.
Tooling
Every business has tooling issues. Even the office environment has tooling requirements. The more sophisticated your business, the more time it takes to tool the staff and educate them on the changes. In the office environment, computers and printers are constant issues, especially the software changes. In the auto repair industry, the MATCO Tool Truck pulls up once a week to sell its tools to the mechanics. In electronics, there are always new meters or scopes to review and consider. Again, each service-related industry has its own set of dynamics as it relates to tooling, but it doesn’t change the fact that the staff has to spend some of their work year dealing with this issue. So to be reasonable in determining the number of hours to deduct, this article assumes 8 hours per year for tooling.
From the physical availability number you subtract another 41 hours associated with the above items. Now the maximum available billable hours per employee per year are 1,851 hours. This equals 89% of the total hours available per year per employee (2,080). In addition, this assumes relatively conservative estimates for the above issues.
In reality it will be even less because of employee productivity, customer interaction issues, administrative compliance (for invoicing, warranties, and documentation) and other industry-related requirements. For the employer, 1,851 hours is a good starting point and measuring tool for productivity. Over time the business should be able to calculate a reasonable productivity standard for performance based on this number. Make adjustments for your business using the above as a guideline to determine the actual maximum billable time per year for your employee. Act on Knowledge.
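To make the arithmetic above easy to rerun with your own policies, here is a minimal sketch; the category names and default values simply restate the figures used in this article, so substitute your own:

```python
# Minimal sketch of the availability math above. The defaults restate
# this article's figures; substitute your own firm's policies.

MAX_HOURS = 52 * 40  # 2,080 hours per year

deductions = {
    "holidays (8 days)": 8 * 8,               # 64
    "religious/traditional (3 days)": 3 * 8,  # 24
    "vacation/sick": 100,
    "training": 25,
    "administration": 8,
    "tooling": 8,
}

available = MAX_HOURS - sum(deductions.values())
print(f"Maximum billable hours per year: {available}")       # 1851
print(f"Share of total hours: {available / MAX_HOURS:.0%}")  # 89%
```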
If you have any comments or questions, e-mail me at dave (insert the usual ‘at’ symbol) businessecon.org. I would love to hear from you. If interested in my services as an accountant/consultant; click on ‘My Services‘ in the footer of this article. | https://businessecon.org/2013/01/14/labor-availability/ |
An organization has the potential to be a vibrant, knowledge commons. As in the age-old village where people developed a collective identity, fostered meaningful relationships, participated in engaging community dialogues, and helped each other solve problems, an organization can be a web of conversations and supportive networks. Rooted in the mutual commitment to the success of the whole, it can be a diverse community that applies its collective intelligence to achieving its mission, serving its customers, and obtaining competitive advantage in a socially responsible and sustainable manner.
As explored in previous posts, the workplace is shifting from information processing to strategically managing knowledge with communities of practice and interactive networks that foster innovation—all pivotal elements of organizational learning and decision making.
In light of this perspective, an organization’s information technology (or IT) department and its chief information officer (or CIO) not only play a vital role in the organization’s overall operation, but also in devising the organization’s communication dynamics and knowledge-sharing culture. In addition to maintaining an organization’s technical infrastructure, IT professionals must also focus on establishing social networks that generate knowledge, promote learning and enable innovative decisions be implemented, which has been brought on by the current shift to a knowledge-sharing, collaborative work environment.
The role of these “second generation” IT professionals is to develop cyber-based processes that tap into and enhance an organization’s intelligence. Such IT specialists strive to design collaborative virtual environments and knowledge management systems that create conversational networks, support teamwork, and foster experiential learning. The challenge, though, as Kristina Höök, David Benyon, and Alan J. Munro point out in 2003’s Designing Information Spaces: A Social Navigation Approach is to create information places that are authentic meeting places that facilitate human interaction and learning.
IT professionals, then, are increasingly becoming information systems (or IS) specialists who are communication and knowledge-network architects. Such a viewpoint emphasizes their vital role in fashioning the organization into a knowledge commons characterized by reflective thinking and dialogue rooted in interactive relationships. As information systems managers, network administrators and database managers, IS professionals are information gatekeepers who enable information to openly travel throughout the organization’s communication channels. As knowledge management facilitators, they are architects of the cyber-social networks and communities that enable dialogue and innovative thinking.
2. enable information to flow openly.
Information systems design focuses less on data storage and access, and more on enabling employees to form work relationships that are platforms for them to ask questions, identify answers, dialogue, analyze, advise, and provide feedback to each other.
Issues regarding locale, distance, time zones, gender, language, cultural heritage, job position, professional status, and organizational politics must be addressed so they are not hindrances to the networking process.
Communication and information systems implemented must fit the users’ work habits, preferred communication styles, learning styles, technology level, and particular job needs.
As is easily observed, in most cases, employees are knowledge workers. Because of this, the role of the CIO and the IT department are strategic. While it is important that the chief executive officer promote an organizational culture that values knowledge and learning and the human resource management director develops leaders that enable knowledge sharing and creative thinking, the CIO must envision and implement a human-centric information systems infrastructure that can be the technical backbone for the organization’s communication and knowledge generation avenues. All three of these leaders, in conjunction with their staffs, must work as partners in being architects of a knowledge ecology that flourishes in being an organizational knowledge commons. Together they enable the organization to be an innovative open workplace environment comprised of knowledge sharing processes and networks. | https://www.saybrook.edu/blog/2012/07/12/envisioning-it-professionals-knowledge-network-architects/ |
The 2018-19 school year marks a significant milestone—the 5th anniversary of the Verizon Innovative Learning schools program. Our shared mission, to close the Digital Learning Gap, remains steadfast, as we continue to grow from our first eight schools to our network of 100.
Verizon’s commitment to build brighter futures for millions of kids has given us the amazing opportunity to work with innovative teachers and leaders across the country. Giving students devices and internet connectivity on and off school campus solves one of the major issues of creating equitable environments for all. Additionally, the VILs program ensures teachers are supported and trained by school coaches so they are confident and ready to leverage technology in powerful ways. This combination of access + professional learning is working: students are showing 2x improvement in reading and 3x improvement in math.
I’m excited to welcome Cohort 5 into the VILs community.
The fall rollout begins a transformative journey for each participating school and the surrounding community. School coaches will lead teachers through professional learning that focuses on effectively leveraging technology to address our eight student outcome goals. These are ambitious but achievable goals, and we look forward to continuing to work across our VILs community on our shared mission of closing the Digital Learning Gap and preparing all students to thrive. | https://digitalpromise.org/2018/08/30/growing-100-schools-year-5-vils/ |
WW International, Inc. (NASDAQ:WW) – Analysts at Jefferies Financial Group dropped their Q1 2021 earnings per share estimates for shares of WW International in a research note issued to investors on Monday, February 8th. Jefferies Financial Group analyst S. Wissink now forecasts that the company will post earnings per share of $0.00 for the quarter, down from their prior estimate of $0.27. Jefferies Financial Group also issued estimates for WW International’s Q2 2021 earnings at $0.82 EPS, Q3 2021 earnings at $0.77 EPS, Q4 2021 earnings at $0.57 EPS, FY2021 earnings at $2.16 EPS, Q1 2022 earnings at $0.11 EPS, Q2 2022 earnings at $0.87 EPS, Q4 2022 earnings at $0.62 EPS and FY2022 earnings at $2.50 EPS.
Several other research analysts have also recently weighed in on the stock. DA Davidson upped their price target on shares of WW International from $33.00 to $38.00 and gave the stock a “buy” rating in a report on Wednesday, December 9th. Citigroup reduced their target price on WW International from $35.00 to $32.00 and set a “buy” rating on the stock in a research note on Friday, November 20th. Wolfe Research began coverage on WW International in a research note on Tuesday, December 15th. They issued an “outperform” rating and a $36.00 target price on the stock. Oppenheimer reissued a “hold” rating on shares of WW International in a research note on Wednesday, December 2nd. Finally, Zacks Investment Research raised WW International from a “sell” rating to a “hold” rating in a research note on Saturday, January 16th. One investment analyst has rated the stock with a sell rating, five have issued a hold rating and nine have assigned a buy rating to the company. The company currently has a consensus rating of “Buy” and a consensus price target of $31.00.
Shares of WW stock opened at $27.64 on Wednesday. The company’s fifty day simple moving average is $25.41 and its two-hundred day simple moving average is $24.78. WW International has a 1 year low of $9.75 and a 1 year high of $39.75. The stock has a market capitalization of $1.88 billion, a price-to-earnings ratio of 22.84, a PEG ratio of 1.04 and a beta of 2.88.
Several institutional investors have recently bought and sold shares of the company. New South Capital Management Inc. boosted its holdings in shares of WW International by 80.4% during the fourth quarter. New South Capital Management Inc. now owns 1,724,928 shares of the company’s stock worth $42,088,000 after purchasing an additional 768,560 shares during the period. Victory Capital Management Inc. lifted its holdings in WW International by 48.0% in the third quarter. Victory Capital Management Inc. now owns 810,258 shares of the company’s stock worth $15,290,000 after buying an additional 262,863 shares during the period. Bank of New York Mellon Corp lifted its holdings in WW International by 8.2% in the fourth quarter. Bank of New York Mellon Corp now owns 564,263 shares of the company’s stock worth $13,767,000 after buying an additional 42,813 shares during the period. Russell Investments Group Ltd. lifted its holdings in WW International by 21.7% in the fourth quarter. Russell Investments Group Ltd. now owns 272,181 shares of the company’s stock worth $6,641,000 after buying an additional 48,493 shares during the period. Finally, Eidelman Virant Capital lifted its holdings in WW International by 452.8% in the third quarter. Eidelman Virant Capital now owns 133,230 shares of the company’s stock worth $2,595,000 after buying an additional 109,130 shares during the period. Institutional investors own 80.08% of the company’s stock.
In other news, Director Oprah Winfrey sold 312,142 shares of the stock in a transaction dated Monday, December 7th. The shares were sold at an average price of $31.10, for a total transaction of $9,707,616.20. Following the completion of the sale, the director now directly owns 4,917,471 shares in the company, valued at $152,933,348.10. The transaction was disclosed in a document filed with the SEC, which is accessible through the SEC website. Over the last 90 days, insiders have sold 1,376,440 shares of company stock valued at $40,027,530. Company insiders own 12.48% of the company’s stock.
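The filing arithmetic can be verified the same way: the transaction total is shares multiplied by the average price. A small sketch (figures come from the paragraph above; `Decimal` is used only to avoid float rounding):

```python
from decimal import Decimal

avg_price = Decimal("31.10")
shares_sold = 312_142
shares_held_after = 4_917_471

print(avg_price * shares_sold)        # 9707616.20   (total transaction value)
print(avg_price * shares_held_after)  # 152933348.10 (value of remaining stake)
```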
WW International Company Profile
WW International, Inc. provides weight management products and services worldwide. The company operates in four segments: North America, Continental Europe, United Kingdom, and Other. It offers a range of nutritional, activity, behavioral, and lifestyle tools and approaches products and services. The company also provides various digital subscription products to wellness and weight management business, which provide interactive and personalized resources that allow users to follow its weight management program via its Web-based and mobile app products, including personal coaching and digital products; and allows members to inspire and support each other by sharing their experiences with other people on weight management and wellness journeys.
Death closes all: but something ere the end,
Some work of noble note, may yet be done,
Not unbecoming men that strove with Gods.
— Alfred, Lord Tennyson, “Ulysses”
Fate’s threads entangle all in an infinite web, unbeknownst to the players of the tragedy. What happens when a character is aware of his fate and acts towards preventing it? Can destiny, with all its predetermined points in time, be altered? Mary Stewart’s The Wicked Day (2003) tells the story of Mordred as he struggles against the prophecy of King Arthur’s death at his hands.
Mordred’s life begins and ends with Arthur. Oblivious to their family connection, Arthur sleeps with his half-sister Morgause. From their incestuous union, Mordred is born. Following Merlin’s prophecy of Mordred causing the death of Arthur and his kingdom, Arthur gathers all the babies born in the same month as Mordred and casts them away in a ship to perish. However, Mordred survives. He is raised by a fisherman and his wife, and later on, by Morgause herself. Despite Morgause’s attempts to turn Mordred into an instrument of her will and work against Arthur, Mordred resists and chooses to act according to his own unconditional love and loyalty to Arthur.
Stewart portrays Mordred as a person who is ambitious but also holds unconditional love and loyalty for Arthur. Well-loved by Arthur and Guinevere, he is soon recognised in the court’s eyes as Prince Mordred. The title distinguishes him from Gawain and his other half-brothers, as his intelligence and cool demeanor set him apart from their love of discord and violence. Mordred serves Arthur to the best of his ability and seeks to protect Arthur by trying to prevent his fate.
He seeks Nimue (Merlin’s successor) and asks for her counsel. He confides to her his familial love for Arthur and his intentions, where “[he] would not willingly bring evil to the King” (225) since he feels indebted to Arthur for knowing “[the] prophecy from the start, believing it, [and] yet, [taking Mordred] into the court and [accepting him] as his son” (225). However, Nimue tells him that she cannot help him avert a predetermined destiny. Frustrated, Mordred draws his dagger and threatens to kill himself, asking “[w]ould [it] not avert the fate that [she] say[s] hangs in the stars?” (225). In response, Nimue simply replies that “Fate has more than one arrow” (226), and killing himself would only bring Arthur closer to death, since the prophecy does not mention how “Arthur would meet his doom by [his] hand or even by [his] action” (226), only through Mordred’s existence. Furthermore, Nimue mentions that an alternate course of action might have had fatal consequences of its own. For example, if Arthur had succeeded in slaying Mordred as an infant, “it might have happened that men would have risen against him for his cruelty” (226) and he would have been killed in the process.
Stewart plays with the concept of destiny and predetermined events through the flexibility of fate and through free will. She also brings up the notion of perception, arbitrary chance and coincidence influencing the progression of events to their final conclusion.
When Arthur leaves to fight against the Roman army, Mordred acts as regent in his place. Prior to his departure, Arthur tells Mordred of the treaties he must negotiate with the Saxons, their long-time enemy, if he happens to die in battle. Moreover, in the event of his death, Arthur grants Mordred the right to inherit and rule his kingdom. Wounded in battle, Arthur sends a messenger to Mordred and the Queen. However, unbeknownst to them, the messenger who bears the news of Arthur’s death is an agent of the Roman army who replaced Arthur’s messenger. As a result, Mordred follows through on the treaty negotiations and assumes the throne. When Arthur discovers Mordred’s actions through a letter, he misinterprets Mordred’s intentions and actions but remains reserved in judging Mordred. Gawain, Mordred’s half-brother, is convinced of Mordred’s evil intentions and tries to goad Arthur into fighting him. As they enter Saxon territory by ship on their way home, Arthur and Gawain are attacked by Saxon forces, unaware of the new treaty made by Mordred and their leader. When Gawain dies from his wounds, Arthur grieves for him and is persuaded by his dying words of Mordred’s treachery.
By believing in the truth of the prophecy, Arthur hastens towards his prophesied death. However, the wise Merlin’s position on the matter of destiny and the role of free will is in opposition to Nimue’s understanding of destiny. He believes destiny can be overwritten through actions and active choices toward preventing it. On the day before Arthur and Mordred’s treaty negotiation, Merlin visits Arthur in his dream. Merlin believes they have “let [themselves] be blinded by prophecy…[their] own follies, not the gods, foredoom [them]” (396). Merlin tells him the nature of destiny relies on free will and choice, where men can refuse to concede to the fate assigned to them by the gods and take their own destiny in their hands. He counsels Arthur to make peace with Mordred, resist his own fate and “lay down his sword [,]…[t]ake no other counsel but talk with [Mordred], listen, and learn” (396). Merlin reassures him of Mordred’s intentions, guaranteeing he will not carry out the prophecy by his own will, and tells Arthur of Mordred’s numerous attempts to seek Merlin out, and of how Mordred almost killed himself to save Arthur from him. Lastly, Merlin predicts Arthur and Mordred “may hold Britain safe between [their] clasped hands” (396) if Arthur trusts in and makes peace with Mordred. But if they fail, Arthur’s kingdom will be lost forever.
However, fate has a strange way of fulfilling itself, beyond the reach of free will, through moments of chance and coincidence. Arthur follows Merlin’s advice, clears up any misunderstanding and makes peace with his son, once again promising Mordred the right to rule after his death. As Arthur and Mordred reach the conclusion of their truce talks, one of Arthur’s knights steps on an adder. By instinct, the knight draws his sword and slays it. Both armies mistake the knight’s drawing of his sword for the signal to fight, and take the truce to have ended in failure. The armies rush towards each other, Arthur and Mordred swept away in the confusion. Meeting once more, Arthur and Mordred face each other as enemies. Thus, father and son sever the binding bond of blood, love lost in the wicked day of destiny.
Works Cited
Tennyson, Alfred, Lord. “Ulysses.” Poetry Foundation, https://www.poetryfoundation.org/poems/45392/ulysses.
Stewart, Mary. The Wicked Day. Eos, 2003.
Images
Tarot Magician Magic by Rirriz via PixaBay. License: CC0 1.0 Universal.
Crystal Ball by George Hodan via PublicDomainPictures. License: CC0 1.0 Universal.
Image via MaxPixel. License: CC0 1.0 Universal. | http://www.ubcenglish.com/destiny-and-free-will-the-wicked-day-of-chance/ |
Professor Jane Gunn, Dr Gail Gilchrist, Melina Ramp, Darshini Ayton, Maria Potiriadis
Primary Care Research Unit, Department of General Practice, University of Melbourne
$467,000
beyondblue Victorian Centre of Excellence
2008
The diamond consortium involved a multidisciplinary team with expertise in complex primary care and mental health research, evaluation and clinical practice. The team developed a comprehensive program of primary care mental health research and increased the research capacity for this work in Victoria. The final report presents a summary of the three years of activity.
The consortium funding has provided support for:
Key findings
The project produced a range of outcomes. The consortium successfully established an active network of over 100 researchers with an interest in depression-related research in primary care.
The consortium focused on four key activities:
A website was established at www.diamond.unimelb.edu.au to provide details on the aims, structure and function of the consortium. It includes information about research in progress, publications, and future and past events.
Twelve consortium newsletters have been circulated widely throughout Australia and internationally, describing the progress of the diamond longitudinal study and related projects, profiling consortium members and their work, highlighting key consortium events and advertising relevant seminars and conferences.
Due to the delay in publishing research findings in peer-reviewed journals, the project used radio interviews, articles published in primary care and local newspapers, and presentations at forums, conferences and meetings to disseminate information about the diamond study and other consortium activities.
Consortium members’ contributions to state and national policy are summarised in the consortium’s final report. They include organising seminars and panel discussions, presentations to mental health practitioners and involvement in the development of the 2007 National Survey of Mental Health and Wellbeing.
The consortium developed and implemented a research program that began with the diamond study pilot. This enabled the team to refine a cost-effective method for recruiting people attending general practice to a study on depression and emotional well-being.
The pilot study provided a research capacity building opportunity for two medical students, and the development of study methods and tools has influenced a number of other research programs. Two other studies (the DIALOG and Weave studies) have been able to recruit participants via the diamond screening process. Five research higher degree students are using data collected in the diamond study.
The pilot study findings were also used to inform the development of general practice-based models for depression care through the Re-order study.
The diamond longitudinal study involves 30 general practices and almost 800 people experiencing depressive symptoms. Participants agreed to document and map their experiences of depression and the healthcare system over time. This is the largest prospective observational study of depression care undertaken in Australia. It has attracted considerable international attention and led to the use of a common set of measures in a number of other studies.
The funding from beyondblue enabled the consortium to allocate seed funding to support a number of projects related to depression in primary care, including:
The diamond consortium has built research capacity in primary care mental health by supporting the careers of early career researchers, young researchers and research students. The consortium also hosted 17 seminars and six workshops on related topics.
Implications for policy, practice and further research
The diamond consortium’s activities funded under this project have made a substantial contribution to the coordination and communication of depression-related research activities in Australia, and have assisted with the open exchange of information and sharing of resources.
The consortium’s activities have helped map the pathways to and from mental health care for people experiencing depression and identified barriers and facilitators to effective models of primary care mental health. They have helped to develop and test models of care based on a systems approach, and built research capacity in primary care mental health. | https://www.beyondblue.org.au/about-us/research-projects/research-projects/diamond-consortium-building-capacity-in-primary-mental-health-care-research-and-evaluation
Differentiating instruction is essential to meeting the wide range of learning needs in intermediate classrooms. In this course, teachers demonstrate time-effective, practical strategies to address the range of readiness levels and learning needs in grades 3-6 classrooms and stretch all students to higher levels of thinking and achievement.
In this course you will see how classroom teachers in grades 3-6 tier assignments to maximize student achievement with appropriate levels of challenge. In addition, you'll see how to make the most of students' multiple intelligence preferences to promote a deeper understanding of content area studies.
You will learn how to:
"I thought this course was extremely helpful. The embedded videos showing the various strategies in action were invaluable and made this course far more meaningful to me than any of the numerous articles or other differentiated instruction PD that I have read/participated in. THANK YOU!"
– F. Williams
"I found this course to be informative and organized and the resources very useful."
– R. Rozzi
"Although I have been teaching for quite some time this course was so useful and the strategies will be used right away."
– C. Castillo
"The class gave me some great ideas I could put into practice right away. I loved being able to take the class at my own pace and when and where I wanted to. I will definitely take another online course."
– M. Sams
"The videos really enhanced the course seeing words put into action addressed my learning style." | https://www.ber.org/store/products/Grades-3-6-Practical-Ways-to-Differentiate-Instruction-Using-Tiered-Assignments-and-Tapping-the-Power-of-Multiple-Intelligences.aspx |
Growth of the Philippine economy is likely to be hindered by a shortage of water in the summer season, the National Economic and Development Authority (NEDA) said.
At a year-end briefing at the NEDA headquarters in Manila on Wednesday, Socioeconomic Planning Secretary Ernesto Pernia said a water supply shortage is likely again, given setbacks in the two water concessionaires’ plans.
“Or maybe they’ll be pressured to deliver better [services] in the coming year,” Pernia said, referring to west and east zone concessionaires Maynilad Water Services Inc. and Manila Water Company Inc., respectively.
To recall, the Department of Justice, on the order of President Rodrigo Duterte, reviewed the concession agreements the two concessionaires signed with the Metropolitan Waterworks and Sewerage System (MWSS) in 1997, amid “disadvantageous” provisions in them.
One such provision is the prohibition against government interference in rate-setting. Another is for indemnity in case of such interference.
On Wednesday, Manila Water told the Philippine Stock Exchange it had received a letter from MWSS informing it of the revocation of its contract extension. The concessionaire has three days to reply.
NEDA Undersecretary Adoracion Navarro, for her part, expects a supply shortage to have a minimal impact on the economy, since the Philippines is now experiencing the tail end of the El Niño phenomenon.
“It will have minimal impact…unlike during the peak of the El Niño,” she said in an interview with the Philippine media.
“In August, we already declared that the El Niño phenomenon had ended…. Tail-end effects lang ito (These are only tail-end effects),” she added.
Navarro said any possible shortage has nothing to do with the concessionaires’ ongoing battle with the government.
“It’s because of the Angat Dam’s water level. We are not reaching the expected water level by year-end. [It’s] not because of the water concessionaires,” she added.
The National Water Resources Board (NWRB) has set a 212-meter target for the dam’s water level by the end of the year to meet the water supply and irrigation requirements of Metro Manila and neighboring provinces until next summer.
For the full year, Pernia said the government’s growth target of 6 to 6.5 percent was “achievable” as the country’s gross domestic product picked up to 6.2 percent during the third quarter of the year.
Meanwhile, he said that an expected increase in consumer spending, which accounts for two-thirds of the year’s growth, will boost the country’s fourth-quarter GDP growth.
Besides water shortage, possible risks to economic growth are the trade war between the United States and China, natural disasters, volatility in oil prices, and possible delays in infrastructure projects. | https://www.aseaneconomist.com/philippine-water-woes-to-dampen-growth-next-year/ |
No amount of description or praise can do justice to Michael Cunningham’s words. My own copy of The Hours is underlined and annotated with passages that I come back to year after year. Michael Cunningham won the Pulitzer Prize for this triptych, which is at once an homage to Mrs. Dalloway and a triple portrait of three women in three different time periods struggling for meaning in their own lives.
Told in alternating points of view, the novel opens with a prologue of Virginia Woolf’s final journey into the river, weighed down by the rocks in her pockets. With this first dose of introspection, we continue strolling through Woolf’s mind as she wrestles with her private thoughts, slowly slipping away from the family around her and reemerging as she takes a pen to the blank page to craft Mrs. Dalloway. Similarly, we intimately meet two other women at varying stages of life. Clarissa, a middle-aged woman in late-twentieth century New York City, is tasked with hosting a friend’s party, whose preparations evoke memories of past relationships and unexpected interactions. Laura, a young wife and mother living in Los Angeles in 1949, must bake a cake for her husband’s birthday, keep the house clean, and remain at all times a devoted caretaker for her perfect family.
Cunningham’s brilliance lies in homing in on the fluidity of daily life, and how we continue to survive in spite of its complexities—or because of them. The activities of the three women are examined so simply that we can see the act for what it is, the meanings that emerge when we add our thoughts to a moment and watch it suddenly become something else entirely. Life is bigger than us, and yet we repeatedly try to fit ourselves into it. “It is almost perfect, it is almost enough, to be a young mother in a yellow kitchen touching her thick, dark hair, pregnant with another child,” Laura tells herself.
A master of his craft, Cunningham casts his characters in light and shadow, exposing raw emotions and hardened questions, communal experiences and expansive thoughts often left unspoken. His prose acknowledges our search for answers, for direction—the right amount of happiness, the right way to raise a child, the right life to settle into. He shows us that it’s okay—and in fact, expected—to attribute a mixed set of emotions to similar moments, to succeed and fail, to know that you want a different kind of life but have no idea what that should be. His characters put deeper thoughts about friendship and motherhood, loving and dying into better words than we could ever hope to phrase for ourselves.
The women balance each other, looking at the same world with different degrees of fervor—Clarissa, loving life with all its flaws, examining the way we fight so desperately to live, love, and thrive. Laura, sitting on the edge of a form of happiness she’s not sure she’s ready to accept. Virginia, whose choice to write the powerful novel Mrs. Dalloway and then end her own life, brings a contradictory color to her sections, mirroring the range of emotions all three women grapple with.
As the title suggests, time is a constant theme, a subject that can be debated and analyzed and spun a thousand different ways and still beg further investigation. Unraveling time in the context of death makes way for commentaries on debilitating sicknesses and their lasting effects, diseases of mind and body, and of pride, deceit, and fear.
Woolf’s novel Mrs. Dalloway is the object that connects them all; Woolf contemplates Mrs. Dalloway’s fate as she writes, Laura immerses herself in the story as she reads, and Clarissa embodies the title character. A reflection of how easy it is to lose ourselves to expectations, to other fictional worlds and ideas; what we look like as ourselves, and what we look like next to one another. Roles portrayed in isolation, in groups, against responsibility, tragedy, and love—all different versions of ourselves.
You’ll want to engage with these characters in the same way you do your own thoughts, and you’ll leave with something you hadn’t known you needed. The hours you spend reading this book will merge with the hours still standing before you, empty yet full.
Sarah Woodruff is an international publicist at HarperCollins. | https://offtheshelf.com/2015/10/the-hours-by-michael-cunningham/ |
Patents and technological innovations drive progress and are important assets for any company. Effective advice must take into consideration the requirements of all related areas of law, in addition to traditional elements, such as R&D, intellectual property strategies, employee invention law, technology transfers, the enforcement of intellectual property rights and litigation. Our interdisciplinary approach ensures that all relevant aspects are taken into account, including corporate and tax frameworks and antitrust and State aid law issues.
We provide advice on patent and technology law in the following focus areas:
Strategic patent advice
A patent strategy should be more than merely a registration strategy. A good patent strategy will mirror the innovation strategy of the company, form a significant element of the overarching mid- and long-term strategies of that company, and be an important indicator of the company’s competitiveness. Any successful patent strategy will not only align with the corporate strategy, but will also include essential factors, such as R&D structures, efficient management of employee inventions in compliance with the relevant laws, functioning internal and external communication processes and qualified and motivated employees.
Providing strategic patent advice is a focus of our work. We analyse the innovation and patent strategies derived from the corporate strategies of our clients. We develop new R&D structures and rules for the management of employee inventions and amend existing ones, and provide support in the improvement of communication processes. As part of the analysis of portfolios and products, we identify and rectify gaps in IP rights and work with clients to develop application structures that are appropriate to both the market and the competitive environment. Our analyses help determine the added value of technical IP rights for companies, and we provide clients with the support they need to make the best use of these rights. We generate alternative exploitation options for IP rights, and assist with finding contractual partners and with contractual negotiations.
Employee invention law
Every tech-savvy, innovative company is faced with the challenge of satisfying the requirements of employee invention law. In practice, these requirements are often quite difficult to manage. The difficulty lies in establishing internal company and group processes and structures, which both satisfy the legal requirements and sufficiently take into consideration the existing corporate culture.
We can advise you on the establishment of internal company processes, provide training on how best to handle both employee inventions and the employee inventors themselves, work with you to compile guidelines - tuned to your corporate strategy - on managing invention disclosure, and develop remuneration schemes. In the event of a dispute with an inventor, we have the expertise to help you reach an out-of-court settlement and, should the case escalate, we represent you in arbitration or before the relevant courts.
Managing legal disputes / Litigation
Our expertise and experience with patent and technology disputes is invaluable. This includes the enforcement and defence of claims related to technical IP rights, software copyright, design and the complementary protection of related rights for technical products under the law of unfair competition. We have particular expertise with interim injunctions and with procedures to secure evidence (so-called inspection proceedings).
Enforcing the rights of our clients and defending these rights against attacks, and coordinating the enforcement or defence - even across borders - are part of our daily work. Our lawyers regularly appear before all major courts in Germany and are well acquainted with the respective customs. We execute border seizure procedures in cooperation with customs officials. We also safeguard your trade fair activities, prevent patent infringements at trade fairs and exhibitions, and work with authorities to organise the seizure of infringing goods.
We have significant experience with securing evidence in the run-up to the enforcement of claims, especially with the initiation and execution of inspection proceedings in cooperation with technical experts, bailiffs and the police. This is especially pertinent and effective in the case of alleged infringements of working or manufacturing processes that are patent protected, copyright infringements involving computer programmes and source code changes, and the unauthorised use of trade and business secrets (e.g. technical drawings). In most parts of Germany, such procedures for securing evidence in summary proceedings are based on “case law”. The successful conclusion of a dispute during inspection or other interim proceedings therefore requires tact and knowledge of both the requirements of the respective courts and the pitfalls of execution. This, in turn, requires experience and constant practice. We can offer swift assistance when competitors infringe your rights or you suspect that they have been infringed.
Protection of know-how
Much of our work in this field involves advising and representing companies in relation to the protection of know-how, especially in connection with the illegal exploitation of know-how by third parties and the establishment of corporate protection strategies and mechanisms.
Our team has the depth and experience necessary to coordinate and enforce civil and criminal claims swiftly and simultaneously, preventing and penalising the unlawful exploitation of know-how. We advise in relation to know-how analyses and work with clients to develop protection concepts. We review contracts and assist with their drafting. In addition, we provide training for employees and division heads.
Technology transfers
We have successfully advised our clients on technology transfers for many years.
Our experience in this area includes advising on the establishment of R&D intensive joint ventures, during contractual negotiations on the licencing-in or licencing-out of know-how and IP rights, with respect to the sale or purchase of companies, and on the structure of licencing and cooperation agreements. Our team also has considerable experience with M&A transactions involving significant technology.
Law & Technology Day
In the area of patent and technology law, every year in February BEITEN BURKHARDT, in cooperation with the patent law firm Horn Kleimann Waitzhofer, organises the Law & Technology Day. The event provides technology-oriented companies, especially German medium-sized companies, with a platform for networking and professional exchange. In addition to recent developments in patent and technology law, the event focuses on practical relevance.
The Law & Technology Day combines a professional transfer of knowledge with informal exchange in a familiar setting.
We look forward to seeing you at the coming Law & Technology Day in February 2019.
Contacts
Dr Sebastian Heim
Lawyer, LL.M., Licensed Specialist for Intellectual Property Law
Partner
Experts
Tarik Bouhabila
Lawyer, LL.M., B.Eng. | https://www.beiten-burkhardt.com/en/areas-of-competence/patent-technology-law |
An extensive study of the mesh requirements when simulating unsteady crosswind aerodynamics for industrial applications is conducted and reported in this article. Detached-Eddy Simulations (DES) of a simple car geometry under headwind, steady crosswind and time-dependent wind gust are analysed for different meshes and flow cases using a commercial software, STAR-CD. The typical Reynolds number of the cases studied is 2.0 × 10^6 based on the vehicle length. Mesh requirements for capturing the time development of the flow structures during a gust are provided. While respecting these requirements, the aerodynamic coefficients can be reliably calculated. Using turbulence methods like DES to resolve the flow scales provides significant insight for designing a ground vehicle and, given the reasonable computational times involved, can be incorporated into a design process in the near future.
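For scale, the Reynolds number is Re = U L / ν, so the quoted value fixes the product of speed and length once a viscosity is chosen. A back-of-envelope sketch in Python; only Re comes from the abstract, while the air viscosity and the candidate lengths are assumptions for illustration:

```python
NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s (approx., at ~20 °C; assumed)
RE = 2.0e6       # Reynolds number based on vehicle length, from the abstract

def freestream_speed(length_m: float) -> float:
    """Freestream speed U (m/s) that yields RE for a given vehicle length."""
    return RE * NU_AIR / length_m

for L in (0.6, 1.0, 4.0):  # model-scale and full-scale lengths, assumed
    print(f"L = {L:3.1f} m -> U = {freestream_speed(L):5.1f} m/s")
# L = 0.6 m -> 50.0 m/s; L = 1.0 m -> 30.0 m/s; L = 4.0 m -> 7.5 m/s
```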
Ground vehicles are subjected to crosswind from various origins such as weather, topography of the ambient environment (land, forest, tunnels, high bushes...) or surrounding traffic. The trend of lowering the weight of vehicles imposes a stronger need for understanding the coupling between crosswind stability, the vehicle external shape and the dynamic properties. Means for reducing fuel consumption of ground vehicles can also conflict with the handling and dynamic characteristics of the vehicle. Streamlined design of vehicle shapes to lower the drag can be a good example of this dilemma. If care is not taken, the streamlined shape can lead to an increase in yaw moment under crosswind conditions which results in a poor handling.
The development of numerical methods provides efficient tools to investigate these complex phenomena that are difficult to reproduce experimentally. Time-accurate and scale-resolving methods, like Detached-Eddy Simulations (DES), are of particular interest, since they allow a better description of unsteady flows than standard Reynolds-Averaged Navier-Stokes (RANS) models. Moreover, due to the constant increase in computational resources, this type of simulation complies more and more with industrial interests and design cycles.
In this thesis, the possibilities offered by DES to simulate unsteady crosswind aerodynamics of simple vehicle models in an industrial framework are explored. A large part of the work is devoted to the grid design, which is especially crucial for truthful results from DES. Additional concerns in simulations of unsteady crosswind aerodynamics are highlighted, especially for the resolution of the wind-gust boundary layer profiles. Finally, the transient behaviour of the aerodynamic loads and the flow structures are analyzed for several types of vehicles. The results simulated with DES are promising and the overall agreement with the experimental data available is good, which illustrates a certain reliability in the simulations. In addition, the simulations show that the force coefficients exhibit highly transient behaviour under gusty conditions.
Ground vehicles, both on roads or on rail, are sensitive to crosswinds and the handling, travelling speeds or in some cases, safety can be affected. Full modelling of the crosswind stability of a vehicle is a demanding task as the nature of the disturbance, the wind gust, is complex and the aerodynamics, vehicle dynamics and driver reactions interact with each other.
One of the objectives of this thesis is to assess the aerodynamic response of simplified ground vehicles under sudden strong crosswind disturbances by using an advanced turbulence model. In the aerodynamic simulations, time-dependent boundary data have been used to introduce a deterministic wind gust model into the computational domain.
This thesis covers the implementation of such gust models into Detached-Eddy Simulations (DES) and assesses the overall accuracy. Different types of grids, numerical setups and refinements are considered. Although the overall use of DES is seen as suitable, further investigations can be foreseen on more challenging geometries.
Two families of vehicle models have been studied. The first one, a box-like geometry, has been used to characterize the influence of the radius of curvature and benefited from unsteady experimental data for comparison. The second one, the Windsor model, has been used to understand the impact of the different rear designs. Notably, the different geometries tested have exhibited strong transients in the loads that cannot be represented in pure steady crosswind conditions.
The static coupling between aerodynamics and vehicle dynamics simulations enhances the comparisons of the aerodynamic designs. It also shows that the motion of the centre of pressure, with respect to the locations of the centre of gravity and the neutral steer point, is of prime interest for designing vehicles that are less crosswind sensitive. Recommendations for future work on crosswind sensitivity for ground vehicles are proposed at the end of this thesis. | http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A273712
Continued research work which began with the I Promise I’m a Good’n body of work.
For the past nine months, I have been developing an iterative practice centered around creating and presenting large-scale impressions of stained glass windows. I use media such as graphite, builder’s paper, canvas, and cyanotypes alongside custom fabricated ladder equipment in order to capture the likenesses of the silver stained glass and leaded metal-work which compose the windows of the church where my studio is currently situated. My goal is to approach religious institutions and bring the sacred imagery and geometries of the stained glass panes down from their lofty heights, installing them so that the public can interact directly with them in ways that would normally be forbidden.
Graphite and primer on builder’s paper
Graphite on cotton canvas
Cyanotype on fabric
As a research chemist by trade, I interpolate my analytical laboratory tendencies into this practice through its highly iterative nature. I experiment and develop my process over time as trials are conducted and results are interpreted to inform future steps. Along with an experimental approach to creating, the physical act of maneuvering through such large-scale work requires me to overcome my fears – heights, falling, being overwhelmed by my work, failing a task I set myself.
Oxidation and reduction are the essential pushers and movers behind all living matter. Through this thermodynamic framework, we conjure understanding of the ways in which inanimate matter reacts with energy and its surroundings. On a molecular level, the simple transfer of electrons between species is a generative process responsible for phenomena that range from the microbiological to the galactic in scale. Oxidation ages our cells and transfers nutrition in our body – our breath alone brings it on. Oxidation sustains fires, allows for explosions. At the risk of anthropomorphizing, oxidation seeks, through shifting around energy, to reduce all matter to a state of lower energy. Oxidation brings the energy of our world to a more stable, lower state. Without any other agency, all matter may eventually fall victim to oxidation down to its basest forms.
In the Christian myth, Christ is often referred to as the Light. The Light, as the subject of the sacred images on the stained glass panes through which the cyanotype was exposed, and as the source of ultraviolet radiation responsible for the creation of the image, fades with time. Set in an ornate ecclesiastical frame with carpet inlay and gold leaf trim, the cyanotype is vulnerable to the light, and will undergo reduction to pure white over time. The anonymous disciple in the image is folded over in grief. There is a hopeful parallel here in oxidation. If all things are to be leveled across the universe with time and oxidation, eventually a day will come when the valleys of grief and memory are leveled to absolute sea level. The last knot of energy comes undone, and all matter inanimate and alive will rest in permanent sleep.
In taking cameraless photographs which use the windows as subject and lens simultaneously, the christian imagery is appropriated into a media which can be removed from the lofty heights and contexts of the original stained glass panes. Ecclesiastical stained glass windows are not designed to be seen through, they are designed to control and manipulate light for the purpose of entrancing viewers within. With these three photos displayed as an unframed triptych, the image of the windows captured on the paper transforms the gallery wall into a nullified window itself. With continued exposure to light over the course of months, the cyanotypes will fade to pure white. Here, the slow fading of the cyanotypes over time mirrors the process of forgetting, and will produce a pure white window devoid of the original stained glass images which manipulated the light for the photographs. | https://harrison-wayne.com/index.php/work/experimental-interpretation-of-stained-glass/ |
There have been a lot of cybersecurity events in the news this week, month, and year, but there are some important things to remember regarding the words used. To help sort through what’s relevant versus what’s noise, please keep the following definitions in mind when reading about cybersecurity events:
A Vulnerability is defined by the Computer Security Resource Center as a “weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source.” In other words, a risk exists, but there’s no evidence that bad guys are using it to cause damage in any way. Every piece of software ever written has yet-to-be-detected vulnerabilities, which is why many developers leverage bug bounty programs to have the community identify weaknesses before the bad guys do.
An Exploit is when a bad guy has taken advantage of a Vulnerability in order to cause harm. When identified, it’s critical that patches to protect against exploits are applied as soon as they are available as the risk involved is not hypothetical, but practical.
A Zero-Day event is when a vulnerability is first detected only after a bad actor has already Exploited it to cause damage. These are the most critical events, as damage is already occurring with no fix available at the outset to mitigate the risk of exploitation. This is why events like the current PrintNightmare Zero-day and the Microsoft Exchange Zero-Day earlier this year garner press coverage and Suite3’s attention.
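To make the distinctions concrete, here is a small, hypothetical triage sketch in Python; the event fields, the function, and the urgency wording are illustrative assumptions, not Suite3 tooling or any standard’s formal definitions:

```python
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    weakness_identified: bool  # a Vulnerability exists
    exploited_in_wild: bool    # bad actors are actively abusing it
    patch_available: bool      # a fix existed when exploitation was found

def classify(event: SecurityEvent) -> str:
    if event.exploited_in_wild and not event.patch_available:
        return "zero-day: exploitation found before any fix existed"
    if event.exploited_in_wild:
        return "exploit: practical risk, patch as soon as possible"
    if event.weakness_identified:
        return "vulnerability: hypothetical risk, schedule remediation"
    return "no known issue"

print(classify(SecurityEvent(True, True, False)))  # zero-day
print(classify(SecurityEvent(True, False, True)))  # vulnerability
```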
During a Zero-Day event, the ounce of prevention to avoid a pound of cure may still be inconvenient. As part of PrintNightmare last week, we blocked certain print services according to Microsoft best practice recommendations to reduce the risk of exploitation, but that in turn caused print issues for certain clients with certain printers. In our view, a temporary inconvenience of limited impact is an easy trade in return for greatly reducing the likelihood of a long-term inconvenience of major impact. There are so many cybersecurity risks out of our control that it’s best practice to master what’s in your control in order to reduce risks whenever and wherever possible. | https://www.suite3.com/posts/vulnerabilities-exploits-and-zero-days-oh-my
Writing history is an imaginative act. Few people would deny this, but not everyone agrees on what it means. It doesn’t mean, obviously, that historians may alter or suppress the facts, because that is not being imaginative; it’s being dishonest. The role of imagination in writing history isn’t to make up things that aren’t there; it’s to make sensible the things that are there. When you undertake historical research, two truths that once sounded banal come to seem profound. The first is that your knowledge of the past (apart, occasionally, from a limited visual record and the odd unreliable survivor) comes entirely from written documents. You are almost completely cut off, by a wall of print, from the life you have set out to represent. You can’t observe historical events; you can’t question historical actors; you can’t even know most of what has not been written about. Whatever has been written about therefore takes on an importance which may be spurious. A few lines in a memoir, a snatch of recorded conversation, a letter fortuitously preserved, an event noted in a diary: all become luminous with significance even though these are just the bits that have floated to the surface. The historian clings to them, while somewhere below, the huge submerged wreck of the past sinks silently out of sight.
The second realization that strikes you is, in a way, the opposite of the first: the more material you dredge up, the more bits and pieces you recover, the more elusive the subject becomes. In the case of a historical figure, there is usually a standard biographical interpretation, constructed around a small number of details: diary entries, letters, secondhand anecdotes, putatively autobiographical passages in the published work. Out of these details, a psychological profile is constructed, which, in the circular process that characterizes most biographical enterprise, is then used to interpret the details. It is almost always possible, though, by ranging a little more widely or digging a little deeper, to find details that are inconsistent with the standard interpretation, or that seem to point to a different interpretation, or that have been ignored because they are fragments that don’t support any coherent interpretation. And usually there’s a level of detail below that, and on and on. One instinct you need in doing historical research is knowing when to keep dredging stuff up; another is knowing when to stop.
You stop when you feel that you’ve got it, and this is where imagination matters. The test for a successful history is the same as the test for a successful novel: integrity in motion. It’s not the facts, snapshots of the past, that make a history; it’s the story, the facts run by the eye at the correct speed. Novelists sometimes explain their work by saying that they invent a character, put the character in a situation, and then wait to see what the character will do. History is not different. The historian’s character has to do what the real person has done, of course, but there is an uncanny way in which this can seem to happen almost spontaneously. The “Marx” that the historian has imagined keeps behaving, in every new set of conditions, like Marx. This gives the description of the conditions a plausibility, too: the person fits the time. The world turns beneath the character’s marching feet. The figures and the landscape come to life together, and the chart of their movements makes a continuous motion, a narrative. The past reveals itself to have a plot.
High-level cognitive factors, including self-awareness, are believed to play an important role in human visual perception. The principal aim of this study was to determine whether oscillatory brain rhythms play a role in the neural processes involved in self-monitoring attentional status. To do so we measured cortical activity using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) while participants were asked to self-monitor their internal status, only initiating the presentation of a stimulus when they perceived their attentional focus to be maximal. We employed a hierarchical Bayesian method that uses fMRI results as soft-constrained spatial information to solve the MEG inverse problem, allowing us to estimate cortical currents in the order of millimeters and milliseconds. Our results show that, during self-monitoring of internal status, there was a sustained decrease in power within the 7-13 Hz (alpha) range in the rostral cingulate motor area (rCMA) on the human medial wall, beginning approximately 430 msec after the trial start (p < 0.05, FDR corrected). We also show that gamma-band power (41-47 Hz) within this area was positively correlated with task performance from 40-640 msec after the trial start (r = 0.71, p < 0.05). We conclude: (1) the rCMA is involved in processes governing self-monitoring of internal status; and (2) the qualitative differences between alpha and gamma activity are reflective of their different roles in self-monitoring internal states. We suggest that alpha suppression may reflect a strengthening of top-down interareal connections, while a positive correlation between gamma activity and task performance indicates that gamma may play an important role in guiding visuomotor behavior.
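As a rough illustration of how power in those bands can be quantified, here is a minimal Python sketch using Welch’s method; the sampling rate, the random stand-in signal, and the `band_power` helper are assumptions for illustration, not the authors’ hierarchical Bayesian pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Sum the Welch power spectral density between lo and hi Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]              # frequency bin width
    return float(psd[mask].sum() * df)

fs = 1000.0                               # sampling rate, Hz (assumed)
x = np.random.randn(10 * int(fs))         # stand-in for one cortical current trace

alpha = band_power(x, fs, 7, 13)          # 7-13 Hz band, suppressed in the study
gamma = band_power(x, fs, 41, 47)         # 41-47 Hz band, correlated with performance
print(f"alpha: {alpha:.3g}  gamma: {gamma:.3g}")
```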
In trace fear conditioning a conditional stimulus (CS) predicts the occurrence of the unconditional stimulus (UCS), which is presented after a brief stimulus free period (trace interval)1. Because the CS and UCS do not co-occur temporally, the subject must maintain a representation of that CS during the trace interval. In humans, this type of learning requires awareness of the stimulus contingencies in order to bridge the trace interval2-4. However when a face is used as a CS, subjects can implicitly learn to fear the face even in the absence of explicit awareness*. This suggests that there may be additional neural mechanisms capable of maintaining certain types of "biologically-relevant" stimuli during a brief trace interval. Given that the amygdala is involved in trace conditioning, and is sensitive to faces, it is possible that this structure can maintain a representation of a face CS during a brief trace interval.
It is challenging to understand how the brain can associate an unperceived face with an aversive outcome, even though the two stimuli are separated in time. Furthermore investigations of this phenomenon are made difficult by two specific challenges. First, it is difficult to manipulate the subject's awareness of the visual stimuli. One common way to manipulate visual awareness is to use backward masking. In backward masking, a target stimulus is briefly presented (< 30 msec) and immediately followed by a presentation of an overlapping masking stimulus5. The presentation of the mask renders the target invisible6-8. Second, masking requires very rapid and precise timing making it difficult to investigate neural responses evoked by masked stimuli using many common approaches. Blood-oxygenation level dependent (BOLD) responses resolve at a timescale too slow for this type of methodology, and real time recording techniques like electroencephalography (EEG) and magnetoencephalography (MEG) have difficulties recovering signal from deep sources.
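Because displays update at a fixed refresh rate, sub-30 msec presentations must be realized in whole frames. A minimal sketch, assuming a 60 Hz display (the refresh rate and the helper function are illustrative, not part of the published protocol):

```python
import math

def frames_for(duration_ms: float, refresh_hz: float = 60.0) -> int:
    """Whole frames fitting within the requested duration (at least one)."""
    frame_ms = 1000.0 / refresh_hz
    return max(1, math.floor(duration_ms / frame_ms))

for target_ms in (17, 25, 34):
    n = frames_for(target_ms)
    print(f"{target_ms} ms requested -> {n} frame(s) = {n * 1000 / 60:.1f} ms shown")
```

Flooring keeps the realized duration at or below the requested maximum, which matters when the target must stay brief enough to remain effectively masked.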
However, there have been recent advances in the methods used to localize the neural sources of the MEG signal9-11. By collecting high-resolution MRI images of the subject's brain, it is possible to create a source model based on individual neural anatomy. Using this model to "image" the sources of the MEG signal, it is possible to recover signal from deep subcortical structures, like the amygdala and the hippocampus*.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent signal. We measure behavior so that we can sort correct trials, where the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if subjects do not perform the task correctly and these trials are included in the same analysis as the correct trials, we would introduce trials that do not reflect correct performance. In many cases these errors can themselves be used to correlate brain activity to them. We describe two complementary tasks that are used in our lab to examine the brain during suppression of an automatic response: the stroop1 and anti-saccade tasks. The emotional stroop paradigm instructs participants to either report the superimposed emotional 'word' across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict between what must be said and what is automatically read occurs. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read out a word leads to strong stimulus-response (SR) associations; hence inhibiting these strong SRs is difficult and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue is used to direct attention to a peripheral stimulus location, but then the eye movement is made to the mirror-opposite position. Yet again we measure behavior by recording the eye movements of participants, which allows for the sorting of the behavioral responses into correct and error trials7, which can then be correlated to brain activity. Neuroimaging now allows researchers to measure the different behaviors of correct and error trials that are indicative of different cognitive processes and to pinpoint the different neural networks involved.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual-task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise participants would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine if participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns in the sequence that could be anywhere from 3 items before the target noun position to 3 items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
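A hypothetical sketch of one such trial’s structure (the word pool, target position, and helper function are illustrative assumptions, not the authors’ stimulus code):

```python
import random

def make_trial(word_pool, target, soa_ms=120, seq_len=8, target_pos=4):
    """Build one RSVP word sequence plus the pre/post-target memory probes."""
    fillers = random.sample([w for w in word_pool if w != target], seq_len - 1)
    seq = fillers[:target_pos] + [target] + fillers[target_pos:]
    onsets = [i * soa_ms for i in range(seq_len)]          # one word per SOA
    probes = {"pre_target": seq[target_pos - 1],
              "post_target": seq[target_pos + 1]}
    return list(zip(onsets, seq)), probes

pool = ["chair", "apple", "river", "stone", "cloud", "horse", "piano", "glass", "tiger"]
trial, probes = make_trial(pool, target="tiger", soa_ms=240)
print(trial)   # (onset in ms, word) pairs
print(probes)  # the two words probed by the forced-choice memory tasks
```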
Magnetoencephalography is a technique that detects magnetic fields associated with cortical activity [1]. The electrophysiological activity of the brain generates electric fields - that can be recorded using electroencephalography (EEG) - and their concomitant magnetic fields - detected by MEG. MEG signals are detected by specialized sensors known as superconducting quantum interference devices (SQUIDs). Superconducting sensors require cooling with liquid helium at -270 °C. They are contained inside a vacuum-insulated helmet called a dewar, which is filled with liquid helium. SQUIDs are placed in fixed positions inside the helmet dewar in the helium coolant, and a subject's head is placed inside the helmet dewar for MEG measurements. The helmet dewar must be sized to satisfy opposing constraints. Clearly, it must be large enough to fit most or all of the heads in the population that will be studied. However, the helmet must also be small enough to keep most of the SQUID sensors within range of the tiny cerebral fields that they are to measure. Conventional whole-head MEG systems are designed to accommodate more than 90% of adult heads. However, adult systems are not well suited for measuring brain function in pre-school children, whose heads have a radius several cm smaller than adults'. The KIT-Macquarie Brain Research Laboratory at Macquarie University uses a MEG system custom-sized to fit the heads of pre-school children. This child system has 64 first-order axial gradiometers with a 50 mm baseline [2] and is contained inside a magnetically-shielded room (MSR) together with a conventional adult-sized MEG system [3,4]. There are three main advantages of the customized helmet dewar for studying children. First, the smaller radius of the sensor configuration brings the SQUID sensors into range of the neuromagnetic signals of children's heads. Second, the smaller helmet allows full insertion of a child's head into the dewar. Full insertion is prevented in adult dewar helmets because of the smaller crown-to-shoulder distance in children. These two factors are fundamental in recording brain activity using MEG because neuromagnetic signals attenuate rapidly with distance. Third, the customized child helmet aids in the symmetric positioning of the head and limits the freedom of movement of the child's head within the dewar. When used with a protocol that aligns the requirements of data collection with the motivational and behavioral capacities of children, these features significantly facilitate setup, positioning, and measurement of MEG signals.
Patients with stereo-electroencephalography (SEEG) electrodes, subdural grids, or depth electrode implants have a multitude of electrodes placed in different areas of their brain for the localization of their seizure focus and eloquent areas. After implantation, the patient must remain in the hospital until the pathological area of the brain is found and possibly resected. During this time, these patients offer a unique opportunity to the research community, because any number of behavioral paradigms can be performed to uncover the neural correlates that guide behavior. Here we present a method for recording brain activity from intracranial implants as subjects perform a behavioral task designed to assess decision-making and reward encoding. All electrophysiological data from the intracranial electrodes are recorded during the behavioral task, allowing for the examination of the many brain areas involved in a single function at time scales relevant to behavior. Moreover, and unlike animal studies, human patients can learn a wide variety of behavioral tasks quickly, making it possible to run more than one task, or control tasks, in the same subject. Despite the many advantages of this technique for understanding human brain function, there are also methodological limitations that we discuss, including environmental factors, analgesic effects, time constraints, and recordings from diseased tissue. This method may be easily implemented by any institution that performs intracranial assessments, providing the opportunity to directly examine human brain function during behavior.
The 5-Choice Serial Reaction Time Task: A Task of Attention and Impulse Control for Rodents
Authors: Samuel K. Asinof, Tracie A. Paine.
Institutions: Oberlin College.
This protocol describes the 5-choice serial reaction time task (5CSRTT), an operant-based task used to study attention and impulse control in rodents. Test-day challenges (modifications to the standard task) can be used to systematically tax the neural systems controlling either attention or impulse control. Importantly, these challenges have consistent effects on behavior across laboratories in intact animals and can reveal either enhancements or deficits in cognitive function that are not apparent when rats are only tested on the standard task. The variety of behavioral measures that are collected can be used to determine whether other factors (e.g., sedation, motivational deficits, locomotor impairments) are contributing to changes in performance. The versatility of the 5CSRTT is further enhanced because it is amenable to combination with pharmacological, molecular, and genetic techniques.
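The standard 5CSRTT measures mentioned above reduce to simple ratios over trial outcomes; the sketch below computes them from a hypothetical trial log. The column names and data are invented for illustration, and real operant software exports much richer records.

```python
import pandas as pd

# Hypothetical trial log for one session.
trials = pd.DataFrame({
    "outcome": ["correct", "incorrect", "omission", "premature",
                "correct", "correct", "omission", "correct"],
    "response_latency_s": [0.62, 0.88, None, None, 0.55, 0.71, None, 0.64],
})

n = len(trials)
counts = trials["outcome"].value_counts()
correct, incorrect = counts.get("correct", 0), counts.get("incorrect", 0)

# Commonly reported 5CSRTT measures and their usual interpretation:
accuracy = correct / (correct + incorrect)             # attention
omissions_pct = counts.get("omission", 0) / n * 100    # sedation/motivation
premature_pct = counts.get("premature", 0) / n * 100   # impulse control
mean_latency_s = trials["response_latency_s"].mean()   # processing/motor speed

print(accuracy, omissions_pct, premature_pct, mean_latency_s)
```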
Institutions: Emory University School of Medicine, Brigham and Women's Hospital, and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and other populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, and varied speeds and rhythms. Focus on foot placement, whole-body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango's demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology for disseminating the adapted tango teaching methods to dance instructor trainees and for implementation of adapted tango by the trainees in the community for older adults and individuals with Parkinson's disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, tandem stance, Berg Balance Scale, gait speed, and 30-second chair stand), safety, and fidelity of the program are maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression.
Institutions: New York University, Princeton University.
Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations where one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes and prism goggles.
Best Current Practice for Obtaining High Quality EEG Data During Simultaneous fMRI
Authors: Karen J. Mullinger, Pierluigi Castellone, Richard Bowtell.
Institutions: University of Nottingham , Brain Products GmbH.
Simultaneous EEG-fMRI allows the excellent temporal resolution of EEG to be combined with the high spatial accuracy of fMRI. The data from these two modalities can be combined in a number of ways, but all rely on the acquisition of high quality EEG and fMRI data. EEG data acquired during simultaneous fMRI are affected by several artifacts, including the gradient artifact (due to the changing magnetic field gradients required for fMRI), the pulse artifact (linked to the cardiac cycle), and movement artifacts (resulting from movements in the strong magnetic field of the scanner, and from muscle activity). Post-processing methods for successfully correcting the gradient and pulse artifacts require a number of criteria to be satisfied during data acquisition. Minimizing head motion during EEG-fMRI is also imperative for limiting the generation of artifacts.
Interactions between the radio frequency (RF) pulses required for MRI and the EEG hardware may occur and can cause heating. This is a significant risk only if safety guidelines are not satisfied. Hardware design and set-up, as well as careful selection of which MR sequences are run with the EEG hardware present, must therefore be considered.
The above issues highlight the importance of the choice of the experimental protocol employed when performing a simultaneous EEG-fMRI experiment. Based on previous research we describe an optimal experimental set-up. This provides high quality EEG data during simultaneous fMRI when using commercial EEG and fMRI systems, with safety risks to the subject minimized. We demonstrate this set-up in an EEG-fMRI experiment using a simple visual stimulus. However, much more complex stimuli can be used. Here we show the EEG-fMRI set-up using a Brain Products GmbH (Gilching, Germany) MRplus, 32 channel EEG system in conjunction with a Philips Achieva (Best, Netherlands) 3T MR scanner, although many of the techniques are transferable to other systems.
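The gradient-artifact correction referred to above is usually some variant of average artifact subtraction: because the artifact repeats nearly identically with each volume acquisition, averaging EEG epochs time-locked to the volume onsets yields a template that can be subtracted from the trace. Below is a schematic single-channel version, assuming the EEG clock is synchronized to the scanner and volume-onset markers are available; production pipelines refine this with sliding-window templates, up-sampling for clock jitter, and adaptive noise cancellation, and handle the pulse artifact analogously, time-locked to the ECG.

```python
import numpy as np

def average_artifact_subtraction(eeg, volume_onsets, tr_samples):
    """Schematic gradient-artifact correction for one EEG channel.

    eeg           : 1-D EEG trace sampled synchronously with the scanner
    volume_onsets : sample indices marking each MR volume acquisition
    tr_samples    : number of EEG samples per repetition time (TR)
    """
    # Stack the EEG segments acquired during each volume...
    epochs = np.stack([eeg[o:o + tr_samples] for o in volume_onsets])
    # ...average them to estimate the repeating artifact template...
    template = epochs.mean(axis=0)
    # ...and subtract the template at every occurrence.
    cleaned = eeg.astype(float).copy()
    for o in volume_onsets:
        cleaned[o:o + tr_samples] -= template
    return cleaned
```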
Fruit flies (Drosophila melanogaster) are an established model for both alcohol research and circadian biology. Recently, we showed that the circadian clock modulates alcohol sensitivity, but not the formation of tolerance. Here, we describe our protocol in detail. Alcohol is administered to the flies using the FlyBar. In this setup, saturated alcohol vapor is mixed with humidified air in set proportions and administered to the flies in four tubes simultaneously. Flies are reared under standardized conditions in order to minimize variation between the replicates. Three-day-old flies of different genotypes or treatments are used for the experiments, preferably by matching flies of two different time points (e.g., CT 5 and CT 17), making direct comparisons possible. During the experiment, flies are exposed for 1 hr to the pre-determined percentage of alcohol vapor, and the number of flies that exhibit the Loss of Righting Reflex (LoRR) or sedation is counted every 5 min. The data can be analyzed using three different statistical approaches. The first is to determine the time at which 50% of the flies have lost their righting reflex and use an analysis of variance (ANOVA) to determine whether significant differences exist between time points. The second is to determine the percentage of flies that show LoRR after a specified number of minutes, followed by an ANOVA. The last method is to analyze the whole time series using multivariate statistics. The protocol can also be used for non-circadian experiments or comparisons between genotypes.
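A rough sketch of the first analysis approach (the "ST50", the time at which 50% of flies show LoRR, compared across circadian time points with an ANOVA) might look like the following. The linear interpolation and the synthetic %LoRR curves are our assumptions for illustration, not the published analysis code.

```python
import numpy as np
from scipy.stats import f_oneway

minutes = np.arange(0, 65, 5)       # %LoRR scored every 5 min for 1 hr
rng = np.random.default_rng(0)

def st50(pct_lorr):
    # Time at which 50% of flies show LoRR; assumes the curve rises
    # roughly monotonically over the exposure (true for sedating doses).
    return float(np.interp(50.0, pct_lorr, minutes))

def fake_curve():
    # Hypothetical %LoRR time course for one replicate tube.
    return np.clip(np.linspace(0, 95, minutes.size)
                   + rng.normal(0, 3, minutes.size), 0, 100)

ct05 = [st50(fake_curve()) for _ in range(4)]   # e.g., four tubes at CT 5
ct17 = [st50(fake_curve()) for _ in range(4)]   # and four tubes at CT 17

f_stat, p_val = f_oneway(ct05, ct17)            # do the ST50s differ?
print(f_stat, p_val)
```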
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance of individual subjects from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide the high temporal resolution particularly suitable for investigating the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds at a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field/potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of two simultaneously presented spoken digits based on the cued auditory feature) or that may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources.
Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that relates the expected field patterns caused by dipoles distributed on the cortex to the M/EEG sensors. Next, we will step through the processes for denoising the sensor data from environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
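In MNE-Python (the freely available MNE software named above), the core of this pipeline compresses to a few calls. The sketch below assumes the earlier steps have already produced an averaged evoked response, a forward solution, and a noise covariance; the file names are placeholders, not files from the published study.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Assumes these files were produced earlier in the pipeline
# (FreeSurfer anatomy -> BEM/source space -> forward model).
evoked = mne.read_evokeds("subject-ave.fif", condition=0)
fwd = mne.read_forward_solution("subject-fwd.fif")
noise_cov = mne.read_cov("subject-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov,
                            loose=0.2, depth=0.8)
# lambda2 = 1 / SNR**2; an SNR of 3 is a common default for evoked data.
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")

# stc is a SourceEstimate: one time series per cortical dipole,
# i.e., the "brain movie" described in the text.
stc.save("subject-mne")
```

The resulting source estimate can then be morphed to a common cortical space (e.g., FreeSurfer's fsaverage) for the group-level statistics described above.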
The process by which cerebral perfusion is maintained constant over a wide range of systemic pressures is known as "cerebral autoregulation." Effective dampening of flow against pressure changes occurs over periods as short as ~15 sec and becomes progressively greater over longer time periods. Thus, slower changes in blood pressure are effectively blunted, while faster changes or fluctuations pass through to cerebral blood flow relatively unaffected. The primary difficulty in characterizing the frequency dependence of cerebral autoregulation is the lack of prominent spontaneous fluctuations in arterial pressure around the frequencies of interest (below ~0.07 Hz, i.e., cycles longer than ~15 sec). Oscillatory lower body negative pressure (OLBNP) can be employed to generate oscillations in central venous return that result in arterial pressure fluctuations at the frequency of OLBNP. Moreover, Projection Pursuit Regression (PPR) provides a nonparametric method to characterize nonlinear relations inherent in the system without a priori assumptions, and it reveals the characteristic nonlinearity of cerebral autoregulation. OLBNP generates larger fluctuations in arterial pressure as the frequency of negative pressure oscillations becomes lower; however, fluctuations in cerebral blood flow become progressively smaller. Hence, the PPR shows an increasingly prominent autoregulatory region at OLBNP frequencies of 0.05 Hz and below (cycle lengths of 20 sec and longer). The goal of this approach is to allow laboratory-based determination of the characteristic nonlinear relationship between pressure and cerebral flow, and it could provide unique insight into integrated cerebrovascular control as well as into physiological alterations underlying impaired cerebral autoregulation (e.g., after traumatic brain injury, stroke, etc.).
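PPR itself has no one-line library implementation, but the linear first-pass version of this analysis (how much pressure fluctuation at the OLBNP frequency transfers into cerebral flow) can be sketched with cross-spectral estimates. The synthetic pressure and flow signals below stand in for real beat-to-beat recordings; everything here is illustrative rather than the authors' method.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 10.0                             # Hz; resampled beat-to-beat signals
t = np.arange(0, 300, 1 / fs)         # 5 min of data
rng = np.random.default_rng(1)

# Synthetic stand-ins: arterial pressure (abp) oscillating at the 0.05 Hz
# OLBNP frequency, and cerebral flow velocity (cbfv) responding weakly.
abp = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(t.size)
cbfv = (0.4 * np.sin(2 * np.pi * 0.05 * t + 0.8)
        + 0.3 * rng.standard_normal(t.size))

f, Pxx = welch(abp, fs=fs, nperseg=1024)
_, Pxy = csd(abp, cbfv, fs=fs, nperseg=1024)
gain = np.abs(Pxy) / Pxx              # flow change per unit pressure change
i = np.argmin(np.abs(f - 0.05))       # bin nearest the OLBNP frequency
print(f"gain at {f[i]:.3f} Hz: {gain[i]:.2f}")  # low gain = effective autoregulation
```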
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here, transferring a paradigm (the visual oddball task) with a long history of behavioral and electroencephalography (EEG) experiments to a functional magnetic resonance imaging (fMRI) experiment is considered. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation is very much dependent on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration, and it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation, and analysis for the effects of interest.
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Using the protocol presented in the video as an example, the accompanying article discusses methodological issues surrounding the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Investigating the Neural Mechanisms of Aware and Unaware Fear Memory with fMRI
Authors: David C. Knight, Kimberly H. Wood.
Institutions: University of Alabama at Birmingham.
Pavlovian fear conditioning is often used in combination with functional magnetic resonance imaging (fMRI) in humans to investigate the neural substrates of associative learning [1-5]. In these studies, it is important to provide behavioral evidence of conditioning to verify that differences in brain activity are learning-related and correlated with human behavior.
Fear conditioning studies often monitor autonomic responses (e.g., skin conductance response, SCR) as an index of learning and memory [6-8]. In addition, other behavioral measures can provide valuable information about the learning process and/or other cognitive functions that influence conditioning. For example, the impact unconditioned stimulus (UCS) expectancies have on the expression of the conditioned response (CR) and unconditioned response (UCR) has been a topic of interest in several recent studies [9-14]. SCR and UCS expectancy measures have recently been used in conjunction with fMRI to investigate the neural substrates of aware and unaware fear learning and memory processes [15]. Although these cognitive processes can be evaluated to some degree following the conditioning session, post-conditioning assessments cannot measure expectations on a trial-to-trial basis and are susceptible to interference and forgetting, as well as other factors that may distort results [16,17].
Monitoring autonomic and behavioral responses simultaneously with fMRI provides a mechanism by which the neural substrates that mediate complex relationships between cognitive processes and behavioral/autonomic responses can be assessed. However, monitoring autonomic and behavioral responses in the MRI environment poses a number of practical problems. Specifically, 1) standard behavioral and physiological monitoring equipment is constructed of ferrous material that cannot be safely used near the MRI scanner, 2) when this equipment is placed outside of the MRI scanning chamber, the cables projecting to the subject can carry RF noise that produces artifacts in brain images, 3) artifacts can be produced within the skin conductance signal by switching gradients during scanning, 4) the fMRI signal produced by the motor demands of behavioral responses may need to be distinguished from activity related to the cognitive processes of interest. Each of these issues can be resolved with modifications to the setup of physiological monitoring equipment and additional data analysis procedures. Here we present a methodology to simultaneously monitor autonomic and behavioral responses during fMRI, and demonstrate the use of these methods to investigate aware and unaware memory processes during fear conditioning.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD).
Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions.
Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle, elicited by brief, intense bursts of acoustic white noise (i.e., "startle probes"). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock, relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues, whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of negative affect and of distinct emotional states (fear, anxiety) for use in research on psychopathology, substance use/abuse, and affective science broadly. As such, it has been used extensively by clinical scientists interested in the etiology of psychopathology and by affective scientists interested in individual differences in emotion.
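Numerically, the potentiation scores described here are just differences of mean blink magnitudes between threat and matched no-threat cues. A minimal sketch with invented EMG values follows; real pipelines add baseline correction and within-subject standardization.

```python
import numpy as np

# Hypothetical peak EMG blink magnitudes (microvolts) per startle probe.
threat_certain   = np.array([48.2, 55.1, 60.3, 52.8])  # 100% shock cues
threat_uncertain = np.array([40.5, 44.9, 43.2, 47.1])  # 20% shock cues
no_threat        = np.array([30.1, 28.7, 33.4, 31.0])  # matched safe cues

fear_potentiation    = threat_certain.mean() - no_threat.mean()     # "fear"
anxiety_potentiation = threat_uncertain.mean() - no_threat.mean()   # "anxiety"
print(fear_potentiation, anxiety_potentiation)
```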
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e., voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e., comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
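For reference, the fractional anisotropy (FA) on which these analyses rest is computed per voxel from the three eigenvalues of the diffusion tensor; the function below is a direct transcription of the standard formula, with an invented eigenvalue triple as input.

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """FA of one voxel from its three diffusion-tensor eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                        # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den        # 0 = isotropic, -> 1 = anisotropic

# An elongated tensor typical of coherent white matter gives FA near 0.8:
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))
```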
To examine the neural basis of the blood oxygenation level dependent (BOLD) magnetic resonance imaging (MRI) signal, we have developed a rodent model in which functional MRI data and in vivo intracortical recording can be performed simultaneously. The combination of MRI and electrical recording is technically challenging because the electrodes used for recording distort the MRI images and the MRI acquisition induces noise in the electrical recording. To minimize the mutual interference of the two modalities, glass microelectrodes were used rather than metal and a noise removal algorithm was implemented for the electrophysiology data. In our studies, two microelectrodes were separately implanted in bilateral primary somatosensory cortices (SI) of the rat and fixed in place. One coronal slice covering the electrode tips was selected for functional MRI. Electrode shafts and fixation positions were not included in the image slice to avoid imaging artifacts. The removed scalp was replaced with toothpaste to reduce susceptibility mismatch and prevent Gibbs ringing artifacts in the images. The artifact structure induced in the electrical recordings by the rapidly-switching magnetic fields during image acquisition was characterized by averaging all cycles of scans for each run. The noise structure during imaging was then subtracted from original recordings. The denoised time courses were then used for further analysis in combination with the fMRI data. As an example, the simultaneous acquisition was used to determine the relationship between spontaneous fMRI BOLD signals and band-limited intracortical electrical activity. Simultaneous fMRI and electrophysiological recording in the rodent will provide a platform for many exciting applications in neuroscience in addition to elucidating the relationship between the fMRI BOLD signal and neuronal activity.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate, and to determine the temporal evolution of activity in, brain areas involved in the processing of simple sensory stimuli. We will use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization and for deriving the tissue boundaries needed for forward modeling, as well as the cortical location and orientation constraints for the minimum-norm estimates.
Chemical Synthesis and Reactions of Thiophene: Thiophene belongs to a class of heterocyclic compounds containing a five-membered ring with one sulfur atom as the heteroatom. Thiophene is a colorless liquid with a boiling point of 84 °C. It was first isolated, as an impurity in commercial benzene, by Victor Meyer in 1882. It is a π-excessive aromatic heterocycle.
Chemical Synthesis of Thiophene
(1) Paal-Knorr Synthesis: In this method, 1,4-dicarbonyl compounds are heated with phosphorus pentasulfide (a source of sulfur) to give thiophenes.
The basic mechanism of this synthetic procedure involves cyclizing condensation of 1,4-diketones (a) with a primary amine (pyrrole synthesis), (b) with a sulfur source (thiophene synthesis), or (c) by dehydration of the diketone (furan synthesis). Phosphorus pentasulfide or bis(trimethylsilyl) sulfide acts as both a sulfurizing and a dehydrating agent. Hydrogen sulfide in the presence of an acid catalyst is also effective.
(2) Hinsberg Synthesis: Two consecutive aldol condensations between a 1,2-dicarbonyl compound and diethyl thiodiacetate in the presence of a strong base give thiophene derivatives.
(3) Fiesselmann Thiophene Synthesis: A base-catalyzed condensation of thioglycolic acid (or its esters) with α,β-acetylenic esters gives 3-hydroxy-2-thiophenecarboxylic acid derivatives.
(4) Gewald Aminothiophene Synthesis: A base-catalyzed condensation of a ketone with an activated nitrile (such as an α-cyanoester) forms an olefin, which then cyclizes with elemental sulfur to give 2-aminothiophenes.
(5) Industrial Methods:
(i) Thiophene can be synthesized on an industrial scale by heating n-butane with sulfur at high temperatures. The sulfur first dehydrogenates the butane and then adds across the resulting alkene; further dehydrogenation aromatizes the ring.
(ii) A mixture of acetylene and hydrogen sulfide is passed through a tube containing alumina at 400 °C.
(iii) A mixture of sodium succinate and phosphorus trisulfide is heated at 200 °C to give thiophene.
Chemical Reactions of Thiophene
Thiophene is somewhat more nucleophilic than benzene: electron density accumulates on the C2 and C5 atoms, while the sulfur bears a partial positive charge. Hence, thiophene readily undergoes electrophilic substitution, preferentially at the C2 and C5 positions.
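These electronic claims can be explored computationally. The sketch below uses the RDKit cheminformatics library to confirm thiophene's aromaticity and to print crude Gasteiger partial charges per atom; whether this simple empirical charge model reproduces the stated C2/C5 pattern is not guaranteed, so treat the output as illustrative only.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("c1ccsc1")   # thiophene (aromatic SMILES)
AllChem.ComputeGasteigerCharges(mol)  # quick empirical partial charges

for atom in mol.GetAtoms():
    print(atom.GetIdx(), atom.GetSymbol(),
          "aromatic" if atom.GetIsAromatic() else "non-aromatic",
          round(float(atom.GetProp("_GasteigerCharge")), 3))
```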
(1) Protonation: Thiophene is fairly stable toward acids; however, very strong acids, such as hot phosphoric acid, convert it into a thiophene trimer.
(2) Oxidation: The thiophene ring is stable toward most oxidizing agents, although side chains can be oxidized to carboxylic acid groups. When heated with nitric acid, the ring breaks down to maleic acid and oxalic acid.
(3) Electrophilic Substitution: The reactivity order for electrophilic substitution is: pyrrole > furan > thiophene > benzene. The preferred site of attack in thiophene is the C2 position.
(4) Nucleophilic Substitution: Thiophenes bearing electron-withdrawing substituents are much more reactive toward nucleophilic substitution.
(5) Reduction:
Applications in Drug Synthesis
Thiophene derivatives possess remarkable activities, including antibacterial, anti-inflammatory, anti-anxiety, antipsychotic, antiarrhythmic, and anticancer effects. Examples include lornoxicam (a thiophene analog of piroxicam), pyrantel (antiparasitic), raltitrexed (anticancer), cephalothin (antimicrobial), suprofen (anti-inflammatory), ticrynafen (antihypertensive), clotiazepam (anti-anxiety), and ticlopidine (a platelet aggregation inhibitor). | https://solutionpharmacy.in/chemical-synthesis-and-reactions-of-thiophene/ |
SPE Trondheim held a Distinguished Lecture event on Monday, 15 February 2016, at the Department of Petroleum Engineering and Applied Geophysics, NTNU. Dr. Xiao-Hui Wu, Senior Engineering Associate at ExxonMobil Upstream Research Company, gave a presentation titled "How to Predict Reservoir Performance with Subsurface Uncertainty at Multiple Scales".
Dr. Wu identified subsurface uncertainty as one of the main challenges in using reservoir models to predict field performance for development and depletion planning purposes. He explained: "The importance of reliable characterization of subsurface uncertainty and its impact on reservoir performance predictions is increasingly recognized as essential to robust decision making in the upstream industry, which is especially true for large projects in complex geologic settings. However, despite recent advances in reservoir modeling and simulation, reliable quantification of the impact of subsurface uncertainty remains difficult in practice. Many factors lead to this state of affairs; technically, a fundamental difficulty is that reservoir heterogeneity at multiple scales may have a strong effect on fluid flows."
Dr. Wu's lecture included an analysis of the challenge and possible resolutions. "Indeed, relying on computing power alone may not address the challenge," said Dr. Wu. "Instead, we must look at reservoir modeling and performance prediction holistically, from modeling objectives to appropriate techniques of incorporating reservoir heterogeneity into the models."
He presented a goal-driven, data-driven approach to reservoir modeling, along with the theoretical reasoning and numerical evidence behind it, including real field examples. He showed that the proposed approach is motivated by the practical limitations inherent in numerical approximations of Darcy flow equations, as well as by how fluid flow responds to reservoir heterogeneity. "The one idea that participants of this lecture should take away is that appropriate parameterization of multi-scale reservoir heterogeneity, tailored to the business questions at hand and the available data, is essential for addressing the challenge of subsurface uncertainty," said Dr. Wu.
Xiao-Hui Wu joined ExxonMobil Upstream Research Company in 1997. His research experience covers geologic modeling, unstructured gridding, upscaling, reduced-order modeling, and uncertainty quantification. He is a Senior Earth Modeling Advisor in the Computational Science Function. Xiao-Hui received his Ph.D. in Mechanical Engineering from the University of Tennessee and worked as a postdoc in Applied Mathematics at Caltech before joining ExxonMobil. He is a member of SPE and SIAM and a technical editor/reviewer for the SPE Journal, the Journal of Computational Physics, and Multiscale Modeling and Simulation. He has served on the program committees of several conferences, including the Reservoir Simulation Symposium. | http://trondheim.spe.org/blogs/bahador-najafiazar/2016/02/16/dr-xiao-hui-wu-explains-goal-driven-vs-data-driven-reservoir-modeling |
Human Resources Overview
The human resources policy at De Dietrich Process Systems is based on a simple commitment: we always act with integrity and with respect for the highest principles of both ethics and quality.
OUR ETHICAL PRINCIPLES:
Our ethical principles are listening and respect, setting an example and transparency. They define the way in which we live together; they fashion our culture, build our reputation and play a part in wellbeing at work. It is in a daily context that these ethical principles make the most sense. Whether in working together daily, or in exchanges with our clients, these ethical principles apply naturally and allow us to continue as a Group that inspires confidence.
- Listening and respect. Providing proof of openness and attention, avoiding prejudices, listening with empathy and recognising the ideas of others in order to provide the appropriate response; accepting that others are different, while insisting on respect for the rules, processes and reasoning laid down by the company.
- Setting an example. Being punctual, reactive and sensitive to others, having a sense of responsibility, honouring commitments and respecting facts are all qualities expected from each worker, in order to establish their legitimacy, instill confidence and encourage performance and wellbeing at work.
- Transparency. We favour open, regular, accurate and transparent communication. To respect facts is to keep a level of objectivity and intellectual honesty, beyond mere opinions and privileges. It is to dare to acknowledge the existence of a problem and to recognise the reality of its impact, even when the solution appears to be out of reach.
By joining De Dietrich Process Systems you will become part of a human-scale group strongly focused on innovation and development; a company that has endured through more than 330 years of history thanks to its capacity to adapt and anticipate. | https://www.dedietrich.com/en/careers/human-resources-overview |
Among the many vital roles that sleep plays in our lives, our nightly rest may give us the chance to take out the cerebral trash, says a new study.
No, we're not talking about some kind of Ambien-induced sleep-housework. We're talking about the process by which the brain refreshes itself by removing the buildup of mental metabolites such as beta-amyloid and tau -- the byproducts, if you will, of a day's cogitation.
Left to fester on the sidewalks of our brains, these byproducts of everyday mental activity can gum up the works in a hurry. The hum of electrical signals across synapses slows, and neurons can give up and die in the foul environment of unmanaged neural trash.
Accumulations in the brain of tau and beta-amyloid are hallmarks of certain dementias, including Alzheimer's Disease. And perhaps not so coincidentally, sleep problems often precede the onset of obvious dementia symptoms, and are a continuing problem for those afflicted with such diseases as Alzheimer's.
But although brain cells burn up a vast amount of fuel and are highly sensitive to a buildup in their own metabolites, the brain has a trash-removal process that is far less straightforward than that by which wastes are removed from the rest of the body.
The lymph system collects metabolites from tissues throughout the body and dumps them into the bloodstream, where they're carried to the liver for breakdown and removal. The brain's metabolic waste concentrates in interstitial fluid present in all corners of the brain. A second slurry -- cerebrospinal fluid -- circulates throughout the brain, and where the two fluids flow together, the metabolic byproducts are carried away by the cerebrospinal fluid.
In a new study, scientists from the University of Rochester Medical Center and New York University found that the brains of mice -- whether sleeping or anesthetized -- showed more activity and volume at the "transfer stations," where interstitial and cerebrospinal fluid meet, than did the brains of mice that were awake and active. The result was that by the end of a sleep period -- around early evening -- mouse brains had their lowest concentration of neural refuse of the day. By the time the mice were ready to sleep again, those concentrations had reached their peak.
It wasn't just the mouse circadian schedule that initiated the trash removal: Even when researchers used the powerful sedative ketamine to put the mice to sleep, they saw evidence of a sudden increase in traffic at the brain's transfer stations.
Noting the link between sleep deprivation or disruption and neurodegenerative disease, the authors suggest that neural trash removal must be one of sleep's major benefits. Indeed, they surmised, it could even be that the buildup of brain refuse may be one of the cues that drives us to bed, and that an empty trash bin may help signal us to wake and initiate another day of mental activity and its inevitable byproduct, brain trash.
| http://www.baltimoresun.com/health/la-sci-sleep-brain-trash-20131017-story.html |
5 Plantigrade Animals
Plantigrade animals are those that walk by supporting themselves on the full sole of their feet, like human beings. Believe it or not, we’re talking about a relatively small number of species, since most animals are digitigrade, meaning they support themselves only on their toes to walk. See some examples in this article.
What are plantigrade animals?
Within this select group, which includes people, we find bears, coatis, badgers, and primates. All of them walk with all four feet fully supported on the ground, and some can stand on just their two hind legs. These examples stand out among plantigrade animals:
1. Bears
The whole Ursidae family is plantigrade, meaning bears move with a 'heavy' gait supported by the full soles of their feet. Although they walk on four legs, they sometimes stand on their hind legs (for example, to look more threatening or to reach fruit on higher branches), and they are also able to walk upright for short distances.
In most cases, bears are large animals (they can weigh up to 1600 pounds and stand ten feet tall), and their ears and eyes are small compared to the rest of their body. Almost all are omnivorous, with the exception of the polar bear, and they live in forests and wooded areas.
2. Coatis
The coati—or Nasua to give it its scientific name—is a small mammal with a long tail and snout. Native to the Americas, this animal prefers warm and temperate ecosystems with dense forests. They live in groups of up to 20 individuals and are very agile as they move among the trees; they support themselves on the full sole of their feet to move around on land.
Their short limbs end in strong nails which they use to dig for water in the ground; in addition, their pointed snouts are of great use when looking for food. Their coats can be brown, reddish or black, and they have striped tails and black faces.
3. Raccoons
Also known as 'trash pandas', raccoons make up another genus of the plantigrade group. Their limbs have five long, agile fingers that they use to orient themselves and to recognize danger. Interestingly, they sometimes sit on their hindquarters like bears, for example while eating or resting.
Raccoons live near rivers and in wooded areas, and they’re very skilled with their front legs, both when hunting and holding food. They’ll eat frogs, fruit, garbage… whatever they can find. Also, they’re nocturnal and they’re known for their gray coats, striped tail, and black and white snouts.
4. Wolverines
Also called the 'glutton', the wolverine is another plantigrade animal; wolverines are similar to bears but smaller, and much more ferocious than they appear. They live in the forests of Canada, Alaska, and Russia, including Siberia, and there are no subspecies.
They’re solitary creatures and they stay on the move both during the day and at night, except during their reproduction and breeding season. The gestation period in the females is quite long. As for their diet, they mainly feed on rodents, insects, larvae, berries, seeds, birds, and eggs.
5. Badgers
These are medium-sized mammals that live in Europe, America, and Asia. The best-known is the European badger, which has short, strong, highly developed legs that allow it to walk fully supported on the soles of its feet without any problem.
In addition, badgers are characterized by their long snouts which they use to dig down into the ground. Their coats are dark gray and their faces have black and white stripes. They’re also omnivorous, nocturnal, and live in territorial clans. | https://myanimals.com/animals/wild-animals-animals/mammals/5-plantigrade-animals/ |
The median is the middle value in a group of numbers ranked in order of size.
The median is that value of the variate which divides the total frequency into two halves. In other words, it is the number in a range of scores that falls exactly in the middle so that 50% of the scores are above and 50% are below.
For example, to find the median of three numbers, 4, 2 and 8 (see the code sketch after this list):
- put them in ascending order, i.e. 2, 4 and 8;
- the middle number is 4: the median is 4. | https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Glossary:Median |
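In code, the same computation is available in Python's standard library, which also handles the even-count case by averaging the two middle values:

```python
import statistics

print(statistics.median([4, 2, 8]))      # sorts to [2, 4, 8] -> 4
print(statistics.median([4, 2, 8, 10]))  # even count -> mean of 4 and 8: 6.0
```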
After a period of general decline, central cities are on the rise again. This surge of people and businesses choosing to take root in city centers reflects a shift back to the preference for a lively urban lifestyle, ease of access to transit, and celebration of local arts and culture. Cities are looking for opportunities to capitalize on this changing dynamic.
It was not very long ago that a company could decide where it wanted to be located and employees would follow. This contributed to a period of general decline, as central cities witnessed an exodus of companies moving their headquarters away from city centers and into the more "family-friendly" and "affordable" suburbs.
There is a new trend: a growing number of employees, millennials in particular, decide where they want to live first and then find jobs, and companies are following the talent. Many of these employees desire to live in active, walkable, urban environments with housing, jobs, transit, and shopping in close proximity. A number of innovation-driven businesses such as Airbnb, Twitter, Pinterest, Salesforce, Expedia, and Amazon have sought out central cities to attract this new generation of highly skilled workers.
In response to this changing dynamic, many cities are working hard to enhance amenities, attract housing, and draw new and relocated businesses. Development in central city locations can, however, be contentious, complicated, and costly. ESA has been teaming with our urban communities to address these challenges and help them position themselves in the 21st-century economy.
Facilitating Workforce Housing in Sacramento
In 2015, the former mayor of Sacramento launched the Downtown Housing Initiative to bring 10,000 places to live in downtown Sacramento within 10 years, a kick-start to achieving the General Plan housing goal of 23,000 total units within the "Central City" by 2035. As in many communities, housing has been challenging to build in Sacramento's Central City due to a variety of factors, including higher land prices than in surrounding areas, limiting regulations, lengthy development review processes, and infrastructure constraints.
In June 2016, ESA teamed with the City of Sacramento to prepare the Central City Specific Plan (CCSP) and Environmental Impact Report (EIR) that establishes a vision, a policy framework, and actions to guide development and infrastructure decisions in the next 20 years. The CCSP incorporates a variety of amenities to attract residents, businesses, and visitors, including the new Streetcar transit system, improvements to bike lanes, pedestrian linkages, and inviting public art. The plan also delineates a Special Planning District to allow for higher building heights and greater densities within one-half mile of a transit station, and sets a goal for 25 percent of new housing to be affordable for lower income households.
To meet their goals, the City incentivized development by reducing open space and parking requirements, and the EIR enables California Environmental Quality Act (CEQA) streamlining for development within the plan area, making the environmental review process more efficient. In addition to facilitating housing development in the Sacramento core, the City has planned for the long-term development of a vibrant community that is complete with housing and jobs as well as access to entertainment, art and culture, recreational facilities, and parks.
Providing Places for Jobs in San Francisco
ESA has been working closely with the City and County of San Francisco to prepare the EIR for the Central SoMa Plan. The vision of the plan is to create a dynamic, mixed-use neighborhood along the southern portion of the Central Subway rail corridor. Unlike other area plans developed and implemented in San Francisco’s eastern neighborhoods, the Central SoMa Plan is focused on carving out opportunities to meet pent-up demand for office space in the city and the Bay Area in general. As San Francisco’s financial district is nearing buildout and few new office development opportunities exist elsewhere, the City is looking to Central SoMa as an area ripe for the creation of a truly sustainable neighborhood that fosters job opportunities where people live.
Encompassing what was formerly a low-rise industrial area, the Central SoMa Plan seeks to introduce new uses, building sizes and heights, amenities, and improvements in a transit-rich area to support the emerging high-tech industry. Equally important is the protection of the community’s industrial and arts uses and providing housing to serve the burgeoning workforce. In response to this, the Central SoMa Plan will add roughly 63,600 new jobs to the plan area by 2040, along with 14,400 new market-rate and affordable residential units accommodating 25,500 new residents. In addition, new development allowed under the plan will provide significant amenities and public benefits, including substantial transit investments, open space and recreation opportunities, arts and creative spaces, school funding, and historic preservation funding to support expanded growth and enhance quality of life.
At the heart of the Central SoMa Plan is the desire to introduce millions of square feet of employment, residential, and commercial space to help relieve the housing affordability crisis and to reduce greenhouse gas emissions by locating people near employment opportunities and convenient transit services. | https://esassoc.com/news-item/central-cities-on-the-rise-attracting-housing-and-jobs/ |
Climate Change Is Altering the Future of Our Food System
When people talk about climate change, they are usually referencing the recent changes in average temperature, precipitation, and frequency of natural disasters as a result of an abnormally rapid increase in the average temperature of the earth’s surface.
Over Earth's history, the climate has warmed and cooled many times, but the current change is happening unusually quickly, is driven by human activity, and is not being offset by natural cooling. The disastrous effects are already being felt worldwide.
One human activity that is intertwined with climate change is farming. Humanity needs access to healthy foods, especially ones that aren't contaminated with pesticides and other chemicals. But some conventional farming methods contribute to the greenhouse effect and, most significantly, are hindered by the effects of climate change, such as droughts, excessive rain, and extreme temperatures.
While more warm weather in some places may sound like a good thing for agriculture, the negative impacts of climate change will outweigh any beneficial ones. Not only do some conventional farming practices contribute to climate change, but they also become less productive as the climate changes. More frequent and stronger natural disasters, including droughts, floods, forest fires, and hurricanes, damage crops and reduce farmers' ability to grow food.
Changing growing seasons and habitat ranges make it harder to grow crops because certain crops require specific climates. For example, in the United States, extremes in precipitation, through decreased freshwater supply in the southwest and increased flooding in the northeast, threaten crop productivity. Indirect impacts make agricultural production even more difficult: pests, diseases, and invasive plants will all increase in abundance, and rising temperatures further decrease both the quality and the quantity of the food produced.
Changes in the ozone layer and an increase in greenhouse gases will continue to impact the future of our food systems. Food insecurity is a global problem that will be intensified as conventional farming becomes less productive: supply chains will be stressed, food availability will fall, and food will become more expensive; in addition, more pesticides and other chemicals may be needed to keep produce and other foods healthy.
Current farming practices depend on reliable precipitation, predictable seasonal changes, and known temperatures, and they will have to adapt as global warming changes these previously reliable conditions. One potential solution, vertical farming, eliminates the reliance on a variable climate by moving the agricultural production of specialty crops indoors. To stabilize our food supply, and to be able to increase it as our population continues to grow, we need to invest more in controlled ways of growing our food. | https://blog.enn.com/climate-change-is-altering-the-future-of-our-food-system/
MODERN SLAVERY ACT
This statement aims to ensure that XPGroup Holdings operates in a way that is mindful of the obligations of section 54 of the Modern Slavery Act 2015 that are applicable to organisations with a turnover of £35,000,000 or more, notwithstanding that XPGroup Holdings is not of sufficient size to be bound by the terms of that Act.
XPGroup Holdings is committed to ensuring that all of its business operations are free from involvement with slavery or human trafficking.
Slavery is a crime and a violation of human rights. It may take different forms, including slavery, servitude, forced and compulsory labour and human trafficking, all of which deprive an individual of his or her liberty in order to exploit them for personal or commercial gain. XPGroup Holdings has a zero-tolerance approach to modern slavery and is committed to acting ethically and with integrity in all our business dealings and relationships and to implementing and enforcing effective systems and controls to ensure modern slavery is not taking place within our own business or our supply chains.
We expect the same high standards from all of our contractors, suppliers and other business partners. As part of our contracting processes, in the coming year we will ask our suppliers to confirm that they are not complicit in the use of forced, compulsory or trafficked labour, or of anyone held in slavery or servitude, and we expect our suppliers to hold their own suppliers to the same high standards. We are committed to the highest ethical standards of business and to ensuring there is no slavery or human trafficking in any part of our business or supply chain.
XPGroup Holdings is committed to:
- Ensuring that slavery and human trafficking is considered and addressed in our approach to corporate social responsibility
- Ensuring that any concerns about slavery or human trafficking can be raised through our whistleblowing procedure
- Carrying out regular audits to ensure that all our employees are paid at least the National Minimum Wage and have the right to work in the UK
- Ensuring that our commercial agreements reflect our expectation that our suppliers operate in accordance with the Modern Slavery Act 2015, encouraging their suppliers and sub-contractors also to operate in accordance with the Act
- Identifying and addressing any areas of high risk in our supply chain
- Providing our employees with sufficient information and training as appropriate to ensure an understanding of this policy and the importance of XPGroup Holdings Ltd’s compliance with it.
The Board of Directors has overall responsibility for ensuring compliance with our legal and ethical obligations, and for ensuring that our employees comply with, and are aware of, our Anti-Slavery and Human Trafficking Policy.
Management at all levels of XPGroup Holdings are responsible for ensuring that those reporting to them understand and comply with this policy. Any employee who breaches this policy will face disciplinary action.
We reserve the right to terminate our relationship with other businesses, contractors or organisations working on our behalf if they breach this policy.
The Board of Directors will review the policy on a regular basis to ensure that it remains fit for purpose in supporting our aim of ensuring that our business is free from involvement with slavery or human trafficking. | https://xpauto.co/modern-slavery-act/ |
Renal Transplant Duplex
A renal or kidney transplant is a procedure in which a healthy kidney is transplanted into your body to treat kidney failure. The procedure involves precisely suturing the blood vessels of the donor kidney to the recipient's blood supply. After this complex transplantation procedure, careful post-operative monitoring for possible complications is imperative. Optimal blood supply to the transplanted kidney is important for the survival of the organ and can be monitored by a duplex ultrasound procedure. The procedure uses sound waves to produce images of the flow of blood through the vessels. Duplex ultrasound combines regular ultrasound (which produces images of the organs) and Doppler ultrasound (which produces images of blood flow through the blood vessels) to view abnormalities in the blood vessels that affect the flow of blood.
During the procedure, you will be asked to lie on your back and a gel is applied to your abdomen. Your doctor passes sound waves through a hand-held transducer, which is moved across your abdomen. The waves bounce off the blood moving in the vessels and are received by the transducer, which converts them into electric signals and images that are displayed on a monitor. By examining the images, your doctor can detect any loss of blood supply to a section of the kidney or rejection of the organ. The procedure takes approximately 20 minutes and does not involve any risks. | https://www.berwickintegratedcare.com.au/renal-transplant-duplex-berwick-integrated-care.html
Transport Ministry accepts recommendations made by panel, including not requiring licences for cyclists
Cyclists caught flouting traffic rules will have to pay a $150 fine from Jan 1 next year, up from $75 now.
The composition fine will apply to those who break existing rules while on the road, including not stopping at red lights and cycling on expressways.
It will also apply under a new rule that caps the size of cycling groups at five cyclists in a single file or 10 cyclists when riding two abreast from Jan 1 next year.
The Ministry of Transport (MOT) announced the increased fine yesterday, after it accepted all the recommendations made by the Active Mobility Advisory Panel on measures to improve road safety.
In its report submitted to MOT on Oct 1, the panel said capping cycling groups at a maximum length of five bicycles will ensure the space that they occupy on the roads is similar to that of a bus.
RECOMMENDATIONS
The panel also proposed that the Government not require cyclists to get licensed or to have bicycles registered at this juncture. The panel had made several other recommendations, such as introducing guidelines to get cycling groups to keep a distance of about 30m from one another on roads. It called for a guideline for motorists to keep a minimum distance of 1.5m when passing cyclists.
In addition, the panel - which was tasked by the Government to look into regulations for on-road cycling after a debate erupted online in April over rule-breaking cyclists - also said cyclists should be strongly encouraged to get third-party liability insurance.
MOT said it will step up enforcement against errant motorists and on-road cyclists.
For more serious cases, a cyclist may be fined up to $1,000 as well as jailed for up to three months for the first offence.
MOT said the Government will continue to partner stakeholders in its public education and outreach efforts, to raise public awareness and enhance clarity of new rules and guidelines.
During a virtual interview yesterday, Senior Minister of State for Transport Chee Hong Tat said the Land Transport Authority has taken enforcement action against more than 500 cyclists who flouted rules on roads this year.
On whether introducing only one new rule - on cycling group sizes - would be enough to improve road safety, Mr Chee noted that the lack of compliance is sometimes due to people not being aware.
DIFFICULT
Introducing more rules would make compliance more difficult, he said. "Because when the rules are too complex... that will not help the outcome."
Ms Megan Kinder, president of cycling club Anza Cycling, said the cap on group sizes should be a guideline, rather than a rule warranting a fine when breached.
On the increased fines, Ms Kinder said: "The increased penalty for errant cyclists may work if there is on-the-spot enforcement. But at the same time, there should be equal weight placed on penalties, deterrents and enforcement for errant motorists - particularly in regard to minimum passing requirements, which need to be enshrined in law." | |
This redline example is from a class exercise demonstrating redlines. Photoshop was used to create the visual design of the pages for this hypothetical tea site.
Clients: Javaco Tea
The design process: finalizing a design
Reptiles Around the World final artwork
While some revisions were made to the concept, the final poster design was very similar to the original pencil sketch. Depending upon the project, concept “sketches” or rough drafts may also be created on the computer using programs such as Illustrator or Photoshop. Whatever process is used, creating rough drafts is a great way to develop a design concept that can be presented to the client early on in the design process. | http://www.caroltompkinsdesign.com/tag/photoshop/ |
---
abstract: 'Let $T$ be the Pascal-adic transformation. For any measurable function $g$, we consider the corrections to the ergodic theorem $$\sum_{k=0}^{j-1}g(T^kx)-\dfrac{j}{\ell}\sum_{k=0}^{\ell-1}g(T^kx).$$ When seen as graphs of functions defined on $\{0,\ldots,\ell-1\}$, we show for a suitable class of functions $g$ that these quantities, once properly renormalized, converge to (part of) the graph of a self-affine function. The latter only depends on the ergodic component of $x$, and is a deformation of the so-called Blancmange function. We also briefly describe the links with a series of works on Conway''s recursive \$10,000 sequence.'
author:
- 'É. Janvresse, T. de la Rue, Y. Velenik'
title: 'Self-Similar Corrections to the Ergodic Theorem for the Pascal-Adic Transformation'
---
Key words: Pascal-adic transformation, ergodic theorem, self-affine function, Blancmange function, Conway recursive sequence.
AMS subject classification: 37A30, 28A80.
Introduction
============
The Pascal-adic transformation and its basic blocks
---------------------------------------------------
The notion of *adic transformation* has been introduced by Vershik (see e.g. [@versh5], [@versh6]), as a model in which the transformation acts on infinite paths in some graphs, called *Bratteli diagrams*. As shown by Vershik, every ergodic automorphism of the Lebesgue space is isomorphic to some adic transformation, with a Bratteli diagram which may be quite complicated. Vershik also proposed to study the ergodic properties of an adic transformation in a given simple graph, such as the Pascal graph, which gives rise to the so-called *Pascal adic transformation*. We recall the construction of the latter by the cutting-and-stacking method in the appendix.
When studying the Pascal-adic transformation, one is naturally led to consider the family of words $B_{n,k}$ ($n\ge 1,\ 0\le k\le n$) on the alphabet $\{a,b\}$, inductively defined by (see Figure \[fig:words\]) $$B_{n,0}\ :=\ a,\quad B_{n,n}\ :=\ b,\quad(n\ge 1)$$ and for $0<k<n$ $$B_{n,k}\ :=\ B_{n-1,k-1}B_{n-1,k}.$$ It follows easily from this definition that the length of the block $B_{n,k}$ is given by the binomial coefficient $\binom{n}{k}$.
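As an illustration, the following minimal Python sketch (ours, not part of the paper) generates the basic blocks directly from the inductive definition:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def B(n, k):
    """The basic block B_{n,k} on the alphabet {a, b}."""
    if k == 0:
        return "a"
    if k == n:
        return "b"
    return B(n - 1, k - 1) + B(n - 1, k)

assert B(6, 3) == "aaabaababbaababbabbb"   # the word of Figure [fig:f63]
assert len(B(6, 3)) == comb(6, 3)          # |B_{n,k}| = binom(n, k)
```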
![The beginning of the words triangle.[]{data-label="fig:words"}](wordstriangle.eps)
In order to describe the large-scale structure of the basic blocks $B_{n,k}$, we associate to each of them the graph of a real-valued function $F_{n,k}$. Let us denote by $B_{n,k}(\ell)$ the $\ell$th letter of $B_{n,k}$. For each $n\ge2$ and $0\le k\le n$, we consider the function $F_{n,k}:[0,\binom{n}{k}]\to\mathbb{R}$ defined from the basic block $B_{n,k}$ as follows (see Figure \[fig:f63\]):
- $F_{n,k}(0)=0$;
- if $1\le \ell\le \binom{n}{k}$ is an integer, $F_{n,k}(\ell) =
\begin{cases}
F_{n,k}(\ell-1) +1 & \text{if $B_{n,k}(\ell) = a$,} \\
F_{n,k}(\ell-1) -1 & \text{if $B_{n,k}(\ell) = b$;}
\end{cases}$
- $F_{n,k}$ is linearly interpolated between two consecutive integers.
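Concretely, the values of $F_{n,k}$ at the integers are partial sums read off the word $B_{n,k}$, with the convention $a\mapsto+1$, $b\mapsto-1$; a short sketch (ours, reusing `B` from the sketch above):

```python
from itertools import accumulate

def F_values(word):
    """Values F(0), ..., F(len(word)) at the integers; the function F is
    then linearly interpolated between consecutive integers."""
    return [0] + list(accumulate(1 if c == "a" else -1 for c in word))

vals = F_values(B(6, 3))
assert vals[-1] == 0        # B_{6,3} contains as many a's as b's
```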
![The graph $F_{6,3}$ associated to the word $B_{6,3}=aaabaababbaababbabbb$.[]{data-label="fig:f63"}](f_6_3.eps){height="1cm"}
As we will see in Section \[ErgodicInterpretation\], the ergodic theorem implies that the graph of the function $$t\ \longmapsto\
\dfrac{1}{\tbinom{n}{k}} F_{n,k}\left(t\tbinom{n}{k}\right)$$ converges to a straight line as $n\to\infty$ and $k/n\to p$. In order to extract the nontrivial structure of this graph, we have to remove this dominant contribution and look at the correction (see Section \[limit\_standard\]). Once this is done, it appears that the resulting graph converges to the graph of a self-affine function depending only on $p=\lim k/n$, described in the following section. Examples of such limiting graphs are shown in Figure \[fig:macdo0.5\].
![Limiting observed shape for $B_{n,pn}$ with $p=0.5$ (left) and $p=0.8$ (right).[]{data-label="fig:macdo0.5"}](macdo0.5.eps "fig:"){width="6cm" height="4.5cm"} ![Limiting observed shape for $B_{n,pn}$ with $p=0.5$ (left) and $p=0.8$ (right).[]{data-label="fig:macdo0.5"}](macdo0.8.eps "fig:"){width="6cm" height="4.5cm"}
A one-parameter family of self-affine maps
------------------------------------------
For any $0<p<1$, we consider the two affinities $\alpha_p^L$ and $\alpha_p^R$ defined by $$\alpha_p^L(x,y)\ :=\ (px,py+x),$$ and $$\alpha_p^R(x,y)\ :=\ \Bigl((1-p)x +p,(1-p)y-x+1\Bigr).$$ These maps are strict contractions of $[0,1]\times \mathbb{R}$, thus there exists a unique compact set $E_p$ such that $$E_p\ =\ \alpha_p^L(E_p)\cup\alpha_p^R(E_p).$$ As shown in [@Bedford89], $E_p$ is the graph of a continuous self-affine map ${\text{\MacDo\char 77}}_p\,:\ [0,1]\to \mathbb{R}$, whose construction is illustrated in Figure \[fig\_affinite\] (see also [@Falconer90 Chapter 11]).
Note that ${\text{\MacDo\char 77}}_{0.5}$ is known as the “Blancmange function”, or “Takagi fractal curve”, and was introduced in [@Takagi1903].
![The first four steps in the construction of ${\text{\MacDo\char 77}}_p$, for $p=0.4$. In the first step, we transform the original interval $AB$ into the polygonal line $ACB$, where $C=\alpha_p^L(B)=\alpha_p^R(A)$. In the second step, we similarly map the segment $AC$ onto the segments $AD$ and $DC$, and the segment $CB$ onto $CE$ and $EB$, by applying the same affine transformations. The procedure is then iterated yielding an increasing sequence of piecewise-linear functions, converging to the self-affine function ${\text{\MacDo\char 77}}_p$. []{data-label="fig_affinite"}](lignes_un_tiers.eps)
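To make the construction concrete, here is a small Python sketch (ours) that approximates the graph $E_p$ by applying the two affinities repeatedly to the initial segment $AB$; after $m$ iterations one obtains the $m$-th polygonal approximation of ${\text{\MacDo\char 77}}_p$:

```python
def alpha_L(x, y, p):
    return (p * x, p * y + x)

def alpha_R(x, y, p):
    return ((1 - p) * x + p, (1 - p) * y - x + 1)

def approx_graph(p, m):
    """Points approximating E_p after m refinement steps.
    Stage 0 is the segment from A = (0, 0) to B = (1, 0); each step maps the
    current polyline by alpha_L and alpha_R and concatenates the two images
    (the join C = alpha_L(B) = alpha_R(A) appears twice, which is harmless)."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(m):
        pts = ([alpha_L(x, y, p) for x, y in pts]
               + [alpha_R(x, y, p) for x, y in pts])
    return pts
```

Plotting `approx_graph(0.4, 10)`, for instance, reproduces the limit of the polygonal lines of Figure \[fig\_affinite\].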
Conway's recursive \$10,000 sequence
----------------------------------
In a lecture at AT&T Bell Labs in 1988, Conway introduced the following recursive sequence, $$C(n) = C(C(n-1)) + C(n-C(n-1)),$$ with initial conditions $C(1)= C(2)=1$. This sequence was subsequently studied and generalized in a large number of papers, see e.g. [@Mallows1991; @KuboVakil1996]. In Appendix \[app\_conway\], we briefly describe some links between this topic and the content of the present work.
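For concreteness, a direct memoized implementation of this recursion (a sketch, ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def C(n):
    """Conway's sequence: C(1) = C(2) = 1, C(n) = C(C(n-1)) + C(n - C(n-1))."""
    if n <= 2:
        return 1
    return C(C(n - 1)) + C(n - C(n - 1))

print([C(n) for n in range(1, 17)])
# -> [1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8]
```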
Acknowledgments {#acknowledgments .unnumbered}
---------------
We wish to thank Xavier Méla for having shown us the beautiful shape of $B_{2n,n}$, which prompted our interest in this topic, and Jon Aaronson and Gérard Grancher who informed us that our curve was the well-known Blancmange function.
Results {#limit_standard}
=======
As mentioned before, we have to renormalize $F_{n,k}$ into a new function $\varphi_{n,k}$ defined on $[0,1]$, vanishing at 0 and 1, and vertically scaled so that the point corresponding to the end of $B_{n-1,k-1}$ is mapped to 1: $$\label{deF_ren}
\varphi_{n,k}(t)\ :=\ \frac{ F_{n,k}\Bigl(t\cdot\binom{n}{k}\Bigr) - tF_{n,k}\Bigl(\binom{n}{k}\Bigr) }
{ F_{n,k}\Bigl(\binom{n-1}{k-1}\Bigr) - \frac{\binom{n-1}{k-1}}{\binom{n}{k}} F_{n,k}\Bigl(\binom{n}{k}\Bigr) }\ .$$
\[thm\_convergence1\] Let $\varphi_{n,k}$ be the renormalized curve associated to the basic block $B_{n,k}$ (see \[deF\_ren\]). For any $p\in]0,1[$, for any sequence $(k(n))$ such that $k(n)/n\to p$, we have $$\label{convergence1}
\varphi_{n,k(n)}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{L^{\infty}}}} {\text{\MacDo\char 77}}_p.$$ Moreover, the denominator in \[deF\_ren\] is of order $ \frac{1}{n}\binom{n}{k(n)}. $
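Numerically, the renormalized curve of \[deF\_ren\] is easy to evaluate at the integer abscissae. The following sketch (ours, reusing `B` and `F_values` from the sketches above, and assuming the denominator does not vanish, which holds in the regime of the theorem) returns the points $(j/\binom{n}{k},\varphi_{n,k}(j/\binom{n}{k}))$:

```python
from math import comb

def phi_values(n, k):
    N = comb(n, k)
    F = F_values(B(n, k))                      # F_{n,k} at the integers 0..N
    t_split = comb(n - 1, k - 1)               # end of the sub-block B_{n-1,k-1}
    denom = F[t_split] - (t_split / N) * F[N]  # vertical normalization of (deF_ren)
    return [(j / N, (F[j] - (j / N) * F[N]) / denom) for j in range(N + 1)]
```

Plotting `phi_values(n, round(p * n))` for growing $n$ reproduces the shapes of Figure \[fig:macdo0.5\].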
Ergodic interpretation {#ErgodicInterpretation}
----------------------
The functions $F_{n,k}$ and $\varphi_{n,k}$ introduced before can be interpreted as particular cases of the following general situation: Consider a real-valued function $g$ defined on a probability space $(X,\mu)$ on which acts a measure-preserving transformation $T$. Given a point $x\in X$ and an integer $\ell\ge 1$, we construct the continuous function $F_{x,\ell}^g\,:\ [0,\ell]\to\mathbb{R}$ by $F_{x,\ell}^g(0):=0$; for each integer $j$, $1\le j\le \ell$, $$\label{defFg}
F_{x,\ell}^g(j)\ :=\ \sum_{k=0}^{j-1} g\left( T^kx\right);$$ and $F_{x,\ell}^g$ is linearly interpolated between the integers.
If $g$ is integrable (which we henceforth assume), the ergodic theorem implies that, for $0<t<1$, for almost every $x$, $$\lim_{\ell\to\infty} \frac{1}{\ell}\sum_{0\le j < t\ell} g\left( T^jx\right)\
=\ t\lim_{\ell\to\infty} \frac{1}{\ell}\sum_{0\le j < \ell} g\left( T^jx\right).$$ Therefore, when dividing by $\ell$ both the abscissa and the ordinate, the graph of $F_{x,\ell}^g$ for large $\ell$ looks very much like a straight line whose slope is the empirical mean ${1}/{\ell}\sum_{0\le j < \ell} g\left( T^jx\right)$. If we want to study small fluctuations in the ergodic theorem, it is natural to remove the dominant contribution of this straight line, and rescale the ordinate to make the fluctuations appear. This leads us to introduce a renormalized function $\varphi_{x,\ell}^g\,:\ [0,1]\to\mathbb{R}$ by setting $$\label{defphig}
\varphi_{x,\ell}^g(t)\ :=\ \dfrac{F_{x,\ell}^g(t\ell)-tF_{x,\ell}^g(\ell)}{R^g_{x,\ell}},$$ where $R^g_{x,\ell}$ is the renormalization in the $y$-direction, which we can canonically define by $$\label{defRxl}
R^g_{x,\ell}\ :=\ \begin{cases}
\max_{0\le t\le 1} |F_{x,\ell}^g(t\ell)-tF_{x,\ell}^g(\ell)
| & \text{provided this quantity does not vanish,}\\
1 & \text{otherwise.}
\end{cases}$$ It will be useful in the sequel to note that $\varphi_{x,\ell}^g$ is not changed when we add a constant to $g$.
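In code, the renormalized function of \[defphig\]–\[defRxl\] only requires the finite orbit data $g(x), g(Tx), \ldots, g(T^{\ell-1}x)$; a minimal sketch (ours):

```python
def phi_from_orbit(gs):
    """gs = [g(x), g(Tx), ..., g(T^(l-1) x)].
    Returns the points (j/l, phi^g_{x,l}(j/l)) at the integer abscissae."""
    l = len(gs)
    F = [0.0]
    for v in gs:                               # partial sums, eq. (defFg)
        F.append(F[-1] + v)
    bridge = [F[j] - (j / l) * F[l] for j in range(l + 1)]
    R = max(abs(b) for b in bridge) or 1.0     # renormalization, eq. (defRxl)
    return [(j / l, b / R) for j, b in enumerate(bridge)]
```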
If we consider a Bernoulli shift in which the functions $g\circ T^k$ are i.i.d. random variables, Donsker's invariance principle shows that these corrections to the law of large numbers are given by a suitably scaled Brownian bridge.
We are going to investigate the corresponding questions in the context of the Pascal-adic transformation. Let us recall, see Appendix \[constructionPA\], that the sequence of letters $a$ and $b$ in the word $B_{n, k}$ encodes the trajectory $\left( x,Tx,\ldots, T^{\binom{n}{k}-1}x\right) $ of a point $x$ lying in the basis of the tower $\tau_{n,k}$ with respect to the partition $[0, 1/2[$ (labelled by “$a$”) and $[1/2, 1[$ (labelled by “$b$”). Thus, the function $F_{n,k}$ is nothing but $F_{x,\binom{n}{k}}^g$ for $x$ in the basis of $\tau_{n,k}$, with the function $g$ defined by $$\label{defg}
g\ :=\ {\mathbf{1}_{[0,1/2[}}-{\mathbf{1}_{[1/2,1[}}.$$ Now, the vertical renormalization chosen to define $\varphi_{n,k}$ was not exactly the one defined by \[defRxl\], but it is not difficult to restate Theorem \[thm\_convergence1\] in the following way, where we define $$\label{varphi}
\varphi_{n,k}^g\ :=\ \varphi_{x,\binom{n}{k}}^g$$ for any point $x$ in the basis of $\tau_{n,k}$.
\[thm\_convergence1bis\] Let $g\ =\ {\mathbf{1}_{[0,1/2[}}-{\mathbf{1}_{[1/2,1[}}$. If $k(n)/n\to p$, then $$\label{convergence1bis}
\varphi_{n,k(n)}^g\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{L^{\infty}}}}
\ \dfrac{{\text{\MacDo\char 77}}_p}{\Vert {\text{\MacDo\char 77}}_p \Vert_\infty}.$$
This result shows that the corrections to the ergodic theorem are given by a deterministic function, when we consider sums along the Rokhlin towers $\tau_{n,k}$. It is possible to derive an analogous pointwise statement at the cost of extracting a subsequence.
\[thm\_convergence1ter\] Let $g\ =\ {\mathbf{1}_{[0,1/2[}}-{\mathbf{1}_{[1/2,1[}}$. For $\mu_p$ almost every $x\in X$, there exists a sequence $\ell_n$ such that $$\label{convergence1ter}
\varphi_{x,\ell_n}^g\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{L^{\infty}}}}
\ \dfrac{{\text{\MacDo\char 77}}_p}{\Vert {\text{\MacDo\char 77}}_p \Vert_\infty}.$$
Limit for general dyadic functions
----------------------------------
Let $N_0 \geq 1$. We consider the dyadic partition ${\mathscr{P}}_{N_0}$ of the interval $[0, 1[$ into $2^{N_0}$ sub-intervals. We want to extend Theorem \[thm\_convergence1bis\] to a general ${\mathscr{P}}_{N_0}$-measurable real-valued function $g$.
Associated to a ${\mathscr{P}}_{N_0}$-measurable function $g$, we define the words $B^{N_0}_{n,k}$ ($n\ge N_0,\ 0\le k\le n$) on an alphabet with $N_0+1$ letters, $\{a_0, \dots, a_{N_0}\}$. They are inductively defined by $B^{N_0}_{N_0,k}\ :=\ a_k $ for $0\leq k\leq N_0$, $B^{N_0}_{n, 0}\ := \ a_0$, $B^{N_0}_{n,n}\ := \ a_{N_0}$ for $n\geq N_0$, and for $0<k<n$ $$B^{N_0}_{n,k}\ :=\ B^{N_0}_{n-1,k-1}B^{N_0}_{n-1,k}.$$ To each letter $a_k$ corresponds a continuous function $F_{N_0,k}^g$ defined for $\ell$ integer in $[0,\binom{N_0}{k}]$ by $$F_{N_0,k}^g(\ell)\ := \ \sum_{j=0}^{\ell-1} g(T^jx),$$ where $x$ is any point in the basis of $\tau_{N_0,k}$, and extended to the full interval by linear interpolation.
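These words are generated exactly like the $B_{n,k}$; a sketch (ours), encoding the letter $a_\ell$ by the integer $\ell$:

```python
from functools import lru_cache
from math import comb

def B_N0(N0):
    """Returns a generator W with W(n, k) = B^{N0}_{n,k} as a tuple of letters."""
    @lru_cache(maxsize=None)
    def W(n, k):
        if n == N0:
            return (k,)
        if k == 0:
            return (0,)
        if k == n:
            return (N0,)
        return W(n - 1, k - 1) + W(n - 1, k)
    return W

W = B_N0(2)
# Each letter a_l stands for a block of length binom(N0, l), so the
# underlying {a,b}-length of B^{N0}_{n,k} is still binom(n, k):
assert sum(comb(2, l) for l in W(5, 2)) == comb(5, 2)
```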
As before, we associate to the word $B^{N_0}_{n,k}$ a continuous function $F^g_{n,k} : [0, \binom{n}{k}] \to \mathbb{R}$ such that $F^g_{n, k}(0)=0$. Its graph is constructed by substituting to each letter $a_k$ the (translated) graph of $F_{N_0,k}^g$.
A central characteristic of $g$ is given by its ergodic sums along the towers $\tau_{N_0, k}$: $${h}^g_{N_0, k}\ :=\ F_{N_0,k}^g(\tbinom{N_0}{k}),$$ for $k=0,\ldots,N_0$.
We are interested in the renormalized function $\varphi^g_{n,k}$ defined as in \[varphi\] but now for a general $g$. Denoting by $R_{n,k}^g$ the renormalization constant $$\label{defRnk}
R^g_{n,k}\ :=\ \begin{cases}
\max_{0\le t\le 1} |F_{n,k}^g\left(t\tbinom{n}{k}\right)-tF_{n,k}^g\left(\tbinom{n}{k}\right)
| & \text{provided this quantity does not vanish,}\\
1 & \text{otherwise,}
\end{cases}$$ the function $\varphi^g_{n,k}$ is given, for $0\le t\le 1$, by $$\varphi^g_{n,k}(t)\ =\ \dfrac{F_{n,k}^g\left(t\tbinom{n}{k}\right)-tF_{n,k}^g\left(\tbinom{n}{k}\right)}{R^g_{n,k}}.$$
As a first observation, we can point out that it is easy to find ${\mathscr{P}}_{N_0}$-measurable functions for which convergence of $\varphi^g_{n,k}$ to a continuous function will never hold.
\[coboundary-lemma\] Let $g$ be a ${\mathscr{P}}_{N_0}$-measurable coboundary of the form $g={f}-{f}\circ T$ for some bounded measurable function ${f}$. Then $$\label{borne}
\sup_{n,k} \|F_{n,k}^g\|_\infty\ <\ +\infty.$$ If $g$ is not identically $0$, then there is no cluster point in $L^\infty([0,1])$ for any sequence $\varphi_{n,k(n)}^g$ with $k(n)/n \to p\in(0,1)$.
Note that coboundaries such as those appearing in the statement of the lemma really do exist. A simple example is given by $g:={\mathbf{1}_{[1/4,1/2[}}-{\mathbf{1}_{[1/2,3/4[}}$, with transfer function ${f}:=-{\mathbf{1}_{[1/2,3/4[}}$. Also note that the conclusion of the lemma still holds for $g$ cohomologous to a constant in $L^{\infty}$, *i.e.* of the form $$g\ =\ {f}-{f}\circ T + C$$ with ${f}$ bounded measurable. This follows from the fact that $\varphi_{n,k}^g$ is unchanged when we add a constant to $g$.
\[convergenceinlinfinity\] Let $g$ be measurable with respect to the dyadic partition ${\mathscr{P}}_{N_0}$. We suppose that $g$ is not cohomologous to a constant in $L^{\infty}$. For any sequence $(k(n))$ such that $k(n)/n\to p\in(0,1)$, we can extract a subsequence $(n_s)$ such that $\varphi^g_{n_s,k(n_s)}$ converges in $L^\infty$ to a continuous function.
In the course of the proof of this theorem, we will establish the following characterization of ${\mathscr{P}}_{N_0}$-measurable functions $g$ which are cohomologous to some constant in $L^\infty$:
Let $g$ be ${\mathscr{P}}_{N_0}$-measurable. Then $g=C+{f}-{f}\circ T$ for some constant $C$ and some bounded function ${f}$ if and only if the quantities $h_{N_0, \ell}^g$ ($0\le \ell\le N_0$) are proportional to $\binom{N_0}{\ell}$ ($0\le \ell\le N_0$).
Our final result concerns the cluster points we can get for the functions $\varphi_{n,k(n)}^g$. Surprisingly, the self-affine maps ${\text{\MacDo\char 77}}_p$ which arose in the study of the basic blocks $B_{n,k}$ turn out to be the only possible limit in “almost all” cases for a dyadic function $g$, in a sense made clear by the following theorem. Before stating it, we introduce for any ${\mathscr{P}}_{N_0}$-measurable function $g$ the polynomial in $p$ $$\label{P^g}
P^g(p)\ :=\ \sum_{\ell=0}^{N_0} h_{N_0,\ell}^g \, p^\ell(1-p)^{N_0-\ell}(N_0p-\ell).$$
Note that $P^g\not\equiv 0$ if and only if $g$ is not cohomologous to a constant in $L^{\infty}$. Indeed, setting $q:=p/(1-p)$, it is easy to compute the coefficients of the corresponding polynomial in $q$ and see that they vanish if and only if $h_{N_0,\ell}^g\propto \binom{N_0}{\ell}$.
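The polynomial is straightforward to evaluate from the tower sums; a sketch (ours), following \[P\^g\]:

```python
def P_g(p, h):
    """P^g(p) from h = [h_{N0,0}, ..., h_{N0,N0}], following eq. (P^g)."""
    N0 = len(h) - 1
    return sum(h[l] * p**l * (1 - p)**(N0 - l) * (N0 * p - l)
               for l in range(N0 + 1))

# Example: g = 1_[0,1/2) - 1_[1/2,1) has (h_{1,0}, h_{1,1}) = (1, -1),
# so P^g(p) = 2p(1-p) > 0 on (0,1), consistent with the theorem below
# and with Theorem [thm_convergence1bis].
assert abs(P_g(0.3, [1, -1]) - 2 * 0.3 * 0.7) < 1e-12
```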
\[thm\_general\] Let $g$ be measurable with respect to the dyadic partition ${\mathscr{P}}_{N_0}$. If $P^g(p)\neq0$, for any sequence $(k(n))$ such that $k(n)/n\to p$, we have $$\label{convergence2}
\varphi^g_{n,k(n)}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{L^{\infty}}}} {\mathrm{sign}}(P^g(p))\ {\text{\MacDo\char 77}}_p/\|{\text{\MacDo\char 77}}_p\|_{\infty}.$$ Moreover, $R_{n,k(n)}^g$ is in this case of order $\frac{1}{n}\binom{n}{k(n)}$.
Some examples where another curve appears
-----------------------------------------
Theorem \[thm\_general\] does not characterize the possible limits for the values of $p$ where the polynomial $P^g$ vanishes. We do not have any general result in that case, but we present in Section \[AnotherCurve\] the study of some particular cases showing that some other curves can appear.
Proofs
======
Piecewise linear graph associated with a triangular array
---------------------------------------------------------
Until now we have always considered triangular arrays of Pascal type, in which positions are denoted by pairs $(n,k)$, with line $(n+1)$ traditionally represented below line $n$. We call such arrays “descending”. We are also going to use another type of triangular array, which we call “ascending”, in which coordinates will be denoted by $(i,j)$, with line $(i+1)$ above line $i$. In both cases, the object located at a given position in the array is obtained from the two objects just above it by summation or concatenation.
We consider here an ascending triangular array ${\text{\MacDo\char 65}}$ with a finite number of lines, labelled (from bottom to top) $0,1,\ldots,m$. For each $i\in\{0,\ldots,m\}$, line $i$ is constituted by $(i+1)$ pairs of real numbers $(x_{i,0},y_{i,0}),\ldots,(x_{i,i},y_{i,i})$, satisfying the following properties:
- for $0\le j\le i\le m$, $x_{i,j}>0$;
- for $0\le j\le i < m$, $x_{i,j}=x_{i+1,j}+x_{i+1,j+1}$ and $y_{i,j}=y_{i+1,j}+y_{i+1,j+1}$.
Observe that, because of the additive relation existing between these numbers, we can recover the whole array if we only know its values on one of its side (for example, if we know all the pairs $(x_{i,0},y_{i,0})$ for $0\le i\le m$).
These pairs of real numbers are interpreted as the horizontal and vertical displacements between points on the graph of some piecewise linear map $\varphi_{\text{\MacDoSmall\char 65}}$. The map $\varphi_{\text{\MacDoSmall\char 65}}$ is defined inductively on $[0,x_{0,0}]$ in the following way. First, corresponding to line 0, we set $\varphi_{\text{\MacDoSmall\char 65}}(0):=0$ and $\varphi_{\text{\MacDoSmall\char 65}}(x_{0,0}):=y_{0,0}$. In general, taking into account all lines up to line $i$ provides a subdivision of $[0,x_{0,0}]$ into $2^i$ intervals whose lengths are the $x_{i,j}$ (taken several times), and defines $\varphi_{\text{\MacDoSmall\char 65}}$ on the bounds of these intervals. Let $I=[t,t+x_{i,j}]$ be such an interval defined in line $i$: we must have $$\varphi_{\text{\MacDoSmall\char 65}}(t+x_{i,j})\ =\ \varphi_{\text{\MacDoSmall\char 65}}(t) + y_{i,j}.$$ Then, coming to line $(i+1)$, $I$ is subdivided into two subintervals $[t,t+x_{i+1,j}]$ and $[t+x_{i+1,j},t+x_{i+1,j}+x_{i+1,j+1}]$ and we set $$\varphi_{\text{\MacDoSmall\char 65}}(t+x_{i+1,j}):=\varphi_{\text{\MacDoSmall\char 65}}(t) + y_{i+1,j}.$$ This inductive procedure eventually defines the values of $\varphi_{\text{\MacDoSmall\char 65}}(t)$ at all bounds $t$ of a subdivision of $[0,x_{0,0}]$ into $2^m$ subintervals. Finally, $\varphi_{\text{\MacDoSmall\char 65}}$ is linearly interpolated between these bounds.
![An array and its associated piecewise linear graph (actually this is the array ${\text{\MacDo\char 65}}_{1/2}^{3}$, providing the $3$rd stage approximation to ${\text{\MacDo\char 77}}_{1/2}$).[]{data-label="fig:array"}](array.pstex "fig:")
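The construction is easy to implement; the sketch below (ours) rebuilds the full array from its lower-left side via the additive relations and then accumulates the $2^m$ leaf increments in left-to-right order. With `left_side = [(p**i, i * p**(i-1)) for i in range(m + 1)]` (and $0<p<1$) it produces the stage-$m$ approximation of ${\text{\MacDo\char 77}}_p$ of Lemma \[defmacdo\] below.

```python
def phi_from_array(left_side):
    """left_side[i] = (x_{i,0}, y_{i,0}), i = 0..m.
    Returns the breakpoints of the piecewise linear map phi_A."""
    m = len(left_side) - 1
    lines = [[left_side[0]]]                   # lines[i][j] = (x_{i,j}, y_{i,j})
    for i in range(m):
        nxt = [left_side[i + 1]]
        for j in range(i + 1):
            x, y = lines[i][j]
            xl, yl = nxt[j]
            nxt.append((x - xl, y - yl))       # x_{i+1,j+1} = x_{i,j} - x_{i+1,j}
        lines.append(nxt)
    # The t-th leaf interval corresponds to the binary word of t (0 = left,
    # 1 = right); each right move increments j, so j = popcount(t).
    pts = [(0.0, 0.0)]
    for t in range(2 ** m):
        dx, dy = lines[m][bin(t).count("1")]
        pts.append((pts[-1][0] + dx, pts[-1][1] + dy))
    return pts
```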
Given the triangular array ${\text{\MacDo\char 65}}$ with $m+1$ lines, we can construct two smaller arrays with $m$ lines denoted by ${\text{\MacDo\char 66}}$ and ${\text{\MacDo\char 67}}$: For $0\le i\le m-1$, line $i$ of ${\text{\MacDo\char 66}}$ is constituted by the $(i+1)$ first pairs of reals in line $(i+1)$ of ${\text{\MacDo\char 65}}$, and line $i$ of ${\text{\MacDo\char 67}}$ is constituted by the $(i+1)$ last pairs of reals in line $(i+1)$ of ${\text{\MacDo\char 65}}$. In the sequel, we will make use of the following fact, whose verification is left to the reader: The graph of $\varphi_{\text{\MacDoSmall\char 65}}$ is formed by putting together the graphs of $\varphi_{\text{\MacDoSmall\char 66}}$ and $\varphi_{\text{\MacDoSmall\char 67}}$. More precisely, we have $$\label{L_and_R}
\varphi_{\text{\MacDoSmall\char 65}}(t)\ =\
\begin{cases}
\varphi_{\text{\MacDoSmall\char 66}}(t) & \text{if $0\le t\le x_{1,0}$,}\\
\varphi_{\text{\MacDoSmall\char 66}}(x_{1,0}) + \varphi_{\text{\MacDoSmall\char 67}}(t-x_{1,0}) & \text{if $x_{1,0}\le t\le x_{0,0}$.}
\end{cases}$$
We say that a map $\varphi\ :\ [0,x_{0,0}]\to\mathbb{R}$ is *compatible* with the array ${\text{\MacDo\char 65}}$ if $\varphi(t)=\varphi_{\text{\MacDoSmall\char 65}}(t)$ for every bound $t$ of the subdivision defined by the array.
\[defmacdo\] For all $0<p<1$ and all $m\ge 0$, the self-affine map ${\text{\MacDo\char 77}}_p$ is compatible with the triangular array ${\text{\MacDo\char 65}}_p^m$ defined by its lower-left side as follows: for each $0\le i\le m$, $x_{i,0}:=p^i$ and $y_{i,0}:=ip^{i-1}$.
We consider two transformations $\lambda_L$ and $\lambda_R$ of $\mathbb{R}^2$, which are the respective linear parts of the affine maps $\alpha_L$ and $\alpha_R$ arising in the definition of ${\text{\MacDo\char 77}}_p$: $$(x,y)\ \mathop{\longmapsto}^{\lambda_L}\ (px,py+x),$$ and $$(x,y)\ \mathop{\longmapsto}^{\lambda_R}\ \Bigl((1-p)x,(1-p)y-x\Bigr).$$ It is easy to check that in the triangular array ${\text{\MacDo\char 65}}_p^m$, we have for $0\le i<m$ and $0\le j\le i$ $$(x_{i+1,j},y_{i+1,j})\ =\ \lambda_L(x_{i,j},y_{i,j}),$$ and $$(x_{i+1,j+1},y_{i+1,j+1})\ =\ \lambda_R(x_{i,j},y_{i,j}).$$ From this, we can deduce that the left part of the graph of $\varphi_{{\text{\MacDoSmall\char 65}}_p^m}$, which is the graph of $\varphi_{{\text{\MacDoSmall\char 66}}_p^m}$, is the image of the graph of $\varphi_{{\text{\MacDoSmall\char 65}}_p^{m-1}}$ by the affine map $\alpha_L$, and the right part of the graph of $\varphi_{{\text{\MacDoSmall\char 65}}_p^m}$ is the image of the graph of $\varphi_{{\text{\MacDoSmall\char 65}}_p^{m-1}}$ by the affine map $\alpha_R$. A simple induction on $m$ then gives the result stated in the lemma.
Proof of Theorem \[thm\_convergence1\]
--------------------------------------
For any $m <n $, the block $B_{n,k}$ is the concatenation of $2^m$ subblocks $B_{n-m,\cdot}$. Let us denote by $t_{n,k,m}^r$, $r = 1, \ldots, 2^m$ the position of the last letter of the $r$th subblock in the block $B_{n,k}$. We also denote by $h_{n,k}$ the height of the basic block $B_{n,k}$, i.e. the difference between the numbers of $a$ and $b$ appearing in $B_{n,k}$. The function $\varphi_{n,k}$ is compatible with the array ${\text{\MacDo\char 65}}_{n,k}^{m}$ defined by its lower-left side as follows: For each $0\leq i \leq m$, $$x^{n,k}_{i,0} = t_{n,k,i}^1 \Big/\binom{n}{k} = \binom{n-i}{k-i}\Big/\binom{n}{k}\,,$$ and $$y^{n,k}_{i,0} = \frac{ h_{n-i,k-i} - x^{n,k}_{i,0} h_{n,k} } { h_{n-1,k-1} - x^{n,k}_{1,0} h_{n,k} }\,.$$
\[lem\_ConvOfArray\] For any $m\geq 0$, any $0<p<1$ and any sequence $k(n)$ such that $\lim_n k(n)/n = p$, we have that $$\lim_{n\to\infty} {\text{\MacDo\char 65}}_{n,k(n)}^{m} = {\text{\MacDo\char 65}}_p^m\,,$$ where ${\text{\MacDo\char 65}}_p^m$ was introduced in Lemma \[defmacdo\].
It is of course sufficient to prove the convergence for the elements appearing in the lower-left side of ${\text{\MacDo\char 65}}_{n,k(n)}^{m}$. We first have $$\lim_{n\to\infty} x^{n,k(n)}_{i,0} = \lim_{n\to\infty} \prod_{r=0}^{i-1}\frac{k(n)-r}{n-r} = p^i\,.$$ Moreover, using the identity $h_{n,k} = \frac{n-2k}{n} \binom{n}{k}$, we also obtain $$\begin{aligned}
\lim_{n\to\infty} y^{n,k(n)}_{i,0}
&= \lim_{n\to\infty} \frac{ h_{n-i,k(n)-i} - x^{n,k}_{i,0} h_{n,k(n)} } { h_{n-1,k(n)-1} - x^{n,k}_{1,0} h_{n,k(n)} }\\
&= \lim_{n\to\infty} \frac{\binom{n-i}{k(n)-i} \left( \frac{n+i-2k(n)}{n-i} - \frac{n-2k(n)}{n} \right)}{\binom{n-1}{k(n)-1} \left( \frac{n+1-2k(n)}{n-1} - \frac{n-2k(n)}{n} \right)}\\
&= \lim_{n\to\infty} i\, \frac{n-1}{n-i} \prod_{r=1}^{i-1} \frac{k(n)-r}{n-r}\\
&= i p^{i-1}\,.\end{aligned}$$
We denote by $\varphi_{n,k}^{m} := \varphi_{{\text{\MacDoSmall\char 65}}_{n,k}^{m}}$ the polygonal approximation of $\varphi_{n,k}$ at the order $m$. A computation shows that $\varphi_{n,k} = \varphi_{n,k}^{n-1}$.
Lemma \[lem\_ConvOfArray\] obviously implies $$\lim_{m\to +\infty}\limsup_{n\to +\infty}\Vert\varphi_{n,k(n)}^{m}-\varphi_{{\text{\MacDoSmall\char 65}}_{p}^m}\Vert_{\infty} = 0.$$ Moreover, by continuity of ${\text{\MacDo\char 77}}_p$, we also have $$\varphi_{{\text{\MacDoSmall\char 65}}_{p}^m}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{m\to\infty}}^{\scriptscriptstyle{L^{\infty}}}} {\text{\MacDo\char 77}}_p.$$ Hence, it is enough to prove that $$\lim_{m\to +\infty}\limsup_{n\to +\infty}\Vert\varphi_{n,k(n)}^{m}-\varphi_{n,k(n)}\Vert_{\infty} = 0,$$ which is a consequence of the general Theorem \[convergenceinlinfinity\].
Proof of Theorem \[thm\_convergence1ter\]
-----------------------------------------
For each $n\ge1$, let us denote by $k_n(x)$ the unique index such that $$x\ \in\ \tau_{n,k_n(x)}.$$ We have $$\dfrac{k_n(x)}{n}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\rightarrow\infty}}^{\scriptscriptstyle{}}}\ p\qquad \text{$\mu_p$-almost surely}.$$ Thus, it follows from Theorem \[thm\_convergence1bis\] that $$\varphi_{n,k_n(x)}^g\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{L^{\infty}}}}
\ \dfrac{{\text{\MacDo\char 77}}_p}{\Vert {\text{\MacDo\char 77}}_p \Vert_\infty} \qquad \text{$\mu_p$-almost surely}.$$ It therefore only remains to observe that $\mu_p$-almost surely, $x$ lies arbitrarily close to the bottom of $\tau_{n,k_n(x)}$ for infinitely many $n$; more precisely there exists a sequence $(n_s)$ such that the height of $x$ in tower $\tau_{n_s,k_{n_s}(x)}$ is smaller than $\frac{1}{s}\binom{n_s}{k_{n_s}(x)}$. This follows from [@Janvresse-delaRue04 Lemma 2.5].
Proof of Lemma \[coboundary-lemma\]
-----------------------------------
For any $\ell\in\{0,\ldots,\binom{n}{k}\}$, $$F_{n,k}^g(\ell)\ =\ {f}(x)-{f}(T^{\ell}x)$$ for any $x$ in the basis of the tower $\tau_{n,k}$, which proves \[borne\]. This implies that the renormalization constants $R_{n,k}^g$ are uniformly bounded. It is easy to see that for $n$ large enough, each letter $a_k$ appears at least once in the decomposition of the word $B_{n,k(n)}^{N_0}$ (here we use the assumption that $\lim k(n)/n\in(0,1)$). Suppose first that there exists $0\le k_0\le N_0$ such that $g$ is not constant on $\tau_{N_0,k_0}$. Then on each subinterval of $[0,1]$ corresponding to one occurrence of $a_{k_0}$ in $B_{n,k(n)}^{N_0}$, the function $\varphi_{n,k(n)}^g$ has *variation* uniformly bounded below by some $c>0$. The conclusion follows since the length of this subinterval goes to 0 as $n\to \infty$.
Finally suppose that $g$ is constant on each tower $\tau_{N_0,k}$. Note that this constant cannot be the same for every tower, otherwise $g$ would be identically 0 (remember that $g$ is a coboundary). Hence there exists $k_1$ such that $g$ takes different values on $\tau_{N_0,k_1}$ and $\tau_{N_0,k_1+1}$. Therefore $g$ is not constant on the tower $\tau_{N_0+1,k_1}$ and we are back to the previous case.
Proof of Theorem \[convergenceinlinfinity\]
-------------------------------------------
The function $\varphi^g_{n,k(n)}$ is compatible with the array ${\text{\MacDo\char 65}}_{n, k(n)}^g$, which is defined by its lower-left side: for $0\le i\le n-N_0$, $$\label{def_x^gy^g}
x_{i, 0}^{n, k(n), g} := \dfrac{\binom{n-i}{k(n)-i}}{\binom{n}{k(n)}}, \qquad
y_{i, 0}^{n, k(n), g} := \varphi^g_{n,k(n)}\left(x_{i, 0}^{n, k(n), g}\right).$$ Moreover, $$\label{Rgrand}
\|\varphi^g_{n,k(n)}-\varphi_{{\text{\MacDoSmall\char 65}}_{n, k(n)}^g}\|_{\infty}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{}}}\ 0,$$ provided that the renormalization constant $R_{n,k(n)}^g$ goes to infinity. (This means that in this case we can forget the variations in each $F_{N_0,k}^g$ and replace them by linear functions.)
Notice that the $y$ coefficients on the top line of the ascending array ${\text{\MacDo\char 65}}_{n, k(n)}^g$ are either null or of the form $\alpha_\ell(n,k(n))/R_{n, k(n)}^g$, for $0\le \ell\le N_0$, where $$\alpha_\ell(n,k(n))\ =\ h_{N_0, \ell}^g - \binom{N_0}{\ell} \sum_{r=0}^{N_0} h_{N_0, r}^g \dfrac{\binom{n-N_0}{k(n)-r}}{\binom{n}{k(n)}}.$$ The quantity subtracted from $h_{N_0, \ell}^g$ corresponds to adding a constant $d$ to $g$ so that $F_{n,k(n)}^{g+d}$ vanishes at its end point. Thus, we can rewrite $y_{i, j}^{n, k(n), g}$ as $$y_{i, j}^{n, k(n), g} =
\dfrac{1}{R_{n, k(n)}^g}\sum_{\ell=0}^{N_0}\alpha_\ell(n,k(n))\binom{n-i-N_0}{k(n)-i-\ell+j}.$$ We denote by ${\text{\MacDo\char 65}}_{n, k(n)}^{g, m}$ the truncated array constituted by the first $(m+1)$ lines of ${\text{\MacDo\char 65}}_{n, k(n)}^{g}$. Observe that the coefficients $y_{i, j}^{n, k(n), g}$ satisfy the conditions of Proposition \[prop\] stated below, with $\delta:=\min\{p/4,(1-p)/4\}$. In particular, it follows from the latter that $$\sup_{0\le j\le i} |y_{i, j}^{n, k(n), g}| \le 3 e^{-Ci},$$ provided that $2\delta < k(n)/n < 1-2\delta$, which is true for $n$ large enough since $p\in(0,1)$. This implies that $$\label{P1}
\sup_n\| \varphi_{{\text{\MacDoSmall\char 65}}_{n, k(n)}^g} - \varphi_{{\text{\MacDoSmall\char 65}}_{n, k(n)}^{g, m}} \|_{\infty} \le 3\sum_{i\ge m} e^{-Ci}$$ which goes to zero as $m$ goes to infinity. For any fixed $m$, we can extract a subsequence $(n_s)$ such that the arrays ${\text{\MacDo\char 65}}_{n_s, k(n_s)}^{g,m}$ converge to an array ${\text{\MacDo\char 65}}^{g,m}$; by a classical diagonalization argument, it is then possible to find $(n_s)$ such that the convergence holds for any $m$. Equivalently, the function $\varphi_{{\text{\MacDoSmall\char 65}}_{n_s, k(n_s)}^{g,m}}$ converges in $L^\infty$ to a function $\varphi^{g,m}$. Moreover, it follows from Proposition \[prop\] that $\varphi^{g,m}$ converges in $L^\infty$ as $m$ goes to infinity to a continuous function $\varphi^{g}$. Then, provided that \[Rgrand\] is satisfied, we get that $$\|\varphi_{n_s,k(n_s)}^g-\varphi^{g}\|_{\infty}\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{}}}\ 0.$$ To complete the proof of the theorem, it only remains to show that if $R_{n,k(n)}^g$ is bounded, then $g=C+{f}-{f}\circ T$ with ${f}$ bounded measurable. Proposition \[prop\] clearly implies that $\alpha_\ell(n,k(n))/R_{n,k(n)}^g\to 0$ as $n\to\infty$. If we assume that $R_{n,k(n)}^g$ is bounded, this leads to $$h_{N_0, \ell}^g - \binom{N_0}{\ell} \gamma_n\ {\mathop{\longrightarrow}_{\scriptscriptstyle{n\to\infty}}^{\scriptscriptstyle{}}}\ 0$$ for all $0\le \ell\le N_0$, where $$\gamma_n\ :=\ \sum_{r=0}^{N_0} h_{N_0, r}^g \dfrac{\binom{n-N_0}{k(n)-r}}{\binom{n}{k(n)}}.$$ This in turn implies that the quantities $h_{N_0, \ell}^g$ ($0\le \ell\le N_0$) are proportional to $\binom{N_0}{\ell}$ ($0\le \ell\le N_0$), which means that we can subtract some constant $C$ from the function $g$ so that $$h_{N_0, \ell}^{g-C}\ =\ 0\qquad \forall \ell\in\{0,\ldots,N_0\}.$$ But this is easily seen to be equivalent to the following: The function $g-C$ belongs to the linear space spanned by the functions ${f}_{\ell,r}-{f}_{\ell,r}\circ T$ ($0\le \ell\le N_0$, $1\le r\le \tbinom{N_0}{\ell}$) where ${f}_{\ell,r}$ is the indicator function of the $r$-th rung in tower $\tau_{N_0,\ell}$.
\[prop\] Let $N_0\ge1$, $\delta\in(0,1/4)$, and ${\overline{n}}$, ${\overline{k}}$ such that ${\overline{n}}\ge N_0$ and $2\delta{\overline{n}}\le {\overline{k}}\le(1-2\delta){\overline{n}}$. Let $\alpha_\ell$, $\ell=0,\ldots,N_0$ be real numbers. For $N_0\le n\le {\overline{n}}$ and $0\le k\le {\overline{k}}$ with $n-k\le{\overline{n}}-{\overline{k}}$, we define $$\gamma_{n,k}\ :=\ \dfrac{1}{R}\sum_{\ell=0}^{N_0}\alpha_\ell\binom{n-N_0}{k-\ell},$$ where $R$ is a renormalization constant such that $|\gamma_{n,k}|$ is always bounded by 2. There exists a constant $C=C(\delta,N_0)$ such that, provided ${\overline{n}}$ is large enough, the following inequality holds for all $n,k$: $$\label{gamma}
|\gamma_{n,k}|\ \le\ 3e^{-C({\overline{n}}-n)}.$$
We choose $\ell_0\in\{0,\ldots,N_0\}$. We can write $$\begin{gathered}
R\gamma_{n,k}\ =\
\binom{n-N_0}{k-\ell_0} \sum_{\ell=0}^{\ell_0-1} \alpha_\ell
\prod_{\ell+1\le r\le \ell_0} \dfrac{n-N_0-k+r}{k+1-r} \\
+ \binom{n-N_0}{k-\ell_0} \sum_{\ell=\ell_0}^{N_0} \alpha_\ell
\prod_{\ell_0+1\le r\le \ell} \dfrac{k+1-r}{n-N_0-k+r},\end{gathered}$$ provided that $\binom{n-N_0}{k-\ell_0}\neq 0$. We are going to bound the second term of the RHS; the first one can be treated in a similar way. It can be written as $$\label{rhs}
\binom{n-N_0}{k-\ell_0}\ \dfrac{\tilde P(n,k)}{\tilde Q(n-k)},$$ where $$\tilde P(n,k)\ :=\ \sum_{\ell=\ell_0}^{N_0} \alpha_\ell
\prod_{\ell_0+1\le r\le \ell} {(k+1-r)}
\prod_{\ell+1\le r\le N_0} (n-N_0-k+r)
,$$ and $$\tilde Q(n-k)\ :=\
\prod_{\ell_0+1\le r\le N_0} (n-N_0-k+r).$$ It is convenient to make the following change of variables: $$x\ :=\ {\overline{k}}-k\quad;\quad y\ :=\ ({\overline{n}}-{\overline{k}}) - (n-k),$$ and to set $$P(x,y)\ :=\ \tilde P(n,k)\quad ;\quad Q(y)\ :=\ \tilde Q(n-k).$$ Notice that in the domain where the $\gamma_{n,k}$ are defined, $x$ and $y$ are nonnegative integers. We observe that the degree of $P$ is $N_0-\ell_0\le N_0$, so that we can write $$P(x,y)\ =\ \sum_{\substack{u,v\ge0\\ u+v\le N_0}} c_{u,v}x^uy^v.$$ There exists a constant $M=M(N_0)$ such that for each polynomial $P$ of the above form, we have $$\sum_{\substack{u,v\ge0\\ u+v\le N_0}} |c_{u,v}|\ \le\ M\max_{\substack{u,v\ge0\\ u+v\le N_0}}|P(u,v)|.$$ Indeed, the map $(c_{u,v})\longmapsto
(P(u,v))$ is linear and one-to-one in a finite-dimensional space where all the norms are equivalent. Therefore, for nonnegative $x,y$ $$|P(x,y)|\ \le\ M\max_{\substack{u,v\ge0\\ u+v\le N_0}}|P(u,v)| \ (1+x+y)^{N_0}.$$ Hence, since $Q(0)\ge Q(v)$ for any $v\ge 0$, $$\left| \dfrac{P(x,y)}{Q(y)} \right|\ \le\ M\max_{\substack{u,v\ge0\\ u+v\le N_0}}\left|\dfrac{P(u,v)}{Q(v)}\right| (1+x+y)^{N_0}
\ \dfrac{Q(0)}{Q(y)}.$$ For all $y\ge 0$, we have $$\dfrac{Q(0)}{Q(y)}\ = \prod_{r=\ell_0+1}^{N_0} \dfrac{{\overline{n}}-{\overline{k}}-N_0+r}{{\overline{n}}-{\overline{k}}-N_0-y+r}
\le\ (1+3y)^{N_0}.$$ The last inequality is easily obtained by considering the two cases: $y<({\overline{n}}-{\overline{k}})/2$, and $y\ge({\overline{n}}-{\overline{k}})/2$. Therefore, we get $$\label{PQ} \left|\dfrac{P(x,y)}{Q(y)}\right| \ \le\ M(1+x+y)^{N_0}(1+3y)^{N_0}\max_{\substack{u,v\ge0\\ u+v\le N_0}}\left|\dfrac{P(u,v)}{Q(v)}\right|.$$ We now need to estimate the maximum in the above formula. Denoting by $n_1,k_1$ the position where this maximum is attained, it follows from the assumption $|\gamma_{n,k}|\le 2$ that $$\begin{gathered}
2\ \ge\ |\gamma_{n_1,k_1}|\ =\ \dfrac{1}{R}\binom{n_1-N_0}{k_1-\ell_0} \max_{\substack{u,v\ge0\\ u+v\le N_0}}\left|\dfrac{P(u,v)}{Q(v)}\right| \\
\ge\ \dfrac{1}{R}\binom{{\overline{n}}-N_0}{{\overline{k}}-\ell_0} (2\delta)^{N_0} \max_{\substack{u,v\ge0\\ u+v\le N_0}}\left|\dfrac{P(u,v)}{Q(v)}\right|.\label{max}\end{gathered}$$ The last inequality follows from the fact that $k_1/n_1\in (2\delta, 1-2\delta)$. Therefore, provided that $\binom{n-N_0}{k-\ell_0}\neq 0$, we get from \[rhs\], \[PQ\] and \[max\] $$|\gamma_{n,k}|\ \le\ C(\delta,N_0) (1+{\overline{n}}-n)^{2N_0}\dfrac{\binom{n-N_0}{k-\ell_0}}{\binom{{\overline{n}}-N_0}{{\overline{k}}-\ell_0}}.$$ Notice that if $(n-N_0, k-\ell_0)$ is such that $(k-\ell_0)/(n-N_0)\in (\delta,1-\delta)$, $(n-N_0, k-\ell_0)$ and $({\overline{n}}-N_0, {\overline{k}}-\ell_0)$ can always be linked by a path in the triangle such that all the points along the path stay in the same set. Therefore, the result simply follows from repetition of the inequalities, valid for $k/n \in (\delta,1-\delta)$, $$\begin{gathered}
\binom{n-1}{k-1} = \frac{k}{n} \binom{n}{k} \leq (1-\delta) \binom{n}{k}\,,\\
\binom{n-1}{k} = \frac{n-k}{n} \binom{n}{k} \leq (1-\delta) \binom{n}{k}\,.\end{gathered}$$ Suppose now that $(k-\ell_0)/(n-N_0) \le \delta$. We want to link the points $(n-N_0, k-\ell_0)$ and $({\overline{n}}-N_0, {\overline{k}}-\ell_0)$ by a path staying as much as possible in the set $\{(n, k): k/n\in (\delta,1-\delta)\}$. One can easily check that the fraction of the length of such a path spent inside this set is bounded below by $1/2(1-\delta)$. Moreover, since for any $(n, k)$, $\max(\binom{n-1}{k-1}, \binom{n-1}{k})\le \binom{n}{k}$, we can repeat our argument and we obtain that $$\dfrac{\binom{n-N_0}{k-\ell_0}}{\binom{{\overline{n}}-N_0}{{\overline{k}}-\ell_0}} \le e^{-C(\delta)({\overline{n}}-n)}.$$ This proves our claim provided $\binom{n-N_0}{k-\ell_0}\neq 0$. However, for any $(n, k)$ it is possible to choose $0\le \ell_0\le N_0$ such that this holds. The conclusion follows since our estimate is uniform in $\ell_0$.
![In the case where $\frac{k}{n}\not\in(\delta,1-\delta)$, we construct a path escaping as fast as possible from this region.[]{data-label="fig:linfini"}](linfini-with-loupe.pstex "fig:")
Proof of Theorem \[thm\_general\]
---------------------------------
Suppose first that, for some $0\le \ell\le N_0$, $h_{N_0,k}^g=\delta_{k,\ell}$. Then $y_{i,0}^{n,k(n),g}$ (see \[def\_x\^gy\^g\]) can be written as (writing simply $k$ for $k(n)$) $$\begin{aligned}
y_{i,0}^{n,k,g} & = & \dfrac{1}{R_{n,k}^g}\left(h_{n-i,k-i}^{N_0,\ell}-x_{i,0}^{n,k}h_{n,k}^{N_0,\ell}\right) \\
& = & \dfrac{1}{R_{n,k}^g}\left(\binom{n-N_0-i}{k-\ell-i}-\dfrac{\binom{n-i}{k-i}}{\binom{n}{k}}
\binom{n-N_0}{k-\ell}\right).\end{aligned}$$ After some algebra, we see that the numerator is equal to $$\begin{aligned}
\label{eq_yR}
y_{i,0}^{n,k,g}R_{n,k}^g & = &
\binom{n-N_0}{k-\ell} \prod_{j=0}^{i-1}\dfrac{k-j}{n-j}\left(\prod_{j=0}^{i-1} \dfrac{k-\ell-j}{k-j}\dfrac{n-j}{n-N_0-j}-1\right)\\
\nonumber
& = &
\binom{n-N_0}{k-\ell} \prod_{j=0}^{i-1}\dfrac{k-j}{n-j}\left(i\left(\frac{N_0}{n}-\frac{\ell}{k}\right)+o\left(\frac{1}{n}\right)\right) \\
\nonumber
& = &
\binom{n-N_0}{k-\ell} \prod_{j=0}^{i-1}\dfrac{k-j}{n-j}\left(\frac{i}{k}(N_0p-\ell)+o\left(\frac{1}{n}\right)\right)\\
\nonumber
& = & \frac{1}{n}\binom{n}{k}ip^{i-1}p^\ell(1-p)^{N_0-\ell}(N_0p-\ell+o(1)).\end{aligned}$$ We now turn to the general case. By linearity, we get $$y_{i,0}^{n,k,g}R_{n,k}^g\ =\ \frac{1}{n}\binom{n}{k}ip^{i-1}P^g(p) (1+o(1)).$$ Provided that $P^g(p)\neq 0$, the denominator $R_{n,k}^g$ is proportional to the same expression where $i=1$. It follows that for some $C\neq 0$, $$y_{i,0}^{n,k,g}\ =\ ip^{i-1}(C+o(1)).$$
Open problems and conjectures
=============================
Limiting curves in the transition regime {#AnotherCurve}
----------------------------------------
![The limiting curve corresponding to the array ${\text{\MacDo\char 65}}_{1/2}'^{\,m}$, obtained along the sequence $(2k,k)$. Notice that this curve does not have the same self-similarity as ${\text{\MacDo\char 77}}_p$, and is thus much less stable. For example, along the sequence $(2k-1,k-1)$ the limiting curve is the left half, which is different.[]{data-label="nacdo"}](nacdo.eps){width="10cm" height="7cm"}
Our aim here is to study, in some particular cases, the behaviour in the transition regime, i.e. when the polynomial $P^g$ vanishes. We introduce a family of ${\mathscr{P}}_{N_0}$-measurable functions $g_{N_0}$, indexed by $N_0=1,2,\ldots$, such that $$h_{N_0,\ell}^{g_{N_0}} = (-1)^\ell \binom{N_0}{\ell}\,.$$ It is easy to check that, for $N_0\geq 2$, $P^{g_{N_0}}$ has a zero of multiplicity $N_0-1$ at $1/2$. Indeed, using the identity $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$, one easily obtains that, for $N_0\geq 2$, $$P^{g_{N_0}}(p) = \left( 1-\frac{p}{1-p} \right) P^{g_{N_0-1}}(p) + \left( 1-2p \right)^{N_0-1}\,,$$ and the claim follows since $P^{g_{1}}(p) = 2p(1-p)$ does not vanish at $p=1/2$.
Using this family of functions, it is possible to investigate the behaviour of the limiting graph in the transition regime, i.e. along the sequence $(n,k)=(2k, k)$. In particular, we would like to see whether the multiplicity of the zero of the polynomial at $p=1/2$ has an influence on the limit. It turns out that, seemingly, only the parity of the multiplicity plays a crucial role.
Indeed, introducing the notation $h_{n,k}^{g_{N_0}} = S_{N_0}(n,k) \binom{n}{k}$, and using the identity $h_{n,k}^{g_{N_0}} = h_{n-1,k}^{g_{N_0-1}} - h_{n-1,k-1}^{g_{N_0-1}}$, we easily obtain the following recurrence relation, $$S_{N_0}(n,k) = \left( 1-\frac{k}{n} \right) S_{N_0-1}(n-1,k) - \frac{k}{n} S_{N_0-1}(n-1,k-1)\,.$$ From this, we can then easily compute the following asymptotics for the numerator of $y_{i,0}^{2k,k,g_{N_0}}$: It is equal to $(\frac{1}{2})^i \binom{2k}{k}$ times $$\begin{aligned}
N_0 = 2 & :\quad \phantom{-}\frac{1}{4k^2}\,i(i-1) + o(k^{-2})\,,\\
N_0 = 3 & :\quad -\frac{3}{4k^2}\,i + o(k^{-2})\,,\\
N_0 = 4 & :\quad -\frac{3}{4k^3}\,i(i-1) + o(k^{-3})\,,\\
N_0 = 5 & :\quad \phantom{-}\frac{15}{k^3}\,i + o(k^{-3})\,,\\
N_0 = 6 & :\quad \phantom{-}\frac{45}{16k^4}\,i(i-1) + o(k^{-4})\,,\\
N_0 = 7 & : \quad-\frac{105}{16k^4}\,i + o(k^{-4})\,,\\\end{aligned}$$ and so on, and so forth. We therefore see that for odd $N_0$, there is convergence to the curve ${\text{\MacDo\char 77}}_{1/2}$, with alternating signs. Of course, the scaling is different from what we saw in the generic case. More interestingly, we see that when $N_0$ is even, there is convergence to a new curve (again with alternating signs), characterized by the array ${\text{\MacDo\char 65}}_{1/2}'^{\,m}$ defined by its lower-left side as follows: For $0\le i\le m$, $x_{i,0}':=2^{-i}$ and $y_{i,0}':=i(i-1)(1/2)^{i-2}$. A picture of this limiting curve is given in Fig. \[nacdo\].
A heuristic interpretation of the preceding result is that, for the functions $g_{N_0}$ considered here, the polynomial $P^{g_{N_0}}(p)$ changes sign when crossing $p=1/2$ for even values of $N_0$, so that we are looking at the transition between the two opposite curves $\pm \mathcal{M}_{1/2}/\| \mathcal{M}_{1/2} \|_\infty$. Actually, this can be seen in the array $\mathcal{A}_{1/2}'$. Indeed, if we consider the subarray of $\mathcal{A}_{1/2}'$ starting from position $(i_0,0)$ (it is defined by its lower-left side by $x_{i,0}'^{(i_0)}:=x'_{i+i_0,0}$ and $y_{i,0}'^{(i_0)}:=y'_{i+i_0,0}$), and renormalize the associated curve in the standard way, a little computation shows that it converges in $L^\infty$ as $i_0\to\infty$ to $\mathcal{M}_{1/2}/\| \mathcal{M}_{1/2} \|_\infty$. Proceeding similarly on the right side gives rise to the curve $-\mathcal{M}_{1/2}/\| \mathcal{M}_{1/2} \|_\infty$.
### Question {#question .unnumbered}
The preceding analysis leads us to raise the following question: is it possible to observe limiting curves other than $\pm\mathcal{M}_p$ and portions of the curve in Fig. \[nacdo\]? It is actually possible to get other curves for arbitrarily large $n$: for any $s\ge 1$, by taking appropriate initial conditions, we can get arbitrarily close to the curve given by the array defined by its lower-left side as follows: for $0\le i\le m$, $x_{i,0}^{(s)}:=p^{i}$ and $y_{i,0}^{(s)}:=i(i-1)\cdots(i-s)p^{i-s-1}$. However, such curves do not seem to survive in the limit.
Larger classes of functions
---------------------------
Recall that we introduced for any ${\mathscr{P}}_{N_0}$-measurable function $g$ the polynomial in $p$ $$P^g(p)\ :=\ \sum_{\ell=0}^{N_0} h_{N_0,\ell}^g \, p^\ell(1-p)^{N_0-\ell}(N_0p-\ell),$$ where $h_{N_0,\ell}^g $ is the sum of the values taken by $g$ on each rung of the tower $\tau_{N_0,\ell}$. Since $\mu_p$ gives the mass $p^\ell(1-p)^{N_0-\ell}$ to each rung of $\tau_{N_0,\ell}$, we can rewrite $h_{N_0,\ell}^g $ as $$h_{N_0,\ell}^g = \dfrac{1}{p^\ell(1-p)^{N_0-\ell}}\int_{\tau_{N_0,\ell}} g d\mu_p.$$ Therefore, the polynomial $P^g(p)$ works out to $$P^g(p)\ =\ \sum_{\ell=0}^{N_0} (N_0p - \ell ) \int_{\tau_{N_0,\ell}} g(x) d\mu_p(x).$$ For $x\in\tau_{N_0,\ell}$, $\ell$ is equal to the sum of the first $N_0$ digits $X_1, \dots, X_{N_0}$ in the binary expansion of $x$. This allows us to rewrite the last expression as $$\label{cov}
P^g(p)
\ =\ \sum_{\ell=0}^{N_0} \int_{\tau_{N_0,\ell}} g(x) \bigl(N_0p - \sum_{i=1}^{N_0}X_i \bigr) d\mu_p(x)
\ =\ -\ \mathrm{cov}_{\mu_p}\Bigl( g\ ;\ \sum_{i=1}^{N_0}X_i \Bigr).$$ As we are now interested in functions which are not necessarily ${\mathscr{P}}_{N_0}$-measurable, it is convenient to emphasize the $N_0$-dependence of $P^g$ by writing $P^g_{N_0}$. Any ${\mathscr{P}}_{N_0}$-measurable function $g$ can also be viewed as a ${\mathscr{P}}_{N_0+1}$-measurable function. Thus, for such a function, we see from \[cov\] that $P^g_{N_0} = P^g_{N_0+1}$.
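As a concrete check of the covariance formula, take $g = {\mathbf{1}_{[0,1/2[}}$: then $g(x) = 1 - X_1$, so $-\mathrm{cov}_{\mu_p}\bigl( g ; \sum_{i=1}^{N_0} X_i \bigr) = p(1-p)$ for every $N_0$, in agreement with $P^g(p) = p(1-p)$ obtained from $h_{1,0}^g = 1$ and $h_{1,1}^g = 0$. A short Monte Carlo sketch in Python (ours, for illustration only):

```python
# Sketch: Monte Carlo check of P^g(p) = -cov_{mu_p}(g; X_1 + ... + X_{N0})
# for g = 1_{[0,1/2[}, i.e. g(x) = 1 - X_1. Expected value: p(1-p).
import numpy as np

rng = np.random.default_rng(0)
p, N0, n_samples = 0.3, 5, 200_000

X = rng.random((n_samples, N0)) < p    # binary digits X_i, i.i.d. Bernoulli(p)
g = 1 - X[:, 0].astype(int)            # g(x) = 1 iff the first digit is 0
S = X.sum(axis=1)                      # sum of the first N0 digits

print(-np.cov(g, S)[0, 1], p * (1 - p))   # both close to 0.21
```

Changing $N_0$ leaves the estimate unchanged, in accordance with the identity $P^g_{N_0} = P^g_{N_0+1}$ noted above.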
For an arbitrary $g$, a natural question is the following: Suppose that $$\lim_{N_0\to\infty}\mathrm{cov}_{\mu_p}\Bigl( g\ ;\ \sum_{i=1}^{N_0}X_i \Bigr)$$ exists and is nonzero. Does the conclusion of Theorem \[thm\_general\] still hold?
A sufficient condition for the existence of the limit is that $$\sum_{N_0}\ \Bigl\| E_{\mu_p}[g | {\mathscr{P}}_{N_0+1}] - E_{\mu_p}[g | {\mathscr{P}}_{N_0}] \Bigr\|_2 < \infty,$$ which is satisfied for example by indicators of intervals.
It is easy to see that Theorem \[thm\_general\] cannot hold for an arbitrary measurable function $g$. Indeed, it is known that in any aperiodic ergodic dynamical system, one can find a function $g$ for which the invariance principle holds [@volny99]; for such a function, we clearly cannot have the type of behavior described in the present paper. We can also construct an explicit counterexample. Start from $g = {\mathbf{1}_{[0,1/2[}}$, which satisfies Theorem \[thm\_general\]. To each tower $\tau_{n, k}$ of a sufficiently large level $n$, we apply the following procedure: we modify the values taken by $g$ at the bottom and the top of the tower, setting the value to $1$ on the first $\epsilon\binom{n}{k}$ rungs and to $0$ on the last $\epsilon\binom{n}{k}$ rungs. We repeat this construction for a sequence $\epsilon_i$ with $\sum_i\epsilon_i$ small, and levels $n_i$ chosen such that $1/n_i$ is much smaller than $\epsilon_i$. Since the fluctuations giving rise to $\mathcal{M}_p$ for the original function $g$ are of order $\binom{n}{k}/n$ (see Theorem \[thm\_general\]), there cannot be convergence to $\mathcal{M}_p$ for the modified function.
Other transformations
---------------------
### Generalized Pascal-adic transformations
In [@mela04], Xavier Méla introduced a family of transformations generalizing the Pascal-adic transformation. They can be constructed following the same cutting and stacking procedure as described in Appendix \[constructionPA\], but in which each tower is split into $d$ sub-columns, the last $(d-1)$ sub-columns of the tower $\tau_{n,k}$ being sent to the first $(d-1)$ sub-columns of the tower $\tau_{n,k+1}$. (The standard Pascal-adic transformation corresponds to the particular case $d=2$.) Numerical simulations (see Figure \[generalized\]) indicate that convergence results similar to those proved in the present paper also hold in this more general context. The limiting curves also seem to be self-affine, but defined with $d$ affinities instead of just 2. Interestingly, as $d\to\infty$ these curves seem to converge to a smooth function.
![Limiting curves observed for the generalized Pascal-adic transformations: $d=3$ (left), $d=8$ (middle) and $d=128$ (right).[]{data-label="generalized"}](padic-gen-d3.eps "fig:"){height="3cm" width="4cm"} ![Limiting curves observed for the generalized Pascal-adic transformations: $d=3$ (left), $d=8$ (middle) and $d=128$ (right).[]{data-label="generalized"}](padic-gen-d8.eps "fig:"){height="3cm" width="4cm"} ![Limiting curves observed for the generalized Pascal-adic transformations: $d=3$ (left), $d=8$ (middle) and $d=128$ (right).[]{data-label="generalized"}](padic-gen-d128.eps "fig:"){height="3cm" width="4cm"}
### Rotations and rank-one transformation
Many questions remain open concerning the Pascal-adic transformation. Its mixing properties are totally unknown, but it is conjectured that it is at least weakly mixing. Related to this important question, we can ask whether such behaviour of ergodic sums can be observed in systems defined by an irrational rotation on the circle.
One of the few properties that have been established for the Pascal-adic transformation is loose Bernoullicity (see [@Janvresse-delaRue04]). In the class of zero-entropy systems, to which the Pascal-adic belongs, loose Bernoullicity is the weakest in a chain of ergodic properties:
> rank one $\Longrightarrow$ finite rank $\Longrightarrow$ local rank one $\Longrightarrow$ loosely-Bernoulli.
Méla and Petersen ask in [@mela-petersen04] whether those stronger properties are satisfied by the Pascal-adic transformation. The conjecture is that it is not even of local rank one. Connected to this problem, it would be interesting to study the behaviour of the corrections to the ergodic theorem in general rank-one systems. Can phenomena such as those established in this work appear in the rank-one category?
Construction of the Pascal-adic transformation {#constructionPA}
==============================================
\[construction\]
(Figure: the first steps of the cutting-and-stacking construction of the Pascal-adic transformation.)
Here we recall the construction of the Pascal-adic transformation, following the cutting and stacking model presented in [@mela-petersen04]. Our space $X$ is the interval $[0,1[$, equipped with its Borel $\sigma$-algebra $\mathscr{A}$.
We start by dividing $X$ into two subintervals $P_0:= [0,1/2[$ and $P_1:= [1/2,1[$. Let ${\mathscr{P}}_1:=\{P_0,P_1\}$ be the partition obtained at this first step. We also consider $P_0$ and $P_1$ as “degenerate” Rokhlin towers of height 1, respectively denoted by $\tau_{1,0}$ and $\tau_{1,1}$.
At the second step, $P_0$ and $P_1$ are divided into two equal subintervals. The transformation $T$ is defined on the right piece of $P_0$ by sending it linearly onto the left piece of $P_1$. This gives a collection of 3 disjoint Rokhlin towers denoted by $\tau_{2,0}, \tau_{2,1}$, $\tau_{2,2}$, with respective heights 1, 2, 1 (see Fig. \[construction\]).
After step $n$, we get $(n+1)$ towers $\tau_{n,0},\ldots,\tau_{n,n}$, with respective heights $\binom{n}{0},\ldots,\binom{n}{n}$, the width of $\tau_{n,k}$ being $2^{-n}$. At this step, the transformation $T$ is defined on the whole space except the top of each stack. We then divide each stack into two sub-columns of equal width, and define $T$ on the right piece of the top of $\tau_{n,k}$ by sending it linearly onto the left piece of the base of $\tau_{n,k+1}$. Repeating this construction recursively, $T$ is finally defined on all of $X$ except at countably many points. It is well known that the ergodic invariant measures for this transformation are given by the one-parameter family $(\mu_p)_{0<p<1}$, where $\mu_p$ is the image of the Bernoulli measure $B(1-p,p)$ on $\{0,1\}^{\mathbb{N}}$ under the map $(x_k)\longmapsto\sum_{k\ge1}x_k/2^k$. This measure $\mu_p$ can be interpreted as follows: for each $x\in[0,1[$ and $n\ge1$, denote by $k_n(x)$ the unique index such that $x\in\tau_{n,k_n(x)}$. Under $\mu_p$, conditioned on $k_1(x),k_2(x),\ldots,k_n(x)$, the value of $k_{n+1}(x)$ is either $k_n(x)$ (with probability $1-p$) or $k_{n}(x)+1$ (with probability $p$). Thus, the law of large numbers gives $$\dfrac{k_n(x)}{n}\ \mathop{\longrightarrow}_{n\rightarrow\infty}\ p\qquad\mu_p-\text{a.e.}$$
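This description is straightforward to simulate. A minimal Python sketch (ours, added for illustration) samples the sequence of tower indices $k_n(x)$ under $\mu_p$ and exhibits the almost-sure convergence $k_n(x)/n \to p$:

```python
# Sketch: under mu_p, k_{n+1}(x) = k_n(x) + 1 with probability p,
# and k_{n+1}(x) = k_n(x) otherwise; hence k_n(x)/n -> p a.e.
import random

def tower_indices(n, p, seed=0):
    """Sample k_1(x), ..., k_n(x) for a mu_p-typical point x."""
    rnd = random.Random(seed)
    k, path = 0, []
    for _ in range(n):
        k += rnd.random() < p   # increment the index with probability p
        path.append(k)
    return path

n = 100_000
print(tower_indices(n, p=0.3)[-1] / n)   # close to 0.3
```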
Links with Conway's recursive sequence {#app_conway}
====================================
Let us recall that Conway's recursive sequence is defined by $C(1)=C(2)=1$, and for $j\ge3$ $$C(j)\ =\ C(C(j-1))+C(j-C(j-1)).$$ One easily checks that the differences $$\Delta C(j)\ :=\ C(j)-C(j-1)$$ are always 0 or 1. Following Mallows [@Mallows1991], it is convenient to introduce the sequence $$D(j)\ :=\ 2\Delta C(j)-1\ \in\ \{-1,1\}.$$ It is shown in [@Mallows1992] that the sequence $D(j)$, $j\ge3$, is obtained by the concatenation of the $B_{n,k}$'s after substituting $a$ by 1 and $b$ by $-1$: $$(D(j))_{j\ge3}\ =\ B_{1,0}B_{1,1}B_{2,0}B_{2,1}B_{2,2}B_{3,0}\dots$$ This is a consequence of the following remarkable property: recall that $B_{n,k}$ is the concatenation of $B_{n-1,k-1}$ and $B_{n-1,k}$; in fact $B_{n,k}$ can also be obtained by the following alternative procedure. Cut $B_{n-1,k-1}$ after each $a$, and $B_{n-1,k}$ after each $b$. Interleaving the resulting pieces produces $B_{n,k}$. For example, $B_{4,2}=aababb$ is the concatenation of $a|a|b$ and $ab|b|$ and can be written as $a|ab|a|b|b$.
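Both the concatenation identity and the interleaving example can be checked mechanically; the following Python sketch (ours, not part of the original text) computes $C(j)$, the signs $D(j)$, and the words $B_{n,k}$:

```python
# Sketch: check that (D(j))_{j>=3} is the concatenation of the words B_{n,k}
# (with a -> 1, b -> -1), and that B_{4,2} = aababb.
from functools import lru_cache

@lru_cache(maxsize=None)
def C(j):
    """Conway's sequence: C(1) = C(2) = 1, C(j) = C(C(j-1)) + C(j - C(j-1))."""
    if j <= 2:
        return 1
    return C(C(j - 1)) + C(j - C(j - 1))

def D(j):
    return 2 * (C(j) - C(j - 1)) - 1

@lru_cache(maxsize=None)
def B(n, k):
    """Pascal words: B(n,k) is the concatenation of B(n-1,k-1) and B(n-1,k)."""
    if n == 1:
        return 'ab'[k]
    if k == 0:
        return 'a'
    if k == n:
        return 'b'
    return B(n - 1, k - 1) + B(n - 1, k)

word = ''.join(B(n, k) for n in range(1, 8) for k in range(n + 1))
signs = [1 if c == 'a' else -1 for c in word]
assert signs == [D(j) for j in range(3, 3 + len(signs))]
assert B(4, 2) == 'aababb'
print('checked the first', len(signs), 'terms of D')
```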
The graph of the function with increments $D(j)$ consists of a series of humps corresponding to intervals $2^n+1\le j\le 2^{n+1}$. Most works dealing with the Conway sequence study the asymptotic shape of these humps, which is given by an explicit smooth function. Each of these humps corresponds to the graph associated to the concatenation of all the words $B_{n,k}$ on a given line. It turns out that at this scale, the fractal structure is lost. The results presented in our paper can thus also be interpreted as an analysis of the small fluctuations in the convergence of these humps.
T. Bedford, *The box dimension of self-affine graphs and repellers*, Nonlinearity **2** (1989), 53–71.
K. Falconer, *Fractal geometry*, John Wiley & Sons, 1990.
É. Janvresse and T. de la Rue, *The Pascal adic transformation is loosely Bernoulli*, Ann. IHP Probab. Stat. **40** (2004), no. 2, 133–139.
T. Kubo, R. Vakil, *On Conway’s recursive sequence*, Discrete Math. **152** (1996), no. 1-3, 225–252.
C. L. Mallows, *Conway’s challenge sequence*, Amer. Math. Monthly **98** (1991), no. 1, 5–20.
C. L. Mallows, Amer. Math. Monthly **98** (1992), no. 7, 563–564.
X. Méla, *A class of nonstationary adic transformations*, Ann. IHP Probab. Stat., to appear.

X. Méla and K. Petersen, *Dynamical properties of the Pascal adic transformation*, Ergod. Th. & Dynam. Sys., to appear.
T. Takagi, *A Simple Example of the Continuous Function without Derivative*, Proc. Phys. Math. Japan **1** (1903), 176–177.
A. M. Vershik and A. N. Livshits, *Adic models of ergodic transformations, spectral theory, substitutions, and related topics*, Representation theory and dynamical systems, Adv. Soviet Math., vol. 9, Amer. Math. Soc., Providence, RI, 1992, pp. 185–204.

A. M. Vershik, *A theorem on the Markov approximation in ergodic theory*, Journ. of Soviet Math. **28** (1985), 667–674.
D. Volný, *Invariance principles and Gaussian approximation for strictly stationary processes*, Trans. Amer. Math. Soc. **351** (1999), no. 8, 3351–3371.
2018 Missions at Immanuel
We need your help!
Concert Update November 4, 2018
Green Congregation
This is a national movement with local groups that fosters
MEALS THAT MATTER
Immanuel has been blessed by serving Meals that Matter, and we are a blessing to all who come. Every week we have had enough volunteers. This is a gift from God more than good planning, since we haven't been reminding people to come on their scheduled night! Each week we are getting better as a team. Our number of guests has been gradually increasing, especially at the end of March, when we had 45-47 guests. Between September 11, 2017 and March 26, 2018, Immanuel served 864 meals: 820 to adults and 43 to children. On April 16th we served 52 guests. Christ the King and Grace Lutheran have higher numbers on Tuesday and Wednesday, but they are in higher-population-density, low-income areas. The Shalom Center, through the neighborhood Meals that Matter, has served over 4,600 meals in its first 7 months of operation. Souper Bowl Sunday brought in an amazing $164.00; $100.00 was put into the Meals That Matter fund and the rest will go to the Shalom Center.
Mission Minute
Do you remember the flooding last summer in the Burlington and Fox River area? Some of the homes affected are still in need of repair. The Wisconsin Conference Volunteers in Mission sponsored a work week in the Salem area from April 15-21. The volunteers stayed at the Salem UMC and took showers at the Salem Elementary School. Circuit churches provided meals. Tasks included removing and replacing drywall, painting, cleaning outside areas, and a number of other tasks that involved minor remodeling. This is another example of UMCOR at work.
InGathering
Natural disasters have been occurring frequently in recent years! The Midwest Mission Distribution Center had bare shelves after the hurricanes last fall. Our help is needed to replenish the stock. We are going to be making only Cleaning (Flood) Buckets and Personal Dignity (Hygiene) Kits for In-Gathering at Annual Conference this June. No school bags, layettes or sewing kits were requested, but if you have individual items for these kinds of kits, they can be sent.
Open House for the Homeless, January 1, 7 AM to 7 PM!
Psychostimulants target dopamine neurons, acting principally at the dopamine transporter. Ensuing actions in the postsynaptic striatal circuitry mediate both the acute behavioral response to the drug as well as longer-term neuroplastic changes associated with addiction. Extensive work has focused on dopamine release and on the actions of dopamine, but only recently with the advent of optogenetics have dopamine neuron synaptic actions become directly accessible to study. Optogenetics enables a functional connectome approach to determining dopamine neuron synaptic actions. In this approach, channelrhodopsin 2 is expressed comprehensively in an identified population of neurons and the sum total of the connections of the population of neurons onto identified target neurons measured to determine a functional connectivity index comprising the incidence of connections and their strength. Determining the functional connectome of striatal spiny projection neurons - the principal postsynaptic targets of dopamine neurons - has provided quantitative measures of functional connectivity, going beyond anatomical data to direct measures of synaptic strength. While spiny projection neurons appear to signal solely via GABA, dopamine neurons signal via dopamine as well as glutamate and GABA, and differentially target striatal neurons in different striatal regions. Dopamine neurons make robust glutamatergic synaptic connections with cholinergic interneurons in the ventral striatum, specifically in the medial shell of the nucleus accumbens, that appear to be critically involved in mediating the acute behavioral response to amphetamine. A single low dose of amphetamine, which engenders motoric stimulation, significantly and selectively attenuates these glutamatergic connections. In contrast, a high amphetamine dose, which engenders stereotypic behavior, broadly attenuates dopaminergic connections throughout the striatum. This motivates the hypothesis that amphetamine-induced plasticity of specific populations of dopamine neuron synapses is critical for driving the striatal circuitry towards the addicted state. To address this hypothesis, the three specific aims are to: (1) Determine the dopamine neuron functional connectome in the striatum, mapping the synaptic actions of dopamine neurons across the striatum. (2) Determine how amphetamine modulates the dopamine neuron functional connectome following a single exposure, examining regional heterogeneity, and the timing and persistence of the modulation. (3) Determine the role of the most affected connections, as crucial mediators of amphetamine circuit and behavioral effects. Expressing amphetamine-induced actions in functional connectome terms enables a systematic synapses-to-circuits-to-behavior approach to elucidating the synaptic substrate of amphetamine action and the inception of addiction.
Introduction {#s1}
============
During angiogenesis, endothelial cells can produce proteases such as matrix metalloproteinases (MMPs), and can increase their ability to migrate and proliferate [@pone.0013986-Risau1]. This process depends on the activity of several growth factors, such as vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF) and platelet-derived growth factor (PDGF)-BB [@pone.0013986-Schweigerer1], [@pone.0013986-Leung1], [@pone.0013986-Darland1].
Erythropoietin (EPO), a glycoprotein hormone that stimulates erythropoiesis, also instigates the secretion of angiogenic factors [@pone.0013986-Anagnostou1], [@pone.0013986-Carlini1]. Ribatti and colleagues demonstrated that EPO induced a pro-angiogenic phenotype in cultured endothelial cells, and stimulated angiogenesis in vivo [@pone.0013986-Ribatti1], [@pone.0013986-Ribatti2]. It also stimulated angiogenesis indirectly in ischemic tissue by increasing the expression of VEGF and by recruiting endothelial progenitor cells [@pone.0013986-Aicher1], [@pone.0013986-Nakano1]. In rats, EPO administration mobilized bone marrow-derived progenitor cells [@pone.0013986-Hamed1] and increased the myocardial expression of VEGF [@pone.0013986-Westenbrink1]. Wang *et al.* demonstrated that EPO can promote angiogenesis by stimulating VEGF secretion from neural progenitor cells and VEGF-receptor expression in cerebral endothelial cells [@pone.0013986-Wang1].
Other non-hematopoietic effects of EPO include cytoprotection of vascular endothelial cells [@pone.0013986-Chong1], [@pone.0013986-Li1] and anti-apoptotic actions in vascular smooth muscle cells and endothelial cells [@pone.0013986-Somervaille1] such as prevention of mitochondrial release of cytochrome c, suppression of caspase activity, and upregulation of the activity of the protein kinase B (PKB) signaling pathway and the expression of the antiapoptotic protein Bcl-xl [@pone.0013986-Chong2], [@pone.0013986-Wen1].
Autologous fat transplantation is a common and well-established technique for soft tissue augmentation and for filling soft tissue defects due to trauma or aging [@pone.0013986-Locke1]. Emerging evidence suggests that early and adequate vascularization of the fat graft is essential for its take and viability [@pone.0013986-Yamaguchi1], [@pone.0013986-Yi1]. However, the relatively high resorption rate of the fat graft reduces the efficacy of this technique, because the volume of vascularized grafts continues to decline as a result of increased fat cell death after transplantation [@pone.0013986-Nishimura1]. Although angiogenic factors [@pone.0013986-Rophael1], [@pone.0013986-Kuramochi1], and VEGF gene therapy, have been used individually to stimulate angiogenesis in fat grafts in order to enhance fat cell survival and viability [@pone.0013986-Yi1], [@pone.0013986-Lei1], [@pone.0013986-Lu1], the clinical outcome has been disappointing, because a single angiogenic factor may be inadequate to stimulate angiogenesis [@pone.0013986-Henry1]. Therefore, reducing the resorption rate of transplanted fat remains a clinical challenge.
In light of all these findings, we hypothesized that treatment of fat grafts with EPO would (a) stimulate the release of several angiogenic factors and promote angiogenesis, and (b) prevent apoptosis in fat grafts. Using this working hypothesis, we initiated a study whose aims were (a) to evaluate and compare the effects of VEGF and EPO on fat cell survival and angiogenesis in human transplanted fat tissue, and (b) to investigate the long-term survival of grafted fat cells after EPO treatment in immunologically-compromised nude mice.
Materials and Methods {#s2}
=====================
Isolation and preparation of human fat tissue {#s2a}
---------------------------------------------
Fat was harvested from the thigh of a 40-year-old woman undergoing suction-assisted lipectomy under general anesthesia. In order to decrease bleeding during fat aspiration, and to relieve pain after the procedure, the areas for aspiration were injected with a local anesthesia solution containing lidocaine (0.5%) and adrenaline (1:1,000,000) before the beginning of the procedure. The fat was aspirated using a 14-gauge three-hole blunt cannula, and then processed under sterile conditions for subsequent grafting into nude mice within two hours of its collection, according to previously published protocols [@pone.0013986-Ullmann1], [@pone.0013986-Kurita1]. The participant gave her written informed consent, and the study was reviewed and approved by the institutional review board of the Rambam Health Care Campus.
Study design {#s2b}
------------
Two different animal studies were conducted, and the use of animals and all the experimental procedures were reviewed and approved by the Technion Animal Care and Use Committee. The first study comprised 30 seven-week-old female CD-1 nude mice (Harlan, Jerusalem, Israel), which were housed in cages in a room with an artificial 12-h light/dark cycle at a constant temperature range (24±2°C) and relative humidity (55±10%). The mice were acclimated for one week prior to the study, and fed a standard chow and water ad libitum. The 30 mice were randomly divided into three equal groups according to the treatment of the aspirated human fat after its injection. Group 1 mice were injected with 1 ml of human fat that was treated with sterile phosphate-buffered saline (PBS) (control group). Group 2 mice were injected with 1 ml of human fat that was treated with 1000 IU/kg EPO (low-dose EPO group). Group 3 mice were injected with 1 ml of human fat that was treated with 5000 IU/kg EPO (high-dose EPO group). The fat was injected subcutaneously into the scalp using a 14G needle while the animals were manually restrained. Immediately following fat transplantation, the fat grafts were injected with 100 µl PBS (control group), or with either 20 IU EPO/100 µl PBS (low-dose EPO group) or 100 IU EPO/100 µl PBS (high-dose EPO group) every three days for 18 days, making a total of 6 equal injections of each treatment per fat graft. EPO was purchased as an injection ampoule (ARANESP®, Amgen AG, Zug, Switzerland) which contained 150 µg/ml (18,000 IU) of EPO.
The second animal experiment used 20 seven-week-old female CD-1 nude mice, and differed from the first experiment in that the fat grafts were treated with PBS or VEGF (2 μg/ml) (Sigma Aldrich, MO, USA) after its injection into the 20 mice. Briefly, 100 µl PBS or 200 ng VEGF/100 µl PBS were injected every three days for 18 days in an identical manner to that of the PBS- or EPO-treated mice in the first experiment. The PBS-treated mice in the second experiment were used as a second control group. Post-operative analgesics and antibiotics were not administered to the mice in the two experiments.
Follow-up and data collection {#s2c}
-----------------------------
The duration of the study period of each experiment was 15 weeks after fat transplantation. On the day of fat injection, 18 days after the fat injection, and at the end of the study period, each mouse was weighed, and a tail vein blood sample was collected for determining the red blood cell, leukocyte, and platelet counts, and the plasma hemoglobin, VEGF, and EPO concentrations. VEGF and EPO concentrations were determined in the plasma as well as in homogenates of samples of the fat grafts using commercial enzyme-linked immunosorbent assays (Quantikine® VEGF Immunoassay Kit and Quantikine® IVD® Erythropoietin Kit, R&D Systems, MN, USA) in accordance with the manufacturer's instructions.
After 15 weeks, all mice were humanely killed, and the fat grafts were carefully dissected from their scalps ([Figure 1C](#pone-0013986-g001){ref-type="fig"}). Each fat graft was weighed, and the volume of the fat graft was measured using the liquid overflow method [@pone.0013986-Ayhan1]. After weight and volume determination, each fat graft was divided into two equal portions. One portion was stored at −80°C and used to determine its EPO concentrations, VEGF content, extent of apoptosis, and the expression levels of its angiogenic factors, namely bFGF, insulin growth factor-1 (IGF-1), PDGF-BB, VEGF receptor-2 (VEGFR-2), EPO receptor (EPOR), and MMP-2, the survival factor PKB and phosphorylated PKB, and pro-apoptotic factors, namely caspase 3 and cytochrome c. The second portion was placed in 4% formalin and used for histological examination and for determination of macrophage infiltration, microvascular density (MVD), VEGFR-2 and EPOR localization.
{#pone-0013986-g001}
Histology and immunohistochemistry {#s2d}
----------------------------------
Histological slides of the formalin-maintained samples were prepared, and then stained with hematoxylin and eosin using standard procedures. Immunohistochemistry was performed using rabbit monoclonal antibodies against tissue CD31, VEGFR-2 and EPOR, and goat polyclonal IgG against VEGF (R&D Systems, Minneapolis MN, USA), and CD68 (Dako, Glostrup, Denmark). The paraffin-embedded fat graft sections were incubated with the antibodies overnight at room temperature followed by incubation with appropriate secondary antibodies [@pone.0013986-Li2]. Upon completion of the incubations, the specimens were counterstained with hematoxylin. Mouse IgG was used as a negative control. The slides were examined under a light microscope for (a) the extent of integration, as evidenced by the extent of organization of intact and nucleated fat cells, (b) the extent of fibrosis, as evidenced by the amount of collagen and elastic fibrils, (c) the presence of cysts and vacuoles, and (d) the intensity of the inflammatory response, as evidenced by the extent of lymphocyte and macrophage infiltration. Each criterion was graded on a scale of 0 to 5 where 0 = absence, 1 = minimal presence, 2 = minimal to moderate presence, 3 = moderate presence, 4 = moderate to extensive presence, and 5 = extensive presence.
Quantification of macrophage infiltration in the fat grafts was estimated by counting the number of CD68-positive cells in five fields per fat graft in all fat graft sections. Microvascular density (MVD) in fat grafts was determined in five regions of interest where the CD31 antibody signal was the most intense in each section in all of the fat graft sections. The number of macrophages and blood vessels in each region was counted under a light microscope at 400× magnification. The assessment in each fat graft was made by calculating the mean result in two different sections per fat graft and five different fields of view per section. All evaluations were made by SH, DK, DE, and YU, who were blind to the treatment of the mice.
Determination of the extent of apoptosis in the fat grafts {#s2e}
----------------------------------------------------------
The extent of apoptosis in all fat grafts was assessed by the terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay using a commercial kit (ApopTag® Plus Fluorescein Kit, CHEMICON, CA, USA), in accordance with the manufacturer's instructions. Duplicate determinations were made for each sample, and were processed by fluorescence-activated cell sorting (FACS) (Becton Dickinson, NJ, USA). Data were analyzed using the Macintosh CELLQuest software program (Becton Dickinson).
In vitro tube formation of HUVECs on matrigel {#s2f}
---------------------------------------------
The *in vitro* angiogenic potential of VEGF and EPO was assessed by their ability to form tubes of endothelial cells on matrigel. To this end, human umbilical vein endothelial cells (HUVECs) were first cultured on fibronectin-coated 6-well plates in endothelial basal medium-2 (EBM-2) (PromoCell, USA) and then treated with 0, 20 or 100 IU/ml EPO for 48 hours before their use in the assay. In a second experiment, HUVECs were exposed to 0, 100 IU/ml EPO and 200 ng/ml VEGF for 48 hours in EBM-2 that contained or lacked 0.25 mg/ml bevacizumab (Avastin®, Genentech, San Francisco, CA, USA), a humanized monoclonal antibody that antagonizes the actions of VEGF. After 48 hours, the untreated HUVECs, the VEGF- and EPO-treated HUVECs, and the VEGF+bevacizumab- and EPO+bevacizumab-treated HUVECs were detached gently by 0.5% trypsin/EDTA, and then suspended in EBM-2. At the same time, frozen matrigel (Sigma Aldrich, St Louis MO, USA) was thawed, and spread onto 96-well plates (40µl/well) at room temperature for 30 minutes to allow solidification. The detached untreated HUVECs, VEGF- and EPO-treated HUVECs, and VEGF+bevacizumab- and EPO+bevacizumab-treated HUVECs (5×10^4^ cells/150µl EBM-2/well) were placed on the matrigel surface, and then incubated at 37°C for 24 hours in EBM-2. After plating on the matrigel, the VEGF- and EPO-treated HUVECs and VEGF+bevacizumab- and EPO+bevacizumab-treated HUVECs were treated a second time with identical concentrations of EPO, VEGF, and bevacizumab, respectively. After 24 hours, the non-integrated cells were removed by washing, and tube formation on the matrigel was assessed under a light microscope at 10× magnification. The tubular structures were graded semiquantitatively by evaluating the presence and stages of tube formation on a scale of 0 to 5: 0 = well separated individual cells, 1 = cells had begun to migrate and align themselves, 2 = visible capillary tubes and no sprouting, 3 = visible sprouting of new capillary tubes, 4 = early formation of closed polygons, 5 = development of complex mesh-like structures. All evaluations were assessed by SH, DE and DK, who were blind to the treatments. Four random high-power fields in each sample were examined from three independent experiments. The results from each examiner were then pooled in order to calculate the mean value for each criterion for each sample in each group.
HUVEC proliferation {#s2g}
-------------------
To investigate EPO-induced angiogenesis through mechanisms involving pro-angiogenic factors, we measured the proliferation of EPO-treated HUVECs in the presence of various pro-angiogenic factor inhibitors. To this end, HUVECs (2×10^5^ cells/well) were cultured on fibronectin-coated 12-well plates in EBM-2. The cultured HUVECs were treated with or without 100 IU/ml EPO for 48 hours, and then exposed for 3 hours to (a) 0.25 mg/ml bevacizumab, (b) 100 nM PD173074, an inhibitor of bFGF (Calbiochem, San Diego, CA), (c) 20 µM tyrphostin AG 1296, a selective inhibitor of PDGF (Sigma), (d) a combination of bevacizumab, PD173074 and tyrphostin, or (e) 100 nM wortmannin, a phosphatidylinositol 3-kinase (PI 3-K) inhibitor (Sigma). Upon completion of the experiment, the cells were washed with PBS and then incubated with \[^3^H\]-thymidine (NEN, Boston, MA, USA) (1 µCi/ml medium) for 5 h at 37°C. Thereafter, 0.5 ml cold 10% trichloroacetic acid (TCA) was added to each well for another 30 min at 4°C. To extract the \[^3^H\]-thymidine-labeled DNA, 0.5 ml 1N NaOH was added to each well for 10 min at room temperature, and then 0.5 ml 1N HCl was added and mixed well. Samples (0.5 ml) of the mixture were taken from each well and added to scintillation vials for the measurement of \[^3^H\]-thymidine incorporation into DNA (cpm/mg protein). Duplicate cell counts were averaged over 3 experiments. Data were expressed as a percentage of control.
Western analysis {#s2h}
----------------
The expression levels of the angiogenic factors, bFGF, IGF-1, PDGF-BB, VEGFR-2, EPOR and MMP-2, the cell survival factor PKB and phosphorylated PKB, and the pro-apoptotic factors caspase 3 and cytochrome c were determined in homogenates of the harvested fat grafts using western blot analysis. Homogenates of samples from the fat grafts were lysed in RIPA buffer (R&D Systems). A 40-µg aliquot of each lysate was loaded onto SDS-PAGE, and then transferred to nitrocellulose membranes. Membranes were then incubated with monoclonal antibodies against bFGF, IGF-1, PDGF-BB, MMP-2, PKB, phosphoPKB, caspase 3, and cytochrome c (all purchased from Santa Cruz Biotechnology, Santa Cruz, CA, USA) and with monoclonal antibodies against VEGFR-2 and EPOR (R&D systems), followed by a second incubation with a horseradish peroxidase (HRP)-conjugated IgG secondary antibody (Santa Cruz Biotechnology). An antibody against β-actin (Santa Cruz) was used to normalize protein loading. The resultant bands were quantified by densitometry.
Statistical analysis of the data {#s2i}
--------------------------------
The data for each study parameter from the PBS-, VEGF- and EPO-treated fat grafts in each treatment group were pooled, and the results are presented as mean ± standard deviation (SD). The data were normally distributed according to the Kolmogorov-Smirnov test. The data from the first experiment were analyzed by ANOVA, and the data from the second experiment were analyzed by Student's *t* test, using a computerized statistical software program (Prism version 5.0, GraphPad Software Inc, CA, USA). Differences were considered statistically significant when *P*≤0.05. Kappa values for intra-examiner repeatability of the blinded evaluations of the histological analysis, MVD, and tube formation on matrigel were 0.94, 0.89, and 0.93, respectively.
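For readers who wish to reproduce this kind of analysis, the following Python sketch (illustrative only; the group names, sizes and values are placeholders, not the study data) runs the normality check, the one-way ANOVA for the three-group experiment, and the Student's t test for the two-group experiment:

```python
# Sketch of the statistical pipeline described above, on placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.3, 0.1, 10)    # hypothetical graft weights (g), n = 10
low_epo = rng.normal(0.5, 0.2, 10)
high_epo = rng.normal(0.6, 0.2, 10)

# Kolmogorov-Smirnov test of normality (on standardized values)
z = (control - control.mean()) / control.std(ddof=1)
print(stats.kstest(z, 'norm'))

# First experiment: one-way ANOVA across the three treatment groups
print(stats.f_oneway(control, low_epo, high_epo))

# Second experiment: Student's t test, PBS- vs VEGF-treated grafts
pbs = rng.normal(0.32, 0.2, 10)
vegf = rng.normal(0.35, 0.2, 10)
print(stats.ttest_ind(pbs, vegf))
```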
Results {#s2j}
-------
All mice in all of the treatment groups of both experiments completed the 15-week study period. They appeared to be healthy, and there was no evidence of cachexia during the entire study period. There were no significant changes in red blood cell, leukocyte, and platelet counts, or in plasma hemoglobin and EPO concentrations, in the mice that had either PBS-treated or low-dose EPO-treated fat grafts ([Table 1](#pone-0013986-t001){ref-type="table"}). The red blood cell, leukocyte, and platelet counts and plasma EPO concentrations, but not the plasma hemoglobin concentrations, were significantly increased in the mice with high-dose EPO-treated fat grafts ([Table 1](#pone-0013986-t001){ref-type="table"}). Eighteen days after transplantation, plasma VEGF concentrations were significantly increased in both groups of mice with EPO-treated fat grafts. At the end of the 15-week study period, the plasma VEGF concentrations in the two groups of mice with EPO-treated grafts were not significantly different from baseline values, or from those in mice with PBS-treated fat grafts. EPO concentrations in the PBS- and EPO-treated grafts were not different from each other at each of the three time points ([Table 1](#pone-0013986-t001){ref-type="table"}).
10.1371/journal.pone.0013986.t001
###### Effect of EPO treatment on body weight, hematology, and plasma and tissue EPO concentrations in the three experimental groups.
{#pone-0013986-t001-1}
| Group | Control (n = 10) | Low-dose EPO (n = 10) | High-dose EPO (n = 10) |
|:--|:--|:--|:--|
| **Initial mice weight** ***(g)*** | 26.7±1.1 | 25.9±1.1 | 26.2±1.0 |
| After EPO treatment | 27.3±1.1 | 27.9±1.1 | 28.6±1.2 |
| At week 15 | 28.3±1.1 | 28.8±1.1 | 29.0±1.2 |
| **Initial RBC count** ***(10^6^/mm^3^)*** | 7.8±0.9 | 8.0±1.0 | 7.9±1.2 |
| After EPO treatment | 7.9±0.9 | 8.0±1.1 | 8.9±1.0\* |
| At week 15 | 7.8±0.9 | 7.9±1.0 | 8.1±1.2 |
| **Initial leukocyte count** ***(10^6^/mm^3^)*** | 10.8±1.2 | 11.1±1.1 | 10.9±1.2 |
| After EPO treatment | 11.2±1.2 | 11.4±1.1 | 13.1±1.3\* |
| At week 15 | 11.0±1.1 | 10.8±1.1 | 11.4±1.2 |
| **Initial platelet count** ***(10^3^/L)*** | 593±54 | 609±63 | 603±72 |
| After EPO treatment | 579±58 | 621±68 | 741±81\*\* |
| At week 15 | 593±54 | 601±57 | 597±64 |
| **Initial hemoglobin conc.** ***(g/dl)*** | 14.4±1.3 | 15.1±1.4 | 15.5±1.4 |
| After EPO treatment | 14.8±1.3 | 15.7±1.4 | 16.4±1.6 |
| At week 15 | 14.8±1.2 | 15.1±1.6 | 14.9±1.5 |
| **Initial plasma EPO conc.** ***(mU/mL)*** | 14.3±1.9 | 14.6±1.3 | 14.2±1.7 |
| After EPO treatment | 13.7±1.4 | 17.6±3.3\* | 46.7±8.7\*\*\* |
| At week 15 | 14.3±1.7 | 14.2±1.3 | 14.1±1.3 |
| **Initial plasma VEGF conc.** ***(pg/mL)*** | 38.6±3.9 | 34.8±4.6 | 39.2±4.8 |
| After EPO treatment | 37.1±3.8 | 51.5±6.6\* | 87±9.2\*\*\* |
| At week 15 | 38.0±3.3 | 36.6±4.9 | 37.4±5.3 |
| **Tissue EPO conc.** ***(mU/mL)*** | 0.3±0.1 | 0.3±0.1 | 0.3±0.1 |
**[Footnotes]{.ul}**
Values are presented as mean ± SD; n = number of mice; conc. = concentrations; RBC = red blood cells; EPO = erythropoietin; VEGF = vascular endothelial growth factor. \*P\<0.05, \*\*P\<0.01, \*\*\*P\<0.001 for the difference between either the low-dose- or the high-dose-treated EPO grafts and the PBS-treated grafts.
Fat graft weights and volumes {#s2k}
-----------------------------
A well-defined subcutaneous lump was observed on the scalp of each mouse at the end of the 15-week study period ([Figure 1](#pone-0013986-g001){ref-type="fig"}). The weights and volumes of the EPO-treated grafts were higher than those of the PBS-treated grafts ([Table 2](#pone-0013986-t002){ref-type="table"}). The weights and volumes of the PBS-treated fat grafts in the first experiment were not different from those of the PBS- and VEGF-treated grafts in the second experiment ([Table 2](#pone-0013986-t002){ref-type="table"}).
10.1371/journal.pone.0013986.t002
###### Effect of EPO treatment on fat graft weight and volume in all treatment groups in the two experiments.
{#pone-0013986-t002-2}
| | First experiment: PBS | First experiment: low-dose EPO | First experiment: high-dose EPO | Second experiment: PBS | Second experiment: VEGF |
|:--|:--|:--|:--|:--|:--|
| **Weight** ***(g)*** | 0.3±0.1 | 0.5±0.2\*\* | 0.6±0.2\*\*\* | 0.32±0.2 | 0.35±0.2 |
| **Volume** ***(ml)*** | 0.3±0.1 | 0.4±0.1\*\* | 0.6±0.1\*\*\* | 0.35±0.1 | 0.36±0.2 |
**[Footnotes]{.ul}**
Values are presented as mean ± SD.
n = number of mice.
EPO = erythropoietin.
VEGF = vascular endothelial growth factor.
\*\*P\<0.01, \*\*\*P\<0.001, for the difference between either the low-dose- or the high-dose EPO-treated fat grafts and the PBS-treated grafts.
Histological evaluation {#s2l}
-----------------------
The histological criteria of the PBS-treated fat grafts in the first experiment were not different from those in the second experiment. The extent of integration of the fat graft was higher in the high-dose EPO-treated grafts than in the low-dose EPO- and PBS-treated grafts ([Figure 2A](#pone-0013986-g002){ref-type="fig"}), and the extent of cyst formation and fibrosis was lower in the high-dose EPO-treated grafts than in the low-dose EPO- and PBS-treated grafts ([Table 3](#pone-0013986-t003){ref-type="table"}). The extent of integration, cyst formation, and fibrosis in the VEGF-treated grafts was not different from that in the PBS-treated grafts ([Table 3](#pone-0013986-t003){ref-type="table"}).
{#pone-0013986-g002}
10.1371/journal.pone.0013986.t003
###### Histological analysis of the dissected fat grafts in all treatment groups in the two experiments.
{#pone-0013986-t003-3}
| | First experiment: PBS | First experiment: low-dose EPO | First experiment: high-dose EPO | Second experiment: PBS | Second experiment: VEGF |
|:--|:--|:--|:--|:--|:--|
| **Integration** | 3.3±1.0 | 4.3±0.8 | 4.6±0.7\* | 3.6±0.7 | 3.2±0.9 |
| **Fibrosis** | 2.5±0.9 | 2.1±0.6 | 1.5±0.7\* | 2.6±0.5 | 2.9±0.7 |
| **Cyst/Vacuoles** | 2.8±0.9 | 2.0±0.9 | 1.7±0.7\* | 2.9±1.0 | 3.3±1.0 |
| **Inflammation** | 2.9±1.1 | 1.7±0.5\* | 1.3±0.6\*\* | 3.2±1.4 | 4.0±1.2\* |
\*P\<0.05, \*\*P\<0.01 for the difference between either the low-dose- or the high-dose EPO-treated fat grafts and the PBS-treated grafts.
The effect of EPO on inflammatory response and MVD in the fat grafts {#s2m}
--------------------------------------------------------------------
The severity of the inflammatory response, as evidenced by CD68-positive cell infiltration, was lower in both the low-dose and the high-dose EPO-treated fat grafts than in the PBS-treated fat grafts. The severity of the inflammatory response in the high-dose EPO-treated grafts was significantly lower than that observed in the low-dose EPO-treated grafts ([Figure 2B and 2D](#pone-0013986-g002){ref-type="fig"} left). However, the intensity of the inflammatory response in the VEGF-treated fat grafts was significantly higher than that observed in the PBS-treated fat grafts ([Table 3](#pone-0013986-t003){ref-type="table"}).
The MVDs observed in both the high-dose and low-dose EPO-treated fat grafts were significantly higher than the MVDs of the PBS-treated fat grafts, and the effect of EPO on MVD was dose-dependent. Avascular areas, ectatic vessels with edema and perivascular hemorrhage, and a marked reduction in capillary ramification were observed in the PBS-treated fat grafts. In the EPO-treated fat grafts, there were well-vascularized areas with increased expression of CD31, and numerous endothelial islets ([Figure 2C and 2D](#pone-0013986-g002){ref-type="fig"} middle). The extent of MVD was negatively correlated with the extent of macrophage infiltration in the fat grafts ([Figure 2D](#pone-0013986-g002){ref-type="fig"} right).
The effect of EPO on VEGF content and expression levels of angiogenic factors and PKB in the fat grafts {#s2n}
-------------------------------------------------------------------------------------------------------
The VEGF contents in the low-dose and high-dose EPO-treated fat grafts were significantly higher than the VEGF contents in the PBS-treated fat grafts. The VEGF content in the high-dose EPO-treated grafts was significantly higher than that observed in the low-dose EPO-treated grafts ([Figure 3A](#pone-0013986-g003){ref-type="fig"} upper panel and 3C left). EPO induced a dose-dependent increase in the expression levels of bFGF, IGF-1, PDGF-BB, MMP-2, PKB, and phosphoPKB ([Figure 3B](#pone-0013986-g003){ref-type="fig"}). Furthermore, EPO increased both tissue VEGFR-2 and EPOR expression in a dose-dependent manner, as evidenced by immunohistochemical localization of both factors ([Figure 3A](#pone-0013986-g003){ref-type="fig"} middle and lower panels respectively) and by western blot analysis ([Figure 3C](#pone-0013986-g003){ref-type="fig"} middle and left respectively). The VEGF content and the mean expression levels of both VEGFR-2 and EPOR were positively correlated with MVD ([Figure 3D](#pone-0013986-g003){ref-type="fig"}).
{#pone-0013986-g003}
The effect of EPO on the extent of apoptosis in the fat grafts {#s2o}
--------------------------------------------------------------
The extent of apoptosis in the PBS-treated fat grafts was greater than that in the low-dose and high-dose EPO-treated fat grafts. The extent of apoptosis in the high-dose EPO-treated fat grafts was significantly lower than that in the low-dose EPO-treated grafts ([Figure 4A](#pone-0013986-g004){ref-type="fig"}). EPO caused a dose-dependent decrease in the expression levels of caspase 3 and cytochrome c ([Figure 4B](#pone-0013986-g004){ref-type="fig"}).
{#pone-0013986-g004}
The effect of VEGF on MVD and extent of apoptosis in the fat grafts {#s2p}
-------------------------------------------------------------------
The extent of apoptosis and the MVD observed in the PBS-treated fat grafts were the same in both the first and the second experiment. The MVD and the VEGF content in the VEGF-treated fat grafts were higher than, but not statistically different from, those in the PBS-treated fat grafts ([Figures 5A and 5B](#pone-0013986-g005){ref-type="fig"}). There was unorganized vessel formation and perivascular hemorrhage in the VEGF-treated fat grafts. The extent of apoptosis in the VEGF-treated fat grafts was greater than that observed in the PBS-treated fat grafts ([Figure 5C](#pone-0013986-g005){ref-type="fig"}). There were no statistical differences in the expression levels of caspase 3 and cytochrome c in the PBS-treated and in the VEGF-treated fat grafts ([Figure 5D](#pone-0013986-g005){ref-type="fig"}).
{#pone-0013986-g005}
The effect of EPO on endothelial cell proliferation and tube formation on matrigel {#s2q}
----------------------------------------------------------------------------------
VEGF significantly increased HUVEC tube formation and EPO increased HUVEC tube formation in a dose-dependent manner ([Figure 6A](#pone-0013986-g006){ref-type="fig"}). Tube formation was substantially reduced in VEGF + bevacizumab-treated HUVECs, but not in the EPO + bevacizumab-treated HUVECs ([Figures 6B and 6C](#pone-0013986-g006){ref-type="fig"}). The VEGF inhibitor, bFGF inhibitor and PDGF inhibitor each reduced HUVEC proliferation significantly, whereas either a combination of the 3 inhibitors together or wortmannin alone abolished HUVEC proliferation. EPO normalized HUVEC proliferation in the presence of any of the inhibitors, but had no effect on HUVEC proliferation in the presence of a combination of the 3 inhibitors together or in the presence of wortmannin alone ([Figure 6D](#pone-0013986-g006){ref-type="fig"}).
![Effect of EPO on HUVEC proliferation and tube formation on matrigel.\
HUVECs were treated with either 20 IU/ml or 100 IU/ml EPO, or with either 100 IU/ml EPO or 200 ng/ml VEGF in the absence or presence of 0.25 mg/ml bevacizumab, for 48 hours after plating the cells on matrigel. The extent of HUVEC tube formation on matrigel was assessed after 24 hours under a light microscope at 10× magnification. The tubular structures were graded semiquantitatively on a scale of 0 to 5 by evaluation of the relative presence and stages of formation of tubes on the matrigel: 0 = well separated individual cells, 1 = cells had begun to migrate and align themselves, 2 = visible capillary tubes and no sprouting, 3 = visible sprouting of new capillary tubes, 4 = early formation of closed polygons, 5 = development of complex mesh-like structures. (A) Each bar represents the mean grade of tube formation ± SD in the matrigel. \**P*\<0.05, \*\**P*\<0.01 and \*\*\**P*\<0.001. (B) The white bars are the mean grade of tube formation ± SD in the matrigel of untreated HUVECs and of VEGF- and EPO-treated HUVECs. The black bars represent the mean grade of tube formation ± SD in the matrigel of untreated HUVECs and of VEGF- and EPO-treated HUVECs that were exposed to bevacizumab. \**P*\<0.05 and \*\*\**P*\<0.001, for the difference between HUVECs that were or were not exposed to bevacizumab. NS = not significantly different. (C) From top to bottom: representative micrographs of untreated HUVECs on matrigel, and of VEGF- and EPO-treated HUVECs after 24 hours of plating, with (+) or without (−) bevacizumab. (D) Cultured HUVECs were treated with or without 100 IU/ml EPO in the presence of either bevacizumab, PD173074, or tyrphostin, a combination of bevacizumab, PD173074 and tyrphostin, or in the presence of wortmannin. Proliferation of HUVECs was measured by the incorporation of \[^3^H\]-thymidine into DNA. Duplicate cell counts were averaged over 3 experiments and the data were expressed as a percentage of control. \**P*\<0.05, \*\**P*\<0.01 and \*\*\**P*\<0.001 for the difference between untreated or EPO-treated HUVECs that were exposed to bevacizumab, PD173074, tyrphostin or wortmannin. NS = not significantly different.](pone.0013986.g006){#pone-0013986-g006}
Discussion {#s3}
==========
The main finding of our study is that the decrease in the weight and volume of EPO-treated human fat grafts was smaller than the decrease that was observed in VEGF- and PBS-treated human fat grafts in immunologically-compromised nude mice. Treatment of the fat grafts with EPO (a) increased the expression levels of various angiogenic factors, induced the expression of cell survival factors such as PKB, and increased the extent of MVD, (b) increased fat cell survival, and (c) reduced the extent of the inflammatory response and fat cell apoptosis in a dose-dependent manner. The histological assessments of the harvested fat grafts showed that EPO treatment led to better fat tissue integration with fewer cysts, less fibrosis, and less inflammatory cell infiltration compared to the PBS- and VEGF-treated fat grafts. From these results, we concluded that treatment of fat grafts with EPO improves the fat graft's integration into the surrounding tissues and its long-term survival following fat transplantation. Our data suggest that EPO-induced angiogenesis in the transplanted graft occurs through the stimulation of a cluster of angiogenic factors that includes VEGF, bFGF, PDGF-BB, MMP-2, and IGF-1, factors which have been shown to increase the survival of grafted fat cells [@pone.0013986-Bluher1]. These findings are in agreement with those of Pallua and colleagues, who reported that VEGF, bFGF, PDGF-BB, and IGF-1 are all required for promoting fat cell viability and adipogenesis [@pone.0013986-Pallua1].
Vascularization is essential for graft survival. After autologous fat transplantation, the increased resorption and the inability of a fat graft to survive in the recipient are associated with reduced fat tissue vascularization and increased apoptosis of fat cells in the graft [@pone.0013986-Yamaguchi1], [@pone.0013986-Yi1], [@pone.0013986-Nishimura1]. VEGF is a known potent angiogenic factor that influences endothelial proliferation, migration and viability, and induces angiogenesis of adipose tissue after transplantation [@pone.0013986-Hausman1]. VEGF gene therapy in fat grafts can induce angiogenesis and enhance fat cell survival and viability [@pone.0013986-Yi1]. Recently, Lu and colleagues demonstrated in an elegant study that modified adipose-derived stem cells that overexpressed VEGF can enhance the survival and quality of transplanted fat tissue through an angiogenesis-dependent mechanism [@pone.0013986-Lu1]. In our study, we demonstrated that exogenous VEGF treatment of fat grafts had no effect on their weight and volume compared to the same parameters in PBS-treated fat grafts. We also observed that the MVD of the VEGF-treated fat grafts was only modestly higher than that of the PBS-treated fat grafts. In addition, we found that the extent of apoptosis in these VEGF-treated fat grafts was not different from that in the PBS-treated fat grafts, probably indicating that VEGF does not exert an anti-apoptotic effect in the fat grafts. The process of angiogenesis involves a harmonized interplay between various angiogenic factors, including growth factors such as bFGF, VEGF, and PDGF-BB, and proteases such as MMPs that digest constituents of the extracellular matrix that impede angiogenesis. These factors act synergistically to improve the survival of adipose tissue after fat transplantation [@pone.0013986-Rophael1], [@pone.0013986-Kuramochi1]. Therefore, the therapeutic use of one of these angiogenic factors, even one as potent as VEGF, may not be sufficient to promote angiogenesis for enhancing fat tissue viability and survival. Contrary to the findings of Yi [@pone.0013986-Yi1], Lei [@pone.0013986-Lei1], Lu [@pone.0013986-Lu1] and their colleagues, who found that either gene therapy with VEGF or adipose-derived stem cell therapy overexpressing VEGF can enhance fat cell viability and survival, we are of the opinion that the angiogenic actions of exogenous VEGF may not be adequate to elicit an appropriate angiogenic response in transplanted fat tissue. Indeed, the increased VEGFR-2 expression in the EPO-treated fat grafts observed in our study implies that EPO-induced endogenous VEGF secretion in the fat grafts might be more effective than exogenous VEGF administration.
Nakano and colleagues demonstrated that EPO treatment increases VEGF expression and promotes angiogenesis in peripheral ischemic tissues in mice [@pone.0013986-Nakano1]. We recently reported that topical EPO treatment induces VEGF secretion and angiogenesis in excisional wounds in diabetic rats [@pone.0013986-Hamed2]. In the current study, we showed that treatment of fat grafts with EPO increased the VEGF, bFGF, IGF-1, PDGF-BB and MMP-2 contents, as well as the MVD, in the fat grafts. We also observed that, similarly to VEGF, EPO increased HUVEC tube formation on matrigel, thereby confirming that EPO has angiogenic activity. Interestingly, bevacizumab abolished VEGF-induced tube formation, but not EPO-induced tube formation. This result suggests that the angiogenic activity of EPO on HUVECs is indirect, and could be mediated by the stimulation of other growth factors, such as bFGF and PDGF-BB, and of proteases such as MMP-2. Furthermore, EPO normalized *in vitro* HUVEC proliferation in the presence of a single growth factor inhibitor such as bevacizumab (a VEGF inhibitor), PD173074 (a bFGF inhibitor) or tyrphostin (a PDGF inhibitor), strengthening our claim that the use of one growth factor for fat tissue vascularization might not be adequate. On the other hand, EPO had no effect on the proliferation of HUVECs that were exposed to the above-mentioned inhibitors simultaneously, supporting the idea that the secretion of a cluster of growth factors accounts, at least in part, for the underlying mechanism of EPO action on fat graft vascularization. In addition, EPO increased PKB expression and activity in the fat grafts; PKB is critical for the signaling pathways of a broad spectrum of growth factors. Nevertheless, EPO had no effect on the proliferation of HUVECs that were exposed to wortmannin, which inhibits the phosphorylation of phosphatidylinositol 3-kinase (PI 3-K) and subsequently the phosphorylation of PKB, confirming that EPO stimulates the secretion of multiple growth factors, at least partially, through the PI 3-K/PKB cellular pathway. In light of our results, we attribute the beneficial effects of EPO on the improved viability of fat grafts partly to this action on angiogenesis in the fat grafts.
Nishimura and colleagues reported that the sustained volume loss of fat grafts, even in those that are vascularized, is due to fat cell apoptosis [@pone.0013986-Nishimura1]. We observed that the inflammatory response in the PBS-treated fat grafts was greater than the inflammatory response in the EPO-treated fat grafts. This increased inflammation can be attributed to leukocyte and macrophage infiltration, and to increased cytokine secretion in the fat graft. We found that the extent of apoptosis in the EPO-treated fat grafts decreased in a dose-dependent manner. These findings are not surprising since EPO has well-known anti-apoptotic properties, and is an anti-inflammatory cytokine [@pone.0013986-Li1]. Accordingly, we concluded that EPO decreases the rate of fat resorption by directly decreasing the extent of apoptosis in fat cells and/or indirectly by suppressing the inflammatory response that ensues after fat grafting.
Aspirated fat tissue that is used for autologous fat transplantation is devoid of blood microvessels, because these microvessels are destroyed during aspiration and removed during processing prior to injection. Therefore, the fat tissue that is injected into a recipient is essentially an ischemic fat cell mass. During the early period after transplantation, the fat graft exists under hypoxic and hyponutritional conditions. Should revascularization fail to be initiated in this early period, apoptosis ensues and results in late fat cell degeneration and fat resorption [@pone.0013986-Nishimura1]. In this study, we provoked angiogenesis at the time of fat injection, and for 18 days after transplantation, by repeated injections of EPO into the fat graft. In doing so, we aimed to induce angiogenesis in the transplanted fat tissue while it was still an ischemic cell mass, in order to promote fat cell survival by increasing the delivery of oxygen and nutrients. At the same time, we expected this treatment to protect the transplanted fat cell mass from early degeneration, and to delay and/or prevent fat cell apoptosis. As already noted, EPO exerts an anti-apoptotic action on fat cells, as evidenced by its reduction of the extent of DNA fragmentation, caspase-3 activity and cytochrome c content in the fat grafts. This anti-apoptotic action might result from a direct effect of EPO on fat cell apoptosis and/or from an indirect effect mediated by the promotion of fat graft vascularization. Nevertheless, treatment of the fat grafts with exogenous VEGF did not alter the extent of apoptosis in them, although it modestly increased their vascularization compared with that of control fat grafts.
The overall clinical experience with the use of growth factors and cytokines to reduce the rate of fat resorption by increasing fat graft vascularization has not been encouraging [@pone.0013986-Yi1], [@pone.0013986-Shoshani1], [@pone.0013986-Yoshimura1]. We found that treating a fat graft with EPO improves the survival of human fat grafts in nude mice, since EPO not only increased angiogenesis, but also reduced the inflammatory response and fat cell apoptosis. EPO can, however, cause use-limiting adverse effects: it may promote hypertension, retinopathy, neurotoxicity and thrombotic events when used in the repetitive and large doses that are required for adequate tissue protection. It may also increase the risk of tumor growth and spread through its effect on angiogenesis, as has been observed particularly in patients with chronic diseases and in patients with cancer [@pone.0013986-Jelkmann1]. Nevertheless, EPO has been used safely in humans for many years for treating anemia, and in trials that tested EPO as a neuroprotective/neuroregenerative agent [@pone.0013986-Siren1]. Its production by recombinant techniques and its availability in various effective recombinant forms make EPO an economical drug and a potential candidate for enhancing fat transplantation without considerably increasing the cost of the procedure.
In conclusion, the failure of exogenous VEGF to stimulate adequate angiogenesis and to prevent apoptosis in fat grafts strengthens our hypothesis that fat cell survival and viability depend on the combined action of a cluster of angiogenic factors, as well as on the prevention of fat cell apoptosis, and that fat graft survival can be improved by promoting both processes. We found that EPO treatment of transplanted fat acts through these two mechanisms, and can improve fat graft integration and long-term survival in immunologically compromised nude mice. Based on our results, we propose that EPO treatment can significantly improve the efficacy of human autologous fat transplantation for soft tissue filling and augmentation. To the best of our knowledge, this study is the first to demonstrate an effect of EPO on fat resorption. Further studies in animals and humans are now needed to validate our data.
**Competing Interests:** The authors have no issues relating to employment or consultancy. A Provisional Patent Application has been filed with the US Patent and Trademark Office (USPTO), with a filing date of 23 February 2010 and Serial No. 61/306,991. The authors confirm that this does not alter their adherence to all the PLoS ONE policies on sharing data and materials.
**Funding:** This work was supported by a grant to Dr. Saher Hamed from the Research & Development division of Remedor Biomed Ltd., Nazareth, Israel (\#514361310). Remedor Biomed Ltd. financed the study, has rights to the results of the research and has no objections with respect to the publication of the manuscript.
[^1]: Conceived and designed the experiments: SH DK YU. Performed the experiments: SH DE DK YU. Analyzed the data: SH DE DK AG YU. Contributed reagents/materials/analysis tools: SH LT AG YU. Wrote the paper: SH DE.