Personality traits reflect people's characteristic patterns of thoughts, feelings, and behaviors. Personality traits imply consistency and stability—someone who scores high on a specific trait like Extraversion is expected to be sociable in different situations and over time. Thus, trait psychology rests on the idea that people differ from one another in terms of where they stand on a set of basic trait dimensions that persist over time and across situations. The most widely used system of traits is called the Five-Factor Model. This system includes five broad traits that can be remembered with the acronym OCEAN: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Each of the major traits from the Big Five can be divided into facets to give a more fine-grained analysis of someone's personality. In addition, some trait theorists argue that there are other traits that cannot be completely captured by the Five-Factor Model. Critics of the trait concept argue that people do not act consistently from one situation to the next and that people are strongly influenced by situational forces. Thus, one major debate in the field concerns the relative power of people's traits versus the situations in which they find themselves as predictors of their behavior.

Learning Objectives
- List and describe the "Big Five" ("OCEAN") personality traits that comprise the Five-Factor Model of personality.
- Describe how the facet approach extends broad personality traits.
- Explain a critique of the personality-trait concept.
- Describe in what ways personality traits may be manifested in everyday behavior.
- Describe each of the Big Five personality traits, including the low and high ends of the dimension.
- Give examples of each of the Big Five personality traits, including both a low and a high example.
- Describe how traits and social learning combine to predict your social activities.
- Describe your theory of how personality traits get refined by social learning.

When we observe people around us, one of the first things that strikes us is how different people are from one another. Some people are very talkative while others are very quiet. Some are active whereas others are couch potatoes. Some worry a lot, others almost never seem anxious. Each time we use one of these words—words like "talkative," "quiet," "active," or "anxious"—to describe those around us, we are talking about a person's personality: the characteristic ways that people differ from one another. Personality psychologists try to describe and understand these differences.

Although there are many ways to think about the personalities that people have, Gordon Allport and other "personologists" claimed that we can best understand the differences between individuals by understanding their personality traits. Personality traits reflect basic dimensions on which people differ (Matthews, Deary, & Whiteman, 2003). According to trait psychologists, there are a limited number of these dimensions (dimensions like Extraversion, Conscientiousness, or Agreeableness), and each individual falls somewhere on each dimension, meaning that they could be low, medium, or high on any specific trait.

An important feature of personality traits is that they reflect continuous distributions rather than distinct personality types. This means that when personality psychologists talk about Introverts and Extraverts, they are not really talking about two distinct types of people who are completely and qualitatively different from one another.
Instead, they are talking about people who score relatively low or relatively high along a continuous distribution. In fact, when personality psychologists measure traits like Extraversion, they typically find that most people score somewhere in the middle, with smaller numbers showing more extreme levels. The figure below shows the distribution of Extraversion scores from a survey of thousands of people. As you can see, most people report being moderately, but not extremely, extraverted, with fewer people reporting very high or very low scores.

There are three criteria that characterize personality traits: (1) consistency, (2) stability, and (3) individual differences.
- To have a personality trait, individuals must be somewhat consistent across situations in their behaviors related to the trait. For example, if they are talkative at home, they tend also to be talkative at work.
- Individuals with a trait are also somewhat stable over time in behaviors related to the trait. If they are talkative, for example, at age 30, they will also tend to be talkative at age 40.
- People differ from one another on behaviors related to the trait. Using speech is not a personality trait and neither is walking on two feet—virtually all individuals do these activities, and there are almost no individual differences. But people differ on how frequently they talk and how active they are, and thus personality traits such as Talkativeness and Activity Level do exist.

A challenge of the trait approach was to discover the major traits on which all people differ. For many decades, scientists generated hundreds of new traits, so that it soon became difficult to keep track of them and make sense of them. For instance, one psychologist might focus on individual differences in "friendliness," whereas another might focus on the highly related concept of "sociability." Scientists began seeking ways to reduce the number of traits in some systematic way and to discover the basic traits that describe most of the differences between people.

The way that Gordon Allport and his colleague Henry Odbert approached this was to search the dictionary for all descriptors of personality (Allport & Odbert, 1936). Their approach was guided by the lexical hypothesis, which states that all important personality characteristics should be reflected in the language that we use to describe other people. Therefore, if we want to understand the fundamental ways in which people differ from one another, we can turn to the words that people use to describe one another. So if we want to know what words people use to describe one another, where should we look? Allport and Odbert looked in the most obvious place—the dictionary. Specifically, they took all the personality descriptors that they could find in the dictionary (they started with almost 18,000 words but quickly reduced that list to a more manageable number) and then used statistical techniques to determine which words "went together." In other words, if everyone who said that they were "friendly" also said that they were "sociable," then this might mean that personality psychologists would only need a single trait to capture individual differences in these characteristics. Statistical techniques were used to determine whether a small number of dimensions might underlie all of the thousands of words we use to describe people.

The Five-Factor Model of Personality

Research that used the lexical approach showed that many of the personality descriptors found in the dictionary do indeed overlap.
In other words, many of the words that we use to describe people are synonyms. Thus, if we want to know what a person is like, we do not necessarily need to ask how sociable they are, how friendly they are, and how gregarious they are. Instead, because sociable people tend to be friendly and gregarious, we can summarize this personality dimension with a single term. Someone who is sociable, friendly, and gregarious would typically be described as an "Extravert." Once we know she is an extravert, we can assume that she is sociable, friendly, and gregarious.

Statistical methods (specifically, a technique called factor analysis) helped to determine whether a small number of dimensions underlie the diversity of words that people like Allport and Odbert identified; a small simulation at the end of this section illustrates the idea. The most widely accepted system to emerge from this approach was "The Big Five" or "Five-Factor Model" (Goldberg, 1990; McCrae & John, 1992; McCrae & Costa, 1987). The Big Five comprises five major traits, shown in Figure 3.2.2 below. A way to remember these five is with the acronym OCEAN (O is for Openness; C is for Conscientiousness; E is for Extraversion; A is for Agreeableness; N is for Neuroticism). Figure 3.2.3 provides descriptions of people who would score high and low on each of these traits.

Scores on the Big Five traits are mostly independent. That means that a person's standing on one trait tells very little about their standing on the other traits of the Big Five. For example, a person can be extremely high in Extraversion and be either high or low on Neuroticism. Similarly, a person can be low in Agreeableness and be either high or low in Conscientiousness. Thus, in the Five-Factor Model, you need five scores to describe most of an individual's personality.

In the Appendix to this module, we present a short scale to assess the Five-Factor Model of personality (Donnellan, Oswald, Baird, & Lucas, 2006). You can take this test to see where you stand in terms of your Big Five scores. John Johnson has also created a helpful website with personality scales that can be used and taken by the general public. After seeing your scores, you can judge for yourself whether you think such tests are valid.

Traits are important and interesting because they describe stable patterns of behavior that persist for long periods of time (Caspi, Roberts, & Shiner, 2005). Importantly, these stable patterns can have broad-ranging consequences for many areas of our life (Roberts, Kuncel, Shiner, Caspi, & Goldberg, 2007). For instance, think about the factors that determine success in college. If you were asked to guess what factors predict good grades in college, you might guess something like intelligence. This guess would be correct, but we know much more about who is likely to do well. Specifically, personality researchers have also found that personality traits like Conscientiousness play an important role in college and beyond, probably because highly conscientious individuals study hard, get their work done on time, and are less distracted by nonessential activities that take time away from schoolwork. In addition, highly conscientious people are often healthier than people low in Conscientiousness because they are more likely to maintain healthy diets, to exercise, and to follow basic safety procedures like wearing seat belts or bicycle helmets. Over the long term, this consistent pattern of behaviors can add up to meaningful differences in health and longevity.
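As a brief aside before turning to applied settings, the factor-analytic step described earlier can be illustrated with a toy simulation. Everything here is invented for illustration: the adjective names, the two latent dimensions, and the numbers. This is a minimal sketch of how factor analysis groups correlated descriptors, not a reproduction of any published lexical analysis.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Two hypothetical latent dimensions driving six adjective self-ratings.
sociability = rng.normal(size=n)   # an "Extraversion-like" dimension
diligence = rng.normal(size=n)     # a "Conscientiousness-like" dimension

# Each rating = its latent dimension plus rating noise, so "friendly,"
# "sociable," and "gregarious" correlate with one another, as do the
# three diligence adjectives.
def noise():
    return rng.normal(scale=0.5, size=n)

ratings = np.column_stack([
    sociability + noise(),  # "friendly"
    sociability + noise(),  # "sociable"
    sociability + noise(),  # "gregarious"
    diligence + noise(),    # "organized"
    diligence + noise(),    # "hardworking"
    diligence + noise(),    # "careful"
])

# Factor analysis should recover the two dimensions: after varimax rotation,
# the loadings show the first three adjectives "going together" on one
# factor and the last three on the other.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(ratings)
print(np.round(fa.components_, 2))
```

Running this prints two rows of loadings, one per recovered factor, with large values concentrated on one block of adjectives in each row: the "only need a single trait" logic from the lexical work, in miniature.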
Personality traits, then, are not just a useful way to describe people you know; they actually help psychologists predict how good a worker someone will be, how long he or she will live, and the types of jobs and activities the person will enjoy. Thus, there is growing interest in personality psychology among psychologists who work in applied settings, such as health psychology or organizational psychology.

Facets of Traits (Subtraits)

So how does it feel to be told that your entire personality can be summarized with scores on just five personality traits? Do you think these five scores capture the complexity of your own and others' characteristic patterns of thoughts, feelings, and behaviors? Most people would probably say no, pointing to some exception in their behavior that goes against the general pattern that others might see. For instance, you may know people who are warm and friendly and find it easy to talk with strangers at a party yet are terrified if they have to perform in front of others or speak to large groups of people. The fact that there are different ways of being extraverted or conscientious shows that there is value in considering lower-level units of personality that are more specific than the Big Five traits. These more specific, lower-level units of personality are often called facets.

To give you a sense of what these narrow units are like, Figure 3.2.4 shows facets for each of the Big Five traits. It is important to note that although personality researchers generally agree about the value of the Big Five traits as a way to summarize one's personality, there is no widely accepted list of facets that should be studied. The list seen here, based on work by researchers Paul Costa and Jeff McCrae, thus reflects just one possible list among many. It should, however, give you an idea of some of the facets making up each factor of the Five-Factor Model.

Facets can be useful because they provide more specific descriptions of what a person is like. For instance, if we take our friend who loves parties but hates public speaking, we might say that this person scores high on the "gregariousness" and "warmth" facets of Extraversion, while scoring lower on facets such as "assertiveness" or "excitement-seeking." This precise profile of facet scores not only provides a better description, it might also allow us to better predict how this friend will do in a variety of different jobs (for example, jobs that require public speaking versus jobs that involve one-on-one interactions with customers; Paunonen & Ashton, 2001). Because different facets within a broad, global trait like Extraversion tend to go together (those who are gregarious are often but not always assertive), the broad trait often provides a useful summary of what a person is like. But when we really want to know a person, facet scores add to our knowledge in important ways.

Other Traits Beyond the Five-Factor Model

Despite the popularity of the Five-Factor Model, it is certainly not the only model that exists. Some suggest that there are more than five major traits, or perhaps even fewer. For example, in one of the first comprehensive models to be proposed, Hans Eysenck suggested that Extraversion and Neuroticism are most important. Eysenck believed that by combining people's standing on these two major traits, we could account for many of the differences in personality that we see in people (Eysenck, 1981).
So, for instance, a neurotic introvert would be shy and nervous, while a stable introvert might avoid social situations and prefer solitary activities, but he may do so with a calm, steady attitude and little anxiety or emotion. Interestingly, Eysenck attempted to link these two major dimensions to underlying differences in people's biology. For instance, he suggested that introverts experienced too much sensory stimulation and arousal, which made them want to seek out quiet settings and less stimulating environments. More recently, Jeffrey Gray suggested that these two broad traits are related to fundamental reward and avoidance systems in the brain—extraverts might be motivated to seek reward and thus exhibit assertive, reward-seeking behavior, whereas people high in Neuroticism might be motivated to avoid punishment and thus may experience anxiety as a result of their heightened awareness of the threats in the world around them (Gray, 1981; this model has since been updated, see Gray & McNaughton, 2000). These early theories have led to a burgeoning interest in identifying the physiological underpinnings of the individual differences that we observe.

Another revision of the Big Five is the HEXACO model of traits (Ashton & Lee, 2007). This model is similar to the Big Five, but it posits slightly different versions of some of the traits, and its proponents argue that one important class of individual differences was omitted from the Five-Factor Model. The HEXACO adds Honesty-Humility as a sixth dimension of personality. People high in this trait are sincere, fair, and modest, whereas those low in the trait are manipulative, narcissistic, and self-centered. Thus, trait theorists agree that personality traits are important in understanding behavior, but there are still debates about the exact number and composition of the traits that are most important.

There are other important traits that are not included in comprehensive models like the Big Five. Although the five factors capture much that is important about personality, researchers have suggested other traits that capture interesting aspects of our behavior. In Figure 3.2.5 below we present just a few, out of hundreds, of the other traits that have been studied by personologists. Not all of these traits are currently popular with scientists, yet each of them has experienced popularity in the past. Although the Five-Factor Model has been the target of more rigorous research than some of the traits listed there, these additional personality characteristics give a good idea of the wide range of behaviors and attitudes that traits can cover.

The Person-Situation Debate and Alternatives to the Trait Perspective

The ideas described in this module should probably seem familiar, if not obvious, to you. When asked to think about what our friends, enemies, family members, and colleagues are like, some of the first things that come to mind are their personality characteristics. We might think about how warm and helpful our first teacher was, how irresponsible and careless our brother is, or how demanding and insulting our first boss was. Each of these descriptors reflects a personality trait, and most of us generally think that the descriptions we use for individuals accurately reflect their "characteristic pattern of thoughts, feelings, and behaviors," or, in other words, their personality. But what if this idea were wrong? What if our belief in personality traits were an illusion and people are not consistent from one situation to the next?
This was a possibility that shook the foundation of personality psychology in the late 1960s, when Walter Mischel published a book called Personality and Assessment (1968). In this book, Mischel suggested that if one looks closely at people's behavior across many different situations, the consistency is really not that impressive. For example, children who cheat on tests at school may steadfastly follow all rules when playing games and may never tell a lie to their parents. In other words, he suggested, there may not be any general trait of honesty that links these seemingly related behaviors. Furthermore, Mischel suggested that observers may believe that broad personality traits like honesty exist, when in fact, this belief is an illusion. The debate that followed the publication of Mischel's book was called the person-situation debate because it pitted the power of personality against the power of situational factors as determinants of the behavior that people exhibit.

Because of the findings that Mischel emphasized, many psychologists focused on an alternative to the trait perspective. Instead of studying broad, context-free descriptions, like the trait terms we've described so far, Mischel thought that psychologists should focus on people's distinctive reactions to specific situations. For instance, although there may not be a broad and general trait of honesty, some children may be especially likely to cheat on a test when the risk of being caught is low and the rewards for cheating are high. Others might be motivated by the sense of risk involved in cheating and may do so even when the rewards are not very high. Thus, the behavior itself results from the child's unique evaluation of the risks and rewards present at that moment, along with her evaluation of her abilities and values. Because of this, the same child might act very differently in different situations. Thus, Mischel thought that specific behaviors were driven by the interaction between very specific, psychologically meaningful features of the situation in which people found themselves, the person's unique way of perceiving that situation, and his or her abilities for dealing with it. Mischel and others argued that it was these social-cognitive processes that underlie people's reactions to specific situations that provide some consistency when situational features are the same. If so, then studying these narrower processes might be more fruitful than cataloging and measuring broad, context-free traits like Extraversion or Neuroticism.

In the years after the publication of Mischel's (1968) book, debates raged about whether personality truly exists, and if so, how it should be studied. And, as is often the case, it turns out that a more moderate middle ground than what the situationists proposed could be reached. It is certainly true, as Mischel pointed out, that a person's behavior in one specific situation is not a good guide to how that person will behave in a very different specific situation. Someone who is extremely talkative at one specific party may sometimes be reticent to speak up during class and may even act like a wallflower at a different party. But this does not mean that personality does not exist, nor does it mean that people's behavior is completely determined by situational factors. Indeed, research conducted after the person-situation debate shows that on average, the effect of the "situation" is about as large as that of personality traits.
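A small simulation can illustrate how both halves of this middle ground fit together; the next paragraph develops the same point in words. All quantities here are invented: each person gets a true trait score, each occasion adds situational noise of roughly the same size as the trait effect (consistent with the finding just mentioned), and we compare how well a single occasion versus an average over many occasions recovers the trait.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_occasions = 1000, 50

trait = rng.normal(size=n_people)                      # true trait level per person
situation = rng.normal(size=(n_people, n_occasions))   # situational push, same size as trait effect

behavior = trait[:, None] + situation                  # observed behavior on each occasion

r_single = np.corrcoef(trait, behavior[:, 0])[0, 1]          # one occasion only
r_aggregate = np.corrcoef(trait, behavior.mean(axis=1))[0, 1]  # average of all occasions
print(f"one occasion: r = {r_single:.2f}; "
      f"average of {n_occasions} occasions: r = {r_aggregate:.2f}")
# Typical output: roughly r = 0.70 for a single occasion versus r = 0.99 for
# the aggregate: single situations are noisy, but averaged behavior tracks
# the underlying trait closely.
```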
However, it is also true that if psychologists assess a broad range of behaviors across many different situations, there are general tendencies that emerge. Personality traits give an indication about how people will act on average, but frequently they are not so good at predicting how a person will act in a specific situation at a certain moment in time. Thus, to best capture broad traits, one must assess aggregate behaviors, averaged over time and across many different types of situations. Most modern personality researchers agree that there is a place for broad personality traits and for the narrower units such as those studied by Walter Mischel.

The Mini-IPIP Scale (Donnellan, Oswald, Baird, & Lucas, 2006)

Instructions: Below are phrases describing people's behaviors. Please use the rating scale below to describe how accurately each statement describes you. Describe yourself as you generally are now, not as you wish to be in the future. Describe yourself as you honestly see yourself, in relation to other people you know of the same sex as you are, and roughly your same age. Please read each statement carefully, and put a number from 1 to 5 next to it to describe how accurately the statement describes you.

1 = Very inaccurate
2 = Moderately inaccurate
3 = Neither inaccurate nor accurate
4 = Moderately accurate
5 = Very accurate

- _______ Am the life of the party (E)
- _______ Sympathize with others' feelings (A)
- _______ Get chores done right away (C)
- _______ Have frequent mood swings (N)
- _______ Have a vivid imagination (O)
- _______ Don't talk a lot (E)
- _______ Am not interested in other people's problems (A)
- _______ Often forget to put things back in their proper place (C)
- _______ Am relaxed most of the time (N)
- _______ Am not interested in abstract ideas (O)
- _______ Talk to a lot of different people at parties (E)
- _______ Feel others' emotions (A)
- _______ Like order (C)
- _______ Get upset easily (N)
- _______ Have difficulty understanding abstract ideas (O)
- _______ Keep in the background (E)
- _______ Am not really interested in others (A)
- _______ Make a mess of things (C)
- _______ Seldom feel blue (N)
- _______ Do not have a good imagination (O)

Scoring: The first thing you must do is reverse the items that are worded in the opposite direction. To do this, subtract the number you put for that item from 6. So if you put a 4, for instance, it will become a 2. Cross out the score you put when you took the scale, and write in the new number (your original score subtracted from 6).

Items to be reversed in this way: 6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20

Next, you need to add up the scores for each of the five OCEAN scales (including the reversed numbers where relevant). Each OCEAN score will be the sum of four items. Place the sum next to each scale below.

__________ Openness: Add items 5, 10, 15, 20
__________ Conscientiousness: Add items 3, 8, 13, 18
__________ Extraversion: Add items 1, 6, 11, 16
__________ Agreeableness: Add items 2, 7, 12, 17
__________ Neuroticism: Add items 4, 9, 14, 19

Compare your scores to the norms below to see where you stand on each scale. If you are low on a trait, it means you are the opposite of the trait label. For example, low on Extraversion is Introversion, low on Openness is Conventional, and low on Agreeableness is Assertive.
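For readers who would rather hand the arithmetic to a computer, here is a minimal sketch of the scoring procedure just described. The function name is my own invention; the reverse-keyed items, scale groupings, and norm cut points are taken directly from this module.

```python
# Scoring sketch for the Mini-IPIP as described above. `answers` maps item
# number (1-20) to the respondent's 1-5 rating.
REVERSED = {6, 7, 8, 9, 10, 15, 16, 17, 18, 19, 20}
SCALES = {
    "Openness": (5, 10, 15, 20),
    "Conscientiousness": (3, 8, 13, 18),
    "Extraversion": (1, 6, 11, 16),
    "Agreeableness": (2, 7, 12, 17),
    "Neuroticism": (4, 9, 14, 19),
}
BANDS = [  # (low, high, label) from the norms listed below
    (19, 20, "Extremely High"), (17, 18, "Very High"), (14, 16, "High"),
    (11, 13, "Neither high nor low"), (8, 10, "Low"), (6, 7, "Very low"),
    (4, 5, "Extremely low"),
]

def score_mini_ipip(answers: dict[int, int]) -> dict[str, tuple[int, str]]:
    """Reverse-key flagged items (6 minus the rating), sum each scale, band it."""
    keyed = {i: (6 - r if i in REVERSED else r) for i, r in answers.items()}
    result = {}
    for scale, items in SCALES.items():
        total = sum(keyed[i] for i in items)
        label = next(lbl for lo, hi, lbl in BANDS if lo <= total <= hi)
        result[scale] = (total, label)
    return result

# A respondent answering 3 ("neither inaccurate nor accurate") to every item
# scores 12 on all five scales, landing in the middle band.
print(score_mini_ipip({i: 3 for i in range(1, 21)}))
```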
Norms:
- 19–20 Extremely High
- 17–18 Very High
- 14–16 High
- 11–13 Neither high nor low; in the middle
- 8–10 Low
- 6–7 Very low
- 4–5 Extremely low

Outside Resources:
- Video 1: Gabriela Cintron's 5 Factors of Personality (OCEAN Song). This is a student-made video which cleverly describes, through song, common behavioral characteristics of the Big 5 personality traits. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
- Video 2: Michael Harris' Personality Traits: The Big 5 and More. This is a student-made video that looks at characteristics of the OCEAN traits through a series of funny vignettes. It also presents the person vs. situation debate. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
- Video 3: David M. Cole's Grouchy with a Chance of Stomping. This is a student-made video that makes a very important point about the relationship between personality traits and behavior using a handy weather analogy. It was one of the winning entries in the 2016-17 Noba + Psi Chi Student Video Award.
- Web: International Personality Item Pool
- Web: John Johnson personality scales
- Web: Personality trait systems compared
- Web: Sam Gosling website

Discussion Questions:
- Consider different combinations of the Big Five, such as O (Low), C (High), E (Low), A (High), and N (Low). What would this person be like? Do you know anyone who is like this? Can you select politicians, movie stars, and other famous people and rate them on the Big Five?
- How do you think learning and inherited personality traits get combined in adult personality?
- Can you think of instances where people do not act consistently—where their personality traits are not good predictors of their behavior?
- Has your personality changed over time, and in what ways?
- Can you think of a personality trait not mentioned in this module that describes how people differ from one another?
- When do extremes in personality traits become harmful, and when are they unusual but productive of good outcomes?

Vocabulary:
- Agreeableness - A personality trait that reflects a person's tendency to be compassionate, cooperative, warm, and caring to others. People low in Agreeableness tend to be rude, hostile, and to pursue their own interests over those of others.
- Conscientiousness - A personality trait that reflects a person's tendency to be careful, organized, hardworking, and to follow rules.
- Continuous distributions - Characteristics can go from low to high, with all different intermediate values possible. One does not simply have the trait or not have it, but can possess varying amounts of it.
- Extraversion - A personality trait that reflects a person's tendency to be sociable, outgoing, active, and assertive.
- Facets - Broad personality traits can be broken down into narrower facets or aspects of the trait. For example, Extraversion has several facets, such as sociability, dominance, risk-taking, and so forth.
- Factor analysis - A statistical technique for grouping similar things together according to how highly they are associated.
- Five-Factor Model - (also called the Big Five) The Five-Factor Model is a widely accepted model of personality traits. Advocates of the model believe that much of the variability in people's thoughts, feelings, and behaviors can be summarized with five broad traits. These five traits are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
- HEXACO model - The HEXACO model is an alternative to the Five-Factor Model.
The HEXACO model includes six traits, five of which are variants of the traits included in the Big Five (Emotionality [E], Extraversion [X], Agreeableness [A], Conscientiousness [C], and Openness [O]). The sixth factor, Honesty-Humility [H], is unique to this model.
- Independent - Two characteristics or traits are separate from one another; a person can be high on one and low on the other, or vice versa. Some correlated traits are relatively independent in that although there is a tendency for a person high on one to also be high on the other, this is not always the case.
- Lexical hypothesis - The lexical hypothesis is the idea that the most important differences between people will be encoded in the language that we use to describe people. Therefore, if we want to know which personality traits are most important, we can look to the language that people use to describe themselves and others.
- Neuroticism - A personality trait that reflects the tendency to be interpersonally sensitive and the tendency to experience negative emotions like anxiety, fear, sadness, and anger.
- Openness to Experience - A personality trait that reflects a person's tendency to seek out and to appreciate new things, including thoughts, feelings, values, and experiences.
- Personality - Enduring predispositions that characterize a person, such as styles of thought, feelings, and behavior.
- Personality traits - Enduring dispositions in behavior that show differences across individuals, and which tend to characterize the person across varying types of situations.
- Person-situation debate - The person-situation debate is a historical debate about the relative power of personality traits as compared to situational influences on behavior. The situationist critique, which started the person-situation debate, suggested that people overestimate the extent to which personality traits are consistent across situations.

References:
- Allport, G. W., & Odbert, H. S. (1936). Trait-names: A psycho-lexical study. Psychological Monographs, 47(1, Whole No. 211).
- Ashton, M. C., & Lee, K. (2007). Empirical, theoretical, and practical advantages of the HEXACO model of personality structure. Personality and Social Psychology Review, 11, 150–166.
- Caspi, A., Roberts, B. W., & Shiner, R. L. (2005). Personality development: Stability and change. Annual Review of Psychology, 56, 453–484.
- Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18, 192–203.
- Eysenck, H. J. (1981). A model for personality. New York: Springer-Verlag.
- Goldberg, L. R. (1990). An alternative "description of personality": The Big-Five factor structure. Journal of Personality and Social Psychology, 59, 1216–1229.
- Gray, J. A. (1981). A critique of Eysenck's theory of personality. In H. J. Eysenck (Ed.), A model for personality (pp. 246–276). New York: Springer-Verlag.
- Gray, J. A., & McNaughton, N. (2000). The neuropsychology of anxiety: An enquiry into the functions of the septo-hippocampal system (2nd ed.). Oxford: Oxford University Press.
- Matthews, G., Deary, I. J., & Whiteman, M. C. (2003). Personality traits. Cambridge, UK: Cambridge University Press.
- McCrae, R. R., & Costa, P. T. (1987). Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52, 81–90.
- McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60, 175–215.
- Mischel, W. (1968). Personality and assessment. New York: John Wiley.
- Paunonen, S. V., & Ashton, M. C. (2001). Big Five factors and facets and the prediction of behavior. Journal of Personality and Social Psychology, 81, 524–539.
- Roberts, B. W., Kuncel, N. R., Shiner, R. L., Caspi, A., & Goldberg, L. R. (2007). The power of personality: The comparative validity of personality traits, socioeconomic status, and cognitive ability for predicting important life outcomes. Perspectives on Psychological Science, 2, 313–345.
Today is the first day of Lent, a Christian tradition in which followers of the faith give up a highly beloved possession or habit for 40 days in honor of all that Jesus gave up during the 40 days he spent wandering the desert. Ash Wednesday, the first day of the season, is for reflecting upon the sacrifices made for our sake. It is a somber holiday, named for the ash that celebrants wear in honor of that sacrifice.

If you spot a sparkle of purple glitter in the dark mark on someone's brow this Ash Wednesday, do not think that they're making light of this tradition. They are not wearing that sign for attention or for levity. They're wearing it to make it clear that they believe that Christianity includes the LGBT faithful: that they are good influences on us all, and that Jesus Christ's sacrifice was for their sake as well.

Despite a majority narrative that there is no place for queer people in Christianity, there are millions of faithful, religious people who are also gay, lesbian, bi, and transgender. They often struggle to find inclusion anywhere. Churches call for the shunning, shaming, or, in extreme cases, outright execution of LGBT people. Religious families turn their backs on their gay sons, shame their bisexual daughters into suicide, refuse to acknowledge their lesbian daughters' wives, or deny their trans children the help and clothes they need to be themselves. And the queer community all too often demands that they leave their God at the door, preventing them from finding inclusion there, either.

Glitter Ash Wednesday is a message that needs to be sent out: that queer Christians have a place in both of these spheres. The glitter is not only for LGBT Christians—it is for all who call them part of their family. It is a statement that there is nothing about being LGBT that makes one any less a child of God.
The data on contract rent (also referred to as "rent asked" for vacant units) were obtained from Housing Question 15a in the 2012 American Community Survey. The question was asked at occupied housing units that were for rent, vacant housing units that were for rent, and vacant units rented but not occupied at the time of interview. Housing units that are renter occupied without payment of rent are shown separately as "No rent paid." The unit may be owned by friends or relatives who live elsewhere and who allow occupancy without charge. Rent-free houses or apartments may be provided to compensate caretakers, ministers, tenant farmers, sharecroppers, or others.

Contract rent is the monthly rent agreed to or contracted for, regardless of any furnishings, utilities, fees, meals, or services that may be included. For vacant units, it is the monthly rent asked for the rental unit at the time of interview. If the contract rent includes rent for a business unit or for living quarters occupied by another household, only that part of the rent estimated to be for the respondent's unit was included. Excluded was any rent paid for additional units or for business premises. If a renter pays rent to the owner of a condominium or cooperative, and the condominium fee or cooperative carrying charge is also paid by the renter to the owner, the condominium fee or carrying charge was included as rent. If a renter receives payments from lodgers or roomers who are listed as members of the household, the rent was to be reported without deduction for any payments received from the lodgers or roomers. The respondent was to report the rent agreed to or contracted for even if paid by someone else, such as friends or relatives living elsewhere, a church or welfare agency, or the government through subsidies or vouchers.

Contract rent provides information on the monthly housing cost expenses for renters. When the data are used in conjunction with utility costs and income data, the information offers an excellent measure of housing affordability and excessive shelter costs. The data also serve to aid in the development of housing programs to meet the needs of people at different economic levels, and to provide assistance to agencies in determining policies on fair rent.

Median and Quartile Contract Rent

The median divides the rent distribution into two equal parts: one half of the cases falling below the median contract rent and one half above the median. Quartiles divide the rent distribution into four equal parts. Median and quartile contract rent are computed on the basis of a standard distribution. (See the "Standard Distributions" section in Appendix A.) In computing median and quartile contract rent, units reported as "No rent paid" are excluded. Median and quartile rent calculations are rounded to the nearest whole dollar. Upper and lower quartiles can be used to note large rent differences among various geographic areas. (For more information on medians and quartiles, see "Derived Measures.")

Aggregate contract rent is calculated by adding all of the contract rents for occupied housing units in an area. Aggregate contract rent is rounded to the nearest hundred dollars. This explanation is comparable to the description used for Aggregate Gross Rent. (For more information, see "Aggregate" under "Derived Measures.")

Aggregate rent asked is calculated by adding all of the rents for vacant-for-rent housing units in an area.
Aggregate rent asked is subject to rounding, which means that all cells in a matrix are rounded to the nearest hundred dollars. (For more information, see "Aggregate" under "Derived Measures.")

Since 1996, the American Community Survey questionnaires have provided a space for the respondent to enter a dollar amount. The words "or mobile home" were added to the question starting in 1999 to be more inclusive of the structure type. Since 2004, contract rent has been shown for all renter-occupied housing units. In previous years (1996-2003), it was shown only for specified renter-occupied housing units.

Data on contract rent in the 2012 American Community Survey should not be compared to Census 2000 contract rent data. For Census 2000, tables were not released for total renter-occupied units. The universe in Census 2000 was "specified renter-occupied housing units," whereas the universe in the ACS is "renter-occupied housing units," thus comparisons cannot be made between these two data sets.
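To make the derived measures described above concrete, here is a hedged sketch in Python. It uses simple interpolated quantiles over a small invented list of rents rather than the Census Bureau's standard-distribution method, so treat it as an illustration of the definitions (exclusion of "No rent paid" units, whole-dollar rounding for the median and quartiles, hundred-dollar rounding for aggregates), not as the official computation.

```python
import statistics

def rent_measures(rents):
    """Contract-rent summary; None stands in for 'No rent paid' units."""
    paying = sorted(r for r in rents if r is not None)  # exclude no-cash-rent units
    q1, median, q3 = statistics.quantiles(paying, n=4)  # quartile cut points
    return {
        "lower_quartile": round(q1),   # rounded to the nearest whole dollar
        "median": round(median),
        "upper_quartile": round(q3),
        "aggregate": round(sum(paying), -2),  # rounded to the nearest $100
    }

# Eight hypothetical units, two of them occupied without payment of rent.
print(rent_measures([650, 700, None, 825, 900, 1200, None, 760]))
```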
When providing instruction in science to students who are visually impaired, models are of utmost importance (in place of pictures). For a student with a visual impairment, a "model" rather than a picture truly is worth a thousand words. Many science teachers, unfortunately, don't possess the artistic talent necessary to conceptualize and produce models to best instruct students who are blind or visually impaired on science content. It is for this reason that collaboration with an artist is particularly valuable for the TVI (teacher of students with visual impairments) or science teacher at a school for the blind. This relationship can be fostered with the art teacher at school, a college art student, or even an artistic high school student. I have even had my 11-year-old daughter, who is an aspiring artist, make models for my students. I began requesting models from my teaching assistant several years ago, when I learned that she was an artist. She enjoyed building them, and my students have benefited.

When the need for a given model arises based on future instruction, provide the artist with a clear picture and description of the item and allow sufficient time for the artist to produce the model. The amount of time necessary will depend on the complexity of the model and the time that the artist has available. Several weeks' warning prior to the class in which the model will be needed would be ideal. As the artist may not be familiar with the science concepts being taught, it is important to be involved in the process of conceptualizing the model before the artist produces it. This will help to avoid mistakes in the structure of the model. As with construction of any variety, needless time, money, and energy expenditures can be avoided by discussing the model after the artist has had the opportunity to study the picture but before the model is built.

Once the artist understands the model request, the materials to be utilized should be chosen by the artist. If the artist is not a TVI (teacher of the visually impaired), then the need for safety should be discussed. Request that the artist not use anything sharp for the model and avoid toxic materials. The artist should also be asked to provide tactual differences between various parts of the model and to make all parts of the model large enough for the student to feel.

Samples of Scientific Models Created by Artists

The following models, produced by various artists with whom I have had the pleasure of collaborating, are posted on this blog.
- Wet Cell Battery – Built by Adele Hauser in 2008. When I asked the braille transcriber for a raised line of the image in the book, she replied that it was too complicated for a raised line. This model was very helpful in describing the structure to my students.
- Iron Atoms on a Copper Plate – Built by Sasha Hospitál, 2012. This image was in the science textbook and I wanted to show it to my students.
- Sac Fungi – Built by Denist Elliot-Jones, TVI, 2012. A student requested this model when we studied fungi.
The introduction of workflow as you know it

Fast-forward to the 1960s, when information systems were introduced. Each new piece of software added a specific functionality, such as database management. These modules were used independently—compared to now, when multiple pieces of software are integrated to automate a full workflow.

The 1970s were a decade of unfounded optimism about the positive impact new technology could have on productivity and efficiency. As the first sophisticated office information systems were introduced, it became harder for employees to alter standard office procedures based on circumstance. This led to many organizations rejecting the technology.

TAKE A LOOK BACK: The workflow evolution

It makes sense that throughout history people have always tried to find ways to make processes more efficient. After all, if businesses want to expand (as most do), this question will arise: What's the least expensive way to achieve a positive outcome while increasing capacity as demand also increases? Take a look at how workflow has evolved since the 1800s.

Before the 1800s, to improve productivity, two factors were considered—workflow and wages. Then the Industrial Revolution happened, introducing a third factor. Frederick Winslow Taylor's scientific management applied to well-structured tasks that were costly when it came to manpower—mostly farm and factory work—not to the administrative and office work that existed.
Throughout its history, the Tennessee Valley Authority (TVA) has managed public lands to meet a wide range of regional and local resource development needs and to improve the quality of life, both within specific reservoir areas and throughout the Tennessee Valley region. Public lands adjacent to TVA reservoirs and adjoining private lands have been used for public parks, industrial development, commercial recreation, residential development, tourism development, forest and wildlife management areas, and to meet a variety of other needs associated with local communities and government agencies.

Shortly after its creation in 1933, TVA began a massive dam and reservoir construction program that required the purchase of approximately 1.3 million acres of land for the creation of 46 reservoirs within the Valley region. Of these 1.3 million acres, approximately 509,000 acres have been sold or transferred from TVA's control. The majority of this land was transferred to other federal and state agencies for public use. Of the remaining land, approximately 470,000 acres were inundated when dams were constructed. Now in the form of lakes and widened rivers, this land provides recreation opportunities for many people. Thus, approximately 293,000 acres of land are adjacent to the reservoirs that TVA currently manages for the benefit of the public.

Reservoir Lands Planning

The reservoir system in the eastern Tennessee Valley was primarily planned to protect Chattanooga from flooding. At least one reservoir was built on each of the five major tributary rivers above Chattanooga with enough space to store floodwaters from large storms in the drainage areas above them. These seven reservoirs do the main work in controlling floods:
- Norris Reservoir on the Clinch River
- Fontana Reservoir on the Little Tennessee River
- Douglas Reservoir on the French Broad River
- Cherokee Reservoir on the Holston River
- Chatuge, Nottely, and Hiwassee reservoirs in the Hiwassee River basin

Three main river reservoirs above Chattanooga—Fort Loudoun/Tellico, Watts Bar, and Chickamauga—provide additional, limited storage capacity. Together, all Tennessee Valley Authority reservoirs above Chattanooga can store about five million acre-feet of water during the winter flood season.
It can be measured as the number of threads to the inch or the number to 10 cm. Before working the stitch, tack the two edges onto glazed linen or a strip of smooth paper with just enough space between for the stitches. Stitches used can be buttonhole, twisted insertion stitch, interlacing insertion, plaited insertion or knotted insertion stitch. Also known as insertion stitches.

Work from left to right. Insert needle from below a little way in from edge. Then insert needle into top edge from below, a little to the right. Twist needle under and then over thread lying between the two edges, then insert needle into bottom edge from below a little to the right. Continue working into each edge alternately, twisting needle each time. Also known as twisted insertion stitch.

This is worked in horizontal rows. Work 5 straight stitches in the order given in the diagram with a second set using the same central hole. The longest stitch in the second row shares a hole with the longest stitch of the previous row, thus leaving spaces which can be filled with straight stitches.

Also known as fancy bricking stitch, this has a textured appearance. It consists of three straight horizontal stitches over three threads in the first row, followed by small straight stitches over two alternating with pairs of vertical stitches.

Start at top of stitch line and bring thread through. Holding thread down with left thumb, make a diagonal stitch, inserting needle to right of stitch line and bringing it out on stitch line a little below starting point and above thread. Then work in the same way but insert needle to the left of the stitch line. Continue alternating in this way. Variations on feather stitch are single, straight, chained, closed, long armed and double.

The first row of stitches are alternately long and short stitches, and the following rows are stitches of an even length until the last row of the area, where stitch length is varied to finish the filling. This can be used with fine shading colours. A version called surface long and short stitch is more economical with floss. Other names for this stitch are long and short stitch, shading stitch, tapestry shading stitch, plumage stitch, embroidery stitch and opus plumarium.

Work as shown in the diagram, a decorative stitch used when the fabric does not need to be covered, as in hardanger.

A row of small upright, evenly spaced stitches is worked and then joined by diagonal stitches. As a single row it can be used for a border, but it can also be used as a filling stitch when rows are placed above each other in straight lines. When the slanting stitches are from bottom right to top left it is known as barrier or fence stitch, and when from bottom left to top right it is known as Bosnian stitch or yugoslavian border stitch. Also known as zigzag Holbein stitch.

As a free stitch it can be used to hold down applied fabric. Start just below the top of the stitch line. Make a straight stitch to the left, another to the top of stitch line and another to the right, bringing thread up after last stitch on stitch line a stitch length below starting point. Repeat.

As a counted thread stitch it is simple and effective. Working from top to bottom, left to right, bring needle diagonally down to right over four intersections, insert and bring out two threads to the left, take up right across four intersections, then insert and bring out one thread lower than the first stitch to begin next. Work second row in exactly the same way, forming stitches on same side at right of previous row.
The stitch is worked as illustrated. This is very similar to leaf stitch and also interlocks in a similar way. Each block is made of a central straight vertical stitch over six threads with five slanting stitches at each side, and then another straight central stitch over four threads to complete.

As a free stitch it can be used for a close border or for filling small shapes. Bring needle through at side of shape to be filled and make a diagonal stitch to two threads beyond centre. Bring thread up at other side of shape and make a diagonal stitch to two threads beyond centre, so that stitches overlap at centre. Repeat. A padded variation is raised fishbone stitch.

The counted thread stitch is worked in two separate rows starting from lower left hand corner. Take needle diagonally up to right over six intersections, insert and bring out two threads below. Take up to the left over two intersections and bring out two threads above starting point ready for next stitch. Continue to end. The second row is worked in the same way except that the cross is formed at the bottom of the long stitch. Also known as the long and short oblique stitch.

The diagram shows one example. It is normally worked only in one direction but it can also be worked in four directions giving a kaleidoscope effect. Also known as florentine stitch, bargello stitch and irish stitch.

Worked over large areas this makes a diamond pattern, and then a square pattern again over larger areas when four diamonds are made. Each triangle is worked over 1, 3, 5, 7, 9, 11 threads of fabric as shown. Also known as square satin stitch.

The pattern is worked in two horizontal rows worked alternately. The first row has three vertical straight stitches worked over three fabric threads alternating with cross stitches worked over three diagonal intersections. The second row has three straight horizontal stitches worked over three fabric threads under the large cross and a small cross stitch over two diagonal threads worked under the straight vertical stitches.

The diagram shows one example. It is normally worked only in one direction but it can also be worked in four directions giving a kaleidoscope effect. This is called four-way Bargello or mitred Bargello. Also known as bargello stitch, flame stitch and irish stitch.

The most popular available are Anchor, DMC and Madeira, and they all have a large number of shades. These threads are usually split into fewer strands to work. It is available in solid colours. Also known as nordin.

Start at 1 in the lower right corner of the area to be filled and stitch diagonally upwards working vertical stitches over six fabric threads, then work back downwards crossing these stitches with horizontal stitches over six threads as shown. Start the second row in the hole immediately left of the 12th thread away from the first row at 2. Work only full stitches. When these diagonals have been worked, work these stitches from lower right to upper left starting at 3, the second cross being worked over a cross of the previous row. On both diagonals a horizontal stitch is always worked last. A variation is to add a Smyrna stitch over four threads in the centre of each large box created by the trellis, still remembering to work the horizontal stitch last.

This stitch can be worked horizontally from left to right or vertically from top to bottom or singly. It is like an open chain stitch.
Bring needle through to top left of stitch, hold thread down with left thumb, insert needle at top right of stitch and bring out at centre a little below, with thread below needle. If working horizontally, tie down with a short vertical stitch over the thread, then bring needle through just to the right of the top right of stitch ready for next stitch. If working vertically, tie down with a slightly longer vertical stitch, then bring needle up to left of middle of stitch, ready to continue.

Make one open fly stitch and then add two straight stitches to make five spokes as shown. With the same ribbon bring the needle out close to the centre and, without piercing the existing ribbon, weave over and under these arms around 10 to 12 times. Allow the ribbon to twist to form more realistic petals. If wished, the ribbon can be changed to a lighter shade after three or four rounds of the first shade. See also ribbon roses.

It is worked in blocks over five threads square and consists of three horizontal and three vertical stitches of graduated length, finished with a diagonal stitch over four intersections from bottom left to top right.

A loose loop is made around the needle and tightened after the needle has entered the fabric but before it is pulled through completely. Hold down the loop with the left thumb while pulling the needle through. These knots are characteristic of the rich silk embroideries of China, where they often are worked closely over large areas. Also known as Chinese knot, pekin knot and blind knot.

This can be used singly, randomly, or in lines. Make a vertical straight stitch, then loop around to form a knot without entering the fabric, and then make the last arm of the cross.

Carry the yarn diagonally across the space, enter the fabric and then twist the threads over the first thread back to the starting point. The twisting of the second bar, the other diagonal, is taken only as far as the centre; then pass the yarn over and under the bars twice in a circular motion. Finally complete the other half of the bar.

Work from right to left. Bring needle through at bottom right corner of first stitch. Insert needle four threads up and bring out again four threads to left of starting point. Insert needle at starting point and bring out four threads up and four threads to the left. Insert four threads to the right, bring out four threads to the left of starting point. For a single stitch, work final side of square. For a row, continue as before and pull all stitches firmly. Twisted fly stitch is a variation.

Work horizontal rows first. Work from right to left. Make two vertical stitches over four threads in the same place, then bring needle out four stitches to the left for next two stitches. Continue along row in this way and then work other horizontal rows in the same way, turning fabric round for each. Turn fabric at right angles and work vertical rows in the same way as the horizontal rows. Also known as punch stitch.

Embroider the required shape on fabric. Attach it to backing fabric with right sides facing, leaving one section open. Turn to the right side and insert a loop of wire through the open end. Leave ends of wire long enough for use in attaching to main design. Attach by taking wire through to the reverse side. Stitch the last section closed and add any other embroidery detail. A transfer printed design is often used as a guide to stitching.
There are many stitches used, including stem, satin, backstitch, bullion knots, buttonhole, chain, chevron, coral, couching, cretan, cross stitch, crown, darning, double knot, faggoting, feather, fern, fishbone, flat, fly, french knot, herringbone, interlaced band, leaf, lock, overcast, pekinese, roumanian, running, seeding, sheaf filling, spanish knotted feather, spider's web, split, straight and wheatear.

Bring the needle through where knot is required and hold thread firmly with finger and thumb of left hand. Twist needle twice round thread, then, still holding thread firmly, turn needle round to starting point and insert just behind it, gently pulling thread through. This stitch can be free or counted. A variation is french knots on stalks.

Bring your needle out at A and twist the thread once around the needle. Pull gently so that the thread fits around the needle and, still holding the thread so that it doesn't loosen, insert it at B about a quarter of an inch away from A. Pull gently through. Also known as Italian knot, long tailed french knot and long tack knot stitch.

Begin at upper right and work rows from right to left and then left to right. For each french stitch form two tied-down straight stitches, both within the space of two vertical fabric threads. Start next stitch in second canvas hole from base of stitch just done. At the end of each row reverse working direction. Place new stitches between those of the row above so that tops share a hole with neighbouring horizontal stitches.
Normal pressure hydrocephalus (NPH) is an accumulation of cerebrospinal fluid (CSF) that causes the ventricles in the brain to become enlarged, sometimes with little or no increase in intracranial pressure (ICP). In most cases of NPH, the cause of blockage to the CSF absorptive pathways is unclear.

The name for this condition, 'normal pressure hydrocephalus,' originates from Dr. Salomon Hakim's 1964 paper describing certain cases of hydrocephalus in which a triad (a group of three) of neurologic symptoms occurred in the presence of 'normal' CSF pressure: gait disturbances, dementia, and impaired bladder control. These findings were observed before continuous pressure-recording techniques were available. The phrase 'normal pressure' is misleading, as many patients experience fluctuations in CSF pressure that range from high to low and are variable within those parameters. However, normal pressure hydrocephalus (NPH) continues to be the common name for the condition.

Who develops Normal Pressure Hydrocephalus?

NPH is most commonly seen in older adults.
- It is estimated that more than 700,000 Americans have NPH, but less than 20% receive an appropriate diagnosis.
- Without appropriate diagnostic testing, NPH is often misdiagnosed as Alzheimer's disease or Parkinson's disease, or the symptoms are attributed to the aging process.
- NPH is one of the few causes of dementia that can be controlled or reversed with treatment.
Ballarat, with a population of approximately 85,000, is Victoria's largest inland city, 100 kilometres northwest of Melbourne. A gold rush in the 1850s transformed the town into a major city in the goldfields region of Victoria, with many heritage-listed buildings. Victoria is Australia's second smallest state and covers only 3% of Australia's land area, but has the second highest population of all states and territories. Victoria's mainland and islands have a total coastline of 2,512 kilometres, which is about 4.2% of Australia's 59,736 kilometres of coastline. Australia is the driest inhabited continent and Victoria is no exception, although the state capital Melbourne has a reputation for having four seasons in one day. Victoria is located in the southeast of mainland Australia and includes the most southern point on mainland Australia at Wilsons Promontory National Park.
The Global Fire Atlas of individual fire size, duration, speed and direction

Natural and human-ignited fires affect all major biomes, altering ecosystem structure, biogeochemical cycles and atmospheric composition. Satellite observations provide global data on spatiotemporal patterns of biomass burning and evidence for the rapid changes in global fire activity in response to land management and climate. Satellite imagery also provides detailed information on the daily or sub-daily position of fires that can be used to understand the dynamics of individual fires. The Global Fire Atlas is a new global dataset that tracks the dynamics of individual fires to determine the timing and location of ignitions, fire size and duration, and daily expansion, fire line length, speed, and direction of spread. Here, we present the underlying methodology and Global Fire Atlas results for 2003–2016 derived from daily moderate-resolution (500 m) Collection 6 MCD64A1 burned-area data. The algorithm identified 13.3 million individual fires over the study period, and estimated fire perimeters were in good agreement with independent data for the continental United States. A small number of large fires dominated sparsely populated arid and boreal ecosystems, while burned area in agricultural and other human-dominated landscapes was driven by high ignition densities that resulted in numerous smaller fires. Long-duration fires in boreal regions and natural landscapes in the humid tropics suggest that fire season length exerts a strong control on fire size and total burned area in these areas. In arid ecosystems with low fuel densities, high fire spread rates resulted in large, short-duration fires that quickly consumed available fuels. Importantly, multiday fires contributed the majority of burned area in all biomass burning regions. A first analysis of the largest, longest and fastest fires that occurred around the world revealed coherent regional patterns of extreme fires driven by large-scale climate forcing. Global Fire Atlas data are publicly available through http://www.globalfiredata.org (last access: 9 August 2018) and https://doi.org/10.3334/ORNLDAAC/1642, and individual fire information and summary data products provide new information for benchmarking fire models within ecosystem and Earth system models, understanding vegetation–fire feedbacks, improving global emissions estimates, and characterizing the changing role of fire in the Earth system.

1 Introduction

Worldwide, fires burn an area about the size of the European Union every year (423 Mha yr⁻¹; Giglio et al., 2018). The majority of burned area occurs in grasslands and savannas, where fires maintain open landscapes by reducing shrub and tree cover (Scholes and Archer, 1997; Abreu et al., 2017). However, all major biomes burn. Climate controls global patterns of fire activity by driving vegetation productivity and fuel buildup as well as fuel moisture (Bowman et al., 2009). Humans are the dominant source of ignition in most flammable ecosystems, but human activities also reduce fire sizes through landscape fragmentation and fire suppression (Archibald et al., 2012; Taylor et al., 2016; Balch et al., 2017). Over the past 18 years, socioeconomic development and corresponding changes in human land use have considerably reduced fire activity in fire-dependent grasslands and savannas worldwide (Andela et al., 2017).
At the same time, a warming climate has dried fuels and increased the length of fire seasons across the globe (Jolly et al., 2015), which is particularly important in forested ecosystems with abundant fuels (e.g., Kasischke and Turetsky, 2006; Aragão et al., 2018). Fire activity increases nonlinearly in response to drought conditions in populated areas of the humid tropics (Alencar et al., 2011; Field et al., 2016), resulting in large-scale degradation of tropical ecosystems (van der Werf et al., 2008; Morton et al., 2013b; Brando et al., 2014) and extensive periods of poor air quality (Johnston et al., 2012; Lelieveld et al., 2015; Koplitz et al., 2016). Moreover, increasing population densities in highly flammable biomes also amplify the socioeconomic impact of wildfires related to air quality or damage to houses and infrastructure (Moritz et al., 2014; Knorr et al., 2016). Despite the importance of understanding changing global fire regimes for ecosystem services, human well-being, climate and conservation, our current understanding of changing global fire regimes is limited because existing satellite data products detect actively burning pixels or burned area but not individual fires and their behavior.

Frequent observations from moderate-resolution, polar-orbiting satellites may provide information on individual fire behavior in addition to estimates of total burned area. Several recent studies have shown that fire-affected pixels can be separated into clusters based on spatial and temporal proximity. This information can be used to study the number and size distributions of individual fires (Archibald and Roy, 2009; Hantson et al., 2015; Oom et al., 2016), fire shapes (Nogueira et al., 2016; Laurent et al., 2018) and the location of ignition points (Benali et al., 2016; Fusco et al., 2016). One limitation of fire-clustering algorithms that rely on spatial and temporal proximity of fire pixels is the inability to separate individual fires within large burn patches that contain multiple ignition points, a frequent phenomenon in grassland biomes. To address the possibility of multiple ignition points, other algorithms have specifically tracked the spread of individual fires in time and space, with demonstrated improvements for isolating ignition points and constraining final fire perimeters (Frantz et al., 2016; Andela et al., 2017). In addition to the size and ignition points of individual fires, other studies used daily or sub-daily detections of fire activity to track growth dynamics of fires (Loboda and Csiszar, 2007; Coen and Schroeder, 2013; Veraverbeke et al., 2014; Sá et al., 2017). Together, these studies highlight the strengths and limitations of using daily or sub-daily satellite imagery to derive information about individual fires and their behavior over time.

Here, we present the Global Fire Atlas of individual fires based on a new methodology for identifying the location and timing of fire ignitions and estimating fire size and duration, and daily expansion, fire line length, speed, and direction of spread. The Global Fire Atlas is derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 (Col. 6) burned-area dataset (Giglio et al., 2018), which includes an estimated day-of-burn data layer at a 500 m resolution. Individual fire data were generated starting in 2003, when combined data from the Terra and Aqua satellites began to provide greater burn date certainty.
The algorithm for the Global Fire Atlas tracks the daily progression of individual fires at a 500 m resolution to produce a set of metrics on individual fire behavior in standard raster and vector data formats. Together, these Global Fire Atlas data layers provide an unprecedented look at global fire behavior and changes in fire dynamics during 2003–2016. The data are freely available at http://www.globalfiredata.org (last access: 9 August 2018) and https://doi.org/10.3334/ORNLDAAC/1642, and new years will be added to the dataset following the availability of global burned-area data.

2 Methods

Here, we developed a method to isolate individual fires from daily moderate-resolution burned-area data. The approach used two filters to account for uncertainties in the day of burn, in order to map the location and timing of fire ignitions and the extent and duration of individual fires (Fig. 1). Subsequently, we tracked the growth dynamics of each individual fire to estimate the daily expansion, fire line length, speed, and direction of spread. Based on the Global Fire Atlas algorithm, burned area was broken down into seven fire characteristics in three steps (Fig. 1b). First, burned area was described as the product of ignitions and individual fire sizes. Second, fire size was further separated into fire duration and a daily expansion component. Third, the daily fire expansion was subdivided into fire speed, the length of the fire line and the direction of spread. The Global Fire Atlas algorithm can be applied to any moderate-resolution daily global burned-area product, and the quality of the resulting dataset depends on both the Fire Atlas algorithm and the underlying burned-area product. Here, we applied the algorithm to the MCD64A1 Col. 6 burned-area dataset (Giglio et al., 2018), and the minimum detected fire size is therefore one MODIS pixel (approximately 21 ha). Several studies have shown that the MCD64A1 Col. 6 burned-area product is a considerable improvement over the previous generation of moderate-resolution global burned-area products (Giglio et al., 2018; Humber et al., 2019; Rodrigues et al., 2019). We also present a preliminary accuracy assessment of the higher-order Global Fire Atlas products, using independent fire perimeter data for the continental US and active-fire detections to assess estimated fire duration and the temporal accuracy of individual fire dynamics.

2.1 Individual fires: ignitions, size, perimeter and duration

Large burn patches are often made up of multiple individual fires that may burn simultaneously or at different points in time during the fire season, particularly in frequently burning grasslands and savannas with a high density of ignitions from human activity. Separating large clusters of burned area into individual fires is therefore critical to any understanding of the fire regime in human-dominated landscapes. To isolate individual fires, clusters of adjacent burned area for a given fire season (12 months centered on the month of maximum burned area) were subdivided into individual fires based on the spatial structure of estimated burn dates in the MCD64A1 burned-area product. Although we allow individual fires to burn from one fire season into the next, we processed the data on a per-fire-season basis in each MODIS tile. In the rare case that a pixel burned twice during a single fire season (<1 %), we retained only the earliest burn date. This approach results in a small reduction of total burned area in order to create standardized annual data layers in both gridded raster and shapefile formats.
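For illustration, the three-step decomposition above can be expressed in a few lines of code. The sketch below is ours, not part of the published processing chain; the record fields and values are hypothetical, and it simply shows how mean daily expansion and fire line speed follow from per-fire attributes.

```python
# A minimal sketch of the three-step decomposition of burned area.
# FireRecord and its field names are hypothetical; the Global Fire Atlas
# distributes comparable per-fire attributes in its shapefiles.
from dataclasses import dataclass

@dataclass
class FireRecord:
    size_km2: float       # total area burned by the individual fire
    duration_days: int    # days between ignition and extinction

    @property
    def mean_daily_expansion(self) -> float:
        # Step 2: fire size = duration x daily expansion, so the mean
        # expansion (km^2 per day) follows from size and duration.
        return self.size_km2 / max(self.duration_days, 1)

def mean_fire_line_speed(expansion_km2_day: float, fire_line_km: float) -> float:
    # Step 3: daily expansion divided by the active fire line length gives
    # the average speed of the fire line (km per day).
    return expansion_km2_day / fire_line_km

fire = FireRecord(size_km2=12.6, duration_days=6)
print(fire.mean_daily_expansion)                             # 2.1 km^2 per day
print(mean_fire_line_speed(fire.mean_daily_expansion, 4.2))  # 0.5 km per day
```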
To locate candidate ignition points within each burned-area cluster, we mapped the “local minima”, defined as a single grid cell or group of adjacent grid cells with the same burn date surrounded by grid cells with later burn dates. However, because of variability in orbital coverage and cloud cover, burn date estimates are somewhat uncertain (Giglio et al., 2013), which results in many local minima that may not correspond to actual ignition points. We applied a three-step procedure to address burn date uncertainty and distinguish individual fires. First, we developed a filter to adjust the burn date of local minima that do not correspond to ignition points. Second, we set a “fire persistence” threshold that determines how long a fire may take to spread from one 500 m grid cell into the next, to distinguish individual fires that are adjacent but that occurred at different times in the same fire season. Third, we developed a second filter to correct for outliers in the burn date that occurred along the edges of large fires. Each of these steps is described in detail below.

The ignition point filter is based on the assumption that fires progress continuously through time and space. First, all local minima were mapped within the original field of burn dates (Fig. 2a and b). Next, each local minimum was replaced by the next burn date of the surrounding grid cells, and a new map of local minima was created. If the original local minimum remained as a part of a new, larger local minimum with a later burn date, the fire followed a logical progression in time and space, and the original local minimum was retained. If the local minimum disappeared, the original local minimum was likely the product of an inconsistency within the field of burn dates rather than a true ignition point, and the burn date was adjusted forward in time to remove the original local minimum. This step can be repeated several times, with each new iteration further reducing the number of local minima and increasing the confidence in ignition points, yet each iteration also results in a greater adjustment of the original burn date information (Fig. A1 in Appendix A). Here, we implemented three iterations of the ignition point filter to remove most local minima that did not spread forward in time while limiting the scope of burn date adjustments (Figs. 2c and d, A1 and A2). For short-duration fires, we retained the ignition points associated with the largest possible number of iterations. In all cases, if several local minima were connected through a single cluster of grid cells with the same burn date, only the local minimum with the earliest burn date or largest number of grid cells was retained, unless the required adjustment of the burn date was larger than the specified burn date uncertainty in the MCD64A1 product. If the final ignition location consisted of multiple 500 m grid cells, we used the center coordinates to produce the ignition point shapefile. By design, the ignition point filter cannot adjust the earliest burn date of a fire and thus has no influence on estimated fire duration.
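The filter can be sketched compactly on a two-dimensional burn-date grid. The code below is an illustrative simplification under stated assumptions (regional minima detected with scikit-image, one plateau-aware test per iteration, isolated burned cells kept as ignitions); it is not the operational Atlas implementation, which additionally respects the per-pixel burn date uncertainty of MCD64A1 and handles clusters of tied dates explicitly.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import local_minima

FOOTPRINT = np.ones((3, 3), dtype=bool)
FOOTPRINT[1, 1] = False  # the eight neighbours, excluding the centre cell

def ignition_filter(burn_day, n_iter=3):
    """Simplified ignition-point filter. burn_day: 2-D integer array of
    day-of-burn values (0 = unburned). Returns adjusted burn dates and a
    boolean mask of remaining candidate ignition cells."""
    day = np.where(burn_day > 0, burn_day.astype(float), np.inf)
    for _ in range(n_iter):
        minima = local_minima(day, connectivity=2)  # plateau-aware minima
        # earliest burn date among each cell's eight neighbours
        nmin = ndimage.minimum_filter(day, footprint=FOOTPRINT,
                                      mode="constant", cval=np.inf)
        minima &= np.isfinite(nmin)  # isolated burned cells stay as ignitions
        trial = np.where(minima, nmin, day)   # push minima to the next date
        persists = local_minima(trial, connectivity=2)
        # Minima that survive as part of a larger, later minimum follow a
        # logical fire progression: keep the original date. Minima that
        # vanish were burn-date noise: keep the forward-adjusted date.
        day = np.where(minima & persists, day, trial)
    return day, local_minima(day, connectivity=2)
```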
To establish the location and date of ignition points, as well as to track the daily growth and extent of individual fires, we used a fire persistence threshold that determined how long a fire may take to spread from one 500 m grid cell into the next, taking both fire spread rate and satellite coverage into account (Fig. A3). For example, if ignition points were adjacent to a fire that burned earlier in the season, this threshold allowed the ignition points to be mapped as separate local minima despite the presence of adjacent burned grid cells with earlier burn dates. On the other hand, if an active fire is covered by dense clouds or smoke, multiple days can pass before a new observation can be made, resulting in a break in fire continuity and increasing the risk of artificially splitting single fires into multiple parts. Using such a threshold is particularly important to distinguish individual fires in frequently burning savannas and highly fragmented agricultural landscapes, where many individual small fires may occur within a relatively short time span. Because there are no reference datasets on global fire persistence, we used a spatially varying fire persistence threshold that depends on fire frequency (Andela et al., 2017). We assumed that frequently burning landscapes are generally characterized by faster fires and higher ignition densities, increasing the likelihood of having multiple ignition points within large burn patches, while infrequently burning landscapes will generally be characterized by slower fire spread rates and/or fewer ignitions. In addition, frequently burning landscapes often have a pronounced dry season characterized by low cloud cover, while infrequently burning landscapes may experience a shorter dry season with greater obscuration by clouds. Therefore, we used a 4 d fire persistence threshold for 500 m grid cells that burned more than three times during the study period (2003–2016), and a 6, 8 and 10 d fire persistence period for grid cells that burned three times, twice or once, respectively. These threshold values broadly correspond to biomes, with shorter persistence values for tropical regions and human-dominated landscapes and longer threshold values for temperate and boreal ecosystems with high fuel loads (Fig. A3). Based on the location and date of the established ignition points and the fire persistence thresholds, we tracked the growth of each individual fire through time to determine its size, perimeter and duration (Fig. 2f). For each day of the year, we allowed individual fires to grow into the areas that burned on that specific day, as long as the difference in burn dates between two pixels was equal to or smaller than the fire persistence threshold of the pixel of origin. When two actively burning fires meet, as on day 255 for the example fires shown in Fig. 2, grid cells that burned on the day of the merger were divided based on nearest distance to the fire perimeter on the previous day. Burn date uncertainty may also lead to multiple “extinction points”, outliers in the estimated day of burn along the edges of a fire. Environmental conditions such as cloud cover complicate the precise estimation of the date of fire extinction, as rainfall events extinguish many fires, and pixels at the edge of the fire may be partially burned and therefore harder to detect. In addition, the contextual relabeling phase of the MCD64A1 algorithm increases burn date uncertainty for extinction points based on a longer consistency threshold (Giglio et al., 2009). 
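Before turning to the second (extinction point) filter, note that the spatially varying persistence threshold described above reduces to a simple lookup. The sketch below encodes the stated values; the function name and array layout are our own.

```python
import numpy as np

def persistence_threshold(n_burns):
    """Days a fire may take to spread into an adjacent 500 m cell, keyed on
    how often that cell burned during 2003-2016 (threshold values from the
    text: 4 d if it burned more than three times, up to 10 d if only once)."""
    return np.select(
        [n_burns > 3, n_burns == 3, n_burns == 2, n_burns == 1],
        [4, 6, 8, 10],
        default=0,  # unburned cells need no threshold
    )

print(persistence_threshold(np.array([0, 1, 2, 3, 5])))  # [ 0 10  8  6  4]
```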
We used a second filtering step to adjust the burn date for extinction points (if required). Outliers were adjusted to the nearest burn date back in time if (1) they represented a cluster of one to four grid cells (0.21–0.9 km²) along the edge of a fire that was at least 10 times larger, and if (2) the difference in burn dates was larger than the fire persistence threshold of the adjacent grid cells and thus mapped as a new fire along the edge of the larger fire. If these criteria were met, the outliers were adjusted to the nearest burn date back in time and incorporated within the larger neighboring fire. However, if these criteria were not met (e.g., for burned areas larger than four grid cells), the original burn dates and ignition points were left unadjusted, resulting in separate fires. For the example fires shown in Fig. 2, the adjustment of these outliers affected four grid cells (Fig. 2e) and effectively reduced the number of ignition points (and resulting individual fires) from five (Fig. 2d) to two (Fig. 2f). After adjusting these outliers (extinction points) and including them within the larger fires, we estimated the size (km²), duration (d) and perimeter (km) of each individual fire based on the adjusted burn dates.

2.2 Daily fire expansion: fire line, speed and direction of spread

The revised day-of-burn estimates were used to track the daily expansion (km² d⁻¹) and length of the fire line (km) for each individual fire. The daily estimates of fire line length were based on the daily perimeter of the fire, where we assumed that once the fire reached the edge of the burn scar this part of the perimeter stops burning after 1 d (Fig. 3a). The expansion of the fire (km² d⁻¹) is the area burned by a fire each day. The average speed of the fire line (km d⁻¹) can now be calculated as the expansion (km² d⁻¹) divided by the length of the fire line (km) on the same day. However, this estimate of fire line includes the head, flank and backfire, while it is typically the head fire that moves fastest and may be responsible for most of the burned area. Moreover, fire dynamics tend to be highly variable in space and time. To understand the spatial variability and distribution of fire speeds, we therefore used an alternative method to estimate the speed and direction of fire spread for each individual 500 m grid cell.

To estimate the speed and direction of spread (Fig. 3), we calculated the most likely path of the fire to reach each individual 500 m grid cell based on shortest distance. More specifically, for each grid cell we estimated the shortest route through the grid cell connecting two points: (1) the nearest point on the fire line with the same day of burn and (2) the nearest point on the previous day's fire line. This route was forced to follow areas burned on the specific day. For each point on this route, or “fire path”, the speed of the fire (km d⁻¹) was estimated as the length of the path (km) divided by 1 d, and the direction as the direction of the next grid cell on the fire path. Since each grid cell is surrounded by eight other grid cells, this resulted in eight possible spread directions: north, northeast, east, southeast, south, southwest, west and northwest.
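A simplified per-day speed estimate can be sketched as a geodesic distance from the previous day's fire front through the cells burned on the current day. The code below is an approximation under our own assumptions (square cells, breadth-first traversal rather than an exact shortest-path search, no sinusoidal-projection correction) and is not the operational algorithm.

```python
from collections import deque
import numpy as np

STEPS = [(-1, 0), (1, 0), (0, -1), (0, 1),      # rook moves
         (-1, -1), (-1, 1), (1, -1), (1, 1)]    # diagonal moves

def daily_spread_distance(burned_today, front_yesterday, cell_km=0.463):
    """burned_today, front_yesterday: boolean 2-D arrays on the same grid.
    Returns per-cell distance (km) travelled from yesterday's front through
    today's burned area; with daily time steps this approximates speed in
    km per day. Breadth-first order only approximates true shortest paths."""
    dist = np.full(burned_today.shape, np.nan)
    queue = deque()
    for r, c in zip(*np.nonzero(front_yesterday)):
        dist[r, c] = 0.0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in STEPS:
            rr, cc = r + dr, c + dc
            if (0 <= rr < dist.shape[0] and 0 <= cc < dist.shape[1]
                    and burned_today[rr, cc] and np.isnan(dist[rr, cc])):
                step = cell_km * (2 ** 0.5 if dr and dc else 1.0)
                dist[rr, cc] = dist[r, c] + step
                queue.append((rr, cc))
    return dist
```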
For ignition points that represented a cluster of 500 m grid cells with the same burn date, we assumed that the fire originated in the center point of the cluster (the pixel with the largest distance to the final fire perimeter by the end of day 1) and spread towards the perimeter of the fire by the end of day 1 over the course of 1 d. For single pixel fires, we assumed the fire burned across 463 m (1 pixel) during a single day, and we did not assign a direction of spread. Similarly, fires of all sizes that burned on a single day were not assigned a direction of spread. We corrected estimates of both speed and direction for the orientation between 500 m grid cells on the MODIS sinusoidal projection, which varies with location. When a particular grid cell formed part of multiple fire paths, the earliest time of arrival or the highest fire speed and corresponding direction of spread were retained. This ensures a logical progression of the fire in time and space and corresponds to fires typically moving fastest in a principal direction and then spreading more slowly along the flank.

2.3 Preliminary accuracy assessment

Few large-scale datasets are available on daily or sub-daily fire dynamics, highlighting the novelty of the Global Fire Atlas dataset but also posing challenges for validation. Here, we used four alternative datasets to carry out an initial accuracy assessment. First, we used active-fire detections to assess the temporal accuracy of the Global Fire Atlas burn date. Second, we compared fire perimeters to independent fire perimeter data for the continental US. Third, we combined the independent data on fire perimeters with active-fire detections to evaluate the Global Fire Atlas fire duration estimates. Finally, we compared Global Fire Atlas data to a small (manually compiled) dataset of daily fire perimeters from the US Forest Service.

To evaluate burn dates in the Global Fire Atlas, we used the 375 m resolution active-fire detections (VNP14IMGML C1) derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite (Schroeder et al., 2014). Active-fire detections provide accurate information on the burn date, particularly in ecosystems with low fuel loads, where fires will typically be active during only a single day in each particular grid cell. We compared the date of active-fire detections from VIIRS within each larger 500 m MODIS grid cell (based on VIIRS center point) to the adjusted MCD64A1 day of burn to understand the temporal precision of the derived Global Fire Atlas products. If several active-fire detections were available for a single 500 m MODIS grid cell, we reported the day closest to the temporal mean. We compared all 500 m MODIS grid cells with a corresponding active-fire detection during the overlapping data period (2012–2016) for four different ecosystems globally: (1) forests (including all forests), (2) shrublands (including open and closed shrublands), (3) woody savannas, and (4) savannas and grasslands, with the land cover type derived from MODIS MCD12Q1 Col. 5.1 data for 2012 using the University of Maryland (UMD) classification (Friedl et al., 2002). We compared fire perimeters from the Global Fire Atlas to fire perimeter estimates from the Monitoring Trends in Burn Severity (MTBS) project during their overlapping period (2003–2015).
The MTBS project provides semiautomated estimates of fire perimeters based on 30 m Landsat data for fires with a minimum size of 1000 acres (405 ha) in the western US and 500 acres (202 ha) in the eastern US (Eidenshink et al., 2007; Sparks et al., 2015). To determine overlap between MTBS and Fire Atlas perimeter estimates, we rasterized the MTBS perimeters onto the 500 m MODIS sinusoidal grid, including all 500 m grid cells with their center point within the higher-resolution (30 m) MTBS fire perimeter. For all overlapping fire perimeters, we compared the original MTBS fire perimeter information with the Fire Atlas estimates of fire perimeters. In cases with multiple overlapping perimeters, fires with the largest overlapping surface area were compared. We also combined MTBS fire perimeters with VIIRS active-fire detections to derive an alternative estimate of fire duration (2012–2015). To estimate fire duration from these products, we first determined the median burn date of each fire according to the MCD64A1 burned-area data. Subsequently, we included all VIIRS active-fire detections before and after the median or “center” burn date until a period of three fire-free days was reached. Any active-fire detections that occurred outside this timeframe were excluded to avoid overestimation of the fire duration due to smoldering or possible false detections before or after the fire. Two thresholds were used to select a subset of MTBS and Fire Atlas perimeters to assess the accuracy of estimated fire duration. Fires were first matched based on perimeters, with a maximum tolerance of a threefold difference in length between perimeters. Second, we further selected MTBS perimeters with VIIRS active-fire detections for at least 25 % of the 500 m Fire Atlas grid cells. These thresholds excluded 51 % of the overlapping fire perimeters but reduced errors originating from cloud cover or differences in the underlying burned-area estimates (e.g., resolution, methodology) to evaluate estimated fire duration. Similar to the assessment of burn date accuracy, comparisons of fire perimeters and fire duration with MTBS data over the continental US were grouped into four land cover types: (1) forests, (2) shrublands, (3) woody savannas, and (4) savannas and grasslands. For specific large wildfires across the western US, the US Forest Service National Infrared Operations (NIROPS; https://fsapps.nwcg.gov/nirops/, last access: 1 September 2018) estimates daily fire perimeters for fire management purposes by collecting aircraft high-resolution infrared imagery. This imagery is manually analyzed by trained specialists to extract the active fire front. Although these data provide a wealth of information, only a small number of fires are completely and precisely documented. We were able to extract 15 large fires from the NIROPS database for which daily perimeter information was available. Although insufficient for full-scale validation, the comparison with NIROPS data provides valuable insights into the strengths and shortcomings of the Global Fire Atlas estimates of individual fire size, duration and expansion rates. In addition to per-fire averages, we compared day-to-day expansion rates (km2 d−1) of individual large fires across both datasets. If multiple Global Fire Atlas perimeters overlapped with a single US Forest Service fire perimeter, we compared the fires with the largest overlapping surface area. 
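The duration estimate from active-fire detections reduces to a small windowing routine around the median burn date; the sketch below is our reading of that procedure, with hypothetical function and variable names.

```python
def duration_from_detections(detection_days, center_day, max_gap=3):
    """Fire duration (days) from active-fire detection dates (days of year).
    Starting from the detection nearest the median MCD64A1 burn date,
    extend the window in both directions until three fire-free days occur
    (i.e., until consecutive detections are more than max_gap days apart)."""
    days = sorted(set(detection_days))
    if not days:
        return 0
    i = min(range(len(days)), key=lambda k: abs(days[k] - center_day))
    lo = hi = i
    while hi + 1 < len(days) and days[hi + 1] - days[hi] <= max_gap:
        hi += 1
    while lo - 1 >= 0 and days[lo] - days[lo - 1] <= max_gap:
        lo -= 1
    return days[hi] - days[lo] + 1

print(duration_from_detections([200, 201, 203, 210, 211], center_day=202))  # 4
```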
3 Results

3.1 Preliminary accuracy assessment

At the pixel scale, estimated burn dates from burned-area and active-fire products were comparable (Fig. 4), with greater variability across biomes than from minor burn date adjustments in the Global Fire Atlas algorithm. Burn dates estimated from MODIS burned-area and VIIRS active-fire detections were least comparable in high-biomass ecosystems with lower fire spread rates. In forests and woody savannas, 24 % and 35 % of burned pixels were detected on the same day and 54 % and 67 % within ±1 d, respectively (Fig. 4a and c). With decreasing biomass, the direct correspondence between burn dates from burned-area and active-fire detections increased to 41 % (same day) and 80 % (±1 d) in shrublands (Fig. 4b) and 40 % (same day) and 75 % (±1 d) in savannas and grasslands (Fig. 4d). These differences likely stem from the combined increase in the uncertainty of burn date in higher-biomass ecosystems and the influence of fire persistence (multiple active-fire days in a single 500 m grid cell) on the ability to reconcile the timing of burned-area and active-fire detections in these ecosystems. Several factors may account for the positive bias in the 500 m day of burn from burned-area compared to active-fire detections, including orbital coverage, cloud and smoke obscuration, and different thresholds between burned-area and active-fire algorithms regarding the burned fraction of a 500 m grid cell. The adjustments we made to the burn date in the Global Fire Atlas, required to effectively determine the extent and duration of individual fires, had a relatively small effect on the overall accuracy assessment but tended to reduce the negative bias in burn dates and increase the positive bias compared to the underlying MCD64A1 Col. 6 product (see red and black lines in Fig. 4). In line with these findings, we found good agreement between a 3 d running average of the Global Fire Atlas and US Forest Service estimates of daily fire expansion, but reduced correspondence for daily estimates of fire growth rates due to uncertainty in the day of burn of the burned-area product (Fig. B1 in Appendix B).

For fire perimeters, the best agreement between the Global Fire Atlas and MTBS was found in forests and shrublands, where the Global Fire Atlas reproduced 65 % and 61 % of the observed variance in MTBS fire perimeters, respectively (Fig. 5). Less agreement was found for woody savannas (38 %) and savannas and grasslands (41 %). Overall, the Global Fire Atlas underestimated fire perimeter length in all of the vegetation classes. However, uncertainty exists in both datasets. Orthogonal distance regression (ODR) accommodates uncertainties in both datasets and generally resulted in slopes closer to the 1:1 line, indicating closer correspondence, on average, in absolute perimeter estimates for the two datasets.
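For reference, an orthogonal distance regression of this kind can be fit with scipy; the sketch below uses clearly labeled illustrative perimeter values (not data from the paper) and a linear model in which both variables carry error.

```python
import numpy as np
from scipy import odr

# Illustrative perimeter lengths (km); these values are made up for the
# example and are not taken from the MTBS or Global Fire Atlas datasets.
mtbs_km = np.array([12.0, 25.0, 40.0, 63.0, 95.0])
atlas_km = np.array([10.0, 21.0, 38.0, 55.0, 90.0])

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.RealData(mtbs_km, atlas_km), linear, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta
print(f"ODR slope {slope:.2f}, intercept {intercept:.2f} km")
```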
An in-depth comparison of the performance of the Global Fire Atlas and the MTBS datasets for several grassland fires in Kansas (US) suggested that differences originated both from the underlying burned-area datasets and the methodologies (Fig. B2). For this particular grassland in Kansas, the MCD64A1 product estimated less burned area compared to the Landsat-based MTBS dataset, resulting in fragmentation of larger burn scars into disconnected patches. However, the daily temporal resolution of the MCD64A1 burned-area product allowed for recognition of individual ignition points within larger burn patches of fast-moving grassland fires that cannot be separated using infrequent Landsat imagery (Fig. B2). In addition, the 30 m spatial resolution of the MTBS perimeters may result in more irregularity and therefore in longer fire perimeter estimates compared to the 500 m resolution Fire Atlas perimeters. Combined, these trade-offs in spatial and temporal resolution resulted in less agreement between fire perimeters in woody savannas (Fig. 5c) and savannas and grasslands (Fig. 5d).

Initial assessment of the accuracy of fire duration estimates from the Global Fire Atlas highlighted differences in the sensitivity of satellite-based burned-area and active-fire products to fire lifetime (Fig. 6). Similar to fire perimeters, the best agreement in fire duration estimates was found for forests, where the Global Fire Atlas reproduced 51 % of the observed variance of the fire duration estimates based on combining MTBS fire perimeters with active-fire detections. Shrublands, woody savannas, and savannas and grasslands had lower correlations, with 27 %, 30 %, and 33 % of the variance explained, respectively. The orthogonal distance regression resulted in slopes close to the one-to-one line for shrublands and savannas and grasslands, indicating reasonable agreement. Fire duration was clearly underestimated for forested ecosystems with high fuel loads, as fires may continue to smolder for days (resulting in active-fire detections) after the fire has stopped expanding.

The comparison of Global Fire Atlas data to a small dataset (n=15) of daily perimeters of large wildfires in primarily forested cover types mapped by the US Forest Service yielded good correspondence between estimates of fire size, duration and expansion rate (Fig. 7). The improved comparison of fire size (cf. Figs. 5a and 7a) could be related to the US Forest Service data being more accurate than MTBS, but likely also represents the good performance of the Global Fire Atlas (e.g., compare Fig. 7a, b, and c to d, e, and f) and underlying burned-area products (Fusco et al., 2019) for relatively large fires. In contrast to the suggested underestimate of fire duration shown in Fig. 6a, these data suggest the Global Fire Atlas may slightly overestimate fire duration. This difference may reflect the fact that active-fire detections may be triggered by smoldering, while the burned-area product will only register the initial changes in surface reflectance from fire. Both comparisons (Figs. 6, 7b and e) suggest the Global Fire Atlas may overestimate the duration of smaller fires with relatively short duration, likely based on the uncertainty in underlying burn dates. Based on a small underestimate of overall burned area and an overestimate of fire duration by the Global Fire Atlas, the average daily fire expansion rates based on US Forest Service data were higher than estimates based on Global Fire Atlas data (Fig. 7c and f).

3.2 Characterizing global fire regimes

Over the 14-year study period, we identified 13 250 145 individual fires with an average size of 4.4 km² (Table 1) and a minimum size of one MODIS pixel (21 ha or 0.21 km²). On average, the largest fires were found in Australia (17.9 km²), boreal North America (6.0 km²) and Northern Hemisphere Africa (5.1 km²), while Central America (1.7 km²), equatorial Asia (1.8 km²) and Europe (2.0 km²) had the smallest average fire sizes (Table 1).
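Summaries of this kind can be reproduced from the distributed shapefiles. The sketch below assumes a geopandas environment; the file name and attribute names ("size", "duration") are assumptions for illustration, so consult the product documentation for the actual schema.

```python
import geopandas as gpd

# File and column names are assumptions; check the Global Fire Atlas
# documentation for the exact shapefile schema.
fires = gpd.read_file("Global_fire_atlas_V1_perimeter_2016.shp")

print("number of fires:", len(fires))
print("mean fire size (km2):", fires["size"].mean())
multiday = fires["duration"] > 1
print("burned-area share from multiday fires:",
      fires.loc[multiday, "size"].sum() / fires["size"].sum())
```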
Spatial patterns of the number of ignitions and fire sizes were markedly different and often inversely related (Fig. 8). Burned area in agricultural regions and parts of the humid tropics, particularly in Africa, resulted from high densities of fire ignitions and relatively small fires, consistent with widespread use of fire for land management. Large fires accounted for most of the burned area in arid regions, high latitudes, and other natural areas with low population densities and a sufficiently long season of favorable fire weather (Fig. 8).

Global patterns of fire duration and expansion rates provide new insight about the occurrence of large fires, as the size of each fire (km²) is the product of fire duration (d) and daily fire expansion rate (km² d⁻¹). Individual fires that burned for a week or more occurred frequently across the productive tropical grasslands and in boreal regions (Fig. 9a, Table 2). In these regions, fire duration exerted a strong control on fire size and total burned area. On average, human-dominated landscapes, such as deforestation frontiers or agricultural regions, experienced smaller and shorter fires compared to natural landscapes (Table 2). Fire duration was also relatively short in semiarid grasslands and shrublands characterized by high daily fire expansion rates, based on the development of long fire lines (Fig. 9b and c) and high velocities. In these semiarid regions, fire duration and size were likely limited by fuel availability and connectivity. In line with these findings, the largest average daily expansion rates were found in Australia (1.7 km² d⁻¹), Northern Hemisphere Africa (0.9 km² d⁻¹) and Southern Hemisphere Africa (0.9 km² d⁻¹), and the smallest expansion rates were found in Central America (0.3 km² d⁻¹), equatorial Asia (0.3 km² d⁻¹) and Southeast Asia (0.4 km² d⁻¹; Table 1).

The fastest fires occurred in arid grasslands and shrublands (Fig. 10a), where fuel structure, climate conditions and emergent properties of large wildfires contribute to high fire spread rates. Relatively high fire speeds were also observed in some parts of the boreal zone, particularly in central and western Canada. The lowest fire velocities were observed in infrequently burning humid tropical regions, where fire spread was influenced by higher fuel loads and humidity (Table 1). At all scales, estimated fire direction exhibited considerable complexity (Fig. 10b). With some regional exceptions, no clear dominant spread direction was found in South America or Africa. Based on the underlying 500 m data layers, landscape structure and drainage patterns played an important role in controlling individual fire spread direction in the humid tropics. Fire spread direction also varied considerably within individual fires, and the dominant direction typically represented less than half of the pixels. Fire spread direction was more consistent in the arid tropics, as demonstrated by the northwest and southeast orientation of fire spread in Australia, consistent with the dominant wind directions. At midlatitudes, we found evidence for more eastward and westward fire progression in Europe and Asia and a northwest and southeast spread direction in North America, broadly consistent with the orientation of mountain ranges and other topographic features within the key biomass burning regions.

3.3 Fire extremes

The world's largest individual fires were mostly found in sparsely populated arid and semiarid grasslands and shrublands of interior Australia, Africa, and Central Asia (Fig. 11a). Strikingly, fires of these proportions were nearly absent in North and South America, possibly due to higher landscape fragmentation and different management practices, including active fire suppression. In arid regions of Southern Africa and Australia, large fires typically followed La Niña periods (e.g., 2011 and 2012), when increased rainfall and productivity increase fuel connectivity (Chen et al., 2017). The largest fire in the Global Fire Atlas occurred in northern Australia, burning across 40 026 km² (about the size of Switzerland or the Netherlands) over a period of 72 d with an average speed of 19 km d⁻¹, following the 2007 La Niña. The longest fires burned for over 2 months in seasonal regions of the humid tropics and high-latitude forests (Fig. 11b). Drought conditions in 2007 and 2010 caused multiple fires to burn synchronously for over 2 months across tropical forests and savannas in South America. The highest fire velocities typically occurred in areas of low fuel loads. While fires larger than 2500 km² were nearly absent from arid grass and shrublands in North and South America, patterns of extremely fast-moving fires in arid grass and shrublands were similar to other continents. Fast-moving fires also showed evidence of synchronization, for example with several extremely fast fires that burned across the steppe of eastern Kazakhstan during 2003 (Fig. 11c).

4 Discussion

The Global Fire Atlas is the first freely available global dataset to provide daily information on seven key fire characteristics: ignition timing and location, fire size and duration, and daily expansion, fire line length, speed, and direction of spread based on moderate-resolution burned-area data. Over the 2003–2016 study period, we identified over 13 million individual fires (≥21 ha) (Table 1). Characteristics of these fires varied widely across ecosystems and land use types. In arid regions and other fire-prone natural landscapes, most of the burned area resulted from a small number of large fires (Fig. 8). Fire sizes declined along gradients of increasing rainfall and human activity, with larger numbers of small fires in the humid tropics or other human-dominated landscapes. Multiday fires were the norm across nearly all landscapes, with some large fires in productive tropical grasslands and boreal regions burning for over 2 months during drought periods (Fig. 11). The dominant control on fire size also varied across ecosystems: fire duration was the principal control on fire size in boreal forests, whereas fuels limited the size of fast-moving fires in arid grasslands and shrublands (Figs. 9 and 10).

Characterizing fire behavior across large scales is key for understanding fire–vegetation feedbacks, emissions estimates, fire prediction and effective fire management, as well as for building mechanistic models of fires within ecosystem models. Satellite remote sensing has been widely used to characterize global pyrogeography (Archibald et al., 2013) and fire–climate interactions (Westerling et al., 2006; Alencar et al., 2011; Morton et al., 2013a; Field et al., 2016; Young et al., 2017). Despite this progress, large-scale understanding of individual fire behavior has remained limited by the availability of consistent global-scale data products. Analysis and future refinement of the Global Fire Atlas may be useful in this context, providing new insight about the response of fires to different global change drivers.
Both climate and human activity exert a strong control on global burned area (Bowman et al., 2009) and contribute to rapidly changing fire regimes worldwide (Jolly et al., 2015; Andela et al., 2017; Earl and Simmonds, 2018). Moreover, increasing human presence in fire-prone ecosystems requires increased efforts to actively manage fires for ecosystem conservation and human well-being (Moritz et al., 2014; Knorr et al., 2016). The ignition location, spread and duration of individual fires can be used to address new questions in the field of fire–climate interactions and the changing influence of human activity on fire behavior, as each of these metrics may respond differently to variability or change. For example, recent studies have suggested that climate warming and drying may increase fire size and burned area in the tropics (Hantson et al., 2017) and at higher latitudes (Yang et al., 2015). Our findings suggest that an increase in the length of the fire season may be the dominant driver for increases in fire activity in these ecosystems, as fire duration was a strong control on eventual fire size and burned area (Figs. 8, 9 and 11). Investigating fire–climate interactions and human controls on burned area using the Fire Atlas data layers will benefit management efforts and scientific investigations, as fire alters vegetation structure (Bond et al., 2005; Staver et al., 2011), biogeochemical cycles (Bauters et al., 2018; Pellegrini et al., 2018) and climate (Randerson et al., 2006; Ward et al., 2012).

The Global Fire Atlas provides several new constraints that could improve the representation of fires in ecosystem and Earth system models. Fire models embedded in dynamic vegetation models are important tools for understanding the changing role of fires in the Earth system and the impacts of fires on ecosystems (Hantson et al., 2016; Rabin et al., 2017). Most global models of fire activity are calibrated using satellite-derived estimates of total burned area or active fires (Hantson et al., 2016), rather than individual fire characteristics such as fire size. As a result, many of these fire models capture the spatial distribution of global fire activity but not burned-area trends (Andela et al., 2017) or the interannual variability that may occur as a consequence of changes in fire spread rate or duration. Models range from simple empirical schemes to complex, process-based representations of individual fires (Hantson et al., 2016; Rabin et al., 2017). Process-based models estimate burned area as the product of fire ignitions and size, while many models include a dynamic rate of spread to determine eventual fire sizes (e.g., SPITFIRE; Thonicke et al., 2010) but use arbitrary threshold values for key parameters such as fire duration (Hantson et al., 2016). We found that global patterns of fire duration, ignition, size and rate of spread (i.e., speed) varied widely across ecosystems and human land management types, and thus these Global Fire Atlas data products provide additional pathways to benchmark models of various levels of complexity. While only a few models include multiday fires (e.g., Pfeiffer et al., 2013; Le Page et al., 2015; Ward et al., 2018), we found that multiday fires were the norm across most biomes and that fire duration forms an important control on eventual fire sizes and burned area in many natural ecosystems with abundant fuels.
Similarly, many models assume relatively homogeneous fuel beds, while our results suggest that landscape features and vegetation patterns result in highly heterogeneous fuel beds that form a strong control on fire spread (speed and direction). Accounting for the large differences in fire behavior across ecosystems and management strategies may improve fire emissions estimates and emission forecasting, particularly when combined with active-fire detections to better characterize different fire stages, including the smoldering phase (Kaiser et al., 2012). Recent studies have shown that fire emission factors may vary widely depending on fire behavior (van Leeuwen and van der Werf, 2011; Parker et al., 2016; Reisen et al., 2018), while improved knowledge of fire–climate interactions is crucial for emissions forecasting (Di Giuseppe et al., 2018).

The Global Fire Atlas methodology builds on a range of previous studies that have used daily moderate-resolution satellite imagery to estimate individual fire size (Archibald and Roy, 2009; Hantson et al., 2015; Frantz et al., 2016; Andela et al., 2017), shape (Nogueira et al., 2016; Laurent et al., 2018), duration (Frantz et al., 2016) and spread dynamics (Loboda and Csiszar, 2007; Coen and Schroeder, 2013; Sá et al., 2017). We provide the first fire-progression-based algorithm to map individual fires across all biomes, including the first global estimates of the timing and location of ignitions, fire size and duration, and daily expansion, fire line length, speed, and direction of spread. Several previous studies have estimated fire size distributions based on a flood fill algorithm, where all neighboring pixels within a certain time threshold are classified as the same fire (Archibald and Roy, 2009; Hantson et al., 2015). Interestingly, we found similar spatial patterns of fire size (cf. Fig. 8 and Archibald et al., 2013; Hantson et al., 2015), although absolute estimates may show large differences based on the “cutoff” value used within the flood fill approach (Oom et al., 2016) and, to a lesser extent, based on the fire persistence threshold used here. Spatial patterns of fire size and duration also compared favorably with estimates of Frantz et al. (2016) for southern Africa (Fig. 9a) and estimates of fire speed by Loboda and Csiszar (2007) for Central Asia (Fig. 10a).

Here, we compared our results to fire perimeter estimates from the MTBS (Eidenshink et al., 2007; Sparks et al., 2015). Moderate agreement was found for forested ecosystems and shrublands, but results differed more in grassland biomes (Fig. 5). Interestingly, we found that the poor agreement in grasslands stemmed from differences in the spatial and temporal resolution of the burned-area estimates (Fig. B2). In line with previous studies, we found that the coarser resolution (500 m) of the MODIS burned-area data used to develop the Global Fire Atlas sometimes underestimated overall burned area (e.g., Randerson et al., 2012; Rodrigues et al., 2019; Roteta et al., 2019), fragmenting individual large fires. However, the Landsat-based MTBS data at 30 m resolution were unable to distinguish individual fires within large burn patches of fast-moving grassland fires based on infrequent Landsat satellite overpasses (Fig. B2). An initial accuracy assessment of Global Fire Atlas fire perimeter estimates for the continental US revealed several important limitations and opportunities for further development of individual fire characterization using satellite burned-area data.
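For contrast with the fire-progression approach, the flood fill clustering mentioned above can be written as a union-find over neighbouring burned cells. This sketch follows the cited papers only loosely, and, as noted, the cutoff value drives the resulting size distribution (Oom et al., 2016).

```python
import numpy as np

def flood_fill_fires(burn_day, cutoff=4):
    """Cluster 8-connected burned cells whose burn dates differ by at most
    `cutoff` days into individual 'fires'. Returns an integer label grid
    (0 = unburned). A loose sketch of flood-fill clustering, not the
    Global Fire Atlas algorithm."""
    cells = list(zip(*np.nonzero(burn_day > 0)))
    index = {cell: i for i, cell in enumerate(cells)}
    parent = list(range(len(cells)))

    def find(a):                        # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for (r, c), i in index.items():
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                j = index.get((r + dr, c + dc))
                if j is not None and abs(int(burn_day[r, c])
                                         - int(burn_day[r + dr, c + dc])) <= cutoff:
                    parent[find(i)] = find(j)   # merge the two clusters

    labels = np.zeros(burn_day.shape, dtype=int)
    root_ids = {}
    for (r, c), i in index.items():
        labels[r, c] = root_ids.setdefault(find(i), len(root_ids) + 1)
    return labels
```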
In addition to the accuracy assessment of fire perimeters, we also investigated the temporal accuracy of the Global Fire Atlas (Fig. 4), as well as the fire duration estimates (Fig. 6) based on active-fire detections. Low to moderate correlations (r² ranging from 0.3 to 0.5) were found between Global Fire Atlas fire duration estimates and fire duration estimates based on a combination of MTBS fire perimeters and VIIRS active-fire detections. Disagreement partly originated from differences in fire perimeter estimates as well as differences between the day-of-burn estimates derived from the MCD64A1 burned-area data and VIIRS active-fire detections. Moreover, the uncertainty in the burn date of the underlying burned-area product is typically at least 1 d, resulting in a large uncertainty in the fire duration estimates of shorter fires (Fig. 6). The temporal accuracy of the Global Fire Atlas adjusted burned area, compared to VIIRS active-fire detections, ranged from 41 % on the same day and 80 % within ±1 d in shrublands to 24 % (same day) and 54 % (±1 d) in forests. However, in forested ecosystems the use of active-fire detections for validation purposes is not ideal, as fires may smolder for days, triggering active-fire detections after the fire front has passed. Understanding the temporal accuracy of the Global Fire Atlas products is important for linking individual fire dynamics to fire weather, and we found good agreement between Global Fire Atlas and US Forest Service fire expansion using a 3 d running average, but poorer agreement for individual days owing to burn date uncertainty (Fig. B1). Other parameters, including fire speed and direction of spread, were not validated at this stage. However, our comparison to daily fire perimeter estimates from the US Forest Service showed good agreement in terms of average expansion rates, suggesting reasonable overall estimates of speed (Fig. 7). Overall, there is a need to develop additional validation methodologies and data products to advance our understanding of satellite-derived estimates of individual fire behavior, building on the long-standing efforts for burned-area (Boschetti et al., 2009) and active-fire detections (Schroeder et al., 2008).

In addition to the Global Fire Atlas algorithm, the data quality also depends on the underlying global burned-area product (MCD64A1 Col. 6). In particular, several recent studies have shown that moderate-resolution burned-area products are unable to adequately map the occurrence of small fires ( ha) in the United States (Fusco et al., 2019) and savanna regions of Brazil (Rodrigues et al., 2019) and Africa (Roteta et al., 2019), resulting in a considerable underestimate of global burned area (Randerson et al., 2012; Giglio et al., 2018). Therefore, care should be taken when using the Global Fire Atlas for cropland regions or other regions dominated by small fires (see Fig. 8c). The quality of derived parameters in the Global Fire Atlas for these same regions also depends on the fire persistence threshold we used to identify when fires spread from one grid cell into the next. The thresholds we used may be more appropriate for analysis of fires in natural landscapes than in croplands with synchronized small fire activity across multiple adjacent fields.
Finally, daily burned-area products do not resolve the diurnal cycle of fire activity; fire lifetime and fire behavior may vary widely across fire regimes (Freeborn et al., 2011; Andela et al., 2015), and sub-daily fire dynamics cannot be resolved in the Global Fire Atlas. In line with these limitations, we found that Global Fire Atlas data performed best for large fires (Figs. 5, 6 and 7). Further development of the Fire Atlas product suite is possible based on improvements in the underlying burned-area data from multiple satellite sensors, as well as new active-fire products at higher spatial resolution (e.g., VIIRS). The Global Fire Atlas algorithm provides a flexible framework that can be easily adjusted to work at different spatial or temporal resolutions.

The data are freely available at http://www.globalfiredata.org (last access: 9 August 2018) and https://doi.org/10.3334/ORNLDAAC/1642 (Andela et al., 2019) in standard data product formats, and updates for subsequent years will be distributed pending availability of MCD64A1 burned-area data and associated research funding. Global per-fire-year shapefiles of the ignition locations (point) and individual fire perimeters (polygon) contain attribute tables with a unique fire ID, ignition location, start and end dates, size, duration, and average values of the daily expansion, fire line length, speed, and direction of spread (Fig. 1, Table A1). In addition, gridded 500 m global maps of the Global Fire Atlas adjusted burn dates, daily fire line, speed and direction of spread are available in GeoTIFF format. A monthly gridded GeoTIFF product is also available at 0.25° resolution. Global Fire Atlas data products can also be visualized and evaluated using an online tool at http://www.globalfiredata.org (last access: 9 August 2018) to explore individual fire characteristics for a selected region of interest.

The Global Fire Atlas is a new publicly available global dataset that includes data on seven key fire characteristics: ignition location and timing, fire size and duration, and daily expansion, fire line length, speed, and direction of spread. Over the 2003–2016 study period, we identified 13 250 145 individual fires (≥21 ha) based on the moderate-resolution MCD64A1 Col. 6 burned-area data. Striking differences were observed among global fire regimes along gradients of ecosystem productivity and human land use. In general, in ecosystems of abundant fuel and low human influence, large fires of long duration dominated the total burned area, with small fires contributing most to overall burned area in human-dominated regions or areas too wet for frequent fires. Fires moved quickly through arid ecosystems with low fuel densities, but fire sizes were eventually limited by fuels from natural or human landscape fragmentation. The dataset enables new lines of investigation for understanding vegetation–fire feedbacks, climatic and human controls on global burned area, fire forecasting, emissions modeling, and benchmarking of global fire models.

* Vector data are derived from the underlying 500 m MODIS data.

NA, DCM and JTR designed the study. NA carried out the data processing and analysis. All authors contributed to the interpretation of the results and writing of the manuscript.

The authors declare that they have no conflict of interest.

This work was supported by NASA's Carbon Monitoring System program (grant 80NSSC18K0179) and the Gordon and Betty Moore Foundation (grant GBMF3269).
We thank Thomas Mellin of the US Forest Service for granting access to the daily fire perimeters collected over the western US. This paper was edited by Vinayak Sinha and reviewed by two anonymous referees. Abreu, R. C. R., Hoffmann, W. A., Vasconcelos, H. L., Pilon, N. A., Rossatto, D. R., and Durigan, G.: The biodiversity cost of carbon sequestration in tropical savanna, Sci. Adv., 3, e1701284, https://doi.org/10.1126/sciadv.1701284, 2017. Alencar, A., Asner, G. P., Knapp, D., and Zarin, D.: Temporal variability of forest fires in eastern Amazonia, Ecol. Appl., 21, 2397–2412, https://doi.org/10.1890/10-1168.1, 2011. Andela, N., Kaiser, J. W., van der Werf, G. R., and Wooster, M. J.: New fire diurnal cycle characterizations to improve fire radiative energy assessments made from MODIS observations, Atmos. Chem. Phys., 15, 8831–8846, https://doi.org/10.5194/acp-15-8831-2015, 2015. Andela, N., Morton, D. C., Giglio, L., Chen, Y., Van Der Werf, G. R., Kasibhatla, P. S., Defries, R. S., Collatz, G. J., Hantson, S., Kloster, S., Bachelet, D., Forrest, M., Lasslop, G., Li, F., Mangeon, S., Melton, J. R., Yue, C., and Randerson, J. T.: A human-driven decline in global burned area, Science, 356, 1356–1362, https://doi.org/10.1126/science.aal4108, 2017. Andela, N., Morton, D. C., Giglio, L., and Randerson, J. T.: Global Fire Atlas with Characteristics of Individual Fires, 2003–2016, ORNL DAAC, Oak Ridge, Tennessee, USA, https://doi.org/10.3334/ORNLDAAC/1642, 2019. Aragão, L. E. O. C., Anderson, L. O., Fonseca, M. G., Rosan, T. M., Vedovato, L. B., Wagner, F. H., Silva, C. V. J., Silva Junior, C. H. L., Arai, E., Aguiar, A. P., Barlow, J., Berenguer, E., Deeter, M. N., Domingues, L. G., Gatti, L., Gloor, M., Malhi, Y., Marengo, J. A., Miller, J. B., Phillips, O. L., and Saatchi, S.: 21st Century drought-related fires counteract the decline of Amazon deforestation carbon emissions, Nat. Commun., 9, 536, https://doi.org/10.1038/s41467-017-02771-y, 2018. Archibald, S. and Roy, D. P.: Identifying individual fires from satellite-derived burned area data, IEEE Int. Geosci. Remote Se., 9, 160–163, https://doi.org/10.1109/IGARSS.2009.5417974, 2009. Archibald, S., Staver, A. C., and Levin, S. A.: Evolution of human-driven fire regimes in Africa, P. Natl. Acad. Sci. USA, 109, 847–852, https://doi.org/10.1073/pnas.1118648109, 2012. Archibald, S., Lehmann, C. E. R., Gómez-Dans, J. L., and Bradstock, R. A.: Defining pyromes and global syndromes of fire regimes, P. Natl. Acad. Sci. USA, 110, 6442–6447, https://doi.org/10.1073/pnas.1211466110, 2013. Balch, J. K., Bradley, B. A., Abatzoglou, J. T., Nagy, R. C., Fusco, E. J., and Mahood, A. L.: Human-started wildfires expand the fire niche across the United States, P. Natl. Acad. Sci. USA, 114, 2946–2951, https://doi.org/10.1073/pnas.1617394114, 2017. Bauters, M., Drake, T. W., Verbeeck, H., Bodé, S., Hervé-Fernández, P., Zito, P., Podgorski, D. C., Boyemba, F., Makelele, I., Cizungu Ntaboba, L., Spencer, R. G. M., and Boeckx, P.: High fire-derived nitrogen deposition on central African forests, P. Natl. Acad. Sci. USA, 115, 549–554, https://doi.org/10.1073/pnas.1714597115, 2018. Benali, A., Russo, A., Sá, A. C. L., Pinto, R. M. S., Price, O., Koutsias, N., and Pereira, J. M. C.: Determining fire dates and locating ignition points with satellite data, Remote Sens., 8, 326, https://doi.org/10.3390/rs8040326, 2016. Bond, W. J., Woodward, F. I., and Midgley, G. F.:
The global distribution of ecosystems in a world without fire, New Phytol., 165, 525–537, https://doi.org/10.1111/j.1469-8137.2004.01252.x, 2005. Boschetti, L., Roy, D. P., and Justice, C. O.: International Global Burned Area Satellite Product Validation Protocol, in: CEOS-CalVal, Part I – production and standardization of validation reference data, Committee Earth Obs. Satell., USA, 1–11, 2009. Bowman, D. M. J. S., Balch, J. K., Artaxo, P., Bond, W. J., Carlson, J. M., Cochrane, M. A., D'Antonio, C. M., Defries, R. S., Doyle, J. C., Harrison, S. P., Johnston, F. H., Keeley, J. E., Krawchuk, M. A., Kull, C. A., Marston, J. B., Moritz, M. A., Prentice, I. C., Roos, C. I., Scott, A. C., Swetnam, T. W., van der Werf, G. R., and Pyne, S. J.: Fire in the Earth system, Science, 324, 481–484, https://doi.org/10.1126/science.1163886, 2009. Brando, P. M., Balch, J. K., Nepstad, D. C., Morton, D. C., Putz, F. E., Coe, M. T., Silvério, D., Macedo, M. N., Davidson, E. A., Nóbrega, C. C., Alencar, A., and Soares-Filho, B. S.: Abrupt increases in Amazonian tree mortality due to drought-fire interactions, P. Natl. Acad. Sci. USA, 111, 6347–6352, https://doi.org/10.1073/pnas.1305499111, 2014. Chen, Y., Morton, D. C., Andela, N., Van Der Werf, G. R., Giglio, L., and Randerson, J. T.: A pan-tropical cascade of fire driven by El Niño/Southern Oscillation, Nat. Clim. Change, 7, 906–911, https://doi.org/10.1038/s41558-017-0014-8, 2017. Coen, J. L. and Schroeder, W.: Use of spatially refined satellite remote sensing fire detection data to initialize and evaluate coupled weather-wildfire growth model simulations, Geophys. Res. Lett., 40, 5536–5541, https://doi.org/10.1002/2013GL057868, 2013. Di Giuseppe, F., Rémy, S., Pappenberger, F., and Wetterhall, F.: Using the Fire Weather Index (FWI) to improve the estimation of fire emissions from fire radiative power (FRP) observations, Atmos. Chem. Phys., 18, 5359–5370, https://doi.org/10.5194/acp-18-5359-2018, 2018. Earl, N. and Simmonds, I.: Spatial and Temporal Variability and Trends in 2001–2016 Global Fire Activity, J. Geophys. Res.-Atmos., 123, 2524–2536, https://doi.org/10.1002/2017JD027749, 2018. Eidenshink, J., Schwind, B., Brewer, K., Zhu, Z.-L., Quayle, B., and Howard, S.: A Project for Monitoring Trends in Burn Severity, Fire Ecol., 3, 3–21, https://doi.org/10.4996/fireecology.0301003, 2007. Field, R. D., van der Werf, G. R., Fanin, T., Fetzer, E. J., Fuller, R., Jethva, H., Levy, R., Livesey, N. J., Luo, M., Torres, O., and Worden, H. M.: Indonesian fire activity and smoke pollution in 2015 show persistent nonlinear sensitivity to El Niño-induced drought, P. Natl. Acad. Sci. USA, 113, 9204–9209, https://doi.org/10.1073/pnas.1524888113, 2016. Frantz, D., Stellmes, M., Röder, A., and Hill, J.: Fire spread from MODIS burned area data: Obtaining fire dynamics information for every single fire, Int. J. Wildland Fire, 25, 1228–1237, https://doi.org/10.1071/WF16003, 2016. Freeborn, P. H., Wooster, M. J., and Roberts, G.: Addressing the spatiotemporal sampling design of MODIS to provide estimates of the fire radiative energy emitted from Africa, Remote Sens. Environ., 115, 475–489, https://doi.org/10.1016/j.rse.2010.09.017, 2011. Friedl, M. A., McIver, D. K., Hodges, J. C. F., Zhang, X. Y., Muchoney, D., Strahler, A. H., Woodcock, C.
E., Gopal, S., Schneider, A., Cooper, A., Baccini, A., Gao, F., and Schaaf, C.: Global land cover mapping from MODIS: algorithms and early results, Remote Sens. Environ., 83, 287–302, https://doi.org/10.1016/S0034-4257(02)00078-0, 2002. Fusco, E. J., Abatzoglou, J. T., Balch, J. K., Finn, J. T., and Bradley, B. A.: Quantifying the human influence on fire ignition across the western USA, Ecol. Appl., 26, 2388–2399, https://doi.org/10.1002/eap.1395, 2016. Fusco, E. J., Finn, J. T., Abatzoglou, J. T., Balch, J. K., Dadashi, S., and Bradley, B. A.: Detection rates and biases of fire observations from MODIS and agency reports in the conterminous United States, Remote Sens. Environ., 220, 30–40, https://doi.org/10.1016/j.rse.2018.10.028, 2019. Giglio, L., Loboda, T., Roy, D. P., Quayle, B., and Justice, C. O.: An active-fire based burned area mapping algorithm for the MODIS sensor, Remote Sens. Environ., 113, 408–420, https://doi.org/10.1016/j.rse.2008.10.006, 2009. Giglio, L., Randerson, J. T., and van der Werf, G. R.: Analysis of daily, monthly, and annual burned area using the fourth-generation global fire emissions database (GFED4), J. Geophys. Res.-Biogeo., 118, 317–328, https://doi.org/10.1002/jgrg.20042, 2013. Giglio, L., Boschetti, L., Roy, D. P., Humber, M. L., and Justice, C. O.: The Collection 6 MODIS burned area mapping algorithm and product, Remote Sens. Environ., 217, 72–85, https://doi.org/10.1016/j.rse.2018.08.005, 2018. Hantson, S., Pueyo, S., and Chuvieco, E.: Global fire size distribution is driven by human impact and climate, Global Ecol. Biogeogr., 24, 77–86, https://doi.org/10.1111/geb.12246, 2015. Hantson, S., Arneth, A., Harrison, S. P., Kelley, D. I., Prentice, I. C., Rabin, S. S., Archibald, S., Mouillot, F., Arnold, S. R., Artaxo, P., Bachelet, D., Ciais, P., Forrest, M., Friedlingstein, P., Hickler, T., Kaplan, J. O., Kloster, S., Knorr, W., Lasslop, G., Li, F., Mangeon, S., Melton, J. R., Meyn, A., Sitch, S., Spessa, A., van der Werf, G. R., Voulgarakis, A., and Yue, C.: The status and challenge of global fire modelling, Biogeosciences, 13, 3359–3375, https://doi.org/10.5194/bg-13-3359-2016, 2016. Hantson, S., Scheffer, M., Pueyo, S., Xu, C., Lasslop, G., Van Nes, E. H., Holmgren, M., and Mendelsohn, J.: Rare, Intense, Big fires dominate the global tropics under drier conditions, Sci. Rep.-UK, 7, 14374, https://doi.org/10.1038/s41598-017-14654-9, 2017. Humber, M. L., Boschetti, L., Giglio, L., and Justice, C. O.: Spatial and temporal intercomparison of four global burned area products, Int. J. Digit. Earth, 12, 460–484, https://doi.org/10.1080/17538947.2018.1433727, 2019. Johnston, F. H., Henderson, S. B., Chen, Y., Randerson, J. T., Marlier, M., Defries, R. S., Kinney, P., Bowman, D. M. J. S., and Brauer, M.: Estimated global mortality attributable to smoke from landscape fires, Environ. Health Persp., 120, 695–701, https://doi.org/10.1289/ehp.1104422, 2012. Jolly, W. M., Cochrane, M. A., Freeborn, P. H., Holden, Z. A., Brown, T. J., Williamson, G. J., and Bowman, D. M. J. S.: Climate-induced variations in global wildfire danger from 1979 to 2013, Nat. Commun., 6, 7537, https://doi.org/10.1038/ncomms8537, 2015. Kaiser, J. W., Heil, A., Andreae, M. O., Benedetti, A., Chubarova, N., Jones, L., Morcrette, J.-J., Razinger, M., Schultz, M. G., Suttie, M., and van der Werf, G. R.: Biomass burning emissions estimated with a global fire assimilation system based on observed fire radiative power, Biogeosciences, 9, 527–554, https://doi.org/10.5194/bg-9-527-2012, 2012. 
Kasischke, E. S. and Turetsky, M. R.: Recent changes in the fire regime across the North American boreal region – Spatial and temporal patterns of burning across Canada and Alaska, Geophys. Res. Lett., 33, L09703, https://doi.org/10.1029/2006GL025677, 2006. Knorr, W., Arneth, A., and Jiang, L.: Demographic controls of future global fire risk, Nat. Clim. Change, 6, 781–785, https://doi.org/10.1038/nclimate2999, 2016. Koplitz, S. N., Mickley, L. J., Marlier, M. E., Buonocore, J. J., Kim, P. S., Liu, T., Sulprizio, M. P., DeFries, R. S., Jacob, D. J., Schwartz, J., Pongsiri, M., and Myers, S. S.: Public health impacts of the severe haze in Equatorial Asia in September–October 2015: Demonstration of a new framework for informing fire management strategies to reduce downwind smoke exposure, Environ. Res. Lett., 11, 094023, https://doi.org/10.1088/1748-9326/11/9/094023, 2016. Laurent, P., Mouillot, F., Yue, C., Ciais, P., Moreno, M. V., and Nogueira, J. M. P.: FRY, a global database of fire patch functional traits derived from space-borne burned area products, Sci. Data, 5, 180132, https://doi.org/10.1038/sdata.2018.132, 2018. Lelieveld, J., Evans, J. S., Fnais, M., Giannadaki, D., and Pozzer, A.: The contribution of outdoor air pollution sources to premature mortality on a global scale, Nature, 525, 367–371, https://doi.org/10.1038/nature15371, 2015. Le Page, Y., Morton, D., Bond-Lamberty, B., Pereira, J. M. C., and Hurtt, G.: HESFIRE: a global fire model to explore the role of anthropogenic and weather drivers, Biogeosciences, 12, 887–903, https://doi.org/10.5194/bg-12-887-2015, 2015. Loboda, T. V. and Csiszar, I. A.: Reconstruction of fire spread within wildland fire events in Northern Eurasia from the MODIS active fire product, Global Planet. Change, 56, 258–273, https://doi.org/10.1016/j.gloplacha.2006.07.015, 2007. Moritz, M. A., Batllori, E., Bradstock, R. A., Gill, A. M., Handmer, J., Hessburg, P. F., Leonard, J., McCaffrey, S., Odion, D. C., Schoennagel, T., and Syphard, A. D.: Learning to coexist with wildfire, Nature, 515, 58–66, https://doi.org/10.1038/nature13946, 2014. Morton, D. C., Collatz, G. J., Wang, D., Randerson, J. T., Giglio, L., and Chen, Y.: Satellite-based assessment of climate controls on US burned area, Biogeosciences, 10, 247–260, https://doi.org/10.5194/bg-10-247-2013, 2013a. Morton, D. C., Page, Y. Le, Defries, R., Collatz, G. J., and Hurtt, G. C.: Understorey fire frequency and the fate of burned forests in southern Amazonia, Philos. T. R. Soc. B, 368, 20120163, https://doi.org/10.1098/rstb.2012.0163, 2013b. Nogueira, J. M. P., Ruffault, J., Chuvieco, E., and Mouillot, F.: Can we go beyond burned area in the assessment of global remote sensing products with fire patch metrics?, Remote Sens., 9, 7, https://doi.org/10.3390/rs9010007, 2016. Oom, D., Silva, P. C., Bistinas, I. and Pereira, J. M. C.: Highlighting biome-specific sensitivity of fire size distributions to time-gap parameter using a new algorithm for fire event individuation, Remote Sens., 8, 663, https://doi.org/10.3390/rs8080663, 2016. Parker, R. J., Boesch, H., Wooster, M. J., Moore, D. P., Webb, A. J., Gaveau, D., and Murdiyarso, D.: Atmospheric CH4 and CO2 enhancements and biomass burning emission ratios derived from satellite observations of the 2015 Indonesian fire plumes, Atmos. Chem. Phys., 16, 10111–10131, https://doi.org/10.5194/acp-16-10111-2016, 2016. Pellegrini, A. F. A., Ahlström, A., Hobbie, S. E., Reich, P. B., Nieradzik, L. P., Staver, A. C., Scharenbroch, B. 
C., Jumpponen, A., Anderegg, W. R. L., Randerson, J. T., and Jackson, R. B.: Fire frequency drives decadal changes in soil carbon and nitrogen and ecosystem productivity, Nature, 553, 194–198, https://doi.org/10.1038/nature24668, 2018. Pfeiffer, M., Spessa, A., and Kaplan, J. O.: A model for global biomass burning in preindustrial time: LPJ-LMfire (v1.0), Geosci. Model Dev., 6, 643–685, https://doi.org/10.5194/gmd-6-643-2013, 2013. Rabin, S. S., Melton, J. R., Lasslop, G., Bachelet, D., Forrest, M., Hantson, S., Kaplan, J. O., Li, F., Mangeon, S., Ward, D. S., Yue, C., Arora, V. K., Hickler, T., Kloster, S., Knorr, W., Nieradzik, L., Spessa, A., Folberth, G. A., Sheehan, T., Voulgarakis, A., Kelley, D. I., Prentice, I. C., Sitch, S., Harrison, S., and Arneth, A.: The Fire Modeling Intercomparison Project (FireMIP), phase 1: experimental and analytical protocols with detailed model descriptions, Geosci. Model Dev., 10, 1175–1197, https://doi.org/10.5194/gmd-10-1175-2017, 2017. Randerson, J. T., Liu, H., Flanner, M. G., Chambers, S. D., Jin, Y., Hess, P. G., Pfister, G., Mack, M. C., Treseder, K. K., Welp, L. R., Chapin, F. S., Harden, J. W., Goulden, M. L., Lyons, E., Neff, J. C., Schuur, E. A. G., and Zender, C. S.: The impact of boreal forest fire on climate warming, Science, 314, 1130–1132, https://doi.org/10.1126/science.1132075, 2006. Randerson, J. T., Chen, Y., van der Werf, G. R., Rogers, B. M., and Morton, D. C.: Global burned area and biomass burning emissions from small fires, J. Geophys. Res., 117, G04012, https://doi.org/10.1029/2012JG002128, 2012. Reisen, F., Meyer, C. P., Weston, C. J., and Volkova, L.: Ground-based Field Measurements of PM2.5 Emission Factors from Flaming and Smouldering Combustion in Eucalypt Forests, J. Geophys. Res.-Atmos., 123, 8301–8314, https://doi.org/10.1029/2018JD028488, 2018. Rodrigues, J. A., Libonati, R., Pereira, A. A., Nogueira, J. M. P., Santos, F. L. M., Peres, L. F., Santa Rosa, A., Schroeder, W., Pereira, J. M. C., Giglio, L., Trigo, I. F., and Setzer, A. W.: How well do global burned area products represent fire patterns in the Brazilian Savannas biome? An accuracy assessment of the MCD64 collections, Int. J. Appl. Earth Obs., 78, 318–331, https://doi.org/10.1016/J.JAG.2019.02.010, 2019. Roteta, E., Bastarrika, A., Padilla, M., Storm, T., and Chuvieco, E.: Development of a Sentinel-2 burned area algorithm: Generation of a small fire database for sub-Saharan Africa, Remote Sens. Environ., 222, 1–17, https://doi.org/10.1016/j.rse.2018.12.011, 2019. Sá, A. C. L., Benali, A., Fernandes, P. M., Pinto, R. M. S., Trigo, R. M., Salis, M., Russo, A., Jerez, S., Soares, P. M. M., Schroeder, W., and Pereira, J. M. C.: Evaluating fire growth simulations using satellite active fire data, Remote Sens. Environ., 190, 302–317, https://doi.org/10.1016/j.rse.2016.12.023, 2017. Scholes, R. J. and Archer, S. R.: Tree-grass interactions in savannas, Annu. Rev. Ecol. Syst., 28, 517–544, https://doi.org/10.1146/annurev.ecolsys.28.1.517, 1997. Schroeder, W., Prins, E., Giglio, L., Csiszar, I., Schmidt, C., Morisette, J., and Morton, D.: Validation of GOES and MODIS active fire detection products using ASTER and ETM+ data, Remote Sens. Environ., 112, 2711–2726, https://doi.org/10.1016/j.rse.2008.01.005, 2008. Schroeder, W., Oliva, P., Giglio, L., and Csiszar, I. A.: The New VIIRS 375 m active fire detection data product: Algorithm description and initial assessment, Remote Sens. Environ., 143, 85–96, https://doi.org/10.1016/j.rse.2013.12.008, 2014. Sparks, A.
M., Boschetti, L., Smith, A. M. S., Tinkham, W. T., Lannom, K. O., and Newingham, B. A.: An accuracy assessment of the MTBS burned area product for shrub-steppe fires in the northern Great Basin, United States, Int. J. Wildland Fire, 24, 70–78, https://doi.org/10.1071/WF14131, 2015. Staver, A. C., Archibald, S., and Levin, S. A.: The Global Extent and Determinants of Savanna and Forest as Alternative Biome States, Science, 334, 230–232, https://doi.org/10.1126/science.1210465, 2011. Taylor, A. H., Trouet, V., Skinner, C. N., and Stephens, S.: Socioecological transitions trigger fire regime shifts and modulate fire – climate interactions in the Sierra, P. Natl. Acad. Sci. USA, 113, 13684–13689, https://doi.org/10.1073/pnas.1609775113, 2016. Thonicke, K., Spessa, A., Prentice, I. C., Harrison, S. P., Dong, L., and Carmona-Moreno, C.: The influence of vegetation, fire spread and fire behaviour on biomass burning and trace gas emissions: results from a process-based model, Biogeosciences, 7, 1991–2011, https://doi.org/10.5194/bg-7-1991-2010, 2010. van der Werf, G. R., Dempewolf, J., Trigg, S. N., Randerson, J. T., Kasibhatla, P. S., Giglio, L., Murdiyarso, D., Peters, W., Morton, D. C., Collatz, G. J., Dolman, A. J., and DeFries, R. S.: Climate regulation of fire emissions and deforestation in equatorial Asia, P. Natl. Acad. Sci. USA, 105, 20350–20355, https://doi.org/10.1073/pnas.0803375105, 2008. van Leeuwen, T. T. and van der Werf, G. R.: Spatial and temporal variability in the ratio of trace gases emitted from biomass burning, Atmos. Chem. Phys., 11, 3611–3629, https://doi.org/10.5194/acp-11-3611-2011, 2011. Veraverbeke, S., Sedano, F., Hook, S. J., Randerson, J. T., Jin, Y., and Rogers, B. M.: Mapping the daily progression of large wildland fires using MODIS active fire data, Int. J. Wildland Fire, 23, 655–667, https://doi.org/10.1071/WF13015, 2014. Ward, D. S., Kloster, S., Mahowald, N. M., Rogers, B. M., Randerson, J. T., and Hess, P. G.: The changing radiative forcing of fires: global model estimates for past, present and future, Atmos. Chem. Phys., 12, 10857–10886, https://doi.org/10.5194/acp-12-10857-2012, 2012. Ward, D. S., Shevliakova, E., Malyshev, S., and Rabin, S.: Trends and Variability of Global Fire Emissions Due To Historical Anthropogenic Activities, Global Biogeochem. Cy., 32, 122–142, https://doi.org/10.1002/2017GB005787, 2018. Westerling, A. L., Hidalgo, H. G., Cayan, D. R., and Swetnam, T. W.: Warming and earlier spring increase western US forest wildfire activity, Science, 313, 940–943, https://doi.org/10.1126/science.1128834, 2006. Yang, J., Tian, H., Tao, B., Ren, W., Pan, S., Liu, Y., and Wang, Y.: A growing importance of large fires in conterminous United States during 1984–2012, J. Geophys. Res.-Biogeo., 120, 2625–2640, https://doi.org/10.1002/2015JG002965, 2015. Young, A. M., Higuera, P. E., Duffy, P. A., and Hu, F. S.: Climatic thresholds shape northern high-latitude fire regimes and imply vulnerability to future climate change, Ecography, 40, 606–617, https://doi.org/10.1111/ecog.02205, 2017.
The IELTS test is administered by the British Council, the University of Cambridge, and IELTS Australia. In other words, it is connected with the British government, and it was traditionally used by British, Australian, and New Zealand universities to determine the language ability of foreign students. TOEFL is administered by ETS, a U.S. nonprofit organization, and is used by American and Canadian universities. These days, however, to make things easier for international students, universities around the world accept both TOEFL and IELTS. You should therefore check with the specific university you want to apply to; virtually every school in the United States, United Kingdom, Australia, or New Zealand will state which test results it accepts. So, for your peace of mind, select the test you think will be easier for you to complete. To do this, you will probably need to know the structure of each exam.

Structure of the TOEFL

Since last year, the official TOEFL has been given almost exclusively in the iBT (Internet Based Testing) format. It consists of four sections:

The TOEFL reading section asks you to read 4–6 university-level passages and to answer multiple-choice questions about them (multiple choice means you choose the answer from the options provided). The questions test your understanding of the text's main ideas, important details, vocabulary, rhetorical devices, inference, and style.

The listening section presents 2–3 long conversations and 4–6 lectures. The situations are always related to university life: for instance, a conversation between a student and a librarian about research materials, or a lecture in a history class. The questions are multiple choice and ask about important details, inferences, tone, and vocabulary. The conversations and lectures are very natural, with informal English, interruptions, and filler noises like "uh" or "uhm."

The speaking section is recorded. You speak into a microphone, and a grader evaluates your answers at a later date. Two questions are on familiar topics and ask you to give your opinion and/or describe something you know, like your town or your favorite teacher. Two questions ask you to summarize information from a text and a lecture, and may also ask for your opinion. Two questions ask you to summarize information from a short conversation. Again, the topics are always related to university life.

Finally, there are two short essays on the TOEFL. One asks you to write your opinion on a broad topic, such as whether it is better to live in the country or the city. The other asks you to summarize information from a text and a lecture; often the two will conflict, and you need to compare and contrast them, or synthesize the conflicting information.

Structure of the IELTS

IELTS contains the same four sections (reading, listening, speaking, and writing), but the format is very different.

The reading section of the IELTS gives you three texts, which may come from academic textbooks or from a newspaper or magazine; at least one is always an opinion piece, i.e., a text arguing for a point of view. The variety of question types on the IELTS is quite broad, and not every text will have every type. One type asks you to match headings to the paragraphs of the text. You may be asked to complete a summary of the passage using words from the text, or to complete a table, chart, or picture with words from the text. There are multiple-choice questions that test the most important details. One of the most difficult question types presents statements and asks whether they are true, false, or not included in the text. You may also be asked to match words and ideas.
Finally, some questions are short-answer, but the answers are taken directly from the text itself. Some questions come before the text and may not require a careful reading to answer; others come after the text and may expect you to read the text carefully.

IELTS has four listening sections. The first is a "transactional conversation" in which someone applies for something (a driver's license, a library card) or asks for information (for example, requesting more details about an advertisement or a hotel). The second part is an informational lecture of some sort, perhaps a dean explaining the rules of the university. Third is a conversation in an academic context, and the final section is an academic lecture. For all sections you may be asked to fill out a summary, fill in a table, answer multiple-choice questions, label a diagram or picture, or classify information into several categories. You are expected to complete the answers as you listen.

There are two writing tasks on the academic IELTS. The first asks you to summarize a table or chart in at least 150 words. You need to identify important information, compare and contrast different figures, or perhaps describe a process. The second task asks for your opinion on a statement about a fairly open topic, such as "Women should take care of children and not work" or "Too many people are moving to cities while rural areas suffer."

Finally, the speaking section takes place on a different day from the rest of the exam and in the presence of a trained interviewer. The questions are the same for all examinees, but some parts take the form of a conversation and others a monologue. The first part of the test is a brief introductory conversation followed by some short questions about familiar topics. The interviewer may ask your name, your job, what kinds of sports you like, what your daily routine is, and so on. In the second part, you are given a card with a specific topic and a few questions to address. You need to speak for two minutes on this topic, which may be your daily routine, the last time you went to the cinema, your favorite part of the world, or a similar subject. In the last section, the interviewer asks you to discuss a more abstract side of the topic in Part 2: Why do people prefer routines? Why do people like the movies? How does travel affect local life?

What's best for me?

So now you have some understanding of what each test involves, but you may be wondering which is best for you. Perhaps in reading about the structure, you thought, "Wow, the TOEFL sounds so easy," or "Oh, the IELTS sounds kind of fun!" That could be a good sign that one test will be easier for you than the other. More specifically, there are some important differences between the tests.

British and American English

While the UK and the U.S. accept both tests, and while British English and American English are not as different as some think, the fact of the matter is that the IELTS tends toward British English while the TOEFL uses American English exclusively. On the IELTS, this difference has a greater effect because many answers must be written down, and spelling is an area where Britain and the United States do not always see eye to eye. Of course, you may also have problems with British accents (and the test can include a variety of accents, including Australian, New Zealand, Irish, and Scottish ones). On the other hand, American accents may throw you off. Some words are different, too, and you do not want to waste time during your test trying to remember which word means an apartment and which one a truck.
Whether you are used to British or American English is certainly a factor. If you are more comfortable with American English, the TOEFL is a good bet, but if you are used to British English and accents, you will do better on the IELTS.

Multiple choice versus Copying Down

In the reading and listening sections, the TOEFL gives you multiple-choice questions, whereas the IELTS generally expects you to copy down words from the text or the conversation word for word. Multiple-choice questions usually require somewhat more abstract thinking, but the IELTS favors people who have good memories and think more concretely. The good thing about multiple choice is that wrong answers are easy to eliminate; the good thing about copying down is that the answer is sitting right there in the text. You just have to find it and repeat it. Concrete thinkers will tend to do better on the IELTS, and abstract thinkers will tend to excel on the TOEFL.

Predictable or different each time

Of course, the TOEFL is also more predictable than the IELTS. The IELTS throws lots of different types of questions at you, and the instructions are often slightly different each time. This makes it difficult to prepare for. The TOEFL, on the other hand, is pretty much the same test every time: choose A, B, C, D, or E. Then again, the IELTS might keep you on your toes and keep you awake.

In a conversation with a person or a computer?

Another big difference is how the speaking section is conducted. For some people, it is very relaxing to speak your answers into a computer, because it feels like no one is listening. You just do your best and forget about it until you get your scores. Because the IELTS takes place in an interview format with a native speaker, you may feel nervous about being evaluated. And the examiner takes notes: oh my God, is he writing something good or something bad? On the other hand, you may feel more relaxed in a conversation with someone who can explain a question if you do not understand it, rather than staring at a computer screen. Evaluation by a native speaker can also be useful for correcting mistakes and improving during the test. So it depends on what you are comfortable with. If you like talking to people, the IELTS is a better bet. If you would rather work alone and not feel judged, the TOEFL will be more comfortable for you.

Holistic versus criteria

Finally, the oral and written parts of the TOEFL exam are graded holistically. The grader gives you a score based on the overall quality of the response, including vocabulary, logic, style, and grammar. The IELTS, by contrast, is marked against individual criteria: you are scored separately on grammar, word choice, fluency, logic, cohesion, and a dozen other measures. In other words, if you write well but make a lot of small grammar mistakes, your TOEFL score might still be very good, because graders will overlook small mistakes if the overall essay is logical and detailed. The IELTS does not forgive bad grammar. On the other hand, if your grammar and vocabulary are strong but you have trouble expressing your opinion or organizing an essay, you could end up with a low TOEFL score, but the IELTS will give you good ratings for your use of language. So while it looks like the IELTS is much more difficult because you are marked on everything, in fact you can get a good score if you are strong enough in a number of areas. The TOEFL emphasizes the ability to develop a logical and detailed argument (or summary) and looks at clarity, word choice, and style first.
If you do not feel comfortable writing essays, but you think you have excellent grammar and vocabulary and are overall a decent writer, the IELTS will probably be easier for you.
Indigenous Peoples of the Americas

Letters sent by the Office of Indian Affairs, Pima and Maricopa Agency
The Office of Indian Affairs, now called the Bureau of Indian Affairs, was first established in 1824 as part of the War Department. In 1849 it was transferred to the Department of the Interior. The Pima Agency was established in Arizona in 1859, and ...

Arizona Commission of Indian Affairs Records
Established in 1953 by the Arizona Legislature, the Arizona Commission of Indian Affairs had as its original mission to study the condition of American Indians residing in the state. Comprised of 15 members, the Commission includes 7 Indian and 2 non ...
An experimental investigation of heat transfer during pool boiling of two nanofluids, i.e. water-Al2O3 and water-Cu, has been carried out. Nanoparticles were tested at concentrations of 0.01%, 0.1%, and 1% by weight. Horizontal smooth stainless steel tubes of 10 mm OD and 0.6 mm wall thickness formed the test heater. The experiments were performed to establish the influence of nanofluid concentration on heat transfer characteristics during boiling at different absolute operating pressures, i.e. 200 kPa, ca. 100 kPa (atmospheric pressure), and 10 kPa. It was established that, independent of nanoparticle material (Al2O3 or Cu) and concentration, an increase of operating pressure enhances heat transfer. Likewise, independent of operating pressure (sub-atmospheric, atmospheric, or overpressure), an increase of nanoparticle concentration caused heat transfer augmentation.
DITCH (MOTLEY COUNTY)

DITCH (Motley County). The Ditch rises twelve miles northeast of Matador in west central Motley County (at 34°09' N, 100°42' W) and runs east through isolated ranchland for eight miles to its mouth on the Middle Pease River, one mile north of the site of Tee Pee City (at 34°07' N, 100°35' W). An early Matador Ranch map indicated two creeks, Spring and Brush, in the area of the stream now called the Ditch. The Matador map stated that the region contained mostly high-quality grazing land.

The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: "DITCH (MOTLEY COUNTY)," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/rbdpz), accessed March 11, 2014. Uploaded on June 12, 2010. Published by the Texas State Historical Association.
Lack of hurricanes may be due to Brazil drought, expert says
by John Nelander

On Wednesday, we'll have matched the 2002 mark for the latest first hurricane of the post-1966 satellite era. If no hurricane forms in the Atlantic by the following Monday, Sept. 16, it will tie the all-time latest first hurricane record set in 1941. Of course this is a good thing, but weather researchers are wondering why so many tropical storms (seven so far) have not developed further during the height of the season.

A Weather Underground blogger came up with one possible explanation last week: a drought in Brazil has been pumping dry air into the Atlantic, causing tropical storms to fizzle out. Lee Grenci, a retired professor of meteorology at Penn State, cites low relative humidity over the Atlantic "associated with a regional spell of dry weather" in August and early September. According to Weather Underground's founder, Jeff Masters, the drought in Brazil is the costliest natural disaster in the country's history at $8.3 billion.

Meanwhile, Tropical Depression Eight formed off the east coast of Mexico Friday afternoon with sustained winds of 35 mph, but it moved onshore within hours and didn't have an opportunity to reach tropical storm status. It was the third tropical cyclone to hit northeastern Mexico this year. Tropical Storm Barry formed on June 17 and made landfall north of Veracruz with winds of 45 mph. Tropical Storm Fernand formed on Aug. 25 and moved inland at almost the same location with 50 mph winds.

The National Hurricane Center was watching two other areas of potential development over the weekend, including a strong tropical wave emerging from the west coast of Africa. Forecasters gave it an 80 percent chance of becoming a depression, or a tropical storm, by Thursday. In addition, the remnants of Tropical Storm Gabrielle showed signs of regeneration north of Hispaniola. The system is expected to move north or northeast over the next five days. It should be no threat to Florida.

* * *

Almost a half-inch of rain fell in Palm Beach and at Palm Beach International Airport Friday night and early Saturday morning as low pressure sagged south over the peninsula. It was expected to stall over Lake Okeechobee and deliver more rain through Saturday, followed by a drying trend on Sunday and Monday.

More South Florida records fell late in the week, including one on Wednesday at PBIA. The low dropped only to 83 degrees, another record high minimum temperature, beating the previous record of 80 set in 1983. On Friday, a record minimum temperature was tied in Miami with a reading of 82, matching the mark set in 1977. Naples set a one-day rainfall record Friday with 2.57 inches, smashing the mark of 1.11 inches set in 1971.
Nicolaus Ruterius was an important figure in the Burgundian court in the 15th century. Hailing from Luxemburg, his real name was Nicolaas Ruter. Ruterius arrived in Leuven when he was appointed provost of the Sint-Pieterskerk (St Peter's Church) and also Chancellor of the University of Leuven. Ruterius rose to become Bishop of Atrecht (modern-day Arras in France) in 1502.

Now, Ruterius was a friend of the Professor of Theology Jan Standonck, who in 1500 founded a college exclusively for students of little means. According to Standonck, the lack of money forced the students to obey this college's extremely strict regime, which was highly conducive to the instruction of future monks or persons who truly fear God. With further encouragement from his other friends, Johannes Robbijns, Dean of the Sint-Romboutskathedraal (Saint Rumbold's Cathedral) in Mechelen, and Adrianus Florenszoon, Senior Lecturer in Theology, Dean of Sint-Pieterskerk (St Peter's Church), and later Pope Adrianus VI, Ruterius bought a parcel of land between the land owned by Joost Absoloens (later the Premonstreitcollege) and the domain of the Van 't Sestich family. In 1508, the Atrechtcollege was officially established, named after the bishopric of Ruterius. With places for 13 poor students in arts, the college was reserved for young men from Atrecht, Haarlem, Kamerijk, Luxemburg and Leuven.

The Richest College for the Poorest Students

Fortunately for the poor students, the Atrechtcollege was destined to grow wealthier and wealthier after its founding. Ruterius showered it with his own resources, and his powerful friends did the same. Between the 17th and 18th century, the college grew in size and intake. In 1633, its president Jan Schinckels built a new building in the big garden for students, as well as a principal building, the "groete huys", for himself. In 1774-1776, the college underwent a complete renovation under president Gerard Deckers, which transformed the site into a "modern" French "hôtel" with a courtyard and a garden. Much of what we see today remains from that time.

From a boys' school to a girls' school

After the dissolution of the University of Leuven in 1797, the Atrechtcollege passed into private hands. In 1807, it was bought by a wine trader, De Bruyn. It is said that he was the one who planted the Japanese pagoda tree (Japanse honingboom) in the front garden in 1818. Under De Bruyn's ownership the property also lost a huge chunk of the original garden, which was taken by the city of Leuven in 1870-1871 to form part of the new Sint-Donatiuspark.

In 1921, the whole complex was bought by the Congregation of the Daughters of Maria of Paridaens, who, in the name of the new University of Leuven, made the site into a girls' school. More renovation work was done to adapt the buildings to this function. It was not until 1979, under the university's director-engineer A. Verheyden, that the site was brought back under the university itself.

The Tree of Deep Sorrow – 'Boom van het groot verdriet'

It was during the years between 1921 and 1977, when the Atrechtcollege was a girls' school, that the Japanese pagoda tree grew into a big tree. It is said that boys would loiter in its shade outside, as their girlfriends had to return to the college by seven in the evening. Hence the tree came to be known as the Tree of Deep Sorrow (Boom van het groot verdriet).

The Heavenly Sphere of Ferdinand Verbiest

Father Ferdinand Verbiest was a West Flemish Jesuit missionary (9 October 1623 – 28 January 1688).
An accomplished mathematician and astronomer, Father Verbiest traveled to China and presented this sphere to the Emperor Kangxi. Known in China as Nan Huairen (南懷仁), Verbiest became a trusted subject of the Emperor and corrected the calendar used by the court in 1670. In return, Verbiest was allowed to spread his religion among the Chinese subjects in the capital. The replica sphere that you find in the inner courtyard of the Atrechtcollege was made in 1775.

Today, the college houses the International Office of the KU Leuven. The front garden on the street side and the Japanese pagoda tree became protected monuments and landscape in 1974.
Mass meetings of all kinds soon became in demand. Audiences marvelled at the emotions created by huge crowds, especially in urban communities, where residents could suspend their normal isolation and anonymity and see themselves literally gathered as a single entity. Boston's National Peace Jubilee, for instance, developed by band leader Patrick Gilmore in 1869 to celebrate the end of the Civil War, was the largest single concert gathering in the United States, including an orchestra of 1,000 (along with a total of 11,000 singers) and an audience of 50,000, all in a temporary coliseum constructed entirely out of wood.

[Image: Interior of the Coliseum of the 1869 National Peace Jubilee]

In 1872, Gilmore did it again with the World Peace Jubilee. Architect William G. Preston redesigned the since-demolished 1869 coliseum on an even bigger scale, with a seating capacity of 100,000 (and a 2,000-piece orchestra and a chorus of 20,000). At events like this, the thunder of applause was as interesting as any performer; indeed, the physicality of the moment created excitement out of risk as much as pleasure. Flyers for the 1869 Jubilee advertised "AN IMMENSE COLISEUM, The largest structure in America, capable of accommodating FIFTY THOUSAND PERSONS, has been erected especially for this occasion." And, as one writer said about the Peace Jubilee in 1872: "Not the least moral feature of the Festival is the applause,--so overwhelming in its demonstration that timid souls have said their prayers and trusted blindly in the stability of wooden rafters."

Gilmore tried to downplay this risky excitement in his own accounts of the festivals, but clearly it was part of the appeal: "The builders, contractors, architects, and building committee were all gentlemen of great experience, and fully appreciated the responsibility of their task. They knew that the safety and security of Fifty Thousand lives were in their hands, and they took every precaution to guard against accident by making the structure strong and solid enough to bear ten times the weight and pressure to which it would ever be subjected…From morning till night, for weeks and months, the Building Committee, one or all, were almost constantly on the ground, watching every inch of progress made. Fully satisfied that everything possible was being done which the knowledge and experience of the builders and their own foresight could suggest to make the structure safe beyond a doubt, they turned a deaf ear to the malicious rumors that would have swept away all confidence…" (From Patrick Gilmore, History of the National Peace Jubilee and Great Musical Festival, 1871: 277).

Big audiences didn't disappear; you start to see the impact of mass meetings and concerts especially in sports, where stadium construction increased to accommodate audiences for baseball and football. As an article from a gymnasium construction company states, "It is announced in the newspaper column that the Yale football management is to make a determined effort to accommodate all the spectators who may wish to attend the Yale-Harvard football game this fall, a condition of affairs which has not existed in previous years. Some 4,000 additional seats are to be erected, giving the gridiron stands a seating capacity of about 31,000. The new stands will be so erected as to close the corners and fill in the sides. Comparison with the Harvard stadium, used for the first time last year, still shows a greater capacity at Cambridge.
The Harvard stadium proper seats about 25,000 persons, but by means of additional stands 35,000 can be accommodated." (American Gymnasia and Athletic Record, Vol. 1, September 1904-August 1905. Boston: American Gymnasia Company: 11)

Temporary "stands" were one thing; it became clear by the 1890s that the lasting appeal of sports would require more scientific examination of how to create solid, safe, and enduring audience structures. I'm only beginning the exploration of this particular science, but here's a glimpse, from a paper on "live loads" (another term for audience!) delivered at the meeting of the American Society of Civil Engineers in 1904:

"It is freely admitted the writer's results give figures greatly in excess of those given by the accepted authorities (outside of some municipal building laws), both in the United States and in Europe, but the experiment is one very easily tried by anyone who may feel unconvinced. Doubtless, mixed crowds of men and women, such as football spectators, may weigh less per square foot, with an equal degree of personal discomfort, than the body of students in the writer's experiments. It should be remembered that a closely packed crowd is not likely to be in a mood to take calmly any undue deflection or appearance of weakness in the floor, and the result of such seeming insecurity is not pleasant to contemplate. In the writer's opinion, such floors as those of passageways, corridors, standing-room in theaters, assembly rooms without fixed seats, ballrooms, etc., should be calculated for a weight closely approaching 150 lb. per sq. ft., or, in some cases even more, without exceeding the unit stresses of Mr. Schneider's Paragraph 17. Possibly, a large standing assemblage, such as is common at political meetings, likely to applaud by stamping; or, a throng of dancers; or a body of drilling soldiers, might call for an additional impact provision." (C. C. Schneider, "The Structural Design of Buildings." Transactions of the American Society of Civil Engineers, Paper No. 997: 443-444)

[Image: A figure from C. C. Schneider's 1904 paper on "live loads."]
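Schneider's figure of roughly 150 lb per square foot translates directly into design arithmetic. The short sketch below is a modern illustration rather than anything from the 1904 paper: it computes the total live load a standing-room section would need to carry, with an optional multiplier standing in for the "impact provision" suggested for stamping or dancing crowds. The dimensions and the multiplier value are invented for the example.

```python
def total_live_load_lb(area_sqft, load_psf=150.0, impact_factor=1.0):
    """Total design live load in pounds for an assembly area.

    load_psf: design live load per square foot; the 1904 discussion argues
    for values approaching 150 lb/sq ft for corridors, standing room,
    ballrooms, and similar spaces.
    impact_factor: crude allowance for stamping or dancing crowds.
    """
    return area_sqft * load_psf * impact_factor

# Illustrative: a 40 ft x 100 ft standing-room section
area = 40 * 100                                      # 4,000 sq ft
print(total_live_load_lb(area))                      # 600000.0 lb (300 tons)
print(total_live_load_lb(area, impact_factor=1.25))  # 750000.0 lb with allowance
```

Even this crude arithmetic makes vivid why engineers of the period worried about "live" audiences: a modest standing section implies hundreds of tons of moving, stamping weight.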
The African American culture is one of the richest in historical traditions, artistic valor, and communal practice. It carries with it the meaning of family across the African diaspora, transforming those who are called neighbor into brother and sister. People of African descent are one people with many varied parts, continuing a journey to find some manner of togetherness that doesn't separate us by religion, creed, gender, political agenda, or educational achievement.

Dr. Maulana Karenga established a means for people of African descent to reconnect to their African heritage and cultural traditions. Rooted in the Black nationalist movement of the 1960's, Kwanzaa came to be grounded in seven principles of African heritage, known as the Nguzo Saba, which, according to Karenga, compose a communitarian African philosophy, or the best of African thought and practice.

I currently serve at a "holiday neutral" school where the student majority is of African descent. Although Kwanzaa is not a formalized holiday, with respect to what our school calendar recognizes as such, mention of it was absent. I at first wondered if this was just a local school or organizational issue, until I also realized that parents in the community did not largely inquire about it and therefore accepted its absence. This appears to be a systemic issue.

Kwanzaa was instituted as a means to reaffirm the human agency and cultural dignity of people of African descent. This agency was disrupted during enslavement, as persons who owned enslaved Africans forced a displacement of practices that were intrinsically African. In its stead, Christianity was often misused to justify the institution of slavery. Therefore, upon the birth and annual observance of Kwanzaa, people of African descent who do not honor the Christmas holiday, which is rooted in the Christian belief system, were able to relocate their own African spirituality and practice. In 1977, Karenga wrote in his book Kwanzaa: Origin, Concepts, Practice: "Kwanzaa is not an imitation, but an alternative, in fact, an oppositional alternative to the spookism, mysticism and non-earth based practices which plague us as a people and encourage our withdrawal from social life rather than our bold confrontation with it."

However, as Kwanzaa gained recognition amongst Christians who were also of African descent, its stance as an opposition to Christmas changed. For people of African descent, this striving offered a new means for Black America to adopt a system that did not oppress but instead gifted communalism in the service of edifying Black culture. It returned displaced value to the Black community and demonstrated, before America, the cultural relevance and quality of life of people of African descent.

Beginning on December 26, in the spirit of feasting and gift giving, Kwanzaa extends the tone the holiday season intends to set. But while the daily candle lighting of the kinara occurs, can we, meaning Americans, honestly state that we offer Kwanzaa a place on the celebratory stage, with relevance in our Americanized discourse and demonstration of "holiday"? The American education system, an incubator for much of what America teaches, is in part responsible for the lack of promotion of an honorable celebration that, in this case, is directly linked to the student population it serves.
In consideration of this country's foundational precepts, the assumption could be that America is not largely interested in Kwanzaa, for it is too culturally forward. This is not to say that there should be limitations on the dialogue about Christmas, for the truer and often blurred Christmas message is not about elves, a traveling sleigh, or retail sales. It is to say that we in America can begin to be more reflective about whether or not we've become so wired by the celebration of one holiday that we miss observances, like that of Kwanzaa, which demonstrate a cultural richness and relevance America struggles to promote overall.

The overarching question is: when will we, the American family, get past the preliminaries of acknowledging that we are a culturally manifold country, and reach an adoption of knowledge and action that promotes cultural competence, especially during times like these, when thick cultural history, thought, and practice are abundant and easily recognizable? Cultural competence begins with discussing what appears to be invisible and making it seen. Acknowledging and educating about Kwanzaa, on a greater scale, is another beginning we can start from.

Jovan A. Brown is an elementary educator and cultural competence facilitator based in Philadelphia. She is the mother of one and aspires to publish children's literature that encourages self-acceptance. (www.dear-beautiful.com)
In the Antarctic Dry Valleys, soil polygons are prominent features of the landscape and may be key units for scaling local ecological information to the greater region. We examined polygon soils in each of the 3 basins of Taylor Valley, Antarctica. Our objectives were to characterize variability in soil biogeochemistry and biodiversity at local to regional scales, and to test the influence of soil properties upon invertebrate communities. We found that soil biogeochemical properties and biodiversity vary over multiple spatial scales, from fine (less than 10 m) to broad (greater than 10 km). Differences in biogeochemistry were most pronounced at broad scales among the major lake basins of Taylor Valley, corresponding to differences in geology and microclimate, while variation in invertebrate biodiversity and abundance occurred at landscape scales of 10-500 m and within individual soil polygons. Variation in biogeochemistry and invertebrate communities across these scales reflects the influence of physical processes and landscape development on ecosystem structure in the dry valleys. The development of soil polygons influences the spatial patterning of soil properties such as soil organic matter, salinity, moisture, and invertebrate habitat suitability. Nematode abundance and life history data indicate that polygon interiors are more suitable habitats than soils in the troughs at the edges of polygons. These data suggest that physical processes (i.e. polygon development) and biogeochemistry are an important influence on the spatial variability of biotic communities in dry valley soil ecosystems.

This file contains data compiled by Ed Khun, Andy Parsons and Jeb Barrett. The final data QA/QC and analysis were performed by Jeb Barrett. Metadata was ported to DEIMS by Inigo San Gil (2015).
Geological information can be used to try to prevent and minimise the effects of various types of hazards. Good physical planning requires us to take into account both the hazards that exist in nature and those that we ourselves have created. Where is the land prone to slide, which areas have high radon concentrations in the soil, how vulnerable is our drinking water supply, and where do the soil and groundwater contain high levels of harmful substances?

What's in the soil and in the water?

Soil and water contain metals and other substances that are present naturally or have been introduced through human activity. For example, planning new residential areas or deploying measures in wells with bad water requires information both about the substances occurring naturally and the substances introduced through human influence. SGU's geochemical information, i.e. information on the soil's content of various substances, can be used to identify areas with properties such as elevated concentrations of metals (natural or anthropogenic) and areas with low pH.

Protecting our most important foodstuff

Water is one of our most important foodstuffs. When a risk of contamination or poisoning arises, society must know where there are surface and groundwater reserves. In order to eliminate risks as far as possible and to produce plans for dealing with an accident, the vulnerability of drinking water supplies must be mapped in advance. For example, an assessment of the measures to be deployed if a petrol truck overturns near a groundwater supply requires information on soil permeability and the groundwater's flow direction. Groundwater might also be affected by climate change.

Natural radioactivity and soil radon

People are constantly exposed to ionising radiation. The majority of this radiation comes from natural sources: radon in indoor air and radiation from the ground. Radiation from the ground comes mainly from the decay of the naturally occurring elements potassium, uranium and thorium. The concentrations of these elements in bedrock and soil strata vary, which means that the radiation varies from place to place. The variation of natural radiation in Sweden can be seen on the map viewers for potassium, uranium and thorium.

Sweden's bedrock is relatively rich in uranium. Rocks that may have elevated concentrations of uranium include alum shale and various granites. Uranium is radioactive and decays in several steps. One step in the decay chain is radon. Radon in indoor air can come from the soil, from building materials or from water. The radon concentration should not exceed 200 Bq/m³ in dwellings, schools and public premises (Public Health Agency of Sweden guideline value). The radon hazard is regarded as linear, which means that a radon concentration of 400 Bq/m³ is twice as dangerous as 200 Bq/m³ (a small numeric sketch of this guideline check is given at the end of this section). All buildings with radon should be traced, but it is of primary importance to trace buildings with high radon concentrations.

Radon from the soil is the most common source of radon in buildings. There is always enough radon in the soil for the indoor radon concentration to exceed limit and guideline values if soil air can leak into a building. Radon might also be present in drinking water, especially in water from wells drilled in the bedrock. Radon in water passes readily into indoor air when the water is used. Building materials can also emit radon.

Landslides and rockfalls

Some soils, especially clayey soils on sloping ground or adjacent to water, are prone to landslides.
Best of all is to avoid building on such sites, but if there are already buildings, there needs to be means of assessing the landslide risk and, where necessary, the measures to be deployed. This requires knowledge of soil properties and of the occurrence of previous landslides. SGU's soil geology databases can be used together with terrain and water flow information to assess the risk of landslides and rockfalls. Sulphide soils and acid sulphate soils In areas where sulphide soils occur, acid sulphate soils are often formed, which can periodically lead to very adverse effects on watercourses. High metal concentrations and low pH may in certain situations lead to fish death.
BE AWARE THAT ELECTRICITY CAN KILL!!! I ALWAYS KEEP ONE HAND IN MY POCKET WHEN WORKING ON HIGH VOLTAGE EQUIPMENT. MAKE SURE THAT YOU KNOW OF THE HAZARDS FROM A LIVE CHASSIS, CHARGED CAPACITORS, FINAL ANODE VOLTAGES, THE AC SUPPLY ETC.

The POKING AND HOPING method of fault finding on electronic equipment is OK if there are only a few components which can be changed one at a time, but is useless where a large number of components are involved. A more logical method is necessary. Begin by observation.

Have you heard of the game Twenty Questions? One person thinks of something and the others have to guess what it is by asking questions. They receive only YES or NO answers. If the first question is IS IT AN ANIMAL? and the answer is YES, then all non-animal items in the universe can be ignored. If the second question is IS IT HUMAN? and the answer is YES, then all other animals in the universe can be ignored. If the third question is IS IT FEMALE? and the answer is NO, then only questions related to men need be asked. After twenty questions most items in the universe can be discovered!!

A similar system can be applied to fault finding. This is called the HALF SPLIT method. A transistor radio has several STAGES; the signal from the aerial passes through these and is emitted from the loudspeaker as an audio signal. The volume control is about half way along this chain. If I inject an audio signal at this point and hear noise from the loudspeaker, then I know that all stages and components after this point are OK and the fault lies before this point. From this one measurement we have proved that half of the components are OK and that the fault lies in a certain area. Further HALF SPLIT measurements will enable us to locate the precise stage in which the fault lies. If we had started at the aerial end and the fault was in the loudspeaker, then we would have wasted much time and effort before we found it.

These tests are called DYNAMIC MEASUREMENTS and enable us to locate the stage or area of the fault. To find the actual faulty component we use STATIC MEASUREMENTS. The measurements obtained are interpreted to identify the faulty component. For example, the base to emitter voltage of a good silicon transistor is 0.6 volts. If it is not at this voltage, then it is possibly this component at fault. Beware that a faulty associated component could possibly give the same readings. If you haven't had much experience at interpreting voltage measurements, then remove the suspect component and check it by resistance measurements or substitution with a known good component. Since the faulty stage has been located and only a few components are usually involved, POKE AND HOPE is more permissible!!
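To see why the HALF SPLIT method is so efficient, here is a minimal sketch of it as a binary search in Python. This illustration is an addition to the page: the stage names and the probe(i) test are hypothetical stand-ins for physically injecting a signal at the input of stage i and listening at the loudspeaker.

```python
# Half-split fault location as a binary search over the signal chain.
# probe(i) is assumed to return True when a signal injected at the
# input of stage i reaches the loudspeaker (i.e. stages i..end are good).
STAGES = ["aerial", "RF amp", "mixer", "IF amp", "detector",
          "volume control", "audio amp", "loudspeaker"]

def find_faulty_stage(probe):
    lo, hi = 0, len(STAGES) - 1          # invariant: fault is in STAGES[lo..hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2         # split the suspect region in half
        if probe(mid):                   # sound heard: mid..end are OK
            hi = mid - 1                 # so the fault lies before mid
        else:                            # no sound: fault is at mid or later
            lo = mid
    return STAGES[lo]

# Example: simulate a radio whose IF amp (index 3) is dead.
print(find_faulty_stage(lambda i: i > 3))  # -> "IF amp"
```

Each probe halves the suspect region, so even a long chain needs only a handful of measurements.

Copyright Graham Knott 1999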
The year 2020 is full of events to observe: a meteor shower in the first days of the year, six eclipses, three supermoons and cosmic conjunctions. Here we give you a compilation of the first semester.

Ending the holidays, the meteors welcome us to 2020 with one of the strongest meteor showers: the Quadrantids. They peaked on January 3 and 4, reaching up to 200 meteors per hour. The lucky ones for this great event are those who live in the northern hemisphere, since the shower can only be seen from there, in dark and clear places. On January 5 the Earth reached perihelion, its shortest distance from the Sun (approximately 147 million kilometres). Later in the month, on January 10, a penumbral lunar eclipse could be observed from Europe, Africa and Australia, though a telescope was needed.

On February 18, the Moon will slide in front of Mars, an event that can only be observed by the owners of telescopes and large binoculars. The event can be seen from Central America, North America, the northern end of South America, Cuba and Haiti.

On March 9, the first supermoon of the year will occur, which is when the Moon looks brighter and closer than normal. A supermoon is an unusual astronomical event that happens when a full moon coincides with the Moon's closest point in its orbit to Earth.

By the beginning of April, Venus will be at the peak of its evening visibility. We will also have the second and largest supermoon of the year, and at the end of the month, on the 21st and 22nd, a meteor shower can be seen from the southern hemisphere.

On May 7 the third supermoon will occur.

By June 5 there will be another penumbral eclipse of the Moon, but it will be difficult to see in Europe, Africa, Asia and Australia. The first solar eclipse of the year comes on June 21 and can be seen from Africa, Arabia, Pakistan, northern India, southern China, Taiwan, the Philippine Sea and the Pacific Ocean. Although the Moon will pass directly in front of the Sun, it will not completely cover the visible disc, so a ring of light will be seen shining around the silhouette of the Moon.
Pomelo fruit inspires high-strength hybrid metal

Here's an interesting fact about the pomelo fruit: even though a mature fruit can weigh up to 2 kg (4.4 lb), it remains intact after falling from heights of over 10 meters (33 ft). The secret lies in the structure of its peel. Scientists have copied that structure to produce a new type of aluminum composite that's stronger than straight aluminum.

The pomelo's peel is composed of a "graded, fiber-reinforced foam" that incorporates a myriad of tiny impact-absorbing strut-like structures. Scientists from RWTH Aachen University and the University of Freiburg, both in Germany, took that design and applied it to the creation of a unique metal hybrid. The center of the material is pure aluminum, which can tolerate permanent changes in shape without failing. The outer shell, however, is made of an aluminum-silicon alloy, which has a high tensile strength (it's difficult to break, in other words). As a result, the composite resists both deformation and breakage better than either the pure aluminum or the alloy on its own.

It has been suggested that the metal could be particularly useful in the manufacturing of strong but lightweight safety components, particularly in the automotive industry.
Many parents of children with HAE are unsure about how to approach their child's school. On this page, we provide some materials to help you work together with your child's school. The materials are provided only as a general guide; you will want to personalize any materials that you provide to your child's school to reflect his or her own particular needs and situation.

How will the school respond if my child misses too many days of school? How will my child make up missed assignments? An HAE school packet can be a great resource in this regard. Click here to download information on putting together a school packet.

What is a 504 plan? Who is eligible for a 504 plan? Are 504 plans really necessary? Information can be found here to answer these questions and more.

Chances are your school nurse has never heard of Hereditary Angioedema. A letter providing a brief introduction to HAE, an idea of what is needed to help your child if medical intervention is necessary, and your emergency contact information can be a great help to you, your child and your child's school. Click here to download the sample letter.
System administrators are no longer alone in their concern for security. The increase in high-profile virus attacks, and a general sense of heightened security, means that executives are likely to have security on their minds. It may be easier than ever to enlist their support for securing our networks and systems, and they may be more willing to put up with some inconvenience for users if it means tighter security. This article gives an overview of the basics necessary to secure your network. Consider this a checklist to reenergize your efforts or to get you started.

The first step in securing your network is to teach users to create secure passwords. All the security in the world is easily bypassed if your CFO's password is "fred." I also recommend requiring users to change passwords monthly, and not allowing them to reuse one within a one-year period. Some people argue that requiring a password change will encourage users to write down their passwords, eliminating the benefits. I argue that in many environments a user's password is more likely to be hacked than to be read off a hidden sheet of paper. Even so, you can take IBM's purported approach: you write down your password, you're fired. It's harsh, but with today's threats and the damage that can be caused by a compromised account, it may be worthwhile.

Will it increase the calls to IT for forgotten passwords? Perhaps. One way to help combat that is to allow only a person's manager to request a password reset. Or, as I used to say when I worked at the Census Bureau, "No problem, done in two hours." "Why two hours?" "Don't forget your password."

Also, make sure your users know not to give their password out over the phone, even if the caller claims to be from the IT department. Social engineering is the simplest and most effective way to gain access to a company's network. The same is true for physical site security: make sure strangers can't get into your office space. If that's impossible, make sure your users can identify your IT staff; just because someone has long hair and a wrinkled shirt doesn't necessarily mean they're actually on the IT staff. For a more detailed explanation of good password policy, read this chapter on Password Problems from Managing Windows NT Logons.

In a Unix environment, run a tool like Crack against your password (better still, shadow) file to weed out any easy-to-guess passwords. One more thing: administrators, too, need to remember to change all default passwords.
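To make the Crack-style audit concrete, here is a minimal sketch of a dictionary check against a shadow-format file. It assumes Unix crypt()-style hashes, a wordlist of common passwords, and Python's standard-library crypt module (Unix-only, and removed from very recent Python releases); the file paths are placeholders.

```python
# Dictionary check: re-hash each candidate word with the stored entry's
# own salt and settings; a match means the password is guessable.
import crypt

def weak_accounts(shadow_path="/etc/shadow", wordlist_path="words.txt"):
    words = [w.strip() for w in open(wordlist_path) if w.strip()]
    weak = []
    for line in open(shadow_path):
        user, stored = line.split(":")[:2]
        if stored in ("", "*", "!", "!!"):   # locked or passwordless entries
            continue
        if any(crypt.crypt(word, stored) == stored for word in words):
            weak.append(user)
    return weak

for user in weak_accounts():
    print(f"weak password: {user}")
```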
Attachments have proven quite dangerous. Tell your users not to open any attachment they receive from anyone, unless they were already expecting it. If they receive an attachment that might be legitimate, a quick email or a phone call will confirm whether it's legit or not. I also highly recommend blocking all executables, DLLs, and scripts at your mailer, or at least renaming the files so they don't execute if clicked. You can defang attachments with a Procmail filter called the Sanitizer.

Users may think they're safer if they have their macros disabled on Microsoft Windows applications, but they're not. SecurityFocus recently announced that malformed Excel and PowerPoint documents can completely bypass all security checks, allowing macros to run even when supposedly disabled. If your users rely on Outlook, be sure to apply the appropriate patches. Visit Slipstick Systems for more information on Outlook security.

Moving on from users and passwords, we next look to the network itself. A firewall is a given these days. A DMZ, or Demilitarized Zone, should be as well. A DMZ is a haven for machines that are exposed to the real world. The machines in a DMZ can be reached from the corporate LAN or from the outside world, but those DMZ machines cannot reach back into the corporate LAN to contact hosts within.

A firewall and a DMZ are not enough, however. What if someone gains access to your LAN, either physically or by compromising a user account or a partially exposed machine on the LAN? You should disable all the network services you don't plan on using on every machine on your LAN. This minimizes the potential exploits available to an attacker, all the more so since these are the very services you're unlikely to update and patch. To help identify unused services that are running, try a package like SAINT (Security Administrator's Integrated Network Tool), which automatically scans all the machines on your network and reports open ports and other security risks in a simple Web interface.

Speaking of patches, be sure to apply security updates for the operating system and all the offered services of DMZ and internal machines. Keeping relatively current is also worthwhile; for example, BIND version 8 contains a bug that allows root access to the box, while BIND 9 does not have this problem. And while it takes a bit of effort, it's also worthwhile to keep all of your users' machines current as well.

The rest of this article discusses specific steps you can take to further increase LAN security. Remember, though, that without secure passwords and well-informed users, many other security measures are moot.

In the early days of the Internet, when security was not a big concern, insecure protocols such as Telnet and the r-commands were fine, even though they transmit all passwords and data in cleartext. That's not acceptable today, either across the Internet or on a LAN. A wireless LAN, even one with 128-bit WEP security, is an even greater security risk, as anyone within range of your card can pick up all your unsecured data, even from outside your building.

The SSH protocol provides a drop-in replacement for the Unix r-commands (such as rsh, rexec, and so on). The r-commands are very insecure, and what security they do offer is typically trivial to bypass. Public/private keypairs can be used to replicate the convenience of the r-commands' passwordless connectivity. However, that opens another security hole: hack one machine with a passwordless keypair and you have access to all the related machines as well. Deploying SSH is pretty simple; you can even remove the r-commands and replace them with symbolic links to the SSH equivalents. SSH also serves as a capable Telnet replacement, and as one O'Reilly person pointed out, the heck with the security, it's one of the only decent terminal emulators available for Windows!

Finally, SSH provides the extremely powerful ability to tunnel other protocols. That is, if you absolutely must use the insecure POP to get your email, you can use SSH to tunnel the data, keeping it from any prying eyes. SSH does this by intercepting data sent to a local port, say POP3's 110, and forwarding it to the specified server's port 110. The data actually travels over SSH's port 22 and is completely secured by the SSH protocol. Read how to set up secure SSH tunnels here.
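As an illustration of the POP3 tunnel just described, here is a sketch that launches OpenSSH's local port forwarding from Python. The host "mailhost" and the user name are placeholders, and the local side listens on the unprivileged port 1110 (binding the real port 110 locally would require root).

```python
# Equivalent to: ssh -N -L 1110:localhost:110 user@mailhost
#   -N: run no remote command, just forward ports
#   -L: forward local port 1110 to port 110 as seen from mailhost
import subprocess

tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", "1110:localhost:110",
    "user@mailhost",
])
# A mail client pointed at localhost:1110 now speaks POP3 through the
# encrypted SSH connection (port 22) instead of cleartext port 110.
```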
On the subject of tunneling, IPsec (IP security) provides packet-level encryption of any and all protocols. IPsec, which is inherent to IPv6, is also available for IP version 4. It's an excellent way to secure all network traffic. It doesn't make up for flawed programming, such as the potential root exploits mentioned above, but it does allow for transparent security of normally insecure protocols. The setup and configuration is well beyond the scope of this article; however, you can read tutorials for FreeBSD, NetBSD, OpenBSD, Linux, and Windows 2000.

SSH tunneling and IPsec aside, there are other secure equivalents for protocols. APOP, for example, is the secure equivalent of POP. Sendmail now supports SMTP AUTH and STARTTLS, which provide authentication and encryption support. BIND 9 and higher supports signed zones and requests, ensuring that hosts get accurate data from the correct name server. If you can't find a secure replacement, you can almost always use IPsec or an SSH tunnel.

If you are running a wireless network, be sure to use 40-bit or 128-bit WEP. It's true that it's rather trivial to crack; however, it will keep the casual observer from watching your data or hooking onto your network. And since it is trivial to crack, be sure to use secure protocols to carry sensitive information (such as passwords or financial reports) over a wireless connection. You might also consider placing your wireless network in its own DMZ and requiring users to VPN into the wired network. To prevent unauthorized users from accessing your network, restrict access to cards with registered MAC addresses. This is also pretty simple to defeat but, again, it guards against casual abuse. Stronger authentication can be provided via the NoCatAuth project.

If you take the steps outlined above, you're running a pretty secure network. Unfortunately, it's not good enough to rest on your laurels and enjoy endless days of peaceful security. You must remain constantly vigilant to new exploits and potential risks. A paranoid security administrator is a great security administrator. One quick and easy way to keep tabs on your network is to use the SAINT tool mentioned above. Keep it current and you'll stay informed of many new potential exploits. SAINT is also one way to look for compromised machines on your network by identifying odd open ports. Read O'Reilly's Security Bibliography, a list of the best security books by O'Reilly and other publishers, which should help you find resources to protect your systems and your privacy in these troubled times.

Traffic analysis is an excellent way to watch for both attacks and compromised machines. By mirroring all of your internal network traffic to a machine running Snort, you can log most packets that travel over your network. Tools to analyze the logged traffic and identify attacks are available at the Snort site. You might consider running one box outside of your firewall to see what attacks are being made, and another internally to see what's getting through.

You should also regularly analyze your system log files to check for malicious activity (a tiny example of such a scan appears at the end of this section). Tools like Logcheck can help by mailing you about suspicious-looking activity. It's important to log to a different machine and configure the logs so they're append-only. Read Chris Boyd's article showing how to do this with syslog. For logging Windows data, utilities such as EventReporter send Windows event logs to a host running syslog.

Keeping track of your system files is another good idea, using a tool such as Tripwire. This tool ensures that if a system file is replaced by a cracker, you're aware of it.
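To make the log-scanning idea concrete, here is a minimal Logcheck-style sketch. The log path and patterns are illustrative and vary by system; real tools ship far larger rule sets.

```python
# Flag log lines that match simple "suspicious activity" patterns.
import re

SUSPICIOUS = [
    re.compile(r"Failed password for (invalid user )?\S+ from \S+"),
    re.compile(r"authentication failure"),
]

def scan(path="/var/log/auth.log"):
    with open(path, errors="replace") as log:
        return [line.rstrip() for line in log
                if any(p.search(line) for p in SUSPICIOUS)]

for entry in scan():
    print(entry)   # in practice, mail these to the administrator
```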
Also, as I mentioned above, it's important to apply appropriate security patches to the operating system, services, and programs on all machines on your network. Even with secure passwords and IPsec, your work is for naught if someone can access root on your mailserver because of a Sendmail exploit.

These steps will not guarantee that your network is safe from being hacked, but by following these recommendations you'll keep out most of the script kiddies and casual hackers.

Mike DeGraw-Bertsch is a security and Unix system administration consultant in the Boston, Mass. area. When he's not at a job, writing, hacking with Perl, or playing with his wireless network, he can usually be found playing goal in ice hockey.

Copyright © 2009 O'Reilly Media, Inc.
Draw a Map
After reading Part I, students should be given the opportunity to draw a map of Alobar's land, Aelfric, etc., using the text as a guide. Where is Rome located in relation to Aelfric? Where is Egypt?

Ask students to look in their local newspaper for an article that involves some aspect of the novel and bring it to class. They will present it in front of the class and explain how it relates to the novel.

Draw a Character
Have students draw their interpretation of a character from the novel. Artistic ability is not important, but drawings must reflect the type of costume worn by the character and whether the character is fat or thin, bald, prim or slovenly. What colors does the character wear?

Invite students to research different types of recipes with beets in them and create these dishes to...
We're passionate about building passive housing. The term passive house (Passivhaus in German) refers to the rigorous voluntary Passivhaus standard for energy efficiency in buildings. The first Passivhaus buildings - ultra low energy buildings requiring little energy for space heating or cooling - were built in Germany in 1990, and the Passivhaus-Institut was set up by Dr Wolfgang Feist in 1996 to promote and control the standard. Since then, around 15,000 passive houses have been built around the world, mostly in German-speaking countries.

To meet the passive house standard, buildings should have an annual space heating demand of no more than 15 kWh/m², an airtight building envelope (no more than 0.6 air changes per hour at 50 Pa of pressure), and a total annual primary energy use of no more than 120 kWh/m². It is also recommended that the specific heat load for the heating source at design temperature is less than 10 W/m² (3.17 Btu/ft² per hour); a small worked example of this check appears at the end of this page. Passive housing does not use conventional heating.

A passive house typically costs slightly more to construct than conventional buildings, but energy costs can be cut by up to 90%. To meet strict passive house regulations, the windows and doors must be top of their class in energy saving, which is why we recommend the Austrian window manufacturer Gaulhofer. The Gaulhofer range is extensive and features frameless designs as well as powder coated aluminum shells, and boasts the only UPVC windows that achieve a passive house qualified Uw value.
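The 10 W/m² recommendation is easy to check once a building's design heat loss is known. The sketch below is a minimal illustration with made-up figures, not data from any real project:

```python
# Specific heat load = design heat loss / treated floor area.
def specific_heat_load(design_heat_loss_w, floor_area_m2):
    return design_heat_loss_w / floor_area_m2

load = specific_heat_load(design_heat_loss_w=1400.0, floor_area_m2=150.0)
verdict = "meets" if load <= 10.0 else "exceeds"
print(f"{load:.1f} W/m2 {verdict} the passive house heating-load guideline")
# -> 9.3 W/m2 meets the passive house heating-load guideline
```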
Back in the Victorian era, surgery was as much a spectator sport as a medical procedure, one that some would consider bloodier and more gruesome than the battles fought in the Coliseum of Rome. Were it not for the English surgeon Joseph Lister and his pioneering work in antiseptics, it might still be the stuff of horror. We spoke to American author Lindsey Fitzharris about her book The Butchering Art, now shortlisted for the 2018 Wellcome Book Prize, which explores why Victorian operating theatres were known as 'gateways of death', how the shy but brilliant surgeon Joseph Lister discovered what was making them so dangerous, and how it would change surgery forever.

It's not long into the book before you read about the grim realities of Victorian surgery – what would it have been like to undergo the knife back then?
Victorian operating theatres were filled to the rafters with medical students and ticketed spectators, many of whom had dragged in with them the dirt and grime of everyday life. The surgeon wore a blood-encrusted apron, rarely washed his hands or his instruments, and carried with him the unmistakable smell of rotting flesh, which those in the profession cheerfully referred to as 'good old hospital stink.' Before the advent of anaesthesia in the 1840s, patients were fully conscious during an operation. Postoperative infection was so common that most surgeries became slow-moving executions.

The book is centered around the story of Joseph Lister – what is it that he did and how did it change Victorian medicine?
You wouldn't know it from the title, but The Butchering Art is a love story. It is the uniting of science and medicine for the first time in history. Joseph Lister applied a scientific principle (germ theory) to medical practice through the development of antisepsis. In the process, he saved thousands of lives in his own time, and continues to save lives today as we now operate with the knowledge that germs exist. And yet, most people are only vaguely familiar with Lister's name through a product he did not even invent: Listerine. In fact, it was not even a mouthwash in the 19th century. It was more commonly used to cure gonorrhea!

He must have been pretty controversial at the time – was his work quickly accepted in the medical community, or did it face stiff opposition?
Lister faced enormous backlash from the medical community when he announced his antiseptic methods. There was an American surgeon named Samuel Gross who prided himself on not adhering to germ theory. He would walk into an operating theatre, slam the door, and proclaim: 'There! Mister Lister's germs can't get in now!' Surgeons simply could not accept that tiny, invisible creatures (germs) were killing their patients. It took many decades, and a new generation of surgeons, to shift the paradigm.

Our understanding of medicine has come a long way since Lister and the Victorian doctors – do you think his work in antiseptic surgery is the most significant advance in medical history?
I think Lister's application of germ theory to medical practice certainly ranks amongst the most important advancements in medicine. Without an understanding of germs, it's unlikely we would be able to do the complex procedures we are able to do today.

To us, Victorian surgery seems barbaric, but do you think in 150 years there will be another Lindsey Fitzharris looking back at our modern surgery with similar disgust?
Absolutely!
I hope that when people read The Butchering Art, they realise that what we know today is not what we will necessarily know tomorrow. What will people say about us in the future?

Like the book, your YouTube channel, Under the Knife, is also pretty grisly – what got you into this macabre field of study?
Although I'm a historian by training, I'm a storyteller first and foremost. I often tell the stories about the past that excited me when I was younger. I was a strange child, and I'm an even stranger adult! That said, I don't think I could write about Victorian surgery without it getting a bit gruesome. I would be doing a disservice to the patients who submitted to the knife in the 19th century if I romanticized their experiences. After all, it was their bodies that helped advance medicine, and we owe a huge debt to them.
How do you create an animation character?

How to Animate a Character in 6 Steps
- Step 1: Download and Install Duik.
- Step 2: Design Your Character.
- Step 3: Prepare and Import Your Character Artwork.
- Step 4: Establish Initial Character Rigging.
- Step 5: Create Your Rig.
- Step 6: Start Animating Your Character.

How do you create an iconic character?

27 top character design tips
- Don't lose the magic.
- Step away from the reference material.
- Research other characters...
- ...but also look elsewhere.
- Don't lose sight of the original idea.
- Decide who your character design is aimed at.
- Make your character distinctive.

What are the principles of character design?
Silhouette, palette, and exaggeration are three fundamental components of good character design. While there are plenty of details a character designer must consider, these three elements are often at the core of what makes a character design memorable or completely forgettable.

What is character design in animation?
Character design is the process of fully developing a character's style, personality, behavior, and overall visual appearance in the visual arts. Character designers construct characters as a means of conveying stories.

How do you make an easy animation?
Here are a few simple steps to help you create an animated cartoon video yourself!
- Step 1: Use a powerful animation maker.
- Step 2: Choose a template for your animated video.
- Step 3: Animate and synchronize.
- Step 4: Add a music track or voice-over.
- Step 5: Publish, share and download your animated video.

How do you design a character?

6 Character Design Tips
- Know your target audience. The project's demographic will help determine the simplicity or complexity of the character design.
- Practice world-building.
- Understand shape language.
- Explore the character's personality.
- Experiment with color.
- Keep it simple.

How can I make my character look unique?
Another great way to make your character distinct is by improving their pose. A simple way to check a character's pose is by turning the character into a silhouette (making it fully black) and then checking whether some gestures or shapes can be pushed further to make it more iconic; see the sketch at the end of this page.

What is the process of character design?
The process of designing a character is creating the concept and style of that character from scratch. No matter if you're creating a cartoon character or an entire concept design for game characters, the process takes a lot of work and creativity, and requires years of study and practice to perfect.

What do triangles mean in character design?
Triangles are associated with energy and power, and indicate direction. Triangles may give a sense of action, tension, or even aggression. They can symbolize strength on the one hand, and conflict on the other.

How can I improve my character design?

What should I study to become a character designer?
Character designers will often have a degree in graphic art, fine art, illustration or a related discipline, but this is not strictly necessary. The vital thing is that you can demonstrate very strong drawing skills. You need a portfolio which shows talent and creativity and a wide knowledge of and love for animation.
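As a companion to the silhouette tip above, here is a minimal sketch of automating the check with Python and the Pillow imaging library. It assumes the character art is a PNG with a transparent background; "character.png" is a placeholder file name.

```python
from PIL import Image

art = Image.open("character.png").convert("RGBA")
alpha = art.getchannel("A")                    # the character's opacity mask
canvas = Image.new("RGBA", art.size, (255, 255, 255, 255))
black = Image.new("RGBA", art.size, (0, 0, 0, 255))
canvas.paste(black, mask=alpha)                # fill the character's shape with black
canvas.save("silhouette.png")                  # is the pose still readable?
```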
PLANTS know when you are going to chomp down on them – and they are NOT happy when they're about to be munched. They have a special sense that alerts them to their imminent death, according to scientists at the University of Missouri. The scientists, hoping to finally work out whether live plants have a sense of awareness, carried out an experiment on a close relation of broccoli and kale called thale cress. The plant produces mustard oils which are slightly toxic and sour to the taste to keep predators away. But to see whether the cress would produce the oil when being eaten rather than just being damaged, the scientists created a special scenario.

They recorded audio of the vibrations caterpillars – the thale cress' worst enemy – make while eating its leaves. They also recorded vibrations similar to natural noises, such as a breeze, which plants might sense too. The scientists discovered that thale cress only produced the toxic oils when it heard the "munching vibrations" and didn't react when the natural sounds were played.

Heidi Appel, senior research scientist in the Division of Plant Sciences in the College of Agriculture, Food and Natural Resources and the Bond Life Sciences Centre at the University of Missouri, said: "Previous research has investigated how plants respond to acoustic energy, including music. However, our work is the first example of how plants respond to an ecologically relevant vibration. We found that feeding vibrations signal changes in the plant cells' metabolism, creating more defensive chemicals that can repel attacks from caterpillars."

It follows news that Brits have absolutely no idea which fruit and veg is in season throughout the year.
The well-known Liar's Paradox "This statement is false" leads to a recursive contradiction: if the statement is interpreted to be true then it is actually false, and if it is interpreted to be false then it is actually true. The statement is a paradox where neither truth value can be assigned to it. However, "This statement is true" also leads to a paradox, one where either truth value can be assigned to it with equal validity: if the statement is perceived to be true then it is actually true, and if the statement is perceived to be false then it is actually false. These two statements demonstrate two different classes of paradox.

The same paradox states exist in set theory: "the set of all sets that do not contain themselves" leads to the former kind of paradox (neither solution is valid), and "the set of all sets that do contain themselves" leads to the latter kind (either solution is valid).

My question is: How many classifications of paradox exist? Is there any development in classifying types of paradoxes and applying them to mathematical logic, computer science, and set theory? What implications would classes of paradoxes have on Gödel's incompleteness theorems? Could a system that allows and classifies paradoxes be demonstrably consistent?
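The two classes can be made concrete with a tiny brute-force check (an illustration appended here, not part of the original question): model a self-referential statement as a function from its assumed truth value to the truth value it then actually has, and look for fixed points.

```python
# "This statement is false": its actual value is the negation of the
# assumed value. "This statement is true": its actual value equals it.
liar = lambda v: not v
truthteller = lambda v: v

def consistent_values(stmt):
    # v is a consistent reading if assuming v makes the statement v
    return [v for v in (True, False) if stmt(v) == v]

print(consistent_values(liar))         # [] - no consistent value at all
print(consistent_values(truthteller))  # [True, False] - both are consistent
```

The first class has no fixed point (over-determined) and the second has two (under-determined), which is exactly the distinction drawn above.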
Systems of Injustice and Oppression

Killing someone to save the honour of a family is a system of communal 'justice' that validates the murder of family members, and it is often supported by the local communal court, the jirga, and by local influentials. Though men are targets of honour killings too, the majority of people who lose their lives under this system are women.

A jirga is a communal court comprising local elders and influentials. Jirgas are known to sanction atrocities, subjecting women to rape and murder.
26 October 2016

Heading Football Has "Significant" Impact on Brain

A new study has revealed that the everyday impact of heading a ball causes "significant" changes in footballers' brain function. The study, by the University of Stirling, tested 19 footballers who headed a ball 20 times. The ball was launched from a machine that emulated the power of a ball kicked from a corner, and participants' brain function was tested before and immediately after the exercise, as well as 24 hours, 48 hours and two weeks after. Increased inhibition in the brain was detected after just a single session of heading. Memory test performance was also reduced by between 41 and 67 per cent, with effects normalising within 24 hours.

Cognitive neuroscientist Dr Magdalena Ietswaart said: "In light of growing concern about the effects of contact sport on brain health, we wanted to see if our brain reacts instantly to heading a football. Using a drill most amateur and professional teams would be familiar with, we found there was in fact increased inhibition in the brain immediately after heading and that performance on memory tests was reduced significantly. Although the changes were temporary, we believe they are significant to brain health, particularly if they happen over and over again as they do in football heading. With large numbers of people around the world participating in this sport, it is important that they are aware of what is happening inside the brain and the lasting effect this may have."

One would hope that sports bodies and institutions such as schools will learn from the findings of this study, given the head injury a seemingly harmless header could cause. Campaigning to this effect is Dawn Astle, daughter of football legend Jeff Astle, who set up the Jeff Astle Foundation to raise awareness of the effects of heading the ball. Former England and West Brom striker Jeff Astle died in 2002 at the age of 59, suffering from early-onset dementia which a coroner found was caused by heading footballs; the coroner gave the cause of death as "industrial disease". Further examination of Astle's brain revealed the neuro-degenerative brain disease Chronic Traumatic Encephalopathy (CTE), a disease that can only be diagnosed after death and is often found in deceased boxers, rugby players and NFL players.

Ms Astle told The Mail on Sunday: "Would I be surprised if damaging effects of heading are found? No. The question is: what are they going to do about it? What are the authorities going to do to protect our children? These people are paid an awful lot of money to protect players and children playing football at any level. If they find damage is caused by heading the ball — which we as a family believed even before the coroner found dad's brain was damaged in the same way as a boxer's — then what are the long-term implications?"

This study raises awareness of the long-term damage that repeated impact to the head and brain can cause. Sport is an important part of education and necessary to maintain a healthy lifestyle, as well as being a social activity. Football in particular is a popular hobby enjoyed by many people of all ages. The risks of repeated head impacts described above may not be as obvious or anticipated as they would be in other sports. Therefore, it is important to further understand these risks through research, so that the necessary education and safety measures can be put in place to reduce any long-term harm.

For further information about financial compensation following a serious injury, or for a free consultation to discuss a serious injury, call our No Win No Fee personal injury solicitors on freephone 0800 916 9046 or contact us online.
Mix IT Up

There was a time when I would focus my discussions with others on how important infrastructure is when planning any type of technology integration project. My opinion on the importance of infrastructure has not changed over the years; however, the conversation about the other ingredients that must go into the foundation for integrating technology in our classrooms is not independent of the infrastructure. As relevant as it is to have a robust network backbone and a great Internet connection, that by itself will not give our students the opportunities they need to be successful when they leave the K-12 environment.

Creating the right mix that goes into the foundation for integrating technology in our classrooms is similar to the science of creating the perfect batch of concrete that supports our houses, buildings and roads. The science used to arrive at the finished product is far more complicated than buying a few bags of concrete at your local Home Depot for the backyard swing set. Building a foundation for today's digital learners is as important as the base for the tallest skyscraper, and requires the proper mix of four key ingredients that I believe are critical for our classrooms. Let's take a look at the ingredients in no particular order.

Historically, PD is the afterthought when integrating technology in our classrooms, and it becomes the first reason cited for technology integration failure. The excitement of having support to integrate the devices often allows a structured PD plan to be pushed aside, making it difficult to play catch up. If our foundation is going to hold up, we need our technology and curriculum departments working collaboratively with our site leaders, teachers, students, and parents to craft a PD plan. One way to establish a baseline for building a plan across all stakeholder groups is to leverage data. Participating in a survey, such as Project Tomorrow's Speak Up Survey, allows educators, students, and parents to give feedback on their current and future technology use in education. The data provides starting points for discussions during the PD planning process and gives voice to all stakeholders involved.

Digital citizenship is no different from the general definition of citizenship other than the medium where that community exists. If we expect our students to be responsible members of the digital community, then we have to collaborate with all stakeholders to create successful learning opportunities, secure parental support, lead by example, and include our students in the discussion. A digital citizenship road map is a key ingredient in our foundation and one that will lead to many cracks if ignored. Having the proper resources and embedding digital citizenship within the everyday curriculum will allow for teachable moments without adding another layer, and will demonstrate real-world application. Develop or adopt digital citizenship standards or elements that can be made visible throughout your school district and community. Providing a common digital citizenship language for all students, educators, and parents will help in changing the culture and support good behavior in the digital world.

The million-dollar question has long been which device is the best choice for our classrooms. The simple answer is that there is no "best" choice or one-size-fits-all device, and the reason for that is simple in my opinion: every classroom, school, and district is a little bit different than the next. The magic is finding the device that works best for you.
There are a few ways to gather feedback from stakeholders; they may take a bit of time, but they will provide valuable input before making a commitment.

- Try and buy. If you are committed to purchasing devices, there will be large amounts of budget spent, so why not purchase a small number of devices first and distribute those to teachers, students, and technology staff? It is amazing how fast word spreads among teachers and students when they have a new device that allows them to improve what they are doing in the classroom.
- Student devices. Once a decision is made, allow your teachers to have access to a device prior to the devices being implemented in the classroom. Our teachers need to know the device before there are 30 of them in their classroom.
- Plan to manage. The technology team needs to understand the devices and be able to craft a plan for how they will manage devices in their network environment. Every device type will have its own characteristics, and the tech team will need to be able to support them in the classroom.

Remember that devices can be the most difficult ingredient in the foundation and can cause the biggest cracks if not well thought out. Buy-in from all stakeholders is important so that everyone is supportive of the device through its lifespan.

Watching the discussion relating to infrastructure change with the transition from desktops to mobile devices has been exciting. The planning that once centered on where to locate the 5-8 drops per classroom now revolves around supporting a 1:1 environment and wireless connectivity from the front office to the football field. The challenge becomes where to begin when planning your infrastructure needs and how to build for the future. If you take the time to properly plan, it is much easier than chasing connectivity and bandwidth down the road. Here are a few things to think about.

- Take a field trip. Don't underestimate how much can be learned by visiting other districts and asking questions about their planning process and why they made the decisions that they did. Having prior knowledge and learning from others before starting your journey is invaluable.
- Schools and Libraries Program (E-rate). The E-rate program was established in 1996 to assist schools and libraries with making their telecommunication needs more affordable. The program has gone through a modernization effort and is focused on assisting schools with obtaining affordable access to high-speed broadband and funding internal connections to support the connectivity. Participating in the program can make a difference in the planning process.
- Use the resources available for baseline data. There are national resources available such as Education Superhighway, whose mission is to bring internet access to every public classroom in our country. They have spent time putting together toolkits to help with the infrastructure planning process. Their resources were put together by working with districts from across our country and provide starting points.
- Outside assistance. Reaching out and working with experts in the infrastructure field is not a sign of weakness; it's a smart move. Bringing a consultant to the table is not only beneficial for collaborating on designing the appropriate infrastructure, but also offers large amounts of knowledge transfer throughout the process.
How to Finish

The last stage of pouring concrete is known as finishing and, like many aspects of construction, is a form of art. A good finisher brings to the end product that smooth, consistent look that makes the weekend construction warrior so envious. Finishing can also be considered the last step in tying together the ingredients of a strong foundation. Building an environment for today's digital learners is challenging, strenuous at times, and absolutely rewarding when students and teachers have a great experience integrating technology in the classroom. Remember, it's the ingredients working together that support the foundation of your structure.

cross posted at jcastelhanothisandthat.blogspot.com

Jon Castelhano is director of technology for Apache Junction USD in Arizona and serves as an advisor to the School CIO member community, a group of top-tier IT professionals in schools across the country who understand and benefit from news and information not available elsewhere. Read more at jcastelhanothisandthat.blogspot.com
The Continuous Comprehensive Evaluation of Health & Hygiene is a process of evaluating the health, nutritional intake and hygiene habits of children and adolescents, developed by the Centre for Education and Health Research Organization (CEHRO). In the country with the world's largest child population, nearly nine million children die every year from preventable diseases and infections.

At the national level, we found the following problems contributing to this situation:
- Prevention is not being done effectively. Effective prevention could enhance healthcare and substantially reduce the costs borne by individuals and by the government. Poor awareness and prevention campaigns also lead to overcrowded government hospitals, further reducing the quality of healthcare.
- Children from low-income families are worst affected: lack of information and knowledge leads their parents to make wrong decisions about healthcare; local hospitals and doctors are ill-equipped to treat children; and the focus on treating the current disease, or sometimes just its symptoms, outweighs the focus on making the child healthy. There is also a trust issue: people generally believe private hospitals are better than government ones.
- There is no system to detect early problems faced by children in the country where the world's largest child population resides, and no data to inform policies and local interventions for awareness, prevention and treatment (physical and mental).

At the national level, we want to use and understand the usability of public spaces, instead of hospitals, to drive change around health; these public spaces may be schools, anganwadis, community gatherings, and community-based local NGOs such as CEHRO. Basically, we want to prevent people from needing to go to the hospital, while also helping them know when they really do need to visit one, before the situation gets worse. These changes could be brought about by three methods:
- Awareness: using public spaces and gatherings to build conversations around health.
- Empowering: equipping local people without formal medical training, as well as parents, with basic tools to judge their child's health, know what to do and act effectively. This could be done by organising multiple training sessions that give parents tools, with follow-up support afterwards.
- Data: providing data for policy interventions, local-level interventions or individual-level interventions.

At the Munirka level, we've made significant progress in data collection: we have developed the continuous comprehensive evaluation of the health and hygiene of children and adolescents, and we collect their data on a daily basis. The datasheet includes questions related to eating habits and sanitation, such as whether the child brushed their teeth, what they had for breakfast and lunch, whether they took a bath, and whether they washed their hands. These data are presented to parents as evidence, and parents are encouraged to make certain changes. To add nutrition, we provide healthy refreshments like fruits, biscuits, bread and milk.

In this system, children do a daily self-evaluation of the following parameters:
- Hygiene habits
- Protein, fruit, vegetable and fast-food consumption

Secondly, regular medical check-ups are arranged in the community, and in some cases we take the children to medical centres. These include eye check-ups, dental check-ups and general health check-ups. We organize two health check-up camps that focus on:
- Height, weight, age and BMI relationships
- Hygiene indicators
- Nutritional indicators
- Detecting any present or potential diseases

A simple sketch of the BMI part of this screening appears below.
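This is a minimal, illustrative sketch only: real child screening uses age- and sex-specific reference charts, and the cut-off values here are placeholders, not CEHRO's actual thresholds.

```python
def bmi(weight_kg, height_m):
    # body mass index = weight divided by height squared
    return weight_kg / height_m ** 2

def screen(weight_kg, height_m, low=14.0, high=22.0):
    value = bmi(weight_kg, height_m)
    if value < low:
        return value, "possible undernutrition - refer for a check-up"
    if value > high:
        return value, "possible overweight - review diet"
    return value, "within the expected range"

value, note = screen(weight_kg=28.0, height_m=1.32)
print(f"BMI {value:.1f}: {note}")   # -> BMI 16.1: within the expected range
```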
Detection of PTLD uses PCR to detect circulating EBV DNA in the blood, or in situ hybridization to identify EBV DNA in tissue biopsies. In this case, EBV DNA was detected in the tissue section using both real-time PCR and in situ hybridization. We report an unusual presentation of PTLD with no detectable EBV DNA in the blood using EBER-1 and EBNA-1 PCR assays. This report suggests that the use of EBV PCR on blood samples for the early detection of PTLD may not be 100% effective in detecting disease.

Keywords: lymph node, peripheral blood
85 Years of Leadership: HVAC Changes

Shades of Green

In the early 1970s, escalating oil prices led to a lot of research into improving energy efficiency and finding renewable resources. With that background, President Carter formed the U.S. Department of Energy in 1977. The movement started out strong enough. In "Renewable Energy Policies and Markets in the United States," coauthors Eric Martinot, Ryan Wiser, and Jan Hamrin (affiliated with the Center for Resource Solutions of San Francisco) point out that "the introduction of the Public Utilities Regulatory Policy Act (PURPA) of 1978 required utilities to purchase power from qualifying third parties at the utility's avoided cost." Because of "California's interpretation of PURPA and favorable tax incentives, 12,000 MW of renewable power was constructed in the United States during the 1980s," they wrote.

However, several factors caused renewable energy markets to stagnate sharply in the mid-80s. These included "a long period of electric power sector restructuring, repeal of federal and state incentives, and sharply lower natural gas prices," stated the report.

Now we have entered a period that some have optimistically called a renaissance for solar thermal electric plants in the United States. But do we run the risk of relying too heavily on rebates again? If they were to vanish tomorrow, would the market evaporate?

Charles Culp, P.E., Ph.D., LEED-AP, professor in the Department of Architecture and associate director of the Energy Systems Laboratory at Texas A&M University (TAMU), learned about incorporating renewable and sustainable technologies through the process of building his own three homes. The first, he said, was average; the second experience was with "the builder from hades"; and "the third builder was from heaven." The home is very efficient. However, it did not achieve net zero status, which requires that from Jan. 1 to Dec. 31 the building generate at least as much energy as it uses, Culp explained. "At times the PV can feed back to the grid," he said, though not always. "It's a very good goal as long as it can be done cost effectively."

The house that he wants to build with grad students may be a smaller, affordable home (1,000-1,450 square feet); a normal house in the area ranges from 1,000 to 2,500 square feet, he said. His redesign would include strategies like removing hallways and creating an atrium; "then we can move the air slowly to move it around. We think we can get the fundamental energy consumption down to 50 percent of what a normal house uses," Culp said. "Let's use as much energy as we can before we throw it outside." A 1,500-square-foot house for the project would use a propane chiller and chilled-beam technology, plus zoning.

The green home market today still has a ways to go, he said. Development, field testing, market acceptance, inspections, and of course builder acceptance are needed for large-scale green system application.

"PV prices are dropping like crazy," he continued. "I was told by a PV manufacturer ahead of the curve that in three to five years it will be cost competitive with grid power. I assume that this means increasing costs of electricity and lower costs of PV. At that point it makes sense to go PV as much as possible and let the grid handle the excess." To make costs still more competitive, "I think we'll get to the point where the installation itself will be the focus," Culp said. "I think we're getting there" using clip-on, bolt-down methods.
The $64,000 question, as the older folks say, is whether the current market for sustainable technology could sustain itself in the absence of rebates and incentives. “Good question,” said Culp. “Don’t know.” There are good reasons now to harvest solar power, though mainly due to incentives and rebates. Plus, “It will generate power and your home value goes up. If the rebates went away, it wouldn’t be as attractive,” he continued. “But if the cost of the collectors goes down, it will get pretty interesting.”

“I’ve been in this industry over 40 years,” said Steve Howard, president of The ACT Group. “It’s déjà vu, the solar industry, especially living in Phoenix where it was a big deal. I took a course and it was fascinating, and I got on that bandwagon.

“My crystal ball says long term, every industry must be able to prosper wholly within the boundaries of the free enterprise system. Currently solar, wind, and ethanol (with huge federal subsidies) are not viable for the economy as a whole if they must rely on any type of good-intentioned government subsidies or rebates.”

Why should we hope the results will be different this time? “There’s always hope,” Culp said. Since the mid-1980s, “Our awareness of energy is really starting to develop. The part that will make us see the biggest difference, though, is cost. I think we will see growth in the industry. I think people will spend money on it. But I really do think it’s a cost-driven thing. In Europe they treat it differently.”

Culp said the area he focuses on, existing building commissioning, can save 10-20 percent of a building’s total energy use, which works out to about 20-40 percent of the HVAC energy use (the arithmetic behind this equivalence is sketched below). “If we could apply this to every building in the U.S. or world, it would be real savings, but would not solve the increasing energy use issue.

“The reality is that this will take 10 to 20 years to really apply widely because applying the technology requires highly skilled people to do this effectively,” he said. “Add this to the fact that buildings have a 50-plus-year life, and we have a real challenge.”

People have an unfortunate tendency to think in the short term, which can lead to short-sighted solutions, Howard said. Back in the day, “Everybody and his brother got into solar. When the government’s giving away free money, worms come out of the woodwork … As soon as these rebates are gone, the solar industry will be a struggle. Some consumers will buy it because they are adopters, environmentalists, etc., and that’s where our company comes in, to train contractors on who their customers are and what their motives are. From our standpoint, these changes are good.”

There also are requirements for utilities to use a certain percentage of renewables, Howard pointed out, so the market will be there. How the utilities achieve this is not stipulated.

Howard lives in Arizona in an adobe house built in the 1870s. It uses passive solar and is oriented north-south, “so the only time we get any direct solar is in wintertime,” he said. The system combines in-floor radiant heat with the utility’s time-of-day rate; “at night we heat up the gypcrete for in-floor radiant, and it warms the house all day long.” The home’s 22-inch-thick walls mean that even without conditioning, “even in summer, you can walk in and it’s 10 degrees cooler inside. In wintertime it’ll get down in the 30s outside, and it’s in the 60s inside.” It also utilizes small windows and natural materials (creek mud and stone).
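To make the commissioning arithmetic explicit, here is a quick sketch. It assumes, purely for illustration, that HVAC accounts for roughly half of a building’s total energy use; that assumption is what makes 10-20 percent of total energy equal to about 20-40 percent of HVAC energy.

```python
# Quick check of the commissioning-savings arithmetic quoted above.
# Assumes HVAC is ~50% of total building energy use (illustrative figure only).

HVAC_SHARE = 0.5  # assumed fraction of total building energy consumed by HVAC

for total_savings in (0.10, 0.20):
    # The same saved kilowatt-hours, expressed against the HVAC share alone.
    hvac_savings = total_savings / HVAC_SHARE
    print(f"{total_savings:.0%} of total energy = {hvac_savings:.0%} of HVAC energy")
```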
“There’s a super high-efficiency market for smart dealers,” he said. “It doesn’t make sense here to have 13 SEER when you could have 20.” But it requires more than technical know-how; financing and knowledge of the rebate programs are crucial. “It requires a sophisticated dealer who is available to help the customer.”

These days Howard is doing what he has been doing all along: “training contractors on selling value, not price.” He said the definition of value has changed. “We used to have a box that put out hot or cold filtered air. Now we have variable-speed, zone-controlled, total-comfort systems. Selling geothermal or solar, the value of the equation has changed; now it’s the ability to make your own power,” Howard said. “It’s an emotional high. The customer’s motives have changed. It could be independence, ego, or simply going green. They feel good because they’re doing the right thing. If I’m buying a system and the salesperson really understands my motives, there is a chemical response that makes the sale go through.

“You don’t have to go into thermodynamics,” Howard said. “But you do need to understand the technology, so they can transfer that to the benefits the customer wants.”

Now and Then

“I think the thing that’s different now is that every buyer has different motives for different products,” he continued. “If you can tap into that, you can find your buyers. We need to understand the benefits and match those to what the buyers want.”

In “Renewable Energy Policies and Markets in the United States,” the authors spelled out a few lessons from our less-than-sustainable past:

1. Policy consistency is essential. U.S. renewable energy policy has suffered from inconsistency as incentives have been repeatedly enacted for short periods of time and then suspended. This stop-and-go tendency has seriously hampered the development of markets and industries.
2. The major renewable energy markets are overseas, particularly in Europe and Japan. A U.S. market would benefit U.S. manufacturers.
3. State-level policies are needed for market expansion, with complementary federal policies.
4. Who owns and maintains the power transmission system when harvested renewable power is placed on the grid? That needs to be worked out.

Publication date: 09/19/2011
Teaching your child healthy oral habits requires persistence and a playful imagination. To create long-lasting habits, Charles Duhigg, author of The Power of Habit, recommends following these three steps:

- Setting a cue or trigger
- Creating a routine
- Receiving a reward

And while these steps form the neurological basis for habit formation in adults, adapting these principles to teach your child to brush his or her teeth requires an added ingredient — FUN! Making toothbrushing a fun experience for your child at an early age will reduce temper tantrums and help your child build a life-long habit of proper oral hygiene. We’ve included a few tips to help you prime your pre-schooler to brush his or her own teeth.

TIP #1 Start Early

Dentists recommend caring for your child’s teeth and gums even before they begin to teethe. Parents and caretakers can use a fluoride-free toothpaste and a finger brush to gently brush a baby’s gums and eliminate bacteria. This not only stimulates the gums, which encourages teeth growth, but also helps diminish teething pain. Equally important, this activity will help accustom your child to the sometimes uncomfortable process of tooth brushing at an early age.

TIP #2 Brush Your Teeth With Your Toddler

As toddlers begin to gain motor skills, they will develop the ability to obey simple commands and eat independently. When they develop the ability to maneuver a spoon, they should be ready to hold a toothbrush on their own. Parents can let their toddlers brush their teeth first for fun and then let mommy or daddy brush their teeth afterwards to make sure all the sugar bugs are out! This establishes a predictable daily routine. Because toddlers tend to eat everything, use a non-fluoride, safe-to-swallow toothpaste when they brush their teeth, and a thin smear of fluoride toothpaste when parents brush their teeth, wiping the area to minimize the amount they swallow.

TIP #3 Appeal to Their Imagination

Did you know that American households did not practice the daily ritual of tooth brushing until the early 1900s? The famous American advertiser Claude C. Hopkins launched a decade-long campaign to promote Pepsodent, a toothpaste that owed a large part of its success to the tingly aftertaste it left users with. Pepsodent users claimed that if they forgot to brush their teeth, they’d notice the absence of the tingly but refreshing sensation of the toothpaste (the cue) and were then triggered to brush their teeth.

Similarly, children need playful “cues” to trigger the desire to brush their teeth. Let your pre-schooler select toothpaste with his favorite cartoon character. Consider fun alternatives to boring toothbrushes, such as light-up toothbrushes and saber-light toothbrushes. Be sure to end the ritual with a reward. Try reserving one of his favorite toys for him to play with after he finishes brushing his teeth so that he associates the activity with a positive experience.

Neuroscientists have shown that music is a powerful tool that enhances memories. Aquafresh features a tooth brushing app for children that includes a 2-minute song to help your child stay entertained while brushing his teeth. Children can earn stars every time they complete the activity and spend them on buying accessories for Aquafresh’s Nurdle mascot.

Making tooth brushing a fun activity will protect your child from premature tooth decay and encourage a lifetime of healthy oral hygiene habits.
Here we will try to introduce you to the renewable energy sources (RES) available today. RES refers to energy resources which are naturally replenished: wind, solar, hydro-power, biomass, geothermal energy, and ocean energy. EUROSTAT’s definition of RES: renewable energies cover hydro-power, wind energy, solar energy, biomass and wastes, and geothermal energy.

Today, renewable energy is mainly produced and used domestically. Traditional biomass (for cooking and heating) is growing only slowly as it is used more efficiently or replaced by more modern energy sources; large hydro-power is growing slowly; new renewables (small hydro, modern biomass, wind, solar, geothermal, and bio-fuel) are growing very rapidly. Modern applications of renewable energy have grown steadily over the past three decades, and investment in developing renewable energy capacity is growing rapidly, from $6 billion in 1995 to over $50 billion in 2008.

The good news is that renewable resource potentials exceed today’s world energy consumption. Renewable energy policies, promotions, and new targets already exist in more than 50 countries all over the world. Most of them target the share of renewable energy in electricity generation (typically 5-30 percent). This is true for most developed countries, and as far as we can tell the most rapid changes can be seen in South-Eastern Asia and China.

The biggest problem in developing new RES policies and projects is the lack of a unified system that would provide information on the know-how and statistics periodically and transparently. There are also copyright law issues and very significant differences in the definitions of RES stated by various organizations. The other interesting fact is that no matter how rapidly investment in the sector increases, there are still not nearly enough funds to reach the medium-term goals for this type of resource.
Uranian is a 19th-century term that referred to a person of a third sex—originally, someone with “a female psyche in a male body” who is sexually attracted to men, and later extended to cover homosexual gender-variant females and a number of other sexual types. It is believed to be an English adaptation of the German word Urning, which was first published by activist Karl Heinrich Ulrichs (1825-95) in a series of five booklets (1864-65) collected under the title Forschungen über das Räthsel der mannmännlichen Liebe (“Research into the Riddle of Man-Male Love”). Ulrichs developed his terminology before the first public use of the term “homosexual,” which appeared in 1869 in a pamphlet published anonymously by Karl-Maria Kertbeny (1824-82).

The word Uranian (Urning) was derived by Ulrichs from the Greek goddess Aphrodite Urania, who was created by Uranus out of his own body. It therefore stands for homosexual love, while Aphrodite Dionea (Dioning) represents heterosexual love.

The term “Uranian” was quickly adopted by English-language advocates of homosexual emancipation in the Victorian era, such as Edward Carpenter and John Addington Symonds, who used it to describe a comradely love that would bring about true democracy, uniting the “estranged ranks of society” and breaking down class and gender barriers. Oscar Wilde wrote to Robert Ross in an undated letter (?18 February 1898): “To have altered my life would have been to have admitted that Uranian love is ignoble. I hold it to be noble - more noble than other forms.”

The term also gained currency among a group that studied Classics and dabbled in pederastic poetry from the 1870s to the 1930s. The writings of this group are now known by the phrase “Uranian poetry.” The art of Henry Scott Tuke and Wilhelm von Gloeden is also sometimes referred to as “Uranian.”

The word itself alludes to Plato’s Symposium, a discussion on Eros (love). In this dialogue, Pausanias distinguishes between two types of love, symbolised by two different accounts of the birth of Aphrodite, the goddess of love. In one, she was born of Uranus (the heavens), a birth in which “the female has no part.” This Uranian Aphrodite is associated with a noble love for male youths, and is the source of Ulrichs’s term Urning. Another account has Aphrodite as the daughter of Zeus and Dione, and this Aphrodite is associated with a common love which “is apt to be of women as well as of youths, and is of the body rather than of the soul.” After Dione, Ulrichs gave the name Dioning to men who are sexually attracted to women. However, unlike Plato’s account of male love, Ulrichs understood male Urnings to be essentially feminine, and male Dionings to be masculine in nature. John Addington Symonds, who was one of the first to take up the term Uranian in the English language, was a student of Benjamin Jowett and was very familiar with the Symposium.

However, it has been argued that this etymology, at least for the English-speaking countries, is unrelated to Ulrichs’s coinage. In his volume Secreted Desires: The Major Uranians, Michael M. Kaylor writes:

Given that the prominent Uranians were trained Classicists, I consider ludicrous the view, widely held, that ‘Uranian’ derives from the German apologias and legal appeals written by Karl-Heinrich Ulrichs (1825–95) in the 1860s, though his coinage Urning — employed to denote ‘a female psyche in a male body’ — does indeed derive from the same Classical sources, particularly the Symposium.
Further, the Uranians did not consider themselves the possessors of a ‘female psyche’; the Uranians are not known, as a group, to have read works such as Forschungen über das Räthsel der mannmännlichen Liebe (Research on the Riddle of Male-Male Love); the Uranians were opposed to Ulrichs’s claims for androphilic, homoerotic liberation at the expense of the paederastic; and, even when a connection was drawn to such Germanic ideas and terminology, it appeared long after the term ‘Uranian’ had become commonplace within Uranian circles, hence was not a ‘borrowing from’ but a ‘bridge to’ the like-minded across the Channel by apologists such as Symonds. — p. xiii, footnote

Development of the classification scheme for sexual types

Ulrichs came to understand that not all male-bodied people with sexual attraction to men were feminine in nature. He developed a more complex threefold axis for understanding sexual and gender variance: sexual orientation (male-attracted, bisexual, or female-attracted), preferred sexual behavior (passive, no preference, or active), and gender characteristics (feminine, intermediate, or masculine). The three axes were usually, but not necessarily, linked — Ulrichs himself, for example, was a Weiblinge (feminine homosexual) who preferred the active sexual role.

The taxonomy of Uranismus

Note that in these terms, -in is an ordinary German suffix usually meaning “female.”

- Urningin (or occasionally the variants Uranierin, Urnin, and Urnigin): A female-bodied person with a male psyche, whose main sexual attraction is to women. (“lesbian” or “straight trans man”)
- Urning: A male-bodied person with a female psyche, whose main sexual attraction is to men.
- Dioningin: A heterosexual, feminine woman.
- Dioning: A heterosexual, masculine man.
- Uranodioningin: A female bisexual.
- Uranodioning: A male bisexual.
- Zwitter: An intersexual person.

Urningthum, “male homosexuality” (or urnische Liebe, homosexual love), was expanded with the following terms:

- Mannlinge: very masculine, except for a feminine psyche and a sex drive towards effeminate men (“butch gay”)
- Weiblinge: feminine in appearance, behaviour, and psyche, with a sex drive towards masculine men (“queen”)
- Manuring: feminine in appearance and behaviour, with a male psyche and a sex drive towards women (“feminine straight man”)
- Zwischen-Urning: An adult male who prefers adolescents (“pederast,” “hebephile”)
- Conjunctive: with tender and passionate feelings for men
- Disjunctive: with tender feelings for men but passionate feelings for women (“metrosexual,” “bromance”)
- Virilisierte Mannlinge: Male Urnings who have learned to act like Dionings, through force or habit (“straight-acting gay”)
- Uraniaster or uranisierter Mann: A Dioning engaging in situational homosexuality (e.g., in prison or the military)

References:

- Michael Matthew Kaylor, Secreted Desires: The Major Uranians: Hopkins, Pater and Wilde (Brno, CZ: Masaryk University Press, 2006)
- Merriam-Webster’s Encyclopedia of Literature, “Urania” (Springfield, MA: Merriam-Webster Inc., 1995)
- Webster’s Dictionary of the English Language – Unabridged Encyclopedia Edition, “Uranian” (New York, NY: Publisher’s International Press, 1977)
- Winston Dictionary of the English Language, “Uranian” (Philadelphia, PA: John C. Winston Company, 1954)
- Lesbian activist Anna Rueling used the term in her 1904 speech, “What Interest does the Women’s Movement have in Solving the Homosexual Problem?” (text of Rueling’s speech)
SDI (Specially Designed Instruction) Resources for Co-Teachers

A few resources for identifying SDI and incorporating SDI into the co-taught classroom. (From Marilyn Friend’s presentation materials, Virginia’s Co-Teaching Summer Institute, July 2017.)

This is a list of instructional strategies that may be effective for students with disabilities. Many of these strategies could be Specially Designed Instruction (SDI).

This handbook provides information on Specially Designed Instruction (SDI).

This document was developed to clarify the relationship between Specially Designed Instruction, Core Instruction, and Interventions within a multi-tiered system of supports (MTSS) for educators developing, improving, and maintaining systems of support for all students.

It is important that students learn to manage their own behavior. Cognitive behavior management is a technique that could be used as Specially Designed Instruction (SDI).

A contract is a written agreement between a student and a teacher that is directed toward changing the youngster’s behavior.

This module features the Self-Regulated Strategy Development (SRSD) model, which outlines the six steps required to effectively implement any instructional strategy and emphasizes the time and effort required to do so (est. completion time: 1 hour).

This PowerPoint discusses SRSD and its impact on writing.

The Neurological Impress Method involves the teacher and the student reading aloud simultaneously from the same book. The teacher reads slightly faster than the student to keep the reading fluent. The teacher usually sits next to the student and focuses his or her voice near the ear of the student.

Helping Students with Poor Working Memory

A simple cooperative learning strategy
Six process control variables that are vital to your additive manufacturing success

Additive manufacturing processes operate on scales that are far more complex and intricate than human machine operators are able to control unassisted. Layer-based methods like fused deposition modeling (FDM) can be configured according to a wide range of variables, including pixel scale and print angle, which are difficult if not impossible to validate manually. For this reason, manufacturers need proven methods of ensuring that additive production processes are precise, accurate, economical, and safe, both within and across print jobs.

Broadly speaking, these methods — known as process controls — are industrial systems that enable engineers to manage complex processes so that they produce consistent results. For example, additive manufacturing process controls can help optimize numerous production parameters critical to ensuring that parts achieve similar standards for geometric accuracy, performance characteristics, mechanical properties, dimensional tolerance, and overall quality.

The majority of industrial 3D printers — including most laser powder bed fusion (LPBF) machines — now include closed-loop monitoring systems, which use cameras as well as thermal and position sensors to collect data about the printer’s output and detect deviations in real time. Also known as feedback systems, these in situ process controls make adjustments based on output so as to achieve the desired conditions or properties, thereby refining the consistency and quality of part production. Though less common, some 3D printers feature “feed-forward” simulation tools that feed simulation results directly into the device’s closed-loop system, which allows for real-time process control and more consistent prints.
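To make the closed-loop idea concrete, here is a minimal sketch of the kind of feedback rule such monitoring systems apply. It is illustrative only: the sensor and heater functions are hypothetical stand-ins rather than any printer’s real API, and production systems use far more sophisticated control than a single proportional term.

```python
# Minimal closed-loop (feedback) control sketch for an extruder temperature.
# read_temperature() and set_heater_power() are hypothetical stand-ins for
# real printer I/O; real firmware typically runs full PID control.

import random

SETPOINT_C = 210.0  # desired extruder temperature (deg C)
KP = 0.8            # proportional gain, tuned per machine in practice

def read_temperature() -> float:
    """Hypothetical sensor read; real printers use thermistors or thermocouples."""
    return 205.0 + random.uniform(-2.0, 2.0)

def set_heater_power(duty: float) -> None:
    """Hypothetical actuator; clamp to the heater's 0-100% duty cycle."""
    print(f"heater duty set to {max(0.0, min(100.0, duty)):.1f}%")

duty = 50.0
for _ in range(5):  # a real controller loops continuously during the print
    error = SETPOINT_C - read_temperature()  # deviation from the setpoint
    duty += KP * error                       # proportional correction
    set_heater_power(duty)
```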
Critical additive process control variables

The precision and quality of 3D-printed parts are influenced by dozens of variables that must be rigorously controlled to achieve consistent results. In general, these variables align with one of six categories.

1. Production environment

Controlling the physical conditions of the space where parts are created is foundational to ensuring consistent prints. Factors like humidity, air quality, and temperature can significantly impact how the material extrudes and binds between layers. Knowing how each of these factors affects print quality is key.

2. Production technology

The technology used during production introduces a number of variables that must be accounted for. The quality of a machine can alleviate some of these factors. For example, the tolerance you can achieve will only be as good as the least accurate piece of the machine; it doesn’t matter how expensive an extruder is if the controllers and motors driving it are inaccurate. Additional factors, such as the extrusion force and the printer and platform temperature, have direct effects on material adhesion strength and interface stability.

3. Materials

While different material characteristics of course impact the performance and function of the final part, these qualities must also be factored into the production process. If materials have specific storage or handling requirements, or need to be prepared in a specific way before printing, process controls should be established for each variable to ensure that a job may be repeated with similar results.

4. Part geometry and orientation

In addition to optimizing part design for manufacture, engineers must define how the part will be produced within the printer’s build chamber. Factors such as the part’s orientation relative to the build plate, the design of support material, and the alignment of critical features to the machine’s most accurate print plane all contribute to production efficiency and quality.

5. Secondary processes

Once printing has concluded, parts may need to undergo additional processes before post-production for a number of reasons. If support material was used, it will need to be removed. Certain features may need to be drilled to increase the accuracy of the final product. Secondary steps may also be time-sensitive. In all cases, these processes must be standardized.

6. Quality assurance

Optimizing post-processing further helps to ensure part accuracy and achieve the desired aesthetic qualities. Similar parameters should be established for how supports are removed, how surfaces are finished, and how any cosmetic detailing, such as hot stamping or plating, is applied. Finally, the methods used to measure, validate, and qualify parts — a practice known as metrology — must be consistent.

Optimizing additive production processes for critical variables

After identifying the variables involved in a given production job, the next step for product teams is to design a process that allows for effective variable management. Process calibration will require trial and error, but a significant advantage of additive manufacturing processes is that they are iterative, allowing for rapid updates to digital designs without the need for expensive and time-consuming tooling changes.

Optimizing additive production processes for accuracy and precision should generally involve the following steps:

- Collect extensive data sets based on the process control variables
- Run statistical and correlation analyses of the data sets to establish dependence between variables and inputs (a small sketch of this step follows below)
- Based on variable dependencies, perform targeted designs of experiments (DOEs) to illuminate the principal causes of inconsistencies or deviations
- Alter the production process to achieve greater accuracy and precision

This cycle should be repeated until the process reliably produces high-quality parts with the desired characteristics and dimensional tolerances. However, testing should be conducted on an ongoing basis to guarantee that the process is consistent and effective.
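As an illustration of the first two steps above, the sketch below screens logged process variables for correlation with a measured outcome. The variable names and figures are invented for the example; a real analysis would pull them from build logs.

```python
# Illustrative correlation screen over logged process variables.
# All data below are invented; strong correlations would then be probed
# with targeted designs of experiments (DOEs).

import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per print job: candidate inputs plus a measured outcome.
jobs = {
    "chamber_temp_c":  [60, 64, 59, 62, 65, 61],
    "layer_height_mm": [0.20, 0.20, 0.30, 0.10, 0.25, 0.15],
}
deviation_mm = [0.08, 0.09, 0.14, 0.05, 0.11, 0.06]  # dimensional deviation

for name, values in jobs.items():
    r = pearson(values, deviation_mm)
    flag = "  <- candidate for a targeted DOE" if abs(r) > 0.7 else ""
    print(f"{name}: r = {r:+.2f}{flag}")
```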
Discover consistent on-demand manufacturing

Process controls are critical to ensuring that production methods remain efficient and consistent, and developments in additive manufacturing process control technologies have been both rapid and significant. The increased incorporation of IoT sensors and machine learning algorithms into 3D printer systems is enabling product teams to create parts with greater precision and speed. Further, groundbreaking new technologies like simulation analysis, experimental design, and control system industrialization have the potential to streamline and refine the additive manufacturing sector.

Fast Radius is a forward-thinking, on-demand manufacturing platform offering production technologies ranging from additive manufacturing and injection molding to urethane casting and CNC machining services. We specialize not only in refining part design for maximum efficiency and quality, but also in helping our customers find the right production process — or combination of processes — to get the job done on time and at competitive rates. Contact our team today to get started.

Visit the Fast Radius learning center to learn more about the intricacies of injection molding, industrial-grade additive manufacturing, and much more.
What is a sustainable landscape?

Considering the environmental impact that the landscape will have is the first step when creating a sustainable landscape. A sustainable landscape respects the local climate and wildlife, which in turn can reduce the need for excessive use of fertilizers, pesticides, and watering. Selecting plants that are suited to your existing environmental conditions will help you achieve a sustainable landscape. Plants native to your area are already adapted to the local climate and other biotic life.

On-site conditions and planting location are other factors to take into account when creating a sustainable landscape. Ultimately, how big will the plant grow? What are the sun and water requirements for this plant? Ask yourself these questions before placing a tree or shrub in the landscape, and determine the plant’s needs when selecting trees, shrubs, and perennials.

The ultimate goal of a sustainable landscape is that the plants are able to survive with minimal input from outside sources. The more closely your landscape reflects the natural environment in which your selected plants grow, the less likely you are to need outside inputs such as fertilizers, excess watering, or pest control treatments. A great example of sustainable landscaping, now a common practice in the southwestern portion of the United States, is xeriscaping. This type of sustainable landscaping focuses on utilizing drought-tolerant plants, which don’t require excessive water, and is more reflective of the natural environment in that part of the country. Working your way towards these goals creates a self-sustaining landscape that functions in harmony with the local environment.
Wartime Bolivia and refugees from Nazism: a unique case / by Mari Mariana Conea-Rosenfeld.

During the 1930s, as Hitler expanded Nazism’s influence throughout Europe, German and Austrian Jews, fearing for their lives, emigrated en masse, resulting in an international migration crisis. Bolivia figured among the few countries that offered a safe haven to Jewish refugees. This dissertation analyzes why the Bolivians admitted the refugees, how this contact affected the national identity of Bolivians and their immigrant counterparts, and what the relationship was between Jewish refugees and Bolivian nationals. This dissertation contends that Jewish immigration changed both the Jewish refugees and Bolivian nationals. On one hand, Jews fled the effects of a disastrous convergence of race and nation in Central Europe, only to encounter the same in Bolivia. On the other hand, their presence helped redefine Bolivian cultural politics and national identity.

Record last modified: 2018-04-24 16:01:00
This page: https://collections.ushmm.org/search/catalog/bib149797
Philosophy provides students the opportunity to reflect on the most fundamental questions of our lives that often go unexamined. Courses in philosophy acquaint students with the intellectual and moral traditions of world civilizations and aim to develop the critical thinking skills necessary to question assumptions, to weigh propositions fundamental to personal responsibility, and to consider the ethical implications of their decisions. An understanding of philosophy is one of the hallmarks of Jesuit education.

Requirement: Students take two courses in Philosophy to complete their Core requirement. One course must be from the Knowledge & Reality category, and one course must be from the Values & Society category. Taking a course from each of these categories ensures that students experience a broad range of areas, major themes, and problems within philosophy. Knowledge & Reality courses explore fundamental questions of nature, existence, and understanding. Values & Society courses explore fundamental questions of humans’ relationship to one another and to the world; these courses also focus specifically on questions of ethics. Courses in each category are at the 200 and 300 level; the courses have no prerequisites, and students are not required to take a 200-level course before a 300-level course.

Theology & Religious Studies (TRS)

Courses in Theology and Religious Studies provide students with the knowledge and skills necessary for the analysis of religion; for investigation of the historical development and contemporary practice of particular religious traditions; for critical reflection on personal faith as well as sympathetic appreciation of the beliefs of others; and for resources to understand and respond to the religious forces that shape our society and world. Because of the University’s commitment to its Catholic and Jesuit heritage, particular attention is paid to the Roman Catholic tradition.

Requirement: Students take two courses in Theology and Religious Studies to complete their Core requirement; at least one course must be taken at the 300 level.

Issues in Social Justice (ISJ)

With its emphasis on currency, relevance, care for the learning of each student, and discernment, the Integrative Core Curriculum highlights essential principles of Ignatian pedagogy. The Issues in Social Justice component asks that students consider important questions about justice, diversity, and ethics. Students are expected to be engaged learners who bring new knowledge into being through study and collaboration, realizing that knowledge has the capacity to raise ethical questions and that these questions are meaningful and liberating.

In Issues in Social Justice courses, students learn to understand and interrogate concepts of inclusion and empowerment and to analyze systems and structures of oppression and marginalization. These courses pose questions about equality, access, multiculturalism, economic and social barriers, or discrimination based on gender, sexuality, class, race, and/or ethnicity. These courses challenge students to recognize institutional impediments or de facto assumptions that result in an individual or group having less than full voice and participation in societies. Issues in Social Justice courses focus on historical issues, contemporary problems, or both.

Requirement: Students take one Issues in Social Justice course. These courses are offered by several academic departments.
Creative and Performing Arts (CAPA)

From their beginnings, Jesuit colleges and universities were distinguished by their attention to the arts: architecture, painting, sculpture, music, theatre, dance, and poetry as methods of religious communication. The practice of any art form gives students a new mode of expression, a voice. To fulfill this requirement, students may take a variety of courses, including creative writing, screenwriting, playwriting, theatre performance, photography, music, and dance.

Requirement: Students take one Creative and Performing Arts course, which may be 1 or more credits.
Han van Meegeren achieved such fame as a forger that, after he died, people attempted to forge more of his forgeries. And science is still uncovering some of his work.

Han van Meegeren was a Dutch man who loved Dutch art. Unfortunately he loved it a few centuries too late. Van Meegeren was a devotee of the Old Dutch Masters, and so he learned to paint very well in a style that, by the early 1900s, no one cared for anymore. His works were, critics said, unoriginal. But while deriding van Meegeren’s work, they continued to praise the older paintings, raving about how they were timeless masterpieces. This made Van Meegeren understandably bitter. He decided that if the critics wanted old paintings, they’d get old paintings. Thus began one of the most lucrative art forging careers of all time. The man was a talented artist, and could paint in nearly any style, but his most celebrated paintings were forgeries of Vermeer. They sold for the modern equivalent of millions of dollars.

Van Meegeren may never have been caught if he hadn’t confessed. The Nazis rolled through Holland, and one of Van Meegeren’s “Vermeer” paintings ended up being sold to Hermann Göring. In the eyes of the world, Van Meegeren was a collaborator, selling the cultural heritage of his nation to its enemy. Van Meegeren confessed to forging the painting, and many others. He was sentenced to a year in jail for fraud, but died before he could serve his sentence.

Van Meegeren wasn’t eager to serve any more time than he already was in for, so he didn’t volunteer the full list of his works. During his life he kept his nose to the grindstone, though, so after his death, scientists have discovered quite a few of his works by analyzing supposedly old paintings. One type of analysis shows how very dedicated Van Meegeren was as a forger. The white paint in both modern and older paintings is pigmented with white lead. White lead is a compound that includes, naturally, lead. Lead ores taken from period-appropriate sources contain specific amounts of radium-226, which decays over time into lead-210. The lead also contains precise ratios of carbon-12 and carbon-13. Scientists can analyze the proportions of isotopes of lead, carbon-12, and carbon-13 to determine whether the lead paint in a dubious piece was mined in the right place at the right time. When analyzed this way, Van Meegeren’s pieces often checked out. He was using historically accurate sources of lead for his paintings.

What he couldn’t get hold of was the right kind of carbon. Old lead paints were made using old plant sources, which gave off old carbon dioxide. These sources included every archaeologist’s best friend, carbon-14. This can allow scientists to date paintings with precision. Modern white lead uses fossil fuels, and carbon-14 is mostly absent. Carbon-14 dating is what allowed scientists, years after Van Meegeren passed away, to declare supposedly old paintings new Van Meegerens.

These days, Van Meegerens have a cachet of their own. That is, if they’re true Van Meegerens. Other painters (including Van Meegeren’s son) can make a pretty penny forging his forgeries.
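To make the dating arithmetic concrete, here is a small sketch of the standard radiocarbon decay calculation. The measured fractions below are invented for illustration, not assay data from any painting; the half-life is the commonly used value of roughly 5,730 years.

```python
# Radiocarbon age from the surviving fraction of carbon-14.
# The fractions below are invented illustrations, not real assay data.

import math

HALF_LIFE_C14_YEARS = 5730.0  # commonly cited carbon-14 half-life

def radiocarbon_age(surviving_fraction: float) -> float:
    """Solve N/N0 = (1/2)**(t / half_life) for t."""
    return -HALF_LIFE_C14_YEARS * math.log2(surviving_fraction)

# Paint whose carbonate carbon came from the 17th-century atmosphere still
# holds most of its C-14; white lead made with fossil-era carbon holds almost
# none, so it reads as absurdly old, the giveaway for a modern forgery.
print(f"{radiocarbon_age(0.964):,.0f} years")  # ~300 years: plausible for Vermeer's era
print(f"{radiocarbon_age(0.01):,.0f} years")   # ~38,000 years: fossil carbon, modern paint
```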
SACRAMENTO – California’s groundbreaking effort to fight climate change took another big step forward today as the California Air Resources Board released the proposed plan to reduce greenhouse gas emissions by 40 percent below 1990 levels by 2030 – the most ambitious target in North America. The plan builds on the state’s successful efforts to reduce emissions and outlines the most effective ways to reach the 2030 goal, including continuing California’s Cap-and-Trade Program.

Achieving the 2030 target under the proposed plan will continue to build on investments in clean energy and set the California economy on a trajectory to achieving an 80 percent reduction in greenhouse gas emissions by 2050. This is consistent with the scientific consensus on the scale of emission reductions needed to stabilize atmospheric greenhouse gas concentrations at 450 parts per million carbon dioxide equivalent, and reduce the likelihood of catastrophic climate change.

“Climate change is impacting California now, and we need to continue to take bold and effective action to address it head on to protect and improve the quality of life in California,” said CARB Chair Mary D. Nichols. “The plan will help us meet both our climate and our clean air goals in the coming decades and provide billions of dollars in investments to cut greenhouse gases, smog and toxic pollution in disadvantaged communities throughout the state. It is also designed to continue to drive creative innovation, generating good new jobs in the growing clean technology sector.”

For the past decade, California has been reducing emissions through a series of actions, innovative solutions and advances in technology. These include cleaner, more fuel-efficient cars and zero emission vehicles, low-carbon fuels, renewable energy, waste diversion from landfills, water conservation, improvements to energy efficiency in homes and businesses, and a Cap-and-Trade Program. The result is improved public health, a growing economy with more green jobs, and better clean energy choices for Californians.

Assembly Bill 32, signed in 2006, set California’s initial goal to reduce greenhouse gas emissions to 1990 levels by 2020 and directed CARB to develop a climate change scoping plan – to be updated every five years – detailing specific measures needed to reach the target. Today’s proposed plan, required by the Governor’s April 2015 Executive Order, updates the previous scoping plan to account for the new 2030 target codified in Senate Bill 32.

The proposed plan continues the Cap-and-Trade Program through 2030 and includes a new approach to reduce greenhouse gases from refineries by 20 percent. It incorporates approaches to cutting super pollutants from the Short Lived Climate Pollutants Strategy. And it acknowledges the need for reducing emissions in agriculture and highlights the work underway to ensure that California’s natural and working lands increasingly sequester carbon.

Achieving the 2030 goal will require contributions from all sectors of the economy and will include enhanced focus on zero- and near-zero emission vehicle technologies; continued investment in renewable energy, including solar and wind; greater use of low-carbon fuels; integrated land conservation and development strategies; coordinated efforts to reduce emissions of short-lived climate pollutants, which include methane, black carbon and fluorinated gases; and an increased focus on integrated land-use planning to support livable, transit-connected communities.
The proposed plan, which follows the release of a discussion draft in December, analyzes the potential economic impacts of different policy scenarios, including a carbon tax, and calculates the benefit to society of taking actions to reduce greenhouse gas emissions. The plan also includes the estimated range of greenhouse gas, criteria pollutant and toxic pollutant emissions reductions of each measure. The analysis in the plan finds that Cap-and-Trade is the lowest-cost, most efficient policy approach and provides certainty that the state will meet the 2030 goals even if other measures fall short.

The Cap-and-Trade Program funds the California Climate Investments program, which provides funds for community, local, regional and statewide projects aimed at reducing greenhouse gas emissions – with at least 35 percent of proceeds invested in disadvantaged and low-income communities. To date, a total of $3.4 billion in cap-and-trade funds has been appropriated for the California Climate Investments program.

The proposed plan was developed by CARB staff over the past 18 months, working with multiple State agencies and departments. This effort was guided by legislation and reflects input from dozens of public workshops and community meetings, and input from CARB’s Environmental Justice Advisory Committee and many other stakeholders.

The first of three public hearings on the proposed plan will be held at the regularly scheduled Board meeting on January 27. The California Air Resources Board is slated to hold workshops in February and hear an update at the February 16 Board meeting. The Final 2017 Scoping Plan Update will be released in late March and be considered for approval by the Board in late April.

The full text of “The 2017 Scoping Plan Update: The Proposed Plan for Achieving California’s 2030 Greenhouse Gas Target” is available at:

Stakeholders and the public are encouraged to submit comments by 5:00 PM PST on March 6, 2017.
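As a rough check on the scale of the targets named above: taking California’s 1990 emissions baseline to be roughly 431 million metric tons of CO2-equivalent (an assumed figure for illustration; the release itself does not state it), the percentage targets translate as follows.

```python
# Back-of-the-envelope emission levels implied by the percentage targets above.
# The 1990 baseline is an assumed illustrative figure (~431 MMTCO2e).

BASELINE_1990_MMT = 431.0

def target_level(baseline_mmt: float, percent_below: float) -> float:
    """Emissions level that is `percent_below` percent under the baseline."""
    return baseline_mmt * (1.0 - percent_below / 100.0)

print(f"2020 goal (return to 1990 levels): {BASELINE_1990_MMT:.0f} MMTCO2e")
print(f"2030 target (40% below 1990):      {target_level(BASELINE_1990_MMT, 40):.0f} MMTCO2e")
print(f"2050 goal (80% below 1990):        {target_level(BASELINE_1990_MMT, 80):.0f} MMTCO2e")
```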
Camille Pissarro Biography, Life, Interesting Facts

Born on July 10, 1830, Camille Pissarro was an impressionist and neo-impressionist painter. The Danish-French national is recognized for his contributions to the development of Impressionism. Having followed in the footsteps of great painters like Gustave Courbet and Jean-Baptiste-Camille Corot, he brought together fifteen impressionist artists to develop and explore new themes. At age 54, Pissarro took an interest in neo-impressionism and studied the techniques of Georges Seurat and Paul Signac. He became the only artist to have exhibited his works at all eight Paris Impressionist exhibitions held between 1874 and 1886. Referred to as the dean of Impressionist painters, Camille Pissarro was like a father to other painters because of his good interpersonal relationships and in-depth knowledge of his field.

Jacob Abraham Camille Pissarro was born to Frederick and Rachel Manzano in St Thomas and had three other siblings. He attended the Savary Academy, a boarding school in Passy, near Paris. During his school days, he was taken with the French art masters and developed an interest in the arts. His schoolmaster, Monsieur Savary, helped him build on this interest by taking him through drawings and paintings. He was encouraged to take inspiration from nature and draw from it. Pissarro heeded this advice and practiced it upon his return to St Thomas.

Despite his interest in the arts, his father had another vision for him: he wanted him to become part of his business, and gave him a job as a cargo clerk. Pissarro always made sure he took his tools to work and practiced whenever he was free, so for the five years he spent in his father’s business, he never stopped drawing. Pissarro later came into contact with the Danish artist Fritz Melbye, who convinced him to take on painting as a profession and offered to be his teacher. He vacated his position and traveled with Melbye to Venezuela. The two worked as artists there and, bearing in mind Monsieur Savary’s advice, Pissarro included landscapes, village scenes, and other natural subjects in his paintings.

Life In France

After two years in Venezuela, Camille Pissarro moved to Paris to work with Anton Melbye, brother of Fritz Melbye. There he acted as an assistant to Anton and studied the works of other painters, including Courbet, Jean-François Millet, Charles-François Daubigny, and Corot. To improve his skills, he took some classes at the Académie Suisse and the École des Beaux-Arts but later found the lessons dull, so he looked for alternatives. He then asked one of the artists he had previously studied, Corot, to help him, which Corot accepted. After gaining experience and meeting the Salon’s standards, he debuted at the Paris Salon exhibition in 1859. Pissarro painted prolifically, with inspiration from Corot.

Between 1870 and 1871, Pissarro moved to Norwood because, as a Danish national, he could not join the French army during the Franco-Prussian War. He painted extensively at Norwood and Sydenham, including a view of St. Bartholomew’s Church known as The Avenue, Sydenham, as well as The Crystal Palace and Norwood Under the Snow, works later catalogued by Lionello Venturi. Upon his return to France, Camille Pissarro found that most of the paintings he had left behind had been destroyed; of more than 1,500 paintings, only about 40 remained.
Most of the destroyed paintings were believed to have been in the impressionist style. Some critics have described impressionist painting as the brainchild of Pissarro. Pissarro then planned a competition and alternative to the Salon to help like-minded painters display their style outside the Salon’s classifications. He brought together fifteen Impressionist artists to form the Société Anonyme des Artistes, Peintres, Sculpteurs et Graveurs in 1873. Though the pivotal figure of the group, Pissarro did not impose himself on the others but worked with them cordially and as an equal.

The group held its maiden Impressionist Exhibition in 1874 and was met with harsh criticism. Their newly developed theories were faulted and criticised as an insult to the craft of the traditional artist. The style shifted from the traditional portrayal of historical and religious subjects to commonplace scenes and everyday settings.

Pissarro later found the impressionist theme limiting and searched for different themes during the 1880s. This was arguably the end of the impressionist movement. He went back to his youthful style of painting, now depicting the lives of country people and local settings. This work drew negative reviews, but to Pissarro it was aimed at educating the public: he said he wanted to paint realistic works rather than idealized ones.

Pissarro also took up pointillism, which he developed alongside Georges Seurat and Paul Signac, spending about three years, from 1885 to 1888, practicing the new technique. After mastering it, he showcased some of these works at the Impressionist Exhibition in 1886. He later abandoned the neo-impressionist style, saying it was very artificial, and adopted other styles in his subsequent works.

Legacy And Influence

Critics described Camille Pissarro as an outstanding painter among his group. He was termed the most real and naïve member by Armand Silvestre. The historian Diane Kelder wrote that his later works had a dignity, sincerity, and durability that distinguished him. Mary Cassatt, who was once a member of the Impressionists, described Pissarro as one who could teach “the stone to draw.”

Though Pissarro sold few of his works in his lifetime, posthumously his works have sold for millions at auction. One of Pissarro’s lost oil paintings from 1897, Rue St. Honoré, Après Midi, Effet de Pluie, was found hanging in the Museo Thyssen-Bornemisza, a Madrid government museum. Even though the US embassy requested the return of the painting, the request was declined. Other works of his, like Quai Malaquais, Printemps, were also stolen. Others, like Le Boulevard de Montmartre, Matinée de Printemps, done in 1897, were later found in the Israel Museum in Jerusalem.

Camille Pissarro married his mother’s maid, Julie Vellay, in 1871, and the couple had seven children. They lived in Pontoise and later moved to Louveciennes. He died on November 13, 1903, in Paris.
Topics for population censuses

The selection of census topics should be based on the output expected to be produced by the census. Thus, the process of selecting topics begins with a clear identification of expected outputs. A number of considerations have to be taken into account when deciding which topics should be covered in the census. Firstly, prime importance should be given to the fact that population censuses should be designed to meet national needs. Secondly, census topics should be selected in a way that ensures high comparability of results both internationally and over time. Thirdly, it is important to select only topics for which respondents will be willing and able to provide adequate information (and avoid those which may arouse fear, local prejudice or superstition). Finally, the selection of topics should be carefully considered in relation to the total resources available for the census (see the “Principles and Recommendations” for more detailed discussion of this issue).

The “Principles and Recommendations for Population and Housing Censuses, Revision 2” provides a list of recommended topics to be investigated in population censuses. It distinguishes between “core topics” collected directly, “derived core topics” and “additional topics”. Although derived core topics are based on information in the questionnaire, they usually – but not exclusively – do not come from replies to a specific question but are rather obtained indirectly (e.g. total population). Additional topics are topics which are not regarded as having the highest priority but which some countries may find useful to include in their census.

As part of the 2010 World Programme on Population and Housing Censuses, the UNSD is analyzing questionnaires used in population censuses according to the topics covered in the questions. The document Implementation of population census topics in the 2010 census round shows, for every topic, the number of countries which asked questions concerning this topic. This is done only for the direct core topics and the additional topics, since derived topics can usually not be linked immediately to questions in the questionnaire. The results are minimum counts and should be interpreted with caution, since not all topics are based on information coming from the personal questionnaire. Some information is asked directly in the questionnaires but may also come from other enumeration material or a combination of them; this is in particular true for some topics concerning geographical characteristics such as “place of usual residence” or “place where present at time of census”. Furthermore, some topics may be covered by combining information from different parts of the questionnaire (for example, the “age of mother at birth of first child” can be asked for directly or can be calculated by subtracting the date of birth of the woman from the date of birth of her first child).

The document Countries and the number of population census topics covered in their census questionnaires (2010 census round) shows the different degrees to which countries covered population topics in their census questionnaires. The same considerations given above apply to the interpretation of this table. Furthermore, it has to be considered that some countries, in particular those with the most developed statistical systems, rely for information on a number of topics on non-census sources.
Those countries may show only a low number of topics covered in their census questionnaires although data on those topics are available from other sources, including population and other registers as well as sample surveys.
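As an illustration of the tabulation described above, the sketch below counts how many countries ask about each topic across a handful of invented questionnaires. As noted, such figures are minimum counts: a country relying on registers may cover a topic without asking a direct question.

```python
# Illustrative minimum-count tabulation of census topics across countries.
# Country names and topic sets are invented; a real analysis would read the
# questionnaire database described above.

from collections import Counter

questionnaires = {
    "Country A": {"age", "sex", "place of usual residence", "literacy"},
    "Country B": {"age", "sex", "place of usual residence"},
    "Country C": {"age", "sex", "literacy", "religion"},
}

topic_counts = Counter()
for topics in questionnaires.values():
    topic_counts.update(topics)

for topic, n in topic_counts.most_common():
    print(f"{topic}: asked about by {n} of {len(questionnaires)} countries")
```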
An example of ornis is the group of birds that lived in southern Africa during the 8th century.

Origin of ornis: German, from Classical Greek ornis, bird; see ornitho-.

- An avifauna; a bird. From German Ornis, from Ancient Greek ὄρνις (ornis, “bird”).
- Used to form genus names of birds. From New Latin, from Ancient Greek ὄρνις (ornis, “bird”).
Angular momentum = linear momentum × perpendicular distance. Here the perpendicular distance is the radius of the disc, and the quantity required is the angular momentum of the disc about the point of contact. Hence the moment of inertia about that point is MR²/2 + MR² = 3MR²/2, and the angular momentum is 3MR²ω/2.
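Restated compactly via the parallel-axis theorem (the same working as above):

```latex
I_{\text{contact}} = I_{\text{cm}} + MR^{2}
                   = \tfrac{1}{2}MR^{2} + MR^{2}
                   = \tfrac{3}{2}MR^{2},
\qquad
L = I_{\text{contact}}\,\omega = \tfrac{3}{2}MR^{2}\omega .
```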
In cost accounting, variance analysis is an important tool for evaluating a company's performance and improving its efficiency. In variance analysis, we compare actual cost and revenue with standard cost and revenue to determine whether each variance is favorable or unfavorable. A favorable variance (F) shows that actual cost is less than standard cost, or that actual revenue is more than standard revenue. An unfavorable or adverse variance (U or A) shows that actual cost is more than standard cost, or that actual revenue is less than standard revenue. The types of variance below are the steps to a deeper study of variance. We classify variance in the following ways.
1st Type of Variance: Direct Material Variance
Direct material variance shows the difference between the actual cost of material for actual units and the standard cost of material for standard units. It is also the total of the material price variance and the material quantity variance. If there is a favorable material quantity variance and an unfavorable material price variance, or vice versa, the direct material cost variance may be either favorable or unfavorable, because it is the total of the material price and material quantity variances.
2nd Type of Variance: Labor Variance
Labor variance shows the variance of labor cost. It is the difference between the standard cost of labor for actual production and the actual cost of labor for actual production.
3rd Type of Variance: Overhead Variance
Overhead variance shows the variance of all indirect costs. It is the difference between the standard cost of overhead for actual output and the actual cost of overhead for actual output.
4th Type of Variance: Sales Variance
Sales variance shows the difference between actual sales and standard sales. In an unfavorable sales variance, actual sales are less than standard sales. Sales variance is a good way to assess the performance of the sales department.
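As a rough sketch of the arithmetic (the numbers are invented, and the sign convention is a choice for illustration, not part of the text above), the material price and quantity variances combine into the direct material cost variance:

```python
def material_variances(std_price, std_qty, actual_price, actual_qty):
    """Textbook formulas: positive = favorable, negative = unfavorable."""
    price_variance = (std_price - actual_price) * actual_qty
    quantity_variance = (std_qty - actual_qty) * std_price
    return price_variance, quantity_variance, price_variance + quantity_variance

pv, qv, total = material_variances(std_price=5.00, std_qty=1000,
                                   actual_price=5.20, actual_qty=950)
print(pv, qv, total)  # -190.0 (U), 250.0 (F), 60.0 net favorable
```

Note how an unfavorable price variance and a favorable quantity variance can still net out to a favorable total, exactly as described above.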
World War I created volatility in Europe, which would later lead to the beginning of World War II. The devastation caused by the Great War (as World War I was then called) left Europe unstable enough for Hitler to rise to power and begin his march for world domination. Great Britain and France were the first nations to declare war on Germany, following Hitler's invasion of Poland in 1939. This began a six-year-long endeavor for the Allied forces to defeat the Axis powers (Italy, Japan, and Germany, among other nations). This war was the most devastating of any previous war, with casualties in the tens of millions, including an estimated 6 million Jews killed within Nazi concentration camps during the Holocaust.
The political and military leaders of that time were definitely in it to win it. So I am including another 'top 5' list, this time a list of the top WWII military leaders.
- First and foremost has got to be General Patton (George S. Patton Jr.), who was appointed commanding general of U.S. operations in North Africa. He was a skilled strategist in tank warfare, and he is probably best known for his role in the Battle of the Bulge.
- General Bernard Montgomery, a British Field Marshal, was best known for his role in actively planning the D-Day invasion of Normandy. He also saw combat during WWI, where he was very badly wounded. He took the German surrender in 1945 at Luneburg Heath in northern Germany. After the war he became the Commander-in-Chief of the British Army of the Rhine (BAOR) in Germany and then Chief of the Imperial General Staff.
- Lt. General Omar Bradley was the last of only nine people to hold five-star rank in the U.S. Armed Forces, serving as General of the Army. He was a senior U.S. field commander in North Africa and Europe, and he had command of all U.S. ground forces invading Germany from the west. He held command over 43 divisions and 1.3 million men, reported as the largest body of American soldiers ever to serve under a U.S. field commander.
- Dwight D. (Ike) Eisenhower served as supreme commander of the Allied forces in western Europe during WWII. He also held five-star rank as general and had responsibility for planning and overseeing the invasion of North Africa during Operation Torch. He later became our 34th president.
- General of the Army Douglas MacArthur served as general and field marshal of the Philippine Army during WWII. General MacArthur also served as Chief of Staff of the United States Army during the 1930s. He was awarded the Medal of Honor for his service in the Philippines Campaign; he and his father (Arthur MacArthur Jr.) became the first ever father-son duo to both receive the Medal of Honor. He was also one of only five men to rise to the rank of General of the Army in the U.S. Army, and the only man ever to become a field marshal in the Philippine Army.
These men served their countries and stood watch over civilians during WWII. Their service and the sacrifices they made are well known and deserve honor and respect. On another note, there were also very prominent political leaders during the war, all of whom worked together to bring the advances of Hitler's forces to a halt.
Please feel free to offer up more information on the above top World War II military leaders, as well as any others who deserve recognition, in the comments section. You can leave comments or questions in the section below. Thank you.
About the Author: Lauren is a stay-at-home mom currently working from home as a freelance writer. She is certified in education, with a background in writing and tutoring that helps students develop their educational skills. She comes from a military family and writes articles about education, military life, and personal development.
How Long Are Cats Pregnant? Cats waste little time when it comes to pregnancy. Most female cats can conceive when they're just four months old (it's important to spay and neuter early to avoid unplanned pregnancies), and they can have as many as three litters each year. Once pregnant, cats gestate for only 62 to 67 days. That's roughly nine weeks from conception to birth. The typical nine-month human pregnancy is an eternity by comparison. Like we said, cats are no slouches in terms of reproduction. Below we've outlined the various stages of feline pregnancy so you'll know exactly what to expect when you're expecting kittens. Early in the Pregnancy During the first couple weeks after conception, your cat won't show many obvious signs of pregnancy. By the third week, though, it should become more apparent. You might notice her nipples are pinker or rosier. Her appetite could increase – she's eating for more than one now, after all. She may vomit occasionally, just like women in the early stages of pregnancy. You might notice she's more affectionate (good news for you!) and seeking more attention, so don't skimp on petting your mother-to-be. If you want to know with certainty that your cat is with kittens, take her to the vet. After three weeks of pregnancy your vet should be able to examine her stomach and feel the embryos (don't try this at home – you could cause a miscarriage). If there's still some uncertainty, your vet might perform an ultrasound. Throughout the pregnancy make sure your cat is getting plenty of water and food. Later in the Pregnancy Your pregnant cat will begin to gain weight – usually a total of two to three pounds by the time the kittens are born. The weight gain should be very noticeable four to five weeks into the pregnancy. By the final week of pregnancy her mammary glands will be enlarged and ready to produce milk. Around this time she may begin nesting – finding a quiet, safe place to give birth to her kittens. Consider providing your cat with a nesting box. To make your own, simply cut a hole (this will be the entryway) in the side of a large cardboard box. Throw in something she will like to sit on, like a towel or blanket, to make the box more comfortable. Be sure to place the box in a quiet area that's not lit too brightly. Give her a week or two to get used to the nesting box. Your goal is to help your expectant cat feel safe, comfortable, and ready for the big day. When your cat is very close to giving birth, she may become restless, meow frequently, refuse to eat, and spend more time in her nesting area. Once she goes into labor, she may yowl or even vomit. The good news is the birthing process is generally a quick one for cats. If she is unable to push a kitten out after an hour or so of difficult labor, talk to your vet (a C-section may need to be performed but chances are low that you will have to do this). The average litter includes four to six kittens, but it can consist of as many as eight and as few as one. Keep an eye on your cat and the new kittens throughout the birthing process, but don't get too close or involved unless complications arise. In most cases the new mom can handle things all on her own. As always, consult your vet about any concerns or complications. Last but not least, take a moment to adore those cute new kittens. By Jed M.
SS.4.A.3.10: Identify the causes and effects of the Seminole Wars. The United States waged three wars against the Seminoles in the 19th century. The largest of these conflicts, the Second Seminole War, was the longest and most costly American Indian war in U.S. history. This episode describes the third and final Seminole War.
Episode Six: The Third Seminole War
Episode Six discusses the third and final Seminole War. The resources for Episode Six are an 1844 surveyor's map of Florida, (Slide Three) an 1853 map of Florida by H.S. Colton, (Slide Four) an 1859 map of Florida by the firm of Charles Desilver, and (Slide Five) an 1873 map of Florida by Asher Adams.
A third Seminole War almost broke out in July 1849, when Seminole warriors attacked a farm near modern-day Fort Pierce. Two weeks later, another attack took place near the Peace River in southwest Florida. Initially caught off guard by the attacks, the U.S. government was unprepared to renew a military campaign against the Seminoles in Florida. Fearful that the actions of a few might endanger all of his people, Billy Bowlegs, the most powerful Seminole leader remaining in Florida, met with the Americans and vowed to turn over those responsible for the 1849 attacks. Despite surrendering the accused attackers, Bowlegs was informed that all Florida Indians had to leave Florida. The major Seminole settlements during this time are indicated on this map by blue triangles.
Over the next two years, several dozen Seminoles were bribed and pressured to leave Florida for the Indian Territory in the West. In 1852, Billy Bowlegs met with President Millard Fillmore to discuss removal. Bowlegs agreed to the President's demands for removal but, upon returning to Florida, decided not to leave the state again. In response, the Americans placed additional pressure on the Seminoles in order to force them into a conflict with the U.S. Army. The plan worked. In late December 1855, warriors led by Billy Bowlegs attacked a party of surveyors who had disturbed Seminole camps near the Big Cypress Swamp. The approximate location of Bowlegs' camp is indicated on this map by a blue triangle.
Over the next three years, the Americans attempted to drive the Seminoles into the Everglades. They thought that the Seminoles could not survive in the swamps during the rainy season; when they came out to plant crops, the Army planned to capture them. The Americans built a series of forts to surround the Seminoles. Compare this map from 1859 with the previous map from 1853. Note the increased military presence indicated by the number of forts reactivated or built during the Third Seminole War.
Although the Seminoles dealt a series of blows against frontier settlements during the Third Seminole War, the Americans' plan proved largely successful. By 1858, most of the Seminoles remaining in Florida had been captured or had agreed to emigrate west, including Billy Bowlegs. When the Third Seminole War finally came to an end, only an estimated 200 Seminoles remained in Florida. The approximate locations of major post-Seminole Wars camps are indicated on this map by blue triangles. After three wars, Florida Seminoles now faced a new life in a challenging environment.
John Dryden’s Of Dramatic Poesie (also known as An Essay of Dramatic Poesy) is an exposition of several of the major critical positions of the time, set out in a semidramatic form that gives life to the abstract theories. Of Dramatic Poesie not only offers a capsule summary of the status of literary criticism in the late seventeenth century; it also provides a succinct view of the tastes of cultured men and women of the period. Dryden synthesizes the best of both English and Continental (particularly French) criticism; hence, the essay is a single source for understanding neoclassical attitudes toward dramatic art. Moreover, in his discussion of the ancients versus the moderns, in his defense of the use of rhyme, and in his argument concerning Aristotelian prescripts for drama, Dryden depicts and reflects upon the tastes of literate Europeans who shaped the cultural climate in France and England for a century. Although it is clear that Dryden uses Neander as a mouthpiece for his own views about drama, he is careful to allow his other characters to present cogent arguments for the literature of the classical period, of France, and of Renaissance England. More significantly, although he was a practitioner of the modern form of writing plays himself, Dryden does not insist that the dramatists of the past are to be faulted simply because they did not adhere to methods of composition that his own age venerated. For example, he does not adopt the views of the more strident critics whose insistence on slavish adherence to the rules derived from Aristotle had led to a narrow definition for greatness among playwrights. Instead, he pleads for commonsensical application of these prescriptions, appealing to a higher standard of judgment: the discriminating sensibility of the reader or playgoer who can recognize greatness even when the rules are not followed. For this reason, Dryden can champion the works of William Shakespeare over those of many dramatists who were more careful in preserving the unities of time, place, and action. It may be difficult to imagine, after centuries of veneration, that at one time Shakespeare was not held in high esteem; in the late seventeenth century, critics reviled him for his disregard for decorum and his seemingly careless attitudes regarding the mixing of genres. Dryden, however, recognized the greatness of Shakespeare’s productions; his support for Shakespeare’s “natural genius” had a significant impact on the elevation of the Renaissance playwright to a place of preeminence among dramatists. The period after the restoration of the Stuarts to the throne is notable in English literary history as an age in which criticism flourished, probably in no small part as a result of the emphasis on neoclassical rules of art in seventeenth century France, where many of King Charles II’s courtiers and literati had passed the years of Cromwell’s rule. Dryden sets his discussion in June, 1665, during a naval battle between England and the Netherlands. Four cultivated gentlemen, Eugenius, Lisideius, Crites, and Neander, have taken a barge down the River Thames to observe the combat and, as guns sound in the background, they comment on the sorry state of modern literature; this naval encounter will inspire hundreds of bad verses commending the victors or consoling the vanquished. Crites laments that his contemporaries will never equal the standard set by the Greeks and the Romans. 
Eugenius, more optimistic, disagrees and suggests that they pass the remainder of the day debating the relative merits of classical and modern literature. He proposes that Crites choose one literary genre for comparison and initiate the discussion. As Crites begins his defense of the classical drama, he mentions one point that is accepted by all the others: Drama is, as Aristotle wrote, an imitation of life, and it is successful as it reflects human nature clearly. He also discusses the three unities, rules dear to both the classicist and the...
Unlike true crabs, hermit crabs have soft, vulnerable abdomens. For protection from predators, many hermit crabs seek out abandoned shells, usually snail shells. When a hermit crab finds one of the proper size, it pulls itself inside, leaving several legs and its head outside the shell. (A hermit crab has five pairs of legs, but not all of them are fully developed.) A hermit crab carries the shell wherever it goes. When it outgrows its shell, it switches to a larger one. Most adult hermit crabs are from 1/2 inch (13 mm) to 4 3/4 inches (121 mm) long. Living on the seashore, in tidepools, and on the sea bottom in deeper water, hermit crabs scavenge their food.
Ten ways to improve your displays, save on your electricity costs and avoid the most common lighting mistakes.
1. The rate per kilowatt-hour that shows on your utility bill is meaningless on its own because all charges need to be considered and paid. Determine your real electricity rate by dividing the bottom line, how much you owe, by the number of kilowatt-hours you used. That is the blended rate and the only one that matters. The examples that follow use 10 cents per kilowatt-hour because it is an easy number to adjust to any rate.
2. If your electric bill is $100.00 then $20-$25 is due to lighting. That 20-25% for lighting is easily addressed to reduce direct and indirect lighting costs. One watt of electricity in a furniture store costs 40 cents per year, assuming a rate of 10 cents and 4,000 operating hours.
3. Each watt of electricity used by a light bulb generates a watt of heat. Heat needs to be handled by the HVAC system, so there is an indirect cost. It follows that if we reduce the heat generated, we can lower the load on the HVAC. A 40-watt fluorescent tube generates 40 watts of heat. A 40-watt light bulb in a table lamp generates 40 watts of heat. A 40-watt halogen generates 40 watts of heat. Watts is watts.
4. It costs twice the bulb cost to change a light bulb. Even when replacing the bulb in a table lamp, one has to consider the time it takes to recognize the problem, get the bulb, make the change, and dispose of the failed lamp. Getting the ladder to change a fluorescent in the ceiling, including moving and repositioning merchandise, adds more time to the process. Fluorescents should be changed all at once, a technique called group relamping, to reduce labor cost by 30-50%.
5. Fluorescent light can (and should) be used in showrooms to provide ambient or background lighting. Fluorescents now have excellent color rendering and service life. Modern fluorescent light scores about 85 for color rendering on a scale of 0-100. The old tubes, cool white-warm white, score 60-65. We can see a big difference in color at 85 compared to 60. The difference between a fluorescent at 85 and a fluorescent at 95 is hardly noticeable.
6. Incandescent light scores 100 for color rendering on the 0-100 scale. It may seem strange, but the crisp, white light of halogen gets 100, as does the dirty, yellow light of a 130-volt incandescent. The reason is that color rendering, the scale of 0-100, is determined relative to the color temperature of the light source. Halogen is about 3000 degrees Kelvin and the incandescent is at 2700 degrees. On the Kelvin scale, 3000 is cooler than 2700. This has nothing to do with heat. Warm light contains proportionately more red while cool light contains more blue.
7. The color temperature thing in #6 is really important. Just as with the different appearance of halogen and incandescent light, fluorescent bulbs can have the same color rendering (0-100) but colors will appear different. A fluorescent at 3000 degrees has more red than a fluorescent at 4100. Think of a blacksmith heating iron. As the iron gets hotter it goes from red to white to blue. The Kelvin scale is based on that concept. Incandescent light, including halogen, heats a filament, same as the blacksmith, so incandescent color temperature matches the Kelvin scale. Fluorescent light has no filament, so color temperature is more difficult to determine. Fluorescent light is correlated to the Kelvin scale.
When we say a fluorescent is 3000 degrees, we are really saying it falls into a range called 3000 degrees rather than giving an absolute value of 3000.
8. Years ago stores had only fluorescent light. Then we switched to incandescent reflector floods to improve the color of products. The next switch was to halogen because it was more efficient than incandescent. Stores were track-only at this point. Efficiency was a consideration because electricity rates were increasing. Halogen bulbs cost more than incandescent reflectors but use less electricity and last longer. Fluorescent light with improved color rendering was added back into the lighting mix because track-only stores are very dark compared to most shopping experiences.
9. Clearly there are differences in light sources. You can make shadows with halogen light because it is a point source. Light (and lots of it) is generated at the single point of a filament. Fluorescent light is generated along the entire surface of the bulb and is a diffuse source. You cannot make a shadow with fluorescent light. For this reason fluorescent light should not be used in track lighting to accent merchandise. There will be no accent. The appearance of a fluorescent-only store is like an old K-Mart.
10. The best practice for a good-looking store is to provide an adequate level of ambient light from a fairly uniform fluorescent layout and accent lighting from track to provide visual interest to the sales floor. The amount of ambient light and accent light should depend on the price points and the type of merchandise. The rule of thumb is: the higher the merchandise price, the lower the ambient light level. Merchandise appearance is the key issue.
Top Five Energy Saving Ideas
1. Replace every incandescent bulb on the sales floor with a compact fluorescent. The typical bulb in a table lamp is a 40-watt, standard incandescent. All it needs to do is illuminate the shade and a little of the tabletop. Replacing one lamp, 40-watt incandescent to 9-watt compact fluorescent, saves 31 watts, worth $12.40 per year. The payback on this investment is less than 3 months, and your merchandise displays will look the same. If you like the brighter look of 60-watt bulbs in table lamps, then use a 13-watt compact fluorescent. Savings: 47 watts or $18.80 per lamp, per year.
2. Replace every downlight with a fluorescent. Downlights are typically recessed cans in the ceiling that provide general illumination to an area or hallway. They may have reflector or halogen bulbs. Don't replace those with the compact fluorescents that you used in table lamps (twists or spirals). Downlights call for a reflector-style compact fluorescent to direct light out of the fixture and withstand the heated environment inside the can. Savings are typically 62 watts or $24.80 per year, per can.
3. Audit your track lighting. Take a good look at (1) the placement of track heads, (2) the bulbs being used and (3) whether each bulb is aimed properly. Heads should NOT be evenly placed on a track. Heads should be moved as necessary to illuminate each display. Some displays take more heads while some need fewer heads. You will probably find that 10% of your track heads are underemployed and can be removed. My rule of thumb is to use 4 or 5 track heads per room group. If you are using more than that, check the bulb. You may have 130-volt bulbs, which do last longer but have lower light output than the 120-volt bulbs. Often, retailers will add more bulbs (read that as more cost is added) to compensate. The result is that a long-lasting 130-volt bulb costs more money, heat and labor.
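The arithmetic behind these savings claims is simple to verify; here is a sketch using the article's 10-cent blended rate and 4,000 operating hours (the $3.00 bulb price in the payback line is a hypothetical figure, not from the article):

```python
RATE_PER_KWH = 0.10      # blended rate, $/kWh
HOURS_PER_YEAR = 4000    # annual operating hours for a furniture store

def annual_cost(watts: float) -> float:
    """Yearly electricity cost of running one bulb."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

print(annual_cost(1))    # 0.40 -> one watt costs 40 cents per year

# Swapping a 40 W incandescent for a 9 W compact fluorescent:
savings = annual_cost(40) - annual_cost(9)
print(savings)           # 12.40 saved per lamp, per year

# Payback on a hypothetical $3.00 replacement bulb:
print(3.00 / savings * 12)  # about 2.9 months
```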
4. Replace incandescent reflectors or conventional halogen bulbs with Infrared Coated (IRC) bulbs. The incandescent reflector went out of favor because it is inefficient and produces a yellowish light. The incandescent reflector replacement was halogen. Halogen was an improvement, but IRC technology makes it even better. Inside every halogen bulb is a capsule about the size of the tip of your thumb. That capsule contains a filament and halogen gas. By coating the inside of the capsule with a material that reflects infrared heat back onto the filament, the filament becomes hotter and produces more light. By recycling waste heat, more light is produced for less energy. Service life is 4,200 hours on the Philips bulb we recommend, so that means fewer changes compared to the standard, 2,500-hour lamp. More light, less labor, and a savings of $10-$12 per year per track head.
5. Audit your fluorescent lighting. If you find fixtures containing T12 fluorescents in your store, it is time to upgrade your lighting. T12 fluorescents are 1.5 inches in diameter and will be stamped with F40T12 or F34T12 if they are four feet long, or F96 if they are 8-footers. The time has come to replace T12 fixtures because they are inefficient and are no longer allowed in new construction. A typical 2x4, four-lamp T12 fixture will use between 160 and 180 watts and can be replaced by a three-lamp T8 fixture using 82-87 watts. Your investment in a new fixture is paid back in less than three years.
Monte Lee is a Regional Manager for Service Lamp Corporation, a distributor of lighting products and services. Inquiries on any aspect of furniture store lighting can be sent to him at [email protected]. See all of Monte Lee's articles on store lighting posted to the www.furninfo.com website.
The soldier beetles (Cantharidae) are relatively soft-bodied, straight-sided beetles. They are cosmopolitan in distribution. One of the first described species has a color pattern reminiscent of the red coats of early British soldiers, hence the common name. They are also known commonly as leatherwings because of their soft elytra.
Historically, these beetles were placed in a superfamily "Cantharoidea", which has been subsumed by the superfamily Elateroidea; the name is still sometimes used as a rankless grouping, including the families Cantharidae, Drilidae, Lampyridae, Lycidae, Omalisidae, Omethidae, Phengodidae (which includes Telegeusidae), and Rhagophthalmidae.
A novel coronavirus (COVID-19) has spread across most of the world. The virus first wreaked havoc on the city of Wuhan, situated in the Hubei region of China. The COVID-19 outbreak started at the beginning of December 2019 and has continued to spread. The first individuals to become infected were linked to the South China Seafood Wholesale Market, which has been shut ever since. A vast number of cases have come to the notice of the health authorities in China. Cases have likewise appeared in other nations, generally spread by individuals traveling out of China, including Chinese individuals and people returning from China to their respective countries. The infection can spread from one individual to another through contact, or even through mere proximity to an infected person. We all know of this virus, but it is time to consider the coronavirus in detail.
What is Covid-19 – the virus that started from Wuhan?
Coronaviruses are a large family of viruses. Most known coronaviruses have a mild effect on people, for example giving them a respiratory ailment like a common cold. However, two coronaviruses have shown severe consequences for the infected: the Severe Acute Respiratory Syndrome (SARS) coronavirus and the Middle East Respiratory Syndrome (MERS) coronavirus. A large portion of those initially infected either worked or often shopped in the Huanan seafood market in the Chinese city. A novel coronavirus (nCoV) is a new strain that had not previously been identified in people.
How can you protect yourself from the coronavirus?
Coronaviruses regularly cause respiratory symptoms, so we recommend basic hand hygiene, such as washing your hands properly with soap and water, and respiratory hygiene, such as sneezing into your elbow. Ways to protect yourself against a potential animal source are to avoid unnecessary unprotected contact with live animals and to wash your hands properly after animal contact. Also make sure your meat is cooked thoroughly before consuming it.
Is there a treatment for the coronavirus?
There are no specific medications for coronaviruses, but the symptoms can be treated.
What are the symptoms of the coronavirus?
Cough, cold, fever, and trouble breathing are some of the signs and symptoms seen in infected people. Some patients have additionally reported a sore throat. There has been some speculation about the severe disease-causing potential of the novel coronavirus, although such claims are not supported by solid evidence. People with chronic ailments and aged patients may face greater odds of severe illness from this infection. As this is viral pneumonia, antibiotics are of no practical use. The antiviral medications we have against influenza will not work. Recovery relies on the strength of the immune system. Many of the individuals who have died were already in poor health.
What is the risk of the coronavirus?
Individuals who are living in or traveling around a region where the infection is prevalent are at risk of infection, as indicated by the WHO.
As of now, the virus is chiefly present in China, and the non-residents of China who have been infected had traveled to China or been in contact with infected individuals from China. As per the WHO, the risk to individuals who are not living in China is very low as long as you do not come into contact with infected travelers from China. Likewise, the WHO states that basic disinfectants can easily get rid of the virus if it is present on a surface, and that the survival time of the virus on any surface is quite low.
What does the WHO say about it?
"COVID-19 is still affecting people in China with some outbreaks in other countries. Most people who become infected experience mild illness and recover, but it can be more severe for others," says the World Health Organisation.
Is it being transmitted from one person to another?
China's national health commission has confirmed human-to-human transmission, and there have been such transmissions elsewhere.
What do the stats say?
As of 4 March, the worldwide loss of life is 3,190, while more than 93,000 individuals have been affected across more than 80 nations. In China, there have been 2,981 deaths and 80,270 cases in all. South Korea, the country worst hit by the coronavirus after China, has had 5,328 cases. Over 44,000 individuals in China have recovered from Covid-19.
"There have been over 90,000 cases so far and a little over 3,000 deaths. Most deaths have occurred in people with compromised immune systems such as the elderly or people with other serious existing illnesses. So, there is a need to exercise caution but not panic. The infection can be avoided by following strict hygiene protocols and awareness and alert behavior is the key," added Kamal Narayan, CEO, Integrated Health & Wellbeing Council (IHW) Council.
Facts Related To Coronavirus For Workplace From Reliable Sources
The dread of the coronavirus, sometimes dubbed 'coranxiety,' is so great that it has prompted the spread of fake resources and misinformation, especially on social media. The number of posts and forwards on Facebook, WhatsApp, Instagram, and other social platforms is increasing day by day. Fake news on websites is increasing along with the rise in confirmed coronavirus cases.
Is the outbreak a pandemic?
No. A pandemic, in WHO terms, is "the worldwide spread of disease." The spread of the infection outside China is worrying but not an unexpected development. The WHO has declared the outbreak to be a public health emergency of international concern. The key issues are how transmissible this new coronavirus is among individuals, and what proportion become seriously ill and end up in the hospital. Viruses that spread easily tend, in general, to have a milder effect. By and large, the coronavirus appears to be hitting older individuals hardest, with scarcely any cases in children.
Healthy Habits Recommended By WHO To Prevent Coronavirus At Workplace
Companies are considering how to manage this dangerous outbreak of coronavirus. This is what they're doing, and how it could affect their employees. Sick leave, work from home, furloughs: have you opted for any of these options? You could experience any of these measures as organizations attempt to keep their workers from being exposed to the coronavirus.
A few organizations have already played it safe, such as restricting travel to affected countries or to any international conferences. Others have asked employees who visited a place or country with a severe outbreak to remain at home.
What else can you do to stay safe at your office?
The tips below will help you learn about steps you can take to protect yourself and other employees at the workplace from viruses and to help stop the spread of germs.
Important Guidelines Stated By WHO For Workplaces
- Avoid close contact. Keep your distance from individuals who are unwell. When you are sick, stay away from others to protect them from becoming ill as well.
- Stay home when you are not well. If possible, stay home from work, school, and errands when you are sick. This will help prevent spreading your ailment to other people.
- Cover your mouth and nose. Cover your nose and mouth with a tissue when sneezing or coughing. It may prevent people around you from becoming ill.
- Clean your hands. Washing your hands properly and frequently will help protect you from germs. If soap and water are not available, use an alcohol-based hand rub.
- Avoid touching your eyes, nose, or mouth. Germs are regularly spread when an individual touches something that is contaminated with germs and afterward touches their eyes, nose, or mouth.
- Practice other good health habits. Clean and disinfect frequently touched surfaces at home, work, or school, particularly when somebody is sick. Get plenty of rest, be physically active, deal with your stress, drink plenty of fluids, and eat nutritious food.
Other Preventive Methods To Consider At Workplace
- Find out about your company's plans if an outbreak of the virus or another illness happens, and whether first aid is offered on-site.
- Clean frequently touched surfaces and objects, including door handles, phones, and keyboards, to help remove germs.
- Make sure your work environment has proper supplies of tissues, soap, paper towels, alcohol-based hand rubs, and disposable wipes.
- Train others on how to carry out your responsibilities so they can cover for you if you or a relative becomes ill and you need to remain at home.
- Go home as soon as possible if you begin to feel sick at the workplace.
- Find out about the plans your youngster's school, child care program, or college has if an outbreak of coronavirus or another illness happens, and whether first aid is offered on-site.
- Make sure your youngster's school, child care program, or college routinely cleans frequently touched objects and surfaces. Make sure that they have proper supplies of tissues, soap, paper towels, alcohol-based hand rubs, and disposable wipes on-site.
- Ask how sick students and staff are isolated from others and who will care for them until they can return home.
What Doctors Are Saying About Covid-19
Dr Aggarwal, former president and former honorary secretary-general of the Indian Medical Association, said: "Coronavirus is large in size where the cell diameter is 400-500 micro and for this reason, any mask prevents its entry so there is no need to use trade muzzles. The virus does not settle in the air but is grounded, so it is not transmitted by air. Coronavirus when it falls on a metal surface, will live 12 hours, so washing hands with soap and water well enough are important".
He also added: "Coronavirus, when it falls on the fabric, remains for 9 hours, so washing clothes or being exposed to the sun for two hours meets the purpose of killing it. The virus lives on the hands for 10 minutes, so putting an alcohol sterilizer in the pocket meets the purpose of prevention. Also drinking hot water and sun exposure will do the trick. And staying away from ice cream and eating cold is important."
How To Tackle Workplace Risks In Context Of Coronavirus
Consider Promoting Hygienic Washroom Practices
"Sanitising rub dispensers must be placed in the most frequented office spots and thorough handwashing must be promoted with the help of posters (such as the one below) and other communication measures such as guidance from occupational health officers," said Savitha Kuttan, cofounder & CEO, Omnicuris. Doctors suggest that staff wash their hands after entering the office and every three hours.
Clean frequently used surfaces and door handles with disinfectants every 3 hours: "At this point in time especially, surfaces and objects must be wiped with a disinfectant because contaminated surfaces are one of the main ways that the virus spread," Kuttan added.
Properly train canteen staff and office helpers: Companies should ensure the canteen area is properly cleaned, that canteen staff are properly guided about hygiene, and that chefs are briefed about the process. "Avoid raw meat, fish, eggs and while preparing food, cook it thoroughly. Ensure a high degree of personal/food hygiene," added Mehra. Clean utensils regularly, as they can be carriers of infection.
Clean desks before starting to work: Every three hours, wipe all laptops and mobiles with a lint-free cloth dipped in alcohol, or use a disinfectant wipe. "This is mandatory. Coronavirus, when it falls on a metal surface, will live 12 hours," Dr Aggarwal said.
When considering a new infectious disease about which so much is as yet unknown, it is imperative to seek out reliable information and act on it. This article shares what to look out for if you feel that the infection may affect you; however, if you are a non-resident who has not been in contact with anybody traveling from China, you have very little chance of catching the infection.
School can sometimes be a serious place with all the learning and studying, but that doesn’t mean we can’t have a good laugh along the way! Here are 50 hilarious jokes that will have your kids giggling in and out of the classroom. These jokes are kid-friendly and guaranteed to bring a smile to their faces. So, get ready to laugh out loud with these school-themed jokes! Best kid jokes about school - Why did the math book look sad? Because it had too many problems! - Why did the teacher wear sunglasses to school? Because her students were so bright! - What’s a snake’s favorite subject? HISStory. - Why did the kid cross the playground? To get to the other slide. - How do bees go to school? On a school BUZZ! Easy jokes kids can tell about school - What’s the smartest bug? A spelling bee. - Why did the clock go to the principal’s office? Because it was always running late! - Who’s the leader of the school supplies? The ruler. - What’s a math teacher’s favorite season of the year? Sum-mer! - Why did the bicycle fall over at school? Because it was two-tired! Fun kid jokes about a school of fish - Why are fish considered the smartest animals? Because they live in schools. - Why did the fish get bad grades in school? Because it was always swimming in the wrong direction! - What did the fish say to the substitute teacher? “School’s a real splash!” - Why don’t fish go on vacation? Because they’re always in a school. - Why was the fish late to school? Because it was fin-ishing its homework! Funny jokes about kids going back to school - Teacher: “Why are you late on the first day of school?” Student: “I saw a sign that said, ‘School Ahead: Go Slow.'” - What did the pen tell the pencil on the first day of school? Lookin’ sharp! - Why did the dog go to school? Because it wanted to learn new tricks! - Why did the cat go to school? Because it wanted to improve its purr-formance! - Knock, knock. Who’s there? Alpaca. Alpaca who? Alpaca the lunch, we’re going to school! 2nd grade kids jokes about school - Why did the banana go to school? Because it wanted to split its time between learning and being a snack! - How do you make seven an even number? Remove the “S.” - Knock, knock. Who’s there? Broken pencil. Broken pencil who? Forget it. It’s pointless. - Why are the dark ages named that? Because they have so many knights. - Why did the pillow go to school? Because it wanted to take a nap in every class! More hilarious kids jokes about school - What did the buffalo dad say to his son at school drop off? Bison! - Where do kids in New York learn multiplication? Times Square. - Why did the kid study on a plane? He wanted a higher education. - What’s a blackboard’s favorite drink? Hot CHALK-o-late. - Why did the pencil go to the principal’s office? Because it needed to be sharp! - Why did the broom go to school? Because it wanted to brush up on its knowledge. - What’s the smartest shape? A “circle,” because it’s well-rounded in every subject! - What do you call a vampire who teaches math? Count Dracula! - Why was the math book always worried? Because it had too many problems to solve. - What kind of tree does a math teacher climb? Geome-tree. - Why did the student put their lunchbox in the oven? Because they wanted to have a hot lunch! - How do you make a tissue dance? You put a little boogie in it! - What did one math book say to the other math book? “I’ve got problems.” - Why don’t scientists trust atoms? Because they make up everything! - What do you call a pencil that can tell jokes? A pun-cil. 
- What’s the difference between a teacher and a train? The teacher says, “Spit out your gum,” but the train says, “Chew chew!” - What kind of tree fits in your hand? A palm tree! - What did one wall say to the other wall at school? “I’ll meet you at the corner!” - Why did the book go to the doctor? Because it had too many paper cuts! - Why did the student bring a backpack full of rubber bands to school? Because they wanted to “stretch” their knowledge! - Why did the girl bring a ladder to school? Because she wanted the highest grades! - Why did the student eat their homework? Because their teacher said it was a piece of cake! - How does a book stay warm? By putting on its jacket. - Why did the teacher write the lesson on the window? To make it clearer for the students. - Why is 2 + 2 = 5 like your left foot? It’s not right. - What is a mathematical plant? The one with square roots. Laughter is the best medicine, even in the classroom! Not only does it make school more enjoyable, but it also helps relieve stress and improve focus. Sharing jokes with classmates can create a positive and fun learning environment where everyone feels included. So, encourage your kids to keep laughing all the way to class and see the difference it makes in their school experience! With these 50 hilarious jokes about school, you’ll have your kids laughing and learning at the same time. Whether it’s a math joke, a fish joke, or a joke about going back to school, there’s something for every kid’s sense of humor. So, why wait? Start sharing these jokes with your kids today and watch their faces light up with laughter! Looking for an after-school sitter to care for your kids and maybe tell a joke or two? Join UrbanSitter to find after-school drivers, sitters, tutors, and more!
Feb. 18th – Oct 1st 2017
The Art Deco style developed internationally between the 1920s and 1930s, dominating architecture and the decorative arts. It was an eclectic, rich and opulent style, glamorous but at the same time elegant and above all 'modern'. No wonder, then, that Art Deco was particularly favored by the modern middle class and lent its aesthetic features to new theatres, ocean liners, railway stations, cinema interiors and private houses. Just like Art Nouveau and Futurism, Art Deco influenced Italian interior and industrial design, fashion design, the graphic arts and, last but not least, Italian pottery, shaping forms, materials and decoration alike. It placed the myth of the machine at its center, replacing symmetries with geometries, and finally paving the way for industrial production.
For a vehicle to move smoothly down the road, many moving parts must work together. If one of those parts starts to slip or fail, the rest are in danger of failing too. One of the most important links in getting a car moving is the fuel injector. If fuel cannot be injected properly when the engine fires, you are out of luck right away. The fuel injector needs to be cleaned regularly to ensure it works at its best and gets your car started correctly each and every time you need it to.
Know what a Fuel Injector is
The fuel injector is the part that physically injects fuel into the cylinders in order to fire up the engine. If the fuel does not reach the cylinders to fire and start the car, you will not move anywhere. A car will not run without its fuel source creating a combustion reaction to power the engine. The only way around needing a functioning fuel injector is an electric car, which can start from the battery charge rather than a fuel source. Even bio-diesel fuels require a fuel injector to move the fuel through to the engine and into the cylinders to get it running.
Fuel Injector Issues
The fuel injector can malfunction without warning, just like any other part on the car or truck you are driving. It can break down and need replacing because a piece broke off or it simply ran out of life. Another reason a fuel injector might fail is that it gets gummed up. Older vehicles that have never had the fuel injector cleaned are at a higher risk of this happening. But it is an easy fix if you know what to do, or what to tell your ASE master technician to do during maintenance checks of your vehicle. An additive can be poured into the gas tank to clean the fuel injector of dirt and pollutants. Read the directions carefully on the bottle and only add it when instructed. Fill the gas tank to the level specified in the directions before pouring in the fuel injector cleaner.
Cleaning the Fuel Injector
The manufacturer of your car may have recommendations for how often to add fuel injector cleaning fluid to your engine, depending on its age. It might be once a year for casual to moderate driving, or more than once a year if you are a heavy driver or commute through heavy traffic every day. Check with the manufacturer of your particular car to see how often they recommend the cleaning if you are in doubt but feel your engine could perform a little better.
Brazil, the fifth largest nation in the world, is one of the planet's richest places for avian diversity and endemism. With the Birds of Brazil field guide series, the Wildlife Conservation Society brings together a top international team to do justice to the incredible diversity of Brazil's avifauna. This first guide of the planned five-volume series features the 743 bird species of the Pantanal and Cerrado regions of Central Brazil.
The sprawling Pantanal plain, one of the world's most famed birding sites, is a seasonally flooded wetland boasting both impressive concentrations of large waterbirds and species such as the Toco Toucan, Hyacinth Macaw, Golden-collared Macaw, and endemic Blaze-winged Parakeets. The Cerrado is a distinctive Brazilian habitat that is the planet's biologically richest savanna.
This compact modern field guide's unparalleled color artwork throughout, identification points, and range map for each species enable easy identification of all the birds normally found in these vibrant and critically important areas of Brazil. With 116 threatened species encompassing 25 percent of South America's threatened birds, Brazil has an imperative to conserve its birds and unique habitats that begins with their appreciation and identification. Thus, the species accounts are coupled with an introductory chapter on the region's unique environments and pressing conservation challenges. This practical and portable guide is an indispensable companion to those visiting Brazil's glorious natural areas of the Pantanal and Cerrado.
Kijlstra provides precast drainage solution
For some people, the widespread floods of summer 2007 were a foretaste of the kind of natural disaster that results from man-made climate change. For many in the water industry, it was also a stark reminder of how urgently we need to upgrade our water infrastructure.
One of the areas worst hit by the 2007 floods was Wakefield, West Yorkshire, a city with a long history of flooding due to river modifications associated with its industrial development. Serious flooding has hit Wakefield five times since 1940, with a further three floods within just the past few years. Following the 2007 floods, more than £12 million has been spent on projects to reduce the flood risk from the River Calder and Wash Dike. And now Wakefield Council, in partnership with the Environment Agency, is nearing completion of a £1.3 million project to contain floodwater from the Oakenshaw Beck in the Agbrigg district of the city.
Although normally a small and unremarkable stream, Oakenshaw Beck is notorious for flooding, and in 2007 more than 400 properties were inundated when the beck overflowed. Main contractor CA Blackwell (Contracts) Ltd, from Wakefield, won the contract to construct a system of storage ponds and flood embankments near Agbrigg to allow floodwater to be diverted away from the natural watercourse during flood conditions. The system then discharges the stored water back into the stream when it returns to normal levels.
Besides excavating a series of ponds and swales to accept the floodwater, Blackwell's had to install two concrete structures, plus associated concrete pipes, to transfer water to and from the storage ponds. The upstream control structure is a 12m long, 6.5m wide chamber containing a penstock and valve system which intercepts high water flows and feeds the surplus into the storage ponds. Downstream, a substantial concrete pumping station and downstream control structure have been installed to enable stored water to be pumped back into the stream after the risk of flooding has passed.
The original scheme proposed traditional in-situ concrete techniques to build these two structures. This would have involved assembling temporary formwork within the excavations, fixing steel reinforcement and then pouring the concrete and leaving it to cure until it had achieved enough strength to allow the formwork to be struck and removed. But this is a lengthy and labour-intensive process, and CA Blackwell was keen to explore opportunities to reduce on-site activities and exposure to bad weather and to speed up construction times.
"Waterfront, the company which supplied the penstocks and valves, suggested off-site construction methods as a more efficient alternative," explains Blackwell Project Manager Phil Holden. "By manufacturing the concrete elements off-site we could reduce the risk of disruption from adverse weather conditions and also reduce the amount of time spent working in a live natural watercourse. There were significant environmental benefits."
Via Waterfront, Blackwell contacted Kijlstra, one of Europe's leading suppliers of specialised drainage systems. Headquartered in The Netherlands – where much of the country is built on low-lying reclaimed land and flood conditions are a constant threat – Kijlstra has pioneered the design and production of pre-cast concrete components for drainage applications. But although now commonplace in continental Europe, the use of pre-cast in the UK water industry is still relatively unknown.
On the Oakenshaw Beck project, Kijlstra's solution was an entirely new concept for most of the construction team. "We had used small pre-cast components on previous projects but this was the first time we had considered using pre-cast for an entire structure," says Mr Holden.

Convincing JBA Design Consultants, which had produced Blackwell's winning design, to approve the use of pre-cast concrete was the first major challenge, admits Wieger Faber, sales engineer for Kijlstra: "They were a little unsure I think, at first." However, once the technical suitability of the method had been demonstrated to everybody's satisfaction, the practical, environmental and time-saving advantages of pre-cast were enough to tip the balance in its favour, adds Mr Faber.

The upstream control structure comprises pre-cast Kijlstra wall units grouted onto an in-situ concrete base slab with Kijlstra headwalls located at either end. "The headwalls are located back-to-back and a penstock is installed at one end to control the water flow," explains Wieger Faber. The penstock can be opened and closed manually, but an electronic remote control also allows the valve to be operated at the flick of a switch. The downstream structure and pumping station also comprise pre-cast wall sections sitting on an in-situ base with Kijlstra top slabs and two headwalls.

Adopting the Kijlstra solution meant that Blackwell's site team was on a sharp learning curve. "The guys on site had never used the system before and there were a couple of issues with alignment of the precast units and with grouting the steel dowel-bars," admits Mr Holden. "But Kijlstra were magnificent. They provided us with first-class technical backup so that, even though this was the first time we used the system, we quickly got used to it and produced a good result," he adds.

According to Wieger Faber, building this type of structure with pre-cast units can save huge amounts of time. "We installed an entire chamber in one day. With in-situ concrete it could have taken four or five weeks to complete," he says. On this project, the time savings were less dramatic, mainly due to the construction team's unfamiliarity with the system. "Taking into account the total contract time and the changes we had to make, we estimate that the Kijlstra units still cut the amount of time spent on site by about a week," says Phil Holden.

CA Blackwell is now an enthusiastic convert to the off-site technique, and Mr Holden believes it offers considerable scope for environmental and time-saving benefits: "In fact, we are now actively looking for ways of incorporating this type of pre-cast concrete in future projects of this type," he says. A timelapse video of the successful Kijlstra installation is available to view at www.blackwellgroup.co.uk
<urn:uuid:307bc878-b1cb-4f9d-b977-1f6043f3871d>
CC-MAIN-2017-30
https://www.environmental-expert.com/articles/kijlstra-leaves-the-competition-high-and-dry-on-flood-alleviation-scheme-289476
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423839.97/warc/CC-MAIN-20170722002507-20170722022507-00203.warc.gz
en
0.959948
1,366
2.625
3
I recently asked this question asking whether temperature was handled by hardware or software, and someone told me that it is handled by the Raspberry Pi firmware (hardware). Apparently performance is throttled back when the temperature is too high, but it got me thinking: what if that doesn't help? What if the temperature keeps getting higher? Does the Raspberry Pi power itself off, or does it just stay on?

The most likely outcome is that you have a hard crash or lockup due to bit errors (the likelihood of a register, gate, or SRAM node flipping of its own accord rises with junction temperature). As far as I can tell, there is no default mechanism for thermal shutdown enabled in the Raspberry Pi kernel beyond throttling. Since the RPi lacks a hardware shutdown, the effectiveness of any shutdown mechanism in preserving the CPU is limited. Throttling is a mitigating strategy to decrease the power consumption of the CPU and, as a consequence, its temperature. It is an attempt to keep the CPU within its safe operating region. If, for example, the ambient temperature is very hot, no amount of throttling would guarantee safe operation. However, ICs and PCBs can safely "ride out" very high temperatures if they are turned off. Products designed to operate at an elevated ambient temperature use various strategies to maintain safe operation, including underclocking and protective shutdown. At some point you need to just turn yourself off, and the RPi cannot do this. Since soft standby/idle modes provide only a small additional safety margin (the device is still on), you may as well run until you crash...
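A common user-space backstop, given that the kernel will not halt the board on its own, is a small watchdog that polls the SoC temperature and shuts the system down cleanly before a crash. A minimal sketch in Python, assuming Raspberry Pi OS with the standard sysfs thermal interface and root privileges (the 75 C threshold and poll interval are illustrative, not vendor figures):

```python
#!/usr/bin/env python3
"""User-space thermal watchdog sketch (not an official Raspberry Pi tool).

Assumes Raspberry Pi OS with the standard sysfs thermal interface and
root privileges (e.g., run as a systemd service).
"""
import subprocess
import time

SENSOR = "/sys/class/thermal/thermal_zone0/temp"  # value in millidegrees C
LIMIT_C = 75.0        # halt before firmware throttling gives up (illustrative)
POLL_SECONDS = 5

def read_temp_c() -> float:
    """Return the SoC temperature in degrees Celsius."""
    with open(SENSOR) as f:
        return int(f.read().strip()) / 1000.0

while True:
    if read_temp_c() >= LIMIT_C:
        # A clean halt keeps the silicon unpowered while it is hot,
        # which is exactly the "ride it out while off" strategy above.
        subprocess.run(["shutdown", "-h", "now"], check=False)
        break
    time.sleep(POLL_SECONDS)
```

Run as a root service, this trades uptime for the protective shutdown the hardware itself lacks.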
<urn:uuid:4fa85719-d114-4653-a74e-f03e7a5ae246>
CC-MAIN-2020-05
https://raspberrypi.stackexchange.com/questions/79562/what-happens-if-throttling-back-doesnt-help-on-high-temperature-does-raspberry/79572
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00258.warc.gz
en
0.961922
329
2.625
3
Perhaps the greatest breakthrough for the Paralympic movement came in Seoul: for the first time in the history of the Paralympic Games, the Seoul 1988 Games saw Paralympic participants use the same venue and facilities used by participants of the Olympic Games. The 1988 Paralympics was therefore the largest and best-facilitated Games the event had witnessed. The Seoul Paralympic Games gave Paralympic athletes the opportunity to compete in modern, world-class facilities previously reserved for the Olympic Games.

Although the Seoul Paralympic Organizing Committee (SPOC) had only a tangential relationship with the Seoul Olympic Organizing Committee (SLOOC), the relationship was substantial enough to recruit and train many of the sports and technical officials for the Paralympic Games as well. The Paralympic village was just 4 kilometres from the Olympic stadium, and athletes, coaches, trainers and team supporters were housed in a purpose-built village. Although there was much disappointment that around 156 events for athletes with serious disabilities could not take place due to lack of participants, it was a sign that the Paralympic Games would gain credibility as elite athletic standards were implemented.

The Opening Ceremony was held at the Olympic stadium for the first time before a crowd of 75,000, and the new Paralympic flag was presented to the International Coordinating Committee (ICC) President, Dr. Jens Bromann. The 1988 Games showed a marked rise in athletic performance, with many multiple gold medal winners in various sports and events. One of the most dominant athletes at the Games was Trischa Zorn of the USA. Zorn was a visually impaired swimmer in class B2 and won a total of 12 gold medals, including 10 individual titles and two relays. Zorn also set an unbelievable nine world records in the process.

The Closing Ceremony took place on 24th October to loud cheers from the capacity crowd. The Seoul 1988 Paralympics were perhaps the most revolutionary Paralympics of all time, as they were the first Games where the Paralympics were truly considered an equal of the Olympic Games. From Seoul 1988 onwards, the Paralympics has been hosted in the same venues as the Olympics.

Date Games were held: October 15-24
Number of nations represented: 61
Number of competitors: 3,053
Number of medals awarded: 2,173
<urn:uuid:1b34b97b-ffcf-475c-a40f-77ca4d5b0262>
CC-MAIN-2017-43
https://www.insidethegames.biz/history/paralympics/1988-seoul
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827853.86/warc/CC-MAIN-20171024014937-20171024034937-00619.warc.gz
en
0.979267
483
2.875
3
2009 Michigan Tech Research Magazine

A pristine coastline paints a beautiful picture but requires freshwater management and coastal research. Otherwise, global climate change, invasion of plant and animal species, and the effects of man will paint a different picture for future generations. Michigan Tech researchers are environmentally conscious and making positive changes in the world now, so the next generation of researchers can make even greater improvements to our planet in the future. Within these worthwhile endeavors, artistic creativity and inspired research require vision and the desire to make a difference in the world. At Michigan Tech, we foster an atmosphere that promotes sustainable research and creative approaches to difficult environmental and human issues. Our researchers and students are tackling tough problems such as global health concerns, hazards like volcanoes or ballast water, and the plight of the transportation infrastructure. All academic disciplines are making great strides on campus. Our students are exposed to the creativity of the arts, the wonder of science and engineering, and much more as they create the future and change the world.

David D. Reed
Vice President for Research

by Jennifer Donovan

Was it the gales of November that sank the Edmund Fitzgerald? Was it faulty hatch covers, as the US Coast Guard claimed? Or ballast tank damage caused by bottoming on Six Fathom Shoal, as the Lake Carriers Association believed? Contention continues, as it has since the 26,000-ton freighter took its crew of twenty-nine to a watery grave at the bottom of Lake Superior on November 10, 1975. But soon, in a fifty-foot wave tank that will be affiliated with the new Upper Great Lakes Laboratory (UGLL), scientists may finally be able to determine what actually caused the fabled shipwreck. The wave tank allows waves to be generated, modified, tested, and studied. With what they learn there, Civil and Environmental Engineering Associate Professor Brian Barkdoll and colleagues may be able to definitively determine the cause of the sinking of the Fitzgerald. Perhaps the culprit will turn out to be a rogue wave—like the "Three Sisters Phenomenon"—a series of three waves following in quick succession, the first disabling the ship and the next two striking fatal blows before she has recovered from the first. Or perhaps something stranger, a mystery not yet imagined.

by Dennis Walikainen

The teacher in Mary Ann Beckwith emerges as she works in her sun-lit studio on a sunny, chilly December day. The award-winning watercolor artist discusses technique and Tech students. "The students are bright, motivated, and willing to try most of the things I teach," she says. "Once they learn that failure is sometimes the result in a creative project and that they can make the next results better, they soar." The professor of art has won the Distinguished Teaching Award at Michigan Tech, and her paintings have garnered acclaim across the nation.

by Marcia Goodrich

For David Hand, the line between work and play is as thin as monofilament. This is evident from the trophy lake trout on his office wall and in the passion that charges his voice when he talks about a deadly threat to his beloved Lake Superior fishery. Since 2003, viral hemorrhagic septicemia (VHS) has caused massive die-offs of fish species ranging from walleyes to salmon in all of the Great Lakes—except Superior. Infected fish die from internal bleeding and often have open sores and bruised-looking, reddish tints on their skin.
The virus that causes VHS is just one of dozens of exotic species that have invaded the Great Lakes since the St. Lawrence Seaway opened in 1959, bringing boats, trade, and money, not to mention parasitic sea lampreys and the like, to the heretofore landlocked Midwest.

by Marcia Goodrich

A team of Michigan Tech researchers is harnessing the computing muscle behind video games to understand the most intricate of real-life systems. Led by Roshan D'Souza, the group has supercharged agent-based modeling, a powerful but computationally massive forecasting technique, by using graphic processing units (GPUs), which drive the spectacular imagery beloved of video gamers. In particular, the team aims to model complex biological systems, such as the human immune response to a tuberculosis bacterium. On a computer monitor, a swarm of bright green immune cells surrounds and contains a yellow TB germ. These busy specks look like 3-D animations from a PBS documentary, but they are actually virtual T-cells and macrophages, the result of millions of real-time calculations.

by John Gagnon

David Watkins says an Upper Michigan deer camp and a small village in Africa have something in common: the need for rudimentary sanitation in the form of an outhouse or latrine. Further, Watkins says, that basic technology is appropriate for the circumstance. It is inexpensive; it doesn't rely on scarce water resources; and it can be easier on the environment. "We don't have to look at sewers and flush toilets as the world standard," he says. "In rural areas, latrines are the way to go." Watkins, an associate professor in civil and environmental engineering; Lauren Fry, a PhD student in environmental engineering; and former Tech professor Jim Mihelcic, now . . .
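The agent-based technique D'Souza's group accelerates boils down to giving each virtual cell a simple local rule and letting population-level behavior emerge from many parallel updates. A toy illustration of the idea in Python (my sketch, far simpler than the group's actual GPU code; the grid size and cell count are arbitrary):

```python
import random

GRID = 20                      # toy 2-D tissue patch
germ = (GRID // 2, GRID // 2)  # one stationary TB germ
cells = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(8)]

def step(pos):
    """One random-walk step for a macrophage, clamped to the grid."""
    x, y = pos
    return (min(GRID - 1, max(0, x + random.choice((-1, 0, 1)))),
            min(GRID - 1, max(0, y + random.choice((-1, 0, 1)))))

ticks = 0
while germ not in cells:       # run until some macrophage reaches the germ
    cells = [step(c) for c in cells]
    ticks += 1
print(f"germ contained after {ticks} ticks")
```

On a GPU, thousands of such agents update simultaneously, which is what makes immune-scale simulations tractable.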
<urn:uuid:23c455f5-bc28-4d55-8da3-a921e37b29c3>
CC-MAIN-2017-30
http://www.mtu.edu/magazine/research/2009/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425737.60/warc/CC-MAIN-20170726002333-20170726022333-00254.warc.gz
en
0.942298
1,099
2.796875
3
SMILI (sketched in the figure at left) was an instrument designed to measure the isotopic abundances of cosmic rays over a large range of elements, from hydrogen to boron, and over a large range of energies, from below 100 to greater than 2000 MeV. It was a direct descendant of another instrument called PBAR, flown in 1987, which was optimized for low-background antiproton studies. It consisted of two layers of plastic scintillators, 16 layers of drift tubes, and a superconducting magnet. The charge and velocity were determined for each cosmic-ray particle from the scintillator signals. Momentum was determined for each particle by measuring its deflection in the magnetic field with the drift tubes. Mass was determined by combining the momentum and velocity measurements.

At the heart of SMILI was a split-coil, room-temperature-bore superconducting magnet with a field of 1.5 Tesla and an acceptance of 205 square cm. As charged particles entered and exited the magnet bore, their positions and directions were measured by an array of 510 thin-walled drift tubes arranged in four modules, two with the tube axes parallel to the magnetic field (the bending view) and two with axes perpendicular to the field (the nonbending view). Each module consisted of two sections of two close-packed planes, providing a total of 16 planes. A 20 micron diameter gold-plated tungsten wire was strung down the center of each tube, while a gas mixture of 50% argon and 50% ethane flowed serially through them, slightly above the ambient gondola pressure of 1 atm. Each tube was read out from one end by an amplifier/discriminator, and the discriminated signals were fed to time-to-digital converters (TDCs).

The time-of-flight (TOF) system consisted of two planes of plastic scintillator located 1.8 m apart, each plane formed of several scintillator slabs 100 cm long by 25 cm wide by 1.9 cm thick. S1 was located above the upper drift tubes and included four such paddles, while S2 was located below the lower drift tubes and included three. The scintillation light was channeled through bent light pipes into photomultiplier tubes (PMTs), one at each end of each paddle. After passing several electronic filters and discriminators, the signals were fed to TDCs, where the time and pulse height were used to determine particle velocity and charge. A water Cerenkov counter was located below S2 to provide a redundant measurement of the particle velocity and to improve the resolution of the TOF system. It used highly purified water as the radiator medium, stored in a light-integration container 5 cm thick and approximately 100 cm by 90 cm in size, viewed by 20 PMTs arranged in pairs. Finally, at the bottom of the instrument, placed directly below the Cerenkov detector, was an interaction detector (S3) consisting of four slabs of plastic scintillator viewed at each end by several PMTs through light pipes. This provided an additional measurement of the particle charge and was used to reject events which interacted in the Cerenkov detector.

All the information for detected events was read out using an LSI-11/73 microcomputer and telemetered to the ground station using a pulse-code-modulated signal. In addition to event data, the flight program would periodically telemeter housekeeping information (temperature, pressure, supply voltages, etc.) and calibration data from SMILI subsystems.
The demodulated data stream was recorded on analog tapes by the National Scientific Balloon Facility, while a digital data stream was monitored in real time by the SMILI ground station, which also had the ability to send simple commands to the instrument; these were used to control the high-voltage levels and other instrument parameters. The entire instrument was surrounded by an egg-shaped aluminum shell covered with polyurethane foam, painted white, which served as a passive thermal control system, insulating the detectors while reflecting incident sunlight.

Launch site: Prince Albert Airport, Saskatchewan, Canada
Balloon launched by: National Scientific Balloon Facility (NSBF)
Balloon manufacturer/size/composition: Zero Pressure Balloon N29I-8/8/8/8T-28.40
Balloon serial number: R28.40-3-111
Flight identification number: 272N

The balloon was launched by the dynamic method, using a crane as a launch vehicle, on August 31, 1989 at 19:25 MST (September 1, 1989 at 1:25 UTC). After an initial ascent phase of 2 hours and 52 minutes, it reached an average float altitude of 34.7 km and remained there for a total flight time of 17 hours and 22 minutes. During float, the latitude varied between 53º 03' and 53º 57' north, and the longitude varied between 105º 42' and 104º 15' west. This was the first mission of the instrument. All subsystems operated satisfactorily throughout the flight. A total of 2,500,000 events were recorded.
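For each recorded event, the mass follows from combining the momentum measured via the magnetic deflection with the velocity from the TOF system, as described above. A minimal sketch of that reconstruction step in Python, with purely illustrative numbers rather than flight data:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def beta_from_tof(path_m: float, tof_ns: float) -> float:
    """Velocity as a fraction of c, from the S1-S2 time of flight."""
    return path_m / (tof_ns * 1e-9) / C

def mass_gev(p_gev: float, beta: float) -> float:
    """Invert p = gamma * m * beta * c to get the mass in GeV/c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return p_gev / (gamma * beta)

# Illustrative numbers, not flight data: a 1.94 GeV/c particle crossing
# the 1.8 m TOF baseline in 6.67 ns comes out near the proton mass.
beta = beta_from_tof(1.8, 6.67)
print(f"beta = {beta:.3f}, mass = {mass_gev(1.94, beta):.2f} GeV/c^2")
```

Histogramming this reconstructed mass over many events is what separates the isotopes, hydrogen through boron, into distinct peaks.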
<urn:uuid:969ea6ae-fe77-43ab-b773-4650eca3708f>
CC-MAIN-2017-43
http://stratocat.com.ar/fichas-e/1989/PCA-19890831.htm
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00519.warc.gz
en
0.954824
1,064
3.390625
3
Hydrogen is flowing in pipes under the streets in Cappelle-la-Grande, helping to energize 100 homes in this northern France village. On a short side road adjacent to the town center, a new electrolyzer machine inside a small metal shed zaps water with electricity from wind and solar farms to create "renewable" hydrogen that is fed into the natural gas stream already flowing in the pipes. By displacing some of that fossil fuel, the hydrogen trims carbon emissions from the community's furnaces, hot-water heaters and stove tops by up to 7 percent. Cappelle-la-Grande's system is a living laboratory created by Paris-based energy firm Engie. The company foresees a big scale-up of hydrogen energy as the cost of electrolyzers, as well as of renewable electricity, continues to fall. If Engie is right, blending hydrogen into local gas grids could accelerate a transition from fossil to clean energy. The company is not alone. Renewable hydrogen is central to the European Commission's vision for achieving net-zero carbon emissions by 2050. It is also a growing focus for the continent's industrial giants. As of next year, all new turbines for power plants made in the European Union are supposed to ship ready to burn a hydrogen–natural gas blend, and the E.U.'s manufacturers claim the turbines will be certified for 100 percent hydrogen by 2030. European steelmakers, meanwhile, are experimenting with renewable hydrogen as a substitute fuel for coal in their furnaces. If powering economies with renewable hydrogen sounds familiar, it is. Nearly a century ago celebrated British geneticist and mathematician J.B.S. Haldane predicted a post-fossil-fuel era driven by "great power stations" pumping out hydrogen. The vision became a fascination at the dawn of this century. In 2002 futurist Jeremy Rifkin's book The Hydrogen Economy prophesied that the gas would catalyze a new industrial revolution. Solar and wind energy would split a limitless resource—water—to create hydrogen for electricity, heating and industrial power, with benign oxygen as the by-product. President George W. Bush, in his 2003 State of the Union address, launched a $1.2-billion research juggernaut to make fuel-cell vehicles running on hydrogen commonplace within a generation. Fuel cells in garages could be used as backup sources to power homes, too. A few months later Wired magazine published an article entitled "How Hydrogen Can Save America" by breaking dependence on dirty imported petroleum. Immediate progress did not live up to the hype. Less expensive and rapidly improving battery-powered vehicles stole the "green car" spotlight. In 2009 the Obama administration put hydrogen work on the back burner. Obama's first secretary of energy, physicist and Nobel laureate Steven Chu, explained that hydrogen technology simply was not ready, and fuel cells and electrolyzers might never be cost-effective. Research did not stop, however, and even Chu now acknowledges that some hurdles are gradually being cleared. The Cappelle-la-Grande demonstration is one small project, but dozens of increasingly large, ambitious installations are getting started worldwide, especially in Europe. As the International Energy Agency noted in a recent report, "hydrogen is currently enjoying unprecedented political and business momentum, with the number of policies and projects around the world expanding rapidly." This time around it is the push to decarbonize the electric grid and heavy industry—not transportation—that is driving interest in hydrogen. 
"Everyone in the energy-modeling community is thinking very seriously about deep decarbonization," says Tom Brown, who leads an energy-system modeling group at Germany's Karlsruhe Institute of Technology. Cities, states and nations are charting paths to reach nearly net-zero carbon emissions by 2050 or sooner, in large part by adopting low-carbon wind and solar electricity. But there are two, often unspoken problems with that strategy. First, existing electric grids do not have enough capacity to handle the large amounts of renewable energy needed to put fossil-fueled power plants out of business. Second, backup power plants would still be needed for long stretches of dark or windless weather. Today that backup comes from natural gas, coal and nuclear power plants that grid operators can readily turn up and down to balance sagging and surging renewable supply. Hydrogen can play the same role, its promoters say. When wind and solar are abundant, electrolyzers can use some of that energy to create hydrogen, which is stored for the literal rainy day. Fuel cells or turbines would then convert the stored hydrogen back into electricity to shore up the grid. Cutting carbon deeply also means finding replacement fuels for segments of the economy that cannot simply plug into a big electrical outlet, such as heavy transport, as well as replacement feedstocks for chemicals and materials that are now based on petroleum, coal and natural gas. "Far too many people have been misled into believing that electrification is the entire [carbon] solution" that is needed, says Jack Brouwer, an energy expert at the University of California, Irvine, who has been engineering solutions to his region's dirty air for more than two decades. "And many of our state agencies and legislators have bought in," without considering how to solve energy storage or to fuel industry, he says. Can renewable hydrogen make a clean-energy grid workable? And could it be a viable option for industry? Some interesting bets are being made, even without knowing whether hydrogen can scale up quickly and affordably. ELECTRODES inside an electrolyzer split water molecules into oxygen (left) and hydrogen (right). The electrodes are one centimeter high. Credit: Durk Gardenier Alamy The few nations that have bet big on replacing coal and natural gas with solar and wind are already showing signs of strain. Renewable energy provided about 40 percent of Germany's electricity in 2018, though with huge fluctuation. During certain days, wind and solar generated more than 75 percent of the country's power; on other days, the share dropped to 15 percent. Grid operators manage such peaks and valleys by adjusting the output from fossil-fuel and nuclear power plants, hydropower reservoirs and big batteries. Wind and solar also increasingly surge beyond what Germany's congested transmission lines can take, forcing grid operators to turn off some renewable generators, losing out on 1.4 billion euros ($1.5 billion) of energy in 2017 alone. The bigger issue going forward is how nations will cope after the planned phaseout of fossil-fueled power plants (and, in Germany, also their nuclear plants). How will grid operators keep the lights on during dark and windless periods? Energy modelers in Germany invented a term for such renewable energy droughts: dunkelflauten, or "dark doldrums." Weather studies indicate that power grids in the U.S. and Germany would have to compensate for dunkelflauten lasting as long as two weeks. 
Beefier transmission grids could help combat dunkelflauten by moving electricity across large regions or even continents, sending gobs of power from areas with high winds or bright sun on a given day to distant places that are calm or cloudy. But grid expansion is a slog. Across Germany, adding power lines is years behind schedule, beset by community protests. In the U.S., similar opposition prevents new lines from gaining approval. To some experts, therefore, dunkelflauten make wind and solar energy look risky. For example, grid simulations done in 2018 by energy modelers at the Massachusetts Institute of Technology project an exponential rise in costs as grids move toward 100 percent renewable energy. That is because they assumed big, expensive batteries would have to be installed and kept charged at all times, even though they might be used only for a few scarce days or even hours a year. A California-based team of academics reached a similar conclusion in 2018, finding that even with big transmission lines and batteries, solar and wind power could feasibly supply only about 80 percent of U.S. electricity needs. Other power sources will definitely be needed, said team member Ken Caldeira, a climate scientist at the Carnegie Institution for Science, when the study was released. Certain European experts say the M.I.T. and California studies are too myopic. For several decades European researchers have been zooming out from the power grid to a larger view, considering the full spectrum of energy used in modern society. Pioneered by Roskilde University physicist Bent Sørensen and several Danish protégés, such "integrated energy systems" studies combine simulations for electric grids, natural gas and hydrogen distribution networks, transportation systems, heavy industries and central heating supply. The models show that coupling those sectors provides operational flexibility, and hydrogen is a powerful way to do that. In this view, a 100 percent renewable electric grid could succeed if hydrogen is used to store energy to cover the dunkelflauten and without the price jump seen in M.I.T.'s projections. Some U.S. grid studies ruled out hydrogen energy storage because it is costly today. But other modelers say that thinking is flawed. For example, many grid studies being published about a decade ago downplayed solar energy because it was expensive at the time—this was a mistaken assumption, given solar's dramatic cost decreases ever since. European simulations such as Brown's take into account anticipated cost reductions when they compute the cheapest ways to eliminate carbon emissions. What emerges is a buildout of electrolyzers that cuts the cost of renewable hydrogen. In the models, electrolyzers scale up first to replace hydrogen that is manufactured from natural gas, used by chemical plants and oil refineries in various processing steps. Manufacturing "gray" hydrogen (as energy experts call it) releases more than 800 million metric tons of carbon dioxide a year worldwide—as much as the U.K. and Indonesia's total emissions combined, according to the International Energy Agency. Replacing gray hydrogen with renewable hydrogen shrinks the carbon footprint of hydrogen used by industry. Some hydrogen could also replace natural gas and diesel fuel consumed by heavy trucks, buses and trains. 
Although fuel cells struggle to compete with batteries for cars, they may be more practical for heavier vehicles; truck developer Nikola Motor Company says the tractor-trailer rigs it is commercializing will travel about 800 to 1,200 kilometers (500 to 750 miles) on a full fuel cell, depending on the various equipment and hauling factors.

If industry and heavy transport embrace renewable hydrogen, regional hydrogen networks could emerge to distribute it, and they could also supply the carbon-free gas to power plants that back up electricity grids. That is what happens in integrated energy simulations: as more renewable hydrogen is created and consumed, mass-distribution networks develop that store months' worth of the gas in large tanks or underground caverns, much as natural gas is stored today, at a cost that is cheaper than storing electricity in batteries. "Once you acknowledge that hydrogen is important for the other sectors, you get the long-term storage for the power sector as a sort of by-product," Brown says.

That perspective comes alive in simulations by Christian Breyer of Finland's LUT University. In his team's latest 100 percent renewable energy scenarios, published in 2019 with the Energy Watch Group, an international group of scientists and parliamentarians, power plants burning stored hydrogen fire up to fill the grid's void during the deepest dunkelflauten. "They are a final resort," Breyer says. "Without these large turbines, we would not have a stable energy system during certain hours of the year."

In Breyer's model, less than half of the wind and solar energy required to make and store hydrogen gets converted back into electricity, a big loss, and the hydrogen turbine generators sit idle for all but a few weeks every year. But the poor efficiency of the hydrogen-to-electricity conversion does not break the bank, because this pathway is used infrequently. Breyer says the scheme is the most economical solution for the energy system writ large, and it is not that different from how many grids use natural gas-fired plants today. "For decades there have been power plants that are switched on only once every few years," he says.

[Image: Engineer checks pipes that distribute hydrogen made with renewable energy in Hamburg, Germany. Credit: Joerg Boethling/Alamy]

Even though today's renewable hydrogen generation is meager, Europe is counting on hydrogen to decarbonize its energy systems. The European Commission anticipates renewable energy rising to greater than 80 percent of Europe's power supply in 2050, supported by more than 50 gigawatts of electrolyzers—the capacity of approximately 50 nuclear power plants. Member states are setting their own goals, too. France is calling for its hydrogen-consuming industries to switch to 10 percent renewable hydrogen by 2022 and 20 to 40 percent by 2027.

These goals will be difficult to reach without policies that encourage entrepreneurial firms to jump-start mass production of electrolyzers. Blending hydrogen into natural gas pipelines is a place to start because it uses existing infrastructure. Engineers had long assumed that molecular hydrogen—the smallest molecule and highly reactive—would degrade or escape from existing natural gas pipes. But recent research shows that blends of up to 20 to 25 percent hydrogen can be carried without the gas seeping from or damaging such pipes.
European countries permit blending, and firms in Italy, Germany, the U.K., and elsewhere are injecting hydrogen at dozens of sites to help fuel customers' heaters, cookstoves and other appliances, which do not need alterations as long as the hydrogen content stays below about 25 percent. Engie has been blending at Cappelle-la-Grande for more than a year without incident or opposition, according to project manager Hélène Pierre. She says that public acceptance is helped by extensive monitoring that shows that homes using the blend have cleaner air; adding hydrogen improves gas combustion in appliances, she notes, trimming levels of pollutants such as carbon monoxide that are created when natural gas burns incompletely.

Europe's next wave of renewable hydrogen projects could push production to a larger scale. Industrial consortia in France and Germany are seeking financing and authorization for 100-megawatt electrolyzers, 10 times larger than the biggest in operation. Two huge electrolyzer projects are vying for government support to boost a regional hydrogen economy around Lingen, a city in northwestern Germany that is home to a pair of oil refineries.

One project that involves a large utility called Enertrag and several of Germany's biggest energy and engineering firms could provide a blueprint for a nationwide hydrogen network. The project takes advantage of existing gas infrastructure but not via blending. Instead the idea is to repurpose spare gas pipelines to deliver renewable hydrogen to the local refineries, as well as a power plant and even a planned filling station for fuel-cell vehicles. "Our idea is to build up a 100 percent hydrogen gas grid," says Frank Heunemann, who is managing director at Nowega, one of the partners on the project and the region's gas-network operator.

Nowega can reuse some empty pipes because the region has two natural gas networks. One carries standard natural gas that is nearly all methane. The other was originally built to deliver local natural gas that was high in hydrogen sulfide, and hydrogen can make some steel pipes brittle. Nowega is phasing out the local gas, leaving empty steel pipes that Heunemann says should be able to endure any reactivity with pure hydrogen.

European energy supplier RWE will build the consortium's main electrolyzer and plans to burn some of the hydrogen output at its Lingen power station. Engineering giant Siemens intends to optimize one of the station's four gas turbines to handle pure hydrogen.

The consortium is thinking about expansion as well. Lingen is about 48 kilometers from underground salt caverns created to store natural gas. Stocking some of Lingen's hydrogen, more than 1,000 meters deep in one of the caverns, could be a logical next step, Heunemann says. (Hydrogen is already stored en masse in caverns in Texas and the U.K.) Nowega also envisions a 3,200-kilometer pipeline network that could reach most of Germany's steel plants, refineries and chemical producers. The plan centers on repurposing natural gas pipes that were originally built to carry hydrogen-rich "town gas" produced from coal, which was common in Europe until the 1960s. Pipelines that historically coped with 50 percent hydrogen should also be fine "to use for 100 percent hydrogen," Heunemann says.

THE FUTURE IS TENTATIVE

Europe's growing interest in renewable hydrogen is not unique. Japan is planning a multidecadal shift to a "hydrogen society" that has been baked into official energy policy since 2014.
Meeting one of Japan's first goals—demonstrating technology to efficiently import hydrogen—is set to begin in 2020 with tanker shipments of gray hydrogen from Brunei, a tiny gas-rich nation nestled in Borneo. Australia's rival political parties are developing competing plans to export hydrogen to Japan. In December 2019 energy ministers across Australia's states and territories adopted a national hydrogen strategy, and the national government announced a $370-million (Australian; $252 million U.S.) hydrogen-stimulus package. Even in the U.S., there are signs of renewed interest. The federal government is once again setting goals for hydrogen technologies, some energy firms are investing and a few states are offering support. Los Angeles may be a leader. "L.A.'s Green New Deal," unveiled by Mayor Eric Garcetti in April 2019, commits the city to reach 80 percent renewable electricity by 2030 and 100 percent by 2050. The mayor is advancing plans to build solar farms and is also constructing a new natural gas—fired power plant to ensure the city has a backup electricity source. That plant could be converted to burn renewable hydrogen; about 125 kilometers of pipelines already push gray hydrogen to the area's refineries. And fuel cells are vying with batteries in plans to repower the roughly 16,000 trucks that haul freight at the region's ports. Fueling those trucks with hydrogen instead of diesel could significantly improve L.A.'s hazy skies. Brouwer says the entire state needs to think more deeply about energy as it seeks to eliminate carbon emissions. The state may be wasting more than eight terawatt-hours of renewable energy potential every year by 2025, according to projections by Lawrence Berkeley National Laboratory—energy that Brouwer says California should instead be socking away as hydrogen to clean up its refineries and to meet soaring electricity demand during summer heat waves. Other experts agree that hydrogen can connect those dots. A recent study by the Energy Futures Initiative, a think tank led by former M.I.T. nuclear physicist Ernest Moniz, who was Obama's second energy secretary, calls on California to tap the "enormous value" offered by renewable hydrogen and other low-carbon fuels. The study concludes that California's carbon-cutting goals may be impossible to meet without them. A host of potential problems could still stall or prevent the scale-up of hydrogen infrastructure in California, Europe, and elsewhere. A persistent issue is public anxiety. Hydrogen is extremely flammable, and accidents happen. Last summer a faulty valve caused a hydrogen explosion at a Norwegian filling station for fuel-cell cars. Concrete blast walls minimized injuries, but media reports immediately questioned whether hydrogen energy would survive the incident. In November 2019 California governor Gavin Newsom asked the state's public utility commission to expedite closure of an underground gas-storage facility, where a four-month leak of natural gas four years earlier had prompted the evacuation of thousands of families. All energy options have their risks, and community opposition complicates many paths to carbon-free energy. In many places, the public is not enamored with nuclear energy, transmission lines or wind turbines. The cost of electrolyzers may be the biggest challenge facing the renewable hydrogen future, however. To begin replacing gray hydrogen in industry, the cost of producing renewable hydrogen needs to drop from about $4 or more per kilogram today to $2 or less. 
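A back-of-the-envelope check shows why that $4-to-$2 target hinges largely on electricity prices. The numbers below are my assumptions for illustration (roughly 52 kWh of electricity per kilogram of hydrogen, plus a lumped capital and operating adder), not figures from the article:

```python
def h2_cost_usd_per_kg(elec_usd_per_kwh: float,
                       kwh_per_kg: float = 52.0,
                       capex_opex_usd_per_kg: float = 0.80) -> float:
    """Rough levelized cost of electrolytic hydrogen.

    kwh_per_kg (~52) and the lumped capital/O&M adder are assumptions
    for illustration, not figures reported in the article.
    """
    return elec_usd_per_kwh * kwh_per_kg + capex_opex_usd_per_kg

for price in (0.06, 0.04, 0.02):  # $/kWh of renewable electricity
    print(f"${price:.2f}/kWh -> ~${h2_cost_usd_per_kg(price):.2f}/kg H2")
```

Under these assumptions, electricity at $0.06/kWh lands near today's $4/kg, while cheap surplus renewables at $0.02/kWh bring it under $2/kg, which is why falling power and electrolyzer prices matter so much.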
Several studies indicate that could happen by 2030 if electrolyzer costs continue to fall as they have in the past few years. The studies also suggest that pattern may not emerge without government incentives. In a recent report, the International Energy Agency notes that hydrogen needs the same kind of government support that fostered early deployments of solar and wind power—industries that now attract more than $100 billion in annual investment worldwide. Those examples, the agency writes, show that "policy and technology innovation have the power to build global clean energy industries." Improved technology may be arriving. A new class of electrolyzers is entering the market—solid oxide electrolyzers that produce almost 30 percent more hydrogen than the industry-leading proton-exchange membrane electrolyzers, which Engie is using. Former energy secretary and doubter Chu, now a professor at Stanford University, is working on a novel electrolyzer that relies on tighter spacing of components and other tricks to produce hydrogen faster with less energy. According to Chu, the changes could make "a huge difference in operating cost." It's just one more reason, Chu says, why he is warming up to hydrogen.
<urn:uuid:cd8e6591-ee22-4fc6-b6b9-e4518fc29e5b>
CC-MAIN-2023-23
https://nfcrc.uci.edu/In-The-News-Solar-and-Wind-Power-Could-Ignite-a-Hydrogen-Energy-Comeback.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00122.warc.gz
en
0.946996
4,400
3.4375
3
Gecko vert de Bourbon translocation

The reintroduction of the Bourbon Green Gecko (Phelsuma borbonica) should, in particular, contribute to restoring all the functions of the semi-dry forest. This native species, traces of which have been found at the Grande Chaloupe site, is a likely pollinator of the semi-dry forest trees whose nectar and fruit it favors, such as the brown mazambron (Aloe macra) or the Petit Vacoa (Pandanus sylvestris).

50 individuals sampled

In preparation since 2009, the translocation of Bourbon Green Geckos from the Plaine d'Affouches site to La Grande Chaloupe is a first for La Réunion. Drawing on the experience of the Mauritian Wildlife Foundation in Mauritius, the feasibility studies and the drafting of the operation's protocol led, in 2016, to the translocation being authorized by the National Council for the Protection of Nature. 50 individuals were then sampled in April 2018, using PVC tubes, from the source population of Bourbon Green Geckos in the "Hauts" of La Montagne, a population the Indian Ocean Nature Association (NOI) estimates at more than 1,000 individuals.

The IUCN recommends harvesting no more than 10% of the total population of a protected animal species. To meet this criterion, 30 females and 20 males were sampled from the source population based on their size and weight. This translocation operation, carried out by the LIFE+ Forêt Sèche project with the support of NOI and agents from Réunion National Park, will be followed by close monitoring (camera / fingerprinting device). In accordance with the release protocol defined beforehand, the geckos were introduced near planting sites, in the heart of the relics of the semi-dry forest of La Grande Chaloupe, in the hope of recreating a functional forest.
<urn:uuid:9083a47a-87ae-4abd-b22f-20a9bf73a032>
CC-MAIN-2020-16
https://www.foretseche.re/en/nos-actions/translocation-gecko-vert-de-bourbon/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370504930.16/warc/CC-MAIN-20200331212647-20200401002647-00372.warc.gz
en
0.895889
397
2.765625
3
What do the following words have in common? Fare, dues, tuition, interest, rent, and fee. The answer is that each of these is a term used to describe what one must pay to acquire benefits from another party. More commonly, most people simply use the word price to indicate what it costs to acquire a product. The pricing decision is a critical one for most marketers, yet the amount of attention given to this key area is often much less than is given to other marketing decisions. One reason for the lack of attention is that many believe price setting is a mechanical process requiring the marketer to utilize financial tools, such as spreadsheets, to build their case for setting price levels. While financial tools are widely used to assist in setting price, marketers must consider many other factors when arriving at the price for which their product will sell. In this part of our highly detailed Principles of Marketing Tutorials we begin a two-part discussion of the fourth marketing mix variable - price. For some marketers more time is spent agonizing over price than any other marketing decision. In this tutorial we look at why price is important and what factors influence the pricing decision.
<urn:uuid:3e2b3ec0-9e0a-4049-b206-bb7733a65429>
CC-MAIN-2014-23
http://www.knowthis.com/pricing-decisions
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00016-ip-10-146-231-18.ec2.internal.warc.gz
en
0.959298
234
2.890625
3
Bishop and confessor, one of the greatest of Welsh saints; d. 612. He is usually represented holding two crosiers, which signify his jurisdiction over the Sees of Caerleon and Llandaff. St. Dubric is first mentioned in a tenth-century manuscript of the "Annales Cambriae", where his death is assigned to the year 612. This date appears also in the earliest life of the saint that has come down to us. It was written about 1133, to record the translation of his relics, and is to be found (in the form of "Lectiones") in the "Liber Landavensis". It may contain some genuine traditions, but as it appeared at least five hundred years after St. Dubric's death, it cannot claim to be historical. According to this account he was the son (by an unnamed father) of Eurddil, a daughter of Pebia Claforwg, prince of the region of Ergyng (Erchenfield in Herefordshire), and was born at Madley on the River Wye. As a child he was noted for his precocious intellect, and by the time he attained manhood was already known as a scholar throughout Britain. He founded a college at Henllan (Hentland in Herefordshire), where he maintained two thousand clerks for seven years. Thence he moved to Mochros (perhaps Moccas), on an island farther up the Wye, where he founded an abbey. Later on he became Bishop of Llandaff, but resigned his see and retired to the Isle of Bardsey, off the coast of Carnarvonshire. Here with his disciples he lived as a hermit for many years, and here he was buried. His body was translated by Urban, Bishop of Llandaff, to a tomb before the Lady-altar in "the old monastery" of the cathedral city, which afterwards became the cathedral church of St. Peter. A few years after the "Liber Landavensis" was written, there appeared the "Historia Regum Britanniae" of Geoffrey of Monmouth, and this romantic chronicle is the source of the later and more elaborate legend of St. Dubric, which describes him as "Archbishop of Caerleon" and one of the great figures of King Arthur's court. Benedict of Gloucester and John de Tinmouth (as adapted by Capgrave) developed the fictions of Geoffrey, but their accounts are of no historical value. There is no record of St. Dubric's canonization. The "Liber Landavensis" assigns his death to 14 November, but he was also commemorated on 4 November. The translation of his body, which the same authority assigns to 23 May, is more usually kept on 29 May.
<urn:uuid:b1fb02ac-ec18-4c47-b4af-a374ceff0c04>
CC-MAIN-2013-20
http://www.newadvent.org/cathen/05179a.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702849682/warc/CC-MAIN-20130516111409-00029-ip-10-60-113-184.ec2.internal.warc.gz
en
0.97337
797
2.515625
3
Almost all HF propagation that takes place over 2000 miles is the result of multiple bounces off of the ionosphere. As described earlier, the signals are for all practical purposes reflected off of the ionosphere. Multihop occurs when the signal is bounced between the ionosphere and the ground (or a body of water). Of course, the radio signal is weaker. In fact, every time the signal hits the ground, it loses energy. Also, the signal can be bounced inside of the F-layer. When this occurs, the signal never has to "touch" the ground, and of course not nearly as much radio energy is lost. Both methods can carry signals around the world. There really is not too much more to the multihop method.

Long path propagation is, as the name implies, over a long path--i.e. around the world. Now, specifically, long path refers to the path of communications that is the longest that one could choose...normally one would choose the shortest path to get the job done because that also implies using the least amount of power. For example, instead of a ham in England pointing his beam west to a station in New York, the English ham would point his antenna to the east--180 degrees opposite the direction he wishes to have a contact in. The station in New York would have to point his antenna to the west in order to hear the English station--also 180 degrees opposite the direction of the station he wishes to contact. By doing this, if the conditions are working with the two hams, they should be able to have a contact, although the signals will be distinctly weak due to the large losses that accompany such long distances. Due to these long distances, long path communications are usually accomplished using two beam antennae, one on each end of the contact (although it is not unheard of for one of the stations to have a simple wire antenna of some sort rather than a beam--this is meant to give the large majority of hams who read this some inspiration).

Grey line propagation is a little different. It is also usually a little more common...simply because the opportunity to communicate using this method exists every day. How can this be? I have only one question for you to answer to find out if you can participate using this method: does the sun set where you live? If you answered yes to this question then you can communicate with this method (I thought you could). Why does it matter that the sun set in your area? Grey-line propagation is the propagation that accompanies every sunset of every day. It is due to certain qualities being brought out by the absence of Mr. Sun. Things like less absorption. The grey line being referred to is the period of dusk during the late afternoon. During this period, increased communications to areas north and south exist due to the rapidly decreasing D layer. While the D layer is decreasing so quickly, the F layer is still stratified--dual-layered--which means that for a few precious moments, relatively unimpeded transmission can take place between stations north and south of each other (i.e. stations that are experiencing dusk at the same time). An interesting phenomenon occurs: stations located roughly around the Tropics of Cancer and Capricorn will have much better luck with grey line, due to atmospheric conditions--the sun is closer to these two locations year 'round, therefore it stands to reason that they should get the most ionization, and thus the best results.
That does not mean that northern areas won't realize this method...it just means that areas between the tropics will get better results.

Backscatter is an interesting phenomenon. Consider that a radio wave has a particular "skip" zone depending upon conditions. That is, if I transmitted a signal right now and if I knew the conditions affecting that signal, I could very easily predict where the signal would "touch down" on the earth--where the ground signal received by another station would be greatest. If it has a 300 mile skip on a particular day, then from my station in St. Louis, Missouri I could have a nice strong signal in Kansas City, Missouri. In between St. Louis and Kansas City, though, the signal would not be very strong--note that I am effectively ignoring ground-wave radiation for the sake of argument. Now, if I were to increase the transmitter power to such a level that the ionospheric region that my signal was "bouncing" from became saturated, some of the signals would "bounce" back into areas between Kansas City and St. Louis. That is, some of the signals would "scatter" themselves all around in a somewhat random pattern--I know, that is an oxymoron, but so what. Instead of the signals following a predictable path, they spread out and go different ways. That is the principle of backscatter--apply a lot of power and hope the signal gets to an area other than just the predicted "touchdown" point.
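The skip distance in that example can be roughed out with simple geometry once you assume a reflection height and a takeoff (elevation) angle. A flat-earth sketch in Python (my simplification; serious predictions account for earth curvature and use ray tracing):

```python
import math

def single_hop_range_km(layer_height_km: float, takeoff_deg: float) -> float:
    """Flat-earth estimate: the wave travels up to the layer and back down."""
    return 2.0 * layer_height_km / math.tan(math.radians(takeoff_deg))

# Illustrative: an F-layer at 300 km and a 15-degree takeoff angle.
# Earth curvature actually caps a single F-layer hop near 4,000 km.
print(f"~{single_hop_range_km(300, 15):.0f} km per hop")
```

Multihop range is then roughly the number of hops times this figure, less the ground-reflection losses described earlier.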
<urn:uuid:1077014c-9167-4123-ae2e-e1516abca0fb>
CC-MAIN-2014-23
http://www.qsl.net/ki0eg/propagation/f_layer.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275463.39/warc/CC-MAIN-20140728011755-00011-ip-10-146-231-18.ec2.internal.warc.gz
en
0.963773
1,052
3.203125
3
Vermont has released its fall 2012 standardized test scores, and according to the Burlington Free Press, there are encouraging gains in writing. Molly Walsh reports that the gains are especially strong among the younger grades.

Vermont is one of four states that give the New England Common Assessment Program test. The test was developed with New Hampshire and Rhode Island in 2005, and Maine later joined its neighboring states. Reading and math proficiency are tested beginning in grade 3, through grade 8, and again in grade 11. Writing and science knowledge are both tested in grades 8 and 11, but for younger grades, science is only tested in 4th grade, and writing only in 5th. The tests are designed to go beyond multiple-choice questions, with some short-answer and extended-response questions. The science test includes some inquiry questions that require experiments or collecting data.

Scores in math and science did not increase over 2011. Vermont's Secretary of Education expressed disappointment in this less encouraging result: "High school mathematics continues to be high on the Agency's and Governor's list of priorities. While we only saw a slight increase in high school math scores, our educators are serious about improving our students' understanding and passion for math," said Secretary of Education Armando Vilaseca in a press release. "If Vermont's students are going to be ready to continue their education beyond high school and be successful in the 21st century, they're going to need stronger math skills and knowledge. A two percent increase is not enough," Vilaseca said.

The high school scores were generally discouraging. Writing proficiency dropped slightly, while math proficiency went from 36% to 38%. Reading did better; 74% of 11th graders were scored as proficient in reading.

Younger students did not present much good news in math and reading. Math proficiency has been around 65%, and that did not change this year. Reading scores are similar to the 11th grade performance, at 73%. However, 5th grade students did better in writing. The percentage who scored at a proficient level in 2011 was 46%, and this went up to 51% for 2012. An even greater gain came with the 8th grade tests, where proficiency rose to 66% from a previous 59%.

State officials said they were pleased with the gains in writing. Michael Hock, director of educational assessment at the state Education Agency, said writing is the bright spot in this year's results. "The importance of writing skills cuts across all areas of the curriculum," Hock said in a statement. "For example, we know that our most successful schools have writing programs that focus on all content areas, even math and science. The impact of these programs is consistently evident in those schools' test scores."

Although Vermont's schools hope to raise achievement in math and science, the state has an enviable graduation rate of over 90%. Its college graduation rates are also very high, and Business Journal ranks it 8th among the 50 states for educational achievement.
<urn:uuid:476d7282-6f49-4f23-bbe2-fd8b9f2b2b1f>
CC-MAIN-2014-23
http://www.educationnews.org/education-policy-and-politics/vermont-encouraged-by-young-students-gains-in-writing/
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276353.59/warc/CC-MAIN-20140728011756-00214-ip-10-146-231-18.ec2.internal.warc.gz
en
0.970618
640
2.84375
3
A well-developed vocabulary pays off in many important ways. Better-than-average "word power" makes it easier to understand everything you read and hear—from textbook assignments to TV news reports or instructions on how to repair a bicycle. And word power obviously increases your effectiveness as a communicator. Think about it: as far as other people are concerned, your ideas are only as convincing as the words you use to express them. In other words, the vocabulary you use when you speak or write always significantly adds to or detracts from what you have to say.

VOCABULARY IN CONTEXT was written especially for you. The program was designed to enrich your personal "word bank" with many hundreds of high-frequency and challenging words. There are six thematic books in the series – Everyday Living Words, Workplace and Career Words, Science and Technology Words, Media and Marketplace Words, History and Geography Words, and Music, Art, and Literature Words. Each worktext presents topic-related readings with key terms in context. Follow-up exercises provide a wide variety of practice activities to help you unlock the meanings of unfamiliar words. These strategies include the study of synonyms and antonyms; grammatical word forms; word roots, prefixes, and suffixes; connotations; and the efficient use of a dictionary and thesaurus. Thinking skills, such as drawing conclusions and completing analogies, are included as reinforcement.

A word of advice: don't stop "thinking about words" when you finish this program. A first-class vocabulary must be constantly renewed! In order to earn a reputation as a first-rate communicator, you must incorporate the new words you learn into your everyday speech and writing.
Q: I just found out that my 1943 house has insulation in the walls, and I’m assuming it’s asbestos. I can’t afford to tear out my walls. What can I do? Are the cancerous fibers getting into the air through large nail holes?

I found out about the insulation when my handyman was installing a drapery rod. After drilling the holes, a little insulation came out on the drill bit and he said, “Do you realize your walls are insulated?” At the moment, I thought it was a good thing, and then later realized it had to be asbestos. I’m so worried about this.

A: First, don’t assume your insulation is asbestos. According to a bulletin put out by the Environmental Protection Agency, houses built between 1930 and 1950 may have asbestos as insulation. But the heyday of asbestos use in construction was from around 1950 to the mid-1970s.

Asbestos is a mineral fiber that can be identified only with a special type of microscope. There are several types of asbestos fibers. In the past, asbestos was added to a variety of products to strengthen them and to provide heat insulation and fire resistance. Your wall insulation may be vermiculite, a mineral that may or may not contain asbestos.

But even if the insulation contains asbestos, you needn’t worry. It won’t be harmful so long as it stays in the wall, and large nail holes are not enough to create a hazard. Asbestos is not dangerous unless it is friable, that is, unless large amounts of asbestos fibers become airborne. We are all exposed to small amounts of asbestos as we go about our daily lives. However, if asbestos is disturbed, larger amounts of fibers are released, which if inhaled can lead to health problems. Asbestos that might crumble easily if handled, or that has been sawed, scraped, or sanded into a powder, is more likely to create a health hazard.

In general, asbestos material in good condition will not release fibers. If the material is exposed, such as in old heater-duct insulation, check it regularly for signs of wear or damage such as tears, abrasions, or water damage. Damaged material may release asbestos fibers.

If the asbestos material is in good shape, do nothing. If it is damaged, the problem can be corrected by either repair or removal. Repair involves either sealing or covering asbestos material. Sealing (encapsulation) involves treating the material with a sealant that either binds the asbestos fibers together or coats the material so fibers are not released. Pipe, furnace, and boiler insulation can sometimes be repaired this way. Covering (enclosure) involves placing something over or around the material that contains asbestos to prevent the release of fibers. Walls are such a structure.

But, for you, the bottom line is: don’t worry unless you are going to make changes that involve taking the plaster or wallboard off the walls. At that point, you should have the insulation tested. If it does contain asbestos, removal by a professional asbestos abatement contractor is a must. Contact the local office of the California Division of Occupational Safety and Health for more information about asbestos abatement.
Declining rates of circumcision among infants will translate into billions of dollars of unnecessary medical costs in the U.S. as these boys grow up and become sexually active men, researchers at Johns Hopkins University warned.

In a study published Monday in the Archives of Pediatrics & Adolescent Medicine, a team of economists and epidemiologists estimated that every circumcision not performed would lead to significant increases in lifetime medical expenses to treat sexually transmitted diseases and related cancers, increases that far surpass the costs associated with the procedure.

Circumcision is a hotly debated and emotional issue in the U.S., where rates have been falling for decades. In the 1970s and 1980s, about 80% of baby boys were routinely circumcised in hospitals or during religious ceremonies; by 2010, that figure had dropped below 55%, according to the Centers for Disease Control and Prevention. Some of that decline is due to shifting attitudes among parents, but at least part of it can be traced to the decision by many states to eliminate Medicaid coverage for the procedure in order to save costs. Today 18 states, including California, do not provide Medicaid coverage for the procedure, which is considered cosmetic by many physicians.

But in the last decade, studies have increasingly shown that removing the foreskin of the penis has significant health benefits, said Dr. Aaron Tobian, senior author of the new study. Three randomized trials in Africa have demonstrated that circumcision was associated with a reduced risk of contracting HIV, human papillomavirus and herpes simplex in men. One of those studies documented a reduced risk of HPV, bacterial vaginosis and trichomoniasis in the female partners of men who were circumcised. Circumcision is believed to prevent STDs by depriving pathogens of a moist environment where they can thrive. The inner foreskin has been shown to be highly susceptible to HIV in particular because it contains large numbers of Langerhans cells, a target for the virus.

Tobian and his colleagues developed a computer-based simulation to estimate whether declining circumcision rates would lead to more STDs and thus higher medical costs. If circumcision rates remained about 50% instead of the higher rates of years past, the lifetime healthcare costs for all of the babies born in a single year would probably rise by $211 million, the team calculated. If circumcision rates were to fall to 10%, which is typical in countries where insurance does not cover the procedure, lifetime health costs for all the babies born in a year would go up by $505 million. That works out to $313 in added costs for every circumcision that doesn’t happen, the report said. In this scenario, nearly 80% of the additional projected costs were because of medical care associated with HIV infection in men, the team wrote.

The model includes only direct medical costs such as treatment for penile and cervical cancer, which are associated with HPV infection. It doesn’t consider nonmedical or indirect costs, such as transportation to doctors’ appointments or lost income.

To Tobian, the message is clear: Government efforts to save money by denying coverage for circumcision are penny-wise but pound-foolish. “The federal Medicaid program should reclassify circumcision from an optional service to one all states should cover,” he said. That sentiment was echoed in an editorial accompanying the study.
UCLA health economist Arleen Leibowitz wrote that by failing to require states to cover circumcision in Medicaid plans, the U.S. reinforces healthcare disparities. “If we don’t give poor parents the opportunity to make this choice, we’re discriminating against their health in the future,” she said in an interview. “If something is better for health and saves money, why shouldn’t we do it? Or at least, why shouldn’t we allow parents the option to choose it?”

Ellen Meara, a researcher at the Dartmouth Institute for Health Policy and Clinical Practice who was not involved with the study, praised the researchers for conducting a careful analysis. But she questioned whether data from HIV studies in Africa were generalizable to the U.S. Medicaid population. Still, it’s “the best information we have,” she said. “There’s nothing better to plug in.”

The analysis comes a week before the American Academy of Pediatrics is scheduled to release a new policy on circumcision. Since 1999, the doctors’ group has taken a neutral stance on the procedure, saying that “the scientific evidence demonstrates potential medical benefits” but that it’s not strong enough to say that circumcision should be routine. Some reports have indicated that the new policy will state that the health benefits of circumcision outweigh the procedure’s risks, but will stop short of recommending it for all baby boys. A spokeswoman for the academy declined to comment before the policy is formally released Monday. A shift in position could boost support for circumcision, since both pediatricians and parents look to the academy for guidance, Leibowitz said.

USC health economist Joel Hay said the new study was inherently flawed because ethical concerns about the procedure trumped any economic analysis of its potential benefits. “You’re taking an asymptomatic individual and forcing a procedure on him,” he said. Hay also argued that Americans didn’t need circumcision to prevent HIV infection because they had other options, such as using condoms. He said that just last month the U.S. Food and Drug Administration approved the use of a once-a-day pill called Truvada to reduce the risk of HIV transmission in high-risk groups. “There’s no reason why people have to engage in this irreversible procedure,” he said.

By Eryn Brown, Los Angeles Times
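The article reports the model's aggregate outputs but not its internals. As a back-of-envelope check only, not the authors' actual simulation, the two figures quoted for the 10% scenario can be related in a few lines of Python:

```python
# Back-of-envelope check of the article's figures (illustrative only;
# the study's model simulated lifetime STD incidence and treatment
# costs, not a flat per-case amount).
ADDED_COST_TOTAL = 505_000_000  # dollars per birth cohort, 10% scenario
ADDED_COST_PER_CASE = 313       # dollars per circumcision not performed

implied_forgone = ADDED_COST_TOTAL / ADDED_COST_PER_CASE
print(f"Implied forgone circumcisions per cohort: {implied_forgone:,.0f}")
# Prints roughly 1.6 million: the implied number of additional boys in
# one birth cohort left uncircumcised under the 10% scenario.
```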
Beneath the Underdog: Race, Religion, and the Trail of Tears
By Patrick Minges
Union Theological Seminary in the City of New York
© Copyright 1994, 1998, All Rights Reserved
Used With Permission

In the fields and homes of the colonial plantations of the United States in the late eighteenth century, the first intimate relations between African-American and Native-American peoples were forged in their collective oppression at the hands of the "peculiar institution." The institution of African slavery, as it developed in the New World, was based upon the lessons learned in the enslavement of the traditional peoples of the Americas. In spite of a later tendency in the Southern United States to differentiate the African slave from the Indian, African slavery was in actuality imposed on top of a preexisting system of Indian slavery. In North America, the two never diverged as distinct institutions.

Vast numbers of indigenous peoples toiled to their death in the fields and mines of the European colonists from the very earliest points of contact. Many of the early explorations of the New World were quite simply slaving expeditions. The colonial predisposition to cite Indian depredations as justification for Indian wars was often quite simply a rhetorical exercise to cover the seizure and enslavement of the indigenous peoples of the Americas. A Cherokee from Oklahoma remembered his father's tale of the Spanish slave trade: "At an early stage the Spanish engaged in the slave trade on this continent and in so doing kidnapped hundreds of thousands of the Indians from the Atlantic and Gulf Coasts to work their mines in the West Indies."

With the arrival of "twenty negars" aboard a Dutch man-of-war in Virginia in 1619, the face of American slavery began to change from the "tawny" Indian to the "blackamoor" African between 1650 and 1750. Though the issue is complex, the unsuitability of the Native American for the colonials' labor-intensive agricultural practices, their susceptibility to European diseases, the proximity of avenues of escape for Native Americans, and the lucrative nature of the African slave trade led to a transition to an African-based institution of slavery. During this period, however, the colonial wars against the Pequots, the Tuscaroras, the Yamasees, and numerous other Nations led to the enslavement and relocation of tens of thousands of Native Americans.

By the late years of the seventeenth century, caravans of Indian slaves were making their way from the Carolina backcountry to forts on the coast, just as they were doing on the African continent. Once in places such as Charleston or Savannah, the captives were loaded on ships for the "middle passage" to the West Indies or other colonies such as New Amsterdam or New England. Many of the Indian slaves were kept at home and worked on the plantations of the Carolinas; by 1708, the number of Indian slaves in the Carolinas was nearly half that of African slaves. By the beginnings of the eighteenth century, the Cherokee people had become objects of the slave trade to the extent that a tribal delegation was sent to the Royal Governor of South Carolina to protect the Cherokee from Congaree, Catawba, and Savannah slave-catchers.
In 1705, the Cherokee accused the colonial governor of granting "commissions" to slave-catchers to "set upon, assault, kill, destroy, and take captive" Cherokee citizens to be "sold into slavery for his and their profit." The Cherokee slave trade was so serious that it had, by this time, eclipsed the trade for furs and skins and become the primary source of commerce between the English and the people of South Carolina.

During this transitional period, Africans and Native Americans shared the common experience of enslavement. In addition to working together in the fields, they lived together in communal living quarters, produced collective recipes for food and herbal remedies, shared myths and legends, and ultimately became lovers. The intermarriage of Africans and Native Americans was facilitated by the disproportionality of African male slaves to females (3 to 1) and the decimation of Native American males by disease, enslavement, and prolonged wars with the colonists. As Native American societies in the Southeast were primarily matrilineal, African males who married Native American women often became members of the wife's clan and citizens of the respective nation. As relationships grew, the lines of distinction began to blur. The evolution of red-black people began to pursue its own course; many of the people who came to be known as slaves, free people of color, Africans, or Indians were most often the product of integrating cultures. In areas such as southeastern Virginia, the Low Country of the Carolinas, and Silver Bluff, S.C., communities of Afro-Indians began to spring up.

The depth and complexity of this intermixture is revealed in a 1740 slave code from South Carolina: "all negroes and Indians, (free Indians in amity with this government, and negroes, mulattoes, and mustezoes, who are now free, excepted) mulattoes or mustezoes who are now, or shall hereafter be in this province, and all their issue and offspring... shall be and they are hereby declared to be, and remain hereafter absolute slaves."

It is important to note at this point that, according to most researchers and observers, the concept of racism as an identifying component in interaction did not exist among the traditional nations of the early Americas. William McLoughlin has stressed the importance of clan relationships and the larger national identities of Native Americans; race was not considered a critical element in perception or hostility. In her pivotal work Slavery and the Evolution of Cherokee Society 1540-1866, Theda Perdue sums up the research in stating that the Cherokee regarded Africans simply as other human beings, "[for] since the concept of race did not exist among Indians and since the Cherokees nearly always encountered Africans in the company of Europeans, one supposes that the Cherokees equated the two and failed to distinguish sharply between the races." Kenneth Wiggins Porter, an African-American historian, concurs: "[we have] no evidence that the northern Indian made any distinction between Negro and white on the basis of skin color, at least, not in the early period and when uninfluenced by white settlers."

In the middle to latter part of the eighteenth century, white colonists began to recognize that, especially in areas of the South where Africans and Indians outnumbered whites 4 to 1, a great need existed to make "Indians & Negros a checque upon each other least by their Vastly Superior Numbers, we should be crushed by one or the other."
Various mechanisms began to be developed throughout the colonies which served to differentiate between African and Native Americans: slave codes began to distinguish between Africans and Native Americans, miscegenation laws were passed which forbade intermarriage between the two, African slaves were used against Indian uprisings, Native Americans were used to quell slave revolts, and bounties were offered to Native Americans for runaway slaves. The policy of fostering hatred between the races became an enduring element in the relationships among the varied peoples of the South; it was codified by the Virginia Supreme Court in 1814 when it made provisions related to the natural rights of white persons and Native Americans, but "entirely disapproving, thereof, so far as the same relates to native Africans and their descendants."

Following the Revolutionary War and with the settlement of hostilities with the Native Americans, the newly established national government inaugurated its program to promote "civilization" among the friendly Indian tribes, which furnished them with "useful domestic animals, and implements of husbandry." A critical element in the civilization program was the shift from a subsistence-based agricultural system to a plantation-based, large-scale farming system. However, this dramatic shift in the culture of the peoples of the Southeast could not be accommodated without first altering the entire social, political, and religious structures of traditional societies. Towards this end, the missionaries of the Christian churches proved quite useful.

From the very beginning of United States policy toward the Indians, missionaries (as well as government agents) played a critical role in the civilization/christianization of the indigenous inhabitants of North America. George Washington's Indian policy stated that "missionaries of excellent moral character" should be appointed to reside in their nation, well supplied with "all the implements of husbandry and the necessary stock for a model farm." It went further to state: "It is particularly important that something of this nature should be attempted with the Southern nations of Indians, whose confined situation might render them proper subjects for the experiment."

With the establishment of the first model farms and missions among the "Five Civilized Tribes" of the Southeastern United States, a key element in this civilization process was the use of African slaves as laborers in the building and operation of the model farms and missions. Farms grew into plantations; buildings grew into towns. As the program of civilization pursued its goals, slavery spread among the nations of the Southeast. Individuals who held positions of power and land began to grow wealthy and to buy black slaves to extend their fields and tend to their livestock. Intermarriage between the Nations and the whites who served among them increased: mixed-blood natives who spoke English began to adopt the social and cultural patterns of the missionaries and white farmers who surrounded them, including slavery. Gradually the nations developed a landed elite, and a small group of shopkeepers and entrepreneurs formed a bourgeois element that became dominant in national affairs. It was among this group of the rich and powerful, the assimilated peoples of the Five Civilized Tribes, that slavery became most accepted.
Though the missionaries did not themselves own slaves "except with a view towards emancipation" and only used slaves rented or borrowed from Native American slave owners, they were reluctant to preach against the evil of slavery among their practitioners in the Five Civilized Tribes. Many of their most loyal supporters were slave owners, and the local governments and federal agents would oppose the missionaries should they choose to espouse the cause of abolition. Many missionaries believed that the most important goal was to first convert the heathen, then attempt to deal with the sin of slavery. In fact, some government agents attributed the progress made by the Five Civilized Tribes to the growth of the practice of slavery among them; one such agent stated, "I am clearly of the opinion that the rapid advancement of the Cherokees is owing in part to the fact of their being slave holders." In addition, their governing boards in the North did not want to jeopardize contributions from wealthy persons who disliked abolition. The missionaries, and especially those of the American Board, established a basic position of neutrality "between two fires," and as the Bible did not explicitly condemn slavery, they accepted "all to our communion who give evidence that they love the Lord Jesus Christ."

However, several dynamic phenomena were to draw many of the missionaries away from their positions of neutrality and cast the Five Civilized Tribes into a cauldron which would have devastating effects upon the Nations for the next hundred years. The first was a decisive split which occurred within the Nations themselves, between those who pursued the path of assimilation, commonly referred to as "progressives," and those who clung to traditional values, the "conservatives." Especially in the light of the pan-Indian religious awakening inspired by Tecumseh/Tenskwatawa in the early nineteenth century, many of the full-blooded members of the Southeastern Nations rebelled against assimilation by reasserting traditional methods of living. This left little room for colonial institutions, including slavery, among large populations of the full-blooded members of the Southeastern Nations.

In addition, there were splits among the various nations according to the level of assimilation to white culture and intermarriage between Europeans and the peoples of the First Nations. Within the so-called Five Civilized Tribes, nations such as the Choctaw, Chickasaw, and especially the Cherokee intermarried with the white missionaries, government agents, and local settlers, while the Muscogean people of the deep South did not. A joke developed among the Southeastern nations which highlighted this aspect of Southern society: A Creek said to a Cherokee, "You Cherokees are so mixed with whites we cannot tell you from the whites." The Cherokee replied: "You Creeks are so mixed with the Negroes we cannot tell you from the Negroes."

From the middle part of the eighteenth century and well into the nineteenth century, Africans had been fleeing slavery south along the same routes that their native forebears had used in earlier times. As Congressman Joshua Giddings described it a hundred years later, "The efforts of the Carolinians to enslave the Indians, brought with them the natural and appropriate penalties. The Indians began to make their escape from slavery to the Indian Country.
Their example was soon followed by the African Slaves, who also fled to the Indian Country, and, in order to secure themselves from pursuit, continued their journey."

The Muskogees, and especially their relatives the Seminoles (a corruption of the Spanish word cimarron, meaning "runaway" or "maroon") of southern Florida, accepted these African-American runaways and incorporated them into their nations because the Africans were well-skilled in languages, agriculture, technical skills, and warfare. Just as the underground railroad provided freedom in the North in later years, this other underground railroad ran south to freedom on the border.

Among the Muskogees and the Seminoles, the Africans were granted much greater freedom, even though they were referred to as "slaves." Africans among the Muskogees could own property, travel freely from town to town, and marry into the family of their owner. Often, the children of a Muskogee's African-American slaves were free, and African-American Muskogees became traditional leaders among several local indigenous communities. Among the Seminoles, there was even greater freedom. The blacks "lived set apart to themselves, managing their own stocks and crops, paying only tributes to their owners." The Africans could own property, moved about with freedom, and were allowed to arm themselves. According to contemporary sources, the Seminole "would almost sooner sell his child as his slave," and there existed "a law among Seminoles, forbidding individuals from selling their negroes to white people."

The Africans were more than just the laborers and technicians for the Muskogee and Seminole; they became their diplomats, their warriors, and their religious leaders. In many areas throughout the South, the Muskogee were continually exposed to an apocalyptic religious tradition that promoted resistance to white oppression. A prophetic Christianity spread among African-Americans, witnessed by Francis Le Jau as early as 1710, in areas such as Goose Creek, S.C. and Silver Bluff, S.C. Jesse Galphin, himself, was an Indian trader with the Muskogee. On the frontier, there were constant rumblings of insurrections by black Christians, and there was great fear of blacks and Indians coming up from Florida "to attack planters, to rob and plunder us," and to rescue enslaved Africans. Joel Martin, in his work Sacred Revolt, believes that African-American prophetic Christianity may have contributed to the emergence of the Redstick prophetic movement, for at the heart of African-American Christianity was a spiritually inspired critical view of Anglo-American civilization. One leader in the Redstick rebellion was the Prophet Abraham (Souanakke Tustenukke), a West African slave who fled south to Florida and served as both war leader and interpreter for the maroon community at Fort Negro, Florida.

Throughout the Southeastern United States, there existed independent and integrated Afro-Indian communities led by African and mixed-blood religio-political leaders such as Jim-Boy, Black Factor, Garcon, Mulatto King, Chief Bowlegs, and the Choctaw (Seminole) Chief. Kenneth Wiggins Porter described the peculiar presence of Africans in Florida: "...not only were there chiefs of mixed Indian and Negro Blood among the Seminoles, and free negroes acting as principal counselors and war-captains, but...
the position of the very slaves was so influential that the Seminole nation might present to students of political science an interesting and perhaps almost unique example of a very close approach to a doulocracy, or government by slaves."

The presence of such refuges and spiritual centers so close to colonial plantations, especially in the light of slave rebellions in Haiti and the colonies, proved to be a great threat to the institution of slavery. General Andrew Jackson, believing the settlements to be established by "villains for the purpose of rapine and plunder," destroyed them in the First and Second Creek Wars. As Joshua Giddings noted, there was but one effort in Jackson's war: "the bloody Seminole War (sic) of 1816-17 and 18 arose from the efforts of our government to sustain the interests of slavery," and "our troops were employed to murder women and children because their ancestors had once been held bondage, and to seize and carry back to toil and suffering those who had escaped death." During these wars, those "stolen negroes" not killed or returned to the English colonies fled deeper into the South.

It is important to note at this point that Africans and mixed-bloods were not just religious leaders among the exile communities of Muskogees and Seminoles; the same also existed within the communities of the Cherokee, Choctaw, and Chickasaw. Most of the early records of the missionaries note that their earliest converts were the enslaved African-Americans within Native American communities. Even as late as 1818, the missionaries referred to their Sabbath schools as "our Black Schools," because of the presence of Africans as both students and teachers. As few missionaries spoke the native languages, the Africans played an intermediary role as teacher and, of necessity, preacher.

One of the most fascinating accounts of the African presence in the early Native American church comes from Cornelia Pelham, a visitor to a mission in the Chickasaw Nation: "About two thirds of the members of the church are of African descent; these mostly understand English; and on that account are more accessible than the Chickasaws. The last mentioned class manifest an increasing attention to the means of grace, and since the commencement of the present year, more of the full Indians have been constant in their attendance upon religious meetings, than at any time since the mission was established. The black people manifest the most ardent desire for religious instruction, and often travel a great many miles to obtain it... Two or three years ago, a black man who belonged to the mission church, opened his little cabin for prayer, on the evening of every Wednesday, which was usually attended by half a dozen colored persons. This spring, the number suddenly increased, till more than fifty assembled at once, many of whom were full Indians. The meetings were conducted wholly by Christian slaves, in the Chickasaw language. One of their number can read fluently in the Bible, and many of the others can sing hymns which they have committed to memory from hearing them sung and recited."

Similar experiences are recorded among the Cherokees in the early nineteenth century, including the case of two slaves who were teaching their Cherokee mistress to read in the Bible. In August 1818, a full-blooded Cherokee seeking admission to the Chickamauga Mission was found "able to spell correctly in words of 4 & 5 letters. He had been taught solely by black people who had received their instruction in our Sunday School."
Within the cultural nexus of the integrated community of the early American frontier, a unique synthesis grew in which African and Native American people shared a common religious experience. Africans did not simply absorb Native American traditions; the process of sharing cultural traditions went both ways. From the slave narratives, we learn of the role that Native American religious traditions played in African-American society:

"Dat busk was justa little busk. Dey wasn't enough men around to have a good one. But I seen lots of big ones. Ones where dey all had de different kinds of banga. Dey call all de dances some kind of banga. De chicken dance is de Tolosabanga, and de Istifanibanga is de one whar dey make lak dey is skeletons and raw heads coming to git you. De Hadjobanga is de crazy dance, and dat is a funny one. Dey all dance crazy and make up funny songs to go wid de dance. Everybody think up funny songs to sing and everybody whoop and laugh all de time."

"When I wuz a boy, dere wuz lotsa Indians livin about six miles frum the plantation on which I wuz a slave. De Indians allus held a big dance ever few months, an all de niggers would try to attend. On one ob dese ostenttious occasions about 50 of us niggers conceived de idea of goin, without gettin permits frum de master. As soon as it gets dark, we quietly slips outen de quarters, one by one, so as not to disturb de guards. Arrivin at de dance, we jined in the festivities wid a will. Late dat nite one ob de boys wuz goin down to de spring fo de get a drink ob water when he notice somethin movin in de bushes. Gettin up closah, he look agin when-lawd hab mersy! Patty rollers!"

Slaves were welcome at the Native American dances and festivals, where they mixed and mingled and danced together with the Indians, and the Muskogees welcomed new dances, including those from their African counterparts. Native Americans also played roles in the development of the African churches, both in the "invisible institution" as well as the black church:

"Another dispensation of Providence has much strengthened our hands, and increased our means of information; Henry Francis, lately a slave to the widow of the late Colonel Leroy Hammond, of Augusta, has been purchased by a few humane gentlemen of this place, and liberated to exercise the handsome ministerial gifts he possesses amongst us. He is a strong man about forty-nine years of age, whose mother was white and whose father was an Indian. Brother Francis has been in the ministry fifteen years, and will probably become the pastor of a branch of my large church... it will take the rank and title of the 3rd Baptist Church of Savannah."

A close neighborly feeling existed between the peoples of the Five Civilized Tribes and the Africans within their Nations. Even as slave owners, the Native Americans were particularly noted for their kindness and refusal to implement even their own national laws with respect to slavery. According to one Southern visitor to the Indian nation, "The Indian masters treated their slaves with great liberality and upon terms approaching perfect equality, with the exception that the owner of the slave generally does more work than the slave himself." The slaves themselves noted the differences: "We all live around on them little farms, and we didn't have to be under any overseer like the Cherokee Negroes had lots of times. We didn't have to work if there wasn't no work to do...
Old Chief treated all the Negroes like they was just hired hands, and I was a big girl before I knowed very much about belonging to him."

Even within a particular nation there was great variation; New Thompson noted that "the only negroes that have to work hard were the ones who belonged to the half-breeds. As the Indian didn't do work he didn't expect his slaves to do much work."

Within the conservative elements of the Five Civilized Tribes, more than just a close neighborly feeling existed. Cudjo, the slave of Cherokee Chief Yonaguska of North Carolina, described their relationship thus: "He never allowed himself to be called 'master,' for he said Cudjo was his brother, and not his slave."

In the late 1820s, the abolition movement spread among the Cherokees of North Carolina; the Cherokee American Colonization Society was formed in 1828, and the Cherokee David Walker spoke for many full-blood Cherokees in 1825 when he said, "There are some Africans among us; ... they are generally well treated and they much prefer living in the nation to a residence in the United States... The presumption is that the Cherokees will, at no distant date, cooperate with the humane efforts of those who are liberating and sending this proscribed race to the land of their fathers." Caught between Benjamin Lundy (whose abolitionist newspaper, The Genius of Universal Emancipation, had once employed William Lloyd Garrison) in the West and the Manumission Society of North Carolina in the East, there is little doubt that the full-bloods were exposed to abolitionist rhetoric. In 1824, the Baptist minister Evan Jones, a noted opponent of slavery, had come to work as a missionary among the full-bloods in the valley towns of North Carolina.

Among the Muskogee and Seminoles of the deep South, the abolitionist movement had been spread by the British in the latter half of the eighteenth and early nineteenth centuries. The British had offered freedom to African-American slaves during both the 1776 and 1812 wars, believing that "the terror of revolution in the southern states can be increased to good effect." Among those ex-slaves of southern Florida who had existed in free communities like Fort Negro, the abolitionist message struck a particular note.

In 1828, the Cherokee people took what they considered their final steps towards "civilization" with the establishment of a constitution, a bicameral legislature, a judicial system, and an electoral process which elected John Ross as principal chief. However, in the same year, the people of the United States elected Andrew Jackson, noted Indian fighter and slave holder, to the Presidency of the United States. In his first message to Congress, Andrew Jackson set forth his plan for the removal of all of the Southeastern Indian nations to lands west of the Mississippi River. Eleven days after Jackson's message to Congress, the state of Georgia (bolstered by their man in the White House) nullified all Cherokee laws, prohibited the Cherokee government from meeting, and ordered the arrest of anyone opposing emigration westward.

In the minds of most of the inhabitants of the Southeast, the issues of slavery and removal were indissolubly linked. Among the reasons for removal of the Muskogee, and especially the Seminoles, was the presence of another class of citizens of the nation: the African-Americans. Moreover, the presence of "abolitionist" missionaries was a tremendous threat to the institution of chattel slavery.
Indicative of the nature of the problem was the attitude of Sophia Sawyer, who, when asked in 1832 by the Georgia Guard to remove two African boys from her classroom, replied, "... until the Supreme Court of the United States declares the Cherokee nation to be a part of the State of Georgia I will obey Cherokee laws, which are just laws, not Georgia laws."

The relationship between slavery and removal was not one that was lost upon the Cherokees, though their understanding of the situation was propelled by a different focus. Following a sermon by Evan Jones on providence in one of the Valley Towns of North Carolina, a discussion ensued regarding what sins could have turned God's face away from the Cherokee Nation. "God cannot be pleased with slavery," said one of the Cherokees. There followed some discussion respecting the expediency of setting slaves at liberty. When one of those present noted that freeing the slaves might cause more harm than good, a Native Baptist preacher replied, "I never heard tell of any hurt coming from doing right."

In 1835, the movement to free the African slaves of the Cherokee nation was put into motion by several influential men of the nation; arrangements were being made to emancipate the slaves and receive them as Cherokee citizens. The following December, the "treaty party" of the largely assimilated, slave-owning Cherokees signed the Treaty of New Echota, relinquishing all lands east of the Mississippi and agreeing to migrate to Oklahoma. According to missionary Elizur Butler, the Treaty of New Echota effectively prevented the abolition of slavery within the Cherokee Nation. Though the signers of this treaty were ultimately punished for treason, the impact of this treaty would be disastrous upon Cherokee and African alike for many years.

On the eve of the forced displacement of the Five Civilized Tribes, the African-American presence among the Cherokees was estimated by an 1835 census at approximately 10-15% of the Nation. However, taking into account that free blacks and people of mixed ancestry were probably not counted, we can assume the number to be much higher, especially among the Muskogee and Seminole. In spite of tales used to support emigration, the natives were reluctant to leave their ancestral homelands. In the spring of 1838, the process of forced removal began for the Cherokee at the hands of the U.S. military. An African-American member of the community described the process of removal:

"The weeks that followed General Scott's order to remove the Cherokees were filled with horror and suffering for the unfortunate Cherokees and their slaves. The women and children were driven from their homes, sometimes with blows, and close on the heels of the retreating Indians came greedy whites to pillage the Indians' homes, drive off their cattle, horses, and pigs, and they even rifled the graves for any jewelry, or other ornaments that might have been buried with the dead. The Cherokees, after having been driven from their homes, were divided into detachments of nearly equal size and late in October, 1838, the first detachment started, the others following one by one. The aged, sick and young children rode in the wagons, which carried provisions and bedding, while others went on foot. The trip was made in the dead of winter and many died from exposure from sleet and snow, and all who lived to make this trip, or had parents who made it, will long remember it, as a bitter memory."

Resistance among the Cherokees and the slaves was high; many had to be bound before being brought out.
A Georgia volunteer was later to remark on the cruelty imposed upon the Indians: "I fought through the civil war and have seen men shot to pieces and slaughtered by thousands, but the Cherokee removal was the cruelest work I ever knew." The Indians, slaves, and white members of the Cherokee nation were rounded up into concentration camps where they were kept "as pigs in a sty." Starvation and disease were so rampant among those forcibly marched to the West that missionary Daniel Buttrick said "we are almost becoming familiar with death." A month later he was to say that the government might more mercifully have put to death everyone under a year or over sixty; rather, it had chosen "a most expensive and painful way of exterminating these poor people."

Without a doubt, the Trail of Tears fell hardest upon the 1,000 African-Americans who were forced to march, many without shoes, through the dead of winter into Oklahoma. The route to Oklahoma was blazed by African-Americans: "My grandparents were helped and protected by very faithful Negro slaves who... went ahead of the wagons and killed any wild beast who came along." In spite of the fact that they were given the responsibility to guard (with axes and guns) the caravans at night, few of the slaves made their escape. The newspaper reports of the time detailed a peaceful and deathless trek of the Cherokees, but missionary Elizur Butler estimated conservatively that over 4,600 Indians and African-Americans died on that nine-month march. More recent estimates put the number of deaths as a direct result of the Cherokee Trail of Tears at nearly 8,000. As many as one-quarter to one-third of the African-Americans who made the trek west may have died along the way.

If we can assume similar numbers of deaths among the Choctaw slaves as among the Cherokee, perhaps 100 of the Choctaw slaves died en route. Many Choctaws stayed in Alabama and formed a community of resistance with African slaves, similar to Fort Negro, which proved to be a thorn in the side of later governments.

Among the Muskogee and Seminoles, where not only were relationships with Africans quite deep but where Africans played prominent roles in their society, the question of removal was very serious. The Africans among these nations knew that they were the property of men from whom they, or their ancestors, had fled; the burden of proof lay upon them, and losing to the United States government meant that they would become the property of whoever claimed them. In 1836, simultaneous wars were initiated by the United States government to remove the Muskogee and their relatives the Seminoles from their lands in the deep South. The process was not to be completed until nearly ten years, twenty million dollars, and fifteen hundred soldiers' lives later. The removal of the Muskogees, Seminoles, and their African counterparts was the costliest war in American history until the Civil War.

Let us make no mistake about the nature of this endeavor. As General Jessup, the leader of the campaign, stated in 1836, "This, you may be assured, is a negro, not an Indian war: and if it be not speedily put down, the South will feel the effects of it on their slave population before the end of the next season." Joshua Giddings saw the war in a similar light; the Second Seminole War, on our part, "had not been commenced for the attainment of any high or noble purpose....
Our national influence and military power had been put forth to reenslave our fellow men: to transform immortal beings into chattels; and to make them the property of slave holders; to oppose the rights of human nature; and the legitimate fruits of this policy were gathered in a plentiful harvest of crime, bloodshed, and individual suffering."

The Indians were led in their resistance by the same Afro-Indian leaders who had fled deep into Florida to escape from slavery; Jim-Boy, Gopher John, the Negro Abraham, Cudjo, Wild Cat, and many others led the Indians in their struggle for resistance. Leaders of the Muskogee and Seminole such as Opothoyehala, Micanopy, and Osceola had deep ties to the African-American communities in their presence. In the spring of 1837, General Jessup reasserted his position: "Throughout my operations I found the Negroes the most active and determined warriors; and during the conference with the Indian chiefs I ascertained that they exercised almost controlling influence over them."

To solve the problem, General Jessup set about to divide and conquer; he offered to free the slaves who would separate from the Indians, and to allow them to move to the West en masse. He wrote of John Horse of the Seminoles, "to whom, and to their people, I promised freedom and protection on their separating from the Indians and surrendering." Black emancipation and removal had become the policy of the United States Army. Jessup refused to return the African slaves to their owners in the South; they would be sent to the West as part of the Seminole Nation. Though many Africans surrendered and the Seminoles followed suit, the struggle to remove the last of the exiles from Florida went on for many years.

The Africans, Seminoles, and Creeks set about on the path to the Western territory, where the status of the Africans was uncertain and the relationship between the Seminoles and the Muskogees seemed undecided. One thing was certain and decided: the losses among the Creeks and the Seminoles on their Trail of Tears were immense. The Creeks and the Seminoles were said to have suffered a fifty percent mortality rate. For the Creeks, many of these deaths followed removal; probably one-third died from "bilious fevers." Among the Seminole, the deaths were not from disease, but from the terrible war of attrition that had been required to force them to move.

As they proceeded west upon the trail watered by their own tears and sanctified by the many gravestones of their children and elders, many of the Muskogee Indians began to sing the spiritual "We are going home." The words "We are going home to our homes and land; there is one who is above and ever watches over us" rang true to those nurtured in a Christian religion birthed in the cauldron of oppression. It also rang true to those traditionalists among the Muskogee who believed that they emerged from caves in the west and came east to settle in the Southeast. In the collective experience of African-Americans and Native-Americans who struggled to understand why a just deity allowed such injustice, a religious expression was born which reflected the essential nature of the experiences of both peoples. It gave them the strength to resist, and it gave them the strength to endure.

When the Cherokees were moving west along the more famous Trail of Tears, the missionaries who had been with them through the struggle in the homelands, the concentration camps, and the agony of the journey were with many of the Cherokee at their deaths.
Many of the contingents were led by the ministers of the American Board and their followers. The records of the Trail of Tears show that along the way, the churches themselves were allowed to congregate and express their faith in God. Reverend Jesse Bushyhead, himself a controversial Baptist slave owner, expressed his thanks that they "were able to continue, amidst the toil and sufferings of the journey, their accustomed religious services." Equally well, we can rest assured that whenever faces gathered around the campfire, there were Africans there to serve as spiritual guides into a different kind of wilderness. When there were dances to celebrate, deaths to mourn, or festivals to mark the passing of the seasons, there were Africans present. In addition, we must never forget that on the trail where we cried, there were also African tears. This we can never forget.

Notes

This is, of course, an issue of some debate, for there are many theories regarding pre-colonial contact between Africans and Native Americans. For a brief overview, see Leo Wiener, Africa and the Discovery of America (Philadelphia, 1920); Jack Forbes, Africans and Native Americans: The Language of Race and the Evolution of Red-Black Peoples (Urbana: University of Illinois Press, 1993); Ivan Van Sertima, They Came Before Columbus (New York: Random House, 1976); Michael Bradley, Voyage (Toronto: Summerhill Press, 1987).

George Washington Williams, History of the Negro Race in America from 1619 to 1880: Negroes as Slaves, as Soldiers, and as Citizens (New York: The Knickerbocker Press, 1882), 123-180.

David Brion Davis, The Problem of Slavery in Western Culture (Ithaca: Cornell University Press, 1966), 176.

See Almon Lauber, Indian Slavery in Colonial Times within the Present Limits of the United States (New York: Doctoral Dissertation, Columbia University, 1933); Barbara Olexer, Enslavement of the American Indian (Monroe, N.Y.: Library Research Associates, 1982); J. Leitch Wright, The Only Land They Knew: The Tragic Story of the American Indian in the Old South (New York: Free Press, 1981); Jack Forbes, Africans and Native Americans: The Language of Race and the Evolution of Red-Black Peoples (Urbana: University of Illinois Press, 1993); Patrick Minges, Evangelism and Enslavement (unpublished).

Grant Foreman, "Indian Territory in 1878," Chronicles of Oklahoma IV (1926), 264.

Booker T. Washington, The Story of the Negro: The Rise of the Race from Slavery, Vol. 1 (New York: Doubleday and Co., 1909), 129.

Gary Nash, Red, White and Black: The Peoples of Early America (Englewood Cliffs, N.J.: Prentice Hall, 1974).

H.T. Malone, Cherokees of the Old South: A People in Transition (Athens: University of Georgia Press, 1956), 20.

James Mooney, Myths of the Cherokees (Smithsonian Institution, Bureau of American Ethnology; Washington, D.C.: Government Printing Office, 1900), 32.

J. Leitch Wright, The Only Land They Knew: The Tragic Story of the American Indian in the Old South (New York: Free Press, 1981), 258.

For excellent surveys and discussions of this phenomenon, see Kenneth W. Porter, Relations Between Negroes and Indians Within the Present United States (Washington, D.C.: The Association for Negro Life and History, 1931); J. Leitch Wright, The Only Land They Knew: The Tragic Story of the American Indian in the Old South (New York: Free Press, 1981); Jack Forbes, Africans and Native Americans: The Language of Race and the Evolution of Red-Black Peoples (Urbana: University of Illinois Press, 1993); Laurence Foster, Negro-Indian Relations in the Southeast (Philadelphia: n.p.,
1935).

John Codman Hurd, The Law of Freedom and Bondage in the United States (Boston, 1858-1862), 303.

William McLoughlin, The Cherokee Ghost Dance: Essays on the Southeastern Indians (Georgia: Mercer University Press, 1984), 266.

Theda Perdue, Slavery and the Evolution of Cherokee Society 1540-1866 (Knoxville: University of Tennessee Press, 1979).

Kenneth W. Porter, Relations Between Negroes and Indians Within the Present United States (Washington, D.C.: The Association for Negro Life and History, 1931), 16.

Quoted in William S. Willis, Jr., "Divide and Rule: Red, White, and Black in the Southeast," Journal of Negro History 48 (1963): 165.

Quoted in David Brion Davis, The Problem of Slavery in Western Culture (Ithaca: Cornell University Press, 1966).

"Trade and Intercourse Act, March 30, 1802," in Francis Paul Prucha, Documents of United States Indian Policy, 2nd Edition (Lincoln: Univ. of Nebraska Press, 1990), 19.

Perdue, 50.

American State Papers: Indian Affairs, Vols. I and II, Documents, Legislative and Executive of the Congress of the United States, ed. Walter Lowrie, Walter S. Franklin, and Matthew St. Clair Clarke (Washington, D.C.: Gales and Seaton, 1832, 1834).

Perdue, 54; William G. McLoughlin, Cherokees and Missionaries (New Haven: Yale University Press, 1984), 139.

Robert T. Lewit, The Conflict of Evangelical and Humanitarian Ideals: A Case Study (Cambridge, MA: Dissertation, Harvard University, 1959), 35-53.

Lewit, 97.

William McLoughlin, "Red Indians, Black Slavery, and White Racism: America's Slaveholding Indians," American Quarterly 26 (1974): 368.

McLoughlin, "Red Indians, Black Slavery, and White Racism: America's Slaveholding Indians," 371.

Perdue, 121.

Quoted in Joel Martin, Sacred Revolt: The Muskogees' Struggle for a New World (Boston: Beacon Press, 1991).

Joshua Giddings, The Exiles of Florida: or, The Crimes Committed by Our Government Against the Maroons, Who Fled from South Carolina and Other Slave States, Seeking Protection Under Spanish Laws (Columbus, Ohio: Follett, Foster and Company, 1858), 4.

Kevin Mulroy, Freedom on the Border (Texas Tech University Press, 1993), 7.

Mulroy, 25. It is important to note that many of the Muskogee and Seminole referred to their African brethren as their "slaves" to protect them from white slaveholders who sought their return. In addition, there was some social status acquired by owning slaves, even though the Muskogee and Seminole had little need for slave labor because they did not adopt plantation-style agriculture as did the northern nations of the Five Civilized Tribes.

Joel Martin, Sacred Revolt: The Muskogees' Struggle for a New World (Boston: Beacon Press, 1991), 73.

Mulroy, 19.

Wiley Thompson to Lewis Cass, April 27, 1835. National Archives Microfilm Publications, Microcopy M234, Record Group 75, Records of the Bureau of Indian Affairs, Letters Received by the Office of Indian Affairs, 1824-1831, in Kenneth Wiggins Porter Collection, Schomburg Center for Research in Black Culture, New York, N.Y.

John L. Williams, The Territory of Florida (Gainesville: University of Florida Press, 1962), 239.

Martin, 75; Wright, 265.

See Francis Le Jau to John Chamberlayne, St. James, Goose Creek, 1709/10, quoted in Mulroy, 74.

Peter H. Wood, Black Majority (New York: Knopf, 1974), 298-301.

Martin, 73.

J. Leitch Wright, Creeks and Seminoles (Lincoln: University of Nebraska Press, 1986), 190.

Joshua Giddings, Exiles in Florida.

Foster, 24.
McLoughlin, Cherokees and Missionaries, 48; Perdue, 89; McLoughlin, Champions of the Cherokees, 21; Wright, Creeks and Seminoles, 223; Eighth Annual Report of the American Board of Commissioners for Foreign Missions (Boston, 1818), 16; Ninth Annual Report of the American Board of Commissioners for Foreign Missions (Boston, 1819), ibid.; Brainerd Journal, April 20, 1817, February 12, 1818.

The positive attitude of the Cherokees toward African-American missionaries could be related to the fact that the first missionary among the Cherokee was a black Methodist, John Marrant. Marrant's mission in 1740, in which he converted the "king" of the Cherokees, is considered among the most successful missionary enterprises among the Cherokee. According to Michael Roethler, "It is only natural that the Cherokees should judge the value of Christianity by the character of the people who professed it.... The Cherokees had no reason to suspect the religion of this Negro preacher." (Roethler, 126)

Sarah Tuttle, Letters from the Chickasaw and Osage Missions (n.p., 1821), 9-10.

Chickamauga Journal, quoted in H.T. Malone, Cherokees of the Old South: A People in Transition (Athens: University of Georgia Press, 1956), 142.

Lucinda Davis in Works Progress Administration, Oklahoma Writers Project, Slave Narratives (Washington: U.S. Government Printing Office, 1932), 58.

Preston Kyles in Works Progress Administration, Arkansas Writers Project, Slave Narratives (Washington: U.S. Government Printing Office, 1932), 220.

J. Leitch Wright, Creeks and Seminoles.

Letter of Andrew Bryan to Reverend Doctor Rippon, in Milton Sernett, ed., Afro-American Religious History: A Documentary Witness (Durham: Duke University Press, 1985), 49.

George Rawick, interview with Irene Blocker, 264.

Raleigh Wilson, Negro and Indian Relations in the Five Civilized Tribes from 1865 to 1907 (Ph.D. Dissertation, Iowa City: University of Iowa, 1949), 22.

House Reports, No. 30, 39th Congress, 1st Session, Washington, 1867, Pt. IV, Vol. II, 162.

Nellie Johnson in Works Progress Administration, Oklahoma Writers Project, Slave Narratives (Washington: U.S. Government Printing Office, 1932), 157.

Western History Collection, University of Oklahoma, Indian Pioneer History, Vol. 108: 213.

Cudjo quoted in Perdue, 106.

American State Papers II, 651.

Carl Degler, The Other South: Southern Dissenters in the Nineteenth Century (New York: Harper and Row, 1974), 19-21. The presence of large numbers of Quakers in North Carolina and Tennessee played a profound role in the development of anti-slavery sentiments. Benjamin Lundy estimated in 1827 that there were 106 anti-slavery societies in the South as compared with 24 in the Northern states.

McLoughlin, Cherokees and Missionaries; Wright, Creeks and Seminoles; Angie Debo, A History of the Indians of the United States (Norman: University of Oklahoma Press, 1970), 113.

Michael Roethler, "Negro Slavery among the Cherokee Indians, 1540-1866" (Ph.D. Dissertation, Fordham University, 1964).

McLoughlin, Cherokees and Missionaries; Wright, Creeks and Seminoles, 232.

A.B.C.F.M. Missionary Papers, Cherokees: Vol. VIII, 1831-1837, March 14, 1832.

Robert Walker, Torchlights to the Cherokees (New York, 1931), 298-299.

Elizur Butler to David Green, March 5, 1845, A.B.C.F.M. Missionary Papers, Cherokees: Vol. IX, 1838-1845.

"They told em they was hogs runnin around already barbecued with a knife and fork in their back.
Told em cotton growed so tall you had to put little chaps up the stalk to get the top bolls," Lewis Johnson in Works Progress Administration, Arkansas Writers Project, Slave Narratives (Washington: U.S. Government Printing Office, 1932), Eliza Whitmire in George Rawick, ed. American Slave: A Composite Autobiography (Westport, Connecticut: Greenwood Press, 1972), 380-381. A.B.C.F.M. Missionary Papers , Cherokees: Vol. IX, 1838-1845, Daniel Butricks Journal, February, 1838 James Mooney, Myths of the Cherokee and Sacred Formulas of the Cherokees (Cherokee, N.C. Cherokee Heritage Books, 1982), 124. Roethler, 150. A.B.C.F.M. Missionary Papers , Cherokees: Vol. IX, 1838-1845, Daniel Butricks Journal, July 1838. A.B.C.F.M. Missionary Papers , Cherokees: Vol. IX, 1838-1845, Daniel Butricks Journal, August 1838. Roethler, 150. Nathaniel Willis in Indian Pioneer Papers, Vol. 50, 117. A.B.C.F.M. Missionary Papers , Cherokees: Vol. IX, 1838-1845, Daniel Butricks Journal, March 1838. Russell Thornton, The Cherokees: A Population History, (Lincoln: University of Nebraska Press,1990), 118. Annie Abel, The American Indian as Slaveholder and Secessionist (Lincoln:University of Nebraska Press,1992), Henry W. Porter, Relations, 50-51. Executive Documents, 25th Congress, 2nd Session, 1837-1838,vol iii, no. 78, 52. Joshua Giddings, Exiles in Florida, Wright states the cause of the Second Seminole War was the seizure of Osceolas African wife by merchants who sought to sell her back into slavery. Opothoyohela was to go on to lead a Maroon community in their flight from the Creek Nation to Kansas during the Civil War. Executive Documents, 25th Congress, 3nd Session, 1838, no. 225, 51. Jessup quoted in Mulroy, 38. Russell Thornton, American Indian Holocaust And Survival (Lincoln: University of Nebraska Press,1992), Mary Hill interview, Okfuskee Town, Okemah, Okla., Apr. 19, 1937, Indian Pioneer Papers, 5:106-107. Wright, Creeks and Seminoles, Jesse Bushyhead, quoted in Grant Foreman, Removal (Norman:University of Oklahoma, 1932), 310.
Young children in the Peruvian Andes often need to travel hours to reach schools. These schools lack adequate learning materials and teachers. As a result, children become disenchanted with learning and believe they can better use their time at home helping their parents with farming. Peruvian students consistently score among the lowest on the PISA test compared to other Latin American countries. In 2015, I designed and manufactured a sensory platform for educational games that aims to enhance deductive and quantitative skills in adolescents by leveraging the dynamic quality of Play-doh. This affordable solution was deployed in rural areas of Peru where access to educational technologies is severely limited. I was inspired to use Play-doh after discovering the material's conductive qualities and realizing this capability could be combined with technology to make learning tactile and engaging. I set out to design an affordable sensor board that recognizes various manipulations of Play-doh. The board hosts a set of creative games built around memory skills, quantitative reasoning, and pattern recognition, designed to engage young children in resource-poor environments with little to no exposure to education. The kit is flexible and scalable because children can swap out "covers" on the board to switch between games, and the many possible manipulations of Play-doh allow for games of widely varying difficulty and learning objectives. In December 2015, the second iteration of the board was rolled out in Peru. Stylistic as well as technical improvements were made to the kit. In total, the cost of producing Klay is around $15 per kit. This project came out of a collaboration between the Harvard University School of Engineering and Applied Sciences and Universidad de Ingeniería y Tecnología in Lima, Peru. Klay was the winner of the 2015 Deutsche Bank Challenge. I am currently working on developing the next iteration of the board and implementing the platform in more classrooms around Peru.
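The sensing principle behind a board like this can be illustrated with a short sketch. The actual Klay hardware is not described above, so everything below is an assumption: it supposes a Raspberry Pi reading pads of conductive Play-doh through an MCP3008 analog-to-digital converter, with each pad wired as a voltage divider whose reading rises when dough completes the circuit. The channel mapping, threshold value, and the "shape cover" game are all hypothetical.

```python
# Illustrative sketch only: assumes a Raspberry Pi + MCP3008 ADC, with each
# Play-doh pad wired as a voltage divider on one ADC channel. Squeezing or
# placing conductive dough on a pad raises that channel's reading.
import time
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_350_000

def read_adc(channel):
    """Read one MCP3008 channel (0-7); returns a 10-bit value (0-1023)."""
    # Standard MCP3008 transaction: start bit, single-ended mode + channel, filler byte.
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

PADS = {0: "circle", 1: "square", 2: "triangle"}   # hypothetical game cover
THRESHOLD = 600                                     # tune per dough batch

def pressed_pads():
    """Return the names of all pads currently bridged by Play-doh."""
    return [name for ch, name in PADS.items() if read_adc(ch) > THRESHOLD]

# A trivial pattern-recognition "game": ask for shapes in order.
target = ["circle", "triangle"]
progress = 0
while progress < len(target):
    if target[progress] in pressed_pads():
        print(f"Good! You found the {target[progress]}.")
        progress += 1
        time.sleep(0.5)        # debounce between steps
    time.sleep(0.05)
print("Pattern complete!")
```

A real board would likely calibrate the threshold each session, since Play-doh's resistance drifts as the dough dries.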
Red flags for children younger than 5 include trouble learning numbers (1-9), colors, shapes, or the first letters of the alphabet. Jeff is 4 and has trouble using crayons and struggles with buttoning, zipping and tying. He also has difficulty following directions and paying attention to what he and others are doing. His poor concentration and difficulty sticking with an activity until completion cause behavioral problems. Mom's greatest concern is that Jeff's speech seems delayed and that he doesn't pronounce sounds correctly. It's typical for children to struggle with several areas of development at the same time, and they learn at very different rates. Don't ignore your concerns. Jeff needs to be evaluated by an educational specialist and a speech and language therapist to determine whether he is developing competently enough to learn once he reaches school age. Early and appropriate intervention is the most successful approach to managing developmental disabilities and delays like Jeff's.
For two individuals to communicate effectively, they have to make sure there are no barriers between them that would stop the communication; only by removing such barriers does communication become effective. Care workers in care settings make sure that they communicate without these barriers. "Effective communication with people of different cultures is especially challenging. Cultures provide people with ways of thinking–ways of seeing, hearing, and interpreting the world. Thus the same words can mean different things to people from different cultures, even when they talk the 'same' language. When the languages are different, and translation has to be used to communicate, the potential for misunderstandings increases."

When an individual from a foreign country does not speak English very well, it becomes harder for him to communicate effectively, because he will not know what to say or what the other person might be saying to him. It is very important to learn the language of the wider society in order to communicate with its members. In an early years setting, for example, if a Pakistani girl joins who cannot speak English despite living in England, it will be hard for her to communicate with the other children and staff. In Sunhil's playgroup the teachers make sure that children who cannot yet speak English well learn it as fast as they can, for their own benefit, so that they can speak easily with everyone else in the playgroup. Learning the language of the place where one lives is essential; without it a person cannot communicate effectively, or sometimes at all.

When cultural differences come between two individuals, they will no longer communicate effectively. Cultural barriers not only affect communication but also affect how the other person reacts, whether negatively or positively. A person might become rude or use abusive language, leaving the other person feeling offended. In Sunhil's playgroup the teachers make sure there are no cultural divisions between the children, because such divisions would affect their communication with one another and, as they grow up, their relationships with classmates; the teachers avoid such divisions among themselves, too. Cultural differences can damage effective relationships: a person might refuse to communicate with another individual simply because he is from Nigeria. Cultural behaviour can also get in the way, as when two individuals argue over whose culture has better traditions. For example, some people celebrate the 5th of November with fireworks, while others oppose fireworks because they are dangerous; when these two people with different beliefs communicate, one might say he loves Bonfire Night and the other that he hates it. Some might not want to communicate with individuals from different cultures at all because they feel their own culture is superior, which undermines effective communication. For example, an Indian might tell a Pakistani that Indian culture is better than Pakistani culture, causing arguments between the two and damaging the communication between them. "All of these differences tend to lead to communication problems. If the people involved are not aware of the potential for such problems, they are even more likely to fall victim to them, although it takes more than awareness to overcome these problems and communicate effectively across cultures."

Positioning:
Communication can suffer because of sitting positions, since two individuals cannot communicate effectively if they are not facing each other properly. If Sunhil's nursery nurse is communicating with a child and she is not on the same level as the child, the communication will suffer because the child will feel scared and will not respond to what she is saying. In some cultures position carries its own meaning. In a Hindu wedding the bride and groom walk one after the other (mainly the bride follows the groom) while saying their speeches and promises. Some people might see this as a barrier to communication, or even as rude, because the couple are not facing each other, but those involved believe the groom should walk first. In a Christian wedding, by contrast, the couple face each other, so the communication is direct while they say their promises and speeches.

Not being at the same level as another person can also create a barrier, and can make it hard to hear that person properly, so information gets misunderstood. For example, the teacher might be talking to Sunhil while she is standing up and he is sitting down, which can make him feel that she is standing over him, leaving him fearful of the teacher who wants to communicate with him. If the nursery nurse is not at the same level as the service user, the service user may not take her seriously, may not be able to hear her, and may take the information differently. For example, if Sunhil is sat on the floor and the nursery nurse is stood up when she asks him a question, he may not respond properly because he may feel she is not talking to him, and he may not be able to hear her. In early years settings teachers should make sure they are facing the children properly, or that the children are facing them, in order to communicate effectively; the positioning of an individual is very important because it supports the effective communication taking place. Teachers should also keep good eye contact with the person they are facing, so that the other person wants to know what they are saying and feels that their words are valued. When a teacher talks to a child in the nursery, she should lean down to the child's level. This supports effective communication, helps the child listen more carefully, and helps the child keep eye contact with the teacher. When a child has poor hearing, the teacher must be at the same level as the child; even if the child cannot hear, he or she will be able to lip read or partially understand what the teacher is trying to say. In Sunhil's playgroup there is a girl called Amy who is hard of hearing, so the teachers lean towards her so that they can hear what she is trying to say and so that she can lip read or partially hear what they say to her. Positioning matters just as much in group communication, because sitting badly can make individuals feel left out. People should sit together facing each other so that everyone knows what is being said and communication carries on without positional barriers. In Sunhil's playgroup, for example, the teachers make sure at story time that all the children are sitting in the right position, so that no one is left out, unable to hear, or unable to listen effectively.

Gestures:
Communicating with gestures is very important in health and social care. Gestures include facial expressions, hand signals, eye gaze and body position. For example, when a teacher in a care setting smiles at a child, the child immediately knows the teacher is happy with him; if the teacher points towards the door, the child knows she is indicating that he should leave the room. Non-verbal communication matters in early years settings because it helps teachers convey what they mean easily and clearly. When the children come into the playgroup the teacher greets them by waving, and the children immediately wave back even if they do not clearly hear her. In Sunhil's playgroup, if the teacher waves goodbye at home time, the children know what she means even if she does not say it aloud; the teachers believe that non-verbal communication in early years settings is even more important than verbal communication. Hand signals also make instructions easier for children to understand: if the teacher points to the door, the children know she wants them out of the room, whereas if she only said so without a hand signal, some children would not be listening and some would not understand. Gestures can be misread across cultures, however. In Sunhil's playgroup a teacher used the thumbs-up sign, which one of the children, an Australian, found strange; he told his parents, who reassured him that nothing was wrong with it. He found it strange because in Australia the thumbs-up sign means the opposite.
Alcohol is a toxin, so when you consume it, your body metabolizes it before everything else. It is digested in much the same way sugar is, meaning your pancreas secretes lots of insulin, a hormone that promotes the storage of nutrients as fat and prevents fat cells from getting broken down for energy. When you have alcohol in your stomach, everything you eat is immediately stored as fat. Studies have shown that people who drink on occasion are healthier than those who do not. The positive effects of wine, beer and some other drinks may not affect those who follow a healthy lifestyle as much as those who do not. Having a healthy social life is part of a healthy lifestyle, so keep that in mind when you decide whether or not to go out drinking with your friends. Also remember that not all alcohol is created equal. Although almost all alcoholic drinks will spike your blood sugar, red wine has actually been shown to lower blood sugar. It has also been shown to be beneficial to cardiovascular health, as well as overall health, so long as men don't drink more than two glasses per day and women don't exceed one. An Alcohol Cheat Sheet from Paleo Happy Hour summarizes which drinks fit best.

No Amount of Alcohol is Safe?
Recently, however, the benefits of alcohol have been called into question. The cancer risk is dose dependent, but the 2014 World Cancer Report from the World Health Organization's International Agency for Research on Cancer has concluded that no amount of alcohol is safe.
Letters and numbers have shapes that are unique to each other. If your little one learns about shapes as early as possible, he will be better prepared to recognize letters and numbers later. When learning to draw, he will also learn to write letters and numbers. Games built around patterns help develop your little one's pre-reading skills, and the Shape Nets for Practice can help with this. By knowing shapes and colors, a child gains a richer vocabulary for describing what he sees, what he wants, and the ideas he has. Receptive language skills, such as following directions, also often rely on words for shape and color (for example, "Please get a blue pencil."). Teaching shapes and colors to your child as early as possible can therefore improve language skills; take the Shape Nets for Practice along here too. The world consists of various shapes and colors, so as your little one starts to associate them with things he already knows, he can make sense of his surroundings efficiently by filtering out unnecessary information. For example, when he wants to find a toy fire engine in his toy box, he can find it quickly by ignoring everything that is not red. Signs and symbols use color to provide additional information about health and safety; a red sign, for example, signals "danger" or "stop." Color can also tell you something about a person's health: skin that is more than just bruised, or reddish skin that suggests someone has been in the sun too long. Moms can tell children about these signs and ask them to report to a trusted adult when they notice them. To help your child understand shapes and colors, Moms can make a shape-hunting game. Cut colorful paper into distinctive shapes, spread the pieces around the house in safe places for the children to find, and then ask your little one to look for a particular shape with a simple request. The Shape Nets for Practice fit right into this game.
Andrew Young never formally studied economics. But he learned early in his time as a civil rights leader what a powerful tool for good it could be. "Young people look back now and think the civil rights movement was about marching, getting beat up and bit by dogs, but the whole civil rights movement was really about the economy," he said yesterday. "The economic withdrawal campaign was what really changed the South." He went on to explain: the black community in Birmingham, Alabama, went on a consumption strike. They didn't buy anything but food and medicine for 90 days. "And after 90 days with nobody buying anything, the business community came to us and asked what they could do to get us all to shop again. We said, you can desegregate. You have signs saying 'black' and 'white' over the water fountains. You can just take those down. "You have black women working in every department [of your stores], but they are in maids' uniforms, even though they know more about things than any of the salesclerks. Let them wear the dresses they are selling. Call them clerks and let them work on commission, too. "The hardest thing to get the business owners to agree to was desegregating the lunch counter. So we started out with a 30-day test, sending one couple a day to each lunch counter. We invited the owners to come down and observe them, but to also serve them. "There was never an incident. After that it was open; it was integrated." The irony was, he said, that segregation was still the law in Birmingham, but when 100 businessmen decided it had to change, it did, regardless of what the statutes said. Back then, in the 1960s, Mr. Young just assumed America's was the greatest economy on the face of the Earth, its only fault being that black people were not sufficiently a part of it. Somewhat later in his diverse career, though, Mr. Young began to have a darker view of economics, to see the other side of its power. The turning point came in 1971, during the Nixon Presidency, when America unilaterally abandoned the Bretton Woods system of international monetary management, which, he said, had produced "unprecedented growth and development worldwide." Mr. Young entered Congress the following year, and was on the Banking Committee in 1973 when the then chairman of the Federal Reserve, Arthur Burns, came in to testify about the advantages of the abandonment of Bretton Woods' tight currency controls. He recalled asking Burns whether this would lead to people playing politics with the dollar, causing big fluctuations in currency values. "Arthur Burns puffed on his pipe and said: 'Young man, you'll soon learn the dollar doesn't need you to defend it.' " The truth is, he said, he did not fully understand what was happening, nor did his fellow Congressmen. They were intimidated by the economists. And there began his growing disenchantment with the practice of economics. It was a disenchantment which grew over the decades, as deregulation became the vogue. Notably under Reagan, but also under Clinton, and finally disastrously, under the Bush administration. "From the very beginning in America, there was always a sense of morality and social responsibility," he said. "But this is an economics that has no sense of social responsibility and no sense of morality." Where, he asked, is the moral sense of a banker who would insist on his $100 million bonus, in the current straitened times? Mr. Young is exercised about the growing disparities not only in this country. The former U.S.
ambassador to the United Nations sees it happening all over the world. “In China there’s a big gap between the haves and have-nots. In India it’s even bigger. In South Africa there’s the most volatile gap.” Unless something is done about the problem of massive unearned wealth — accrued either through nonproductive speculation or inheritance, he foresees the possibility of class wars. “I don’t trust people who have inherited wealth,” he said. “If you have never had to work for a living you don’t appreciate those who do. This guy who took the $100 million bonus has never worked for a living. He just thinks that what he did was work.” Mr. Young waxed Biblical. “If there is any judgment — and I think there is — the question Jesus says he’s going to ask is, ‘Did you feed the hungry, did you clothe the naked, did you heal the sick, did you set at liberty those who were oppressed?’ “Well I don’t think you do that through missionary baskets. I think you do that through the global economic system.” As irony would have it, just at the time that the so-called immoral economic system showed its fundamental weakness, Barack Obama was elected. And now the problems sowed over 30-plus years become his. “It’s no longer the Nixon/Reagan Bush economic construct,” said Mr. Young. “It’s Obama’s bailout”. He is not much comforted by some of the advisors Mr. Obama has gathered around himself. Mr. Young fears several of them are too much of the current system. “There’s some really bright folk around Obama, but I think he’s the one that has heart, conscience and soul enough to do this,” he said. Mr. Young suggested Mr. Obama was in “almost the same situation I was in when I became mayor of Atlanta in 1981. “It was the crime capital of the world, the economy was crashed. The business community fought against me.” But he got that business community together, a little as he had done nearly 20 years earlier, and pointed out their shared economic interest. “I told them, if I’m a miserable failure as mayor it won’t hurt me particularly. I’ve written a book, I can write another one and make a little money on outside speeches. If I’m not a good mayor, I’ll survive. But your businesses won’t. “You have a vested interest in making me a good mayor. “And they bought that.” And ultimately, he was sure, people would buy Obama’s economic, health and other reforms. Because they will come to recognize their interest. Early in Mr. Obama’s candidacy for president, Mr. Young said, he sent a copy of The Defining Moment, Jonathan Alter’s chronicle of the first 100 days of Franklin Roosevelt. What’s remarkable about it, he said, was the initial resistance to FDR’s reform agenda, all the things for which he was attacked, “we now know were absolutely right and necessary.” Mr. Young will give a lecture, or as he prefers to put it, “lead a discussion” on the economy today at 6 p.m., at the Annual Taste of the Road Scholar event at Shearer Cottage in Oak Bluffs.
Latest Nick Schmerr Stories

Scientists interested in the construction of the rock layers immediately under the Earth's crust, the lithosphere and asthenosphere, have new tools to help analyze these layers and further understand plate tectonics. The researchers have been using seismic waves to study the lithosphere-asthenosphere boundary, or LAB. This boundary is where the hot, convecting mantle asthenosphere and the overlying cold and rigid lithosphere meet. It has been found that seismic waves move faster...
Researchers have found high levels of toxic lead in the popular spice turmeric.

What is lead chromate, and why do people use it to color turmeric?

Lead chromate, a chemical compound comprising lead and chromium, is a yellow pigment that can enhance the brightness of a substance. It is also poisonous, acting as a neurotoxin when humans ingest or inhale it. Experts consider lead unsafe in any quantity as it leads to cognitive defects. Usually, manufacturers use lead chromate to give yellow and orange oils and paints their color. However, previous research has identified turmeric as a source of lead exposure across many turmeric-producing districts in Bangladesh. Turmeric is an essential spice that many people consume daily in South Asia. It also has some medicinal uses. It may potentially treat inflammation and have healing effects across many conditions, including cancer. Adulteration of spices is not unusual, and the addition of toxic agents to spices is common. However, the addition of lead chromate to turmeric threatens public health in Bangladesh. The researchers behind the present study wanted to assess the effect of this practice and its regulation. The team, from Stanford Woods Institute for the Environment, California, designed the study to assess the extent of turmeric adulteration with lead chromate, a substance that authorities have banned as a food additive. In the first instance, the researchers found that the adulteration of turmeric with lead chromate was an issue stretching back to the 1980s, when people first used it to enhance the color of turmeric that flooding had left dull. Some members of the team had previously investigated the various potential sources of blood lead level contamination in people in Bangladesh. They did this by looking at the different isotopes of lead, which allowed them to create a chemical signature known as a fingerprint for lead-adulterated turmeric. Their findings, available in Environmental Science & Technology, showed that this was the most likely culprit for the origin of lead in people's blood, making the study the first to link lead in turmeric directly to lead levels in the blood.

What were the results of the study?

In the current study, which appears in the journal Environmental Research, the researchers first identified and visited the nine major turmeric-producing districts in Bangladesh (as well as two minimal ones) to assess the practice of adulterating turmeric across the supply chain. They conducted interviews with 152 workers across the production sites. Following this, they collected samples of yellow pigments and turmeric from the most frequented wholesale markets, and they collected samples of oils and dust from turmeric polishing mills to assess evidence of adulteration. The researchers used mass spectrometry and X-ray fluorescence to identify the lead and chromium concentrations in all 524 of the samples that they collected. Turmeric lead and chromium concentrations were highest in the Dhaka and Munshiganj regions (minimal turmeric producers), where the team detected a maximum concentration of 1,152 micrograms/gram (µg/g), compared with 690 µg/g in the nine major turmeric-producing districts. They found evidence of lead chromate adulteration at seven out of nine of the major turmeric-producing districts and noted that 2–10% of yellow pigments at the polishing mills contained lead chromate. Soil samples from these mills also had a maximum concentration of 4,257 µg/g of lead. The interviews confirmed how the practice of adding lead chromate to turmeric started over 30 years ago and continues today. The consumers' desire to have bright and colorful yellow curries seems to be the primary driver of this practice. Farmers stated that turmeric merchants are able to sell poor quality roots and increase profit margins by requesting the adulteration of that poor quality turmeric with yellow pigment.

How to limit contamination

This practice is extremely harmful to health. There was no direct evidence of contaminated turmeric beyond Bangladesh, and the researchers believe that food safety checks by importing countries encourage large scale spice processors in Bangladesh to limit the amount of lead that they add to turmeric for export. However, the researchers say that "the current system of periodic food safety checks may catch only a fraction of the adulterated turmeric being traded worldwide." Lead author Jenna Forsyth adds, "People are unknowingly consuming something that could cause major health issues. We know adulterated turmeric is a source of lead exposure, and we have to do something about it." Going forward, it seems that there is a need to improve the education surrounding toxic pigments and to move consumer behavior away from eating contaminated foods. In addition, the research team plans to develop business opportunities that reduce lead exposure.
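The isotope-fingerprinting idea mentioned above lends itself to a small illustration. The sketch below is not the authors' actual method; it simply shows the general logic of matching a sample's lead isotope ratios to the nearest candidate source. All ratio values and source names are invented for illustration.

```python
# Toy illustration of lead isotope "fingerprint" matching; the ratio values
# and source names below are invented, not measured data from the study.
import math

# Hypothetical (206Pb/207Pb, 208Pb/207Pb) fingerprints for candidate sources.
SOURCES = {
    "turmeric_pigment": (1.12, 2.41),
    "paint_dust":       (1.20, 2.47),
    "water_pipes":      (1.17, 2.44),
}

def most_likely_source(blood_ratios):
    """Return the candidate source whose fingerprint is closest to the sample."""
    return min(SOURCES, key=lambda s: math.dist(SOURCES[s], blood_ratios))

blood_sample = (1.13, 2.42)               # invented measurement
print(most_likely_source(blood_sample))   # -> "turmeric_pigment"
```

Real source-apportionment work uses more isotope ratios and statistical mixing models rather than a simple nearest-neighbor match, but the underlying comparison is the same.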
The golden damselfish belongs to the family Pomacentridae. This family comprises 28 genera and 360 species and includes all damselfish and clownfishes. Established populations of damsels extend from the western Pacific to the eastern Indian Ocean and the Great Barrier Reef. The golden damsel is a coral reef inhabitant occupying depths from 30 to 150 feet. Golden damsels have a rounded body, a spiked dorsal fin and the forked tail characteristic of the grouping. The bright yellow color palette is accented with electric blue vertical pinstriping on the upper and lower body. Coloration has a tendency to fade as the fish matures. The golden damsel is marketed under various aliases including yellow damselfish, lemon damsel, lemonpeel damsel and golden damsel.

This is a hardy and somewhat aggressive species. Its ability to contend with a multitude of environmental parameters makes it an excellent choice for the inexperienced aquarist. The fish's stamina and its low price tag often lead to it being used as a biological stabilizer in the cycling of new aquariums. If it flourishes in the newly established aquatic environment, then there is little risk in adding more expensive species of lesser constitution to the tank. In a marine reef it will not disrupt the anchored inhabitants or devour ornamental crustaceans. In nature it makes its home amid gorgonian fans and black coral trees; these would make the perfect surroundings for this fish in a reef tank. This species reaches up to 5 inches in length as an adult. Take its temperament into account when choosing its tank mates. Although it is very even tempered compared to many damselfish species, it should not be housed with smaller, more timid species. Introducing this fish to a pre-established population, or in unison with the other species you wish to keep in your aquarium, will reduce aggressive behavior. A minimum tank size of 30 gallons is recommended.

The golden damsel is an omnivore. In the wild its diet consists primarily of zooplankton. These fish take readily to aquarium life. They are not picky eaters, and instances of problems getting them to start eating in their new surroundings are rare. They will eat common flake food for marine omnivores. But as with any marine species, a varied diet will help ensure general health and maintain coloring. Vitamin-enriched brine shrimp is a good supplement. They should also be provided with an abundance of algae to graze on.

Golden damsels are sequential hermaphrodites: they are all born as males. If a group is introduced to an aquarium together, the largest, most dominant of the fish will experience a morphological hormonal surge until its gender changes to that of a female. This is a trait common to all hermaphroditic marine species. Nature will always ensure that both genders are present in a population to secure the propagation of the species. These damsels are known to breed in captivity. The male damsel will instinctively guard freshly fertilized eggs until they hatch.
Born at Beauvais, he was a descendant of the counts of Namur, and in his youth served as an officer in a regiment of cavalry. Finding it necessary to quit the army in order to take charge of his younger brothers who had been left orphans, he was appointed a farmer-general by Louis XV. In 1777 he visited England, Germany and Holland; and in the following year he travelled through Italy, with the view of exploring thoroughly the remains of ancient art. He afterwards settled at Rome, and devoted himself to preparing the results of his researches for publication. He died in 1814, leaving the work, which was being issued in parts, unfinished; but it was carried on by M. Gence, and published complete under the title L'Histoire de l'Art par les monuments, depuis sa décadence au quatrième siècle jusqu'à son renouvellement au seizième (6 vols. fol. with 325 plates, Paris, 1823). In the year of his death D'Agincourt published in Paris a Recueil de fragments de sculpture antique, en terre cuite.
3D graphene: Solar cells' new platinum?
One of the most promising types of solar cells has a few drawbacks. A scientist at Michigan Technological University may have overcome one of them.

Harnessing the potential of quantum tunneling: Transistors without semiconductors
(Phys.org) —For decades, electronic devices have been getting smaller, and smaller, and smaller. It's now possible—even routine—to place millions of transistors on a single silicon chip.

Scientists grow ultrahigh-purity carbon nanotubes

Small-molecule solar cells get 50% increase in efficiency with optical spacer

Researchers speed up transistors by embedding tunneling field-effect transistor

New flow battery could enable cheaper, more efficient energy storage
MIT researchers have engineered a new rechargeable flow battery that doesn't rely on expensive membranes to generate and store electricity. The device, they say, may one day enable cheaper, large-scale energy ...

Engineer brings new twist to sodium-ion battery technology with discovery of flexible molybdenum disulfide electrodes
(Phys.org) —A Kansas State University engineer has made a breakthrough in rechargeable battery applications.

Drinking water from the sea: Electrochemically mediated seawater desalination in microfluidic systems
(Phys.org) —A new method for the desalination of sea water has been reported by a team of American and German researchers in the journal Angewandte Chemie. In contrast to conventional methods, this techni ...

Champion nano-rust for producing solar hydrogen
EPFL and Technion researchers have figured out the "champion" nanostructures able to produce hydrogen in the most environmentally friendly and cheap manner, by simply using daylight.

Sodium-ion battery cathode has highest energy density to date

Helicopter takes to the skies with the power of thought (w/ Video)
A remote controlled helicopter has been flown through a series of hoops around a college gymnasium in Minnesota. It sounds like your everyday student project; however, there is one caveat… the helicopter ...

Scientists invent self-healing battery electrode
Researchers have made the first battery electrode that heals itself, opening a new and potentially commercially viable path for making the next generation of lithium ion batteries for electric cars, cell ...

Composite battery boost
(Phys.org) —New composite materials based on selenium (Se) sulfides that act as the positive electrode in a rechargeable lithium-ion (Li-ion) battery could boost the range of electric vehicles by up to ...

Microbial battery: Team uses 'wired microbes' to generate electricity from sewage
Engineers at Stanford University have devised a new way to generate electricity from sewage using naturally-occurring "wired microbes" as mini power plants, producing electricity as they digest plant and ...
Taqiyyah - Taqiyyah is a word which is used to describe justifiable deception on behalf of Islam, and once again, self-willed people have pressed this concept to their own purposes. First, as a general matter, taqiyyah only applies in a specific situation where a person is coerced to renounce Islam under the threat of violence. Under that exceptional circumstance, it is acceptable to renounce Islam with the mouth, while not actually doing so in the heart or the conscious mind. However, the notion that taqiyyah represents some broad license to commit "pious fraud" against non-Muslims is wholly incorrect. Just as the object of any truly Islamic jihad must be circumscribed to the revealed will of God in the Qur'an, likewise there can be no truly pious act of deception that conflicts with God's command.

Abrogation - Some people suggest that the verses sent down during the Meccan period and those sent down during the period of Medina are different, and that verses about peace have been annulled. Those claims are unfounded. All the verses of the Qur'an are valid, from beginning to end. It is disbelief to speak of the annulment of any of God's commandments. These are ideas some people have invented for themselves, and therefore have no validity whatsoever. No commandment in the Qur'an can cease to apply. It is not acceptable to annul a verse on the basis of fabricated hadiths or of historical information. It must be kept in mind that no hadith (saying of the Prophet Mohammad) can conflict with the Qur'an. If it does, it is not an authentic hadith.

Evidence only from the Qur'an - There is no need to look to another source when the verses of the Qur'an are so explicit. All kinds of stories appear in various historical sources, and every society has its own stories. There is the history of the Umayyad, the Abbasid, the Iranians etc., and they are all very different. We do not know what is objectively true in ancient history; therefore these accounts are not evidence against Islam. As for the hadith sources, the criterion of authenticity is coherence with the Qur'an. Most strife and conflict has been the result of misinterpretation and misinformation of history. That is also why the Islamic world is fragmented at the moment. The false fabrications of people today or in the past are of no concern to us; the person who applies things wrongly is in manifest error. If people have falsely made things up, their actions have nothing to do with religion itself. In addition, the Qur'an is a whole, and every verse expounds the others. So any verse from the Qur'an should be interpreted within the spirit of the Qur'an. If somebody picks one verse from the Qur'an and tries to implement it out of its context or without knowledge of the general spirit of the Qur'an, he might practice it falsely. Most of the time, even with an explicit statement, there are conditions or exceptions explained. God warns people: "Do you then believe in a part of the Book and disbelieve in another part?" (Qur'an, 2:85)

Protecting unbelievers - In the Qur'an, God says if any unbeliever asks you for protection, give them protection and escort them to a place where they are safe (Qur'an, 9:6). Thus, Muslims have the responsibility to protect even the unbelievers when they seek protection. This means a Muslim may have to give his life to protect the unbelievers, and this is a must in the Qur'an.
How can one claim that a Book which makes it a rule for Muslims to protect the unbelievers would make it a rule for them to kill everyone who does not believe? And there is no point in claiming otherwise, because unbelievers have the right to live as unbelievers; God says there is no compulsion in religion.

Social life with the People of the Book - According to the Qur'an, Christians and Jews are people Muslims can marry, live with, and eat with, as People of the Book. Under Islam, one person has a Christian wife and another a Jewish wife; one person worships at the synagogue, another at the church and yet another at the mosque. Meanwhile, they all live in peace. This provision alone is more than sufficient evidence that Muslims are bound to live together with Christians and Jews in a climate of peace and love. If a Muslim trusts and loves a woman enough to eat what she cooks, and enough to have her raise his children, why would he want to kill her? Which part of a true book would advise Muslims to kill their wives? Therefore, the entire idea that Muslims are authorized to kill Christians and Jews collapses into its own absurdity.
38. Apician Morsels; or, Tales of the Table, Kitchen, and Larder. By Dick Humelbergius Secundus. 8vo, London, 1834. 39. Cottage Economy and Cookery. 8vo, London, 1844.[Footnote: Reprinted from the Journal of the Agricultural Society, 1843, vol. iii, part I]. The staple food among the lower orders in Anglo-Saxon and the immediately succeeding times was doubtless bread, butter, and cheese, the aliment which goes so far even yet to support our rural population, with vegetables and fruit, and occasional allowances of salted bacon and pancakes, beef, or fish. The meat was usually boiled in a kettle suspended on a tripod [Footnote: The tripod is still employed in many parts of the country for a similar purpose] over a wood-fire, such as is used only now, in an improved shape, for fish and soup. The kettle which is mentioned, as we observe, in the tale of “Tom Thumb,” was the universal vessel for boiling purposes [Footnote: An inverted kettle was the earliest type of the diving-bell], and the bacon-house (or larder), so called from the preponderance of that sort of store over the rest, was the warehouse for the winter stock of provisions [Footnote: What is called in some places the keeping-room also accommodated flitches on the walls, and hams ranged along the beams overhead; and it served at the same time for a best parlour]. The fondness for condiments, especially garlic and pepper, among the higher orders, possibly served to render the coarser nourishment of the poor more savoury and flavorous. “It is interesting to remark,” says Mr. Wright [Footnote: “Domestic Manners and Sentiments,” 1862, p. 91], “that the articles just mentioned (bread, butter, and cheese) have preserved their Anglo-Saxon names to the present time, while all kinds of meat—beef, veal, mutton, pork, even bacon—have retained only the names given to them by the Normans; which seems to imply that flesh-meat was not in general use for food among the lower classes of society.” In Malory’s compilation on the adventures of King Arthur and his knights, contemporary with the “Book of St. Alban’s,” we are expressly informed in the sixth chapter, how the King made a great feast at Caerleon in Wales; but we are left in ignorance of its character. The chief importance of details in this case would have been the excessive probability that Malory would have described an entertainment consonant with the usage of his own day, although at no period of early history was there ever so large an assemblage of guests at one time as met, according to the fable, to do honour to Arthur.
Artificial intelligence (AI) has traditionally been deployed in the cloud, because AI algorithms crunch massive amounts of data and consume massive computing resources. But AI doesn't only live in the cloud. In many situations, AI-based data crunching and decisions need to be made locally, on devices that are close to the edge of the network. AI at the edge allows mission-critical and time-sensitive decisions to be made faster, more reliably and with greater security. The rush to push AI to the edge is being fueled by the rapid growth of smart devices at the edge of the network—smartphones, smart watches and sensors placed on machines and infrastructure. Earlier this month, Apple spent $200 million to acquire Xnor.ai, a Seattle-based AI startup focused on low-power machine learning software and hardware. Microsoft offers a comprehensive toolkit called Azure IoT Edge that allows AI workloads to be moved to the edge of the network. Will AI continue to move to the edge? What are the benefits and drawbacks of AI at the edge versus AI in the cloud? To understand what the future holds for AI at the edge, it is useful to look back at the history of computing and how the pendulum has swung from centralized intelligence to decentralized intelligence across four paradigms of computing.

Centralized vs. Decentralized

Since the earliest days of computing, one of the design challenges has always been where intelligence should live in a network. As I observed in an article in the Harvard Business Review in 2001, there has been an "intelligence migration" from centralized intelligence to decentralized intelligence—a cycle that's now repeating. The first era of computing was the mainframe, with intelligence concentrated in a massive central computer that had all the computational power. At the other end of the network were terminals that consisted essentially of a green screen and a keyboard with little intelligence of their own—hence they were called "dumb terminals." The second era of computing was the desktop or personal computer (PC), which turned the mainframe paradigm upside down. PCs contained all the intelligence for storage and computation locally and did not even need to be connected to a network. This decentralized intelligence ushered in the democratization of computing and led to the rise of Microsoft and Intel, with the vision of putting a PC in every home and on every desk. The third era of computing, called client-server computing, offered a compromise between the two extremes of intelligence. Large servers performed the heavy lifting at the back-end, and "front-end intelligence" was gathered and stored on networked client hardware and software. The fourth era of computing is the cloud computing paradigm, pioneered by companies like Amazon with its Amazon Web Services, Salesforce.com with its SaaS (Software as a Service) offerings, and Microsoft with its Azure cloud platform. The cloud provides massively scaled computational power and very cheap memory and storage. It only makes sense that AI applications would be housed in the cloud, since the computation power of AI algorithms has increased 300,000 times between 2012 and 2019—doubling every three-and-a-half months.

The Pendulum Swings Again

Cloud-based AI, however, has its issues. For one, cloud-based AI suffers from latency—the delay as data moves to the cloud for processing and the results are transmitted back over the network to a local device. In many situations, latency can have serious consequences. For instance, when a sensor in a chemical plant predicts an imminent explosion, the plant needs to be shut down immediately. A security camera at an airport or a factory must recognize intruders and react immediately. An autonomous vehicle cannot wait even for a tenth of a second to activate emergency braking when the AI algorithm predicts an imminent collision. In these situations, AI must be located at the edge, where decisions can be made faster without relying on network connectivity and without moving massive amounts of data back and forth over a network. The pendulum swings again, from centralization to decentralization of intelligence, just as we saw 40 years ago with the shift from mainframe computing to desktop computing. However, as we found out with PCs, life is not easy at the edge. There is a limit to the amount of computation power that can be put into a camera, sensor, or a smartphone. In addition, many of the devices at the edge of the network are not connected to a power source, which raises issues of battery life and heat dissipation. These challenges are being dealt with by companies such as Tesla, ARM, and Intel as they develop more efficient processors and leaner algorithms that don't use as much power. But there are still times when AI is better off in the cloud. When decisions require massive computational power and do not need to be made in real time, AI should stay in the cloud. For example, when AI is used to interpret an MRI scan or analyze geospatial data collected by a drone over a farm, we can harness the full power of the cloud even if we have to wait a few minutes or a few hours for the decision.

Training vs. Inference

One way to determine where AI should live is to understand the difference between training and inference in AI algorithms. When AI algorithms are built and trained, the process requires massive amounts of data and computational power. To teach an autonomous vehicle to recognize pedestrians or stop lights, you need to feed the algorithm millions of images. However, once the algorithm is trained, it can perform "inference" locally—looking at one object to determine if it is a pedestrian. In inference mode, the algorithm leverages its training to make less computation-intensive decisions at the edge of the network. AI in the cloud can work synergistically with AI at the edge. Consider an AI-powered vehicle like Tesla. AI at the edge powers countless decisions in real time such as braking, steering, and lane changes. At night, when the car is parked and connected to a Wi-Fi network, data is uploaded to the cloud to further train the algorithm. The smarter algorithm can then be downloaded to the vehicle over the cloud—a virtuous cycle that Tesla has repeated hundreds of times through cloud-based software updates.

Embracing the Wisdom of the "And"

There will be a need for AI in the cloud, just as there will be more reasons to put AI at the edge. It isn't an either/or answer, it's an "and." AI will be where it needs to be, just as intelligence will live where it needs to live. I see AI evolving into "ambient intelligence"—distributed, ubiquitous, and connected. In this vision of the future, intelligence at the edge will complement intelligence in the cloud, for better balance between the demands of centralized computing and localized decision making.
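The cloud-training, edge-inference split described above can be sketched concretely. The example below is illustrative only: it assumes a TensorFlow/Keras workflow with TensorFlow Lite as the edge runtime, and a made-up tiny model and dataset; it is not how Apple, Tesla, or Microsoft actually implement their systems.

```python
# Illustrative sketch: train a model with "cloud-scale" resources, then ship a
# compact version of it for on-device (edge) inference with TensorFlow Lite.
import numpy as np
import tensorflow as tf

# --- Cloud side: training (compute-heavy, not latency-sensitive) ---
x_train = np.random.rand(1000, 8).astype("float32")      # stand-in dataset
y_train = (x_train.sum(axis=1) > 4.0).astype("float32")  # stand-in labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# Convert and shrink the trained model for edge deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enables quantization
open("model.tflite", "wb").write(converter.convert())

# --- Edge side: inference (local, low-latency, no network round trip) ---
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

sensor_reading = np.random.rand(1, 8).astype("float32")  # stand-in sensor data
interpreter.set_tensor(inp["index"], sensor_reading)
interpreter.invoke()
print("edge decision:", float(interpreter.get_tensor(out["index"])))
```

In the virtuous cycle the article attributes to Tesla, data collected at the edge would periodically flow back to retrain the cloud model, and the newly converted artifact would be re-downloaded over the air.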
Fukushima-1 NPP Radiation 8 Times Above Safety Levels

Nuclear radiation at Japan's Fukushima-1 nuclear power plant has reached 8 times the acceptable government safety guidelines. According to reports from the operator of the plant, the Tokyo Electric Power Company (TEPCO), the levels of nuclear radiation around the Fukushima NPP have risen to 8 millisieverts per year, surpassing the government standard of 1 millisievert, reports ITAR-TASS. TEPCO officials told the press that the main reason behind the drastic increase in radiation was X-rays coming from storage tanks holding radioactive water that has been leaking from the Fukushima facility. The State Nuclear Regulation Authority held a meeting on Friday aimed at curbing the rising levels of radiation at the plant. According to the nuclear regulator, TEPCO must indicate specific deadlines for reducing radiation exposure in the area of the plant to under 1 millisievert. Another leak of water with high content of radioactive substances was registered at the Fukushima plant on December 22. The Japanese government has so far granted USD 473 M to contain the fallout from the stricken plant.

Millions at fatal risk as Fukushima radiation poisons Pacific
12 January 2014, Voice of Russia

The main risk will be to the people of Japan, and it'll be people who live along the coastline of Eastern Japan who will be greatly at risk. It's within one kilometer from the sea. Just in terms of cancer there'll probably be about 400,000 to 800,000 extra cancers in Japan in the next 50 years as the consequence of this. It will be absolutely measurable. The nuclear industry says it cannot be measured over the background rate, but it will be certainly measurable. We've already seen some effects in infant mortality and thyroid cancer in Japan. So this is just going to get worse. We are going to see a major effect on the general health of the Japanese population in Eastern Japan. It's going to be quite measurable. There's going to be a decrease in the birth rate and an increase in the death rate.
A Summary Catalogue of Microbial Drinking Water Tests for Low and Medium Resource Settings

Microbial drinking-water quality testing plays an essential role in measures to protect public health. However, such testing remains a significant challenge where resources are limited. With a wide variety of tests available, researchers and practitioners have expressed difficulties in selecting the most appropriate test(s) for a particular budget, application, and setting.

To assist the selection process, we identified the characteristics associated with low and medium resource settings and specified the basic information that is needed for different forms of water quality monitoring. We then searched for available faecal indicator bacteria tests and collated this information. In total, 44 tests were identified, 18 of which yield a presence/absence result and 26 of which enumerate bacterial concentration. The suitability of each test is assessed for use in the three settings. The cost per test was found to vary from $0.60 to $5.00 for a presence/absence format and from $0.50 to $7.50 for a quantitative format, though it is likely to be only a small component of the overall costs of testing.

This article presents the first comprehensive catalogue of the characteristics of available and emerging low-cost tests for faecal indicator bacteria. It will be of value to organizations responsible for monitoring national water quality, water service providers, researchers, and policy makers in selecting water quality tests appropriate for a given setting and application.

Originally published in the International Journal of Environmental Research and Public Health: Bain, Robert; Bartram, Jamie; Elliott, Mark; Matthews, Robert; McMahan, Lanakila; Tung, Rosalind; Chuang, Patty; and Gundry, Stephen, "A Summary Catalogue of Microbial Drinking Water Tests for Low and Medium Resource Settings" (2012). All Faculty. 22.

This work is licensed under a Creative Commons Attribution 3.0 License.
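To illustrate the kind of selection the catalogue is meant to support, here is a minimal sketch in Python of filtering test records by result format, budget, and setting. The record layout, the example tests, and their costs are hypothetical, not entries from the actual catalogue.

```python
# Minimal sketch of filtering a test catalogue by format, budget, and setting.
# The records below are hypothetical illustrations, not the paper's data.
from dataclasses import dataclass

@dataclass
class WaterTest:
    name: str
    result_format: str       # "presence/absence" or "enumeration"
    cost_usd: float          # cost per test
    suitable_settings: set   # e.g., {"low", "medium"} resource settings

CATALOGUE = [
    WaterTest("Test A", "presence/absence", 0.60, {"low", "medium"}),
    WaterTest("Test B", "enumeration", 7.50, {"medium"}),
    WaterTest("Test C", "enumeration", 0.50, {"low", "medium"}),
]

def select_tests(catalogue, result_format, max_cost, setting):
    """Return tests matching the desired format, budget, and setting."""
    return [
        t for t in catalogue
        if t.result_format == result_format
        and t.cost_usd <= max_cost
        and setting in t.suitable_settings
    ]

# Example: enumeration tests under $1.00 suitable for a low-resource setting.
print(select_tests(CATALOGUE, "enumeration", 1.00, "low"))
```

As the abstract notes, the per-test price is only a small component of overall testing costs, so a real selection tool would also need fields for equipment, consumables, and training requirements.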
Endometriosis symptoms afflict about 7 million American women, and endometriosis is one of the most painful conditions a woman will ever have to deal with. Endometriosis is defined as the abnormal growth of endometrial cells that become scattered in areas where they do not belong. Endometriosis islets can grow in the fallopian tubes, within the uterine musculature or on the outer surface of the uterus, on the ovaries, the pelvic organs, the colon, the bladder, the sides of the pelvic cavity, and even the lungs.

With the onset of the menstrual period, the islets increase in size, swell with blood, and bleed into the surrounding areas and tissues. The problem is that there is no place for the tissue and blood to go, and the result is inflammation and a great deal of pain. The occurrence of endometriosis symptoms is on the increase, and there is much debate about why.

Here are the most common endometriosis symptoms:

1. Pain – abdominal pain and cramping. These symptoms may be severe in a woman with mild endometriosis yet barely occur in a woman with widespread endometriosis. The pain and cramping can be debilitating.

2. Inflammation – during the early part of the menstrual cycle, the endometrial tissue becomes filled with blood. When menstruation occurs, this tissue also gives off blood, but the blood has nowhere to go. The accumulation causes inflammation in the abdominal and pelvic tissue that becomes very painful.

3. Painful sexual intercourse – endometrial tissue creates pressure in the lower pelvis or prevents the free movement of the pelvic organs.

4. PMS in the days before and during the menstrual period.

5. Rectal bleeding – painful bowel movements can also occur.

6. Chronic fatigue – pain, bleeding, and cramping can be exhausting, making it difficult or impossible for the woman to function normally.

7. Infertility and miscarriage – the more widespread the endometriosis, the more likely the woman is to have fertility and miscarriage problems.

Some women have endometriosis without having symptoms, while others have symptoms with little endometriosis. The good news is that something can usually be done about it without drugs or surgery, with a good chance of experiencing significant improvement.

While the causes of endometriosis symptoms are unknown, high estrogen levels appear to be a contributing factor. Endometriosis seems to be a disease of the industrialized countries. It often runs in families, and in many women there is a correlation with immune dysfunction. Emotional issues are often involved as well. Across these various factors, hormonal imbalance is a common theme.

Women diagnosed with the signs and symptoms of endometriosis are frequently encouraged to have a hysterectomy. There ARE conditions for which hysterectomy is advisable or medically necessary, especially if malignant cancer is involved. The presence of malignant ovarian, uterine, or cervical cancer, uncontrollable bleeding, severe endometriosis (adenomyosis), or complex hyperplasia would justify the procedure. Otherwise, remember that hysterectomy is a permanent surgical procedure with numerous undesirable side effects.

If you have endometriosis symptoms, learn more about the natural approach recommended by naturopathic physicians before resorting to drastic measures such as hysterectomy. Read all you can about hormone imbalance, the consequences of excess estrogen, and the role of natural progesterone in treating endometriosis symptoms and related women's health problems.
This summer has been challenging for tomatoes. The abundance of rain, the sometimes-scarce sunshine, and the cooler temperatures have not made for the best tomatoes. Good, hot, humid weather, with adequate rain and sunshine, makes for vine-ripened tomatoes that have a flavor all their own. For me, "juicy and nutritious" describes wonderful tomatoes to a "T."

Tomatoes are spheres of healthful eating. One medium-sized tomato, about 150 grams or about 5 ounces, provides three-quarters of the recommended dietary allowance of vitamin C and more than a quarter of the vitamin A, plus iron and niacin. They offer all that satisfying goodness at only 35 calories.

Now is the best time to eat and enjoy tomatoes, when they are locally grown and vine-ripened. Local tomatoes abound during the summer months, when many luscious varieties are in great demand. Summer tomatoes are picked close to ripeness and make relatively short trips to the market.

If you have picked or purchased tomatoes that are not fully ripened, place them in a cool place away from direct sunlight. Too much sunlight causes tomatoes to soften without properly ripening. Light-red tomatoes will ripen in three to five days if not refrigerated. Since most tomatoes are picked mature but not yet ripe, they will continue to ripen. Keep in mind that tomatoes produce their own ethylene, which stimulates the change in color.

When selecting tomatoes, choose smooth, firm, plump fruit with good color. Weight's a factor, too: make sure the tomato feels heavy for its size. Tomatoes love the kid-glove treatment, so handle them gently to prevent bruising.

Tomatoes can be broiled, baked, roasted, fried, stuffed, added to soups, sauces, stews, and gravies, or used with other vegetables. The tomato's most popular use is as is: eaten out of hand, sliced and seasoned, or cut into salads. Tomatoes are terrific any way, and they are a true convenience food with almost no waste!

Tomatoes are wonderful cooked, too. They have a delicate taste and texture and make an impressive and colorful appearance. Adding fresh tomatoes to recipes is a snap. When a recipe calls for a peeled tomato, place the tomato on a slotted spoon, dip it in simmering water for one minute, and remove it; the skin slides right off.

It is also a great time of year to make your own salsa, but please use an approved recipe and processing times. There are no shortcuts. Purdue Extension offers a free publication titled "Let's Preserve Tomatoes," which can be downloaded from Purdue's Education Store.

However you decide to prepare them, enjoy tomatoes now.
Why nuclear energy?

Climate change is not just an ecological crisis. It is a humanitarian crisis. Nuclear energy fights climate change by deeply decarbonizing our electricity. Countries around the world have permanently displaced fossil fuels by switching to nuclear energy.

Air pollution from fossil fuels shortens millions of lives per year. Children are most affected by avoidable lung and cardiovascular disease. Nuclear energy emits no air pollution, so nearby communities can enjoy clean air.

About 85% of the radiation we receive in our lives comes from natural sources. The other 15% comes almost entirely from medical imaging and therapies that save lives. In fact, some nuclear reactors produce radionuclides, like Cobalt-60, used for medicine, medical instrument sterilization, and food safety.
How Can Technology Help in the Classroom?

Teachers who strive to improve their classroom setting often wonder about adding technology and how it might help their students. While the debate about whether technology helps or harms students continues, parents and teachers should understand the potential benefits of using technology in the classroom.

Improving Technical Skills: In a modern world that constantly produces new technological advances, the skills that come with technology are vital to future success. Children need to learn skills like typing, research, and communication via technological devices early. By learning the basic skills in school while they are young, students improve their ability to keep up in this ever-changing world.

Boosting Motivation: New technological devices are ideal when it comes to motivating students. Books, paper, and pen are often boring and make it challenging to motivate students. Bringing in a new gadget with e-books or interesting learning tools helps draw students in and motivates them to complete tasks, because they also get to try out the new device. By motivating students to learn the technology, teachers are also helping them learn vital skills like reading, arithmetic, and science.

Helping Special Needs: Technology used in the classroom can also help students who have special needs keep up with their peers. For example, a student who has trouble hearing can use a tablet with a speech-to-text feature that records the lecture as the teacher speaks and displays it in written form, helping him or her keep up with the activities in class. Technology is useful in a wide range of applications for students with special needs of any type; depending on the particular disability, different devices can be applied in different ways.

Building Communication Skills: Students who are striving to learn a new technological device often end up working together, improving their communication skills through tutoring, discussion, and simple inquisitiveness. As students discuss and try new ideas while learning the new technology or software, they improve their ability to work out problems without the help of adults and become better communicators. This ability to work together to solve problems carries forward into adulthood, when students will need such skills to succeed in their future careers.

Technology is a useful tool that teachers can add to the classroom setting. It has a wide range of potential benefits that can improve student learning, increase motivation, and build better life skills. As teachers incorporate more devices into the classroom, students will benefit from the improved learning environment and ultimately see greater success that increases their self-confidence.
Strategies Implemented Nationally. Read how districts are focusing on outreach, improving school climate, and addressing students' life challenges to cut chronic absenteeism, reflecting increased awareness of its adverse effect on learning and of ESSA reporting requirements. The Regional Educational Laboratory West reports on how it works with partners to develop strategies for building a "culture of attendance," including using data to better understand and identify ways to reduce chronic absenteeism and its negative consequences. Resources are also provided.

Bullying May Alter Brain and Increase Mental Health Risks. Researchers report that teens who are regularly bullied by their peers may be left with shrinkage in key parts of their brain, increasing their risk for mental illness. Early peer victimization interventions could prevent such pathological changes.

Resources. Campus Safety has published important bullying statistics and what you can do to address bullying both in school and at home.

School Climate Linked to Suspensions. When teachers and administrators work to create a more positive school climate, including clear rules and positive teacher-student relationships, student suspensions can drop by as much as 10%, according to new research.

Dress Code Policy. Students reacted positively to a trial easing of dress code rules in Alameda, CA, which now largely leaves it up to students and their families to determine appropriate attire for school.

Steps to Positive Discipline. Edutopia lists steps to take for proactive, supportive, responsive, and restorative discipline that creates an environment in which problem behavior is less likely to occur.

California Suspension Rates Higher in Rural Schools. While California has made substantial progress in reducing school suspensions, it faces a challenge in often overlooked rural regions of the state, where student suspension rates are significantly higher than those of urban areas.

Education Week has released its second annual "Big Ideas" report. One of the ideas, "The Kids Are Right: School Is Boring," emphasizes the importance of connecting "with the rapidly changing world beyond the school walls to solve problems, explore ideas, rally for a cause, or learn a new technical skill."

Schools often fall short in helping students in grief. Read a summary of helping strategies and resources. Strategies include recognition by trusted adults of their loss, a genuine expression of sympathy, and an offer of assistance.

Review of MTSS Approach. The winter issue of Addressing Barriers to Learning includes an analysis of the strengths and weaknesses of the Multi-Tiered System of Supports (MTSS) framework, as well as guidance and resources on school mental health.

School Mental Health Staffing Report. A new federal report describes the level of school mental health staffing by student body racial and ethnic composition, using data from the 2015-16 National Teacher and Principal Survey. See additional Mental Health content in California Corner.

A new NCES report focuses on barriers to parent-school involvement for early elementary students, using data from the Early Childhood Longitudinal Study. The four most commonly reported barriers were getting time off work, inconvenient meeting times, no child care, and not hearing about things parents might want to be involved in.

Virtual Healthy School Resource. The U.S. CDC Virtual Healthy School is an online, interactive school that provides innovative learning experiences to help make schools healthier, including nutrition, physical education and activity, and the management of chronic health conditions (e.g., asthma, diabetes) in schools.

As student health needs far outpace resources, California districts are increasingly turning to outside partnerships to provide services that they see as crucial to a student's success.

School Nutrition Programs Reduce Obesity. A Yale School of Public Health study reveals school policies and programs promoting healthy eating can limit obesity. Policies include making sure all school-based meals met federal guidelines, providing nutrition newsletters to students and their families, and school-wide campaigns to encourage drinking water.

Role of School Police and Social Workers. In this video, social workers discuss how issues like hunger and mental health affect school security, and how social workers and campus police can work together to improve student safety.

Lockdowns and Mental Health. A Washington Post analysis found that more than 4 million children endured school lockdowns last school year. While most kids won't suffer long-term consequences, experts who specialize in childhood trauma suspect that a meaningful percentage will.

School Climate (General)

Review of Trends. Education Week reviews trends and policy changes that may affect school climate in 2019, noting students' ability to engage and succeed in the classroom is influenced by how safe, supported, and connected they feel at school. Four issues highlighted are chronic absenteeism, teacher stress and supports, discipline policies, and safety (school police). A review of educational research trends highlights meeting the needs of students, staff, and parents, and fostering positive, engaging school climates.

School Climate Linked to Effective Early Learning. A new study says programs with strong organizational structures hold the key to effective early-childhood education, and lists exceptional administrators and collaborative teachers as the two most important components of those structures.

The California Department of Education has released "Social Emotional Learning in California: A Guide to Resources," developed by a multi-agency team that included national experts in the field.

Aspen National Commission Report. The final report of the Aspen Institute's National Commission on Social, Emotional, and Academic Development, From a Nation at Risk to a Nation at Hope, calls for an acceleration of efforts to ensure that all U.S. students have access to quality SEL and provides recommendations to advance these efforts.

Read how some districts are creating positive environments where employees always feel valued, respected, and connected, including new hiring practices and customer service training.

Addressing LGBTQ Needs. Edutopia provides guidance and resources for establishing a Gay-Straight Alliance in middle school as a means to prevent bullying based on students' perceived sexuality or identities.

LGBTQ Youth Research. An analysis of CDC YRBS data reveals that transgender youths were more likely to report experiencing violence victimization, substance use, suicide risk, and sexual risk compared with their peers whose gender identity aligns with their sex. These findings indicate a need for intervention efforts to improve health outcomes among transgender youths.

California Student Mental Health Report Card. Data from the 2015-17 Biennial Statewide California Healthy Kids Survey of secondary students show that respondents have significant unmet needs that could benefit from intervention and supports. The California Student Mental Health Report Card shows how these unmet needs can be addressed using a Multi-Tiered System of Supports.

Local Mental Health Report Card Template. Project Cal-Well Mental Health downloads include a report template for LEAs to highlight their own data on student mental health.

Student Mental Health Guide. To improve mental health awareness and the mental wellness of students, check out the Project Cal-Well model, Three Component Model to Support Students' Mental Health: A Guide for California Schools.

Rural School Resources. The California Rural Ed Network has launched a new online resource bank to provide rural school-focused, research-based information and professional development articles. The Network is also exploring ways to elevate student voices from rural schools and to bring attention to "inequalities that disproportionately affect rural education systems."

Retired State Board President Michael Kirst expressed worry that a failure to sufficiently fund training for teachers and principals in the new academic standards, school climate, and other supports for students could undermine expectations for achievement.

CalSCHLS Helpline: 888.841.7536