text
stringlengths 263
344k
| id
stringlengths 47
47
| dump
stringclasses 23
values | url
stringlengths 16
862
| file_path
stringlengths 125
155
| language
stringclasses 1
value | language_score
float64 0.65
1
| token_count
int64 57
81.9k
| score
float64 2.52
4.78
| int_score
int64 3
5
|
---|---|---|---|---|---|---|---|---|---|
PrefaceMany who have not learnt Reason, nevertheless live according to reason.
—Democritus of Abdera (5th-4th century BC)
More than a few readers will likely discern a tautology in our title phrase “assessment technologies.” Assessment instruments and methods are all technologies when technology is understood in its etymological sense as techne, craft or technique (Partridge, 1958, p. 698). When colleagues sheepishly confess, “Well, I do assessment, but I don’t use any technology,” I ask them to explain what it is they do, and inevitably a use of technology emerges. Even in its earliest Latinate sense assessment was technical. In Roman culture, an assessor provided the sovereign with technical advice regarding the value of things to be taxed (Steen, 1999). The assessor helped value the amount to be taxed, and he did so technically. This process of valuation is of interest today, for assessment instruments reflect what we value in learning, deem worthy of measurement. Hence it would be wise to make explicit, in each case, the technology of assessment, to better grasp how it reflects our values and the degree to which the technology determines the results of our studies—all for the sake of clarifying our values and refining our use of technology to serve learning. The confession by a colleague reveals the given or tacit presence of technology. Is there anything more slippery to put in perspective than technology, except perhaps one’s native language which by no means operates independently of everything technical?
Understanding through Research
Technologies of assessment are put in perspective in diverse ways by the authors in the present handbook. Many chapters assembled here demonstrate the curricular and institutional integration of instruments for effective assessment of thinking, writing, learning, and organizational structure in higher education. Other chapters evince a heightened methodological awareness with regard to the relative appropriateness and usability of technologies, what they lack, overlook, but also the opportunities they create. The professional research contained here represents diverse stakeholders whose broad constituency from composition studies to psychology to mathematics, liberal arts colleges to major research institutions, is heartening for those who promote the growth of assessment culture. Creativity receives attention in these pages, as do multi-modal practices and instruments familiar to a new generation of students. The result is to make instruments and methods of assessment more relevant, appear less intrusive and alien, more “ours” and less “theirs.” This research should help allay the pervasive concern, warranted or otherwise, among many faculty members that their daily practice has been commandeered by assessment imperatives in counter-intuitive ways. We know that teachers regularly practice assessment without calling it such. They assess intuitively as it were, putting truth before method. They also make adjustments in their teaching based on their assessments that experts call “closing the loop.” They call it something else—changing the lesson plan; updating the syllabus; refining learning objectives. The purpose of the research in this book, then, is to help educators make their own assessment efforts more articulate and systematic by demonstrating applied research in cases and initiatives. Such applied research is beneficial for readers and practitioners. The historian Droysen’s expression, forschend zu verstehen, “understanding through (re)search,” is apt here (Grondin, 1994, p. 81). It means that in research one relearns and becomes more methodologically explicit about what one already knows and does, creating a new resource. A case in point is when an utterly devoted instructor received help to formalize his grading criteria, carried around in his head for over a decade, at a rubric design workshop where the latest design research is learned and applied. He marveled at the rubric table he produced, which, now objectively sharable, he brought to class and distributed. His intent was not to officially inform his students how he graded and ranked them, but to ask their opinion how they might refine the criteria after a thorough discussion of learning objectives. The result was that the teacher and student collaboratively produced a rubric of which the students had co-ownership, and which raised their awareness of what they were in class to learn. This example confirms that the best teachers use assessment and evaluation to help students learn (Bain, 2004, p. 151).
Yet some educators will remain methodologically vague by choice, their pedagogy romantic, convinced that the intrinsic multiplicity (ambiguity) of the learning experience is immeasurable but true, and any attempt to reach an unequivocal assessment outcome is always already false. Romantics, it is important to remind ourselves (for some of us are them, or have been, or will be on occasion), are comfortable with inexplicit evidence—that which seems to be the case. They suspect that when an assessment technology like Accuplacer informs them of what is in fact the case, method has supplanted truth and technology has become sovereign.
Truth and Method
This sort of technical usurpation is allegorized in Kafka’s story, “A Visit to a Mine” (1976), narrated by miners whose work is interrupted by visiting engineers and inspectors—an accreditation team like WASC if you will, performing an audit of an academic program. “The management has issued some instructions or other about boring new galleries, and so the engineers arrived to make the initial survey.” The miners stare, transfixed by the procession of engineers and their machine replete with measuring instruments, an “extremely expensive apparatus.” The inspection group represents a different professional class. “One, a lively man with black hair, has eyes that take in everything. A second with a notebook makes jottings as he goes, looks around him, compares, notes down.” It is the ninth engineer, however, who embodies instrumental reason in his relation with a perambulator full of instruments.
Obviously he understands the apparatus thoroughly and seems to be really the man in charge of it. From time to time, without stopping the vehicle, he takes up a part of some instrument, peers through it, screws it open or shut, shakes and taps it, holds it to his ear and listens; and finally, while the man pushing the instruments usually stands still, he lays the small thing, which one can scarcely discern at a distance, back into its packing with great care. This engineer is a little domineering, but only in the service of his instruments. Ten paces ahead of the perambulator we have to give way to it at a wordless sign of his finger, even where there is no room for us to make way. (Kafka, 1976, p. 406)
This is not an “invitation to dialogue,” to invoke a phrase from the WASC Handbook of Accreditation (2001). The engineer is engrossed with his instruments and does not communicate with the miners except to soundlessly signal his obtrusive approach. To be sure, there can be a romance with measuring technology as much as with the immeasurable, but in the engineer’s case the relation exposes his domination. The intimate nature of his technical enthrallment, which separates him from others, is perceived through the respectfully objective gaze of the miners, imparting an air of absurdity to the otherwise sober behavior of the engineer. The almost fetish-like character of his relation to his tools is noteworthy, for they are his tools to measure their work space, tools with regard to which the miners can make no claim of ownership.
The reified behavior of the inspector is something before which many educators are most sensitive if not mistrustful. I have been using inspection, audit, and assessment interchangeably here to make a point about the relation between faculty members and assessment instruments and their representatives. I was the administrator when our Division of English and Applied Linguistics was subjected to an academic audit, so I have a concrete impression of the faculty response. The word “audit” is unpleasantly redolent of taxes; no one looks forward to an audit. We saw earlier that assessment is etymologically linked to tax collection. Some faculty members are wary not of mining machines or tax audits but assessment technologies such as Collegiate Learning Assessment (CLA) enthusiastically purchased by their universities. They have heard that Accuplacer and e-Write use an Artificial Intelligence program to produce their scores. For these educators the very idea of such systems and the uncritical bureaucracy that endorses them brings to mind the Skynet computer program in the “Terminator” films. This is writ large a narrative of the Verselbstandigung der Apparatur or autonomization of the apparatus (Adorno, 1998). The program takes over everything and destroys the world. As darkly comical as it sounds, this is not far from the sort of development feared by more than a few faculty members when confronted with assessment mandates and methods.
From Reactive to Active“We’ve got to determine what happens, rather than adjust to what they do. It’s definitely an important game.”
–University of Alabama football player
It would be pleasant to say that most educators have gotten involved with assessment because it is interesting and worthy of attention, sort of like a dissertation topic. And it is interesting and worthy of our attention, as the present book demonstrates. But it is rather the case that assessment sort of befalls many a teacher and administrator who in fact did not study it in graduate school. It comes down on many a head like climatic change, forcing a shift in lifestyle, or marches by in some official capacity that commands our attention, like the inspectors in Kafka’s “Visit to a Mine” who transfix the gazes of the miners with their somber procession. As a result, there is often little intellectual distance between educators and their assessment activity; they simply react to a mandate, grab a tool or template as one would an umbrella or pair of boots, mimic usage and do what needs to be done. They update a set of learning objectives without asking what the objective of learning is in the first place, a much tougher assignment. Pascal’s famous wager, Il faut parier, means, “You must do something; nonparticipation is not an option.” Well yes, but there is busy work and intellectual work, and most of us prefer the latter. The problem is that reactive behavior is not the most effective approach to a serious and complex state of affairs, or for that matter to an academic discipline. Nor does it bode well for the growth of a culture. It would be more active to take ownership of assessment and find in it an intellectually compelling set of problems and challenges for administrators, faculty members, and students to share. Several of our contributors argue this point, most notably Sean McKitrick, John Wittman, and Steve Culver. Motivation would thereby be more intrinsic, related to the matter at stake, and not mandated by the agenda of the newest political majority. Hitherto perfunctory tasks would point back to questions for reevaluating the conditions of learning and the possibilities of improvement in a time of scarce resources. Only in this way can we avoid the constricted cognition described by Collingwood (1978, p. 106): “He was trying to see only those elements in the situation which he already knew how to deal with, and was shutting his eyes to anything which might convince him that his ready-made rules were not an adequate guide to the conduct of life.” So it stands with assessment, whose technologies and assessments try to keep pace with yet inevitably lag behind the new epistemologies of student learning, webs of relations and ways of knowing so browsed, blogged, hypertexted, instant messaged, gamed, and hacked, as to be irreducible to any one identity, data set, or system. Sue Pieper and colleagues, and Deirdre Petttipiece and her group, are some of the authors in our book who look closely as these emerging technologies insofar as they impact or are integral to the assessment process.
The desire to turn from a reactive to active context for assessment is essentially the origin of the present book. In 2006, at the CCAS conference in Boston I heard many deans complain how difficult it is to get faculty members involved in assessment without rancor or resistance. One of the most heavily attended sessions was “Getting Faculty Excited about Assessment.” I realized something obvious, that for assessment to be done with enthusiasm by faculty members it would have to look like their other academic writing and have the same scholarly value, be integrated with their professional personae and rewarded on the scale of promotion. Hence it became my project to solicit substantial research papers in assessment for a book-length manuscript. Here would be my supporting evidence for saying to my colleagues, “Write up assessment projects not as internal memoranda, but with the flair, polish, and intellectual intensity you put into your scholarship.” This is apparently more difficult in practice than it sounds, for one rarely sees essays on assessment; it is rather the case that we have technical treatises and studies that do not, by any stretch, belong to the genre of the essay. A simple name change from “report” to “reflective essay,” recommended by WASC for purposes of documenting the institutional self-review process, does not automatically qualify the actual documentation being produced as an essay, at least as typically understood by its practitioners (Atkins, 2005). Nonetheless to even speak of an essay format for documenting educational assessment is a step in the right direction, and concretely confirms the sincerity of WASC’s stated intention “to maintain a posture of experimentation in the years ahead” (WASC, 2001).
Somewhat incredulous then, not unlike some of my colleagues, I wanted to see if assessment has depths. I knew that the place to look is in the research. Once I knew the answer, I could earnestly persuade colleagues to become engaged or involved with assessment, as is argued by several authors in the present book. Furthermore, few of my colleagues, however incredulous of assessment initiatives, would oppose the idea that as scholars we are still learning. Howard Gardner makes a good case for this in his description of what he calls “the disciplined mind,” which demonstrates “continuing mastery of one’s professional or employment roles, including the acquisition of additional disciplinary or interdisciplinary acumen” (Gardner, 2007). It would be beneficial then if assessment is viewed as an element of this new knowledge, which is very much conceived as an active learning process. In its active modality, alternative and authentic assessments become feasible (McMillan, 2001), as demonstrated in this handbook by Eubanks and Brunsden. This shift from passive to active awakens new kinds of thinking when one no longer resists but engages. One recalls here the protagonist in Kono Abe’s The Woman of the Dunes (1991) who at first seeks escape from a dwelling that is under constant threat of burial by sand dunes. He refuses to dig until he realizes that not digging sand is not a solution and the digging must be integrated harmoniously into his daily existence. At the point where his digging becomes active, his behavior evinces inventiveness; he builds a water collection device to make life more endurable.
Happily, the chapters that appear here, arising from serious research, will disabuse the incredulous of their suspicions. As Henry James said in defense of his novels, “there are depths” to assessment, and, one has to say, a richness of conceptualization. I wanted to see if there are depths, not only shallows—not that shallows can’t be interesting and yield their own depths. But how many pre-and post tests based on ten items learned over fifteen weeks must we administer before we realize we need to swim more deeply to assess deep understanding, reasoning, and academic skills? But to pause for reflection, to gauge the relevance and appropriateness of a method is not as convenient as applying it and reaping data, however trivial. Let us remember what such methods do for us: as is said of myth, “it is a system of the elimination of arbitrariness” (Blumenberg, 1985, p. 43). Assessment, like myth, alleviates anxiety, gives us a semblance of measure and order where there seems disorder, simplicity where the situation is complex or nuanced.
In a recent essay in which Les Perelman seeks to put Collegiate Learning Assessment in perspective, he says the following about the Voluntary System of Accountability that was the bureaucratic precondition for the CLA: “This enterprise, however, of trying to measure and then compare the common benefits of a college education among widely differing educational institutions through two-hour standardized tests reaffirms the truth of H. L. Mencken’s observation that ‘For every complex problem there is an answer that is clear, simple, and wrong.’ These assessments are, at best, useless, and, at worst, subversive to the complex set of abilities that should inform undergraduate education. (Perelman, 2008, p. 1). It seems to me that what Perelman is doing is demythologizing assessment by warning us about blind faith in system-wide applications. Something like CLA is not a magic formula that is universally applicable. Bob Broad, who contributed the Foreword to this book, has done equally critical work with rubrics (2003). Another one of our contributors, Daniel Chambliss, moves in the same direction when he speaks of the individual as a neglected unit of analysis and when he examines the bureaucratic language that invokes assessment as a cure-all (2007). Likewise Stephen Ehrmann (1998) when he argues for a “unique uses” perspective to bring proportion to a paradigm that favors a uniform (average) impact perspective. Not solely enthralled with deriving data associated with what is average, he is encouraged by the fact that diverse outcomes are possible in the same course because of the different life paths students follow; a course is seen less as a set of fixed objectives than as a flexible learning opportunity. This is the case with delayed outcomes whose truth is untimely. Delayed outcomes can appear abysmal in present evidence, only becoming positive eight years hence, as when a student takes an advanced philosophy course as a sophomore, barely passes it, flops in exams and papers, yet is instilled with an obscure thirst for speculative inquiry that later matures into a passion and vocation in philosophy proper. A “C” student becomes a renowned teacher of philosophy. We have all heard of such cases.
Let us rephrase and say that the scholars cited above do not so much demythologize as interrupt myth
in assessment, to borrow a phrase from Jean-Luc Nancy (1991), prevent it from becoming a reified practice by practicing critique as integral to its effective implementation. Other contributors to this handbook, such as Asao Inoue, Mya Poe, and John Wittman, are forging ahead early in their careers with critically aware research as they focus on the related issues of race, cultural diversity and technology in assessment.
The KISS of Death
All of the scholars mentioned here, and many handbook contributors as well, deserve our attention because they practice critique while fully involved with the matter of assessment. By so doing they “desimplify” assessment, to borrow a catchword of the heyday of deconstruction. Much of the resistance to assessment arises from the problem that Mencken, cited by Perelman, observes above. The oft repeated slogan “KISS” (Keep it Simple Stupid) has done more harm than good, the kiss of death, further convincing already skeptical colleagues that “assessment culture” is less a watchword than an oxymoron. Many educators suspect assessment to be intellectually vacuous not only because of the KISS slogan with which it is associated, but because they often hear that it is focused on outcomes. Those who have privileged process over product are understandably perturbed. But it is not assessment per se that is profound or not, since it can take many forms and styles, but rather certain outcomes--say, a simple pre-and post test assessment project. These simple outcomes don’t condemn assessment as an activity; they merely challenge practitioners to invent new and more encompassing assessment instruments, such as the critical thinking and/or writing rubrics described in detail by Hedva Lewittes and Teresa Flateby, to name just two of our contributors.
The chapters have been organized in three sections. In all but a few chapters the research is applied, that is, methodologically worked out in concrete frameworks of data collection and analysis, either with actual data sets or literature searches associated with a concrete matter at stake. A focus on assessment design and instrumental qualification in an applied framework characterizes the chapters in Section One. Although many of the chapters here are case studies, most of those which explicitly declare themselves as such, and which were not primarily concerned with instrument design so much as implementation, comprise Section Two. It is always nonsensical to separate applied from theoretical chapters for many reasons, but those with more overarching or ostensibly critical viewpoints, as well as those focused on workplace and assessment culture issues, were placed in Section 3.
Section IAssessment Technologies and Instruments
Chapter I describes a model for evaluating complex organizations or systems. The design assessment model proposed is a response to current notions of assessment. Purdue University’s strategic planning process provides a context for describing how design assessment takes place in a higher education setting.
Chapter II describes how critical thinking is assessed using two critical thinking learning outcomes that were required for the State University of New York’s General Education program. As part of this process faculty developed a rubric that delineated the criteria for rating critical thinking on a four-point scale.
Chapter III reviews literature over the past ten years regarding technology tools that are being used in higher education to assess student learning. Three commonly used technology tools are examined: electronic portfolios, course management systems, and audience response systems.
Chapter IV explains the Consensual Assessment Technique, discusses how it has been used in research on creativity, and explores ways it might be employed in assessment in higher education. Unlike other measures of creativity, such as divergent-thinking tests, the Consensual Assessment Technique is not based on any particular theory of creativity, which means that its validity (which has been well established empirically) is not dependent upon the validity of any particular theory of creativity.
Chapter V discusses creativity assessment as a means for evaluating skills required in higher education. Creativity is assessed in the context of the creative person, process, product and press or environment. A historical view of creativity assessment is addressed with a substantive approach to understanding the construct of creativity, its measurement and evaluation, and the broader implications for use in higher education settings.
Chapter VI conceptualizes writing assessment as a technology, accounting for the ways in which assessment dialectically constructs and is constructed by its historical environment. Seeing writing assessment as a technology provides a full account of assessment as an environment of conflict and social reproduction, but most importantly, it accounts for racial formations existing around it and because of it.
Chapter VII explores ways qualitative and quantitative methods are complimentary, as opposed to competing concepts. The chapter reviews the literature on qualitative, quantitative, and mixed method designs, then highlights successful examples of mixed-method assessment at a mid-sized, private university, presented in general frameworks which can be used on any campus.
Chapter VIII describes the development of the Cognitive Level and Quality of Writing Assessment online system. Beginning with needs identified in a learning community program, the system evolved from a classroom analytic writing and thinking assessment rubric to an online system for classroom assessment and instructional purposes.
Section IIAssessment Applications and Initiatives
Chapter IX reports on the process used to develop an outcomes assessment initiative for the Multimedia Writing and Technical Communication Program at Arizona State University. Discussed herein are the development of outcomes, the mapping of outcomes to the curriculum, the use of electronic portfolios to assess student writing using Phase 2 scoring procedures, and how results from the first three semesters of implementation are being used to evaluate and improve the program’s curriculum.
Chapter X details the Composition program self-assessment conducted at the University of Louisville in anticipation of a review conducted by the Southern Association of Colleges and Schools (SACS) in 2006. The chapter includes a comprehensive analysis of internal assessment rationale, theoretical foundations, methodologies, and results.
Chapter XI uses a case study to report assessment of three instructional delivery formats: (a) online, (b) distance, satellite campuses, and (c) traditional on-ground format. Student competencies on learning outcomes in a basic interpersonal communication college course were analyzed in a core assessment project (a course assignment portfolio) using a department-approved assessment rubric. This assessment effort provides an example of how faculty can employ assessment as part of a continuous improvement cycle.
Chapter XII presents a case study of a college classroom strategy that enables assessment and some remediation of student problem solving skills in mathematics. It reports the outcomes of implementing this technique and its associated processes in several lower-level mathematics courses in the calculus sequence at Penn State- Altoona.
Chapter XIII describes Coker College’s subjective performance assessment program to rate student thinking and communication skills. It uses a discussion of the epistemology of assessment to motivate an emphasis on direct observation by experts as the basis for “authentic” assessment for complex learning outcomes. The purpose of this chapter is to provide a philosophical framework and practical methods that can help institutions assess liberal arts learning outcomes.
Chapter XIV describes an assessment project conducted during the 2006-07 and 2007-08 academic years at the University of Guam in Elementary Japanese I and II. This chapter presents the results of this assessment and a reflection on the assessment process itself, including its rationale, methodology and consequences in terms of on-going and future assessment in the Japanese Studies program and Foreign Language instruction.
Chapter XV narrates an outcomes assessment study completed in a basic composition course at a small urban open admissions community college. The course was a pilot course designed in response to marginally remedial performance on a standardized writing instrument and solidly exempted performance on the standardized reading instrument. It will be shown how data was used to guide decision making about curricular change on our campus.
Chapter XVI reports research on the impact of different forms of peer observation and feedback on preservice teachers’ skills in analyzing classroom teaching and their perceptions of their experience with peer assessment. According to this study, peer observation and feedback is beneficial to preservice teachers’ learning. However, to maximize its effectiveness, a culture of peer assessment should be established in teacher education programs.
Chapter XVII offers a detailed description of an innovative senior capstone program developed by the Department of Communication Studies at Samford University. The program incorporates student-driven communication workshops and electronic portfolios, and uses qualitative and quantitative methods to assess Transformational Learning Abilities or TLAs. The program is shown to serves as a catalyst for student engagement, faculty development, and departmental transformation.
Chapter XVIII argues that the trend favoring assessment initiatives of a system-wide scope such as program review and collegiate learning tend to overlook the specific, highly concrete learning experiences of individual students in the liberal arts. These individual cases offer a rich source of data. The insights that can be derived from a rigorous analysis of such individual experience can tell educators much about learning outcomes, teaching quality, and curricular effectiveness.
Chapter XIX introduces a variety of multi-modal writing assessment instruments and methods, and discusses potential ways to determine the value of the student texts produced with them. New instruments associated with computerized scoring and distance technology make multi-modal writing assessments readily available, affordable, replicable, and transferable, but the value of the texts produced must be carefully ascertained.
Chapter XX introduces methods that can be used to engage faculty in the assessment process, working within a shared governance structure in institutions of higher education. It begins by identifying assumptions about including faculty in the assessment process, placing special emphasis on social capital and networking theories often used in communication and sociological research. The chapter t identifies six methods that might be used to engage faculty strategically in the assessment process, and then used three case studies to help explain these methods.
Chapter XXI suggests five elements to consider when developing an environment for assessment that successfully engages faculty: structure of assessment; qualifications of those in assessment; focus of assessment conversations; faculty development; and linkages with other areas within the institution.
Chapter XXII argues that as primary stakeholders in writing program assessment, students and instructors need to be included proactively in assessment research. However, little research has been done to determine how to accomplish this methodologically even though assessment affects pedagogical practices, student populations, and public opinion about what constitutes good writing. Instead of traditional quantitative, psychometric research, this chapter argues that assessment practitioners need to utilize local opportunities to discover native needs.
Chapter XXIII is guided by evidence that the study of racial-ethnic group differences on educational tests has yielded a substantial body of research internationally in the last decade. It maps current research about racial-ethnic minorities and educational testing in the English-language educational assessment literature. From an initial search yielding 420 articles, 78 were identified for further analysis of the frameworks for defining race-ethnicity and discourses found in the research literature.
Chapter XXIV describes a method developed to assess the outcome of a “cultural familiarity” general education goal. Challenges in defining, measuring, and providing summary information on variables of interest are discussed. The authors review the process of developing their own “oral examination” assessment method, explain their rationale for using this particular method, and suggest that locally-developed methods – this one and others – may have particular benefits that make them especially useful for program review and revision.
Concluding the book there is a glossary of assessment instruments, many of which are not fully invoked, studied, or referenced in the preceding chapters, yet which are relevant to studies and conversations about assessment.
Not the Final Word
Most of the contributors to the present work understand that assessment is not the final word in the total effort to raise the quality of learning in our institutions of higher education, no more than the final exam is the high point for even the best students in a well attended and dynamic seminar. For my own colleagues, and for many faculty members I have met elsewhere, knowledge of the goals and methods of assessment helps them survive the visits of the accrediting agencies. End of story. Many faculty members go back to their teaching and research until the next round of visits. Understandably, no few think it is a passing fad; they have seen many fads over the years. Only so many construe and value assessment itself as an area in which they will conduct and publish research. The latter are the ones who have contributed to the present volume, and we are grateful to them for doing so.
|
<urn:uuid:257148f7-f99a-4365-be74-e6715cc8688d>
|
CC-MAIN-2014-15
|
http://www.igi-global.com/book/handbook-research-assessment-technologies-methods/444
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.956935 | 6,425 | 2.875 | 3 |
Is it easier for teachers to tell their students to ‘shut up’?
When I started teaching, I once told two children to “Shut it!” on our way home from a trip to St Paul’s Cathedral. The coach was hot and stuffy and my patience was running on empty. It had been a tough day. Incidents: one child abused the Whispering Gallery acoustics with “Jordan is a tosser”, another had mocked some German tourists with a ‘Sieg Heil’ salute and a third stole a CD entitled ‘My Spirit Hath Rejoiced’.
I don’t actually remember saying “Shut it!” but I do remember that the front half of the coach fell silent for about 10 minutes whilst the other half were oblivious and continued mucking about. The colleague sitting next to me smiled and said:
“You’ve let yourself down, your class down and the whole school down! You’ll have parents knocking down your door tomorrow!” and he carried on chatting to the driver.
Another colleague said, “Well, that’s their emotional well-being and sense of self-worth down the drain!” Their concern was overwhelming. As it turned out, no parents came, social services were not involved and I learnt a valuable lesson which is why I’ve been telling kids to button it ever since.
Button your lip!
If students just won’t stop chatting, there are lots of strategies you can use to settle things down. One of the more radical and inappropriate classroom management techniques to use, is duct tape as one teacher in the US did!
Not unsurprisingly, this caused a real hoo-ha and things blew up a little! Unbelievably, this isn’t an isolated incident with another teacher doing the same thing with another pupil in a different school.
Wouldn’t it have just been easier to have told the students involved here to ‘shut up’?
Put a sock in it …
Telling those two children on the coach to put a sock in it was wrong because I’d lost my rag and I’d let the day’s events get the better of me. I’d given the words too much emotion and power. It is what we in the profession call ‘ballsing up’ but I put that down to being a naïve young teacher on his first school trip, wet behind the ears and a bit green. It was a classic school boy error and I learnt that the ‘red hot chilli pepper teacher’ had to give way to the ‘cool as a cucumber teacher’.
Some will say that saying “Shut up!” is unprofessional, showing a complete lack of respect for the child and perhaps, an emotional abuse issue. They might well be right. All children and young people are entitled to protection from abuse, suffering and neglect and I completely understand that they need to be kept safe. We work as trusted adults in loco parentis and protecting them from harm is number one. Our language and how we use it can have a profound effect on their well-being and their thinking, including our own.
Words are powerful and we need to use them wisely, but how we say words is everything. A venomous ‘end of your tether’ shriek of “Shut up !” can be a desperate and primeval cry for help that sends out the message you’re not coping, so children have every reason to feel vulnerable. But there is another way: humour.
Trap door shut!
Said with humour in a completely non-threatening and tongue in cheek way, “Shut it will you” can actually work wonders and you shouldn’t be afraid to use it, at least with older learners that is. The thing is, there has to be respect there in the first place before you do it. After getting to know pupils well and understanding what makes them tick, a well-timed “Just shut it” executed in the style of Lord Sugar with half a smile, raised eyebrows and a slight tilt of the head can get them to refocus and settle down. It’s a message of love and not of hate.
You will hear students saying it too. If someone is being a distraction and playing the class fool, you will often get their peers telling them to “Shut up!” and it works. A witty “Trap door shut” or “Press the mute button will you?” can help to shut-down petty low-level behaviour in a far more effective way than exploding like a stick of dynamite.
It is a surprisingly effective message that rarely gets misinterpreted.
“Belt up will you?”
“Yes sir, sorry sir!”
(… the class giggles. Everyone gets on, no one gets hurt and everyone can achieve …)
Behaviour consultants get twitchy and sanctimonious at this, but as a strategy it can work wonders, yet doesn’t work for everyone which is why behaviour management has to personalised and carefully differentiated … Other strategies are available.
I’m not a huge fan of blowing up in class but this has to happen on occasions. Many of my colleagues frequently turn red with spit flying out of their mouths and they are still respected by the pupils. I don’t warm to “my resilience bucket is empty” or “you are stopping yourself from reaching your full potential” but if it works for you then go for it but sometimes ‘shush‘ can be slush.
Quite apart from striking a balance, pulling no punches, being human, being inspirational, being a polymath, being in control, being a maverick, being ready, and being there, the priority is being careful with your choice of words. If you are going to tell someone to cut the cackle, pipe down or button it, at the end of the day, it’s about knowing your pupils and your pupils knowing you.
That’s what really makes a difference to behaviour in the classroom.
|
<urn:uuid:5c4405d0-a333-4db3-b36a-a5413050aa23>
|
CC-MAIN-2020-29
|
https://www.teachertoolkit.co.uk/2017/01/07/shut-up/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655919952.68/warc/CC-MAIN-20200711001811-20200711031811-00150.warc.gz
|
en
| 0.966986 | 1,303 | 2.703125 | 3 |
We were studing about Nidra (sleep) Vritti. Nidra (sleep) Vritti gives us knowledge about the nothingness, when we get deep sound sleep we must not waste our life to have access sleep, which will start giving us sorrows, sickness, tension and so many other diseases. In Mahabharta too Vyas Muni says that to eat, to sleep, to marry and to be afraid – these are four qualities equally in human beings as well as in animals. The human being is only separated from the animals’ category if they follow the religious path which animals cannot adopt. So in the absence of religious path human beings even having the body of the man or woman, they are equal to animals. In SAAM VEDA, it is said in mantra 1826 that he who awakes early in the morning about 4’0clock and does worship, meditation and studies of holy books like Vedas, shashtras then one day Almighty God becomes his friend, which has the deepest meaning. Students and family holders used to awake in the past Satyug, Dwapur,Treta Yug early in the morning to do auspicious deeds and to discharge their duties stated in the past and holy books. We see in the Mahabharta and Valmiki Ramayan that Rishis and Munis used to awake even before 3’0clock morning for taking bath and religious deeds Vyas Muni says that Lord Krishna used to awake in the middle of night and used to do worship and yoga practise. This tradition stated in the Vedas is now not being adhered to often, which has resulted in unhappiness and early death, sickness, etc. Actually Nidra (sleep)is the mental modification but soul never got sleep.
Now we come to Smriti Vritti.
Anubhut Vishya Asampromoshah Smritti (1/11)
(Anubhuut Vishya Asampromoshaha) when experienced matter are not forgotten then this Vritti is called Smriti (memory) Vriti.
Meaning = whatever we had seen, listened and experienced about the incidents, scenes etc., in the past and those are not being forgotten and often we remember the same in our life time again then this Vritti which recollect the past experiences is called Smritti (memory) Vritti. Here Vyas Muni says that whether the Chitta remember the past experienced matters. Actually in the memory both are appeared. First, past experienced knowledge, which is to be recollect and second the matters (objectives) ie; the desire towards the experienced knowledge and its subject matter both are appeared in the memory. The reason for its rememberance is mental affections ie; the desire towards the experienced merriment again and again. The experienced knowledge and attachment towards the materialistic world generates the form on the chitta about the matter and its knowledge both. Therefore whatever knowledge we get sense ie eye, ear, nose, tongue and skin that makes effect on our Chitta, which comes in our memory again and again. For example-suddenly our memory recollects mango. Here theshape of mango is the shape which we had seen in the past. Apart from the shape of mango the second knowledge is about the sweetness and healthiness of the mango. Both the situations will be recollected at one time. So the shape of mango and the quality (sweetness, etc.) of the mango which both are produced on the Chitta, these are the forms, effects of the mango which had been effected on the Chitta when we saw and used mango in the past. So in this case the shape and the qualities of the mango previously made the effect on the Chitta and afterwards these effects are raised on the Chitta and generated both the memory of the already experienced shape of mango and its qualities on the Chitta.
|
<urn:uuid:2e2adcc8-a98d-4dc4-a257-7bb13b5c576a>
|
CC-MAIN-2020-24
|
https://www.vedmandir.com/yoga-philosophy-meditation-20/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00460.warc.gz
|
en
| 0.964339 | 816 | 2.65625 | 3 |
Even people who don’t regularly drink tea know the positive effects tea offers when it comes to immunity. It's why most of us opt for a piping hot cup of tea when we're feeling under the weather. Tulsi tea has been used in Ayurveda to boost the immune system during and after illnesses, largely in part because it offers antifungal and antibacterial properties. The medicinal properties of tea mean you can quit your cold or flu sooner.
Can Help Prevent Chronic Diseases
Teas are often high in antioxidants, which work to eliminate free radicals throughout the body and prevent oxidative stress. Tea also contains polyphenols, which studies have shown result in a lower likelihood of cancer. Teas are often used in Asia to combat or prevent chronic diseases such as high blood pressure and diabetes.
Stimulates Brain Function
While you can drink just a cup or two of tea when you're sick to get the health benefits, if you want the brain boosting power of tea, you'll need to drink tea regularly. The main way tea improve brain function is by increasing blood flow to the brain, thus providing it with nutrients and essential oxygen for a healthy mind.
|
<urn:uuid:0c434d68-0ff4-47ba-8628-3fcb8df2a0f4>
|
CC-MAIN-2020-16
|
https://www.justgoodfood.shop/blogs/news/article
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00401.warc.gz
|
en
| 0.950352 | 239 | 2.703125 | 3 |
We've all had recourse to say: "My head tells me to do one thing, but my heart says do the other." Sometimes we are forced to make a decision but we feel ourselves to be pulled in opposite directions by reason and emotion.
Thanks to an innovation that has transformed the study of the mind, scientists are now able to see precisely what happens in the brain in situations like this. For the first time in history we are getting close to answering the question of whether the heart rules the head.
The progress is due to functional magnetic-resonance imaging, or fMRI.
This technique allows the measurement of the level of oxygen in the blood, and tells scientists which parts of the brain are most active. It can show, for example, the parts of the brain that operate when we fall in love and when we have food cravings. It has even recently revealed the differences in the brains of Democrats and Republicans.
But the technique also holds out the promise of answering deep questions about our most cherished human characteristics. For example, do we have an inbuilt moral sense, or do we learn what is right and wrong as we grow up? And which is stronger: emotions or logic?
Before fMRI, information about the parts of the brain involved in different tasks could only be gathered by studying people who had suffered brain damage from trauma or stroke, and seeing how their brain function changed. Now, the brains of healthy people can be scanned as they are given different tasks.
"fMRI has provided striking evidence in favor of some theories and against others," said Joshua Greene, of Princeton University's Department of Psychology. "But I don't think the real payoff has hit yet. That will come when we have successful computational theories of complex decision-making, ones that describe decision-making at the level of neural circuits."
Greene, together with Jonathan Cohen, professor of psychology at Princeton, is using fMRI to look at the factors that influence moral judgment.
To do so, the researchers scan the brains of volunteers while posing them fiendishly tricky dilemmas. For example, imagine you and your neighbors are hiding in a cellar from marauding enemy soldiers. Your baby starts to cry. If he continues, the soldiers will discover your hiding place and kill you all. The only way to save yourself and the others is to silence your baby – by smothering him to death. What do you do?
Clearly, you would feel intense emotions, and this shows on the brain scan. But you would also be forced to make a logical assessment of the situation, and this shows up on the brain scan too. Areas involved in abstract reasoning and those that process emotions light up.
In other words, when processing a difficult and personal moral dilemma, we really are of two minds. Greene found that if the dilemma is not so personal, the reasoning part of the brain is dominant.
When a dispute exists between two sides, say in a court of law or in a territorial land claim, there is often a mediator. So too, it seems, the brain has one too. Researchers found a region called the anterior cingulate cortex, believed to be involved in mediating conflict, was highly active in brains struggling with the crying baby scenario.
Greene and colleagues showed a neurological basis for the phrase "of two minds," and that both compete for dominance. So does the heart rule the head? Answer: Sometimes. But the head doesn't give in without a fight.
And we can use fMRI to go further, and examine how we got to be the way we are. Belgian professor Guy Orban, head of the division of neurophysiology at Katholieke Universiteit Leuven, uses fMRI to tackle evolutionary questions about the brain. His experimental subjects look at rotating 3-D images while their brains are being scanned – but unlike Greene, Orban's subjects include monkeys as well as humans.
Orban's research shows pronounced differences in the way the two species process 3-D images. Humans show activity in regions of the brain (in the visual and intraparietal cortex) that have no clear counterpart in monkeys.
"The results suggest that, as humans evolved, some portions of their brains adapted to produce specific abilities, such as controlling fine motor skills," said Orban.
So if we have evidence that human brains have evolved spatial processing abilities from monkey brains – and it seems that we do – could we have evolved moral abilities from our primate ancestors too?
Sarah Brosnan, of Emory University, Atlanta, has shown that the idea is plausible. She found that trained monkeys have a sense of fairness: They refuse to work if a fellow monkey doing the same job is seen to receive tastier food items as payment.
"Everything that evolves is a modified version of something else that already evolved," said Greene. "If you can trace the evolutionary history of the structures involved in a certain kind of thinking then perhaps you can make the case that the thinking in question is shaped by the creature's evolutionary history."
This kind of thinking is what led Dr. Andreas Bartels, now at the Max Planck Institute for Biological Cybernetics, Tubingen, Germany, to propose (on the basis of fMRI work) that romantic love evolved from maternal love.
Similarly, Dr. Val Curtis of the London School of Hygiene and Tropical Medicine published work earlier this year showing that our sense of disgust has evolved to protect us from disease. That sense of hygiene, said Greene, might be the basis for so-called higher senses, such as moral feelings.
Greene is currently working on this idea. "For example," he said, "we might describe the behavior of someone who takes bribes as disgusting. I think that's more than a simple, learned metaphor."
Greene believes that although cultural influences on morals are strong, an important genetic element is also present. "Much of what we think of as culturally learned or individually reasoned in moral judgment," he said, "may turn out to be driven primarily by evolutionary forces."
Everyone has heard kids in the schoolyard call each other "animals" in response to some childish comment or behavior. What the work using fMRI is doing is reminding us that we are all animals. And even our human senses and morals come from them.
|
<urn:uuid:5d294442-7d90-4403-aa9b-c458b02aa3b0>
|
CC-MAIN-2017-26
|
https://www.wired.com/2004/11/clear-pictures-of-how-we-think/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320570.72/warc/CC-MAIN-20170625184914-20170625204914-00379.warc.gz
|
en
| 0.96218 | 1,294 | 3 | 3 |
Despite recent reports, the Centers for Disease Control and Prevention (CDC) is not advising men to shave their facial hair to avoid catching the coronavirus.
The reports cited an infographic that has been circulating online that shows the types of facial hairstyles that do and don’t work well when wearing facepiece respirators.
The graphic actually dates back to 2017 and was related to workplace safety.
A spokesperson for the CDC told The Associated Press that the image “was developed several years ago and is intended for professionals who wear respirators for worker protection.”
The resurfacing of the graphic comes at a time when Americans are on high alert about the coronavirus, or COVID-19, which has now killed two people in U.S. and left dozens of others sick. Worldwide, the virus has killed more than 3,000 people, with a majority in mainland China.
The panic over the coronavirus has led people to go out and buy face masks in an attempt to protect themselves. However, when it comes to COVID-19, the CDC actually says people should only wear a mask if they’re experiencing symptoms or are being investigated for possibly having the virus.
“CDC does not recommend the routine use of respirators outside of workplace settings (in the community),” the spokesperson told AP.
Health officials say that with masks in short supply, people should not hoard masks to allow those who are in need of masks, such as cancer patients and those recovering from surgery, to be able to obtain them.
Editor's note: This story originally stated that the CDC was recommending a clean shave to reduce the risk of contracting COVID-19.
|
<urn:uuid:6cbb899f-0879-429e-afa6-a350786f7c61>
|
CC-MAIN-2020-29
|
https://www.fox47news.com/news/national/cdc-a-clean-shave-lowers-coronavirus-chances-when-wearing-a-mask
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140746.69/warc/CC-MAIN-20200713002400-20200713032400-00492.warc.gz
|
en
| 0.968202 | 347 | 2.53125 | 3 |
. Advertisement .
. Advertisement .
In geometry, the Euclidean distance is the length of the line segment that connects two points. Finding this distance is key to many applications of general and advanced mathematics.
This tutorial will show how to calculate Euclidean distance in Python. You can implement different libraries – all of which apply the Pythagorean theorem to do the job.
How To Calculate Euclidean Distance In Python
Using The math Module
The simplest way to carry out this operation is to use the built-in math module. It comes with the Python Standard Library. It has official support from Python, and you won’t need to install additional libraries.
Since the 3.8 version, Python’s math module has a function called dist(). It returns the Euclidean distance between two points when you give it two lists or tuples containing their Cartesian coordinates.
a = [12, 2] b = [6, 10]
distance = math.dist(a, b) print(distance)
Using The NumPy Library
NumPy is a mathematical Python library written with large, multi-dimensional matrices and arrays in mind. Among its huge collection of high-level operations are several functions that can be used for finding Euclidean distances.
Keep in mind that you will have to numpy.array – the library’s main data type – instead of regular Python lists.
Basic Functions You can use a combination of square(), sum(), and sqrt() functions in NumPy to calculate Euclidean distances in a straightforward manner.
import numpy as np a = np.array((12, 2)) b = np.array((6, 10)) distance = np.sum(np.square(a-b)) print(np.sqrt(distance))
dot() numpy.dot() can return the dot product of two vectors. You can make use of it to find the sum of the squares before taking its square root. Here is an example that uses that approach:
import numpy as np a = np.array((12, 2)) b = np.array((6, 10)) c = a – b distance = np.dot(c, c) print(np.sqrt(distance))
linalg.norm() The linalg.norm() function in NumPy can produce the norm of both vectors and matrices. When fed with the vector connecting the two points, it should give you the length of that vector. This is what you are looking for.
import numpy as np a = np.array((12, 2)) b = np.array((6, 10)) distance = np.linalg.norm(a – b) print(distance)
Using The SciPy Library
SciPy is a free scientific computation library with many advanced functions for signal processing, statistics, and optimization, among other others.
It uses NumPy under the hood and includes many sub-packages. One of them (spatial) has a dedicated function (distance.euclidean) you can use to calculate Euclidean distance in Python.
Thanks to this, everything is fairly simple:
from scipy.spatial import distance a = [12, 2] b = [6, 10] distance = distance.euclidean(a, b) print(distance)
There are plenty of ways to calculate Euclidean distance in Python. From built-in functions to third-party libraries, you are spoilt for choice. Take factors like simplicity and extensibility into account, and you should be able to pick the most suitable choice.
Leave a comment
|
<urn:uuid:c7ee7bb0-7042-47ae-8c4b-d34ebaf8de73>
|
CC-MAIN-2023-23
|
https://ittutoria.net/how-to-calculate-euclidean-distance-in-python/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654031.92/warc/CC-MAIN-20230608003500-20230608033500-00741.warc.gz
|
en
| 0.848945 | 818 | 4.40625 | 4 |
Do you remember telephone books? These great big lumbering things, often yellow, were once an indispensible part of every household. Today we don't need them anymore, as we can store several phone books' worth of information on small devices we carry around in our pockets. Those devices will also soon be outdated. And one day in the not too distant future our control of information will be complete. We will be able to encode an infinite amount of it on tiny little chips we can implant in our brains.
An image of a coiled galaxy taken by NASA's Spitzer Space Telescope. The eye-like object at the centre of the galaxy is a monstrous black hole surrounded by a ring of stars. Image: NASA/JPL-Caltech
Except that we won't. Not because of a lack of technological know-how, but because the laws of nature don't allow it. There is only so much information you can cram into a region of space that contains a finite amount of matter. "We are talking about information in the sense of something that you can store and reproduce," explains Jacob Bekenstein, the physicist who first came up with this limit of information in the early 1980s. "[To be able to do that] you need a physical manifestation of it; it could be on paper, or it could be electronically [stored]."
Bekenstein isn't a computer scientist or engineer, but a theoretical physicist. When he came up with the Bekenstein bound, as the information limit is now known, he was thinking about a riddle posed by black holes. These arise when a lot of mass is squeezed into a small region of space. According to Einstein's theory of gravity the gravitational pull of that mass will become so strong that nothing, not even light, can escape from its vicinity. That feature is what gave black holes their name.
Room for randomness
The riddle concerned the question of what happens when something falls into a black hole. Most physical systems come with room for variation. For example, at this particular instant in time all the atoms and molecules that make up my body are in a particular configuration. But that configuration is only one of many that are possible. You could swap the position of the tea molecules currently sloshing around in my stomach, or reverse the direction in which they are moving, without altering my macrostate: the physical variables I am able to observe in myself.
This room for variation — the allowed amount of randomness underlying my macrostate — is measured by a number physicists would call my entropy. The more configurations of smallest components (the more microstates) there are corresponding to my macrostate, the higher my entropy. You can also think of entropy in terms of information. If a large number of microstates are possible, then that's because there are many different components (eg atoms) that can be arranged in many different ways. To describe a single microstate exactly would mean to specify the exact position, speed and direction of motion of each component, which requires a lot of information. The higher the entropy, the more information you need. This is why you can think of entropy as measuring the minimum number of bits of information you would need to exactly describe my microstate given my macrostate.
The behaviour of the entropy of a system over time is described by a law of physics called the second law of thermodynamics. It says that the entropy of an isolated physical system can only ever go up or stay the same, but it can never decrease. To shed some light on this, think of my cup of tea before I imbibed it. At the very start, the instant I put the milk in, the tea and milk molecules were neatly separated. After a while, however, the milk will have diffused, milk and tea will be thoroughly mixed up, and the liquid will have reached an equilibrium temperature. The latter situation has a higher entropy than the initial situation. That's because there are many more microstates that correspond to an equilibrium cup of tea than there are microstates that correspond to a situation in which the milk and tea molecules are only allowed to be in certain regions within the cup. So the entropy of my cup of tea has increased over time. (You can find out more about entropy here.)
But what about those black holes? Initially black holes were thought of as very simple objects with no room for variation at all. Physicists thought their entropy was zero. But if I fell into a black hole, I would never get out again and my entropy would be lost to the world. The overall entropy of the Universe would have decreased. "The moment you have a black hole, you have some sort of trash can where to hide entropies," says Bekenstein. "So the question is, what does the second law say in that case?"
A photo taken by NASA's Chandra X-ray Observatory revealing the remains of an explosion in the galaxy Centaurus A. There is a supermassive black hole in the nucleus. Image: NASA.
It seems that the second law would be violated, and this would indeed be true if the black hole had no entropy at all. However, in 1970 Stephen Hawking found that black holes come with a property that behaves very much like entropy. Every black hole has an event horizon. That's their boundary of no return: if you cross it, you won't come back. Like the shell of an egg the event horizon has an area. Using theoretical calculations Hawking showed that, whatever happens to the black hole, this area never decreases — just like the entropy of an ordinary physical system.
Bekenstein took the bold step of suggesting that the area of the event horizon does indeed measure a form of entropy. "A black hole is very simple, but it's hiding a complicated history," explains Bekenstein. In an ordinary system like my cup of tea, entropy is a measure of our uncertainty about what's going on at a molecular level. If its entropy is high then that's because there are many possible microstates corresponding to a macrostate. I can observe a macrostate, for example the tea's temperature and mass, but that doesn't give me a clue about what the exact microstate is because there are so many possibilities. "For the simplest black hole all I can figure out is its mass, but it has been formed in one of many possible ways," says Bekenstein. "There are many alternative histories and they all count towards the entropy."
Bekenstein’s idea was controversial at first, but further investigations into the theory of black holes confirmed that it made sense to define a black hole entropy (call it ). It turns out to be proportional to a quarter times their horizon’s surface area ; to be precise
where cm is called the Planck length.
The notion of black hole entropy gave people a way of generalising the second law of thermodynamics to systems that include black holes: for such a system it's the sum of the ordinary entropy that lives outside the black hole and the black hole entropy that can never decrease. "If some entropy falls into the black hole the surface area will grow enough for the sum of these two entropies to grow," explains Bekenstein. The increase in the black hole entropy will compensate, and most frequently over-compensate, for the loss in the ordinary entropy outside it.
The generalised second law inspired Bekenstein to a little thought experiment which gave rise to the Bekenstein bound on information. Suppose you take a little package of matter with entropy and you lower it into a black hole. This will increase the black hole’s entropy and, equivalently, its surface area. You lower the package into the hole very carefully so as to disturb the hole as little as possible and increase the surface area by the smallest possible amount. Physicists know how to calculate that smallest possible amount. Writing for Newton’s gravitational constant and for the speed of light, it turns out to be
where is the total mass of the package and is its radius. Thus, lowering the package into the black hole will have increased by at least
When you have dropped the package into the black hole, the outside will have lost an amount of entropy. Since the overall entropy cannot decrease, the increase in must exactly balance or exceed In other words,
The entropy of your package cannot be bigger than the number on the right of this inequality, which depends on the package’s mass and its size. And since any package carrying entropy could in theory be dropped into a black hole in this way, any package must comply with the bound.
The limits of information storage
How is all of that linked to the storage capacity of a computer chip or some other information storage device? The entropy measures the number of bits needed to describe the chip’s microstate. Some of those bits go towards describing the parts of the chip designed to store information. More storage capacity requires more entropy. And since the entropy is limited (in terms of the chip’s mass and size) by the expression above, so is its storage capacity. To increase the amount of information a device can carry beyond any bound, we would have to increase its size and/or mass beyond any bound too.
Could a brain be uploaded on a computer?
Current devices don't come anywhere near the Bekenstein bound, so there's no need to worry that we will hit the limit any time soon. In fact, the only things physicists know of that exactly reach the bound are black holes themselves. But it is interesting that such a bound even exists. "In the future when information storage technologies will get much better you still will not be able to exceed this," says Bekenstein. "It's a very big bound, but it's a finite bound."
There is another interesting angle on Bekenstein's bound. It puts a limit on how much information you need to completely describe a physical system, such as a human brain, down to the tiniest detail. Since, according to the bound, that information is finite this means that, in theory at least, a human brain could be entirely recreated on a computer. Many people believe that everything about a person, including consciousness and a sense of self, arise from physical processes in the brain, so in effect it would be possible to upload a person onto a machine. We are nowhere near being able to do this, both in terms of the computing power and the other technologies that would be needed. But still, it's a fascinating thought. Matrix, here we come.
About this article
|
<urn:uuid:75c4fab3-5ed9-4634-bcaa-6a9574e8a155>
|
CC-MAIN-2023-40
|
https://plus.maths.org/content/comment/reply/node/6206/comment_node_article
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510697.51/warc/CC-MAIN-20230930145921-20230930175921-00146.warc.gz
|
en
| 0.960633 | 2,176 | 3.453125 | 3 |
Clyde Ellis, Luke Ellis Lassiter, Gary H. Dunham, eds. Powwow. Lincoln: University of Nebraska Press, 2005. vii +309 pp. $19.95 (paper), ISBN 978-0-8032-6755-8.
Reviewed by Megan L. MacDonald (American Studies Program, Purdue University)
Published on H-AmIndian (February, 2007)
Pan-Indian or Intertribal: The Identity of Powwow Dancing
Clyde Ellis, Luke Eric Lassiter, and Gary Dunham do a wonderful job of compiling a striking collection of works by and about powwows throughout indigenous cultures. At the root of the text is the power and knowledge represented by a uniquely indigenous expression of community and identity. Reference to some seminal texts on the powwow, including those by Barre Toelken, Clyde Ellis, William Powers, and Alice Kehoe, are offset nicely by a range of current literature, by such authors as Mark Mattern, Thomas Kavanagh, and Patricia Albers, analyzing cultural values within the powwow. Powwow presents a collection of essays by a number of scholars and maintains a recent trend in localizing powwow activity, like Tara Browner's Heartbeat of the People (2002), rather than generalizing the powwow as a pan-social phenomenon.
What is a powwow? Ellis tells us immediately that "every weekend of the year Indian people gather in one place or another to share their dances and songs, renew friendships, and reaffirm their shared experiences as members of a tribe, organization, family or community.... 'Why do we dance?... Well, how many reasons you got? Sometimes it's for ceremony. Sometimes it's because I want to put on my getup and shake a leg. And sometimes it's because I want to remember my friends and family. And sometimes it's just because'" (p. vii). Overall, Powwow tells us that powwows are fun, serious, traditional, modern, respectful, organized, and a good time all at once. The dynamic nature of powwow essentially defines the many ways in which powwow is represented in these essays.
The authors state that the "goal is to draw attention to some of the differences and similarities from community to community and group to group and to help point the way toward a more systematic and nuanced cross-cultural understanding of powwows" (p. xiii).
Do they fulfill this goal? Yes and no. The presentation of a cross-cultural powwow understanding is present, but I am not sure that the analysis is present in this collection. One such example is the repeated questioning of powwows as pan-Indian or intertribal. The editors and authors cannot seem to decide which term is currently appropriate for the multiple nations intermingling at powwow events. I think if we are discussing a "nuanced cross-cultural understanding of powwows," this discussion of intertribal vs. pan-Indian is vital to the way in which powwows bridge indigenous nations (p. xiii).
The book itself is divided into three sections, all easily laid out and readable. Part 1 introduces the powwow via history and how it developed into a significant marker of community identity for indigenous nations. The history of organized indigenous dancing is often traced to World War I, with some localized dances like the Grass Dance affiliating with specific nations.
Part 2 introduces performative aspects of powwows with specific reference to people and bodies--elders, princesses, singers--and how these bodies contribute to identity formation and community building in specific powwows. Essentially this section represents the people you see at every powwow.
Part 3 (what I came to refer to as Powwow Outliers) addresses powwow culture inside particular enclaves we do not immediately associate with powwow--indigenous gays and lesbians, southeastern tribes reclaiming profession of their cultures, German nationals dancing, New Age acting within a constructed tribal/powwow identity, and the multitude of expected (Indianness, common identity) and unexpected (tribally specific identity/community expression) reasons these various groups powwow.
As powwow history and lineage has been outlined in previous works, it is part 3 that fired me up the most about this book. I have so many questions to ask it--is powwow able to be appropriated if powwow is such an ambiguous and "unclaimable" event? Can anyone appropriate powwow for their community, tweaking it as they see fit? Is powwow uniquely indigenous? What makes powwow so easily transformable for individual communities? Given its broad nature, does powwow work for or against indigenous identity? Perhaps these questions are the reason the editors did not provide a conclusion, in order to bridge some of the larger, more unsolvable questions raised through the readings.
A brief discussion of how replacing pan-Indian with intertribal concludes that "understanding powwow as both a community-specific and a cross-cultural institution means doing more than simply renaming things, so dropping 'Pan-Indian' won't necessarily produce more perceptive treatments" (p. xiii). The editors do state that the collection is a stepping-off point for further discussion, not a text that solves issues. In an amazing way, however, the collection introduces the intertribal critique without overt analysis, allowing the reader to draw his/her own conclusions about pan-Indian or localized cultural event with regard to powwow. In that way, this collection would be excellent for undergraduate populations with little exposure to powwow or indigenous cultural activities.
It seems another goal is to spark interest and discussion concerning the powwow as something more than just a set of generalized cultural practices; however, the text does just the opposite. Numerous examples illustrate how the powwow can be a sociocultural template that many groups may take and shape as their own. For example, the New-Agers, the Monacan nation, and the Ho-chunk each take the common ground of dance, music, head people, emcees, arena directors, and elders, and make it their own with practices, songs, and language unique to their own nation or region, or re-discovering of community.
If there is additional discussion of this review, you may access it through the list discussion logs at: http://h-net.msu.edu/cgi-bin/logbrowse.pl.
Megan L. MacDonald. Review of Ellis, Clyde; Lassiter, Luke Ellis; Dunham, Gary H., eds., Powwow.
H-AmIndian, H-Net Reviews.
Copyright © 2007 by H-Net, all rights reserved. H-Net permits the redistribution and reprinting of this work for nonprofit, educational purposes, with full and accurate attribution to the author, web location, date of publication, originating list, and H-Net: Humanities & Social Sciences Online. For any other proposed use, contact the Reviews editorial staff at [email protected].
|
<urn:uuid:f476d6d7-500b-4d1e-8461-b8c5e4ed707a>
|
CC-MAIN-2013-20
|
http://www.h-net.org/reviews/showrev.php?id=12856
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00014-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.925154 | 1,460 | 2.828125 | 3 |
Here are some great Exit Ticket templates that I may use: http://wgbyeducation.files.wordpress.com/2011/03/exitslips.pdf
And here is a Pinterest board that I've used to inspire my daily checks: http://www.pinterest.com/jills1105/exit-tickets/
These exit ticket do not have to be fancy and photocopied. They might be a slip of paper, conversation with sharing, index card. What matters most is: will the questions you are asking provide you with the data you need to drive your instructional decision making?
I've been asked to complete tickets that are too open ended or general. I try not to make that mistake and I try to draft my questions to get at the most important learning: What will students know, understand, and be able to do? Here is a list of Exit Ticket prompts I’ve begun to accumulate that are grouped by levels of Bloom’s taxonomy and more:
So, now I have the Exit Ticket data… what do I do with it? Some examples of ways a I might respond to Exit Ticket data might be:
- all students met the objective & move on with the curriculum what the data says
- how I might respond to it most students have not met the objective
- plan a follow up activity using a different modality some students met the objective, some partially met it, some are still struggling with it
- sort the exit tickets to create flexible groups http://daretodifferentiate.wikispaces.com/Flexible+Grouping with tiered activities http://daretodifferentiate.wikispaces.com/Tiering.
Exit tickets are just one way to collect small snapshots of learning that we've done together and can provide me valuable direction on my lesson planning and their choices for instructional strategies.
|
<urn:uuid:4c61955f-628a-48ac-ae2e-201c1ac2c691>
|
CC-MAIN-2017-39
|
http://mclaughlinsclass.blogspot.com/2015/08/understanding-checks-and-tickets-out.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690228.57/warc/CC-MAIN-20170924224054-20170925004054-00192.warc.gz
|
en
| 0.908372 | 387 | 2.640625 | 3 |
Of the 23 classes of dioxin sources identified in the study, only six accounted for 90 percent of all dioxin emissions in North America, the report’s summary stated. Those were municipal solid waste incinerators, backyard trash burning, cement kilns burning hazardous waste, medical waste incinerators, secondary copper smelters and iron sintering plants.
The report said of 44,000 emission sources identified in North America as causing pollution in Nunavut, the United States accounted for 62 percent, Mexico accounted for 30 percent and Canada for 8 percent. Dioxin sources within Nunavut accounted for less than 0.02 percent of the total.
The study said an estimated two to 20 percent of dioxin pollution in Nunavut areas originated outside North America, mainly in Japan, France, Belgium and Britain.
|
<urn:uuid:37dfa502-c4ce-42d8-9dcc-e1b0fc45e924>
|
CC-MAIN-2014-15
|
http://abcnews.go.com/Technology/story?id=119903&page=2
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00016-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.916446 | 170 | 2.890625 | 3 |
Try our Free Online Math Solver!
Precalculus Practice Final Exam Solutions
Practice Final Exam, Solutions
Below are the solutions to all problems contained on the
Practice Final Exam. Keep in mind that the
solutions I give below are very complete and thorough, and that I’ve provided them for study purposes;
I would not expect most of your answers on an exam to be as thorough or well-worded as mine below!
If you have any questions about these solutions (or if you think you’ve found a typo!), please let me
1. (16 points total; 4 points each) Indicate whether each
statement is true or false, and provide a
one-sentence explanation for your answer.
(a) The domain of the function sec(x) is (-∞,∞).
This is false: recall that sec( θ ) = 1
cos( θ ) , so sec( θ ) will be undefined whenever cos( θ ) = 0.
This occurs when (that is, at all numbers for integers k). In
particular, it occurs, which means there are places, listed above, where the function sec( θ ) is
(b) Every real number is a rational number.
This too is false: there are numbers that are not rational. (That is, they cannot be written as
fractions, p/q , where p and q are integers, q ≠ 0.) For instance π, e, and sqrt(2) are all irrational.
(c) The Vertical Line Test can be used to tell whether a function is one-to-one
This too is false: the VLT allows us to see whether or not a graph is the graph of a function
in the first place. The Horizontal Line Test allows us to tell whether a function is one-to-one
(d) The leading coefficient of a polynomial of the form p(x) = a0 + a1x + · · ·
+ anxn is a0.
Wow, another false statement!: the leading coefficient is the coefficient on the highest power
of x. In the example above, this coefficient is an.
2. (32 points total; 8 points each) Solve each of the following equalities.
(a) |2x + 5| − 3 = 0
First, move the 3 to the other side, obtaining |2x + 5| = 3. Recall that there
are now two
possibilities for 2x+5; either 2x+5 = 3 or 2x+5 = −3. In the first case, 2x = −2 => x = −1;
in the second case, 2x = −8 => x = −4. Thus we obtain the solutions x = −1 and x = −4.
Let’s bring everything over to one side of the equation and find a common denominator:
Recall that a fraction is 0 only if its numerator is 0
(and its denominator is not). Thus we
This is now easily solved: 3 = s2 => s = ±sqrt(3). Since neither of these values for s makes the
denominator s − 3 equal to 0, these are our solutions: s = ±sqrt(3).
We need to get rid of the radicals, and the best way to do
that is to square the equation , but
first we do a little rearrangement:
Now square both sides, obtaining
This is now easily solved by canceling terms on both
sides, to obtain 4sqrt(x) = 4, or
sqrt(x) = 1.Squaring once more, we find our only solution, x = 1.
Recall that we have to check to make sure this isn’t a “false” solution (which could arise from squaring in the first place!):
Thus we really do have a solution, x = 1.
(d) tan( θ ) = 0
Recall that tan( θ ) = sin( θ )/cos( θ ) . This fraction will only equal 0 when the numerator equals 0; that
is, when sin( θ ) = 0. As we saw in class, this happens when the y-coordinate of our point on
the unit circle is 0, which occurs at the following values of θ :
That is, tan( θ ) = 0 if and only if θ = kπ for some integer k.
3. (27 points total; 9 points each) Let f(x) =sqrt(x − 19), g(x) = x^3 − 8, and h(x) = ln(x).
(a) Find the domain of each of the functions f, g, and h.
Since we can’t take the square root of a negative number , for f we require that x−19 ≥ 0, or
x ≥19. Thus the domain of f is [19,∞).
Since we can plug any number into any polynomial, the domain of the polynomial g is
Finally, we cannot take the logarithm of a number less than or equal to 0. (One way to
remember this is to remember that the log ln(x) gives the power of e needed to obtain the
value x; if x ≤ 0, no power of e will equal it!) Thus the domain of h is (0,∞).
(b) Find a formula for the composition h ο g, and give the
domain of this function.
The composition is easily computed:
For this function to be defined, we need that g is defined
at x and that h is defined at g(x).
Since g is defined everywhere, the first condition is met. Referring to our work in part (a),
the second condition requires x3 − 8 > 0. This is easily solved:
Thus the domain of h g is (2,∞).
c) Find a formula for the composition f g, and give the
domain of this function.
The composition is easily computed
As before, any old x can be plugged into g, but we also
need that x3 − 27 ≥ 0, which gives
x3 ≥ 27, or x ≥ 3. Thus the domain of f g is [3,∞).
4. (15 points total) Let f(x) = 3(ex-1) + 5.
(a) (7 points) Use shifting and stretching of basic
functions to sketch a graph of the function
f. (Be sure to provide a scale for your axes, and indicate at least one point on the graph
Your graph should begin with the graph of the function ex and then
i. shift this graph to the right by 1 unit, then
ii. stretch the graph vertically by a factor of 3, and finally
iii. shift the graph up by 5 units.
As a point of reference you could indicate that the point (0, 1) on the original
graph goes to
(1, 1), then (1, 3), then (1, 8) under the process above. Note also that the horizontal asymptote
y = 0 of the function ex will give a new asymptote, y = 5, for our function f.
(b) (8 points) Find the inverse to the function f, writing
it as y = f-1(x).
We write y = 3(ex-1) + 5 and solve for x in terms of y:
Now all we have to do is switch the roles of x and y:
5. (32 points total; 8 points each) Let
(a) Find the domain of the function f.
The only problem we might encounter is dividing by 0. Thus we must exclude those x that
make the denominator 0. To find these xs, we factor the denominator:
Excluding these two points gives us the domain
Incidentally, notice that if x ≠ 2 we can cancel that term from the numerator and denominator,
obtaining the simpler function which equals f(x) except when x = 2.
(b) Find all zeroes of the function f.
The zeroes (also known as roots) of the function are those values of x at which the function is
defined and equal to 0. Since rational functions are equal to 0 if and only if their numerators
are equal to 0, we note
Since x = 2 is not in the domain, this value does not give
a zero, and we have zeroes at x = −4
and x = 1.
Incidentally, we could have considered the simplified function and gotten the same
zeroes! In either case, the value x = 2 will give a hole in the graph.
(c) Find all asymptotes of the function f.
Now we consider the simplified function
, having already decided that there’s a
hole in our graph at x = 2.
Since the denominator is 0 at x = −1, we know that we will have a vertical asymptote at
x = −1.
To find horizontal asymptotes (if any), we examine what happens for BIG values of x:
for such values of x, since the leading terms of the
numerator and denominator are the terms
that dominate. But x3/x = x2. Since this is not a constant value, there is no line y = constant
that our function approaches. Therefore there is no horizontal asymptote. (Rather, in the
long run the function behaves a lot like a parabola , and you can make use of that information
in part (d) below!)
(d) Use the information from (a), (b), and (c) to sketch a
graph of the function f. (Be sure that
your graph is consistent with the information from the first three parts!)
Just make sure that you incorporate the information above regarding the hole at
x = 2, the
asymptote at x = −1, the zeroes at x = −4 and x = 1, and the long-term parabolic behavior
we determined in (c). Beyond this, maybe evaluating f at a few other points (like f(0) = −16,
f(3) = 24.5, and so forth) could help you sketch the graph.
6. (30 points total; 10 points each) Give brief definitions or explanations of
the following concepts;
you may use examples or diagrams if appropriate. The clearer, more complete, and more correct
your answer, the better.
(a) Horizontal Line Test
The HLT tells us that a function f is one-to-one if and only if any horizontal line intersects
the graph of f in at most one place. The test works because if a line intersects the graph at
two different places, say at x = x1 and x = x2, then f(x1) = f(x2), violating the condition
that we’d need for f to be one-to-one.
Put simply, a function f with domain A and range B is a rule that assigns to every element of
the set A a unique element in the set B. The keys here are the words “every” and “unique”:
we have a function if and only if every element in A gets assigned something in B, and this
assigned element is unique.
(c) secant line (to the graph of a function)
A secant line to the graph of the function f is any line that intersects the graph in two points,
(x1, f(x1)) and (x2, f(x2)). The slope of this line can be used to estimate the slope of the
function’s graph, by taking the points x1 and x2 to be quite close together.
7. (10 points) Rationalize the numerator in the expression
To perform this operation we have to use the conjugate of the numerator,:
where the final numerator comes from simplifying the
product of the previous numerators. But
now we can simplify a touch more and obtain our final expression:
Note that the denominator must now unavoidably contain
that nasty conjugate expression; there’s
no way around that! However, for certain applications, this is all right.
8. (20 points total) Let
(a) (8 points) Find all roots of the function p.
Using the Rational Root Theorem, our only possible roots are of the form ±1, ±2,
(You should review this theorem briefly if you don’t remember what it says!) To find our first
root, we just try each of these values in turn. Happily, the first one, x = 1, is in fact a root.
We can then long divide x3 −4x2 −x+4 by x−1 and obtain the quotient x2 −3x−4, which
is easily factored as (x − 4)(x + 1) using the AC Method or just by inspection. Thus we have
the roots x = −1, x = 1, and x = 4.
(b) (6 points) What happens to the value of p(x) as x approaches ∞? As x approaches −∞?
As x approaches ∞, the term that dominates in p(x) is the leading term, x3.
Since the leading
coefficient, 1, is positive, the function p(x) will behave like x3 and grow without bound and
become increasingly large, positively, for large values of x.
Similarly, as x approaches −∞, the function behaves like x3 and grow arbitrarily large, negatively.
(c) (6 points) Use the information in (a) and (b) to sketch a graph of the function p.
The information above should help you draw a graph that passes through the
x-axis at the
values x = −1, x = 1, and x = 4, and that grows positively in the direction of 1 and
negatively in the direction of −∞.
9. (18 points total; 6 points each) Simplify each expression as much as possible
(Hint: if the point (x,
y) lies on the unit circle, what can you say about
x2 + y2?)
Recall that the sine and cosine of an angle represent, respectively, the y- and x-coordinates
of a point on the unit circle. Thus for any Our expression then
reduces easily, remembering that
The top simplifies by canceling the inverse functions ex
and ln(x) with one another: eln(4) = 4.
Thus the top is sqrt(4) = 2.
The bottom simplifies by recalling that “the difference of logs is the log of the quotient”:
Thus the whole expression is 2/2 = 1.
Keep your cool! Just do one log after another, working from the inside out:
Extra Credit. (5 points) Come up with a mnemonic phrase
for remembering the quadrants in which the various
trigonometric functions are positive.
Of course, the standard one is “All Students Take Calculus,” but I hope you can
let your imaginations
|
<urn:uuid:42a26f8b-150e-4da3-9d43-3b22cc72ee1d>
|
CC-MAIN-2020-16
|
http://softmath.com/tutorials-3/relations/precalculus-practice-final.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00208.warc.gz
|
en
| 0.888688 | 3,321 | 2.515625 | 3 |
Start learning with our library of video tutorials taught by experts. Get started
Viewers: in countries Watching now:
This course introduces basic physics simulation principles in Autodesk 3ds Max using MassFX, a system that makes it cost effective to animate rigid body objects, cloth, and particle systems. Author Brian Bradley introduces basic concepts such as gravity, drag, volume, and density, and how Newton's Laws of Motion can help you understand the interaction of objects with these unseen forces. Using the purpose built scene, Brian walks through the tools and features of the MassFX (PhysX) system, applying the principles discussed as he goes. Along the way, discover how to combine rigid bodies and constraints, mCloth fabrics, and mParticles geometry to create fairground-style effects.
For our final foray into the MassFX Toolbar Constraints flyout, we are going to create a crazy target, something that, comparatively speaking, has a much broader range of motion than the constraint presets we've worked with up to this point. The idea is to create something that will hopefully be a little more challenging for our ball launchers to hit. To get started, we need to run through our by now familiar procedure. So, let's select our Hinge object, which in this case happens to be this little piece of domed geometry.
And then, of course holding down the Ctrl key, we can click to add the target frame itself. Now, we can go up to the MassFX toolbar and from the flyout, add a ball- and-socket constraint. We of course need to say Yes to adding the modifiers. And then finally, we can set our constraint helper size to suit. Now obviously, we will need our dome geometry, which is we say is acting as our hinge, to remain fixed in place. This time, coming into the Modifier Properties, we can set the Rigid Body Type to Static.
Now, if we run the simulation without animation, you can see, after a few moments, our target flops forward and then begins to behave in a very odd manner indeed. Now, this is clearly not the effect that we are after here. The mistake we've made is to have two rigid body objects overlapping or into penetrating one another. In fact, if I just delete our constraint helper, you can see our hinge object does indeed intersect the stand geometry, which is itself a static rigid body.
Let's switch to our perspective view using the P key and with the hinge object selected, let's use Zoom Extents Selected to get a better view of the problem. As you can see, clear intersection. The quick fix here is to select our two pieces of geometry, switch back to our target's close-up camera, and just raise them up a little in the scene. We are best moving both objects together, so as to not inadvertently create intersection between our parent and child geometry.
That would itself create a similar problem to the one we are trying to solve. With that done, we can select our parent and child objects in turn and reapply the ball-and-socket constraint. Now interestingly, 3ds Max tells us that a static rigid body cannot be a part of a constraint setup even though we have clearly seen in earlier examples that this can be so. To work around this, let's select our hinge object and set its Rigid Body Type to Kinematic. Then we can reapply the ball-and- socket constraint and finally switch our Rigid Body Type back over to Static.
Now, when we run the simulation our target again flops forward, but after a little while, it simply settles into that position--no more odd jumping around. To get something a little more interesting, let's select the constraint helper and jump over to the Command panel. First of all, in the Swing and Twist limits rollout, I want to set the Swing Y and Swing Z angle limits to 120 degrees each. Doing this whilst leaving the Twist option set to Free will give us a nice broad range of motion for our target. To make things interesting, we do of course want our target to try its best to return to an upright position.
To do this, in the Spring rollout, we can set the Spring to Resting Swing Springiness value to something like 2.7. We also want to set the corresponding Damping option to 0.01. Now, if we didn't want our target to spin or twist quite as freely as it will, we could also set the Spring to Resting Twist values to accomplish that. With those parameter tweaks in place, let's run the simulation once again, this time with animation, and see what we get now.
As you can see, once our target takes a hit, we do indeed get a really nice range of motion from it, and of course it does try and return to an upright position. Although, I must say that it doesn't seem to be making itself particularly hard to hit, so I guess not such a success on that score. There is no doubt about it, the MassFX constraint system is robust, easy to work with, and houses lots of flexibility regarding the types of motion that we can set up with it.
We have deliberately used some very specific examples in this chapter, but with a little bit of imagination applied to our use of these constraint tools, a wealth of possibilities can open up to us. What about characters, though? Is there any way to make use of the MassFX constraint system to help with creating automated character motion? We are going to look at doing just that using the MassFX Ragdoll system.
There are currently no FAQs about Creating Simulations in MassFX and 3ds Max.
Access exercise files from a button right under the course name.
Search within course videos and transcripts, and jump right to the results.
Remove icons showing you already watched videos if you want to start over.
Make the video wide, narrow, full-screen, or pop the player out of the page into its own window.
Click on text in the transcript to jump to that spot in the video. As the video plays, the relevant spot in the transcript will be highlighted.
|
<urn:uuid:fdd9d022-3a72-45cd-9782-5947d88e84dd>
|
CC-MAIN-2014-23
|
http://www.lynda.com/3ds-Max-tutorials/Creating-crazy-targets-Ball-Socket-constraint/117103/123398-4.html?w=0
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894865.50/warc/CC-MAIN-20140722025814-00142-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.933743 | 1,268 | 2.90625 | 3 |
The proportion of Americans in their prime working years who have jobs is smaller than it has been at any time in the 23 years before the recession, according to federal statistics, reflecting the profound and lasting effects that the downturn has had on the nation’s economic prospects.
By this measure, the jobs situation has improved little in recent years. The percentage of workers between the ages of 25 and 54 who have jobs now stands at 75.7 percent, just a percentage point over what it was at the downturn’s worst, according to federal statistics.
Before the recession the proportion hovered at 80 percent.
While the unemployment rate may be the most closely watched gauge of the economy in the presidential campaign, this measure of prime-age workers captures more of the ongoing turbulence in the job market. It reflects “missing workers” who have stopped looking for work and aren’t included in the unemployment rate.
During their prime years, Americans are supposed to be building careers and wealth to prepare for their retirement. Instead, as the indicator reveals, huge numbers are on the sidelines.
“What it shows is that we are still near the bottom of a very big hole that opened in the recession,” said Heidi Shierholz, an economist at the Economic Policy Institute, a left-leaning think tank.
The falloff has been sharpest for men, for whom the proportion had been on a slow decline before the recession. The percentage of prime-age men who are working is smaller now than it has been in any time before the recession, going all the way back to 1948, according to federal statistics. The proportion of prime-age women is at a low not seen since 1988.
The nation’s unemployment rate has shown signs of improvement, ticking down from 10 percent to 8.1 percent. But if it tallied people who have given up looking for jobs, it would certainly be higher.
The ratio of employment to population, which economists refer to as “epop,” “is a much better measure for what people are experiencing in the job market,” Shierholz said. “The unemployment rate is screwy right now because the labor market is so weak that people have stopped trying.”
For example, last month, the unemployment rate ticked down from 8.2 percent to 8.1 percent. Ordinarily, a drop in unemployment would be interpreted as a sign of improving economic health. But it dropped largely because so many people stopped looking for jobs.
Shierholz estimates that about 4 million workers have simply stopped looking, and so do not show up in the tally used for the unemployment rate.
As the presidential race heads into the summer, the health of the economy — and how voters view it — becomes critical, and for many people, the job market is their most significant contact with the economy.
According to the most recent Washington Post-ABC News poll, the issue of paramount interest to voters is the economy and jobs, with more than half describing it as the “single most important issue.”
By comparison, the next most important issue, health care, trailed far behind at 7 percent, and moral and family values followed at 5 percent.
The polls also show that, while the official statistics show improvement, voters offer gloomy economic diagnoses
About 83 percent of those in the poll, conducted in mid-May, rated the state of the economy as “poor” or “not so good,” a much higher portion of negative views than at any other time in the 10 years preceding the recession.
The job market “feels like a game of musical chairs — if you didn’t have a job when the market crashed, well, that chair is gone,” said Karen Akers, 50, of Vienna, who lost two jobs to budget cuts during the recession.
She just reentered the workforce in March, although at a lower salary in client relations at a sprinkler company.
“I don’t know that people trust any of these economic numbers these days, anyway, because they were all good before the crash,” she said. “Whatever economists are telling us, I don’t know that we can believe it any more than what we see in the job market — and what you find there is not good.”
Indeed, in interviews outside the unemployment office in Alexandria on Friday morning, people looking for work said that finding a job today, three years after the recession’s official end, seems just as hard as it did during the recession.
“In 2008, it was much easier — I got a job right away,” said 41-year-old Rob from Arlington, who last worked in sales for a defense contractor. Like other workers interviewed at the unemployment office, he declined to give his last name to protect his privacy.
“It’s definitely more negative, which really caught me off guard,” he said. “Employers have gotten used to doing pretty much what they want to do in this market.”
“I’m actually considering a position in retail,” said a 53-year-old Northern Virginia woman who had held a senior position in international sales and recently earned a master’s degree in management. She has been looking for a job for three years. “I can’t tell you how many women I know, one of whom was a bank vice president, who have already taken these kinds of jobs — they’re working at Joann’s Fabrics, Sur la Table and Crate & Barrel.”
The impact of these difficulties reaches far beyond those looking for work.
For those working, real wages have been stagnant since 2008, Shierholz said.
Moreover, the number of people quitting jobs — a figure that tends to rise when jobs seem plentiful and fall when they seem scarce — remains lower than it was at any time in the years leading up to the recession, according to government statistics.
Some of the workers have sensed a slight strengthening in their outlook, however: a few more calls, a few more openings, a few more interviews than they’d previously seen. Indeed, the “epop” figure for prime-age workers has risen since October.
Mark, 50, a heating and AC technician from Alexandria, was out of work in 2009 but found a job right away. He was laid off again about six months ago and, standing outside the Alexandria unemployment office, said it seems harder this time around.
“The economy is just really messed up right now,” he said.
|
<urn:uuid:3a086f85-63b5-4657-ae3a-2726d8e95d10>
|
CC-MAIN-2017-47
|
https://www.washingtonpost.com/business/economy/job-recovery-is-scant-for-americans-in-prime-working-years/2012/05/29/gJQAnza9zU_story.html?hpid=z3
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00600.warc.gz
|
en
| 0.975162 | 1,380 | 2.546875 | 3 |
Have you ever felt that finding just the right piece of information is as elusive as the legendary Pot of Gold? You are not alone. Nearly everybody gets frustrated with the overwhelming amount of information that is the hallmark of our connected society. Take heart! In less than one hour, you can learn how to think strategically about information discovery.
The Pot of Gold Tutorial consists of 6 modules designed to help you manage information to support your learning during your college years and beyond. Use the navigation arrows at top right to move through each module. Interactive practice exercises and a quiz at the end of each module let you test your understanding of the concepts. You can review any trouble spots or ask for help by clicking on the "Ask a Librarian" link at the top of each content page.
Information competence is a treasure that will serve you for a lifetime. Start the quest for YOUR Pot of Gold now.
|
<urn:uuid:779fd99b-36ba-493c-9302-8789aa77b6f8>
|
CC-MAIN-2014-23
|
http://library.nd.edu/instruction/potofgold/introduction/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889106.77/warc/CC-MAIN-20140722025809-00024-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.941416 | 180 | 2.78125 | 3 |
Description of Historic Place
The Old Guysborough Court House Museum is located in the heart of the rural community of Guysborough, NS. Built in a simple meeting house style in 1842-1843, the historic place is distinguished by its Gothic arched windows. The designation includes both the building and the property.
The Old Guysborough Court House Museum is valued for its over one-hundred and thirty year role as the principal public building in Guysborough County, and for its Gothic arched windows.
The Guysborough Court House was built in 1842-1843 and immediately became the most important building in Guysborough County, which is located over two hundred and fifty miles from the provincial capital of Halifax. As the only Court House in the County, it served as the location of the Court of General Sessions, and later as the meeting place of the municipal council. The building and the surrounding grounds served multiple purposes for the people of Guysborough County. They were used for agricultural exhibitions, as a polling station, a recruiting station during both world wars, and for legal trials. On September 18, 1867, Guysborough residents voiced their opinion on Confederation by voting, at the Court House, for two anti-Confederation province assembly representatives. In 1973 it ceased to function as a court house when a modern building was constructed. Today it is operated open to the public as a museum, operated by the Guysborough Historical Society, interpreting the history of the building and the county.
The Old Guysborough Court House is an important building in Nova Scotia and is unique among historic court houses in Nova Scotia. It was designed by carpenter Elisha Randall, who also was the builder, and he added what are considered his own personal touches to the design, including the buildings’ distinct Gothic arched windows. Using local materials, Randall built the Court House in a plain symmetrical meeting house form, with the exception of the windows.
Source: Notice of Recommendation of Registration, Provincial Heritage Property File no. 94
Exterior Character-Defining Elements of the Old Guysborough Court House Museum
related to its simple symmetrical meeting house style and location within the community include:
- wood shingle cladding
- windows with true Gothic Revival point arch with a simple tracery pattern;
- projected, enclosed porch on the street side with gable roof;
- south side wing;
- wood shingle roof;
- sandstone foundation;
- location on a main road, in the centre of the community.
Interior Character-Defining Elements of the Old Guysborough Court House Museum include:
- all historical and original elements related to the function of the building as a court house, including: Petit Jury room, judge’s bench, gallery, barrister’s room, main court room including benches for spectators, and Grand Jury room.
|
<urn:uuid:573ea2de-3a4a-486b-be9a-fa760f901e7a>
|
CC-MAIN-2023-40
|
https://www.lieuxpatrimoniaux.ca/en/rep-reg/place-lieu.aspx?id=3053
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00114.warc.gz
|
en
| 0.962494 | 595 | 3.015625 | 3 |
Benthic boundary layer processes and seabed dynamics in shelf seas are caused by both natural events (such as tidal currents, waves, winds, storms, and biological activity) and anthropogenic perturbations (for example dredging or trawling), and they are highly variable both in space and in time. This results in spatio-temporal variability of bed and suspended sediments, the latter of which are inherently controlled by the turbulent benthic boundary layer. Spatial variability of these bed and suspended sediments then has an important impact on overall sediment mobility, transport and pathways in shelf seas and estuaries, and thus on shelf seas ecosystems via the effect of suspended particulate matter (SPM) on light penetration for example. Indeed, a spatial gradient of suspended sediment in the presence of tidal currents results in sediment transport patterns that are distinct from the more typical resuspension-driven ones. This was first observed by Weeks et al. (1993), has since been reproduced in simple shelf sea models (e.g., Souza et al., 2007) and recently been shown to occur in tidally-dominated estuaries (Amoudry et al., 2014).
Nevertheless, the spatial variability of turbulent boundary layer processes and seabed dynamics, as well as its impact on sediment mobility and pathways, have yet to be fully characterised to the point of developing robust shelf seas predictive models. This is mostly due to a lack of necessary data sets, and the project will address this key gap via analyses of new extensive observational data collected in the Celtic Sea.
Spatial data on near-bed processes are difficult to obtain and are thus rare (e.g., Amoudry and Souza, 2011). However, new extensive observational data collected in 2014 and 2015 in the Celtic Sea as part of the NERC Shelf Sea Biogeochemistry programme will enable the studentship to focus on the spatial variability of benthic boundary layer processes in shelf seas. In particular, the studentship will seek to address how the inherent spatial variability of bed characteristics controls benthic boundary layer turbulent processes, seabed dynamics and transport of suspended sediment in shelf seas. The new observational data collected will enable the following hypotheses to be tested: 1) Spatial variability of near-bed turbulence and resuspension in shelf seas is generated by spatial variability in bed characteristics, independently of the influence of bathymetry (i.e. water depth). 2) Spatial variability in near-bed processes results in sediment transport pathways, and in feedback mechanism(s) linking spatial variability in bed characteristics and shelf seas sediment transport.
The studentship will focus on the following objectives: (i) characterize turbulent processes and dynamics in the benthic boundary layer for a series of contrasting shelf seas benthic environments, (ii) determine how spatial variability of the benthic environment controls these near-bed processes (iii) investigate the impact on transport and pathways of sediments at the shelf sea scale.
The studentship will use extensive in situ observational data from a number of deployments of a NOC instrumented benthic lander at several locations in the Celtic Sea, which present contrasting benthic environments in terms of sediment type along a permeable sand to cohesive mud gradient (sand, muddy-sand, sandy-mud, mud). Data from an additional deployment for a deeper sandy site will also be available. The lander is equipped with a suite of state-of-the-art instruments, which provide a unique and comprehensive series of data sets on near-bed hydrodynamics and sediment transport.
Near-bed hydrodynamic and turbulence is obtained from analysis of several acoustic Doppler velocimetry instruments, both at point locations (Acoustic Doppler Velocimeter) and for vertical profiles (Acoustic Doppler Current Profiler and high-resolution Nortek Pulse Coherent Acoustic Doppler Profiler). Analysis of these data will aim to resolve detailed characteristics of the turbulent tidal boundary layer (e.g., Souza et al., 2004; Souza and Howarth, 2005). Near-bed resuspension characteristics will be derived from data collected with the following instruments: Acoustic Backscatter System, LISST (Laser In Situ Scattering and Transmissometry) (e.g. Ramirez-Mendoza et al., 2014), and LISST-Holo (Holographic Particle Imaging System) (e.g. Davies et al., 2011). Small-scale bed topography (e.g., ripples) is measured with a 3D ripple profiler. The bed at each site has been fully characterised in terms of the sediment characteristics (grain size, bulk density, porosity) and benthic fauna from a series of cores, SPI (Sediment Profile Imaging) images and trawls.
|
<urn:uuid:282a62f0-bf49-44fa-80cf-a14555a471cf>
|
CC-MAIN-2020-10
|
https://oldsite.iahr.org/site/cms/newsarticle.asp?chapter=1&nid=936
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00304.warc.gz
|
en
| 0.896977 | 995 | 2.921875 | 3 |
In our journey to understand better our food system we dive into the cycle of food from producers to consumers, aiming to understand what happens in each step along the way and who are the people involve in it. Did you know the food supply chain has six steps, starting in pre-production activities such as seed sourcing and ending in food waste and what we do with our kitchen leftovers?
Supporting local products is not only a way to foster a robust informal economy and raising livelihoods to families as a way to alleviate poverty, is also an appreciation of the delicious Khmer food culture and identity! Check out this map of flavours and food-related products from each province, which ones have you tried?
We all see in the news issues like climate change risks and food insecurity. But good news is, the current unsustainability of our food production can slowly be changed by our actions and choices when we buy ingredients. Every gesture is a vote in the world we want to create!
|
<urn:uuid:b99b9c0f-3945-4900-aa31-211bc6d2e95e>
|
CC-MAIN-2023-40
|
https://kh.boell.org/en/illustrations
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506559.11/warc/CC-MAIN-20230924023050-20230924053050-00771.warc.gz
|
en
| 0.963359 | 198 | 2.625 | 3 |
Short Message service center
A Short Message Service Center (SMSC) is a network element in the mobile telephone network. Its purpose is to store, forward, convert and deliver Short Message Service (SMS) messages.
The full designation of an SMSC according to 3GPP is Short Message Service - Service Center (SMS-SC).
SMS can be directed in several ways:
- From mobile to another mobile - referred to as MO-MT (Mobile Originated - Mobile Terminated)
- From mobile to a content provider (also known as Large Account / ESME) - referred to as MO-AT (Mobile Originated - Application Terminated)
- From application to a mobile - referred to as AO-MT (Application Originated - Mobile Terminated)
The tasks of an SMSC can be described as
- Reception of text messages (SMS) from wireless network users
- Storage of text messages
- Forwarding of text messages
- Delivery of text messages (SMS) to wireless network users
- Maintenance of unique time stamps in text messages
When a user sends a text message (SMS message) to another user, the message gets stored in the SMSC (Short Message Service Center), which delivers it to the destination user when they are available. This is a store and forward option.
An SMS center (SMSC) is responsible for handling the SMS operations of a wireless network.
- When an SMS message is sent from a mobile phone, it will first reach an SMS center.
- The SMS center then forwards the SMS message towards the destination.
- The main duty of an SMSC is to route SMS messages and regulate the process. If the recipient is unavailable (for example, when the mobile phone is switched off), the SMSC will store the SMS CAR message.
- It will forward the SMS message when the recipient is available and the message's expiry period is not exceeded.
SMSCs can be used to interface with other applications, for example a spreadsheet can interface with the SMSC allowing messages to be sent SMS from an Excel spreadsheet, or to send an SMS from Excel. Inbound messages to a long number or short code can also be passed through the SMSC allowing m2m communications or Telematics.
Validity period of an SMS message
An SMS message is stored temporarily in the SMS center if the recipient mobile phone is unavailable. It is possible on most mobile handsets to specify an expiry period after which the SMS message will be deleted from the SMS center. Once deleted, the SMS message will no longer be available for dispatch to the recipient mobile phone (even if it comes on line). The validity period should be regarded by the handset user as a request, as the SMSC itself can be configured to ignore or otherwise handle message delivery schedules.
Message status reports
The SMS sender needs to set a flag in the SMS message to notify the SMS center that they want the status report about the delivery of this SMS message. This is usually done by changing a setting on the mobile handset.
|
<urn:uuid:4730e9ec-c5d4-4416-9492-91b9d35ccc22>
|
CC-MAIN-2023-23
|
https://en.wikipedia.org/wiki/Short_message_service_center
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649105.40/warc/CC-MAIN-20230603032950-20230603062950-00066.warc.gz
|
en
| 0.852692 | 650 | 2.984375 | 3 |
- freely available
Sensors 2013, 13(4), 4258-4271; doi:10.3390/s130404258
Abstract: The main objective of this study was to use infrared spectroscopy to identify vegetable oils used as raw material for biodiesel production and apply multivariate analysis to the data. Six different vegetable oil sources—canola, cotton, corn, palm, sunflower and soybeans—were used to produce biodiesel batches. The spectra were acquired by Fourier transform infrared spectroscopy using a universal attenuated total reflectance sensor (FTIR-UATR). For the multivariate analysis principal component analysis (PCA), hierarchical cluster analysis (HCA), interval principal component analysis (iPCA) and soft independent modeling of class analogy (SIMCA) were used. The results indicate that is possible to develop a methodology to identify vegetable oils used as raw material in the production of biodiesel by FTIR-UATR applying multivariate analysis. It was also observed that the iPCA found the best spectral range for separation of biodiesel batches using FTIR-UATR data, and with this result, the SIMCA method classified 100% of the soybean biodiesel samples.
Brazil has always stood out on the global scene for its advanced know-how in the production of biofuels, and was the second-largest producer of biodiesel in 2010 and the biggest global consumer in 2011 . The first experiments on the use of ethanol in Otto cycle engines date back to the beginning of the 20th century. Although studies on biofuels in Brazil started long ago, it was only in the 21th century that the country put into action a plan to produce biodiesel on a large scale, taking advantage of the experience acquired with the Pro-Alcohol Program. With the intent to broaden the Brazilian energy matrix, in 2004, the Federal Government launched the National Program of Biodiesel Production and Use (PNPB).
Biodiesel is defined by the National Petroleum Agency (ANP), through Government Directive N° 255, of 15 September 2003, as a compound fuel derived from vegetable oils or animal fats, called B100 . It can be used in pressure-ignited internal combustion engines or for other types of energy generation and can partially or totally replace fossil fuels. Therefore, there are wide possibilities to use biodiesel in urban, road and rail transportation, for the generation of energy, in stationary engines, and others.
Brazil enjoys a privileged position compared to other countries, due to its biodiversity and vast territorial area, able to facilitate the cultivation of distinct species in every region. Consequently, the raw materials for the production of biodiesel can be selected in accordance with their availability in each region throughout the country . Among the sources stand out among them are oilseeds, like cotton, peanut, dendê (palm oil), sunflower, castor bean, barbados nut and soybean [4–6]. Besides the privileged location, two other factors drive Brazil's biodiesel production. The first is the amount of arable land available and the second is the abundance of water resources. According to the Ministry of Agriculture, just considering the new areas that could be destined for the production of oilseeds, they would amount to approximately 200 million hectares .
Currently, soybean oil is the most used vegetable raw material for making biodiesel in Brazil, with an average share of 78% in the production of this fuel, followed by cotton oil, with approximately a 4-percent share. The remainder includes animal fats, and other oily materials . Notwithstanding soybean oil's status as most important raw material, in terms of volume, in the production of biodiesel, the Federal Government has been encouraging the development of other oilseed crops, particularly the ones linked with family farming operations. Furthermore, depending on only one crop as major supplier of raw material of an important national energy autonomy project might turn it unsustainable, as it would promote the economic development only (or mainly) of regions where climate and geological characteristics are favorable, whilst keeping the project at the mercy of economic pressures from one production chain only. Similar problems surfaced in the development of the Pro-Alcohol Program in the 1970s.
In this sense, the Ministry of Agriculture, Livestock and Food Supply (MAPA) has been assisting the farmers with crop management practices, providing them with cultivars for the production of biodiesel. In line with this work, the Brazilian government encourages the production of biodiesel from different oilseeds and technological nuances, inviting the participation of agribusiness and family farming operations . Likewise, federal decrees define the taxation rules, which can vary according to planting region, raw material or production category, with distinct tax rates levied on agribusiness and family farming, where the latter is a priority of the program. Another factor that leads to the cultivation of several oilseed crops is easy access to bank loans and reduced interest rates, besides the obligation of the biodiesel producing companies to acquire 5% of their raw material from family farmers. Besides the incentive for the production of biofuels, aligned with the economic development brought about by the production of the oilseeds, the adoption of a quality control program is essential for the identification of the different vegetable oil sources of these biofuels.
This need becomes even more relevant as there are soaring financial attractions for the production of alternative biofuels from renewable sources, in which a diversity of fuel formulations is (or could be) available in the market. This would also inhibit the use of raw materials and the production of biodiesel without the authorization of the regulating organ.
Nevertheless, few studies with the aim to identify a vegetable oil source utilized in the production of biofuels exist. With the incentives of the federal government, now encouraging the use of new raw materials for the production of biodiesel, it is necessary to identify their source and, to this end, there is a need to resort to methodologies that make it possible to identify a vegetable oil source. With regard to chemistry, vegetable oils of distinct sources present a different fatty acids chemical compositions. They differ with regard to the length of the chain, the degree of saturation or the presence of other chemical functions , properties that can all be identified through spectrometric techniques [9–14].
A major reason for characterizing its source is related to inspection, as some countries rely on different policies depending on the raw material. Another reason is related to the specific physical-chemical properties of every different vegetable oil and their relation with correct application. Within this context, besides the development of research towards making it technically and economically viable to use other raw materials for the production of biodiesel, it becomes evident (or consequent) that it is necessary to develop analytical techniques to make it possible to identify the vegetable oil source utilized in the production of biodiesel.
Multivariate analyses have recently made possible modeling of chemical and physical properties of simple and complex systems from spectroscopic data. Recent works using near infrared (NIR) spectroscopy, and multivariate analysis for biodiesels in order to identify which vegetable oils are used in production were investigated. Principal component analysis (PCA), and hierarchical cluster analysis (HCA) were used for unsupervised pattern recognition while soft independent modelling of class analogy (SIMCA), was used for supervised pattern recognition . In another work four different multivariate data analysis techniques are used to solve the classification problem, including regularized discriminant analysis (RDA), partial least squares method/projection on latent structures (PLS-DA), K-nearest neighbors (KNN) technique, and support vector machines (SVMs). Classifying biodiesel by feedstock (base stock) type can be successfully solved with modern machine learning techniques and NIR spectroscopy data . Also two classification methods are compared, namely full-spectrum soft independent modelling of class analogy (SIMCA) and linear discriminant analysis with variables selected by the successive projections algorithm (SPA-LDA) .
In the other hand, qualitative and quantitative analysis using spectroscopy in the infrared region expanded from the time when the data generated by a FT-IR spectrophotometer could only be scanned, enabling statistical methods to solve problems of chemical analysis [17–21]. In HCA the spectra data matrix is reduced to one dimension, by matching similar pairs, until all points in a single group are matched. The goal of HCA is to display the data in a two-dimensional space in order to emphasize their natural groupings and patterns. The distance between the points (samples and variables) reflects the similarity of their properties, so the closer the points in the sample space, the more similar they are. Results are presented as dendrograms, which samples or variables are grouped according to similarity. In PCA the n-dimensional data is designed into a low-dimensional space, usually two or three. This is done by calculating the principal components obtained by making linear combinations of original variables. In a principal component analysis, clustering of samples defines the structure of data through graphs of scores and loadings, whose axes are principal components (PCs) in which data are designed [22–24]. The iPCA analysis consists of dividing the data set into a number of equidistant intervals. For each interval a PCA is performed, and the results are shown in charts of scores. This method is intended to give an overview of the data and may be useful in the interpretation of signs which are more representative of the spectrum to build a good model for multivariate calibration [25–27]. In SIMCA, there is a training set which is modeled by principal component analysis (PCA). Subsequently, new samples are fitted to the model. Test samples are classified as similar or dissimilar [23,28].
2. Experimental Section
2.1. Materials and Methods
Were used six different vegetable oil sources: canola, cotton, corn, palm, sunflower and soybean. For the latter two, two samples of each oil from different sources were acquired. A two-letter code was used to identify the samples. The first letter specifies if the oil sample is degummed (O) or biodiesel (B), the second letter specifies which vegetable oil source was utilized (for example, C = Canola) and the code that comes next to letter identification represents the analysis reproduction number. Finally, the small letter (a or b) identifies the origin of the sample. The biodiesel samples were produced from samples of degummed oils. From the cotton oil sample two batches were produced and from the soybean sample (b) three batches of biodiesel were produced. This procedure was adopted with the purpose to guarantee the method reproducibility. The canola and sunflower biodiesel batches were acquired from the biodiesel pilot plant of the University of Santa Cruz do Sul—UNISC, in Rio Grande do Sul, Brazil.
The methylation route was used to produce the biodiesel via transesterification. Sodium methoxide (Rodhia) was used as catalyst, and as reagent, methyl alcohol (Vetec, P.A) at a 1:6 molar rate . The biodiesel samples were characterized through methods standardized by the AOCS Physical and Chemical Characteristics of Oils, Fats, and Waxes and European Norm (EN) by the following parameters and respective methods: moisture (AOCS Ca2e-84), acidity rate (AOCS Ca5a-40), total glycerol (EN 14105), free glycerol (AOCS Ca14-5) and methanol (EN 14110).
2.2. Acquisition of Spectra in the Medium Infrared
The infrared spectra were acquired on a Perkin Elmer model Spectrum 400 FTIR Spectrometer, based on a Universal Attenuated Total Reflectance sensor (UATR-FTIR). A range from 4,000 to 650 cm−1 was scanned, with a resolution of 4 cm−1 and 32 scans. The crystal utilized in this technique, contains diamond in its upper layer and a zinc selenide focusing element. The spectra of each sample were acquired with six replicates. Later, they were normalized, in order to eliminate the differences in intensity stemming from concentration variations, reducing external effects in the same order of magnitude, and all of them varying within an intensity range from 0 to 1 .
2.3. Multivariate Data Analysis
All obtained spectra were treated by multivariate analysis tools, using the Hierarchical Cluster Analysis (HCA) and the Principal Components Analysis (PCA) and the Soft Independent Modeling of Class Analogy (SIMCA), through the computer program Pirouette® 3.11 by Infometrix (Bothell, WA, USA). Interval Principal Component Analysis (iPCA) from the software Matlab® 7.11.0 (The Math Works, Natick, MA, USA) was also employed, using the iToolbox package ( http://www.models.kvl.dk, Copenhagen, Denmark).
2.4. Modeling of Biodiesel Batches in the Medium Infrared
The set of raw spectra of biodiesel samples are shown in Figure 1. To remove noise the spectra were then treated using the Savitzky–Golay first derivative procedure with a second-order polynomial and a 15-point window. Mean centered data and Standard Normal Variate (SNV) were used as pre-processing tools for multivariate analysis .
2.4.1. PCA and HCA
In the PCA and HCA, the 735–1,783 and 2,810–3,035 cm−1 regions were selected because the other regions contained no spectral information or were polluted by water vapor or carbon dioxide bands due to poor compensation. For obtaining the HCA dendrogram, the Euclidian distance and the incremental connection method were used. In Figure 2, one can observe the spectra of samples of biodiesel with the application of the first derivative and the SNV. The regions of the spectra that were excluded are highlighted.
2.4.2. Interval Principal Component Analysis (iPCA)
The objectives of the results obtained at the Interval Principal Component Analysis (iPCA) consisted in detecting the spectral region where there is the best separation of the different samples of biodiesel with the intent to utilize it later in the SIMCA classification method. The spectra were split into 8, 16, 32 and 64 equidistant regions, while the combination of results between the principal components: PC1 versus PC2, PC1 versus PC3 e PC2 versus PC3, was also evaluated.
2.4.3. Soft Independent Modeling of Class Analogy (SIMCA)
Once the best spectral region was obtained with the iPCA algorithm, the SIMCA model was built using of the biodiesel spectra data. The SIMCA model built was in accordance with the data in Table 1.
3. Results and Discussion
3.1. Characterization of the Biodiesel Batches
The results from the characterization of the biodiesel samples are shown on Table 2.
3.2. Joint Analysis between the Biodiesel and the Degummed Oil Samples
Through the PCA, it was observed that 93.73% of data variances were explained by the analysis of the two principal components. Figure 3 shows PCA scores plot (PC1 versus PC2) obtained from UATR/FTIR data. PC1 separates the biodiesel samples, with positive values, from the degummed oil samples, in negative values on the scores chart. On the other hand, PC2, in turn, manages to separate both the biodiesel samples and the samples of palm and cotton degummed oils, in positive values, from the samples of biodiesel and samples of soybean, sunflower, canola and corn degummed oils, in negative values on the scores chart.
Although the samples of degummed oils and the samples of biodiesel are on opposite sides in Figure 3, it is clear that the vegetable oil source exerts an influence on the PC2 of these samples, for example, by observing the samples of biodiesel and the samples of degummed palm and cotton oils, it is ascertained that they are located approximately at the same height of the PC2 axis, though on opposite sides. The same thing also occurs with the other samples. The trends observed through analyses of the principal components were confirmed through the dendrogram obtained by HCA (Figure 4).
In this dendrogram one can observe the presence of two clusters, one associated with the biodiesel samples and the other associated with the degummed oil samples. The results achieved in the dendrogram are totally in line with the results achieved on the PCA scores plot (PC1 versus PC2).
3.3. Interval Principal Component Analysis (iPCA)
The best results at the iPCA were achieved with the two first principal components (PC1 versus PC2) and splitting the spectrum into 16 equidistant intervals. Figure 5 features the percentage variance chart for every region studied. In this chart, for each interval, that is to say, for each region of the spectrum, variance is calculated, in percentage terms, for each principal component. It should be added that the bars present in percentage form (height of the bars) the variance contained in each main component for each interval. In this figure, interval 14 accumulates 99.54% of information in the first two principal components for the UATR-FTIR spectra data.
The spectral region from 1,300–900 cm−1 is referred to as the fingerprint, as it confirms the identity of compounds. Within this range, the most important absorptions are the ones stemming from the stretching of the C–O bond of the esters. These absorption ranges of the ester C–O bonds, actually correspond to two asymmetric vibrations that involve the bonds C–C and C–O. In the case of saturated aliphatic esters, the two bands observed appear at 1,275–1,185 cm−1 and at 1,160–1,050 cm−1. The first involves the bond stretching between the oxygen and the carbonyl carbon, coupled with C–C stretching. The second involves the bond stretching between the oxygen atom and a carbon atom. The band that occurs in the biggest number of waves is usually the more intense of the two .
The spectral region where the best separation of biodiesel samples in the UATR-FTIR spectra data was achieved includes the range of 1,276 to 1,068 cm−1, regarding interval 14, which can be visualized in Figure 6.
Figure 6 presents differences between soybean and sunflower samples. It is observed that the batches of soybean A and B are not in the same group and, consequently, they present differences in their chemical composition. This is justified by the characterization data of the biodiesel samples shown on Table 2. The batch of soybean A presents parameters such as moisture, total glycerol, free glycerol and methanol that are not in line with the specified quality patterns of biodiesel (set forth by ANP 07/2008), particularly with regard to the total glycerol rate, reaching the value of 1.72%, and the established limit is 0.25%. The amount of glycerol in the batch suggests that the decantation process was insufficient, which means that the glycerin was not totally removed. In the same way the behavior of the batches of sunflower A and B can be observed, where it becomes evident that the biodiesel from sunflower A has more similarity with the soybean A, which is not in compliance with the recommended specification. For these reasons, the batches of soybean A and sunflower A were not considered in the development of the SIMCA modeling.
3.4. Soft Independent Modeling of Class Analogy (SIMCA)
The spectral region, from 1,276 to 1,068 cm−1, where the best biodiesel sample separation was achieved using UATR-FTIR spectra data at the iPCA was used for the SIMCA modeling. Prior to this modeling, a PCA was developed from the spectra samples that make up the training data presented in Table 1. Upon analyzing the results achieved with the PCA, it was observed that 98.40% of data variance was explained in the two first principal components. The Figure 7 shows the scores plot (PC1 versus PC2) for the UATR-FTIR spectra of biodiesel samples used in the SIMCA training set.
Table 3 presents a summary of the SIMCA model obtained. Figure 8 presents the Coomans diagram which features the orthogonal distances of the biodiesel utilized for the training set. It is observed that Class II and Class IV samples classify correctly into their respective classes.
Figure 9 presents the results achieved for the testing samples. The results proved satisfactory and suggest a 100% correct classification for the spectra of the samples of the batch of soybean biodiesel tested.
The present paper suggests that is possible to develop a methodology to identify vegetable oils used as raw material in the production of biodiesel by Fourier transform infrared spectroscopy using a universal attenuated total reflectance (FTIR-UATR) sensor by applying multivariate methods of analysis. Upon comparing the samples of degummed oils and biodiesel in the FTIR through the PCA, it becomes evident that a vegetable oil source has the same influence on the principal components as the corresponding biodiesel.
The application of principal component analysis by interval method (iPCA) made it possible to locate the best spectral intervals for the separation of samples of biodiesel using UATR-FTIR spectra data. In light of the results obtained in the FTIR, the SIMCA modeling allowed for the 100% classification of the soybean biodiesel samples.
The authors would like to thank CAPES, PRONEX-FAPERGS, CNPq and FAP-UNISC.
- Boletim mensal dos combustíveis renováveis; Ministério de Minas e Energia, Secretaria de Petróleo, Gás Natural e Combustíveis Renováveis (in Portuguese), 2011. Available online: http://www.mme.gov.br/spg/galerias/arquivos/publicacoes/boletim_mensal_combustiveis_renovaveis/Boletim_DCR_nx_042_-_junho_de_2011.pdf (accessed on 1 November 2012).
- BRASIL; Portaria N° 255, de 15 de Setembro de 2003. Agência Nacional do Petróleo (ANP) (in Portuguese). Available online: http://www.perkinelmer.com.br/downloads/biodisel/ANP%20Portaria%20255_2003.pdf (accessed on 1 November 2012).
- Pinto, A.C.; Guarieiro, L.L.N.; Rezende, M.J.C.; Ribeiro, N.M.; Torres, E.A.; Lopes, W.A.; Pereira, P.A.P.; Andrade, J.B. Biodiesel: An overview. J. Braz. Chem. Soc. 2005, 16, 1313–1330. [Google Scholar]
- Parente, E.J. Biodiesel: Uma Aventura Tecnológica Num País Engraçado(in Portuguese), 1st ed.; Unigráfica: Fortaleza, Brazil, 2003. [Google Scholar]
- Anuário da Indústria de Biodiesel no Brasil. In Brazilian Biodiesel Industry Yearbook (in Portuguese); Grupo Biodiesel: Curitiba, Brazil, 2010.
- Biodiesel. O novo combustível do Brasil; Programa Nacional de Produção e Uso do Biodiesel (PNPB) (in Portuguese). Available online: http://www.biodiesel.gov.br/docs/cartilha.pdf (accessed on 1 November 2012).
- Boletim mensal dos combustíveis renováveis; Ministério de Minas e Energia, Secretaria de Petróleo, Gás Natural e Combustíveis Renováveis(in Portuguese). Available online: http://www.mme.gov.br/spg/galerias/arquivos/publicacoes/boletim_mensal_combustiveis_renovaveis/Boletim_DCR_nx_032_-_agosto_de_2010.pdf (accessed on 1 November 2012).
- Moretto, E.; Fett, R. Tecnologia De Óleos E Gorduras Vegetais Na Indústria De Alimentos (in Portuguese); Varela Editora e Livraria Ltda: São Paulo, Brazil, 1998. [Google Scholar]
- Foglia, T.A.; Jones, K.C.; Phillips, J.G. Determination of biodiesel and triacylglycerols in diesel fuel by LC. Chromatography 2005, 62, 115–119. [Google Scholar]
- Ghesti, G.F; Macedo, J.L.; Brandão, G.S.; Melo, O.P.M.; Luz, S.M.; Dias, S.C.L.; Dias, J.A.; Malafaia, F.R. Potencialidade Do Modelo Hatr/Pls Para O Monitoramento E Quantificação Da Produção De Biodiesel (in Portuguese). In Proceedings of the 3° Simpósio Nacional de Biocombustíveis, Rio de Janeiro, Brazil, 8–9 April 2010; Available online: http://www.abq.org.br/biocom/2010/trabalhos/22-7304.htm (accessed on 1 November 2012).
- Lira, L.F.B.; Vasconcelos, F.V.C.D.; Pereira, C.F.; Paim, A.P.S.; Stragevitch, L.; Pimentel, M.F. Prediction of properties of diesel/biodiesel blends by infrared spectroscopy and multivariate calibration. Fuel 2010, 89, 405–409. [Google Scholar]
- Pimentel, M.F.; Ribeiro, G.M.G.S.; Cruz, R.S.; Stragevitch, L.; Filho, J.G.A.P.; Teixeira, L.S.G. Determination of biodiesel content when blended with mineral diesel fuel using infrared spectroscopy and multivariate calibration. Microchem. J. 2006, 82, 201–206. [Google Scholar]
- Silva, A.G.B.; Pontes, M.J.C. Identificação de Fraude em Misturas de Diesel/Biodiesel Utilizando a Espectrometria NIR e Quimiometria. Proceedings of the X Jornada de Ensino, Pesquisa e Extensão JEPEX da UFRPE, Recife, Brasil, 20 October 2010; p. 1.
- Veras, G.; Gomes, A.A.; Silva, A.C.; Brito, A.L.B.; Almeida, P.B.A; Medeiros, E.P. Classification of biodiesel using NIR spectrometry and multivariate techniques. Talanta 2010, 83, 565–568. [Google Scholar]
- Balabin, R.M.; Safieva, R.Z. Biodiesel classification by base stock type (vegetable oil) using near infrared spectroscopy data. Anal. Chim. Acta 2011, 689, 190–197. [Google Scholar]
- Insausti, M.; Gomes, A.A.; Cruz, F.V.; Pistonesi, M.F.; Araujo, M.C.U.; Galvão, R.K.H.; Pereira, C.F.; Band, B.S.F. Screening analysis of biodiesel feedstock using UV-vis, NIR and synchronous fluorescence spectrometries and the successive projections algorithm. Talanta 2012, 97, 579–583. [Google Scholar]
- Godoy, S.C.; Ferrão, M.F.; Gerbase, A.E. Determination of hydroxyl value of soybean polyol by attenuated total reflectance/fourier transform infrared spectroscopy. J. Am. Oil Chem. Soc. 2007, 84, 503–508. [Google Scholar]
- Al-Gharabli, S.I. Determination of glucose concentration in aqueous solution using ATR-WT-IR technique. Sensors 2009, 9, 6254–6260. [Google Scholar]
- Parisotto, G.; Ferrão, M.F.; Muller, A.L.H.; Müller, E.I.; Santos, M.F.P.; Guimarães, R.C.L.; Dias, J.C.M.; Flores, É.M.M. Total acid number determination in residues of crude oil distillation using ATR-FTIR and variable selection by chemometric methods. Energy Fuels 2010, 24, 5474–5478. [Google Scholar]
- Li, X.; Xie, C.; He, Y.; Qiu, Z.; Zhang, Y. Characterizing the moisture content of tea with diffuse reflectance spectroscopy using wavelet transform and multivariate analysis. Sensors 2012, 12, 9847–9861. [Google Scholar]
- Khanmohammadi, M.; Garmarudi, A.B.; De la Guardia, M. Characterization of petroleum-based products by infrared spectroscopy and chemometrics. Trends Anal. Chem. 2012, 35, 135–149. [Google Scholar]
- Benar, P.; Gonçalves, A.R.; Mandelli, D.; Ferreira, M.M.C.; Schuchardt, U. Principal component analysis of the hydroxymethylation of sugarcane lignin: A time-depending study by FTIR. J. Wood Chem. Technol. 1999, 19, 151–165. [Google Scholar]
- Sabin, J.G.; Ferrão, M.F.; Furtado, J.C. Multivariate analysis to applied in the identification of antidepressants. Part II: Principal components analysis (PCA) and soft independent modeling of class analogies (SIMCA). Braz. J. Pharm. Sci. 2004, 40, 387–396. [Google Scholar]
- Gasso-Tortajada, V.; Ward, A.J.; Mansur, H.; Brøchner, T.; Sørensen, C.G.; Green, O. A novel acoustic sensor approach to classify seeds based on sound absorption spectra. Sensors 2010, 10, 10027–10039. [Google Scholar]
- Leardi, R.; Nørgaard, L. Sequential application of backward interval partial least squares and genetic algorithms for the selection of relevant spectral regions. J. Chemom. 2004, 18, 486–497. [Google Scholar]
- Sussulini, A.; Prando, A.; Maretto, D.A.; Poppi, R.J.; Tasic, L.; Banzato, C.E.M.; Arruda, M.A.Z. Metabolic profiling of human blood serum from treated patients with bipolar disorder employing 1H NMR spectroscopy and chemometrics. Anal. Chem. 2009, 81, 9755–9763. [Google Scholar]
- Kuligowski, J.; Carrión, D.; Quintás, G.; Garrigues, S.; De, L.G.M. Direct determination of polymerised triacylglycerides in deep-frying vegetable oil by near infrared spectroscopy using partial least squares regression. Food Chem. 2012, 131, 353–359. [Google Scholar]
- Geir, R.F.; Bjørn, G.; Olav, M.K. A method for validation of reference sets in SIMCA modeling. Chemom. Intell. Lab. Syst. 2004, 72, 101–109. [Google Scholar]
- Porte, A.F.; Schneider, R.C.S.; Kaercher, J.A.; Klamt, R.A.; Schmatz, W.L.; Silva, W.L.T.; Severo, W.A. Sunflower biodiesel production and application in family farms in Brazil. Fuel 2010, 89, 3718–3724. [Google Scholar]
- Ramos, M.L.A.; Nunes, R.A.; Maurício, M.H.P.; Maurício, G.M.; Araruna, J.T., Jr. Espectroscopia De Fluorescência Induzida Por Laser Aplicada À Rápida Identificação De Plumas De Hidrocarbonetos (in Portuguese). Proceedings of the 2° Congresso Brasileiro de P&D em Petróleo & Gás, Rio de Janeiro, Brasil, 15–18 June 2003; pp. 1–6.
- Zimper, U.; Aaltonen, J.; McGoverin, C.M.; Gordon, K.C.; Krauel-Goellner, K.; Rades, T. Quantification of process induced disorder in milled samples using different analytical techniques. Pharmaceutics 2010, 2, 30–49. [Google Scholar]
- Colthup, N.B.; Daly, L.H.; Wiberley, S.E. Introduction to Infrared and Raman Spectroscopy; Academic Press: San Diego, CA, USA, 1990; pp. 301–307. [Google Scholar]
|Class||Specification Sample||Identification||Numbers of Batches||Number of Spectra|
|Parameters||Moisture||Index Total Acidity||Total Glycerol||Free Glycerol||Methanol|
|Methods||AOCS Ca2e-84||AOCS Ca5a-40||EN 14105||AOCS Ca14-56||EN 14110|
|Maximum Standards (ANP 07/2008)||500 mg·kg−1||0.5 mg·KOH·g−1||0.25% Weight||0.02% Weight||0.2% Weight|
|Soybean A (BSa)||601||0.42||1.72||0.20||0.184|
|Soybean 1B (BSb)||310||0.35||0.15||0.013||0.154|
|Soybean 2B (BSb)||326||0.38||0.16||0.009||0.165|
|Soybean 3B (BSb)||305||0.32||0.14||0.011||0.147|
|Canola B (BCb)||433||0.51||0.26||0.018||0.179|
|Cotton A (BAa)||343||0.45||0.15||0.003||0.145|
|Cotton B (BAb)||356||0.42||0.17||0.006||0.167|
|Sunflower A (BGa)||458||0.49||0.24||0.009||0.145|
|Sunflower B (BGb)||489||0.43||0.11||0.012||0.153|
|Class||Specification of the Batch||Number of Factors||% of Cumulative Variance||Correctly Classified||Incorrectly Classified|
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license( http://creativecommons.org/licenses/by/3.0/).
|
<urn:uuid:0eeeb337-08d3-40ec-a62f-67655dce61f7>
|
CC-MAIN-2017-43
|
http://www.mdpi.com/1424-8220/13/4/4258/htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824824.76/warc/CC-MAIN-20171021171712-20171021191712-00436.warc.gz
|
en
| 0.863913 | 7,699 | 2.671875 | 3 |
The Lei Cheng Uk Han Tomb Museum is composed of an ancient brick tomb and of an exhibition hall adjacent to it. It is located in Sham Shui Po.
The tomb and gallery came under the management of the former Urban Council in 1969. The museum later became a branch of the Hong Kong Museum of History in 1975. As such, it is managed by the Leisure and Cultural Services Department of the Hong Kong Government. A newly built exhibition hall opened in 1988, when the tomb was declared a gazetted monument. The hall was refurbished in 2005. Details on the discovery and characteristics of the tomb, as well as bronze and pottery artifacts found in the tomb are on permanent display in the exhibition hall.
A 3D digital animation in the exhibition hall provides a detailed view of the interior of the tomb. Moreover, a 1:1 replica of the inside of the Lei Cheng Uk Han Tomb is displayed at the Hong Kong Museum of History.
There is an Exhibition Hall located next to the tomb. The first section is about food and drink in Han as it seems because most of what was found in the Han tomb is related to food. The display of this section begins with the old Chinese adage, 'food is the first necessity of the people'. There is a map depicting food distribution, a pictogram of rice distribution and a table of the major food groups. There are also three replicas of figurines. Two of the figurines are cooks, and another one is a farmer.
The second section is about the excavation of the Han tomb. The excavation process, the inside of the tomb and the archaeologists at work are shown with several photographs.The tomb's structure and layout are shown with the models and plans. This display also how the professionals dated the tomb by using the inscriptions on the bricks.
The third part of the gallery shows the artifacts found in the tomb. As the only Eastern Han Dynasty brick tomb ever found in Hong Kong, the Lei Cheng Uk Han Tomb has invaluable historic value containing 58 items found on site. Objects include cooking utensils, food containers, storage jars and models (a house, a granary, a well and a stove) made of pottery (50), as well as bowls, basins, mirrors, and bells made of bronze (8). No human skeletal remains were found.
Granary model found at the tomb
Ding found at the tomb
|
<urn:uuid:a4f80856-f22d-4e12-af00-190d6b40cb8a>
|
CC-MAIN-2017-47
|
http://hk-heritage.blogspot.com/2010/11/klei-cheng-uk-han-tomb-in-sham-shui-po.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806569.66/warc/CC-MAIN-20171122103526-20171122123526-00101.warc.gz
|
en
| 0.968514 | 491 | 2.859375 | 3 |
|Oracle® OLAP DML Reference
10g Release 1 (10.1)
Part Number B10339-02
An expression that has the DATE data type, or a text expression that specifies a date. See "TEXT-to-DATE Conversion".
In place of a DATE expression, you can specify a text expression that has values conforming to a valid input style for dates. Oracle OLAP automatically converts the values of the text expression to DATE values, using the current setting of the DATEORDER option to resolve any ambiguity.
The value of WEEKDAYSNEWYEAR specifies how many days of the new year there must be in the week for WEEKOF to consider it to be week 1 of the new year. For example, when January 1 is on a Wednesday, then the week of December 29 to January 4 has four days in the new year. WEEKDAYSNEWYEAR must therefore have a value of 4 or less for that week to be counted as week 1. This determination of week 1 affects the numbering of all weeks in the year.
Example 24-40 Finding Today's Week
The following statement sends the week of the year in which today's date falls to the current outfile.
When today's date is August 5, 1996, which is a Monday, this statement produces the following output.
|
<urn:uuid:1c416e1c-1311-4f93-b947-8e1660521f2d>
|
CC-MAIN-2014-10
|
http://docs.oracle.com/cd/B14117_01/olap.101/b10339/x_trap028.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021900438/warc/CC-MAIN-20140305121820-00094-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.865942 | 277 | 2.84375 | 3 |
Warwick Medical School and their researchers have been studying the long term mental effects of adolescence children who move from school to school frequently.
The research found that frequent moving from one school to the next ‘heightens the risk of developing psychotic-like symptoms by up to 60% and contributes to mental health, psychotic disorders and suicide problems in adulthood.’
For more information on this study: Frequent School Moves
School, teachers, peers, homework, hormones, and extra curricular activities all play a role in your child’s mental health. When children have to move frequently from school to school it puts a lot of extra pressure on the child.
Children worry about getting good grades, getting along with their peers, making teachers and parents happy, girl-boy relationships, peer pressure of taking drugs, drinking alcohol, or smoking cigarettes, bullying, homework assignments and chores at home.
Thousands of children are latchkey children meaning when they go home after school to an empty house, their parents are still at work, leaving their children to fend for their selves for hours alone.
For information on legal age restrictions by state see:
|
<urn:uuid:eec7a95f-d88f-4376-9eaf-4b926f84d6bb>
|
CC-MAIN-2014-35
|
http://www.examiner.com/article/increased-psychotic-symptoms-associated-with-frequent-school-moves?cid=rss
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830323.35/warc/CC-MAIN-20140820021350-00135-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.963096 | 231 | 3.046875 | 3 |
Type 2 Diabetes
Open as PDF
to About Diabetes
Diabetes is more than one disease, although
in each type high blood glucose is found. The types of diabetes
are defined by what has caused the blood glucose to be high.
If blood glucose is high, either there is not enough insulin,
or the insulin is not working effectively.
Insulin is a hormone secreted from the pancreas. Insulin
helps move the glucose from the blood into the cells. In type
2 diabetes the cells in the pancreas that produce insulin
may not produce enough, or the insulin that is produced is
not able to move the glucose from the blood, into the cells.
Normally when you eat, blood glucose rises, and this causes
insulin to be secreted by the pancreas. If the cells in the
pancreas are not producing enough insulin, or if the insulin
isn’t effective, blood glucose remains in the blood
causing it to be high. Over time, high blood glucose levels
may hurt your eyes, kidneys, nerves and heart.
The body tries to keep all systems in a balance or at a
steady rate, called homeostasis. For most people,
it is the release of insulin that allows the glucose to
enter the body
cells to provide energy needed for cellular activities. If
your pancreas isn’t producing enough insulin, medications
may help. If the insulin you are producing isn’t
working well, medications may help that as well.
People who are overweight sometimes become “insulin
insensitive” and the insulin they produce doesn’t
work well. Losing weight usually helps this condition.
Remember, this information doesn’t replace any that
your doctor may have given you or spoken to you about. Whenever
you have questions about your diabetes or how it is being
treated, talk to your doctor.
Diabetes | Food & Diabetes
| Medications & Diabetes
| Current Issue | Archive
|
<urn:uuid:03a0ff42-4316-4a39-9c15-13fa09474a7a>
|
CC-MAIN-2014-15
|
http://urbanext.illinois.edu/diabetes/about_04.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.902077 | 426 | 3.734375 | 4 |
Standing 1.2 metres tall at the shoulder and weighing 210 kilograms, woodland caribou are among the more formidable members of our Canadian wildlife. They’re also hardy, being the only large mammal able to eat lichen as a primary food source. Caribou are unique among members of the deer family in that both sexes grow antlers. Shown above is a woodland caribou photographed in Alaska.
Photo by Dean Biggins (U.S. Fish and Wildlife Service).
In the province of Nova Scotia there are no fewer than 42 lakes, islands, marshes, ponds, brooks and rivers named after the woodland caribou — a species once so populous in Atlantic Canada they were a force of nature. Nowadays, no more than 140 remain in the southeastern corner of the country, none of which are in Nova Scotia.
Woodland caribou can be found all throughout Canada’s north, but the populations can be differentiated geographically. We divide this iconic species into six separate populations, five of which still exist today. The 140 individuals mentioned above are all that are left of the Atlantic Caribou population, which once spanned the entirety of the Maritimes and beyond.
With the boisterous arrival of Europeans to Eastern Canada, the Atlantic caribou population began a slow but steady retreat northward, disappearing from Nova Scotia and New Brunswick in the last century and from Prince Edward Island as early as 1874. The remains of this once mighty population are now confined to the northern shore of the Gaspé Peninsula, clinging to life in the mountainous Gaspésie National Park. They constitute the only population of caribou south of the St Lawrence River.
In the 1950s, the national park contained as many as 750 caribou, but this number plummeted to 200 by the 1980s. The population estimate of 140 comes from a document published in 2006, but more recent estimates are lower still. The surviving population, genetically distinct from any other across Canada, continues to decline despite being protected from the hunting and habitat loss, which destroyed the Atlantic Caribou. To lose them means losing something that cannot be replaced, not truly.
The odds are very much against these remaining Atlantic Caribou. Their fight for survival has been a long one and it could end on the mountains of Gaspésie National Park, but I choose to believe we’ll see fit to safeguard this animal, if only for the sake of our gratitude toward it. For generations past, both before and after European settlement, the caribou granted early Canadians a way of life with their meat and their fur that we would be hard pressed to repay in full.
- A\J Editorial Board (19) A\J Editorial Board
- A\J Special Delivery (186) A\J Special Delivery
- Backstage at A\J (87) Backstage at A\J
- Current Events (216) Current Events
- EcoLogic (16) EcoLogic
- Food and Culture (29) Food and Culture
- Green Living (36) Green Living
- Made in Canada (23) Made in Canada
- Renewable Energy (59) Renewable Energy
- Shades of Green (15) Shades of Green
- Summer Reading Series (8) Summer Reading Series
- Sustainable A\J (58) Sustainable A\J
- The Green Student (19) The Green Student
- The Mouthful (14) The Mouthful
- The Wild Side (44) The Wild Side
- Think Global (21) Think Global
- Turtle Island Solidarity Journey 2018 (4) Turtle Island Solidarity Journey 2018
Popular on A\J
- Join an immersive educational experience with us through free virtual reality tours on Wednesday Feb 19th 2020 in D… https://t.co/yRBU7MKCCO — 1 week 2 days ago
- RT @noahjlpro: Situation has been resolved, apparently the Minister's police detail may have followed the group to the community c… https://t.co/VjnjwUMEYn — 1 week 5 days ago
- RT @noahjlpro: Sources suggest that OPP officers have shown up with several cruisers and possibly a bus with officers. no confirmation yet. — 1 week 5 days ago
|
<urn:uuid:16f026aa-5b6c-48c8-8a15-a94ed52b2c21>
|
CC-MAIN-2020-10
|
https://alternativesjournal.ca/community/blogs/shades-green/shades-green-retreating-caribou
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147054.34/warc/CC-MAIN-20200228043124-20200228073124-00199.warc.gz
|
en
| 0.927338 | 886 | 3.640625 | 4 |
Plants halt shifting sands
Nov 13, 2006
Physicists in Germany have turned their attention to an age-old challenge facing desert dwellers -- how plants can be used to stop moving sand dunes. The researchers defined equations of motion that describe wind velocities, vegetation growth, sand movement and how the shape of a desert landscape changes as plants grow in it. The result is a "fixation index" that predicts when plants will be successful at halting a dune. This could lead to a better understanding of the dynamics of coastal dunes and predict how these landscapes will evolve over time. (Phys. Rev. Lett. 97 188001).
Although plants are not common in areas with sand dunes, they play an important role in stabilizing the movement of sand and fixing the position of dunes. Indeed, there is often a relentless competition between plants and sand, which has been harnessed for thousands of years by people living in such regions to control the movement of dunes.
Orencio Durán and Hans Herrmann of the University of Stuttgart in Germany developed their equations using scientific observations of sand-dune behaviour in desert regions. For example, scientists have observed that crescent-shaped "barchan" dunes change into parabolic-shaped dunes when colonized by plants. This transformation is thought to be the first step in halting the movement of a sand dune.
The Stuttgart physicists have used the equations to define a "fixation index", θ, which is the ratio between the rate at which a sand dune erodes and how quickly a plant can grow and inhibit erosion. The equations reveal that plants can transform a barchan dune into a parabolic dune when θ has a value less than 0.5 and plant-growth trumps erosion. Conversely, if θ is greater than 0.5, wind-induced sand erosion stops plants from growing and a barchan dune can continue to move.
"Our results will help allow scientists to make long-term predictions (over thousands of years) about how coastal dunes evolve," Herrmann told PhysicsWeb. “They might even have an impact on environmental issues, like how to increase biodiversity." The research may also aid in protecting semi-arid regions that are threatened by desertification processes.
Durán and Herrmann now plan to repeat their calculations with different types of plants and for varying amounts of rainfall. They also hope to actually test their predictions in the desert.
About the author
Belle Dumé is a freelance science writer based in France
|
<urn:uuid:a163fb14-ed41-46d1-a7c9-cdcd7c722e7b>
|
CC-MAIN-2014-10
|
http://physicsworld.com/cws/article/news/2006/nov/13/plants-halt-shifting-sands
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021920399/warc/CC-MAIN-20140305121840-00036-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.93697 | 536 | 3.859375 | 4 |
We live in a world saturated in technology and media. Many of us have a device that connects us to the rest of the world. Our networks and relations are no longer limited to the city or community we live in, especially as we learn to navigate the Internet. We can now connect to other parts of the world we never have visited or known. We can access a substantial wealth of information with a simple Google search, bringing up thousands of websites, videos, and posts on our computer screens. We do not necessarily need to be experts to know something. Rather, we only need a stable Wi-Fi connection and the skills to navigate the digital world. People can connect instantly and share information about anything from daily experiences to educational resources to global and local events.
Social media sites such as Facebook, Twitter, and YouTube allow us to not just share our own everyday experiences but also share important developments occurring around the world. Some have even used social media to start a movement. We can readily see this with online social movements such as #BlackLivesMatter, #YesAllWomen, Kony 2012, and the use of social media during the protests and revolutions happening in the Middle East. These sites have provided an opportunity to create a space where people inform and share issues to a potentially global audience. SocRogueScholars itself was created for the purpose of empowering silenced voices and raising public awareness about social justice.
Like real-life spaces for conversations and movements, however, social media is not a perfect space. As people engage in these movements for social justice, they are also faced with resistance and opposition. Although constructive criticism and civil behavior would be ideal, it is not always the case. With #BlackLivesMatter and #YesAllWomen, for example, it is not at all uncommon for people to resort to demeaning, victimizing, and verbally violent tactics. Disagreement can easily turn into retaliation, and the issue can be made worse as a result of online anonymity. There is a complicit lack of responsibility or accountability for people who prefer to have their real-life identity concealed online. It is important to emphasize that this inclination toward aggression and retaliation is not limited to one side or group. They are tactics that can be used on all sides of social media and of justice.
As we explore the issues of navigating online conversations and movements, we need to remind ourselves that these spaces are not necessarily positive, nor are they solely negative. It depends largely on what we bring to the virtual table, and how we approach others who might not agree with us. Social media can offer us a space to raise awareness and bring different people together under a cause, but it can also work to divide people and to spread misinformation.
|
<urn:uuid:d840aa07-0e61-42d0-a276-4cd72c4b130c>
|
CC-MAIN-2017-43
|
https://socroguescholars.wordpress.com/2016/05/21/from-information-to-justice-on-the-web/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825308.77/warc/CC-MAIN-20171022150946-20171022170946-00184.warc.gz
|
en
| 0.952635 | 551 | 3.203125 | 3 |
Article Title: Designing eLearning for Cognitive Ease
Publication: Integrated Learnings: eLearning
Summary: Daniel Kahneman dedicates a chapter to the topic of cognitive ease in his book, Thinking, Fast and Slow. This article suggests how some of the findings Kahneman presents can benefit eLearning design.
The amount of time spent on eLearning may influence mood, too. Long lessons may leave learners wondering if they’ll ever end, while a series of short lessons can help create a sense of progress. Shorter lessons can also help prompt learners take a brief break and re-energize if they’re feeling mentally fatigued.
|
<urn:uuid:b266e58b-2cc0-4faa-9d21-1d2ada527c9b>
|
CC-MAIN-2014-15
|
http://shelleygable.wordpress.com/2013/05/10/designing-elearning-for-cognitive-ease/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00301-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.87249 | 135 | 2.53125 | 3 |
How To Improve Brain Health Naturally. The term mental health describes a person’s emotional and. 4 ways to boost your brain power.
Put words to your feelings. Students who eat breakfast perform better than those who don’t. How to improve brain health naturally?
Develop A Regular Nighttime Routine That Relaxes Your Body Before Sleep And Works For You.
Movement boosts blood flow to your brain and helps nerve cells in the part of the brain that controls your memory. Top three ways to naturally improve brain health. When these exercises are practiced with high volume, it helps restore.
Here Are Some Types Of Therapy That Help Heal Brain Damage Naturally By Harnessing Neuroplasticity:
If you find yourself not getting the recommended amount, it. By taking conscious steps to protect your brain health, you can minimize memory loss. Aim to get at least 30 minutes of aerobic exercise five days a week.
There Should Be Implemented For Your Overall Health As Well As Improving Your Brain Functionality.
However, people should plan a good and simple story to stimulate their own brain and improve their memory. There are some theories that sleep helps clear abnormal proteins in your brain and consolidates memories. Research shows that physical exercise is one of the best things you can do for brain health.
Gaba Is A Neurotransmitter Which Is Essential In Regulating Communication Between Brain Cells.
Have you seen small capsules containing fish oil? How to improve brain function | enhance your nervous system naturally the brain is central to everything:. So start the morning with a healthy breakfast.
Let Us Show You How.
Students who eat breakfast perform better than those who don’t. Let us show you how. Get a fast start with breakfast.
|
<urn:uuid:0782abec-eeb9-49c3-b08f-6b91a0bf7f5b>
|
CC-MAIN-2023-23
|
https://nuovavetro.com/how-to-improve-brain-health-naturally/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649302.35/warc/CC-MAIN-20230603165228-20230603195228-00710.warc.gz
|
en
| 0.902324 | 358 | 2.6875 | 3 |
The National Museum of African American History and Culture had a noose placed on the grounds of a Smithsonian museum, for the second time in less than a week.
The first incident occurred on Saturday with a noose being placed on a tree outside of the Hirshhorn Museum. On Wednesday, a noose was put in the middle of an exhibit on segregation. At this time, it’s not known whether the incidents are related.
“The noose has long represented a deplorable act of cowardice and depravity; a symbol of extreme violence for African Americans,” Museum director Lonnie Bunch said. “Today’s incident is a painful reminder of the challenges that African Americans continue to face.”
Ultimately, it was a strong statement by Bunch.
And, the Smithsonian’s secretary, David Skorton, sent an institution wide email, according to Complex. He said that everyone at the Smithsonian’s museums and research centers condemns the racist attack.
“Our museum is a place of learning and solace, a place to remember, to reflect and to engage in important discussions that help change America,” Bunch said. “This was a horrible act, but it is a stark reminder of why our work is so important.”
The chaos from the second incident caused the museum to be closed up approximately three hours, according to Complex.
“The Smithsonian family stands together in condemning this act of hatred and intolerance, especially repugnant in a museum that affirms and celebrates the American values of inclusion and diversity,” Skorton said. “We will not be intimated. Cowardly acts like these will not, for one moment, prevent us from the vital work we do.”
All in a week’s work, we’ve seen one of the most known African Americans, Lebron James, become a victim of racism, and also the entire race as well with “museum-gate”.
Let’s do better next week, America.
|
<urn:uuid:7739bf24-4bd2-4ea5-8ebb-d9cf5f67e726>
|
CC-MAIN-2023-50
|
https://www.hypefresh.com/african-americans-are-still-being-reminded-of-racism/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00625.warc.gz
|
en
| 0.953449 | 431 | 2.6875 | 3 |
- Read a few of our sample essays on your topic
- Develop your own ideas
- Your paper will practically write itself
So why would one argue that free trade is the best policy? For starters, it increases efficiency, competition, and variety available for consumption. This is mainly due to the absence of tariffs and trade barriers that complicate and slow down the market. Free trade also yields the consumption and production effects. With the consumption effect, countries are given the opportunity to trade at different prices; likewise, the production effect illustrates how trade is making you more efficient in the context of international trade. After all, gains from trade are analogous with technological progress.
Another way to look at the advantages of free trade is to look at the disadvantages of protectionism. Tariffs and quotas are basically just "blunt" policy tools that have several, better alternatives. Moreover, protection introduces a high possibility of mutual loss.
Politics is a huge factor, possibly even the determining factor in the rise and fall of free trade. It is the root of motivation for some regimes to design their economic policy after free trade in an effort to last longer and to encourage more comprehensive adherence to rules. Perhaps the most sensible way to do this is through the hegemonic stability theory. Free trade regimes are most stable and strongest when the system has a hegemonic distribution of power. The single most powerful state will choose to establish free trade because such a system will feed its own strength and thus stabilize the system (Brawley 157).
Quotes talked about in this paper
- Gore and other supporting politicians are expecting when they propose a "new way on trade." ...
Names mentioned in this research paper
Suranovic, Gore, Bibliography Brawley,
Locations mentioned in this paper
Keywords included in this paper
trade, free trade, free and fair, fair trade, trade policy, international trade, market, level playing field, Trade Federation, hegemonic stability theory, labor and materials, protectionism, rules and regulations, tool, International Political Economy, market economy, poor countries, economic policy, world market, international market, Another way, international agreements, technological progress, living standards, pollution, import taxes, equal opportunities, economic competition, collective good, working conditions, New ideas, long run, technical assistance, general public, tariffs, interests, regimes, the free, stable, the status quo, a level, the market, policies, the working class, GATT, a question, side effects, polluting, redress, sensible,
|
<urn:uuid:b6a4754e-5c3b-4d9f-bcc1-248293a26837>
|
CC-MAIN-2013-20
|
http://www.directessays.com/viewpaper/64776.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697843948/warc/CC-MAIN-20130516095043-00098-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.926259 | 516 | 2.578125 | 3 |
This is a very common scene,——. You start running regularly, but before you realize the sports injury. If you feel uncomfortable, you should put an ice pack on the sofa.
So how to prevent sports injuries caused by running, so that you can continue to run safely?
Like other sports, it takes time and thinking before you start running. Unfortunately, most of us only think about how to prevent injury after knee discomfort. If you study this problem from the beginning, you will be one step ahead of many runners.
Every one of us is different, so it’s hard to say how likely you are to be injured, because some people are more likely to be injured than others. A running injury survey in the Journal of Sports Medicine shows that the annual running injury rate is 37% to 56%.
One thing is clear: about 50% to 75% of sports injuries are caused by overuse of the knee and repeating the same movements. Injured runners must pay attention to the difference between “complete recovery” and “no injury”, and understand the “early recognition of symptoms of overuse.” One
So let’s take a look at how to prevent sports injuries caused by running.
1. Strengthen muscles
Many people rely heavily on the power of running shoes. However, recent research suggests that your shoes may not be the key to injury. The review of runner injury research does not even encourage the wearing of orthopedic and specially designed running shoes, but should focus on specific strengthening exercises for weak areas.
Maybe you should worry less about what to wear on your feet (unless it’s flip flops) and pay more attention to the strength of your lower body. This means you need strength training to prevent potential weakness from causing injury. If you feel that you need to run and are worried about injury, you may need to find a professional sports rehabilitator to strengthen your body to prevent further running injuries.
2. Maintain a good running posture
You may think that there is no “correct” way of running, and running is natural. But in fact, there are some mistakes in form and technology that you should avoid. For example, a heel impact may cause injury when stepping on it.
There is evidence that the damage caused by a forefoot impact (when your sole touches the ground first) is much less than the injury caused by a rear foot impact (when your heel touches the ground first). Therefore, you may need to carefully consider how you run and how to strengthen or weaken specific parts of your lower body. Although it is technically possible to change the foot stroke from the back foot to the front foot, it is still uncertain whether this change will cause injury before the lower limbs become stronger.
Lack of exercise is a pain in the lower body of many people. If your calves and feet lack exercise, and you have tight hip flexors, you may be more prone to injury. Remember to stretch after running, it can not only help you improve flexibility. Stretching the muscles when they are warm is the safest way to improve flexibility. Remember to stretch different parts of the body, especially calf lift and extension, wall push/push-ups, hip flexor extension, hip opening hamstrings (such as clam extension, upper body squat and leg lift) and quadriceps extension .
It is also important to remember that all muscles are connected. Back problems usually cause lower body injuries, so don’t forget to stretch the upper body: chest, shoulders and back to minimize upper body tension.
4. Correct training and rest
Maybe at the moment you think that a day is not complete without jogging, and running is an addictive exercise. But overtraining is also a common cause of injury. Running 8 to 10 miles a day, enduring any pain, seems to turn yourself into a running machine, but it will only make you unable to continue running.
Don’t make low-level mistakes. Make sure you train wisely, listen to your body and give it proper rest. Use correct methods such as stretching and foam rollers to relax and rest to ensure that you get enough quality sleep (at least 7 hours). The usual “overtraining” situation is likely to be that the body is actually recovering.
5. Water and nutritional supplements
For running and recovery, nothing is more important than adequate water and nutrition. For those who want to lose weight, a low-calorie diet plus a few miles of running seems to be a good solution, but in fact, doing so is harmful to your metabolism and muscles.
Even if you lose weight, you need to eat to maintain your metabolism. Instead of damaging your long-term health or being hurt by an unsustainable lifestyle, it is better to lose weight gradually. Make sure you eat enough food and water to run and recover.
Eat high-fiber carbohydrates before or the night before running to fill the muscles with the necessary glycogen (energy). After running, you must supplement your blood sugar levels with foods rich in carbohydrates and protein to maintain muscle growth.
There is nothing like the pleasure of dopamine after running that will motivate you to hit the road next time. Therefore, you must provide the body with the most suitable things at the most suitable time, so that you can run faster in the next run.
|
<urn:uuid:be3960a5-d05c-430a-a701-36cffb323a05>
|
CC-MAIN-2023-23
|
https://www.voltrx.com/news/put-on-your-running-shoes-and-run-along-the-right-trajectory-of-life/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648000.54/warc/CC-MAIN-20230601175345-20230601205345-00547.warc.gz
|
en
| 0.954632 | 1,105 | 2.59375 | 3 |
The Hidden Secrets Of Manuka Honey
Before the advent of science and lab tests, our ancestors used honey to treat wounds, boils, and burns. While our ancestors believed the substance has natural healing properties, it was not until the 19th century that scientists were able to prove this. Today, as our ancestors did, we still use it for its many health benefits which of course include the treatment of wounds, mild burns, and boils.
Since researchers discovered that this bee-produced substance has natural antibacterial qualities, several other studies have been carried to uncover other health benefits that humans can enjoy from it. People that regularly consume it are protected from damages that are caused by bacteria.
Some types have been found to boost the production of special cells that repair tissue damages caused by infection. While most honey is known to have health benefits, the Manuka honey itself is thought to be more potent than other types.
What is Manuka Honey and Where is it Sourced From?
Manuka honey is a dark variety produced by bees that pollinate the flowers on the manuka plant. This plant is native to Australia and New Zealand which is why the honey is mostly produced in these two countries. That said, both Manuka honey from Australia and those from New Zealand may vary slightly because of factors like the method of extraction.
Where it is produced is not its only distinguishing factor. The nectar of the flowers of the Manuka bush gives it its distinct non-peroxide activity properties. Like we said earlier, the Manuka is believed to be more potent than other types of honey because it has active compounds that are responsible for its stronger antibacterial properties. Dihydroxyacetone (DHA), Methylglyoxal (MGO), and Leptosperin are the key compounds present in the Manuka that give it its unique antibacterial properties.
Since several factors can affect the potency of this naturally occurring substance, the Manuka inclusive, honey producers created a scale which they use to rate its potency. This rating is known as the ULF rating. The rating reflects the concentration of the three key compounds we mentioned earlier.
For this variant to be used for therapeutic purposes, it must have a ULF rating of 10+. So, the higher the rating, the higher its potency, and cost.
Click here to learn more about the ULF rating.
Health Benefits of Manuka Honey
The following are some of the health benefits of the Manuka:
Helps Wound to Heal Faster
As mentioned earlier, our ancestors used honey to treat wounds sores, boils, and burns. Today, it is still used effectively for such. In fact, in 2007, the United States Food and Drug Administration (FDA) approved the use of Manuka honey for wound treatment.
Aside from the antioxidant and antibacterial properties it offers, its use keeps the wound moist which prevents microbial infections. Several studies have revealed that this honey amplifies the regeneration of tissue, thereby enhancing the healing of wounds. They also show that it even decreases damage from burns.
For instance, one study was carried out for two weeks on 40 participants who had non-healing wounds. It aimed to investigate the effects of the substance on their wounds. The results of the study revealed that an impressive 88% of the wounds decreased in size. Also, it created an acidic wound environment which is known to favour healing.
Treatment of Sore Throat
The Manuka honey may be effective in the treatment of a sore throat. Its antibacterial and antiviral properties can kill off the bacteria that cause the pain and reduce inflammation, thereby soothing the sore throat. You will find it interesting to know that it not only attacks the bacteria responsible for the discomfort but that it also soothes the pain by coating the inner lining of the throat.
A study done on patients receiving chemotherapy for head and neck cancer aimed to observe the effects of Manuka honey consumption on Streptococcus mutans, a certain type of bacteria that causes sore throats. The scientists found that the number of Streptococcus mutans decreased significantly after the patients began consuming it.
Visit https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6615571/ to learn more about Streptococcus mutans.
Furthermore, it decreases the oral bacteria that are known to cause mucositis. Mucositis is a side effect that people suffer from chemotherapy and radiation treatment.
Improves Oral Health
Reports from the Centre for Disease Control and Prevention (CDC) say that nearly 50% of Americans suffer from some form of periodontal disease. This is largely caused by harmful oral bacteria that are not effectively gotten rid of.
To prevent tooth decay and maintain overall good oral health, we must minimize the growth of these harmful bacteria that cause plaque to form on our teeth. However, we must take caution as we do so because there are good oral bacteria that help to keep our mouths healthy.
Research has shown that this honey fights those harmful bacteria that cause tooth decay, gum inflammation, and plaque formation. They reveal that it effectively inhibits the growth of bad oral bacteria like actinomycetemcomitans and gingivitis.
Acne is one of many skin problems that people experience. It can be caused by several things including stress, poor diet, hormonal changes, bacteria growth, or reaction to certain skincare products. Manuka honey has antimicrobial properties which when combined with a low-pH skin product is believed to combat acne.
Since it is known to attack bacteria, applying it topically can rid your skin of all acne-causing bacteria and speed up the healing process. Aside from attacking the bacteria responsible for the condition, it also has anti-inflammatory properties. What this means is that in addition to getting rid of the bacteria, it will help to decrease the inflammation that comes with acne.
Combat Digestive Problems
Many people suffer from digestive problems, especially irritable bowel syndrome (IBS). Common symptoms that are associated with this condition include diarrhea, constipation, irregular bowel movements, and abdominal pain. Research suggests that regularly consuming this honey may help to decrease these symptoms.
Research shows that it effectively reduces inflammation and improves antioxidant status in rats suffering from ulcerative colitis and IBS. It also attacks Clostridium difficile, also known as C. diff, a bacterial infection that causes severe bowel inflammation and diarrhea.
|
<urn:uuid:3ac1eba3-e87e-45c5-abd3-e0c71f4797cd>
|
CC-MAIN-2023-14
|
https://manukahoneyorganic.com/what-can-manuka-honey-do-for-you/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00323.warc.gz
|
en
| 0.956375 | 1,327 | 2.53125 | 3 |
Hundreds of Thousands of Muslims will be performing I'tikaaf this year, just like they do every year. Whilst it is a rewarding experience both mentally and spiritually, there are many aspects of this process that most Muslims are not completely familiar with. This points back to the basic question: what is Ramadan? It is a month of opportunities for Muslims to seek forgiveness and repentance for their sins; a time to devote oneself in prayer and establish a connection with Allah (swt); a month of unlimited blessings, offerings, and rewards for Muslims to avail and if everyone knew all the benefits of being in I'tikaf, they would be sure to perform it every year.
Those who are fortunate enough do so take part in I'tikaf in the last ten days of Ramadan. In these ten days, people spend the entirety of their day in a mosque. During these ten days of Ramadan people devote themselves to religious acts and responsibilities such as Salah, Sawm (fast), Taraweeh, reciting Quran, preaching about Islam to other fellow Mu’takifs (the people performing the I'tikaf) and so on.
One of the many aims of I'tikaaf is to be secluded from worldly distractions. I'tikaaf allows Muslims to stay away from worldly desires and such activities that can damage their fast or the overall enthusiasm to obtain blessings in the month of Ramadan. Another benefit is that it gives the Mu’takifs adequate time to assess their past and present. While a person is in Aitekaf they are reciting Quran and learning how to live life in the most appropriate manner; they can decide to change the way they live right there in the Aitekaf.
I'tikaaf also helps a person become mature, helpful to others, polite, a good decision maker and an overall good example of how a Muslim should be. During I'tikaf a person feels more relaxed and trouble free because they are away from the outside world and they don’t know what is happening in the neighbourhood, the city or the country. Thus, they get to focus wholly on the responsibilities of I'tikaaf and completing it successfully.
It is important for every Mu’takif to keep recalling the purpose of Itikaf. One can only stay motivated and on the right track during I'tikaaf if they know why they undertook it in the first place. Just like paying Zakat is a means of purifying wealth; performing I'tikaaf is a means of purifying the soul.
Muslim Aid would just like to emphasize the importance of paying Zakat. Ramadan is an excellent time to make a donation.
So let’s all aim to perform I'tikaaf this Ramadan and keep the entire Muslim Ummah in our thoughts and prayers. Be sure to check out the Muslim Aid Blog for more information on Ramadan and other Islamic topics.
|
<urn:uuid:fcdebadd-681d-4643-9deb-ef2e6c37720a>
|
CC-MAIN-2020-16
|
https://www.muslimaid.org/media-centre/blog/reenergize-the-soul-of-aitekaf/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00268.warc.gz
|
en
| 0.950753 | 598 | 2.734375 | 3 |
The fact that we are re-creating religion for ourselves after centuries of suppression makes us very
eclectic and very concerned with the "rightness" of a particular thing for the individual. So when you see some
people calling it a religion and others not, when you see it capitalized in some instances and not in others, don't be
confused - we're all still basically talking about the same thing.
Neo-Paganism is any of several spiritual movements that attempt to revive the ancient polytheistic religions of Europe and
the Middle East. These movements have a close relationship to ritual magic and modern witchcraft.
Neo-Paganism differs from them, however, in striving to revive authentic pantheons and rituals of
ancient cultures, though often in deliberately eclectic and reconstructionist ways, and by a particularly
contemplative and celebrative attitude.
Typically people with romantic feelings toward nature and deep
ecological concerns, Neo-Pagans centre their dramatic and colourful rituals around the changes of the
seasons and the personification of nature as full of divine life, as well as the holy days and motifs of the
religions by which their own groups are inspired.
Modern Neo-Paganism has roots in 19th-century Romanticism and activities inspired by it, such as the
British Order of Druids (which, however, claims an older lineage). Sometimes associated with extreme
nationalism, Neo-Pagan groups and sentiments were known in Europe before World War II, but
contemporary Neo-Paganism is for the most part a product of the 1960s. Influenced by the works of the
psychiatrist Carl Jung and the writer Robert Graves, Neo-Paganists are more interested in nature and
archetypal psychology than in nationalism.
Neo-Paganism in the postwar decades has flourished particularly in the United States and the United
Kingdom and in Scandinavia. Some of the major Neo-Pagan groups are the Church of All Worlds, the
largest of all the pagan movements, which centres on worship of the earth-mother goddess; Feraferia,
based on ancient Greek religion and also centred on goddess worship; Pagan Way, a nature religion
centred on goddess worship and the seasons; the Reformed Druids of North America; the Church of the
Eternal Source, which has revived ancient Egyptian religion; and the Viking Brotherhood, which
celebrates Norse rites. Beginning in the late 1970s, some feminists, open to feminine personifications of
the deity, became interested in witchcraft and Neo-Paganism.
|
<urn:uuid:0030df59-2ff3-4c07-8eb5-718c6fcc99b1>
|
CC-MAIN-2017-34
|
http://lost-civilizations.net/celts-wicca-paganism-page-6.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104160.96/warc/CC-MAIN-20170817210535-20170817230535-00668.warc.gz
|
en
| 0.951649 | 544 | 2.59375 | 3 |
When people talk about the state of California, most would think that the big cities of Los Angeles or San Francisco are the capital city. Â These people would be surprised to know that the city of Sacramento is the state capital. Â Although Sacramento is also a large city, it’s just not one of the most popular cities to be considered as the state capital.
Back in 1849, the original capital city for the state of California was Monterey. Â During this time, the first ever Constitutional Convention and state elections were held in this city. Â It was also agreed upon during the said convention that San Jose will be the new capital city after Monterey. Â It stayed this way until 1851, and a year after the state capital was moved again to the city of Vallejo. Â After another year, Benicia became California’s state capital. Â Only after this city did Sacramento became the official capital of the state of California. By 1879, it was resolved by the Constitutional Convention to make Sacramento the permanent capital city of California.
Back in those times, Sacramento was the obvious choice for the state capital. Â During the Gold Rush period, people from across different states would pass through Sacramento for their activities. Â Sacramento back then attracted many people which stimulated and boosted the city’s economic and business climate. Â Most of the powerful politicians of the time were based in Sacramento making it an obvious choice for the state’s capital. Â The location of Sacramento was also a perfect choice back then against possible invasion from foreigners because it was not near the ocean. But with major rivers nearby and its proximity to other major cities like San Francisco, Sacramento thrived to become a bustling city in those times. With all these factors involved, the city of Sacramento became the state capital of California.
|
<urn:uuid:7418ff03-c6e6-4c51-a7b6-ad7be5b334b5>
|
CC-MAIN-2017-30
|
http://www.knowswhy.com/why-is-sacramento-the-capital/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424909.50/warc/CC-MAIN-20170724182233-20170724202233-00041.warc.gz
|
en
| 0.980016 | 379 | 3.71875 | 4 |
For the first time, an off-shore attempt is being made to drill into the Chicxulub Crater and extract samples from the “Peak Ring,” the layer of rock where earth rebounded after the apocalyptic asteroid hit the earth, apparently causing the dinosaurs and 70% of other life forms to become extinct during the Cretaceous period, 66 million years ago. The estimated 18 km wide asteroid crashed into Earth, creating a 100 km across and 30 km deep crater. Geological surveys reveal an arching chain of mountains on the ocean bed where the rim is located.
Geophysicist Sean Gulick from the University of Texas, Austin and co-chief of the $10 million project says,
Chicxulub is the only preserved structure with an intact Peak Ring that we can get to. All the other ones are either on another planet or they’ve been eroded … We’re 30km off-shore, which allows us to re-supply easily. We’ve also timed the project so that we’re pre the hurricane season. Hence, starting now and trying to finish before June.
An international coalition of researchers are on a lift boat scientific research platform drilling 30 kilometers off shore from the port of Progreso on the Yucatan Peninsula. The lift boat, dubbed Myrtle, is a 137-foot mission-specific drilling platform held steady by three 6-foot wide pylons. For sixty days, the team will be drilling 24 hours a day. Layers of sediment and rock will be extracted unearthing prehistoric biology and planetary geology from the center of the crater.
Using a tungsten–carbide roller cutter, researchers will begin by drilling to 500m meters, cutting through the sediment to the limestone. At 500 meters, the hole will be cased in steel and drillers will switch to a diamond-tipped bit that can be used to retrieve 3-meter long cores. The core samples will be quickly frozen at -80-degrees Celsius to preserve them for research purposes. Though Myrtle has labs for initial investigation, the main study will be conducted after the core samples have been sent to the IODP’s repository in Bremen, Germany.
At 550 m, there is the possibility of finding algae and black shale from 55 million years ago. Upon reaching 550-600m, there is hope of finding evidence of life forms that survived or formed after impact and documenting the size and number of species. The impact layers should be found at 650-800 m. Chunks of rock, impact melt, sediment and some ash crystals of shocked quartz may appear.
Finally, at 800-1,500, the target depth and prime Peak Ring should appear with its volcanic granites and light minerals with fracturing of the rock filled with mineral veins, part of the hydrothermal system. And, the most important part of the process, the search for microbial DNA of chemosynthetic microbes that live in the crack may be extracted.
In that section, the big excitement is, “How did life come back at Ground Zero?” Was it the specialists or the generalists? Is there any clue to what organisms repopulated first, as opposed to what the environmental consequences were for the ocean?
Professor Joanna Morgan, co-lead investigator from Imperial College of London says,
We want to know where the rocks that make up the Peak Ring come from. Are they from the lower, mid or upper crust? Knowing that will help us understand how long craters are formed, and that’s important for us to be able to say what was the total impact energy, and what was the total volume of rock that was excavated and put into the Earth’s stratosphere to cause the environmental damage.
The team’s drilling strategy allows multiple chances to get to the 1,500-meter goal, even if a bit gets stuck or they have to abandon a hole and drill a new one.
Gulick sums up,
Most things just didn’t worry about the high CO2 time, and kind of made it through. However, if you look today, at our current increasing CO2 values, already it’s looking like we have a pretty big extinction event in the making. Is that something to do with vulnerability? Is that something to do with the specific kill mechanisms? We should understand this better, given that we’re kind of in the middle of an experiment ourselves.
That’s something to ponder if we don’t want history repeating itself.
|
<urn:uuid:e355176c-335c-431e-95be-4b8f6515c8fb>
|
CC-MAIN-2020-16
|
https://mysteriousuniverse.org/2016/04/drilling-begins-in-the-gulf-of-mexicos-dinosaur-crater/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00486.warc.gz
|
en
| 0.92958 | 944 | 3.265625 | 3 |
In general there are four types of ignition systems that are popularly used in cars, namely;
- Conventional ignition system, is the model of the ignition system that was first designed. This system relies on distributor and platinum mechanics as the main features. This system, we will discuss further below.
- The CDI ignition system, is an ignition model that uses the principle of discharge (Discharge) on the Capacitor to produce voltage on the coil, this system also still uses platinum as a circuit breaker.
- The transistor ignition system, is a semi-electronic ignition system that no longer uses platinum as a circuit breaker but instead uses transistors as circuit breakers.
- DLI ignition system, as the name implies the less ignition distributor no longer uses distributor components to distribute the current to each spark plug. This one system is most commonly found in modern machines today. For more details about DLI, you can read how DLI ignition works.
|
<urn:uuid:80f1d597-9fbe-4157-9043-bddb70ce650f>
|
CC-MAIN-2020-24
|
http://dcwestyouth.com/various-types-of-ignition-systems/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00515.warc.gz
|
en
| 0.921209 | 196 | 3.3125 | 3 |
Author(s): Castellsagu X
Abstract Share this page
Abstract Cervical cancer is the most common cancer affecting women in developing countries. It has been estimated to have been responsible for almost 260 000 deaths annually, of which about 80\% occurred in developing countries. Persistent infection by certain oncogenic HPV types is firmly established as the necessary cause of most premalignant and malignant epithelial lesions of the cervix and of a variable fraction of neoplastic lesions of the vulva, vagina, anus, penis, and oropharynx. There are more than 100 known HPV genotypes, at least 15 of which can cause cancer of the cervix and other sites. HPV 16 and 18, the two most common oncogenic types, cause approximately 70\% of all cervical cancers worldwide. HPV, especially genotypes 6 and 11, can also cause genital warts. HPV is highly transmissible and it is now considered the most common sexually transmitted infection in most populations. Although most women infected with the virus become negative within 2 years, women with persistent high-risk HPV infections are at greatest risk for developing cervical cancer. Since the identification of HPV as the necessary cause of cervical cancer, HPV-based technology has become the centre of novel primary and secondary cervical cancer prevention strategies by the introduction of HPV testing in screening and of HPV vaccines in preadolescent girls and young women. If implemented widely and wisely the deployment of these protocols has the potential to complete Papanicolaou's goal of cervical cancer eradication by extending the benefits of prevention to the developing populations of the world.
This article was published in Gynecol Oncol
and referenced in Journal of Cancer Science & Therapy
|
<urn:uuid:e4e5cc97-ddec-407c-931f-60a4d4d92b53>
|
CC-MAIN-2017-39
|
https://www.omicsonline.org/references/natural-history-and-epidemiology-of-hpv-infection-and-cervical-cancer-358914.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687740.4/warc/CC-MAIN-20170921101029-20170921121029-00399.warc.gz
|
en
| 0.948863 | 345 | 3.328125 | 3 |
ALS offers aquatic toxicity testing (WET Testing) in effluent and receiving waters utilizing both vertebrate and invertebrate freshwater species.
Both acute and chronic bioassays are performed using Ceriodaphnia dubia and Pimephales promelas (fathead minnows) to determine the toxicity of the water.
Ceriodaphnia dubia are freshwater organisms used in both acute and chronic toxicity testing. They are most commonly known as water fleas and inhabit lakes, ponds and marshes throughout much of the world. Neonates utilized in acute toxicity tests are introduced into five dilutions of the sample for 48 hours. Mortality and simple water chemistry are monitored at test start, after 24 hours, and at test end. At test completion, a statistical program (CETIS) is used to generate a LC50/TUa (the concentration at which 50% of the organisms are affected).
In chronic toxicity testing, Ceriodaphnia neonates are monitored for 7 days or until 60% of the control organisms have three broods of offspring. During the test, organisms are placed into fresh sample dilutions daily and mortality and offspring are monitored and recorded. At test completion, a statistical program is used to generate the LOEC (lowest observable effected concentration), NOEC (no observable effected concentration) and IC25.
Pimephales promelas commonly known as fathead minnows are cultured in house and are maintained in a recirculating system. P. promelas used for acute testing may be 1-14 days of age and are introduced into 5 sample dilutions for 48 hours. As with Ceriodaphnia, P. promelas undergo the same monitoring and statistical evaluation.
Pimephales promelas used in chronic testing must be <1 day old. Test chambers are cleaned and new sample media is introduced daily. Mortality is monitored for 7 days and at test end, the organisms are dried to determine if organism growth was affected. CETIS, a statistical program, is used to generate the LOEC, NOEC and IC25.
|
<urn:uuid:aa9dee7d-304e-4859-8490-65ce4749885e>
|
CC-MAIN-2017-26
|
https://www.alsglobal.com/services-and-products/environmental/water-quality/aquatic-toxicity-testing-wet-testing
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320040.36/warc/CC-MAIN-20170623082050-20170623102050-00534.warc.gz
|
en
| 0.924821 | 429 | 2.921875 | 3 |
In this course, critique is defined as a detailed, objective analysis of a work of graphic design and its effectiveness. Critique is an integral part of the making process for designers: it’s not just how we determine if a work of design is or is not successful, it’s how we move our work forward. Critique is also where other voices and opinions can be brought into the design process. This course will introduce a lexicon (or vocabulary) in order for you to demonstrate clearer and more considered ways of talking about graphic design in the context of critique. Through this lexicon and a provided Critique Framework, we will model and then practice objectively describing how well a work of design functions based on specific, clearly articulated criteria. By refining this skill, you will enhance your ability to communicate about design with peers, colleagues, and clients. This course is essential to anyone looking to develop and refine a critical vocabulary around talking about art and design, such as: - practicing graphic designers at all levels - future and current students of art or design programs - teachers teaching visual art, design, or related subjects - any stakeholder in a design project that needs to communicate effectively with designers on their team. This course is recommended to learners enrolled in either the Graphic Design Specialization or UI/UX Design Specialization on Coursera to further develop their skills in graphic design. You may take this course before, during, or after completing either Specialization. Note this is not an art appreciation course for a general audience. While no prior experience is required to begin, the knowledge and skills taught in this course is useful only if you have some relationship to design in a professional context, i.e., practicing, studying, or teaching design, or working directly with designers. No specific software or tools are required to complete this course.
Help others make their choice. Be the first one to leave a review
|
<urn:uuid:053b62d1-f72d-41b5-b914-e83438c37015>
|
CC-MAIN-2023-40
|
https://edleed.com/course/the-language-of-design:-form-and-meaning-89941
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506646.94/warc/CC-MAIN-20230924123403-20230924153403-00615.warc.gz
|
en
| 0.937753 | 384 | 3.625 | 4 |
Unix Permissions on Folders
Unix permissions become somewhat of a tricky thing when they're applied to directories instead of files. This post should hopefully clear up any confusion about them (they had me pretty confused so I figured them out by trial-and-error).
On normal files, the permissions are pretty straightforward:
- Read allows you to read the contents of the file (i.e. open a text file in an editor, open an image in a viewer).
- Write allows you to write to a file (make changes and save it, copy a new file over top of an old one, etc.)
- Execute allows you to run the file as a program (for a binary or e.g. a Perl script).
On directories, the permissions get a bit trickier.
- Read allows you to list the contents of the directory to see the files inside. This is it. It doesn't implicitly allow you to read the contents of the files, or to descend into subdirectories. It only lists them.
- Write allows you to create new files in the directory or delete existing ones (even if the files belong to a different user, like root, that you wouldn't think you'd be able to delete). This only works if the Execute bit is also on.
- Execute allows you to interact with a file, or to traverse past the directory into a subdirectory (provided that subdirectory has permissions that allow you to do so). Basically, the Execute permission allows the permissions on the file or subdirectory to take effect. If the folder doesn't have Execute permissions, any attempt to do anything to a file or subdirectory is automatically denied.
So here's a breakdown of the different permission combinations and what they do for you:
- --- (000): No permissions. Can't open the folder, list its contents, or do anything with its contents.
- r-- (400): You can list the contents. You can't open the files, or descend into subdirectories. Can't create new files or write to existing ones.
- -w- (200): This is the same as ---, or, no permissions at all. The write permission is useless without Execute.
- --e (100): You can't list the contents of the folder, but if you know of a specific name of a file or subdirectory, you can interact with it (provided it has permissions that allow you to). You can
cd into a subdirectory or read/write a file that has permissions for it. But no listing the directory's contents.
- rw- (600): Same as r--, you can list the contents but nothing more. The Write permission is useless without Execute.
- r-e (500): You can list the contents, access files (subject to their permissions), and descend into subdirectories (subject to their permissions). You can't create new files though.
- -we (300): This one is interesting. You can not list the contents of the directory, but you can interact with existing files (if you know their names in advance), and you can create new files in the directory and delete existing files. You can also descend into subdirectories.
- rwx (700): All permissions for the directory.
There are 2 comments on this page.
thanks mate, for clearing it out!
THANK YOU! This helped me immensely. For those trying to configure Unix/Linux permissions, it can be extremely confusing. The fact that you MUST have "execute" permissions set on a parent directory to even allow descending into a subdirectory where you DO have permissions is a tricky thing.
Copyright © 2023 Noah Petherbridge — All rights reserved
Hosted by DigitalOcean
Page generated in
|
<urn:uuid:10b2c35e-3322-4b27-9791-ede8e0133fd1>
|
CC-MAIN-2023-40
|
https://www.kirsle.net/unix-permissions-on-folders
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511386.54/warc/CC-MAIN-20231004152134-20231004182134-00160.warc.gz
|
en
| 0.905408 | 797 | 3.25 | 3 |
The Unicode codepoint U+FEFF when occurring at the start of a file. Its purpose is to give file-reading software a means of determining which of several popular Unicode encoding schemes the file is using. Additionally, for some encodings it indicates whether the most- or least-significant byte comes first. If the software detects a byte pattern at the start of the file that represents this codepoint in a recognized encoding, it assumes that the rest of the file also uses this encoding. The file's content is considered to start at the position following the byte-order mark.
|
<urn:uuid:1655a636-ca31-427f-bdc8-91dc8657079b>
|
CC-MAIN-2017-43
|
http://oxfordindex.oup.com/view/10.1093/oi/authority.20110803095539871
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823282.42/warc/CC-MAIN-20171019103222-20171019123222-00288.warc.gz
|
en
| 0.902932 | 119 | 2.734375 | 3 |
parshat lech lecha
In his book, The Gifts of the Jews: How a Tribe of Desert Nomads Changed the Way Everyone Thinks and Feels, by Thomas Cahill the concept of pilgrimage does not appear. I am surprised, because pilgrimage is seminal to the Jewish narrative and calendar. Three times a year Jews made the trek up to Jerusalem on their major festivals called Regalim (foot festivals). The very word for festival; Hag comes from the same Semitic root as the Arabic Hajj which means pilgrimage and is the core of a Muslim’s life. Many of us were introduced to English literature with a reading of The Canterbury Tales by Geoffrey Chaucer, which is about pilgrims on the way to the shrine of Saint Thomas Becket at Canterbury Cathedral… telling tales. Every Christian and Muslim understood the obligation of pilgrimage as a core component of a life well lived.
Pilgrimage does not rate as a gift of the Jews, because pilgrimage has disappeared from mainstream Judaism and our collective culture. [UJA Missions, Birthright and tourist trips don’t count… too much baggage, pre-arranged accommodations, supervision/guidance and too little walking]
It may be, that we Jews lost our taste for pilgrimage when it was adopted by competing religions. Certainly the fact that so many Jews were slaughtered during the Crusades would not have endeared them to this variant of active vacation. The anti-Semitism associated with pilgrimage kinda takes the bounce out of your step. The triennial pilgrimage may have also fallen into disuse after the destruction of the Temple and associated exile. A pilgrimage is a trip away from home, the yearning for a return to Zion and aliyah is a return home. Pilgrims buy a return ticket.
In a book that I just finished and will quote more extensively below, the author Gideon Lewis-Kraus suggests that Jews for the most part don’t do pilgrimage: “because pilgrimage has largely been for sedentary people, so the Jews of the Diaspora didn’t have much use for it. The last thing they needed was yet another reason to keep moving.”
It seems to me, that we lose if we ignore pilgrimage… the institution, the tradition, the narrative and most of all the state-of-mind. … and it all started with Abraham in Genesis 12.
1 Now the LORD said unto Abram: ‘Get thee out of thy country, and from thy kindred, and from thy father’s house, unto the land that I will show thee.
This is usually translated as “Get thee out (from your country and your birthplace and your father’s house. . . .)” But it literally means, “Go to yourself.” Rashi, understands lech lecha to mean: “go for yourself”. Lecha, he explains, means “for your benefit and for your good”. The Siftei Chachamim writes that lecha actually means li’retzoncha, according to your desire or will. According to the Pri Ha’aretz, Rebbe Menachem Mendel of Vitebsk “Go to find your true self. Go to develop yourself to its utmost. Go where? To the land that I will show you. To the place of the origin of Man’s body and soul.”
I am particularly intrigued with the concept of pilgrimage, because of Gideon Lewis-Kraus’s book; A Sense of Direction: Pilgrimage for the Restless and the Hopeful. Kraus; a secular Jewish writer in his early 30’s living a bohemian life with other expats in Berlin decides on a lark one early morning at a bar to go on a 500 mile Christian pilgrimage to the El Camino de Santiago. He becomes addicted to pilgriming and on completion of the Camino, does The Shikoku; a multi-site pilgrimage of 88 temples associated with the Buddhist monk Kūkai (Kōbō Daishi) on the island of Shikoku, Japan. His third and last pilgrimage is to Uman where he joins Hasidim (and his estranged gay-Rabbi father ) to visit the Beslever Rebbe’s grave.
In his irreverent, sometimes hilarious and always thought provoking musings, he struggles to find the essence of pilgrimage and, to my mind, reveals what we may be missing. Here are some excerpts that try to provide a pretext for taking a pilgrimage…
“There are a lot of reasons, and that’s part of what this book is about, figuring out what it means to want to make a pilgrimage, and then what it feels like to be on it, and what it does and doesn’t do to your life afterward, but it all has something to do with leaving your home, leaving comfort and responsibility behind, and putting yourself and your usual desires aside to concentrate on doing this difficult, painful trip that a lot of other people have done for a long time, and to be in the company of other sufferers who are doing it now. While you’re on it, everything feels so simple, even if you’re in pain, and you make these instant friendships based on a shared sense of need and vulnerability, and it’s a sense of need and vulnerability that are beyond explaining— there’s no real need to be able to say why you hurt or why you’re doing it, you just sort of trust that everyone is doing it for some reason or another and that’s enough.” It occurred to me it was strange that I’d brought my father to do exactly that— to explain himself— and what he’d ended up showing me was that these explanations are always evolving. [p 323]
But the thing about this sense of order is that it’s provisional, this sense of coherence that it’s evanescent. This makes it a bounded experiment in the as if, a few weeks of coming to terms with difficulty and disappointment— and cost— in terms of their necessary existence. It’s a vacation to a land where life has meaning— the meaning of moving forward, of getting to Santiago— and things, in the broadest sense of the term, make sense, in the broadest sense of the term. Its fixed points allow you to deal with the fact that everyone is in motion. It’s so easy to feel this way on the road because it’s provisional. This is its strength and its limitation. It is to be used and discarded. Its remove from the past, from conflict, from real life affords you the chance to form relationships with wonderful people from whom you expect nothing, whom you begrudge nothing, whom you owe nothing, people who haven’t ever had the chance to hurt you, and probably won’t, and if they do or you do you just walk away, you stay in motion. The stakes of communitas are low because everything is taking place in the present. The grace comes easy. And the sense of coherence that seemed so vital and inalienable while you were on the Camino, the sense that you’re simply spending your hours the way you’re spending them and, for the moment, not worrying too much about the costs, disperses into the air like incense the moment you’re no longer on the way. But a life cannot be lived, at least by most people, walking up and down the Camino, or walking the circuit of Shikoku until death. The real trick, then, is to find some way to recall these feelings of grace and coherence and meaning and forgiveness— for what we gain with this coherence is the ability to forgive, ourselves and others— when the as if has run its course, when Santiago is achieved and you are returned to a world where all is conflict and nothing makes itself plain to us, where there is no hope for miraculous intercession and the people you love most will hurt and disappoint you and you, in turn, will hurt and disappoint them. Where the ground is shifting and we rarely know where we stand. …. If you’re able to believe that there is a God and that God acts in the world, if it has never occurred to you that this makes theodicy a problem, if you have that true gift that is faith, you ought to count yourself inordinately blessed. For the rest of us, there is one Camino or another, and then, perhaps more important, there is the memory of that Camino. These are brief encounters with radical acceptance that we do our best to secretly save up in our hearts. [p 327]
The neutral word “pretext”— as opposed to the loaded words “reason” and “excuse”— suspends, for the moment, the question of moral responsibility, and makes way for the final fact that, as Wittgenstein says, we just do what we do. We’re all going to find pretexts for doing what we’re going to do anyway, for having our adventures and doing our demmij along a road in northern Spain, or in a broken-into temple in rural western Japan. There is no such thing as the life we deserve, just like there is no such thing as a prophylactic against regret. There is the life we live. There is the series of crises we do our best to muddle through. No sacrifice now will make the future effortless or the pain we will inevitably cause easier for others to forgive. The thing that can be so hard about my dad’s life, about anyone’s life, is that he caused so much pain and is somehow happy now. He is happy and has come to tell a story in which the pain he caused was worth it. But we all do our best, and we hurt some people and get hurt by others and what’s as terrible as it is wonderful is that we endure, we endure and find ways of looking back and, if we are able to manage the trick of perspective, if we are able to hold on to our memories of Santiago, we find a way for it to have made sense. [p 332]
These conversations always make me think of a line in Wittgenstein where he’s talking about the chain of reasons we give for doing something. If you ask someone “Why?” enough times— if every time they provide a “Because…” you respond with another “But why?”— they get to a point where no further account is available, where they are doing something that seems to them self-evidently worthwhile. You must then simply say, “I have hit bedrock. My spade is turned [i.e., turned back on itself, can dig no more]. This is what I do.”
People say, “I’m doing this because I’m in pain.” But the more I heard that, the less sense it made: the causal connection isn’t at all obvious. Nobody can say why this experience, rather than, say, a spa trip, or marathon training, ought to provide solace. I think Wittgenstein points in the direction of an answer. There’s something satisfying about having to throw our hands up, acknowledge our final inscrutability to ourselves. It’s a relief that there’s only so far we can take an explanation, and a comfort to be thrown back upon the nontrivial fact that we have preferences— preferences— that custom and idiosyncrasy and accident have given us the desire to do something instead of some other thing, or instead of nothing. It short-circuits the usual chain of accounting and gets right to the part where you have to shrug and say, “This is what I do.” [p 223]
Sounds like lech lecha to me……
|
<urn:uuid:83327fdc-6edd-40d9-954d-16b991f1cb34>
|
CC-MAIN-2017-26
|
https://madlik.com/2012/10/26/walking-without-pretext/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321497.77/warc/CC-MAIN-20170627170831-20170627190831-00661.warc.gz
|
en
| 0.958351 | 2,506 | 2.78125 | 3 |
China is situated in the conjunct sector of Eurasia, India-Australia and Pacific plates, where the structures of the lithosphere are very complex and show some obvious characteristics.
The crust of the Chinese continent is rather thick; the relief of Moho basically presents mirror image of surface topography. The crust of the Qinghai-Tibet plateau is the thickest in China and even in the globe. The thickness of the margin is 44-60km. In hinterland of the plateau, the thickness is 60-70 km; it may amount to 80 km, constituting a huge "mantle basin" roughly in E-W direction. Eastern China and adjacent sea areas are regions where the crust thickness becomes thinning; The Songliao and North China plains are two "mantle uplifts" with the crust thickness ranging from 30 to 34 km. The crust thickness of the sea areas sharply decreases to 30-20 km. The contours of the crust thickness strike approximately E-W in the west, while NNW in the east.
The lithosphere of China and adjacent areas presents a tendency: thicker in the west and thinner in the east, at the same time, thicker in the south and thinner in the north. The Central Asia, Qinghai-Tibet plateau and Sichuan basin are the regions where the lithosphere is thickened with a thickness of the lithosphere ranging from 170 to 190 km; in the northeastern Tarim basin and Changdu area of eastern Tibet, the thickness amounts to 200 km. To the east of the Great Hingganling-Taihang Mountains-Xuefeng Mountains, including marginal seas, it is a region where the lithosphere is thinning with thickness of the lithosphere ranging from 50 to 85 km. The lithosphere of the area between the two regions mentioned above is gently thinning, or it is a stable area, with thickness of the lithosphere ranging from 85 to 170 km. Researches show that great difference of structures of the lithosphere and asthenosphere exists between the east and the west of China. In the west, the lithosphere and asthenosphere show obvious "layered structure" with very thick lithosphere (130-200 km) and thin asthenosphere (40-100 km), reflecting an environment of collision and convergence of plates; In the east, the lithosphere and asthenosphere show "mosaic block structure" with very thin lithosphere (50-85 km) and very thick asthenosphere (200-300 km), reflecting characteristics of upsurge of asthenospheric materials and extensional thinning of the lithosphere.
Inverse tomographic images of seismic surface waves show a huge anomalous zone of low velocity in the depths from 70 to 250 km in eastern Asia and northwestern Pacific regions. This zone is 12 000 km long from north to south, and 2 500-4 000 km wide from west to east, the lithosphere is thin in thickness (50-80 km) with low Vs (4.20-4.35 km/s); the asthenosphere is very thick(c. 300 km)with very low Vs(4.15-4.28 km/s).
The lithosphere of the Chinese continent and adjacent sea areas may be divided into 5 types: cratonic, orogenic rift, island arc and oceanic crust.
|
<urn:uuid:aa4bb57e-a24a-4b89-82e9-b069869cb6b2>
|
CC-MAIN-2014-23
|
http://www.cprm.gov.br/33IGC/1320866.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00240-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.91836 | 692 | 3.171875 | 3 |
Fully-Functional Space And Terrestrial Photovoltaic/Electrodynamic Shield (PV/ES) Hybrid System For Dust Mitigation
The dust deposit on sensitive and costly surfaces of investigative
equipment is a serious concern for future lunar and Martian missions.
Several methods have been surveyed to protect optical or photovoltaic
surfaces from the dust deposition threat but limiting condition of
lunar and Martian environment makes them either inefficient or
infeasible. Since there is no drag force on particles, and the magnetic
field is negligible, applying electrostatic field could be the best
approach for dust control.
The unique properties of ITO, Graphene and Carbon Nanotube (CNT)
combined with recent scientific advancements that make these materials
readily available allow development of smart surfaces on a small and
large scale that will benefit the power generation devices. Our
research groups from UCF and UF in collaboration with KSC have been
working on high and low voltage technologies like Electrodynamic Dust
Shield (EDS) and Electrostatic Lunar Dust Collector (ELDC) for future
lunar and Mars Missions.
The research team developed active and very efficient cleaning methods
based on ITO, Graphene and CNT materials requiring minimal cost,
weight, and power that repel and remove particulate buildup of surface
contaminants, e.g., dust and dirt, from solar panels used for the space
mission. The cleaning methods ensure that solar panels are operational
for longer periods of time and with lower power loss due to dust
|
<urn:uuid:be3522a8-2797-4acb-909b-849c1e9496bd>
|
CC-MAIN-2017-30
|
https://fsi.ucf.edu/fully-functional-space-and-terrestrial-photovoltaicelectrodynamic-shield-pves-hybrid-system-for-dust-mitigation/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423808.34/warc/CC-MAIN-20170721182450-20170721202450-00276.warc.gz
|
en
| 0.861967 | 331 | 2.734375 | 3 |
Introduction to verb agreement
When you match up the subject of a sentence with a verb, it's called 'agreement'. Here's how it works!
Want to join the conversation?
- My question is why isn't "I bakes a cake" appropriate, It sounds odd but what is the rule behind it that makes the sentence wrong?(22 votes)
Bakesis the third person singular conjugation.
Iis the first person singular pronoun.
All sentences must have agreenment between the subject and the verb.(53 votes)
- When David says "The Dogs barks" is not okay then could you add something on the end of it? An example could be "The Dogs barks scare me."(6 votes)
- The correct grammar for that would be: "The dog's barking scares me."
The reason for this is that "dog's" becomes possessive since the bark belongs to the dog.(21 votes)
- I have to agree with trek. With the sentence Jake bake a cake. By simply adding two quotation marks it can work. "Jake bake a cake!"(8 votes)
- You would have to add in a comma after "Jake" for that to work. In the example he uses, your way wouldn't work even with quotation marks due to the lack of commas. If you want it to sound as if it's a demand, then the correct way of saying it would be: "Jake, bake a cake!"(7 votes)
- Jake bake a cake. It seems like it's a commanding sentence. So it technically wrong or no?(9 votes)
- It is wrong. If it were a command it would be, "Jake, bake a cake." The comma is essential to the sentence if it is a command. But since it is not, it is incorrect. For it to be in the present tense, it should be, "Jake bake s a cake."(11 votes)
- i know 'were' is plural, but in this sentence would it be right?
The architect's report, along with some models, were presented yesterday.(7 votes)
- Does the (s) mean that anything is plural?(6 votes)
- Not quite, there are some words (in which I will not take the time to think of) that end with ‘s’ that are singular.(6 votes)
- I have a question:
In I love it, isn't love a noun? It is an idea.(4 votes)
- “Love”, like many words, can function as different parts of speech, depending on how it’s used in the sentence. In this sentence, “love” is what the subject, “I”, is doing. That makes it a verb, not a noun, in this sentence.
Does that help?(6 votes)
- The fundamental concept behind verb agreement is:
Singular subjects take singular verbs and plural subjects take plural verbs.
This is what I have understood(somewhat) till now and tell me whether there is any correction in the above sentence.
The concept of verb agreement has verb agreement to follow. how cool!lol.(6 votes)
- Hello guys, please could someone tell me if this sentence is right, it's not from the lesson it's just a sentence I used recently, here it is "I myself seen it in the movies" we were talking about yoga so I was trying to say I have never seen it in a real life.(3 votes)
- Huh, I would have said: "I, myself, saw it in the movies." I wonder if my way was grammatically correct. Hassan, you could say it as: "I, myself, have seen it in the movies." I hope this helps!(6 votes)
- Hi guys, please help me with verb agreement on these sentences.
“The movie, including all the previews, takes about two hours to watch.”
Why the verb agreement in this sentence is singular? Aren’t “movie, including all the previews” suggest there are multitude of object?
“Either answer is acceptable.”
It means both answers are correct, but why singular verb agreement is used?
“Every one of those books is fiction.”
It means a multitude of books are fiction, but why singular verb agreement is used?
“The Prime Minister, together with his wife, greets the press cordially.”
It means both the Prime minister and his wife greet the press, but why singular verb agreement is used?(2 votes)
- Regarding "the movie" In this sentence, drop the material between the commas, and you can see that the subject is singular. The verb agrees with that.
Regarding "either": "Either" is an adjective modifying a singular subject, "answer". The verb agrees with the subject. (Had the adjective been "both" or "all", then the subject would have had to be plural, and the verb would have to agree with the plural.)
Regarding "Every one..." "Every" modifies the subject "one". The verb agrees with the subject. "of those books" does further modification work, but the subject is still "one".
Regarding "the prime minister" This is exactly the same as the first sentence.
Though they're grammatically correct in the forms in which you presented them, take comfort in the fact that they are poor writing.(5 votes)
- [Voiceover] Hi grammarians. Today I want to talk about this idea, in English, that we call agreement. So, I'm going to teach you how to be agreeable, make it so that all of your sentences get along really well. Let me give you an example. "The dog barks," as opposed to, "The dogs bark." Do you see the difference? It's funny because the 's' seems to migrate, seems to move from one part of the sentence to the other. So what you wouldn't say is, "The dog bark." That doesn't work. And you wouldn't say, "The dogs barks." I know that seems strange that you wouldn't want to have the 's' follow the other 's', but you actually only want to have one of these in a sentence. So, "The dog barks." "The dogs bark." When there's only one of something, strangely, in the present tense, it takes this 's'. So, "Jake bakes a cake." "We bake a cake." Not, "Jake bake a cake," which, admittedly, is a fun sentence to say. But this is not, in standard English, this is not correct. It doesn't exhibit what we call agreement. Similarly, you also wouldn't say, "We bakes a cake." If you're familiar with the Lord of the Rings books or movies, if you remember Gollum, the character Gollum is a weird little bug-eyed critter whose just kind of obsessed with the one ring, and he has this very particular way of speaking that is not, strictly speaking, standard. He's just wild about that ring. And the way he refers to everything, he just throws on an 's' on the end of every word he uses, every verb. So, if he's talking about, you know, this magic ring that he's very fond of, which he calls the precious, he wouldn't just say, "I love it." He'd say, "I loves it." Like so, and that's not correct. That does not show agreement. So, the test here is really... You can figure out whether or not you are using agreement in your sentences if you sound like Gollum. Because if Gollum were being grammatical, he would say, "I love it." So this is the sense that I want you to develop is you listen to yourself speak and you ask, "Do I sound like Gollum, the little ring monster?" Because if you do sound like Gollum, then you're probably not operating under the agreement rules of standard American English, or you're just doing a Gollum impression. That's the basic idea behind agreement Is you just want to make sure that the parts of the sentence match up. Sentence parts match. They agree. They get along. They work. You can learn anything. David out.
|
<urn:uuid:b86abeb4-50b1-4b33-906b-9f009bd483ca>
|
CC-MAIN-2023-23
|
https://www.khanacademy.org/humanities/grammar/parts-of-speech-the-verb/introduction-to-verbs/v/introduction-to-agreement-verbs-the-parts-of-speech
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644506.21/warc/CC-MAIN-20230528182446-20230528212446-00457.warc.gz
|
en
| 0.977807 | 1,777 | 3.09375 | 3 |
Ch. 9. Amazing world of carbon compounds
Q1. Fill in the blanks.
1. The organic compounds having double or triple bond in them are termed as _______(unsaturated)
2. The general formula of alkanes is _____________.
3. _____________ are known as parent organic compounds. (hydrocarbons)
4. Covalent compounds are generally soluble in ____________ solvents. (organic)
5. Triple bond can be obtained by sharing ____________ pairs or ________ electrons. (three, shared)
6. Hydrocarbons necessarily contain _____________ and ______________. (hydrogen, carbon)
Q2. Give the IUPAC name of the following compounds.
Q3. Write short notes on
i. The remarkable property of carbon atom to form bonds with itself and give rise to a single structure of chain is called catenation.
ii. Such carbon chains can be straight or branched forming large molecules.
iii. The two ends of some chains join together to form closed ring – like structures.
iv. While catenating, the bonds between carbon atoms can be single, double or triple covalent bonds.
2. Functional group
i. The atoms or group of atoms present in the molecule which determines characteristic property of organic compounds are called the functional group.
ii. Halides (F, Cl, Br, I) aldehyde (- CHO), Hydroxyl ( - OH), Carboxyl (- COOH), etc., are some of the functional groups.
iii. The chemical properties of different compounds having the same functional groups are different.
iv. There exists a homologous series of one particular type of functional group, e.g., alcohols, aldehydes, ketones, ethers, etc.
3. Homologous series.
i. A group of organic compounds containing same functional group which can be represented by the same general formula and which have similar trends in their properties are called a homologous series.
ii. In the general formula of compounds of a homologous series, the symbol R is used to represent its functional group.
iii. Alcohols (R – OH), Aldehydes (R – CHO), Carboxylic acid (R – COOH), Ketones(R-C(=O)-R') are some of homologous series and their general formulae.
iv. The alkanes family is a homologous family which have linear molecules. Examples: Ethane C2H6, Propane C3H8, Butane C4H10, Pentane C5H12.
Q4. Differentiate between: Detergents and soaps
i. Detergents are generally ammonium or sulphonate salts of long chain carboxylic acids.
ii. Detergents have a strong cleansing action.
iii. Oils or fats are not used in their manufacture, usually petrochemicals are used to produce detergents.
iv. They function well in hard or soft water.
i. Soaps are sodium or potassium salts of long chain carboxylic acids.
ii. Soaps have relatively weak cleansing action.
iii. Vegetable oils or animal fats are used along with hydroxide of sodium or potassium to manufacture soaps.
iv. They do not work well in had water but work well in soft water.
Q5. Answer the following.
1. Draw chain and ring structures of organic compound having six carbon atoms in them.
Ans. a) chain structure –
b) Ring structure: -
2. Which organic compounds are named as Alkanol in IUPAC system?
Ans. In IUPAC method, alcohols are named as AlKanol. In naming the alcholols by IUPAC method the last ‘e’ of the parent alkane is replaced by ‘ol’ to indicate the presence of OH group.
3. Explain: What do you understand by substitution reaction?
i. Reaction where substitution of one or more atoms in a molecule for another atom takes place are called substitution reactions. Saturated hydrocarbons undergo substitution reaction.
ii. In Substitution reaction one or more hydrogen atom of a hydrocarbon are replaced by some other atom (like chlorine)
iii. Ex: Substitution reaction of methane with chlorine.
Methane reacts with chlorine in the presence of sunlight to form methyl chloride and hydrogen chloride.
4. Which organic compounds readily undergo addition reactions? Why?
Ans. Unsaturated hydrocarbons undergo addition reactions, that is addition reactions are given by all the alkenes and alkynes. This is because reactants add to the carbon atoms of
double and bond to form a single product of saturated hydrocarbon.
5. State two examples of (organic) compound having covalent bond and two examples having ionic bond.
i. Hydrocarbons like methane and ethane have covalent bonds.
ii. In organic compounds line NaCl and KMnO4 have ionic bonds.
6. State and explain how alkanes are further classified.
Ans. Alkanes are saturated hydrocarbons with single bonds. Alkanes are further classified as:
i. Straight chain alkanes: The alkanes in which all the carbon atoms are attached by covalent bonds in a continuous chain are called straight chain alkanes or normal alkanes Eg. CH3 – CH2 – CH2 – CH3 is normal butane or n – butane.
ii. Branched chain alkanes: - The alkanes in which all the carbon atoms are not in a continuous chain and some of them are linked to other carbon atoms to make the branch are called branched chain alkanes.
a. The alkanes, in which there is only one carbon chain attached to the second carbon atom of the long chain are called iso – alkanes Eg.
b. The alkanes, in which there are two single carbon branches, attached the second carbon atom of the long chain are called neo – alkanes. Eg.
7. What are alkynes? Name the first member of alkyne family.
Ans. Hydrocarbons containing triple bonds between carbon atoms are alkynes. The first member of alkyne family is ethyne or acetylene
A wonderful image released by Michael Ströck under the GNU Free Documentation License: The structures of eight allotropes of carbon:a) Diamond b) Graphite c) Lonsdaleite d) C60 (Buckminsterfullerene) e) C540 Fullerene f) C70 Fullerene g) Amorphous carbon h) Single-walled carbon nanotube
|
<urn:uuid:a04020f4-e5f0-46d9-811e-ffb325484513>
|
CC-MAIN-2014-10
|
http://www.omtexclasses.com/2011/07/amazing-world-of-carbon-compounds.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999673298/warc/CC-MAIN-20140305060753-00016-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.894228 | 1,420 | 3.96875 | 4 |
Christmas can be a particularly difficult time for those struggling with homelessness
There are many more people with nowhere to call home this Christmas than ever before. Whether they’ve lost their home because of a family breakdown, mental health, addiction or financial problems, Christmas is going to be a sombre time for many this year with no way to put food on the table.
Depression has kept many out of work and homeless for years and thousands more are in temporary accommodation, on couches or otherwise straddling the edge of the poverty line, uncertain of where they'll be next week. Many people who spend Christmas at a refuge or soup kitchen are simply lonely, and often elderly.
The volunteers who come to help by providing clothes and food, soon start to break down a lot of the preconceived notions that they had before about homelessness and poverty.
The daily lives of people experiencing homelessness are stressful, dangerous, traumatic, and often take a toll on their mental health.
People can go without many things but going without a safe and comfortable space to live can be catastrophic for their general well-being. Homelessness itself can trigger a mental illness or worsen an existing condition, without even considering other factors such as poverty, personal conflicts, death of a loved one, serious medical condition, social isolation and other personal issues.
The literature confirms that people experiencing homelessness tend to have poorer mental health and have a higher prevalence of mental illnesses than the general population. However, it’s important to note that not all homeless people have or will ever develop a mental health condition.
Rates of depression are alarmingly high among the homeless youth population. Considering the social factors that impact mental health, this population is clearly a population at higher risk. Negative impact on mental health often begins well before a young person becomes homeless. Unhealthy family relationships, family breakdown, poverty, physical, emotional and sexual abuse, histories of domestic violence and neglect are known factors that impact our street youth.Homelessness is often stressful and can lead to depression and thoughts of suicide. Not all individuals with depression will commit suicide, however too many of those who die from suicide suffered from depression. Organisations like Wesley LifeForce suicide prevention services and Lifeline have experienced counsellors who can help.
A number of studies have reported higher rates of suicidal behaviour among homeless youth in comparison to youth who are not homeless.
If you are homeless or a risk of becoming homeless contact Wesley Mission Homeless Services to find a place to stay, or help you keep your home. Having a safe place to call home gives people the opportunity to progress towards independent living with dignity, privacy, security and self-worth.
|
<urn:uuid:cfbbbc61-f6b7-4211-97dc-0af7fe6d330f>
|
CC-MAIN-2020-24
|
https://www.wesleymission.org.au/news-and-publications/latest-news/wesley-mission-news/homelessness-at-christmas/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00501.warc.gz
|
en
| 0.957969 | 540 | 2.515625 | 3 |
The Department of Civil Aviation (DCA) director-general Azharuddin Abdul Rahman called MH370’s disappearance an “unprecedented aviation mystery”. As the authorities, investigators and experts continue looking for clues to explain what really happened, KiniBiz compiles some basic questions and provides quick answers.
In a separate post KiniBiz keeps track of notable events that have occurred since the plane went missing — click here for the timeline. Stay tuned for continuous updates of the timeline.
What could have happened to MH370?
Thus far there has been no clear indication of what could have happened to the plane and the 239 people onboard. Authorities are not ruling out any possibility at the moment, which includes terrorism and sabotage.
Are there likely to be survivors?
MAS had previously stated that they fear the worst.
When was radio contact with MH370 lost?
Subang air traffic control had last contact with MH370 at 2.40am, March 8, 2014, handing over to the Ho Chi Minh city control.
What does it mean when radio contact was lost? Does it happen often?
Pilots are generally required to communicate their locations and status via radio to air traffic controllers at regular intervals and when they reach certain locations along their flight path. Between each radio contact there is normally no radio communication.
However planes like the Boeing 777-200ER used by MH370 transmit data on engine performance and other technical aspects automatically at regular intervals.
What altitude did MH370 disappear at and where was it at that point?
According to MAS on March 8, 2014, MH370’s last known location before disappearing off the radar was at 065515 North (longitude) and 1033443 East (latitude). It was flying at an altitude of 35,000 ft.
However the Royal Malaysian Air Force (RMAF) later revealed that its radar detected an object believed to be the missing plane 200 miles northwest of Penang at about 2.15am, flying at about 29,500 ft above sea level.
“I am not saying that this is MH370, we are saying that we are working with the experts to determine this aircraft’s identity,” said RMAF chief Rodzali Daud.
When did MH370 go off radar screens? Why does this happen?
MH370 went dark on radar screens about one hour after take-off.
There are two types of radar. The military uses primary radar, which detects everything in its range using electromagnetic waves but cannot identify the object.
On the other hand, air traffic controllers use secondary radar which also detects a plane via electromagnetic waves. The difference is that when a plane picks up electromagnetic waves from a secondary radar, the transponder on the plane would send back a signal identifying the plane.
Secondary radars have a range of 300km around the nearest radar station, which means its coverage may be limited over wide bodies of water. Coverage is also affected by geography, altitude and the Earth surface’s curvature.
In MH370’s case, its transponder stopped submitting signals over the Gulf of Thailand despite still being in range of ground-based radar stations. This effectively means it was invisible to secondary radar, although primary radar would still pick its location up.
What is a transponder?
A transponder is simply a device that transmits radio signals in response to electromagnetic waves from civilian radars or secondary radars. In its response the transponder would tell the ground secondary radar its squawk code, position, altitude and call sign.
Additionally transponders are constantly radar-pinged, which helps air traffic controllers figure out the plane’s speed and direction.
What is a squawk code?
A squawk code is a four-digit identification code that helps air traffic controllers identify each plane on their radar. No other plane within a certain distance would have the same squawk code.
When a plane enters airspace where there is a plane using the same squawk code, the plane would be reassigned to a different squawk code. This is normal for long-distance flights.
Do you need a transponder to be able to detect the plane?
In a nutshell, yes and no depending on the type of radar in question.
What do you mean? What are the radar types?
Primary radars normally used by the military do not need transponders to detect the plane but would not be able to pinpoint the identity of each object detected.
If a plane’s transponder is turned off, the plane would be invisible to secondary radar, which is normally used by air traffic controllers.
Why can’t the secondary radar detect the plane if the transponder is not turned on?
Secondary radar relies on signals coming back from transponders to detect the plane.
Yes, they can. Common reasons include the scenario where the transponder is faulty or if there is a fire or other safety risk involving the transponder.
In MH370’s case, the on-board transponder apparently stopped functioning 40 minutes after take-off although it is unclear why.
Can transponders be used to indicate problems on the plane?
Yes. Pilots can send different codes based on the situation, for example the code for hijacking is 7500 while communications failure corresponds to the code 7600. The code for emergencies is 7700.
How is military radar different from civilian radar? What can the former see that the latter cannot?
Military radar can see flying objects irrespective of whether there is a transponder on-board or not. However the radar cannot identify each object as they would only be blips on the radar screen.
Civilian radar relies on the transponder to send a response signal in order to see and identify the object.
Did MH370 turn back?
It is not certain yet but the possibility is not ruled out. MAS had previously stated that they are examining the possibility that MH370 turned back towards Subang.
Additionally Vietnamese authorities previously said it had informed MAS on March 8 that MH370 had turned back due west, although Vietnam did not receive immediate response when it attempted to contact the plane.
“We informed Malaysia on the day we lost contact with the flight that we noticed the flight turned back west but Malaysia did not respond,” Vietnam deputy minister of transport Pham Quy Tieu was quoted as saying by AFP on March 12, 2014.
How can the radar detect a turnback?
Even if the transponder is off, military radar (primary radar) can still detect objects within its range, MH370 included. Based on the last known location and time of the flight when last spotted, it is possible for primary radar to correlate the plane to flying objects that were being tracked at the time.
What is the evidence so far that there was indeed a turnback?
So far the indications toward that possibility have come from military radar data. Authorities are currently investigating the possibility of a mid-flight turnback.
Where can the plane be now given the evidence available?
It is unclear where the plane might be now.
Was there an onboard explosion?
Reports quoted unnamed US officials saying that US spy satellites did not detect any signs of a mid-air explosion in the area where MH370 was last spotted.
Have such planes been missing for this long?
The last major airplane crash was in 2009, which saw Air France Flight 447 (an Airbus A330) plunge into the Atlantic Ocean. It was carrying 228 passengers and crew ― none survived. However, it took rescuers five days to find the wreck and three years to determine that ice crystals had caused the auto-pilot to disconnect. Bodies of 74 passengers remain unfound.
As for the Boeing 700 line (MH370 used a Boeing 777-200), the line has “enjoyed one of the best safety records of any jetliner ever built”, said Teal Group aviation consultant Richard Aboulafia.
“(Boeing 777) has provided a new standard in both efficiency and safety,” said Aboulafia.
There are no prior disappearances involving Boeing 777 planes before MH370. Two major previous incidents involved crashes where the planes (Asiana Airlines in 2013 and British Airways in 2008) landed short of their runways.
What is the black box?
Popularly called “black box”, the flight data recorder or accident data recorder (ADR) is a device that records any instructions sent to any electronic systems on an aircraft. Additionally the device also records cockpit conversations including communications with air traffic personnel.
Since these devices are relied on to investigate accidents, the “black box” is made to withstand extreme conditions such as high-speed impact and intense heat. However contrary to its popular name the device is normally coated with bright orange paint to increase visibility in a wreckage.
Is it the only thing which will give final answers?
Not necessarily, although the black box would come in handy. If the wreckage can be found, other clues as to what happened since the plane vanished may be found from the remnants of the plane and its wreckage location.
What is likely to indicate where the plane landed/crashed?
The clearest clue would be its last known flight trajectory before it vanished from both military and civilian radar completely. However it is not confirmed at press time whether the flying object detected by military radar 200 miles northwest of Penang, thought to be MH370, is indeed the missing plane.
Is it more likely to be in the sea or on land?
At present search efforts include both sea and land areas as neither possibility has been ruled out.
Additionally news reports said US investigators suspect MH370 flew on for at least four hours after vanishing from radar on March 8, 2014 as Rolls Royce had received bursts of engine information updates.
However a MAS spokesman has disputed this claim, saying that the data link was severed at the same time as MH370 disappeared. It is unclear if Rolls Royce was able to receive the data independently of MAS. Acting transport minister Hishamuddin Hussein had later said this allegation is not true.
What can one do now to locate the plane?
Authorities have widened the search area on Monday, March 10 2014 after the possibility arose that MH370 may have turned back.
Volunteers can join the crowdsourcing initiatives to comb through available satellite imagery for any sign of a possible wreckage.
|
<urn:uuid:1a1700cc-fc65-4fd2-a8a5-f3c015bb71e7>
|
CC-MAIN-2014-35
|
http://www.kinibiz.com/story/corporate/76275/mh370-some-questions-and-answers.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833115.44/warc/CC-MAIN-20140820021353-00167-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.956573 | 2,176 | 2.546875 | 3 |
How is Computer Science Changing the World
Ever since it has come into existence, computer has changed the entire world. The concept of business, work culture to information technology, all have been made to change with the advent of computer technology. In the initial stage computer was used to weather forecasting simply by calculations and observations but today the entire concept of forecasting is changed and we can now predict the weather like never before. This has immensely help the mankind from loss of lives and property by natural calamity and other unforeseen circumstances. Research and technology evolution has developed a new role even in biology, communication, data computing technology and through digital transformation revolution. There are inspirational studies from steam engine to combustible engine. The automobile industry used the tools, materials and skills used in steam engine as platform to develop a new world of transportation.
In the late 1970s and 1980s tremendous advancement has been observed in developing computers and affordable cheap and storage of networking. Networking system firstly used in the university computer science departments and there evolve a collaborative development of software. At the emergence of these networking devices at the collaboration of communities by common operating system, the notion of software changed dramatically. The network became more uniform and interconnected, thus creating a global inter linking and a digital traffic infrastructure. It increases in computing power capacity and thereby set aside storage to host repositories for worldwide acceptance. The internet was used not only to send mails and chatting and it became an exchange mechanism for data.
In the 1990s there came a system for storing documents in the internet called as web came into being. Through this people started writing documents for online publication, called as web pages. This has entirely transformed the society and has become a platform for interactions. These systems are evaluated by the confluence and arrival on the web a sizeable number of users without which search engines would not work. So, human actions are also a part of this system. For example, traffic data, banking transactions, social media etc. are part of this system.
Software which runs on a computer is different from rapid interaction with the human world, and necessitates different kind of technologies. There are bits of information that explodes in the internet every day and due to this, new jobs like data base management and creation of software do exist. A person who wants to know all these in detail he has to get a computer degree. A postgraduate course in computer is Masters in Computer Science. State of Purpose is a part or process of PG application. This exemplifies a student’s profile details in a nut shell. A MS degree in Computer Science needs a Sample SOP for MS in CS.
The fastest growing jobs like forensic science and statisticians depend on the process of information quickly. A person can hold the key of success if he can find a meaningful way to help the process of creating software. Mostly science relies on mathematics to be able to find the statistical probability of problems. Computer Science has the role to play. Without going out from home scientists can share the information with colleagues and can settle a big projects.
Creative arts like pictures, videos and documents have got impetus through computer science. It has given us a gift of communicating ourselves through emails. It has completely changed the narratives of the mode of communications and the way we participate in the scientific process. There are new emerging technologies which can change the entire world completely. Virtual reality and augmented reality can the rule the world in the next decade. The basic concept or the key aspects of computer science is the concept of reuse. All programmers from all over the world together contribute standard implementation of public utilities. Hind end computer hardware is able to realistic 3D graphics in computer animation and games. The next course of computer hardware technology will be of quantum computing with hardware which can compute multiple data simultaneously. The nature of software engineering changes from object oriented programming to the machine learning and scripting. In the near future almost all jobs will require some knowledge of computer programming and skill required for software engineering and thus the subject of computer science will become very important and specialized course.
|
<urn:uuid:ae04749d-ce23-4164-b8a2-3a3fa0c2200b>
|
CC-MAIN-2020-10
|
http://www.blogmaniaa.com/how-is-computer-science-changing-the-world/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144150.61/warc/CC-MAIN-20200219122958-20200219152958-00065.warc.gz
|
en
| 0.946299 | 812 | 3.09375 | 3 |
NEW YORK (GenomeWeb) – The Malagasy people of Madagascar have recent African and Asian heritage, according to a new study.
Though Madagascar is about 250 miles off the coast of Africa, the origins of the Malagasy people have been a point of debate, though historical, linguistic, ethnographic, and other investigations have noted both African and Asian influences on the population.
In a new study, a University of Toulouse-led team of researchers collected genetic samples from more than 2,700 people living in nearly 260 different Madagascar villages. As the researchers reported yesterday in the Proceedings of the National Academy of Sciences, they uncovered recent Bantu and Indonesian ancestry among Malagasy individuals.
The proportion of African and Asian heritage individuals had varied with where they lived on Madagascar and suggested independent colonization of the island by African and Asian groups rather than by an already admixed population, the researchers added.
"The present Malagasy population shares recent common ancestors with Bantu and Austronesian populations now living [5,000 miles] apart," Toulouse's Thierry Letellier and his colleagues wrote in their paper.
The researchers conducted grid-based sampling of 257 villages across Madagascar to collect DNA for analysis. They analyzed the full mitochondrial genomes of 2,691 people, Y chromosomes from 1,554 male individuals, and the genome-wide SNP data of 700 people.
Phylogenetic analysis of the mitochondrial DNA data the researchers generated indicated that with the exception of a Madagascar-specific lineage all the other mtDNA lineages they found on the island were also present among Africans or East Asians. They noted that the frequency of East Asian and African mtDNA lineages was roughly equal. Additionally, they reported that the Madagascar-specific lineage M23 has a recent origin, dating back some 1,200 years.
The Y chromosome data, though, indicated that Y chromosomes of African origin were more common among the Malagasy people than those of East Asian origin, a split of 70 percent to 30 percent, they reported.
The genome-wide SNP data likewise indicated that Southeast Asian and East African Bantu groups contributed to the genetic makeup of the Malagasy people. On average, the researchers reported that the Malagasy population is about 60 percent African, 36 percent East Asian, and 4 percent West Eurasian.
Though an identical-by-descent analysis, Letellier and his colleagues found that Bantu and Indonesian populations — particularly Indonesian populations from south Borneo — shared many large fragments with Malagasy people.
Still, the genomic diversity of the Malagasy people varied by geography, as Asian ancestry was higher in the central highlands of the island and African ancestry was higher in the north and along the coast. In particular, maternal African lineages were more common in northern Madagascar, while maternal Asian lineages were more common in southern and central Madagascar. At the same time, paternal Asian lineages were less frequent than paternal African lineages, and paternal African lineages were present mainly on the coast and in the north.
Additional analyses indicated that each of the Malagasy genetic groups has the same origin and all diverge from southern Borneo and Bantu populations. The researchers dated the split between the Borneo and Malagasy populations to have occurred some 2,000 years to 3,000 years ago and the split between Bantu and Malagasy people took place about 1,500 years ago.
For all Malagasy genetic groups except one, the researchers found evidence of only one admixture event rather than several and they estimated that that admixture event occurred between 500 years and 900 years ago.
Based on their findings, the researchers envisioned a scenario in which the Indonesian population arrived on Madagascar before the African population did, and that a primarily male African population likely landed at the northern end of the island before spreading south.
"The hypothesis that Austronesians were the first to settle Madagascar before an African paternal wave is supported by the earlier split of Malagasy from Indonesian source populations and explains the predominance of both Austronesian maternal lineages and the Austronesian linguistic background," Letellier and colleagues added.
|
<urn:uuid:f4091a86-e514-4832-a785-edc3e3260d9e>
|
CC-MAIN-2023-23
|
https://www.genomeweb.com/genetic-research/genomic-analysis-sheds-light-diversity-malagasy-people-madagascar
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649741.26/warc/CC-MAIN-20230604093242-20230604123242-00593.warc.gz
|
en
| 0.956088 | 856 | 2.984375 | 3 |
Last night, the Nobel Peace Prize was given to Ellen Johnson Sirleaf, Leymah Gbowee and Tawakul Karman, 3 of the most respectable female activists of our time. The committee in Oslo, Norway announced that these women were chosen “for their nonviolent struggle for the safety of women and for women’s rights to full participation in peace-building work.”
Ellen Johnson Sirleaf is the president of Liberia and the only elected female head of state in Africa. She is running for reelection to a second term on October 11, against 15 other candidates. A Harvard-educated economist, Sirleaf is praised for the growth she has achieved after Liberia’s devastating 14-year civil war, and is expected to win a second term.
Leymah Gbowee is an African peace activist who was a key figure in organizing the movement to bring an end to the Second Liberian Civil War. In 2002 Gbowee began the peace movement by organizing women to pray for peace through non-violent protest and prayers.
Tawakul Karman is a Yemeni journalist and human rights activist, who works for the release of political prisoners in her home country, organizing demonstrations and sit-ins. One of the loudest voices in the Yemeni protests, she has received death threats and has become a major figurehead of the ongoing Arab Spring opposition. A 32 year-old mother of three, she is one of the youngest people to receive the prize.
more information on these women and the Nobel Peace Prize can be found here.
|
<urn:uuid:b6cc53d8-fcf3-49a5-90eb-9450b72b5d43>
|
CC-MAIN-2023-50
|
https://hrionline.org/2011/10/07/nobel-peace-prize-granted-to-women-activists/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00205.warc.gz
|
en
| 0.974501 | 319 | 2.578125 | 3 |
MCLA is adjacent to a diverse array of terrestrial and aquatic ecosystems available for study both in and out of the classroom. This creates unique opportunities for students interested in Environmental Studies.
The Environmental Studies Program, in conjunction with MCLA's Berkshire Environmental Resource Center (BERC), oversees and coordinates activities within Scully Forest. Scully Forest is approximately 60 acres and serves as an outdoor classroom for understanding forest community structure and land use.
The photovoltaic array installed on Venable Hall can be monitored at the Soltrex Web site for power output and sunlight in watts per square meter. You can also see the air temperature and wind speed on Venable Hall's roof top. The array is clearly visible from classrooms in Venable Hall as well as from the upper patio outside the school's cafeteria.
|
<urn:uuid:b2192409-7d58-4fb0-a75e-6844e1642e49>
|
CC-MAIN-2013-48
|
http://www.mcla.edu/Undergraduate/majors/environmentalstudies/facilities/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163971635/warc/CC-MAIN-20131204133251-00053-ip-10-33-133-15.ec2.internal.warc.gz
|
en
| 0.916191 | 159 | 2.5625 | 3 |
A lesson summary is used to recapture
students attention and build to a motivational
climax. You may want to give short or interim
summaries at various places in a lesson. Give
final summaries after you have covered all of the
main points of the lesson. Quickly reviewing the
main points can refocus students and help them
The purpose of the summary is to instill in
students a desire to retain and use what they
have learned. Although you motivate students
throughout the lesson, the summary is your last
chance to emphasize how important the
information is to them as individuals. The
ultimate goal of instruction is to cause students to remain motivated beyond the instructors influence
and apply what they have learned on the job.
You must constantly motivate your students to learn. Grasp every opportunity to motivate students.
As an instructor, you must find which methods and devices work best with your students in certain
situations. After instruction begins, assess each students achievement level, learning styles, and
motivational patterns to determine the readiness of each student to learn.
Remember Maslows theory. Be sensitive to your students needs and provide them with a proper
learning environment. Those students with adequate motivation should then be ready to learn.
Following Maslows theory does not mean you pamper students into learning. Rather, it means you look
upon each student as a human being with certain inherent rights and feelings.
As an instructor, be professional, intellectually honest, and aware of your influence on students. Set
a good example at all times by maintaining a neat, clean, and proper personal appearance. Be courteous;
use proper titles when addressing military or civilian students. Set rules early in the course and
maintain them. Welcome and profit from the evaluations of the students or other instructors. Avoid
arguments, control distractions, and be fair in all your dealings with students--above all be consistent.
|
<urn:uuid:969c257c-1895-4b5f-af26-3dc174dfe7bf>
|
CC-MAIN-2017-47
|
http://navyadministration.tpub.com/134t/css/Summary-134T_30-30.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806086.13/warc/CC-MAIN-20171120164823-20171120184823-00697.warc.gz
|
en
| 0.923803 | 404 | 3.84375 | 4 |
Sponge Iron Production By Direct Reduction Of Iron Oxide
This book provides a fascinating study of the very important emerging field of direct reduction in which iron ore is ‘directly reduced’ in the solid-state, using either natural gas or non-coking coal, to produce a highly metallised material, referred to as sponge iron (or direct reduced iron). This intermediate product is subsequently melted in electric arc furnaces or induction furnaces (sometimes even in basic oxygen furnaces) to produce liquid steel. Such a process combination enables steel to be produced without using coking coal, which is an expensive input in the normal blast furnace—basic oxygen furnace route of steelmaking adopted in integrated steel plants.
What people are saying - Write a review
I study this book very carefully and get more knowledge about types of reactions,heat transfer sequence,amount of heat generated and utilised during operation,rotary kiln and cooler details and finally different types of methods for production of DRI.
Type Availability and Demand 1836
Fundamentals of Direct Reduction
Coalbased DR Processes Using Rotary Kilns 82134
Coalbased Processes Using Reactors Other Than Rotary Kiln 135176
|
<urn:uuid:0862e10d-81a4-48b6-8cc9-eacbb0668061>
|
CC-MAIN-2017-47
|
https://books.google.co.za/books/about/Sponge_Iron_Production_By_Direct_Reducti.html?id=IcT6-tg1uz4C&redir_esc=y&hl=en
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806543.24/warc/CC-MAIN-20171122084446-20171122104446-00031.warc.gz
|
en
| 0.90457 | 247 | 2.671875 | 3 |
History, we suspect, was written by a man. There are, literally, thousands of books about gritty, bearded men setting off on long voyages. A quick glance at the ‘Explorers section’ of the local library, or history textbook, will confirm this. But hiding innocuously between books about Marco Polo and Magellan, are stories of equally gritty women who did everything their male counterparts did, except, in a skirt. Sangeetha Sampath Pai combs through history to bring you a three-part series about women explorers through three centuries.
The beginning of the 20th century saw the end of Victorian-era skirts and attitudes. But eyebrows still rose and gossip still ensued when the words ‘Woman Explorer’ were heard. But these brave-hearts were on the other side of the world, and didn’t hear a word of it!
Marguerite Harrison (1879-1967): The Spy Who Travelled The World
Marguerite Harrison could well have been the heroine of a thrilling novel—a beautiful woman, a spy, a reporter, a translator, an explorer. She journeyed extensively, sometimes as an intelligence agent, sometimes as a reporter.
Her travels took her across Europe, to Italy, France, Holland, Germany and even to Lubynka, an infamous prison in Russia. Harrison led an unapologetic, colourful and adventurous life. And we are pretty sure, she didn’t regret a minute of it.
Louise Arner Boyd (1887-1972): The Ice Woman
Louise Arner Boyd had a passion for exploring. But she had no interest in plowing through tropical jungles, sailing on Orient seas or watching animals in the African plains. Instead, Boyd headed north, far up north.
Boyd, the ‘ice woman’, held an endless passion for the Arctic Ocean and landscape. She was the first woman to fly over the North Pole. She led scientific expeditions to Greenland. She studied the effects of polar magnetic fields on radio communication deep in the Artic. No wonder, the media dubbed her ‘The Girl Who Tamed The Artic’.
Amelia Earhart (1897-1937): The Flying Queen
As a child Amelia loved to explore her neighbourhood. Perhaps she was preparing for a time when she would be the first woman to fly across the Atlantic Ocean solo, a journey that immortalised her.
One of 20th century’s greatest explorers, Amelia was flying at a time when there were only a handful of women.
Her death is tragically poetic. She disappeared over the Pacific Ocean while flying—doing what she loved best, exploring the skies, her neighbourhood.
Alexandra David-Neel (1868-1969): The Woman Who Travelled to Forbidden Lands
Alexandra David-Neel expressed an intense desire for freedom and spirituality from a young age.
Little wonder then that she journeyed to Lhasa, Tibet when it was forbidden for foreigners. Alexandra’s fascination with Buddhism and spirituality took her all over the Eastern hemisphere including India, Japan and China.
She has written over 30 books on spirituality and her teachings have inspired many artists, writers and philosophers. And quite a few cubicle-dwellers, as well.
Harriet Chalmers Adams (1875-1937): A Woman Who Travelled A Hundred Thousand Miles
Harriet Chalmers Adams led an action-packed life. Her various avatars included writer, photographer, explorer and war correspondent.
Her travels, if plotted on a world map, would look like the connectivity-maps of airlines, with red lines traversing the length and breadth of the planet. She journeyed extensively through South America, Asia, South Pacific and even retraced Columbus’s route to America and beyond. It is estimated that Harriet covered over hundred thousand miles during her travels.
Delia Akeley (1875-1970): The African Queen
Delia Akeley was born in a quiet little corner of Wisconsin, far away from the golden grass of the African prairies. But Delia had an unconditional love for Africa and made several exploratory trips during her lifetime. She was, by all accounts, completely at home in the bush.
During her travels across the continent, Akeley lived with Pygmies, doing exhaustive research on the then, unknown tribe. Akeley was also the first woman to traverse the entire African continent.
Dame Freya Madeline Stark (1893-1993): The Centurion Explorer
Freya Stark was no ordinary explorer. Where others took a left, she bravely swerved right. She travelled to the remotest corners of the earth, the most inhospitable landscapes. She was the first woman to enter Luristan, Iran. She travelled to Afghanistan at a time when even men were wary of going there.
Her fascination with the Orient and her flair for capturing it beautifully in books, earned her the title ‘The Last of the Romantic Travellers’, one she held for a hundred years, till her death.
Margaret Murie (1902-2003): The Arctic Conservationist
Margaret Murie was a famous author, naturalist and conservationist. Her love for preserving the natural splendour of the world took her to places most of us have only seen in an atlas—Alaska and the Arctic. These rarely-visited lands, often buried under snow even in the height of summer, were no match for Murie. She plowed on through snow, ice and mountains to call the world’s attention to the spectacular wilderness at the roof of the world.
She died on the cusp of the 21st century, trying to make sure the world remained as it was, for 21st century travelers.
|
<urn:uuid:8b688ada-cb05-4c59-9dc3-4148daa3b9a1>
|
CC-MAIN-2014-15
|
http://blog.coxandkings.com/ms-adventure-fearless-female-explorers-from-the-20th-century/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00203-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.955039 | 1,193 | 3.0625 | 3 |
One of the earliest Greek philosophers, Thales of Miletus (c. 7th century BCE), maintained that the universe contained a creative force that he called physis, an early progenitor of the term physics; he also postulated that the world and all living things in it were made from water. Anaximander, a student of Thales, did not accept water as the only substance from which living things were derived; he believed that in addition to water, living things consisted of earth and a gaslike substance called apeiron, which could be divided into hot and cold. Various mixtures of those materials gave rise to the four elements: earth, air, fire, and water. Although he was one of the first to describe Earth as a sphere rather than as a flat plane, Anaximander proposed that life arose spontaneously in mud and that the first animals to emerge had been fishes covered with a spiny skin. The descendants of those fishes eventually left water and moved to dry land, where they gave rise to other animals by transmutation (the conversion of one form into another). Thus, an early evolutionary theory was formulated.
At Crotone in southern Italy, where an important school of natural philosophy was established by Pythagoras about 500 BCE, one of his students, Alcmaeon, investigated animal structure and described the difference between arteries and veins, discovered the optic nerve, and recognized the brain as the seat of the intellect. As a result of his studies of the development of the embryo, Alcmaeon may be considered the founder of embryology.
Although the Greek physician Hippocrates, who established a school of medicine on the Aegean island of Cos around 400 BCE, was not an investigator in the sense of Alcmaeon, he did recognize through observations of patients the complex interrelationships involved in the human body. He also contemplated the influence of environment on human nature and believed that sharply contrasting climates tended to produce a powerful type of inhabitant, whereas even, temperate climates were more conducive to indolence.
Hippocrates and his predecessors were concerned with the central philosophical question of how the cosmos and its inhabitants were created. Although they accepted the physis as the creative force, they differed with regard to the importance of the roles played by earth, air, fire, water, and other elements. Although Anaximenes, for example, who may have been a student of Anaximander, adhered to the then-popular precept that life originated in a mass of mud, he postulated that the actual creative force was to be found in the air and that it was influenced by the heat of the Sun. Members of the Hippocratic school also believed that all living bodies were made up of four humours—blood, black bile, phlegm, and yellow bile—which supposedly originated in the heart, the spleen, the brain, and the liver, respectively. An imbalance of the humours was thought to cause an individual to be sanguine, melancholy, phlegmatic, or choleric. These words persisted in the medical literature for centuries, a testament to the lengthy popularity of the idea of humoral influences. For centuries it was also believed that an imbalance in the humours was the cause of disease, a belief that resulted in the common practice of bloodletting to rid the body of excessive humours.
|
<urn:uuid:45051399-9d27-44e1-8cb2-3537fec5212f>
|
CC-MAIN-2023-14
|
https://learnhtml.foobrdigital.com/theories-about-humankind-and-the-origin-of-life/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00295.warc.gz
|
en
| 0.97973 | 687 | 3.625 | 4 |
One SJCOG performance measure tracks average energy usage per household. The lack of available sources tracking annual energy usage makes monitoring this indicator over time difficult. However, SJCOG’s Envision Tomorrow modeling tool (which takes existing and previous place conditions and analyzes community growth patterns) measures energy usage per household to better understand environmental impact, household budget, and building efficiency of San Joaquin County households. This graph shows annual residential energy use, per 1,000 BTU per unit, in 2015. According to the graph, the most efficient housing types and energy categories are multi-family and single-family small units.
Total Residential Electricity Consumption
Naturally, total residential energy consumption has increased throughout the years. However, some years saw unchanging or decreasing energy from the previous year despite residential growth, indicating less energy usage per person or greater energy efficiency. In 2019, San Joaquin County residents collectively consumed nearly 1.9 million kilowatts (1,900 gigawatts) of electricity and 90 million therms of gas.
Total Residential Gas Consumption
Residential Electricity and Gas Consumption per Capita
|
<urn:uuid:3d652fee-d919-489a-b8ff-43f2b9b315ce>
|
CC-MAIN-2023-40
|
https://www.sjcog.org/255/Household-Energy-Usage
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511406.34/warc/CC-MAIN-20231004184208-20231004214208-00707.warc.gz
|
en
| 0.914777 | 223 | 3.09375 | 3 |
The mean attendance of a school for the first 4 days of the week was 780 and for the
first five days it was 840.How many were present on Friday?
Pls answer it very quickly.
Let the first five days be M, T, W, Th and F.
For the first 4 days, mean attendance is 780, therefore we have:
(M + T + W + Th )/4 = 780
(M + T + W + Th ) = 3120 ……(i)
For the first 5 days, mean attendance is 840, therefore we have:
(M + T + W + Th + F )/5 = 840
(M + T + W + Th + F) = 4200 ……..(ii)
By subtracting (i) from (ii), we get
F = 1080.
Therefore, the number of students present on Friday is 1080.
|
<urn:uuid:3cf7901f-4934-46e5-ad52-d68dc01f5ad0>
|
CC-MAIN-2020-05
|
https://www.meritnation.com/ask-answer/question/the-mean-attendance-of-a-school-for-the-first-4-days-of-the/data-handling/2861300
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00245.warc.gz
|
en
| 0.927496 | 190 | 2.671875 | 3 |
by Ralph Buoniconti and Cheryl Smith
August 24, 1992 is a date that many residents of Florida are likely to remember the rest of their lives. That, of course, was the day Hurricane Andrew devastated southern Florida, delivering an unprecedented blow to the states economy and the confidence of residents in the ability of their homes and businesses to withstand another strong hurricane.
As the scope of the changes required became evident to construction professionals, the county created the Metro Dade County Building Code Evaluation Task Force. Studies were conducted to determine how so many buildings failed, and to provide more rigorous testing and evaluation procedures to help improve the performance of hurricane-resistant building materials.
Research identified the loss of windows, doors, and roofing materials as a major cause of catastrophic building loss in a hurricane. The reason is simple to understand: when windows and doors fail, the integrity of the building envelope is breached, producing intense internal pressurization of the structure. The main culprit by far in breaching a buildings "envelope" was found to be windborne debris.
In recognition of this, Dade County adopted tough new building standards in 1993 that govern the performance of glazing. In essence, the South Florida Building Code requires that every exterior opening in a house or business be protected against flying debris propelled by hurricane-force winds. Such protection can consist of shutters or impact-resistant windows, doors and skylights.
Similar codes and testing procedures have been adopted by the counties of West Palm Beach, Monroe and Broward in Florida. In addition, as of July 1998, the Texas Department of Insurance will require hurricane glazing protection in 14 counties at high risk of wind storm damage.
The tests devised in Florida to determine whether adequate protection exists consist of two major components: impact testing and pressure testing. To measure impact performance, two different kinds of "missiles" are shot at the glazing. A large missile, defined as a nine-pound two by four, is launched from an air cannon at 34 mph (50 ft/second). Manufacturers seeking hurricane resistance qualification supply three test specimens, each of which must survive two impactsone in the center and one in the cornerwithout penetration. According to the new codes, windows, doors and skylights installed 30 feet or less from ground level must be made of materials that pass the large missile tests.
A second impact test uses smaller missiles because it is assumed that even in hurricane winds, large objects are unlikely to be traveling at high speeds (if at all) above 30 feet. The test missile is intended to represent roof gravel weighing two grams moving at 55 mph (80 ft/second). Again, manufacturers supply three specimens, but in this test 30 impacts are required on each specimenten in the center, ten along the edge, and ten near the corner. All three windows must survive the impacts without penetration.
However, the tests that evaluate the performance of these materials have themselves had to weather some controversy. The "roof gravel" is a potential source of inconsistency. Some products, may pass with some types of rock and fail with others because some gravel is made of softer minerals or has a more forgiving shape. Current Dade County tests, still widely followed, are due to be updated. Newer ASTM and TDI standards use steel balls weighing exactly two grams in an effort to eliminate the sources of variability.
For the final phase of testing, glazing samples are subjected to cyclic pressure testing. Hurricane winds dont blow at a constant rate, so this test seeks to simulate hurricane force loading at 9,000 wind cycles, where each cycle is a function of the maximum wind speed (converted to pressure) to which the product will be "rated."
Each window, door and skylight manufacturer determines the pressure at which its product will be tested. Dade County requires a minimum of 60 pounds per square foot for the first three floors of a building in a non-coastal environment. Its important to remember, however, that wind pressure requirements vary depending on where the glazing will be installed. Higher pressure performance is necessary near the corners of a building or where surrounding structures cause a special wind effect. In addition, certain coastal conditions can produce pressures in excess of 200 pounds per square foot and therefore require correspondingly more rigorous certification levels.
To pass, a manufacturers specimens must pass the impact tests without penetration and during the cyclic pressure test, the specimens must remain in their frames and have no tear or crack longer than five inches and wider than 1/16 inch through which air can pass. In addition, weathering tests are conducted on all of the glazing materials used in the system to assess how the materials can be expected to perform in a storm even after long-term exposure to the elements.
Ralph Buoniconti is an engineer at GE Structured Products, Pittsfield, MA, with special expertise in testing and regulatory matters. Cheryl Smith is a Structured Products marketing specialist for the hurricane protection industry who works with glazing system manufacturers, architects and developers nationwide.
LEXAN® is a registered trademark of General Electric Company.
One material to which glaziers are turning for glazing that maintains its integrity storm after storm is LEXAN glazing from GE Structured Products of Pittsfield, MA. In repeated tests performed on standard glazing made from LEXAN sheet, the large missile literally bounces off the material. Both vertical glazing for windows and doors made from MR10 sheet and overhead glazing for skylights made from XL10 sheet perform equally well. A high-performance coating developed by the manufacturer helps the materials further resist abrasion and UV damage.
Besides passing Dade County and other south Florida hurricane tests, glazing made from LEXAN sheet complies with all major model building codes. RB and CS
To make buildings less vulnerable to harsh weather conditions that can cause glazing failure, Glasslam NGI of Pompano Beach, FL, has developed Safety-Plus®. This laminated glass product is meant to address the problem of PET films and laminates that hold a glazing lite together, but do nothing to hold the lite in its frame: a potentially lethal threat.
Safety-Plus consists of a three-ply interlayer, the middle layer a heavy-duty PET film. The PET film is cut longer than the glass size and mechanically held into the glazing frame. Silicone used in the installation adheres to the PET film, causing the lite to be held securely in place despite any possible glass breakage, according to the company.
© Copyright 1998 Key Communications, Inc. All rights reserved. No reproduction of any type without expressed written permission.
|
<urn:uuid:f4e60956-eba9-4e1c-8a75-8ebaede80b33>
|
CC-MAIN-2013-20
|
http://www.usglassmag.com/USGlass/1998/9803/9803hurricane.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705958528/warc/CC-MAIN-20130516120558-00083-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943247 | 1,365 | 3.328125 | 3 |
“Failure to acknowledge the reality of trauma and abuse in the lives of children, and the long-term impact this can have in the lives of adults, is one of the most significant clinical and moral deficits of current mental health approaches.
Trauma in the early years shapes brain and psychological development, sets up vulnerability to stress and to the range of mental health problems.”
Professor Louise Newman, Psychiatrist, Director, Centre for Developmental Psychiatry and Psychology, Monash University (from ASCA Guidelines for Treatment of Complex Trauma1)
With thanks to ASCA for guidance and material extracted (or plagiarized) from their website in writing this article.
Okay, now that I’ve had my say about how underfunded, overwhelmed and inept government services are, there’s a few things survivors absolutely need to know to ensure that they get the correct diagnosis and adequate care – not the least of which is that unless you control your own destiny within the mental health system you’ll typically receive misdiagnosis and inadequate treatment/counselling/medication. The other extremely important thing to understand is that if you were abused as a child by a carer or someone that you should have been able to trust and this was never acknowledged/addressed/treated you, in all probability, suffer from complex trauma, otherwise known as Complex Post Traumatic Stress Disorder (C-PSTD). If you’re seeing a psyche or any other mental health provider raise this with them. Treatment guidelines can be accessed here. Ensure that your therapist is familiar with these.
About Complex Trauma
How a person responds to trauma is dependent on many things, including how pervasive and prolonged the trauma was, the context in which it took place (i.e. family setting, school, institution etc) and type of life experiences before and after the trauma was experienced, particularly in regard to the amount of appropriate care and support you received after the fact.
Complex trauma generally refers to traumatic stressors that are interpersonal. That is, they are premeditated, planned, and caused by other people, such as being violated and/or exploited by another person.
It stands to reason that the most devastating types of trauma are those that occur at the hands of carers. Child abuse, occurring in the context of a trust relationship, involves significant betrayal of the responsibilities of those relationships. In addition, it is often private and the child is cautioned or threatened to not disclose its occurrence. Unfortunately, when such abuse is observed or a child does disclose the abuse, adequate and helpful response is lacking, resulting in another betrayal and another type of trauma that has been labeled secondary traumatization or institutional trauma. It is for these additional reasons that complex traumatisation is often compounded and cumulative and becomes a foundation on which other traumatic experiences occur repeatedly over the course of the individual’s life. Research studies have repeatedly found that when a child is abused early in life, especially sexually, it renders him/her much more vulnerable to additional victimization. As a result surviviors of child abuse can become caught in an ongoing cycle of violence and retraumatization over their life course, especially if the original abuse continues to go unacknowledged and the after-effects unrecognized and untreated.
Recent studies have shown that abuse and other trauma result in changes in a child’s neurophysiological development that, in turn, among other things, result in changes in learning patterns, behavior, beliefs and cognitions, identity development, self-worth, and relations with others. Those who endure trauma for an extended period under the age of twelve are proven to present more devastating outcomes in adulthood than complex trauma afflicted upon someone who is already an adult. This is because morality, social skills, and life skills are all taught in childhood at crucial developmental periods for the brain and the brain develops in accordance to what it learns in relation to these experiences (the good news is that neuroscience has now shown that the brain can change through neuroplasticity).
Because complex trauma commonly results from trauma occurring in early childhood, healing takes more effort, dedication and time. It is imperative that a trauma specialist is familiar with C-PTSD and offers a sensitive, structured treatment plan.
Symptoms of Complex Trauma
- Emotional Regulation. May include persistent sadness, suicidal thoughts, explosive anger, or inhibited anger. (Difficulty regulating the emotions, swinging from one state to another)
- Consciousness. Includes forgetting traumatic events, reliving traumatic events (flashbacks), or having episodes in which one feels detached from one’s mental processes or body (dissociation)
- Self-Perception. May include helplessness, shame, guilt, stigma, and a sense of being completely different from other human beings.
- Distorted Perceptions of the Perpetrator. Examples include attributing total power to the perpetrator, becoming preoccupied with the relationship to the perpetrator, or preoccupied with revenge.
- Relations with Others. Examples include isolation, distrust, or a repeated search for a rescuer.
- One’s System of Meanings. May include a loss of sustaining faith or a sense of hopelessness and despair.
- Survivors may avoid thinking and talking about trauma-related topics because the feelings associated with the trauma are often overwhelming.
- Survivors may use alcohol or other substances as a way to avoid and numb feelings and thoughts related to the trauma.
- Survivors may engage in self-mutilation and other forms of self-harm (as a way to cope with overwhelming emotion and/or to numb out pain)
- Survivors who have been abused repeatedly are sometimes mistaken as having a “weak character” or are unjustly blamed for the symptoms they experience as a result of victimisation.
Complex Trauma vs. PTSD (Post Traumatic Stress Disorder)
The diagnosis of PTSD accurately describes the symptoms that result when a person experiences a short-lived trauma. For example, car accidents, natural disasters, and rape are considered traumatic events of time-limited duration. However, complex traumas continue or repeat for months or years at a time. Clinicians and researchers have found that the current PTSD diagnosis often does not capture the severe psychological harm that occurs with such prolonged, repeated trauma.
The effect of these traumatic events on the person is cumulative. When a person is repeatedly exposed to traumatic stress, disruptions can occur in brain functions and structures, endocrinological function, immunological function, and central and autonomic nervous system arousal. These biological disruptions interact with psychological, emotional, spiritual, and cognitive processes and a variety of disturbances can result that go beyond the re-experiencing, avoidance/numbing, and arousal symptoms that characterize PTSD.
Complex trauma influences deep layers of the personality. Some of the phenomena parallel those that appear in “regular” PTSD but some do not. In general, people who have experienced complex trauma tend to act in ways that endanger themselves –not only do they have difficulty controlling urges and emotions, they have trouble remaining in therapy.
Is the treatment of complex trauma different to that of PTSD?
Yes. Since the difficulties affect the deepest level of the personality –belief system, self-image and control of emotions and urges – treatment does not focus directly or solely on the trauma but rather on the entire person. For this reason, and because the victim has trouble maintaining a therapeutic alliance with a therapist, treatment lasts longer than therapy focused PTSD. CBT (Cognitive Behavioral Therapy) approaches can be useful for PTSD and even for C-PTSD to a degree, but will most often not be enough for someone seeking to heal from C-PTSD, as the approach needs to go deeper and be more holistic.
What Is Good Trauma Therapy? What Is a Good Trauma Therapist?
Historically, mental health treatment has been treated according to the “medical model”: the “sick” patient treated by the all-knowing doctor, who has the responsibility, the expertise, and the tools to cure the illness. The patient’s job is to be compliant and to follow orders. Don’t buy into this rubbish EVER!
As Dr Carl Rogers stated in 1957, “The clinician needs to be a “real” human being. Not an all knowing, all powerful, rigid, and controlling figure.”2
Thus, an effective trauma therapist empowers the survivor rather than imposes a cure. The idea of the therapist and client working together as partners, sharing responsibility and expertise, is still radical in many mental health settings. However, for trauma survivors, this is in many ways the key to success in therapy.
Studies in the past decade have determined that a core factor of positive therapeutic outcome is the therapeutic relationship. The strength of this is often more significant than the approach (or modality used). For instance, Lambert M.J. et al states:
“Common factors such as empathy, warmth, and the therapeutic relationship have been shown to correlate more highly with client outcome than specialized treatment interventions. The common factors most frequently studied have been the person-centered facilitative conditions (empathy, warmth, congruence) and the therapeutic alliance.” 2
For this reason, it is imperative that survivors seek out a therapist who they feel safe with and can trust. Sometimes people may get an immediate “vibe” from a therapist, but otherwise it would be fair to, at least, attend for several sessions to be clear about the decision to continue, or otherwise.
It is important to note is there are many approaches to therapy, and most good therapists are trained in several approaches and use them in combination. Approaches may be long or short-term and may be focused primarily on the past or on the present, but all should aim to alleviate distress and help you to learn how to acquire more effective coping strategies.
A lot of the process of healing is completely subjective and shaped by what is needed at the time. This also applies more broadly to the types of therapists and how to find one. Where a survivor is at and how successfully the therapist can work with them safely and appropriately is of utmost importance.
Different therapists and approaches are valid for someone and may simply not be right for another at that point in their journey. So, there is not a single answer to treatment, but a professional understanding of the client’s issues and past and competence to support that is critical, as is them being a person with whom an empathic, safe and supportive rapport can be established.
The four most important things a therapist has to offer a survivor are:
Consumers of mental health services have contributed to the following list of things to look for in a therapist:
- Find a therapist you feel comfortable with. Therapy is not an easy process and your therapist is not there to be your friend.
- Find a therapist who respects your individuality, opinions, and self.
- Find a therapist who will not get upset if you disagree with what he or she has said, but instead encourages you to express yourself when you do not agree.
- Find a therapist who never minimises your experiences and always respects your feelings.
- Find a therapist who will not try to force you to talk about things that you might not be ready for.
- Find a therapist who does not spend time talking about his or her own problems. Those sessions are for you, not your therapist.
- Find a therapist who wants neither a friendship nor a sexual relationship with you outside of your counseling sessions.
- Find a therapist who is more than willing to discuss problems that might arise between the two of you within the therapist/client relationship.
- Find a therapist who will help teach you new and healthier ways to cope.
- Find a therapist who will never make you feel like a failure or cause you to believe they are disappointed in you if you have a slip or a relapse. (Source: http://www.sidran.org/sub.cfm?contentID=62§ionid=4)
For more information on complex trauma and treatment options contact:
ASCA (Adults Surviving Child Abuse) at http://www.asca.org.au/
Professional Support Line: 1300 657 380
MSSAT (Male Survivors of Sexual Abuse Trust) www.survivor.org.nz/
Phone/Fax: +64 (03) 377 6747
Online Resources (with thanks to Dragan from ASCA for the link) http://www.klinic.mb.ca/resources.htm
Material for this article sourced (or extracted/plagiarized) from:
Special thanks to Dragan and Leo from ASCA for input and material (content) in drafting this article.
- Adults Surviving Child Abuse 2012 Practice Guidelines for Treatment of Complex Trauma and Trauma Informed Care and Service Delivery Adults Surviving Child Abuse: Authors Kezelman C.A. & Stavropoulos P.A
- Rogers, C. R. (1957). The Necessary and Sufficient Conditions of Therapeutic Personality Change. Journal of Consulting Psychology, 21, 95-103.
- By Lambert, Michael J.; Barley, Dean E. Psychotherapy: Theory, Research, Practice, Training, Vol 38(4), 2001, 357-361.
|
<urn:uuid:b5b4262c-5743-4d52-862c-9bcdf53c800e>
|
CC-MAIN-2014-35
|
http://newzealandchildabuse.com/adult-survivors-of-child-abuse-and-complex-trauma/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500809686.31/warc/CC-MAIN-20140820021329-00341-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.942345 | 2,746 | 2.6875 | 3 |
American Made Lanterns
The Origins of The Tubular Lantern
The tubular lantern was invented by Aylett R. Crihfield. He received the first patent for a tubular lantern on April 2, 1867. Not long after, on May 28th, 1867, John H. Irwin, who was already a lantern inventor, also received a patent for a tubular lantern. For about 5 years, Crihfield’s patent was listed first on tubular lanterns produced, followed by Irwin, however Irwin having the backing of the companies began suing Crihfield, eventually Crihfield lost the royalties he was getting from the production of his lanterns. Royalties for those lanterns were worth tens of thousands of dollars at the time. His patent was removed from lanterns being made and as a result, Irwin is often credited with making the first tubular lanterns. Irwin would go on to patent many more lantern designs and improvements.
I’ve added both of their patents above, so you can made the decision yourself as to who invented the tubular principal. In the 1876 court case “IRWIN et al. v. DANE et. al.” Irwin’s case over Crihfield was that his patent “was the first to conceive the idea, and reduce it to a practice, of making an operative lamp, which should dispense with the chimney, and be supplied with fresh air by means of the bell and tube, as shown in his device, and for an operative combination of the globe and perforated bottom to the globe, by which a clear burning lantern was produced”. The court settled that Irwin was the first to invent a lantern with “a device for securing a blast of fresh air to the burner of a lamp by means of an inverted funnel or bell and one or more tubes, by which the air heated by the flame of the lamp is caused to rise into the tube, and be thence conducted into a close reservoir below the flame”. Essentially, what it came down to between Crihfield and Irwin was the bell shaped piece over the burner, rather than Crihfield’s design.
While I do not believe Irwin was the true inventor of the tubular principal, I do concede he invented the ‘blast’ idea, where the exhausting air directly blasted through the tubes with fresh air, making lanterns burn brighter. There is no doubt Irwin steered lanterns in the direction we know them today. His inventions and contributions to lanterns were instrumental in developing lanterns all around the world. Irwin passed away July 29, 1890.
My American Lanterns
While it’s obvious I’m a Canadian who loves Canadian lanterns, some lanterns from the USA have made their way into my collection for one reason or another. I love really early lanterns and the American ones are just fantastic. There is such a variety of American manufacturers, and lantern designs I will only ever be able to scratch the surface of them with my small collection. If you have questions about American lanterns, as always I will do my best to answer them if you contact me. My collection of non-Canadian lanterns is quite small, so I’m keeping them all to this page.
|
<urn:uuid:9365721a-8a76-43bc-ba3e-76b475c785ba>
|
CC-MAIN-2023-14
|
https://www.ontariolantern.ca/tubular-lanterns/american-lanterns/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00751.warc.gz
|
en
| 0.97444 | 680 | 3.40625 | 3 |
Every four years, college campuses are bombarded with volunteers trying to get register young voters, but this year many of these volunteers found that many of the students were already registered to vote.
Many colleges have established their own voting registration drives during orientation, class registration, and through online initiatives. This has resulted in thousands of students ensuring their right to vote, by updating their addresses or requesting an absentee ballot from their home state.
Dan Lewis, the director of Northwestern University's Center for Civic Engagement, one of the developers of the UVote project, claims that registration has become so difficult that, "You almost have to have a Ph.D now to figure out how to do it if you're not sitting in the same house for the past 20 years."
Dan Lewis has agreed with his university's president, saying that "We're not always going to have the incredible excitement among 18- to 22 year-olds that you did in 2008, so I think it's an obligation." He also agreed with the president's statement that, "We're supposed to teach citizenship."
Mr. Lewis explained that last year at Northwestern they began to register incoming freshman to vote during orientation as they picked up their student IDs. This year Northwestern was able to register almost 95 percent of the eligible freshmen. Mr. Lewis also stated that the university had expanded the effort to several other campuses including the University of Illinois at Chicago and Stanford University.
At Standford they used a similar method to get students to vote including registering students to vote while they waited on line to register their bikes with the university. This process lead to more than 700 new voters registered in two weeks.
Shelby Taylor, the digital and communications director for the Bob Graham Center at the University of Florida, has also developed a method of registering more young collegiate voters. Her goal is similar to Northwestern's — to create informed, engaged citizens. However, the registration outreach programs for students at the University of Florida is fairly distinct from the hands on approach taken at Northwestern.
Ms. Shelby has said, "One of the first steps we took was identifying a program called Turbovote, that we could partner with, that allows students to register with online digital tools, update their addresses, request absentee ballots, and receive updates on voting deadlines. From self-reported data from surveys on campus, we found we had a high level of students who were registered to vote, but in different areas of the state. So a lot of students leave their vote at home. We took out adds in the student papers, local media, got featured on the news, and we had an email sent to every student from the president."
Ms. Shelby went on to say that the program has been successful, "within three hours of the president's emails, 400 signed up for the Turbovote service."
Both Lewis and Shelby were adamant about their outreach programs not having partisan motivations, emphasizing their desire to increase students civic involvement.
|
<urn:uuid:452b9037-5693-4a70-acbe-3c60dd52baff>
|
CC-MAIN-2013-48
|
http://www.thetakeaway.org/story/243936-colleges-take-registering-incoming-freshman-vote/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164046334/warc/CC-MAIN-20131204133406-00015-ip-10-33-133-15.ec2.internal.warc.gz
|
en
| 0.983291 | 599 | 2.71875 | 3 |
Residents continue to drill for borewells, unaware that registration is now mandatory
In a bid to check unauthorised drilling of borewells in Bengaluru city, the Bangalore Water Supply and Sewerage Board (BWSSB) is planning to launch an initiative to identify unregistered borewells by visiting individual houses.
Bengaluru city is suffering from acute water shortage and fast depleting groundwater levels. Three months ago, the Karnataka government made it mandatory to have authorisation to dig wells or borewells in Bengaluru and 11 other districts of the state. The directive applies to Bengaluru east, north and southern areas and 32 taluks spread over the 11 districts. The government had also decided to penalise owners of illegal wells and borewells in these areas by disconnecting their electricity supply.
The new rule makes it mandatory for all existing borewell owners as well as those drilling new borewells or open wells, both for domestic and commercial purposes, to pay a fee and register in a prescribed format. Failure to comply can attract a penalty of up to Rs. 10,000 and/or imprisonment up to three years.
Unaware of the decision, residents continue to drill borewells. An official says, “When the rule was notified, not much was done to publicise it. So, now we are thinking of visiting each and every household.” According to officials, there is a lack of awareness among residents and hence only 100 households from the city have registered so far.
The government notification, dated December 3, 2012, made borewell registration compulsory under the Karnataka Groundwater (Regulation and Control of Development and Management) Act, 2011. The notification declared BWSSB as the implementing authority in the city. Existing borewell owners were also given time till March 31 to register.
More than 300 borewells run dry every month in Bengaluru. Of the 172,000 borewells in the city, 13,000 have been drilled by the BWSSB. Nearly 4,000 of these have gone dry, says BWSSB chief engineer Venkataraju.
City bursting at the seams
A study by the department of mines and geology states that over 2.4 million people in Bengaluru face the prospect of severe water shortage in the near future. Going by the growth in population of the city and the water requirement per person per day, the city harbours about 2.2 million more people than those who can be provided piped water supply.
According to official data, the demand for water in Bengaluru is calculated at 140 litres per capita per day (lpcd). For the current population, the requirement works out to 1,342 million litres per day (mld). The supply of water from the Cauvery (Stages I to IV, Phase I) and Arkavathy (Hesarghatta and Tippagondanahalli reservoirs) rivers works out to 975 mld. Conveyance and other losses reduce the effective supply by 30 per cent to 682 mld. The shortage therefore works out to 660 mld.
To meet part of the deficit, BWSSB draws water from 7,000 borewells to supply 35 mld. There are about 105,500 private borewells registered with BWSSB, from which an estimated 106 mld requirement is met. Then there are over 200,000 unregistered private borewells from which another 200 mld is drawn. The total supply from groundwater sources is therefore estimated at 341 mld.
Taken together, the overall supply of water, from the Cauvery river as well as from groundwater sources, to the city is 1,023 mld. On a total demand of 1,342 mld, that still leaves the city with 319 mld less than the requirement or over 2.2 million people without their quota of water.
The mining department’s study shows the total recharge of groundwater from various sources is about 90 mld. Against a withdrawal of 341 mld from the ground, the recharge is 90 mld. In other words, groundwater is overdrawn by a massive 378 per cent.
Tags: Web Specials
, Bangalore water supply and sewerage board (BWSSB)
, Department of Mines and Geology
, Groundwater depletion
, Karnataka Groundwater (Regulation and Control of Development and Management) Act
|
<urn:uuid:ada4eda1-a00c-4c36-b311-c1a5d20f8777>
|
CC-MAIN-2014-23
|
http://www.downtoearth.org.in/content/bengaluru-water-utility-launch-door-door-drive-check-illegal-borewells
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510259834.25/warc/CC-MAIN-20140728011739-00110-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.945562 | 907 | 2.515625 | 3 |
In light of a new independent study, the
World Wildlife Fund (WWF),
a conservation organization, is
urging people to drink tap water, which is often
as good as bottled water, for the benefit of the environment
and their wallets.
According to the study, Bottled
Water: Understanding a social phenomenon, commissioned
by WWF, bottled water
may be no safer, or healthier than tap water in many countries
while selling for up to
1000 times the price. Yet, it is the fastest growing
drinks industry in the world and is estimated to be worth
US $22 billion annually.
The study reveals that the bottled water
market is partly fueled by concerns over the safety of municipal
water and by the marketing of many brands that portray them
as being drawn from pristine sources and as being healthier
than tap water. However, some bottled waters only differ from
tap water in the fact that they are distributed in bottles
rather than through pipes.
In fact there
are more standards regulating tap water in Europe and the
US than those applied to the bottled water industry.
"Our attitudes towards tap water
are being shaped by the pollution which is choking the rivers
and streams which should be veins of life," argues Richard
Holland, Director of WWF's
Living Waters Campaign. "We must clean up and properly
protect these waters at source, and not just at the treatment
works, so that we can all rest easy in drinking from the tap."
The study also finds that every
year 1.5 million tons of plastic are used to bottle water.
Toxic chemicals can be released into the
environment during the manufacture and disposal of the bottles.
Furthermore, a quarter of the 89 billion liters of water bottled
worldwide annually are consumed outside their country of origin.
Emissions of the green house gas carbon dioxide, caused by
transporting bottled water within and between countries, contribute
to the global problem of climate change.
"Bottled water isn't a long term
sustainable solution to securing access to healthy water,"
said Richard Holland. "Clean water is a basic right.
Protecting our rivers, streams and wetlands will help ensure
that tap water remains a service which delivers good quality
drinking water for everyone at a fair price."
HERE for the full report (PDF file).
Wildlife Fund - Living Waters Campaign
|
<urn:uuid:0319292f-853b-40ab-9d00-1f7d2ae361e8>
|
CC-MAIN-2014-41
|
http://articles.mercola.com/sites/articles/archive/2001/05/23/bottled-water-part-one.aspx?aid=CD12
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131545.81/warc/CC-MAIN-20140914011211-00041-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
en
| 0.947633 | 498 | 3.265625 | 3 |
The study of respiratory mechanics concerns the motion of structures that control the movement of air into and out of the lungs. One set of structures controls intrathoracic pressure, which drives gas flow; these structures comprise the chest wall. Another system maintains the patency of the extrathoracic airway proximal to the glottis; the components of this system comprise the upper airway. This chapter will describe the functional consequences of anesthetic-induced alterations in the activation of skeletal muscles that control the chest wall and upper airway (often referred to as respiratory muscles). Anesthetics also have important effects on smooth muscle lining the airways, which also regulates gas flow within the lungs. This topic is reviewed elsewhere and is beyond the scope of this chapter. Historically, simple observation of external respiratory system motion provided a powerful tool to guide the proper administration of anesthesia . More recently, alterations in the control of respiratory muscles that lead to changes in respiratory mechanics have been invoked to explain much of the impairment of gas exchange observed during and after clinical anesthesia. Thus, anesthetic-induced changes in respiratory muscle activation and respiratory mechanics can have profound clinical consequences.
|Original language||English (US)|
|Title of host publication||Pharmacology and Pathophysiology of the Control of Breathing|
|Number of pages||49|
|State||Published - Jan 1 2005|
ASJC Scopus subject areas
|
<urn:uuid:c225c2a9-11cb-430b-bc02-6a1e79b2c2b9>
|
CC-MAIN-2023-14
|
https://mayoclinic.pure.elsevier.com/en/publications/general-anesthesia-and-respiratory-mechanics
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00702.warc.gz
|
en
| 0.867897 | 348 | 2.75 | 3 |
Part-Time Four-Wheel drive
The most basic type of four-wheel drive system, part-time four-wheel drive operates in two-wheel drive mode (usually rear-wheel drive) all of the time on dry pavement. Four-wheel drive can be engaged only off-road or when the pavement is very slippery because this system forces the front and rear wheels to travel at the same speed. Using four-wheel drive on dry pavement can damage the drivetrain and cause extreme tire wear. Virtually all part-time four-wheel drive systems also employ a two-speed transfer gearbox.
|
<urn:uuid:5a770ba3-1132-4e24-913b-0c6cb49147ff>
|
CC-MAIN-2013-20
|
http://www.carlist.com/autoglossary/term.php?id=92
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705318091/warc/CC-MAIN-20130516115518-00022-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.941938 | 122 | 2.78125 | 3 |
Source: Journal of Digital Learning in Teacher Education, Volume 28, No. 1, p. 16-26. Fall, 2011.
(Reviewed by the Portal Team)
Teaching teachers how to conduct an observation is a vital step in the analysis of teaching that perhaps is often skipped.
To address this gap in teacher preparation, the researchers developed an online workshop for teacher trainees.
The research questions this study explored were:
1. In what ways did an online workshop affect teacher trainees’ ability to recognize and describe teaching behaviors in a video observation?
2. Do participants believe they will apply their newly acquired observation skills in their own teaching practice?
Design of the Online Workshop
To prepare a group of teacher trainees to interpret videos of their own teaching, the researchers developed a self-paced, online workshop to introduce micro-ethnographic techniques for observing and analyzing teaching through video.
A series of video tutorials and activities guides teacher trainees through the process of viewing the same video clip through different lenses.
They are asked to reflect on student response opportunities, teacher use of praise, and feedback to student error.
In Module 1, the participants watch an instructional video that defines “response opportunities” for new teachers in terms of who the teacher chooses to call on to respond to a prompt or question and whether it is the whole class, or a student seated near or far from the teacher.
The video shows viewers how to complete the observation worksheet for response opportunities and guides them through one example.
The teacher candidates repeat the process in Modules 2 and 3.
Each module has a unique explanation of the teacher behavior to observe and a video clip that shows an example.
The use of praise in the second module is defined as type of praise and purpose of praise. The third module centers on response to error, in which teacher candidates asked to look for what the teacher does when a student provides an incorrect answer.
The three modules are followed by a posttest , which parallels the pretest in involving participants in viewing the same video they had selected in the pretest, with similar questions about what they are seeing.
Participants in in this study were forty-seven teacher candidates enrolled in four sections of a master’s-level seminar in teaching English to speakers of other languages (TESOL) at a large, urban, northeastern college of education.
The participants completed the entire workshop, including the pretest, posttest, and all three training modules, and the Blackboard course captured the results.
For each of the three online modules, teacher candidates completed an observation worksheet and then responded to five open-ended questions about that module.
Data collected from teacher candidates’ observation worksheets and responses to open-ended questions after each of the three online modules indicated that they were able to see, code, and describe the behavior that they were being directed to observe.
Therefore, the results showed that this training led to an increased awareness of the teacher’s actions in terms of how they related to or created student involvement.
Furthermore, the participants at the end of the workshop echoed their perception that video was important for the teaching profession.
In addition, 53% of the participants pointed out that the training helped them reflect on their own teaching.
Approximately one-third felt that they learned classroom observation and analysis skills from the online workshop.
In addition, 28% stated that they learned teaching strategies from the training videos.
The results indicate that though these teacher trainees were relatively new to video observation of teaching, most of them were able to correctly label and provide evidence for the observable teaching behaviors targeted in each observation.
The online workshop met the specific goals of teachers recognizing common patterns in classroom interaction.
The findings suggest that the online format may be particularly conducive to activities involving observation in video, since the online environment affords students the time and space in which to explore and reach deeper understanding.
|
<urn:uuid:997f194a-90ca-4f8d-82a5-f15b75f3adc7>
|
CC-MAIN-2023-40
|
http://education.eng.macam.ac.il/article/3201
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00091.warc.gz
|
en
| 0.955113 | 809 | 3.5625 | 4 |
Description: While some are divesting in oil stocks (see this week’s post on the Caisse’s divestment plans), the Bank of America is predicting oil prices will soar above $100 a barrel for the first time since 2014. Demand for the substance is expected to rise in the face of higher numbers of air travelers, oncoming winter temperatures, and a switch from natural gas to oil in the face of rising gas prices. The price rise may bring more unpleasant impacts to an economy already suffering under the weight of the pandemic.
Date: September 23, 2021
1) Does a prediction of rising oil prices surprise you, given the sluggish Covid economy?
2) If oil prices do rise, will this impact your student finances?
3) In Chapter 9 of Wiley’s Financial Accounting: Tools for Business Decision Making teaches us about reporting and analyzing long-lived assets. For an in-class activity, track down the recent financial statments of an oil and gas company. Determine what are the major long-lived assets on the balance sheet.
Leave a Reply
|
<urn:uuid:81cf59b6-7118-43aa-af43-627c87064e2a>
|
CC-MAIN-2023-23
|
https://wileyaccountingupdates.ca/2021/10/04/oil-at-100-again/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652116.60/warc/CC-MAIN-20230605121635-20230605151635-00663.warc.gz
|
en
| 0.867047 | 283 | 2.671875 | 3 |
Indoor plants are more than just decorative. They can change the atmosphere in an office and improve conditions for office workers.
The modern office is too often a sterile environment, set up for efficiency but a bit too cold and unnatural in its appearance.
‘Greening’ the office by adding some attractive plants will instantly improve the atmosphere and will also make those who work there feel better about their surroundings.
Plants also improve the health and well-being of people working in offices. Breathing dry air produced by air-conditioning systems can cause respiratory and other health problems, and plants help compensate for this drying effect.
Research by NASA in the 1980s showed that indoor plants increase humidity in offices with fixed windows, and that means it applies to just about every air-conditioned building in cities around the world.
Research has also shown that plants actually remove air pollutants by a process that takes place while most office workers are asleep.
Stomata – tiny holes on the undersides of leaves transpire and give off moisture at night. Moisture attracts airborne pollutants that leach down through the plant to the root zone where microbial action can change the pollutants into harmless compounds.
Office Plants Need Care
Like any living thing, indoor plants need regular care and attention. Considering all that they do for us it’s a small favor to ask in return, and indoor gardening can be an enjoyable pastime.
All plants need light. Different varieties need varying amounts of light and individual plants can be chosen to suit the light in different parts of the office.
Flowering houseplants and those with colorful foliage thrive where there’s lots of light, such as in positions near windows. Because indoor lighting tends to be less bright than that in the outdoors, plants needing lots of light will even appreciate a bit of extra help from lamps.
Leafy green plants prefer less light and do well in areas like corners or places shaded by pillars or furniture.
Keep Plants at a Steady Temperature
Temperature preferences vary between plants but since the ambient temperatures in office buildings are based on what’s comfortable for humans it’s not usually a major issue.
Try to keep indoor plants away from locations where there are temperature extremes or fluctuations, including next to heaters or air conditioning vents.
Plants can only go for about two weeks without water but an excessive amount of water can be equally fatal. Test the soil 1 inch (2.5cm) below the surface and if it’s dry add water. Don’t water if the soil in the pot is still damp.
Plants absorb water through their leaves as well as their roots, so spray indoor plants with a fine mist every week or so. It helps keep dust off leaves and freshens up the plants’ appearance.
Some indoor plants including ferns and African Violets require higher humidity than is usually available in offices. To supplement water in the office atmosphere, put a bowl of water near the plants or spray them with a fine mist every day.
Regular Maintenance is Part of Greening
It’s normal for portions of most plants to die off occasionally, and dead sections should be trimmed off to allow new growth to take its place. Use a sharp pair of scissors for the job and don’t try to just tear off the dead bits.
For green indoor plants with broad leaves wipe the leaves with a damp cloth every week or so to remove dust. For indoor plants with hairy leaves like African Violets use a small soft-bristled brush to lightly dust the tops of the leaves.
Thanks to the variety of commercially available fertilizers feeding isn’t a problem for the indoor gardener. Choose from dry, liquid or stick fertilizer sources and read the instructions on the package before using the product.
Many indoor plants will be just fine without any extra feeding so it’s a good idea to always use a bit less than indicated on the label.
Self-watering pots are excellent for office use. They help to retain moisture in the soil, even if the ‘plant-carer’ is away for an extended holiday. Another big advantage is that there are no worries about spills on the office furniture or carpets.
Plants should always have a bit of growing room in their containers. As indoor plants grow transfer them to larger pots – a task that’s best done in early Autumn or Spring.
|
<urn:uuid:8c61c198-7601-4d45-8254-1fc34f25b743>
|
CC-MAIN-2023-40
|
https://farmfoodfamily.com/plants-are-good-for-modern-offices/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511002.91/warc/CC-MAIN-20231002164819-20231002194819-00592.warc.gz
|
en
| 0.937827 | 915 | 3.390625 | 3 |
Computer programming in the punched card era
This article is written like a personal reflection, personal essay, or argumentative essay that states a Wikipedia editor's personal feelings or presents an original argument about a topic. (September 2016) (Learn how and when to remove this template message)
A punched card is a flexible write-once medium that encodes data, most commonly 80 characters. Groups or "decks" of cards form programs and collections of data. The term is often used interchangeably with punch card, the difference being that an unused card is a "punch card," but once information had been encoded by punching holes in the card, it was now a "punched card." For simplicity, this article will use the term "punched card" to refer to either.
Often programmers first wrote their program out on special forms called coding sheets, taking care to distinguish the digit zero from the letter O, the digit one from the letter I, eight from B, two from Z, and so on using local conventions such as the "slashed zero". These forms were then taken by keypunch operators, who using a keypunch machine such as the IBM 029 punched the actual deck. Often another key punch operator would then take that deck and re-punch from the coding sheets - but using a "verifier" such as the IBM 059 that simply checked that the original punching had no errors.
A typing error generally necessitated repunching an entire card. The editing of programs was facilitated by reorganizing the cards, and removing or replacing the lines that had changed; programs were backed up by duplicating the deck, or writing it to magnetic tape.
In smaller organizations programmers might do all their own punching, and in all cases would often have access to a keypunch to make small changes to a deck.
The description below describes an all-IBM shop (a "shop" is programmer jargon for a programming site) but shops using other brands of mainframes (or minicomputers) would have similar equipment although because of cost or availability might have different manufacturer's equipment, e.g. an NCR, ICL, Hewlett-Packard (HP) or Control Data shop would have NCR, ICL, HP, or Control Data computers, printers and so forth, but have IBM 029 keypunches. IBM's huge size and industry footprint often caused many of their conventions to be adopted by other vendors, so the example below is fairly similar to most places, even in non-IBM shops.
A typical corporate or university computer installation would have a suite of rooms, with a large, access-restricted, air conditioned room for the computer (similar to today's server room) and a smaller quieter adjacent room for submitting jobs. Nearby would be a room full of keypunch machines for programmer use. An IBM 407 Accounting Machine might be set up to allow newly created or edited programs to be listed (printed out on fan-fold paper) for proof reading. An IBM 519 might be provided to reproduce program decks for backup or to punch sequential numbers in columns 73-80.
In such mainframe installations, known as "closed shops," programmers submitted the program decks, often followed by data cards to be read by the program, to a person working behind a counter in the computer room. During peak times, it was common to stand in line waiting to submit a deck. To solve that problem, the card reader could be reinstalled (or initially installed) outside of the computer room to allow programmers to do "self-service" job submission.
Many computer installations used cards with the opposite corner cut (sometimes no corner cut) as "job separators", so that an operator could stack several job decks in the card reader at the same time and be able to quickly separate the decks manually when they removed them from the stacker. These cards (e.g., a JCL "JOB" card to start a new job) were often prepunched in large quantities in advance. This was especially useful when the main computer did not read the cards directly, but instead read their images from magnetic tape that was prepared offline by smaller computers such as the IBM 1401. After reading the cards in, the computer operator would return the card deck - typically to one of a set of alphabetically labelled cubby holes, based on the programmer's last initial. Because programs were run in batch-mode processing it might be a considerable time before any hardcopy printed or punched output was produced, and put into these same cubby holes - however, on a lightly used system, it was possible to make alterations and rerun a program in less than an hour.
Dedicated programmers might stay up well past midnight to get a few quick turnarounds. Use of this expensive equipment was often charged to a user's account. A mainframe computer could cost millions of dollars and usage was measured in seconds per job.
Smaller computers like the IBM 1000, 1620 and 1130, and minicomputers such as the PDP-11 were less expensive, and often run as an "open shop", where programmers had exclusive use of the computer for a block of time. A keypunch was usually located nearby for quick corrections - although many of these smaller machines ran from punched tape.
Identification and sequenceEdit
Many early programming languages, including Fortran, Cobol and the various IBM assembler languages, used only the first 72 columns of a card — a tradition that traces back to the IBM 711 card reader used on the IBM 704/709/7090/7094 series (especially the IBM 704, the first mass-produced computer with floating point arithmetic hardware), which could only read 72 of the 80 columns in one pass.
Columns 73-80 were ignored by the compilers and could be used for identification or a sequence number so that if the card deck was dropped it could be restored to its proper order using a card sorter. Depending on the programming language, debugging output statements could be quickly activated and "commented out" by using cards with such statements punched with the comment character (e.g., 'C' in Fortran) in column 80 of the card; turning the card end-for-end would put the 'C' in the leading column, which transformed the now backwards card's contents into a comment while leaving the physical card in place in deck.
(An alternative, imperfect but commonly employed technique to maintain proper card order was to draw one or more diagonal stripes across the top edge of all the cards in a deck.)
In later years, as punch card data was converted to magnetic tape files the sequence numbers were often used as a column in an array as an index value that can be correlated to time sequences, such as in the natural sciences where the data on the cards were related to the periodic output of a measuring device such as water stage level recorders for rivers and streams in hydrology, or temperatures in meteorology. Entire vaults full of card decks could be reduced to much smaller racks of nine-track tapes.
- Fisk, Dale (2005) [Setting 1973; Written 2005]. "Programming With Punched Cards:
A Programmer's Memories of Learning the Craft in 1973" (PDF). Retrieved 2008-11-11.
Article about the programming culture that developed around use of the punched card, following Fisk's experience of "learning the craft" from people around him.
- Columbia University Computing History: The Columbia University Self Service Input/Output Area
- E. I. Organick, A Fortran IV Primer, Addison-Wesley, 1966
|
<urn:uuid:aef6078c-f00e-4c71-a219-cf73cf0d1251>
|
CC-MAIN-2020-16
|
https://en.m.wikipedia.org/wiki/Computer_programming_in_the_punch_card_era
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371880945.85/warc/CC-MAIN-20200409220932-20200410011432-00443.warc.gz
|
en
| 0.960847 | 1,572 | 3.8125 | 4 |
Archaeologists have unearthed evidence of what is believed to be the largest Anglo-Saxon building found in Scotland.
|The dig at Glebe Field, Aberlady, has uncovered the foundations of a large Anglo-Saxon structure dating back to between the 7th and 9th century|
[Credit: Aberlady Angles Project]
Tests on an animal bone found at the scene have confirmed it dates back to between the 7th and 9th century.
Ian Malcolm, from Aberlady Conservation and History Society, described the first date evidence from the site as “very, very exciting”.
He said: “It is evidence that it was an important and a wealthy site.”
Read the rest of this article...
|
<urn:uuid:5bca939e-43d6-40e8-bca1-d8361f6cf194>
|
CC-MAIN-2017-26
|
http://anglo-saxon-archaeology-blog.blogspot.co.uk/2016/07/evidence-of-largest-anglo-saxon.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323682.21/warc/CC-MAIN-20170628134734-20170628154734-00138.warc.gz
|
en
| 0.964114 | 157 | 2.78125 | 3 |
The Central Asia states are redrawing the boundary of their region, so the cartographers will have to run to keep up.
“Central Asia” as a defined place of states and borders is a recent idea, dating to the immediate post-Soviet period when the former republics of Soviet Central Asia – Turkmenistan, Uzbekistan, Tajikistan, and Kyrgyzstan – declared that Central Asia should include the former Soviet republic of Kazakhstan. In 2018, Central Asia is again growing to include the neighbor to the south, Afghanistan.
Central Asia is actively engaging Afghanistan on several fronts, all of which will be necessary if Afghanistan is to join its neighbors as a “normal country.” But projects will need to be executed in a disciplined fashion, unlike efforts such as the Turkmenistan-Afghanistan-Pakistan-India (TAPI) gas pipeline, which since 1995 has generated political maneuvering but no gas.
Central Asia and Afghanistan are the “in between” states and must hang together, as the man said, so they can deal successfully with the regional hegemons, Russia, China, Pakistan, and Iran, and solicit balancing by the regional influencers, the United States, the EU, Turkey, and India.
Connecting Afghanistan to Central Asia will most immediately offer Afghanistan a northern route to Central Asia, across the Caspian Sea to Azerbaijan, and on to Europe. The northern option will motivate Pakistan and Iran to upgrade their infrastructure, secure their territory, and reduce corruption if they want to take part in the export of Afghanistan’s abundant minerals, estimated to be worth more than $3 trillion.
And offering Afghanistan a northern aperture will enable Kabul to be more assertive in its dealings with Pakistan over Islamabad’s sanctuaries for the Taliban and Haqqani Network. It will offer Afghanistan some strategic depth, the irony of which will probably be lost on Pakistan.
Most importantly, becoming part of Central Asia will expand Afghans’ mental vistas as they see the success of positive role models like Kazakhstan and Uzbekistan.
It seems like a curious lash-up: middle income, multiethnic, secular Muslim societies with industry and infrastructure, and the very definition of a failed state, riven by religious extremism, tribalism, and criminality. But that’s not how the locals see it, according to regional observer, Dr. S. Frederick Starr, who says, “It is accepted as given that Afghanistan is part of Central Asia.”
And the connections between the states predate the Soviet Union, the most common reference for the configuration of the region. The Soviets used borders to divide the peoples of a region governed previously by khanates and emirates and without fixed borders. They then installed a north-south transportation system, oriented to Moscow (and the region still lacks good east-west transport links).
As a result of the Soviet borders, each Central Asian state is multiethnic, and Afghanistan is no different as it hosts Uzbeks, Tajiks, Turkmen, and Kyrgyz, and Uzbek is recognized by the Afghan constitution as an official language. These links of language and culture will ease the integration of Afghanistan into Central Asia, as the people on both sides of the Amu Darya see it as not a barrier, but a pathway.
And the local people are eager to explore their shared history: In 2016, an Uzbek delegation restored and landscaped the tomb of Alisher Navoi, the national poet of Uzbekistan, who is buried in Herat, Afghanistan. The Uzbeks also reconstructed the Herat mausoleum of the painter, Kamoliddin Behzod.
So, what are the Central Asian countries doing?
Initially only those countries with healthy natural resource endowments – Kazakhstan, Uzbekistan, and Turkmenistan – will be able to put “bucks on the ground.” Of these, Turkmenistan’s neutrality may make it slow to engage with Kabul, though regional initiatives may draw it out. In the near term, Kazakhstan and Uzbekistan, as the Central Asian countries with the largest economy and the largest population, respectively, will take the lead.
In Afghanistan, Kazakhstan provided developmental assistance, military medics, and Explosive Ordnance Disposal units for demining missions. And Kazakhstan, together with Uzbekistan, provided early and continuing support for U.S. requests to ship military equipment to Afghanistan, allowing the coalition to avoid the on-again, off-again transit routes through Pakistan. The military shipments were also a valuable proof of concept demonstration of the cooperation required for a regional transport system.
In Uzbekistan, the transfer of power to President Shavkat Mirziyoyev, with the death in 2016 of longtime leader Islam Karimov, has seen a flurry of diplomatic activity as Tashkent upgraded ties to its neighbors.
In March, Uzbekistan welcomed Afghan President Ashraf Ghani on his second visit to the country and hosted a conference that declared “strong backing for the National Unity Government’s offer to launch direct talks with the Taliban, without any preconditions.” Tashkent then offered to host peace talks between Kabul and the Taliban. Uzbek President Mirziyoyev set the tone for the event when he declared, “Afghanistan’s security is Uzbekistan’s security.”
Uzbekistan plans to build on its past efforts in Afghanistan. In 2011, it completed an Asian Development Bank project to extend its rail line from the entrepot of Termez to Mazar-i-Sharif, which is now responsible for about half of Afghanistan’s imports. Tashkent recently pledged $500 million to the construction of the $1.8 billion, 657 km rail line from Mazar-i-Sharif to Herat in the northwest, near the borders of Iran and Turkmenistan.
In 2009, Uzbekistan commissioned a 150 megawatt electric transmission line from Tashkent to Kabul, and in March 2018 it announced it would soon start work on the 260-kilometer Surkhan-Pul-e-Khumri line to increase electricity exports to Afghanistan by 70 percent. Just for good measure, Mirziyoyev ordered that the rate for electricity deliveries to Afghanistan be dropped from $0.076 to $0.05 per kilowatt.
During Ghani’s first visit to Tashkent in December 2017, the governments confirmed 20 bilateral agreements in numerous areas, such as protection of the Termez–Hairaton bridge (the “Friendship Bridge”), the implementation of the Mazar-i-Sharif–Herat railway line, the implementation of the construction of the Surkhan-Pul-e-Khumri power transmission line, legal assistance in civil, family and criminal cases, mutual assistance in customs matters, and cooperation in customs, agriculture, higher education, and air traffic control. The Afghan and Uzbek business communities followed suit by signing $500 million in contracts.
High-profile infrastructure projects get the most attention, but cooperation in areas like family law, trade, and education have the most potential to positively shape the environment for young citizens of the region. In this vein, the Tashkent State University of Law plans to become the center for legal studies in Central Asia and has enrolled students from Kazakhstan, Russia, Turkmenistan, South Korea, and Azerbaijan in its English-language curriculum, and plans to enroll students from Afghanistan. In addition to growing future generations of advocates who think regionally, the English-language instruction will orient the students away from Russia and to North America and Europe.
What do the region’s leaders want? They want to create a zone of trade and tolerance.
Central Asia and Afghanistan must become adept at protecting their sovereignty by balancing among outsiders who may be tempted to consider the region a consequence-free zone in which to contest rivalries. Concerted effort will be required to keep Russian and Pakistani intrusions at bay, while ensuring Saudi Arabia and Iran settle their hash elsewhere. The region’s united leaders must ensure China’s Belt and Road Initiative doesn’t make them subject to Beijing’s predatory creditor imperialism by being the only option for infrastructure development, so the international community’s support for regional efforts such as the CASA-100 hydropower project and the Uzbekistan-Turkmenistan-Iran-Oman transport and communication corridor must not falter.
The record of the international community in Afghanistan since 2001 can charitably be described as “uneven.” It’s time to let the neighbors — who have more at stake than the United States and its coalition partners — apply their cultural affinity and local knowledge to economic growth and opportunity that will offer the young a future other than narcotics trafficking or terrorism.
On May 16, Mirziyoyev will meet President Donald Trump in Washington, D.C. It will be Mirziyoyev’s opportunity to explain to the American leader the vision for the region, by the region, and to solicit appropriate support by the United States, the EU, and international institutions, such as the World Bank, for the enlarged Central Asia.
What should Washington do?
First, support the region’s sovereignty and self-determination. The United States can do this by encouraging multilateral projects to improve infrastructure, connectivity, good governance, and security coordination, so the region can act as a unit in the face of challenges such as Russia’s arming of the Taliban. Remarks by American officials that the U.S. and Russia have a shared interest in Afghanistan’s stability raise doubts locally about America’s understanding of Russia’s true intentions in the region and are what the diplomats call “unhelpful.”
Second, encourage a collective security mindset by supporting the work of groups like the Uzbek-Afghan joint security commission. Collective security won’t necessarily take the form of troop deployments as Uzbekistan refuses to host foreign military bases or deploy troops outside its territory, and Turkmenistan is neutral, but it will foster productive habits of mind.
Finally, good governance should take precedence over feel-good action, like publicly critiquing each country’s’ presidential elections. Encouraging good governance from the bottom up is smart long-term thinking and more likely to elicit cooperation of the incumbent administrations, who also want to be productive and responsive, if they feel the U.S. government isn’t actively working to undermine them.
James Durso (@james_durso) served as a U.S. Navy officer for 20 years specializing in logistics and security assistance. He is presently managing director of Corsair LLC, a consulting firm specializing in project management and marketing support in the Middle East and Central Asia.
|
<urn:uuid:3ed9244e-d0b8-4670-8b4c-7a971065c4a3>
|
CC-MAIN-2023-40
|
https://thediplomat.com/2018/05/central-asia-opens-the-door-to-afghanistan/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510697.51/warc/CC-MAIN-20230930145921-20230930175921-00064.warc.gz
|
en
| 0.936922 | 2,247 | 2.5625 | 3 |
The Roman Catholic hierarchy was not instituted by the apostles but rather it evolved over time. It was not even conceived by the apostles as some sort of acorn flowering into a mature oak tree (John Henry Newman's and Vatican II's idea of development). When Jesus told Peter that "on this rock I will build my church" (Mt. 16:18), even if we grant that the "rock" is Peter, Jesus made the same promise of the authority of binding and loosing to the other apostles (Mt. 18:18, Jn. 20:23). There is nothing special about Peter except that he was the one chosen to show how slow and foolish the apostles were. In fact, despite the grace given to Peter, even after Pentecost and after years as a church leader, Peter still fell into error and needed to be rebuked publicly by Paul in Galatia (Gal. 2:11-14). And while Peter gave the sermon at Pentecost (Acts 2:14-36), it was James who presided over the Jerusalem council (Acts 15:12-21)
The idea of a Petrine chair of apostolic succession is historically and biblically without any foundation whatsoever. So how did the papal office came into being? It came into being slowly over centuries, built upon many choices the church made that slowly but surely deviated from the biblical norm of ecclesiastical governance.
The early church was a built upon the Presbyterian model. Elders were appointed in every church (Acts 14:23, Titus 1:5), and deacons were likewise elected into office (Acts 6:1-3). The Apostles, as the special office, slowly passed from the scene, leaving every church with a plurality of elders and deacons, yet connected to each other.
As the apostolic church became the early catholic church, bishops began to emerge. The first churches were founded in major cities like Jerusalem, Antioch, Rome, Athens, Corinth, Ephesus among others. But as the Church grew, churches began to be planted in minor cities and towns. The pastor of the church in the major cities began to take on a mentoring and leadership role over the smaller churches, and they became the bishops. Thus, the episcopal model of governance began purely as a matter of good, helpful and efficient practice. Over time, power and influence began to converge into five major churches in the cities of Rome, Jerusalem, Alexandria, Constantinople (New Rome), and Antioch. The bishops in those five cities began to be seen as in some sense superior to the others, as they claimed apostolic succession for their respective sees.
While Presbyterianism has ceased to be practiced in the early catholic church, the equality of bishops persisted for a longer time. At Nicea in 325AD, the bishop of Rome was not seen in a special light. In fact, Liberius in his capacity of bishop of Rome (352-366AD) condemned Athanasius on behalf of the Arians, and was in turn anathemized by Hilary of Poitiers (Hilary of Poitiers, "Liberius, to the Eastern Presbyters and Fellow Bishops," Book II, VII; "Letter of Liberius in Exile to Urascius, Valens and Germinius," Book II, IX.). The bishop of Rome in the fourth century was taken to be just one of many bishops. He was the bishop of the capital city of Rome, but that did not come with any special privileges.
As the age of the united Roman Empire drew to a close, chaos began to spread in society. Over time, the Empire became divided over the issue of language. The Western half of the Empire began to speak exclusively Latin, while the Eastern half spoke Greek. The coming of the "barbarians," the Goths, Huns and other non-Romans who attacked and bled the Roman Empire, created space for the spread of the influence of the church. As society collapsed in the West, the Church stepped in to provide services for society, which is certainly a good thing yet it had unintended consequences. East and West moved further and further apart over language and culture and even jurisdiction. The vacuum of power in the West led the Church to began to assume secular power for herself, creating the two swords doctrine. In the East, the Emperor controlled the church, resulting in caeseropapalism, the emperor (Caesar) as "pope."
As the "apostolic sees" emerge, the various Patriarchs still embraced an equality among them. Unfortunately for the Church, only one of the apostolic sees was in the West - Rome. The bishop of Rome, sitting in a land without any equals, and sitting among the vacuum of sociopolitical power, began to be elevated in power and stature. Despite the first real pope Gregory the Great denying the use of the appellation "universal" to any one single bishop, in time his successors would embrace the term, amassing power and wealth into the see of Rome. By the time of the High Middle Ages, the papal office as the supreme leader of Christendom was established. Of course, the history of the church in the setting up of church councils was not forgotten, so some people still held that councils are superior to the pope, a movement known as Conciliarism. The fiasco known as the Babylonian Captivity of the Church, where a series of popes and anti-popes ruled and anathemized each other, became an utter disgrace to the Church and strengthen the forces of Conciliarism. During the Medieval period, the pope did not have absolute power over all of the Church, but must rule with the help of bishops and other members of the clergy. The Pope of course was very powerful, but he did not yet have the power of an absolute monarch although he certainly acted the part.
This situation persisted through the Reformation. As powerful as the Pope was, if more bishops had turned on him during the Reformation, he would be forced to try to find a middle ground with the Protestants. But the Pope had enough powerful people who sided with him, and many people did not like being mocked and ridiculed by Martin Luther, an "uncultured" German. During the Jansenist controversy after the Reformation, and especially later in the Ultramontanist controversy, the various warring factions began to see Rome as the court of final appeal.
Ultramontanism makes for an interesting prequel to the next evolution of the papal office. As the Enlightenment dawned, the advances in learning split the French clergy between the smart and powerful who attend elite schools and were trained in the latest learning, and the poorly-educated priests who were trained in the old ways and used as fodder to fill pulpits. The lowly priests resented their smarter elite cardinals and bishops, and appeal to Rome was used to take the elite French clergy (Gallicans) down. France was of course the most advanced Roman Catholic country at that time, while the other Roman Catholic countries were just beginning to face the Enlightenment. The tremendous changes caused by the Enlightenment frightened the pope, who had a taste of is power when Napoleon humiliated one of his predecessors, Pius VI, by attacking the Vatican and taking him prisoner in the late 18th century. The pope thus had lots of reasons to fear the Enlightenment, and Ultramontanism dovetailed nicely with his mood as the pope was turned to as the defender of the faith.
In a reaction to the Enlightenment, Pius IX in 1864 published his Syllabus of Errors as a rejection of "Modernism." This dovetails nicely into the next evolution of the papal office into one of infallibility, as a safeguard against "modernism" in any form. Vatican I occurred from the years 1869 to 1870, and Pius IX pushed through the novel doctrine of papal infallibility, ignoring objections from learned Roman Catholic historians like Ignaz von Dollinger. Standing as THE defender against modernism, the pope gained support from the Ultramontanes and passed the encyclical Pastor Aeternus defining papal infallibility as dogma. Ironically, at the end of the council before it could be formally closed, the Italian nationalists invaded the Vatican, stripping the pope of his secular powers just as he claimed supreme spiritual powers for himself.
Vatican II marks another shift in the papal institution towards something closer to the situation of the early medieval period. This did not happen however because the pope suddenly became humble and decided to give up his power. Rather, the Roman Catholic Church had grown too large for the type of central control to take place, and the Asian, American, African, and Latin America bishops revolted and forced changes that promote a more decentralized model. Apart from suspending the council altogether, the pope could do nothing but acquiesce and try to work out something that preserved some of his power while ceding parts of it to the other clergy. Post-Vatican II, the Pope has become more of a figure of authority than an actual monarch. The Pope's absolute authority promulgated in Pastor Aeternus still remains on paper, but he has become more of a paper tiger than a real tiger in these modern times.
Thus marks the evolution of the papal institution. It began with helpful practices, mixed in with some heterodox teaching, and the special environment of its time cultivated the papacy. Good and helpful practices over time may result in error, and nowhere is this more apparent than in the evolution of the papacy.
Lionel Wickham, trans., Hilary of Poitiers, Conflicts of Conscience and Law in the Fourth Century Church (Liverpool: Liverpool University, 1977), 77-9. Translated Texts for Historians, Volume 25, Against Valens and Uracius: The Extant Fragments, Together with His Letter to the Emperor Constantius. Translated from A. Feder (CSEL), ed., Collectanea Antiariana Parisina (including Lber ad Constantium Imperatorem and Liber II ad (or con.) Constantium) (1916). As cited in William Webster, Holy Scripture: The Ground and Pillar of our Faith (Volume II: An Historical Defense of the Reformation Principle of Sola Scriptura; Battle Ground, WA: Christian Resources, 2001), 2:267
|
<urn:uuid:e606d3e3-1030-42b5-81a3-f37ebb78edd7>
|
CC-MAIN-2017-30
|
http://puritanreformed.blogspot.com/2016/09/the-evolution-of-papal-institution.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424090.53/warc/CC-MAIN-20170722182647-20170722202647-00414.warc.gz
|
en
| 0.974106 | 2,128 | 3.140625 | 3 |
Erdogan’s bad-faith recognition of the Native American Genocide
As the saying goes, “even a broken clock is right twice a day.” While the broken clock of Turkish President Recep Tayyip Erdogan was right when he proposed recognizing the genocide of Native Americans at the hands of the United States, his words did not stem from a deep-seated desire for justice: in fact, quite the opposite, as they are further evidence of Erdogan’s authoritarian worldview.
Erdogan’s statements were intended as a form of retaliation against a resolution by the United States Congress to officially recognize the Armenian Genocide. Previously, the U.S. Government had not officially recognized the atrocities that led to an estimated 1.5 million Armenians killed by the Turkish state between 1914 and 1923. The Turkish government has long denied the reality of the atrocities and is vociferously bellicose to any government that seeks to commemorate the event. It is from this mindset of defensive reactionaryism that Erdogan’s threat — where he exploits the inter-generational pain of an oppressed people as if it were a bargaining chip — was born.
While the total death toll of the indigenous people who populated the territories that are now the United States is incalculable, there is no denying that the colonization and westward expansion of the American continent by the United States government — particularly its cavalry divisions — was accompanied by atrocities duly worthy of the moniker genocide.
From the Indian Removal Act of 1830, which legalized the forcible relocation of Native populations from their lands, to the Trail of Tears, where U.S. forces marched various indigenous tribes hundreds of miles, depriving them of basic necessities (an estimated 6,000 died), to any of the other countless mass-killings committed at the hands of U.S. forces, the United States ethnically cleansed an entire continent in order to make space for its euro-descendant inhabitants. While Erdogan’s description of history comes from a place of bad faith, it is undoubtedly accurate.
Yet neither historical accuracy nor the plight of victimized Native Americans is the inspiration for Erdogan’s remarks. His attempt to use the legacy of genocide victims, and tangentially, the long struggle for justice that their their descendants have endured, as a political cudgel is a further illustration of his contemptible worldview: Erdogan perceives humanity in a system of hierarchies, where “the higher classes” (in his opinion, Americans and Turks) are under no obligation to respect the most basic human rights and decency of the “lower classes” (Natives and Armenians).
Through this lens, Erdogan views an objectively just act — his government recognizing the systemic slaughter of a group of people — as a negative, because it would require an admission that those he perceives as members of a “lower-class” are due the same rights and decency as he and the rest of the “higher-class” members. His concern for the marginalized groups exists only when it behooves him, such as wielding their victimization as an international cudgel.
It is not a new revelation that Erdogan interprets society as a collection of groups where the “higher” ones are worthy of rights and dignity, while the “lower” ones are not. He has repeatedly espoused that women are not equal to men, for example. His recognition of their struggle only when it benefits him shows that Erdogan considers Native Americans members of the subjacent class, and therefore unworthy of even the most basic human decency, such as the recognition of their suffering at the hands of state-sponsored genocide. The despicableness of such a belief cannot be overstated.
Erdogan’s comments on the experience of past and present Native Americans should not only not earn him praise, but they should also be sternly reciprocated with condemnation for attempting to exploit the inter-generational struggle of an oppressed people as if it were a bargaining chip: he has no sympathy for justice, only an interest in preserving the heinous worldview that he is a member of a group with superiority over others.
BY: Joe Mayall
Joe Mayall is an American writer from Boston, MA. While his political writing concentrates on foreign policy, he covers combat sports and enternainment as well.
Follow him on Twitter at @joemayall.
Written exclusively for News About Turkey (NAT)
|
<urn:uuid:71b849a9-b85c-44d6-8ea8-1bbd866c2d76>
|
CC-MAIN-2020-10
|
https://newsaboutturkey.com/2019/12/22/news-about-turkey-nat-erdogans-bad-faith-recognition-of-the-native-american-genocide/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00408.warc.gz
|
en
| 0.968437 | 910 | 2.90625 | 3 |
The Cambrian Period was the time when trilobites reached their peak. The period spanned 35,000,000 years and produced thousands of species. The word Cambrian is of Latin origin and means Wales. Scientists first studied exposed cambrian rock around 1835. The largest Cambrian exposures in North America extend from Southern California to British Columbia, Canada. Trilobites died out in great numbers at the end of this period.
|
<urn:uuid:eb40eac6-30a4-43c1-b979-8bbc2c6b72c7>
|
CC-MAIN-2014-15
|
http://phacops.homestead.com/cambrian.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00311-ip-10-147-4-33.ec2.internal.warc.gz
|
en
| 0.944132 | 90 | 4.28125 | 4 |
Find the equation for the line given the slope and point using point slope form
I show how to solve math problems online during live instruction in class. This is my way of providing free tutoring for the students in my class and for students anywhere in the world. Every video is a short clip that shows exactly how to solve math problems step by step. The problems are done in real time and in front of a regular classroom. These videos are intended to help you learn how to solve math problems, review how to solve a math problems, study for a test, or finish your homework. I post all of my videos on YouTube, but if you are looking for other ways to interact with me and my videos you can follow me on the following pages through My Blog, Twitter, or Facebook.
Questions answered by this video:
How do I find the equation of a line when I know the slope of the line and a point on the line?
What is slope intercept form of a line?
What is the equation of a line with a slope of 3 that passes through (2,11)?
Currently 4.0/5 Stars.
In this lesson, the teacher shows how to find the equation of a line when you know the slope of the line and a point on the line using by using the slope intercept form of the line.
|
<urn:uuid:ea1dfd16-fdac-4c03-947d-4dfb9ccb6834>
|
CC-MAIN-2023-40
|
https://www.mathvids.com/browse/high-school/algebra/linear-equations/writing-equations-for-lines/2256-finding-equation-of-a-line-using-point-3
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510179.22/warc/CC-MAIN-20230926075508-20230926105508-00307.warc.gz
|
en
| 0.942087 | 275 | 3.265625 | 3 |
Market Timing Theory
Market timing theory is a concept in finance that examines the decision-making process behind buying and selling securities based on the current state of the market. The theory suggests that investors can generate superior returns by buying low and selling high based on their predictions of market movements.
The basic premise of market timing theory is that markets are not perfectly efficient, and that investors can use their knowledge and analysis to outperform the market. This theory suggests that investors can make profitable trades by identifying trends, patterns, and anomalies in the market.
One strategy of market timing is to use technical analysis, which involves the use of charts and statistical indicators to identify market trends and patterns. Technical analysts believe that past market behavior can predict future market movements, and use this information to make buying and selling decisions.
Another strategy of market timing is to use fundamental analysis, which involves the analysis of a company’s financial statements and economic indicators to determine its intrinsic value. Fundamental analysts believe that the market can be inefficient in valuing companies, and that they can identify undervalued or overvalued securities by examining a company’s financials.
Despite the potential benefits of market timing, it is also subject to significant risks and limitations. Market timing requires a high level of expertise and knowledge, and even experienced investors can make incorrect predictions about market movements. Additionally, market timing can lead to increased trading costs, taxes, and lost opportunities if the investor misses out on profitable market movements.
Furthermore, research has shown that market timing is difficult to implement consistently and can be influenced by cognitive biases and emotions such as fear and greed. For these reasons, many financial experts advocate for a buy-and-hold strategy, which involves investing in a diversified portfolio of securities for the long-term, rather than trying to time the market.
In conclusion, market timing theory is a concept in finance that suggests investors can generate superior returns by buying and selling securities based on market movements. However, market timing requires a high level of expertise and knowledge, and can be subject to significant risks and limitations. While market timing may be a viable strategy for some investors, many financial experts advocate for a buy-and-hold strategy as a more consistent and less risky approach to investing.
|
<urn:uuid:7cae8b67-4dcb-4f32-9fb0-00b9dcbaa82f>
|
CC-MAIN-2023-23
|
https://www.financial.co.ke/market-timing-theory/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645417.33/warc/CC-MAIN-20230530063958-20230530093958-00057.warc.gz
|
en
| 0.952873 | 454 | 2.734375 | 3 |
DIGITAL STORYTELLING FOR GLOBAL CITIZENSHIP EDUCATION
The art of storytelling supports local and global communities. The program introduces to creative thinking design approaches, animated and digital narrative techniques along with shooting strategies and video editing elements. From the cinematic context to the means of social media communication, the course proposes an ecosystem of contents, scenarios and transversal languages in favour of the development of the 21st century skills.
The workshop is open to School teachers, VET trainers and Adults’ educators.
The aim of the training course is to provide participants with a specific methodology to implement Global Citizenship Education using Digital Storytelling technique both in schools and in non-formal educational contexts.
The course aims to respond to crucial training needs that are emerging among young people, such as intercultural and crosscultural skills, critical thinking, collaborative and project-based learning, creativity and digital literacy.
The main learning outcomes are:
- Analyse GCE opportunities for formal and non-formal education;
- Use digital storytelling in the classroom/group;
- Plan a digital storytelling workshop;
- Experience one concrete tool to facilitate the story-circle;
- Being engaged to use digital storytelling in the classroom.
Activities are structured per one or two weeks of intensive workshops, participants will achieve a consistent set of competences at the end of each week.
DAY 1 SIX MEMOS FOR THE NEXT
Let’s warm up! Ideational process and creative thinking. Digital storytelling and writing practices as a systemic process.
DAY 2 EXERCISES IN STYLE
quality of a project. The ‘Circle of stories’ methodology. Synopsis, script and storyboard.
DAY 3 CAMERA LUCIDA
DAY 4 THE FABLE GAME
DAY 5 IMAGES OF REALITY
potential in terms of digital storytelling production and societal action.
DAY 6 INTO THE WILD
DAY 7 GOODBYES
Here below you can download all the additional and support resources developed with the BRIGHTS project.
- Training Needs Analysis Report
- Informative Kit on GCE
- Project leaflet
- Course Curriculum
- Policy Recommendations
For more information, the videos realized with BRIGHTS and the translated version of some of these resources visit www.brights-project.eu
|
<urn:uuid:1375a083-7907-4184-809c-3490479006ae>
|
CC-MAIN-2023-40
|
https://www.erasmuspluska1.eu/courses_posts/digital-storytelling-for-global-citizenship-education/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.30/warc/CC-MAIN-20230924191454-20230924221454-00498.warc.gz
|
en
| 0.861858 | 520 | 3.546875 | 4 |
The Milgram Experiments
Of course, jurors are not supposed to assume that someone else will somehow sort out the mess they're presented with when a case is appealed. In fact, the law is supposed to make jurors feel a huge weight of moral responsibility for these life or death decisions. No doubt, many jurors do. When they don't, however, the result can be catastrophic as the Cruz case suggests. Why they don't assume responsibility may be explained, in part, by one of the classic experiments in social psychology.
Sound from an old audio recording: "Let me out of here. You have no right to keep me here. Let me out. Let me out. Let me out."
In 1961, Stanley Milgram, an assistant professor of psychology at Yale University,
conducted what have become known as the "Obedience Experiments." Milgram persuaded ordinary people to give painful electric shocks to people they had only just met.
The subjects, known as "teachers" were told to increase the strength of an electric shock given a so-called "learner" each time he made a mistake in memorizing a series of word pairs. The subjects believed they were part of a study to ascertain whether negative reinforcement could motivate learning. In reality, Milgram wanted to test whether ordinary people would obey an authority figure even to the point of being sadistic and cruel. To make the experiment even more provocative, the "teachers" were led to believe that the "learner" they were shocking had a heart condition.
Sounds from the Milgram experiment: A buzzer.
Learner: "Ohh! I can't stand the pain, let me out of here."
Teacher: "He can't stand it. I'm not going to kill that man in there. You hear him hollering.
What if something happens to him?"
In reality, the "learner" was not being shocked, and was deliberately giving wrong answers. But the subject or "teacher" didn't know this. Eventually the "learner" with his heart condition would stop screaming or banging on the wall and fall silent, leading the "teacher" to believe he'd lost consciousness.
Sounds from the Milgram experiment:
Teacher: "Something's happened to that man in there. You better check in on him, sir. He won't answer me or nothing."
Experimenter (Authority figure): "Please continue. Go on please."
Teacher: "You accept all the responsibility?"
Experimenter: "The responsibility is mine, correct. Please go on."
Although the "teacher" was clearly uncomfortable giving the electric shocks, goaded by the experimenter and assured that he was not personally responsible, the teacher eventually gave the unconscious "learner" more than 400 volts. In fact, half of the subjects in Milgram's experiment administered the maximum punishment. Nearly all the subjects gave the maximum voltageif they were part of a group, and led to believe that they were simply going along with a majority or unanimous decision.
Next: Skirting Responsibility
|
<urn:uuid:916db157-0648-421f-97c8-0bf61f232ff1>
|
CC-MAIN-2014-35
|
http://americanradioworks.publicradio.org/features/deadlydecisions/responsibility4.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831565.57/warc/CC-MAIN-20140820021351-00371-ip-10-180-136-8.ec2.internal.warc.gz
|
en
| 0.979052 | 637 | 2.765625 | 3 |
Click any word in a definition or example to find the entry for that word
90% of the time, speakers of English use just 7,500 words in speech and writing. These words appear in red, and are graded with stars. One-star words are frequent, two-star words are more frequent, and three-star words are the most frequent.
|present participle||dropping behind|
|past tense||dropped behind|
|past participle||dropped behind|
Mary dropped behind to walk with Sam.
Jensen has dropped behind into last place.
This is the British English definition of drop behind. View American English definition of drop behind.
money, food, or clothes given to poor people
to post a tweet, usually a negative one, that mentions a person without using the @ sign, so that they will not see the message on their Twitter feed …add a word
A must for anyone with an interest in the changing face of language. The Macmillan Dictionary blog explores English as it is spoken around the world today.global English and language change from our blog
|
<urn:uuid:297f8db9-65a4-489d-b81d-61470c93d123>
|
CC-MAIN-2014-23
|
http://www.macmillandictionary.com/dictionary/british/drop-behind_1
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274987.43/warc/CC-MAIN-20140728011754-00296-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.911952 | 227 | 3.296875 | 3 |
ESA observatory breaks world quantum teleportation record
An international research team using ESA’s Optical Ground Station in the Canary Islands has set a new distance world record in ‘quantum teleportation’ by reproducing the characteristics of a light particle across 143 km of open air.
Funded by ESA, researchers from Austria, Canada, Germany and Norway transferred the physical properties of one particle of light – a photon – onto its ‘entangled’ partner via quantum teleportation, thereby bridging a distance of 143 km between the Jacobus Kapteyn Telescope on La Palma and ESA’s Optical Ground Station on adjacent Tenerife.
Their results have been published in this week’s Nature Magazine.
Once entangled, the measurement of a certain property – such as polarisation or spin – will yield the same result for both particles, no matter how far apart the particles are located, and without any further signal being physically passed between them.
Quantum teleportation is not copying in the strictest sense however, since the act of transference destroys the original particle – its characteristics passing to its entangled counterpart.
A confounded Albert Einstein termed quantum entanglement “spooky action at a distance,” but it is a documented physical phenomenon, central to a coming generation of ultra-powerful quantum computers teleporting ‘quantum bits’ – qubits for short – of information, as well as for communication systems immune to interception.
“This achievement breaks new ground for long-distance quantum communications,” explained Eric Wille, overseeing the project for ESA.
“The first quantum teleportation took place in laboratory conditions. The challenge here was to maintain the entanglement between two photons separated by 143 km, despite being perturbed by atmospheric conditions, such that it could still be used for quantum teleportation.”
This involved such a low signal-to-noise ratio that the experiment had to be very carefully designed.
Ultra-low noise photon detectors were installed, and a separate process of quantum entanglement was used to keep the two station clocks synchronised to within three billionths of a second of each other.
This helped to assure that the correct photons were detected – the best that GPS signals could deliver was 10 billionths of a second.
Even then, the team were kept waiting almost a year, after a first attempt in 2011 failed due to exceptionally bad weather.
The two telescope stations located on the volcanoes 2400 m above sea level have to face harsh conditions, including rain, fog and strong winds or even snow and sand storms.
The experiment finally took place in May, establishing a new record for the teleportation distance.
“The next step would really be to achieve quantum teleportation to a satellite in orbit to demonstrate quantum communication on a global scale,” commented Dr Rupert Ursin of the Austrian Academy of Sciences.
The inter-island measurement campaign was commissioned by ESA within its General Studies Programme in order to demonstrate the feasibility of long-distance quantum teleportation for future space missions.
It is also an excellent example of bringing together the expertise of scientists from several ESA member states to perform extraordinary experiments from ESA’s Optical Ground Station.
|
<urn:uuid:dea93966-3342-4cd6-8c58-581125074cae>
|
CC-MAIN-2017-43
|
http://www.esa.int/Our_Activities/Space_Engineering_Technology/ESA_observatory_breaks_world_quantum_teleportation_record
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825154.68/warc/CC-MAIN-20171022075310-20171022095310-00275.warc.gz
|
en
| 0.935881 | 654 | 3.265625 | 3 |
The Full form of EEG is Electroencephalogram. EEG is a technique that detects electrical activity in your brain using small, electrodes (metal discs) attached to your scalp. Wires attach these electrodes to a machine, which records the electrical impulses. Your brain cells communicate via electrical impulses and are active all the time, even when you’re asleep. This activity shows up as wavy lines on an EEG recording. An EEG plays an important role in diagnosing brain disorders and is also one of the main diagnostic tests for epilepsy. The results in EEG are either printed out or displayed on a computer screen. Various forms of epilepsy are denoted by different patterns of electrical impulses. An EEG can determine changes in brain activity that might be useful in diagnosing especially epilepsy, brain disorders or another seizure disorder. An EEG might also be helpful for diagnosing or treating the disorders such as: Brain damage from head injury, brain tumor, Brain dysfunction that can have a variety of causes (encephalopathy), Inflammation of the brain (encephalitis), sleep disorders, stroke etc. EEGs are painless and safe. During the test, sometimes seizures are intentionally triggered in people with epilepsy, but appropriate medical care is provided if needed.
|
<urn:uuid:882da985-df72-473f-bad9-014ec029704c>
|
CC-MAIN-2023-23
|
https://www.successcds.net/full-form/eeg-full-form.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649741.26/warc/CC-MAIN-20230604093242-20230604123242-00116.warc.gz
|
en
| 0.898586 | 250 | 4 | 4 |
- Development & Aid
- Economy & Trade
- Human Rights
- Global Governance
- Civil Society
Monday, October 2, 2023
BULAWAYO, Zimbabwe, Apr 13 2021 (IPS) - With the two extremes of global hunger and obesity on the increase, a new report suggests a radical reset for food and nutrition to ensure the long-term sustainability of livelihoods and the environment.
According to a new Barilla Center for Food & Nutrition Foundation (BCFN) report, 690 million people globally lack sufficient food. COVID-19 has worsened these conditions, and it’s projected that between 83 and 132 million more people will join the ranks of the undernourished because of interrupted livelihoods caused by the pandemic.
A BCFN report, “A one health approach to food – The Double Pyramid connecting food culture, health and climate”, raises concerns that in some African countries, the consumption of cheap sources of high-quality protein, vitamins, and minerals – such as eggs – remains low. The report will be launched on Wednesday, April 14, 2021.
The Double Pyramid combines a health and climate pyramid that “serves as a guideline for daily food choices in enhancing people’s awareness and enriching their knowledge about the impacts of food choices to encourage dietary patterns that are healthy for humans and more sustainable for the planet.”
According to aid agencies, the model will resonate with the needs of perennially food stressed countries in the Global South, where climate change and food security have affected the livelihoods of millions who have only one meal a day.
“The African Double Pyramid attempts to illustrate that it is possible to respect local traditions and preferences while recommending a frequency with which foods should be consumed to promote improved health and a low impact on the environment,” says the report.
The experimental African Double Pyramid covered five countries – Nigeria, the Democratic Republic of Congo, South Africa, Tanzania, and Ethiopia. Researchers noted that while the African continent was diverse, “generally, some common traits can also be found, such as the single-course meal based on a starchy ingredient.”
But as Marta Antonelli, the Barilla Foundation’s head of research, told IPS, food politics have become a matter of different strokes for different folks.
“Different areas of the world have different priorities to look at. The principles of a sustainable and healthy diet can be applied in all contexts and inform a new approach towards food,” she said.
“Today, food systems fail to provide adequate and equitable food for all and pose an unsustainable burden on the environment. Health and the environment need to be considered together when addressing food systems, which are an extremely powerful leverage to improve both,” said Antonelli.
The report comes when the Food and Agriculture Organisation (FAO) says more than 3 billion people worldwide cannot afford a healthy diet. This is a paradox the Barilla Foundation has tackled in its past reports where it showed malnutrition in all of its forms – undernutrition, micronutrient deficiencies and overweight/obesity was increasing. Its Global Nutrition Report showed that 88% of countries face a serious burden of either two or three forms of malnutrition, namely undernutrition, micronutrient deficiency or overweight or obesity. Recent findings include that child and adult obesity have increased in almost all countries, burdening already struggling global health care systems.
Still, some African countries present immediate challenges to meeting the Double Pyramid model.
In a news release dated March 29, the World Food Programme’s Zimbabwe country director Francesca Erdelmann said more than 2.4 million people in urban areas struggled to meet their basic food needs.
“Reduced access to nutritious food has resulted in negative impacts for many. Families will find it difficult to put food on the table. The fortunate ones will skip meals while those without will have to go to bed with an empty stomach,” Erdelmann said, adding that “for the most vulnerable people, hunger will have a lasting effect on their lives.”
The Barilla report notes that healthy diets’ affordability is “compromised especially in low- and middle-income countries” while also calling for the reduction in the cost of nutritious foods. It also calls for a reorientation of agriculture priorities towards more nutrition-sensitive food and agricultural production.
The call could prove difficult for African countries that include Zimbabwe, where agriculture remains underfunded. The government has long struggled to convince smaller holders to plant more nutritious and drought-resistant crop varieties.
“Our children do not like food prepared with small grains. They are used to maize meal. That is why we continue growing maize,” Fanyana Jamela, a smallholder in Bulawayo, Zimbabwe’s second-largest city, told IPS.
But Katarzyna Dembska, one of Barilla’s lead researchers, says governments must rethink food policies if healthy diets are to be successfully realised.
“Reducing the cost of nutrition foods and increasing affordability start with a reorientation of agriculture priorities towards more nutrition sensitive food and agricultural production,” Dembska told IPS.
“The Double Health and Climate Pyramid shows that through a varied and balanced diet, we can promote our health, longevity and wellbeing, while reducing our carbon foot print,” she added.
Nathan Hayes, a senior Africa analyst at the Economist Intelligence Unit, says more needs to be done if countries, such as Zimbabwe, are to meet the recommendations of international agencies regarding food and nutrition.
“Over the long-term, Zimbabwe must increase the volumes of domestic food production and improve the distribution of food to improve food availability and to allow Zimbabweans to meet their nutritional needs,” Hayes told IPS.
“Even with a good harvest this year, food insecurity will remain significant in Zimbabwe, and the country is a long way from achieving agricultural self-sufficiency,” he said.
Among other policy recommendations to promote the Double Pyramid’s success, the Barilla report says there is a need to “promote training and education programs to support smallholder farmers to grow sustainably and access markets for nutritious food,” which was found lacking in many countries surveyed for the report.
“The emphasis on input-intensive cropping systems has arguably lowered resilience of food systems in the global South, hence the utilisation of traditional indigenous food resources should be emphasised for achieving food security and nutrition,” Dembska told IPS.
Webinar Registration here.
This story includes downloadable print-quality images -- Copyright IPS, to be used exclusively with this story.
IPS is an international communication institution with a global news agency at its core,
raising the voices of the South
and civil society on issues of development, globalisation, human rights and the environment
Copyright © 2023 IPS-Inter Press Service. All rights reserved. - Terms & Conditions
You have the Power to Make a Difference
Would you consider a $20.00 contribution today that will help to keep the IPS news wire active? Your contribution will make a huge difference.
|
<urn:uuid:de49dec1-a2ca-4449-8fa9-ca89a31b2715>
|
CC-MAIN-2023-40
|
https://www.ipsnews.net/2021/04/new-report-calls-improved-eating-habits-world-extremes/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00300.warc.gz
|
en
| 0.931087 | 1,504 | 2.890625 | 3 |
How many people died in the Cuban Revolution?
Consequences of the Cuban Revolution:
Cuba is located where the Caribbean Sea, Gulf of Mexico, and the Atlantic Ocean meet. Since 1965, it has been ruled by the Communist Party of Cuba. It has a long history of Spanish colonialism, African slavery, and close ties with the Soviet Union.
Answer and Explanation:
The final count of casualties numbered to 2 000 after ending the Cuban Revolution after 2 years. Reportedly 600 were executed from the side of...
See full answer below.
Become a member and unlock all Study Answers
Try it risk-free for 30 daysTry it risk-free
Ask a question
Our experts can answer your tough homework and study questions.Ask a question Ask a question
Learn more about this topic:
from AP World History: Exam PrepChapter 26 / Lesson 7
|
<urn:uuid:134adffb-a874-4168-a102-d97a4fde1c9a>
|
CC-MAIN-2020-29
|
https://study.com/academy/answer/how-many-people-died-in-the-cuban-revolution.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886865.30/warc/CC-MAIN-20200705023910-20200705053910-00477.warc.gz
|
en
| 0.946273 | 180 | 2.828125 | 3 |
Free software versus open source software
"Free software" and "open source software" are two terms for the same thing: software released under licenses that guarantee a certain specific set of freedoms.
The term "free software" is older, and is reflected in the name of the Free Software Foundation (FSF), an organization founded in 1985 to protect and promote free software. The term "open source" was coined by Christine Peterson and adopted in 1998 by the founders of the Open Source Initiative (OSI).
Copyleft versus permissive
"Copyleft" refers to licenses that allow derivative works but require them to use the same license as the original work. For example, if you write some software and release it under the GNU General Public License (a widely-used copyleft license), and then someone else modifies that software and distributes their modified version, the modified version must be licensed under the GNU GPL too — including any new code written specifically to go into the modified version. Both the original and the new work are Open Source; the copyleft license simply ensures that property is perpetuated to all downstream derivatives.
A "permissive" license is simply a non-copyleft open source license — one that guarantees the freedoms to use, modify, and redistribute, but that permits proprietary derivative works.
There are two kinds of contributor agreements. In a Contributor License Agreement (CLA), the original contributor retains copyright ownership of their contributions, but grants the project a broad set of rights such that the project can incorporate and distribute the contributions as it needs to. In a Copyright Assignment Agreement (CAA), the contributor actually transfers copyright ownership of the contributions to the project, who can then license it however they want since they own it (but a CAA typically grants very broad non-exclusive rights back to the contributor so that they too can use, distribute, sublicense etc their contribution freely).
The following licenses are approved by Open Source Initiative and are popular, widely used, or have strong communities:
- Apache License 2.0
- BSD 3-Clause "New" or "Revised" license
- BSD 2-Clause "Simplified" or "FreeBSD" license
- GNU General Public License (GPL)
- GNU Library or "Lesser" General Public License (LGPL)
- MIT license
- Mozilla Public License 2.0
- Common Development and Distribution License
- Eclipse Public License
|
<urn:uuid:eb3e9c1f-bee6-4e9b-8407-2214e1bfd30d>
|
CC-MAIN-2017-34
|
https://notes.almccann.com/free-and-open-source-software/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105455.37/warc/CC-MAIN-20170819143637-20170819163637-00626.warc.gz
|
en
| 0.906302 | 504 | 3.390625 | 3 |
During the latter half of the 19th century, Sweden and Finland were the leading exporters of tar in the world. The people living in Björsbo, as well as farmers around the forest rich areas of Sweden made use of the stumps of pine to burn and extract tar. The little path that leads you to the east woodland lake at the camping, goes right next to the tar pit where our farm people spent consecutive days in early springtime to produce the valuable tar. Most of the production was probably sold to earn useful extra income. The rest was used for impregnation and medical disposal. Besides that, a cross of tar was painted at the front door of the barn to protect the animals at the farm from illness, evil and other horrors.
The forest around Björsbo also tell us that the farmers in this area produced a lot of charcoal during wintertime. On the eastern side of the same small woodland lake you find what the remains of the fireplace in a simple hut where the charcoal burners slept and ate while spending weeks away to guard the charcoal kiln from incineration. The charcoal was delivered by horse and wagon to Härryds factories where the charcoal was used to produce iron from bog ore collected from the bottom of the nearby lake Hären. Bring the picnic basket and view the old ruins of the blast furnace and factories in Härryd a few kilometres west of the camping – a perfect distance for a nice bike ride.
|
<urn:uuid:72d62963-3924-4b7e-afb0-fb38e2293422>
|
CC-MAIN-2017-30
|
http://bjorsbo.nu/faq/tar-and-charcoal-from-the-woods/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549448146.46/warc/CC-MAIN-20170728083322-20170728103322-00157.warc.gz
|
en
| 0.965724 | 299 | 3.15625 | 3 |
Strategic Bombers Overview
While the progressive development of SAC's surface-launched missile force was both important and
highly significant, it was by no means an isolated phenomenon. In most instances, it was paralleled
by qualitative improvements to the command's manned bomber force. When the Strategic Air
Command was first established on 21 March 1946 as one of the three major combat commands of
the United States Army Air Forces, its initial bomber force consisted of 148 veteran World War II
B-29 Superfortresses. Two years later, in 1948, this force was augmented with the first models of
two new bombers, the B-36 and the B-50. In 1951, the all jet-engine B-47 medium bomber made
its first appearance in SAC. This revolutionary aircraft was joined in 1955 by the first B-52
Stratofortress, destined to become the mainstay of the SAC manned bomber force into the 1990s.
Further additions to the SAC bomber fleet included the supersonic B-58 Hustler, introduced on 1
August 1960, and the FB-111A medium bomber accepted by SAC on 8 October 1969. The first operational B-1B joined the
command on 7 July 1985.
The growing maturity and sophistication of the SAC manned bomber force was matched by both
qualitative and quantitative improvements to the air defense systems of the leading communist
powers, especially the Soviet Union. This latter development over a thirty-year period led to SAC's
employment of a number of air-launched guided missiles to ensure that SAC's manned bombers
successfully penetrated enemy territory.
The bomber's unique strengths of payload, range, and responsiveness
coupled with precision attack are a cornerstone of America's airpower and force
projection. Prior to hostilities, bombers are a strong deterrent. When generated
for either conventional or nuclear alert, bombers provide a strong and highly
visible deterrent force just over the horizon from the enemy. During the initial
phases of a conflict, bombers launching from the United States with adequate
tanker support, can strike time-critical targets and stall the enemy attack
anywhere in the world. Delivering a large quantity and vast array of munitions,
our bomber force can attack an enemy's Weapons of Mass Destruction (WMD),
Command, Control and Communications (C 3 ) nodes, and advancing forces to
greatly reduce their effectiveness. The stand-off and stealth penetration
capability inherent in our bomber force allows them to operate with minimum
numbers of supporting operational assets during this initial phase. Utilizing
sustained forward-deployed operations, bombers provide increased firepower
while reducing the size of force packages and placing fewer aircrews at risk.
Should circumstances require, bombers can also provide rapid global response
without the need to deploy into theater before striking.
Today’s force structure is largely a legacy of the Cold War when
intercontinental bombers were part of the Nuclear Triad to deter, and if
necessary, defeat the former Soviet Union. While part of the bomber force
continues to support the nuclear deterrent mission, the environment permits,
in fact demands, increased emphasis on conventional missions. The bomber
force structure required to prosecute two nearly simultaneous major theater
wars and contribute to a nuclear posture that deters aggression has been
defined by studies and national guidance. The 1993 Bottom-Up Review (BUR)
confirmed today’s bomber force structure requirement. Today’s sizing of forces
was built on the philosophy that the US should maintain sufficient military
power to be able to win two nearly simultaneous, major theater wars. The BUR
determined the bomber requirement by 1999 to include up to 184 total
bombers (combat coded), with the ability to deliver “smart” conventional
munitions against attacking enemy forces and fixed targets.
To support the imperative of engagement in the National Security
Strategy, the DoD laid out a National Military Strategy of Shape, Respond,
Prepare Now: A Military Strategy for a New Era. Details of the strategy and
resultant defense program in the May 1997 Report of the Quadrennial Defense
Review (QDR), prescribe a total fleet of 187 bombers (95 B-1, 21 B-2, and 71
B-52). Since the QDR, two B-1s have been lost in peacetime accidents.
However, the Report of the Panel to Review Long-Range Air Power (LRAP)
concluded the existing bomber fleet cannot be sustained through the expected
life of the air frames and that additional aircraft will eventually be required. To
address this issue, the Air Force will add five additional B-52 attrition reserve
aircraft, bringing the B-52 total from 71 to 76 for a total bomber force of 190.
Early in the next century, the bomber fleet will see the fruition of the
plans described by the Air Force’s ‘92 Roadmap, including the transition of the
bomber force from a cold war instrument to a flexible response force. Consistent with the LRAP findings and the Defense Science Board’s 1998 Summer Study Task
Force Report, the Air Force is making significant near through long term investments in integrating precision munitions, enhancing connectivity and mission planning
responsiveness, and increasing bomber sortie rates. To this end, the Air Force
has already invested $3.6 billion in new combat capabilities and reliability and
maintainability upgrades, along with budgeting for $22.6 million in
bomber Forward Operation Location (FOL) requirements.
conventional modernization effort outlined in the 1992 Bomber Roadmap has
produced three large payload, long-range “bomber” platforms now able to
conventionally destroy multiple targets per sortie - creating desired effects at
the strategic, operational, and tactical levels simultaneously. By 2000 all three bombers will have large payload Joint Direct Attack Munition (JDAM) near-precision delivery capability. The bomber arsenal will also have impressive B-52 stand-off weaponry and Joint Stand-Off Weapon (JSOW) capability on-board the B-2. These enhancements have merged precision, stealth, and stand-off with the payload, range and responsiveness of the bomber arsenal. Bombers are now a critical element of a joint conventional aerospace team with unique capabilities to fulfill Commander-in-Chief (CINC) requirements across the full spectrum of conflict.
Sources and Resources
Maintained by Webmaster
Updated Thursday, April 22, 1999 8:11:26 AM
|
<urn:uuid:c9b447aa-0d32-450d-a712-b7db0530774f>
|
CC-MAIN-2017-43
|
https://fas.org/nuke/guide/usa/bomber/overview.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826210.56/warc/CC-MAIN-20171023164236-20171023184236-00620.warc.gz
|
en
| 0.916427 | 1,364 | 2.890625 | 3 |
Fun Fact Friday: The History of Snorkeling
I’ve had the pleasure on going on some pretty great snorkeling trips. There is something so overwhelmingly cool about being in the water and feeling like you’re part of a different environment, which you are when you’re swimming around the coral among schools of fish. Indeed, being in the water and wanting to be like fish has been around for as long as those who looked in the air and wanted to fly with the birds. That said, let’s take a look at snorkeling’s oldest predecessors in today’s edition of fun fact Friday: the history of snorkeling!
- 3000 B.C.: Skin divers off the island of Crete in the Mediterranean Sea use hollow reeds to allow them to breathe while they collect sponges underwater. This was about as basic as it got. Side note: the idea of opening your eyes sea water does not sound appealing.
- 900 B.C.: In Assyria, divers begin using animal skins filled with air as an oxygen supply during their dives.
- 500 B.C.: Greeks used hollow reeds to remain unobserved in the water so that the Persians could not spot them. It allowed one soldier to swim amongst the Persian fleet and cut them free of their moorings to avoid an attack using only a reed to breathe and ended up swimming nine miles to rejoin the Greek army.
- 300 B.C.: Alexander the Great encourages development of the diving bell which trapped a pocket of air for divers to replenish their air supply while underwater.
- Aristotle writes about diving bells, saying “…they enable the divers to respire equally well by letting down a cauldron, for this does not fill with water, but retains the air, for it is forced straight down into the water.” In his other writings, “Parts of the Animals” in particular, he mentions divers using a tube connected to the surface or the precursor to the snorkel. He remarks how it works like the trunk of an elephant.
- 1300 : Persian divers were making eye goggles from tortoise shells. They would slice the shell thin enough to make it translucent and then polish it for optimum visibility.
- 1400: Leonardo da Vinci made the first mention of air tanks in Italy. He also had proposed inventions that included diving tubes leading to floats containing air on the surface of the water all the way to a completely self-contained diving suit. Da Vinci was worried someone would use his diving inventions to sink ships or even commit murders.
- He also came up with a design for webbed swimming gloves.
- 1531: Guglielmo de Lorena completes a shipwreck dive using a diving bell designed by Da Vinci.
- 1538: Two Greeks performed a demonstration in the Tagus river using a large kettle that involved emerging out of the water with dry clothes and a lit candle.
- The diving bell technology was greatly limited and people began developing things that allow swimmers to breathe from the surface. For the diving world, people realized that diving systems were limited due to water pressure.
- 1717: Benjamin Franklin came up with an idea to help swimmers move faster through the water: wooden paddles that attached to the hands and feet.
- 1771: The invention of the air pump by John Smeaten, British engineer, expands the world of diving. Inventors realized that the air pump along with pressurized tubes allowed divers to go to much greater depths than thought possible.
- 1912: Modern fins are invented by Frenchman Louis de Corlieu. He demonstrated his invention for the French Navy. He would obtain a patent in 1933.
- 1930: Guy Gilpatric starts swim diving with waterproof goggles based on the design of swimming goggles invented by Maurice Fernez in 1920.
- Modern era: the development of rubber and plastic made it possible to create masks that would fit properly and not leak water. Materials are constantly being developed that are increasingly resistant to the ocean water and to see underwater better.
Pretty cool, huh? Crazy to think thousands of years went into the creation of such a hobby most people don't think twice about. Which development do you think was the most important in the creation of snorkeling?
Leave a Comment
|
<urn:uuid:2121925f-1f19-4374-9143-7d70e6ffa018>
|
CC-MAIN-2023-14
|
https://sunplay.com/blogs/posts/fun-fact-friday-history-snorkeling
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00392.warc.gz
|
en
| 0.961015 | 900 | 3.578125 | 4 |
As you are aware today is ‘World Book Day’ and I would love to find out what you enjoy reading. This can be a short blog about the book, why it’s enjoyable, who are some of the characters? What is the plot?…..ssshhh don’t give too much away!
One of my favourite books is ‘The Butterfly Lion’ by Michael Morpurgo. In this story Bertie rescues an orphaned white lion cub from the African veld. They are inseparable until Bertie is sent to boarding school far away in England and the lion is sold to a circus. Bertie swears that one day they will see one another again, but it is the butterfly lion which ensures that their friendship will never be forgotten. I really love stories about animals which is why this is one of my favourite stories. I also love Michael Morpurgo as an author because I have read many of his books and always find them captivating.
Now your turn.
This half term we will be looking at ‘The Man Who Walked Between the Two Towers’. So for our first day back after the holidays the children replicated some of the movements of the famous French aerobatic Philippe Petit from in the story. They took part in activities such as: juggling in pairs, javelin throwing, balancing on boxes and tight rope walking. Overall, the children have really immersed themselves into the story and are now working towards writing a flashback about Philippe Petit’s day walking across the towers. Here are some action shots of the children pretending to be Philippe.
This week children will be looking at multiplying and dividing numbers. In particular, the children will multiply and divide whole numbers and those involving decimals by 10, 100 and 1000.
Multiplying by 10, 100 or 1000
When you multiply by 10, move all the digits one place to the left, putting a zero in the empty space.
246 x 10 = 2460
When you multiply by 100, move all the digits two places to the left, putting a zero in the empty spaces.
When you multiply by 1000, move all the digits three places to the left, putting a zero in the empty spaces.
The decimal point always stays in the same place.
Dividing by 10, 100 or 1000
When you divide by 10, move all the digits one place to the right (the opposite way to multiplying).
246 ÷ 10 = 24.6
When you divide by 100, move all the digits two places to the right.
When you divide by 1000, move all the digits three places to the right.
The decimal point always stays in the same place.
Well… after an exciting and magical half term we have come to the end of our Harry Potter context. The children threw themselves into this context and I am so proud of the effort and enthusiasm they have put into their work this half term.
During this context, the children became newspaper reporters and have written their own newspaper report about Harry’s first Quidditch game. They have also put themselves in Harry’s shoes and have written a diary entry about Harry’s day when he first had detention in the terrifying Forbidden Forest. In addition, over this half term the children’s knowledge and understanding of the text has blown us away. This was shown in the reflection task the children made and presented to the children in Mr Carr’s class. The children put together a presentation to try and persuade someone why they should attend Hogwarts School of Witchcraft and Wizardry. We have also learned about Alnwick Castle in Northumberland which is famous for being used as Hogwarts Castle in the film.
Our trip to Harry Potter World was also a massive highlight of our half term and I’m sure this was a trip which the children will never forget.
Next half term our new context will be around the book ‘The Man Who Walked Between The Towers.’ It is very different to the context this half term but i’m sure the children will love it!
For our final week this half term, in Maths the children will be distinguishing between regular and irregular polygons based on reasoning about equal sides and angles.
What are ‘polygons’?
Polygons are shapes with many straight sides:
- Regular polygons have equal angles and sides of equal length.
- Irregular polygons have sides of different lengths.
Here are some common polygons:
Pentagons have 5 sides:
A regular pentagon
An irregular pentagon
Hexagons have 6 sides:
A regular hexagon
An irregular hexagon
Heptagons have 7 sides:
A regular heptagon
An irregular heptagon
Octagons have 8 sides:
A regular Octagon
An irregular Octagon
Here are various different shapes in regular and irregular forms:
Are these shapes regular or irregular? What are their names? How many sides do they have?
Shape 1 Shape 2 Shape 3
Shape 4 Shape 5 Shape 6
In Maths next week, the children will be identifying, describing and representing the position of a shape following a reflection or translation, using the appropriate language and knowing that the shape has not changed.
What do we mean by reflection?
When a shape is reflected in a mirror line, the reflection is the same distance from the mirror line as the original shape.
Here are some mirror lines:
- Vertical mirror line
- Horizontal mirror line
- Diagonal mirror line
What do we mean by translation?
Translation is when a shape slides from one place to another, without turning.
Here are some example translations:
- 2 squares to the left
- 3 squares down
- 1 square to the right and four squares up -
|
<urn:uuid:2b2d35e4-3db6-4337-ac49-d01ad53abbe7>
|
CC-MAIN-2020-24
|
http://cohort2012b.woodfoldblogs.co.uk/2018/02/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348519531.94/warc/CC-MAIN-20200606190934-20200606220934-00278.warc.gz
|
en
| 0.936811 | 1,206 | 2.9375 | 3 |
Deep House History
Deep house is a subgenre of house music that originated in the 1980s, initially fusing elements of Chicago house with 1980s jazz-funk and touches of soul music. Deep house tracks generally have a tempo of between 120 and 130 BPM (beats per minute), contrasting with the slower hip hop (100 beats per minute) and faster electronic-techno rave music (150 to 180 beats per minute).
This style of house music can often have an acoustic feeling. The online music store Beatport is credited with driving the popularity of deep house, but also mislabeling a number of artists in the process and giving rise to the future house genre.
Deep house is known for complex melody, use of unrelated chromatic chords underlying most sequences, and a soul, ambient, or lounge vibe to the vocals (if any). In the early compositions (1988–89), influences of jazz music were most frequently brought out by using more complex chords than simple triads (7ths, 9ths, 13ths, suspensions, alterations) which are held for many bars and give compositions a slightly dissonant feel. The use of vocals became more common in deep house than in many other forms of house music. Sonic qualities include soulful vocals (if vocals are included), slow and concentrated dissonant melodies, and a smooth, stylish, and chic demeanor. The use of women’s vocals is more common than male’s in deep house tracks. Deep house music rarely reaches a climax, but lingers on as a comfortable, relaxing sound.
Deep house was largely pioneered by Chicago producers such as Marshall Jefferson (On the House) and Larry Heard (Mr. Fingers) and with tracks such as “Mystery of Love” (1985) and “Can You Feel It?” (1986); the latter had a similar impact on deep house as Derrick May’s “Strings Of Life” (1987) did on Detroit techno. The jazzy sound became more common due to the favored use of electric pianos such as the Rhodes and Wurlitzer, and the Hammond organ. Author Richie Unterberger has stated that Heard’s deep house sound moved house music away from its posthuman tendencies back towards the lush, soulful sound of early disco music (particularly that of old Philadelphia International and Salsoul records).
|
<urn:uuid:5e0e0f17-377a-45c6-ab17-69e8a60c5ede>
|
CC-MAIN-2017-43
|
http://topghosttracks.com/privet-mir/
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00567.warc.gz
|
en
| 0.952588 | 484 | 2.890625 | 3 |
Stockholm+50 Interfaith Statement
Prepared by UNEP Faith for Earth
Endorsed by the International Environment Forum
Faith Values and Reach - Contribution to Environmental Policy
“A point has been reached in history when we must shape our actions throughout the world with a more prudent care for their environmental consequences. Through ignorance or indifference we can do massive and irreversible harm to the earthly environment on which our life and well-being depend. Conversely, through fuller knowledge and wiser action, we can achieve for ourselves and our posterity a better life in an environment more in keeping with human needs and hopes.”
(excerpt from Preambular Paragraph 6 of the 1972 Stockholm Declaration)
We, the representatives of various faith based organizations, Indigenous cultures and wisdoms from around the world participating in the Stockholm+50, committed to caring for ecological justice and for protecting our one Earth, hereby make the following statement to the governments, UN entities, civil society, and all stakeholders of the “Stockholm+50” processes.
The world is facing a triple ‘pandemic’ of climate change, biodiversity loss and pollution. Those hardest hit are those who have caused the least damage. We have less than three years for our carbon emissions to start dropping from the peak, and yet emissions continue to rise. We have already exceeded several thresholds critical to a stable and functioning planetary system, and we are currently on a pathway to overshooting dangerous tipping points, with irreversible consequences for all life.
Rainforests - the ‘lungs of earth’ – are ironically becoming a carbon emitter. Melting permafrost is already releasing enormous quantities of methane. Devastating heat waves, floods, and droughts impact many parts of the world. Climate-related disease outbreak and pest infestations are decimating communities’ resilience. Across the globe, conflict and war are fuelling increased competition for fossil fuel extraction and exploration.
The root causes of the triple planetary crises are deeply fueled by structural greed and apathy that underpin our current economic systems. Amassing of obscene wealth by corporations and select individuals is directly related to global environmental problems and solutions, which is morally and ethically unacceptable.
Without addressing these underlying causes, we are on a collision course to disaster.
Inspired by the values and principles of our various belief systems including faith, values and ethics, we recognise that:
1. Fossil fuel-based, extractive economies are accelerating climate change and loss of biodiversity;
2. Poor and marginalized people, especially women, children, older persons, Indigenous people and those with disabilities are most impacted by climate change;
3. We have abused nature and Indigenous peoples and have been complicit with colonial extractive practices. We need to change our relationship and learn to co-exist in a harmonious and symbiotic manner with earth and its ecosystems. The environment and the human family are interdependent;
4. We humans have failed in our responsibility as ‘earth keepers’ to protect the planet;
5. We must challenge the values, such as individualism and greed shaping our patterns of consumption and production;
6. We must rediscover the moral and spiritual roots of human beings, and rights and dignity of all beings;
7. We must strive to move from human superiority to human humility, from ego-centric to eco-centric and from being separate to nature, to interconnectedness;
8. We must urgently move from unbridled industrial growth to sustainable well-being.
We affirm that:
1. Faith and Indigenous leaders and actors have the potential to play an essential role in shaping global environmental governance and policy making. The traditions that we represent have unique capacities to convince, convene and contribute meaningful, moral, economic, spiritual and social substance to public deliberations;
2. More than 84% of people believe in a religion or a spiritual belief and religious leaders can be found in every part of the world, from the most distant desert village to the densest informal settlement. Faith-based Organizations (FBOs) bring reach and values to the environmental movement;
3. FBOs are strong institutions and are actors of local development and have demonstrated relevance to development around the world, for instance in health and education;
4. The 1972 Stockholm Declaration recognized and referred to the necessity of spiritual growth of humans towards living in harmony with nature;
5. Women and girls in all their diversity are unequally impacted by climate change, but should have equal opportunities, meaningful participation, leadership and influence in climate solutions and access to climate finance;
6. All persons irrespective of their abilities, physical or otherwise, are recognised as equal, and have a vital role to play to respond to climate challenges, and contribute to a better tomorrow.
Call to action:
We therefore call governments, UN entities, civil society, as well as our own constituencies to act on the following demands/action points:
1. Recognize the role of faith, ethics, spiritual and cultural values in environmental governance through adopting a resolution to that effect by the United Nations Environment Assembly and provide the required platform and programme for engaging faith actors in policy dialogue;
2. Implement the human right to a clean, healthy and sustainable environment as a key step towards achieving sustainable development, poverty eradication, inclusivity and gender equality, while respecting rights of nature;
3. Adopt a new development paradigm that integrates moral, spiritual and indigenous shared values;
4. Move from a neoliberal and “anthropocentric” worldview to an interconnected worldview;
5. Support a just transition from fossil-based extractive economy towards life-affirming “economy of life” and sustainable living, as promoted by the faith communities;
6. Adopt and implement an Ecocide law* and promote the Faith for Ecocide Law initiative by FBOs;
7. Ensure the human right to nutritious food and safe water and sanitation, including clean air for all in a healthy environment;
8. Implement the human right to a clean, healthy and sustainable environment as a key step towards achieving sustainable development, poverty eradication, inclusivity and gender equality;
9. Amplify the voice of women and girls in all their diversity as important stakeholders of climate solutions and climate finance;
10. Raise awareness of concerns around carbon offset/nature-based solutions that can lead to abuse of land and rural people.
We commit ourselves to:
11. Act and practice what we preach, and to become protectors of this earth, to strive to live in harmony and sustainability, through our daily actions, how we invest, how we manage assets, and how we engage with our faith communities;
12. Divest from fossil fuels and call for an immediate halt to new fossil fuel explorations and to promote a responsible climate finance as a moral imperative in protecting the most vulnerable from impacts of climate change;
13. Promote “refuse, reduce, reuse and recycle” in all public events, leading by example to reduce pollution, especially plastic waste;
14. Amplify the prophetic voices of young people, older persons, women and Indigenous people;
15. As faith leaders, representatives of faith-based organizations and faith communities, to lead by example to reduce our carbon and water footprints for a healthy planet;
16. Strengthen the interconnectedness of relevant UN mandates such as the two new Human Rights Council Resolutions on climate change and human rights.
*(as it was first mentioned at the Stockholm Conference in 1972 by the Swedish Prime Minister Olof Palme)
Last updated 24 May 2022
|
<urn:uuid:f72a79d5-dbe4-403a-97e0-2dcd55b77e99>
|
CC-MAIN-2023-23
|
https://iefworld.org/index.php/interfaithS50
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00788.warc.gz
|
en
| 0.927266 | 1,576 | 2.546875 | 3 |
The impact of the Value Added Tax, or VAT, on international trade is a complicated aspect of global commerce that is generally only comprehended by experts. Yet, the impact of the VAT may be the single most important variable in explaining U.S. balance of trade deficits that are spiraling out of control – to the detriment of both U.S. manufacturing and the continuing validity of the U.S. dollar itself.
The 2006 U.S. trade deficit is estimated to exceed $760 billion, while last year China also came to hold an historically high $1 trillion in foreign exchange reserves, $700 billion of which was held in U.S. dollar assets.
Simply put, the average United States citizen can no longer afford not to understand the VAT, especially now, in an era where the World Trade Organization dominates our international trade agenda and where NAFTA has morphed into NAFTA-plus, as demonstrated by the Security and Prosperity Partnership of North America, which President Bush declared with Mexico and Canada at the Waco, Texas, summit of March 23, 2005.
The modern VAT was created by French economist Maurice Laur? in the 1950s. The basic concept is that a “value added” tax is imposed at each stage in the chain of production of a good or service.
The goal is for the VAT to be charged to the consumer at the point of sale, such that the amount of the VAT ends up being calculated as a percentage of the final value of the good or service. The VAT charged the consumer is not reimbursed to the consumer, so at the final point of retail purchase the government gets to keep the VAT once and for all.
A VAT and a sales tax are both considered “indirect” taxes that ultimately are shifted to the consumer. Income taxes, in contrast, are considered “direct” taxes in that the tax cannot be shifted to someone else other that the person earning the income. The main difference between a VAT and a sales tax is that the VAT is applied at each stage of production, whereas sales taxes are usually only imposed once, at the final point of sale.
For our discussion, we do not need to master all the intricacies of exactly how the VAT works in practice. What we are concerned about is that in international trade there is a differential between how indirect taxes and direct taxes are treated, and that differential puts the U.S. at a decided disadvantage.
The United States does not use a VAT system. Some 137 countries, including every major U.S. trading partner such as the EU countries, China, Canada and Mexico, have VAT systems.
Here are the crucial points we need to understand:
- Exports from VAT countries enjoy rebates of VAT taxes that goods sold domestically in those same VAT countries would be subject to, while U.S. exports receive no such rebate of federal taxes and much smaller relief from state sales taxes.
- The problem is compounded when we realize that imports into VAT countries are subjected to VAT at the border, while imports into the U.S. are not taxed at the border.
- As a result, U.S. exports are taxed twice, while exports from VAT countries are traded free of certain types of taxes.
A simplified example (which excludes movement expenses and customs duties) helps understand the reason U.S. goods are disadvantaged by VAT systems.
A U.S. car that sells for $23,000 in the U.S. includes profit and covers various tax obligations of the producer, as well as pension costs for workers. When that car arrives in Germany, a 16 percent VAT will be added on to the $23,000 price, meaning that the car will be sold in Germany for $26,680.
Yet, no tax comparable to a VAT is imposed on a German car imported into the United States. Consider the example of a German car that is sold in Germany for $23,000 after the 16 percent VAT is imposed. The example is constructed to compare a U.S. car manufactured in the U.S. and sold in the U.S. that is roughly price competitive with a German car manufactured in Germany and sold in Germany.
The differential VAT taxation issue becomes a problem when the U.S. manufacturer suffers price disadvantages when the U.S. car is exported to Germany to be sold in Germany, compared to the price advantages the German manufacturer receives when the German car is exported to the U.S. to be sold in the U.S.
When the German car is imported to the U.S., Germany rebates the 16 percent VAT to the manufacturer, allowing the export value of the car to be $19,827.59. Moreover, when the German car is imported to the U.S. no U.S. tax comparable to the VAT is assessed, so the car is allowed to enter the U.S. market at a price under $20,000.
In this example, U.S. producers are disadvantaged two ways. On export, U.S. product that otherwise sells for the same price in domestic markets starts off with a disadvantage of $3,680 because of the VAT. At the same time, the German car, which sells at home for the same price as the U.S. car in America, is sold to the U.S. for a price which is $3,172.41 less than the U.S. car. When you add these disadvantages, U.S. car companies face disadvantages at home and abroad that can be measured as $6,852.41.
In effect, the rebate of the VAT to German exports serves as a German subsidy for exports, while the imposition of a VAT on the U.S. car imported to Germany serves as a German tariff imposed on U.S. imports. Still, the VAT is not defined in our free trade agreements either as a subsidy or a tariff, even though U.S. manufacturers are demonstrably disadvantaged both in exporting to VAT countries and in competing with VAT-country exports in the U.S. market.
None of this disadvantage applies to one VAT country importing into another VAT country, because typically the VAT is rebated in all countries when the good exported for sale in the foreign market.
Being the only major international trade country without a VAT, the U.S. is uniquely disadvantaged by the VAT system differential. The U.S. simply has no border-adjustable mechanism that can rectify the disadvantage that is imposed by VAT rebating to VAT country exporters and VAT cost imposition to non-VAT country exporters such as the U.S.
The VAT differential can end up disadvantaging U.S. exports by imposing what amounts to a considerable penalty (with VAT rates ranging up to 25 percent depending on the country) on the U.S. manufacturer, while advantaging the foreign exporter into the U.S. by a comparable price subsidy. In effect, VAT systems function as trade subsidies for exporters and as trade tariffs for importers, although VAT preferences are defined as neither “subsidies” nor as “tariffs” under the terms of international trade agreements, including both NAFTA and the WTO.
On May 17, 2005, testifying to the Subcommittee on Trade of the House Committee on Ways and Means, attorney Terence Stewart of the Washington firm Stewart and Stewart estimated the impact of differential taxation as follows:
On the issue of differential treatment of tax systems, the U.S. is seriously disadvantaged by the application of WTO rules on taxes. With 137 countries applying a VAT tax and a worldwide VAT tax advantage of 15 percent, the U.S. faces up to $450 billion total disadvantage to U.S. exports ($180 billion) and export subsidies to import competition ($270 billion).
Simply put, the VAT differential puts U.S. exporters in a huge hole from which it is almost impossible to dig out. At the same time, the VAT differential gives those who import into the U.S. such a huge edge that their dominance of our domestic market is virtually assured from the start.
Within NAFTA, Stewart estimates that in 2005, we had a $40.6 billion VAT disadvantage with Mexico and a $33 billion VAT disadvantage with Canada. Our largest VAT disadvantage in 2005 was $47.9 billion with China.
“Every major trading country in the world economy except for the U.S. has a VAT system,” notes Auggie Tantillo, executive director at the American Manufacturing Trade Action Coalition, in an interview with this writer. As Mr. Tantillo explained:
The important thing to understand is that the VAT disadvantage is a structural impediment or distortion, just like the currency manipulation China currently exploits by refusing to allow the value of their currency to freely float on world currency exchanges. The VAT differential is not simply a tangential, frivolous or superficial thing that a few countries do here or there, just a minor irritant. The VAT differential is a core, driving aspect as to why U.S. companies find themselves continuously at a disadvantage. If you extrapolate to the next step, the VAT differential is a core reason for why we see this escalating growth in the U.S. trade deficit on an annual basis.
Ironically, Mr. Tantillo argues that the VAT differential is a key reason many U.S. companies decide to move their manufacturing offshore, preferring to export into the U.S. market rather than to operate as U.S. domestic producers.
Mr. Tantillo elaborated these points in our interview:
When you add everything up and start to look at the cumulative effect, the board of a U.S. company sitting around a table has to say, “We can cut our labor costs by ‘x’ percent, we can get rid of our health care, we don’t have to produce a pension plan, and then we have this VAT situation where our taxes are rebated to us when we ship to the U.S., the market we want to impact anyway, so why do we stay here in the United States? Why not move our manufacturing to Asia?” The economic analysis just becomes compelling for many U.S. companies.
In other words, the VAT differential functions as an additional international trade structural incentive for U.S. companies to move manufacturing to foreign countries where they can participate in the VAT advantages as importers into the U.S. market. The point is not simply that wages are less in countries such as Mexico and Canada; both countries additionally possess structural advantages in international trade, including importantly the differential on how VAT systems work.
As explained by Tantillo, “It’s time for the U.S. to confront the WTO and NAFTA and say that we can no longer abide by a structural distortion that is massively and adversely impacting our manufacturing base.”
|
<urn:uuid:6d012848-28e7-467a-ad21-8d8012b98a1f>
|
CC-MAIN-2014-41
|
http://www.wnd.com/2007/02/39999/
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410665301782.61/warc/CC-MAIN-20140914032821-00242-ip-10-234-18-248.ec2.internal.warc.gz
|
en
| 0.959733 | 2,233 | 2.984375 | 3 |
1910: Pequot population falls to all-time low
According to the 1910 U.S. Census, there are 66 Pequot remaining from a population of 8,000 at the time of first contact with English settlers in the 1600s. Historically the Pequot population declined because of war, disease, and abduction into enslavement, usually to the West Indies, where conditions were harsh and escape difficult. Census workers find only a few Pequot families on the Mashantucket Reservation in Connecticut. Others live in nearby cities where they can get jobs.
|
<urn:uuid:0d926b70-1027-4b5b-8c77-d6ac51793441>
|
CC-MAIN-2014-23
|
http://www.nlm.nih.gov/nativevoices/timeline/425.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894140.11/warc/CC-MAIN-20140722025814-00202-ip-10-33-131-23.ec2.internal.warc.gz
|
en
| 0.94048 | 116 | 3.046875 | 3 |
Stormwater and Urban Issues
Stormwater runoff is generated when precipitation from rain and snowmelt events flows over land or impervious surfaces and does not percolate into the ground. As the runoff flows over the land or impervious surfaces (paved streets, parking lots, and building rooftops), it accumulates debris, chemicals, sediment or other pollutants that could adversely affect water quality if the runoff is discharged untreated.
Western New York Stormwater Management Training Series Returing Spring 2014!
Courses will be taught by John Dunkle. John is a licensed professional engineer in New York, a Certified Professional in Erosion and Sediment Control (CPESC), and a Certified Municipal Separate Storm Sewer System Specialist (CMS4s).
These classes are designed for consulting and municipal design engineers, code enforcement officers, teachers, landscape architects, municipal staff, municipal planning board members, landscape architects, and land-use planners and developers. All workshops are one-day sessions with a fee of $230 for each session. Fees include handouts distributed in class, refreshments, and lunch. Workshops will be held at the Monroe County Community Hospital, off of East Henrietta Road, in Rochester, NY
Workshops require the use of the NYS Stormwater Design Manual
or the NYS Standards and Specifications for Erosion and Sediment Control.
You may download them for free from the DEC websites, or you may order them hardcopy for a fee through the Empire State Chapter Soil & Water Conservation Society. For more information, visit the Empire State Chapter Soil & Water Conservation Society website.
Visit our stormwater training page for the schedule of our upcoming trainings, brochures and registration forms!
MCSWCD Stormwater Services
Subdivision Plan Review
Upon request from municipalities or the New York State Department of Environmental Conservation (NYS DEC), subdivision plans are reviewed for potential impacts to water resources.
Construction Site Inspections
In collaboration with the NYS DEC and the municipalities of Monroe County, we conduct construction site inspections and respond to citizens’ complaints in Monroe County for compliance with Phase II regulations for the State Pollution Discharge Elimination System (SPDES).
Municipal Stormwater Management Assistance
MCSWCD assists municipal staff with stormwater management concerns such as construction site erosion control, stormwater ponds, stream bank erosion and drainage.
Mapping Resource Services
Approximately 120 landowners and consultants request copies of aerial photos and natural resource maps each year.
Our technicians meet with landowners to discuss issues such as drainage concerns and techniques for improving backyard wildlife habitat.
Stormwater Training Seminars
The District hosts many stormwater training seminars throughout Monroe County. For more information on training details, click here!
Looking for More Trainings?
Upcoming Code Enforcement Officer Trainings and Conferences:
NYS DEC Stormwater Training Calendar:
Monroe County Soil and Water Conservation District
145 Paul Road, Building 5
Rochester, New York 14624
Email the District
Web hosting provided by Genesee Gateway
Page last updated:
October 23, 2013
|
<urn:uuid:709d184c-340a-4978-901f-08772719e25c>
|
CC-MAIN-2014-23
|
http://www.monroecountyswcd.org/Pages/Stormwater.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00476-ip-10-146-231-18.ec2.internal.warc.gz
|
en
| 0.873008 | 638 | 2.671875 | 3 |
The document “Math 1” provides an overview of the program standards for admission to training including TABE testing. Documents were developed so consistent evaluation and remediation of math skills occur statewide.
Math Resources from NSHC:
Resources for CHA Math Instruction
List the source/website of the resource and a brief description of what CHA math skill(s) it addresses.
The websites below provide exercises (self-graded) and instructional materials suitable for learning very basic decimals, fractions, metric system, number lines, etc.
The website below provides worksheets and key for the following skills (and others): multiplication, addition and subtraction, with and without carrying, adding and subtracting fractions, etc.
|
<urn:uuid:a90f6638-3367-436b-96d0-3a1b3d7788ee>
|
CC-MAIN-2017-30
|
http://www.akchap.org/html/library/math.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00028.warc.gz
|
en
| 0.910616 | 152 | 3.28125 | 3 |
Over 200,000 people are hospitalized annually because of the flu. Say what? And this year, flu season came early.
Forty-one states have seen influenza outbreaks , with over a dozen people (young and old) having died because of it. Yikes. Follow these tips and you can help keep the aches, fever and cough away.
How Flu Spreads:
- People with the virus can spread it to others up to 6 feet away.
- Experts believe that it spreads by droplets made when people with the flu cough, sneeze, or talk.
- Healthy adults can infect others 1 day before symptoms develop and up to 5 to 7 days after becoming sick.
How to Prevent It:
- Get vaccinated. (You need to do this EVERY year.)
- Stay home if you’re sick.
- Cover your nose and mouth with a tissue when you cough or sneeze.
- Wash your hands with warm water and soap often.
- Avoid touching your eyes, nose, and mouth. This is how germs spread.
- If you DO get the flu and it's really severe your doctor can provide antiviral drugs to treat your illness.
See more tips on what to do if you get the flu.
Share these tips around school and ask an administrator about putting hand sanitizers in classrooms (if you don't already have them). GO
|
<urn:uuid:1dc87139-3bce-4ab0-8484-15e9cbebc74c>
|
CC-MAIN-2014-10
|
http://www.dosomething.org/print/tipsandtools/action-tips-prevent-flu
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010765401/warc/CC-MAIN-20140305091245-00017-ip-10-183-142-35.ec2.internal.warc.gz
|
en
| 0.95794 | 292 | 3.875 | 4 |
45 minutes (Year 3)
50 minutes (Year 5)
The National Assessment Program – Literacy and Numeracy (NAPLAN) is an annual national assessment for all students in Years 3, 5, 7, and 9. All students in these year levels are expected to participate in tests in reading, writing, language conventions (spelling, grammar and punctuation) and numeracy.
The tests provide parents and schools with an understanding of how individual students are performing at the time of the tests. They also provide schools, states and territories with information about how education programs are working and which areas need to be prioritised for improvement.
NAPLAN tests are one aspect of each school’s assessment and reporting process, and do not replace the extensive, ongoing assessments made by teachers about each student’s performance.
|
<urn:uuid:308721bd-e075-4d60-b46b-5b314c76dd1c>
|
CC-MAIN-2020-10
|
https://lhps.nsw.edu.au/events/event/naplan-reading/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00241.warc.gz
|
en
| 0.945012 | 170 | 3.328125 | 3 |
Deontology is a school of philosophy that judges the morality of an action based on rules. It doesn’t take into consideration outcomes (or the weighing of which outcome is better).
To illustrate this idea, let me use a daily social interaction as an example.
A colleague sent picture of 2 designs of socks in our Whatsapp group chat, and asked the rest of us which one we preferred to be used as a door gift for the upcoming Christmas celebration in the office. The choices were: “Christmas” and “Marvel”.
The usual response everyone would have is to download the images and make a comparison before giving a verdict.
Deontology is to make the decision based on what is right for situation. I did not even download the photos to compare the design, because by Deontology, I should pick “Christmas” design over “Marvel”, as this was supposed to be a gift for a Christmas celebration.
Now, I’m not saying that my colleagues who chose “Marvel” was “wrong” or did not apply deontology principles. They could very well have demonstrated their thought processes to me to prove that their decision was of a deontological outcome. The key of this discussion is the process: did we make a decision based on what was the rules (or “what was right”)? Or did we decide it based on “who benefits most” or “who lost the least”?
And by the way, Happy Holidays, everyone!
|
<urn:uuid:271fa7de-0965-4fa2-85ff-fd3cacd60c58>
|
CC-MAIN-2020-05
|
https://zenovyap.com/simple-example-to-help-understand-deontology/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00202.warc.gz
|
en
| 0.982938 | 323 | 2.75 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.