VOV.VN - As a country seriously affected by global climate change, Vietnam considers adaptation to climate change vital to socio-economic development, especially to the agricultural sector.
Vietnamese farm produce is sold in 180 countries and territories. Last year, Vietnam earned US$35 billion from exports. But natural disasters triggered by climate change have damaged hundreds of thousands of hectares of rice and other crops. Climate change is threatening the food security of millions of people. Consequently, agricultural restructuring to mitigate climate change has become one of Vietnam’s top priorities.
An Giang province in the Mekong Delta, a province with many agricultural advantages, especially in rice production, is one of the areas most affected by climate change. It has restructured its crop and livestock production in the direction of linking value chains and applying safe production processes and advanced technologies.
Tran Anh Thu, Director of the Agriculture and Rural Development Department of An Giang, said: “We have formed hi-tech agricultural areas covering 100 to 200 hectares and invested in irrigation infrastructure to grow fruit trees and other crops instead of rice and to promote aquaculture. We have been very careful in crop restructuring while ensuring a market for the new crops and their profitability, as well as public consensus. The change needs to be consistent with the planning.”
In addition to using sustainable agricultural production models adaptive to climate change, a number of advanced technologies have been applied, including tissue culture technology to improve forest trees, technology to produce safe vegetables meeting VietGAP standards, and hydroponic technology to produce vegetables in net houses.
Minister of Agriculture and Rural Development Nguyen Xuan Cuong said: “In the next phase, science and technology will be the key to promoting agricultural value chains to replace small holders. In this process, scientists and research institutes play a decisive role, becoming a "nucleus" to link farmers, enterprises, scientists, and the state. The Ministry of Agriculture and Rural Development will develop a cooperative mechanism to encourage new ideas to boost agriculture restructuring.”
Vietnam has created incentives for enterprises to invest in hi-tech agriculture to produce high-quality products with greater added value. Localities have conducted planning for hi-tech agricultural production areas and focused on personnel training, necessary steps for Vietnam’s agriculture to respond to global climate change. | https://english.vov.vn/economy/agricultural-restructuring-in-response-to-climate-change-365880.vov |
Community Governance Considerations of Open Source Projects
On 05 Jun, 2017 By admin
DrupalCon was a great way to connect with the community and gauge the pulse from recent events involving Crell. After writing blog posts, I engaged with many people to share thoughts and hear perspectives. One common question that came up: what do other communities do for governance?
This motivated me to do some research of my own. I wanted to be better informed in discussions to know what other communities are doing in an effort to identify where our communal gaps might be. I am a firm believer that the Drupal community doesn’t need to “reinvent the wheel” or think we are “special snowflakes”. There are countless other projects out there dealing with the same problem space. I explored online documentation for other projects and open source communities. I also found research and blog posts on governance topics for communities. I’ve included all of my references below. This blog post is a simple effort to summarize and share what I learned about community governance.
Premise
There is a great deal of diversity in the community governance of Open Source Software (OSS) projects. Each project has its own unique considerations, and consequently each project leverages different governance concepts and structures. While the governance structures used may differ, a substantial number of concepts are shared among them. As such, my emphasis is on highlighting a set of objectives/motivations and an associated set of governance concepts that communities have hand-selected. Please note, I did not research every open source project, and the ideas presented are likely not exhaustive.
I extracted a summary quote to provide a satisfactory overview defining the need for governance in line with my findings:
"OSS is best understood neither as primarily a technical development or social process perspective, but instead as an inherent network of interacting sociotechnical processes, where its technical and social processes are intertwined, codependent, coevolving, and thus inseparable in performance "
Effective governance must understand the diverse technical and social needs of the community in order to serve its members effectively. Members of the community need governance to ensure the community’s values are upheld by other members. Governance ensures fairness and helps the community establish and maintain an identity. This identity must be made clear so community members, often volunteers, feel confident they are participating in something that aligns with their value system. The case for community governance echoes the argument presented in “The Tyranny of Structurelessness” by Jo Freeman.
Community governance is not one-size-fits-all between communities. There are many factors and motivations. Community governance can change based on the size of the community, financial support, application of free and open source ideologies, desired leadership, and much more. Community governance influences include operational management, health and sustainability, distribution of both centralized and decentralized decision making, and separation of duties.
Community Governance Objectives/Motivations
The following list represents a set of objectives and motivations that seemed present in one or more of the communities I researched.
Shared Purpose - As a community, there needs to be a shared purpose that is the foundation for all community governance activities. This shared purpose is often a function of serving the community.
A Representation of Values - Members want community governance to be a representation of community values.
Simple - Community members want straightforward governance to understand how the community operates.
Transparent - Community members build trust through openness and governance activities should strive to be as transparent as possible.
Clarity - All aspects of the community governance should be clear and easy to understand.
Consistent - Community governance structures and processes need to be consistent to build member trust.
Evolving - As the needs of the community change, so must the community’s governance.
Involvement - All governance activities need to involve the entire community to ensure broader interests are represented.
Concepts
Leaders
Open Source projects are often founded by an individual that started with an idea and ran with it. When small, a leader can maintain a much larger and active role in contribution and direction as the principal decision maker. As open source projects grow, leadership often morphs into technical vision, product roadmaps, and high level prioritization of initiatives. As an enabler, a leader should understand the strengths of the project and community to ensure priorities are clear and the community is empowered to deliver on that vision. While the leader may still make decisions (mostly technical), he/she should aim to be representative of, and engaging with, the community members he/she represents.
Foundations
Many open source projects associate with a non-profit “foundation” that performs a variety of supporting oversight functions across one or more projects aimed at stewardship, growth, stability, health, operations, and advisory. The functions vary between foundations, but can represent project infrastructure (tools), manage project financing, sponsor events, and navigate licensing needs. Foundations often establish data and metrics to help evaluate project health. Examples include the Linux Foundation and Apache Foundation.
Special Interest Groups
Special Interest Groups represent a significant way to distribute communal responsibilities for both decision making and focus areas of the community. They serve both social and technical needs and can be considered in technical, working, or advisory capacities. For instance, Kubernetes uses Special Interest Groups like “Documentation” and “Apps” to provide dedicated community focus in those topics. Groups are often sanctioned by the community with a charter and a defined focus area and often serve as a way to help a community evolve as needs arise. Group size and structure vary depending on the amount of activity, but group activity is intended to be both open and inclusive of community member needs. Groups engage the communities through the use of issue queues, open meetings, posting of meeting notes, and more.
Boards
Boards serve communities in advisory capacities. While a foundation helps steer operational needs of a community (sponsoring tools, events, etc), a board works with both the leader and the various groups to help steer and inform community efforts. Boards can be viewed as an escalation from groups and an overall thinktank to help inform community strategy. Board composition should be diverse and consider periodic elections to ensure membership routinely changes and there is an influx of new and diverse ideas.
Other Community Roles
Communities clarify the different ways people can be involved. I’ve summarized roles I found:
BDFL - A “Benevolent Dictator for Life”, a type of community leader who often helps define overall communal direction.
Board member - Council member to serve in broad consultation.
Group member - Often a subject-matter expert or contributor assigned to nurture a specific focus area.
Foundation member - Individuals (paid and unpaid) tasked with helping support the work of the community.
Contributor - Someone who provides work back to enhance the efforts of the community. This is often code, documentation, or thought leadership within communal discussion.
User / Consumer - People that use the open source project.
Corporate sponsor - A company that helps financially support an open source project through paid contribution, funding for the foundation, or sponsoring events.
Evangelists - Individuals that promote the work of the community.
Governance Documentation
Open Source community participants often seek alignment with their value systems. Communities leverage documentation to help communicate the identity of the community. This is represented in many different ways:
Mission statement - Captures the overall goal and purpose of the community.
Engagement
Open Source projects rely on various activities and tools from community members to be vibrant. Engagement can be performed in many different ways:
Code - Code contributions are one of the primary ways technical people give back to communities, which are often measured in commits.
Open meetings - Groups and boards hold open meetings that often allow community members to be informed and participate.
Elections / Voting - Community voting is a means of collecting feedback from community members on initiatives or membership.
Retrospectives - Members need opportunities to provide feedback (and not just through issue queues). Retrospectives share what people have learned and help identify next steps to evolve community efforts.
Meeting notes - Notes should be shared to allow non-present community members the ability to review at a later time.
Issue participation - Community members help to organize bugs, plans, and documentation needs inside of issue tracking systems. This also allows members to collaborate and test work (patches, pull requests) before finalizing an approach.
Wikis / documentation - Communal knowledge should be captured within wikis that (1) can be routinely updated by any community member and (2) afford information sharing.
Direct Messaging channels - Tools like Slack, IRC, and more provide the ability to set up both generic and specific channels (groups) and one-on-one messaging that allow for community members to communicate more directly.
What About Drupal?
As many of you know, I am a member of the Drupal open source community. As I noted, my intent of this blog post is to see what other communities do for community governance. I have intentionally left out Drupal-specific details such that others can form their own opinions. My next step will be a follow-up blog post that captures my observations on current Drupal governance with respect to my research.
Resources
Please note that many of the resources identified below have auxiliary links that further clarify the concepts I mention. I have captured a subset of these links for pages that I found particularly informative.
| |
Robert Drew: ‘I’m determined to be as unobtrusive as possible. And I’m determined not to distort the situation’ (Hall, 1991). Bill Nichols defines documentaries as: ‘Documentaries are about reality; they’re about something that actually happened… Documentary film speaks about situations and events involving real people who present themselves within a framework. This frame conveys a plausible perspective on the lives, situations, and events portrayed. The distinct point of view of the filmmaker shapes the film into a way of understanding the historical world directly rather than through a fictional allegory’.
(Nichols, 2001) ‘Every documentary has its own distinct voice’ (Nichols, 2001), like a fingerprint with distinct indentations and unique characteristics. Documentary aims to depict some characteristic of reality, primarily for instruction, education or the maintenance of some form of historical record. In his book ‘Introduction to Documentary’ (2001), Nichols identified six modes found in documentary film that function as sub-categories of the genre: poetic, expository, participatory, observational, reflexive and performative. The six modes establish a loose set of rules within which filmmakers can work, set up arrangements that a film may implement, and deliver on expectations viewers anticipate having fulfilled.
However, a film recognised within a mode does not have to consist solely of that mode. A reflexive documentary, for instance, might contain fragments of observational or participatory film. The modes merely give the film a sense of structure; they do not dictate every facet of it. Each mode arises from a sense of discontent amongst filmmakers with a prior mode, meaning that the six modes carry a sense of documentary history.
An example of this is the rise of the observational mode with the availability of mobile 16mm cameras and magnetic tape recorders in the 1960s: the poetic documentary quickly came to seem too abstract and the expository too didactic once it became possible to film events with minimal intrusion. Throughout this essay, I will be discussing the observational mode. To fully understand this essay, it is first vital to outline what the observational mode is and how it is used. As detailed earlier, after developments in Canada, Europe and the United States in the 1960s, smaller, lighter cameras and tape recorders became progressively obtainable and permitted easier access, as they could be handled by a single person. Dialogue became synchronised with image without the need for bulky equipment or cables, allowing free movement around event scenes. The independence the technology gives the filmmaker allows observational footage to be filmed naturally, with no staging, composition or arrangement.
The filmmaker can simply observe reality with a camera without interfering with its focus; arguably, the smaller and lighter equipment formed the basis of the mode. This carried through to the editing process, resulting in documentary films with no voice-over, no added music or sound effects, no re-enactments and no interviews. An intimate portrayal of two women living an insular existence, Grey Gardens (1975) explores the relationship of a mother and daughter. Big Edie and Little Edie, the mother-and-daughter duo, pick fights with one another, make up, sing, eat together, reminisce about the past and philosophise about life. The duo appear to live in an eternal present, in which the time of day or year is utterly irrelevant to them.
Albert and David Maysles, known as the Maysles brothers, used the observational documentary mode throughout ‘Grey Gardens’ by taking their cameras within the walls of the women’s home and lives. The brothers created an authentic, provocative non-fiction feature film capturing the relationship between mother and daughter. The film’s aesthetic embraces its imperfections as a form of realism. The documentary follows observational methods, using handheld camera movement and diegetic sound; the story unfolds unscripted and without narration. The exposition of Grey Gardens is handled through a montage of news cuttings showing the exposure the film was receiving. The use of montage is effective as it places an emphasis on the actuality of the story. The Maysles brothers do not bother to hide their presence in the film: while the headlines are being revealed, the brothers’ voices are heard in the background. This could be argued to be an important factor of the film, as it shows the filmmakers are supportive of the characters.
The filmmakers are also glimpsed in the reflections of mirrors when the characters directly address them. Through showing images of the brothers with their equipment, the viewer acknowledges that although the film shows the real lives of the characters, it is presented as an industrial product. ‘These “flaws” in themselves seem to guarantee authenticity and thus became desirable’ (MacDonald, 1996, p. 250). Grey Gardens is filmed as a fly-on-the-wall film, conforming to the conventions of observational cinema. The handheld camera allows the filmmakers to move easily and capture the action, albeit in a rough-and-ready way, and at times the film lacks focus and looks amateurish. The fly-on-the-wall technique is reinforced by the use of sound, given the absence of scripts or cues and the competition for attention. ‘The beauty of a Maysles image most often arises through its startling immediacy, capturing and seizing the spontaneity of a moment… rather than freezing the image into one of overly aestheticized beauty’ (Joe McElhaney, Albert Maysles, University of Illinois Press, 2009). McElhaney points out that the filmmakers being there to capture the action as it unfolds is part of the attraction of this documentary.
Observational documentary can be seen as an approach which rejects fictional devices. This would suggest that all we see on screen is the ‘truth’; however, it is through editorial choices that the ‘truth’ captured by the camera is given a meaningful narrative. The editing methods used throughout the documentary contribute to the disordered feel of the film: the shots are short and the film jumps between them quickly and frequently. This suits the nature of the story, which features a highly fragmented and challenging narrative structure and a lack of any clear sense of time. Combined with the image, synchronous sound is played which is fully diegetic. In observational documentary, the filmmaker should not create an artificial appeal to the audience through the use of edited sound.
Both women featured in Grey Gardens aspire to be entertainers, meaning music is a crucial part of the documentary. Singing along to records, arguments and talking over one another are some of the ways sound is produced naturally throughout the film. Grey Gardens does not use a narrator, another common aspect of observational documentary; the belief is that the subject matter is interesting enough not to require explanation. ‘High School’ (1968), Frederick Wiseman’s second film after the controversial ‘Titicut Follies’ (1967), is also a film that uses the observational mode throughout. Wiseman began his career during the heyday of observational documentary in the 1960s.
However, his style is evidently unlike that of other filmmakers such as Albert and David Maysles. Wiseman shapes his footage for High School through the heavy use of editing, providing an artistic form and structure for the film that is distinctly different from a chronological approach. Most observational filmmakers tend to focus on fascinating individuals, as seen in Grey Gardens’ mother-and-daughter duo, whilst Wiseman’s films, including High School, shot at Northeast High School in Philadelphia, study social and institutional sensibilities instead.
High School, photographed by Richard Leiterman, a significant Canadian cinematographer, initially has a loose structure sticking to a conventional ‘day in the life of’ approach. The documentary opens with footage filmed from a moving car, seemingly on the way to school in the morning. The first classroom shots contain daily announcements and the ‘thought for the day’, and about midway through the film there is a sequence of teachers having lunch. At the same time, the school’s approach to education is presented as being like an industrial process. Wiseman has said that when he first saw the school, he was struck by how much it resembled a factory. High School views the American public-school experience as a factory-like process, with the pupils becoming the socialised and standardised products it turns out. Wiseman’s editing rapidly reveals the film’s sardonic view of public education, with the content of the first lesson, a Spanish lesson, seeming ironic in the context of the teacher’s approach of having the entire class drone in unison. Wiseman cuts from the Spanish lesson to a percussion lesson, which mimics the teacher’s conducting hand, emphasised by the framing of the shot.
Most of the scenes emphasise a lack of personalisation and of conceptual instruction. In the girls’ gym class, the camera focuses not on their faces but on their bodies, clothed in identical uniforms, making them indistinguishable from one another. High School comprises 37 distinct segments, each one showing an episode of high-school life. Some segments, such as the chorus rehearsal, are quite brief; others involve extensive dialogue. Formally, the film presents a challenging combination of structural types.
Overall, the form is categorical. The main category is high-school life, and the subcategories consist of typical activities: classes, student-teacher confrontations, and sports activities. The way in which categorical, narrative, and associational strategies combine becomes clearer if we look at how Wiseman has selected and arranged his material. The film is not a full cross-section of high-school life; it omits many important aspects.
We never see the home life of students and faculty, and, strikingly, we never witness any conversations between students, either in class or outside it. Wiseman has concentrated on one aspect of high-school life: how the power of the authorities demands obedience from pupils and parents. Wiseman’s use of the observational mode reflects his view of his relationship with his subjects. Wiseman never appears in the documentary, and states that it is not his job ‘to intervene in their lives… I want to show the reality without changing it’ (Ferguson, 1994). This lack of involvement generates a dependence on visual and audio cues.
This documentary is purposely shaped in order to convey a message that is broadly applicable. In High School, the subjects later resented their portrayal by Wiseman. They gave him complete access to their lives, or at least their lives at work, and were disappointed with Wiseman’s selective edits and the ensuing public criticism. This raises an important point about informed consent. If a filmmaker follows the direct-cinema method of being a fly on the wall, privy to most details and interactions of a subject’s life, then I would argue that the subject is unable to fully grasp how little and how much a camera is capable of capturing, nor what kind of effect the camera will have upon their subsequent actions. As discussed, both films use the observational mode in very different ways: where the Maysles brothers use fascinating characters as their subject matter, Wiseman focuses on institutional and social issues, yet both capture reality and life in a realistic and compelling way. Observational footage is an important part of film because it allows the viewer the most unbiased view of the subject matter. Although the filmmaker is at liberty to choose which pieces of footage are put into a documentary, without a narrator no biased opinions or views are imposed on the viewer.
The viewer is therefore free to interpret the film as they wish and form their own opinions on the subject matter. However, the observational mode can only be so effective. Many people watch a documentary in order to be educated, yet with nobody telling the viewer what they are watching, is it possible to learn? How much can a person learn or understand from merely looking, with nobody to explain what is happening in the situation? Although the observational mode is supposed to be the most unobtrusive mode, with the least interaction and just cameras, one may argue that subjects knowing there are cameras filming them will affect their behaviour, thereby undermining a realistic and true insight into the situation. | https://essayprepper.com/robert-situation-hall1991-bill-nichols-defines-documentaries-as-documentaries/ |
Current global value chains are highly efficient, specialised, and interconnected, but they are also highly vulnerable to global risks. The Covid-19 pandemic has been a stark demonstration of this point, causing supply-side disruptions in the first quarter of 2020 as China and other Asian economies were hit by the outbreak of the virus which eventually spread globally, leading to business closures in countries all around the world (Seric et al. 2020). The ensuing supply chain breakdown prompted policymakers in many countries to address the need for economic self-sufficiency, along with strategies to better deal with global risks, even at the expense of the efficiency and productivity gains that globalisation has brought (Michel 2020, Evenett 2020).
Escalating geopolitical tensions and trade restrictions
Addressing this need for self-sufficiency – especially with regard to economic dependence on China – has given rise to geopolitical tensions, exemplified by the escalation in trade interventions in the lead-up to early December 2020 (Evenett and Fritz 2020). Close to 1,800 new restrictive interventions have been imposed in 2020. This is over one and a half times the number in each of the two previous years, when the China-US trade dispute and a new wave of protectionism intensified (Figure 1).1 The adoption of discriminatory trade interventions outpaced liberalisations, despite the increase in new trade-liberalising measures or the lifting of some emergency trade restrictions during the period.
Figure 1 Number of new trade policy interventions implemented each year
Note: Reporting lag-adjusted statistics.
Source: Global Trade Alert, chart taken from Industrial Analytics Platform
China registered the highest number of both trade-discriminatory and trade-liberalising interventions of any country: almost 3,300 (43%) of the 7,634 discriminatory trade interventions implemented between November 2008 and early December 2020, and 1,315 (48%) of the 2,715 trade-liberalising interventions over the same period applied to China (Figure 2). Amid rising US-China trade tensions in 2018-19, China already faced a particularly high increase in trade restrictions relative to other countries, which further intensified during the Covid-19 crisis.
Figure 2 Number of trade policy interventions between November 2008 and early December 2020 by country affected
Note: This figure presents the top 5 most exposed countries. Reporting lag-adjusted statistics.
Source: Global Trade Alert, chart taken from Industrial Analytics Platform.
Signs of resilience in current global value chains
Covid-19 supply chain disruptions provide an unprecedented opportunity to examine the resilience of global value chains. Data on trade flows and manufacturing output over the course of the pandemic suggest that the supply chain disruptions of early 2020 were of a temporary nature (Meyer et al. 2020), and that extended global value chains currently interlinking many firms and economies seem to be resilient to trade and economic shocks, at least to some extent (Miroudot 2020).
The Container Throughput Index of the RWI – Leibniz Institute for Economic Research and the Institute of Shipping Economics and Logistics (ISL), for example, suggests that severe global trade disruptions first hit Chinese ports at the outbreak of the pandemic before spreading to other ports around the world (RWI 2020). The RWI/ISL Index also shows, however, that Chinese ports recovered swiftly, bouncing back to pre-pandemic levels in March 2020, and strengthening still further following a slight setback in April 2020 (Figure 3). The index further suggests an upturn in container ‘throughput’ for all other (non-Chinese) ports, although this recovery started later and has been weaker than China’s.
Figure 3 RWI/ISL Container Throughput Index: China and the rest of the world
Note: The RWI/ISL Index is based on container handling data collected from 91 ports around the world. These ports account for the majority of the world’s container handling (60%). With globally traded goods being mainly transported by container vessels, this index can be used as an early indicator of developments in international trade. The RWI/ISL Index uses a base year of 2008, and figures are seasonally adjusted.
Source: RWI – Leibniz Institute for Economic Research/Institute of Shipping Economics and Logistics, chart taken from Industrial Analytics Platform.
Similar trends have also been observed in world manufacturing output. China’s production output may have been the first to be hit by strict virus containment measures, but the country also saw an early return to economic activity. Its manufacturing output had rebounded to pre-pandemic levels by June 2020 and has continued to rise since (Figure 4). In step with the international spread of Covid-19, the production output of other countries was curtailed around two months later. Economic recovery in these countries seems to be much slower than in China; two months after China’s manufacturing output returned to pre-pandemic levels, the rest of the world was still lagging behind.
Figure 4 Index of world manufacturing output for selected regions
Note: This data uses a base year of 2015, and figures are seasonally adjusted.
Source: UNIDO, chart taken from Industrial Analytics Platform.
China’s strong economic recovery relative to other countries is even more starkly reflected at the industry level. The figure below shows year-over-year changes in output for September 2020 for China’s five fastest growing industries, all of which are highly integrated in manufacturing global value chains (Figure 5). While output for four of these five industries increased by (far) more than 10% in China, the corresponding output in industrialised economies over the same period decreased by more than 5%. Although manufacturing of computer, electronic, and optical products in industrialised countries (and across the world) expanded in September 2020, their growth rates were still weaker than China’s.
Figure 5 Manufacturing output growth by industry in September 2020
Note: This figure presents the output change of the five industries with the strongest growth in China in September 2020.
Source: UNIDO, chart taken from Industrial Analytics Platform.
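The year-over-year comparisons behind Figures 4 and 5 come down to simple index arithmetic. As a rough illustration only (the index values below are invented placeholders, not UNIDO data), such growth rates can be computed from a seasonally adjusted monthly output index like this:

```python
# Year-over-year growth from a monthly manufacturing output index (base year 2015 = 100).
# The index values below are illustrative placeholders, not actual UNIDO data.

monthly_index = {
    "2019-09": 104.2,
    "2020-09": 116.9,
}

def yoy_growth(index, month, same_month_prior_year):
    """Percentage change of the index versus the same month one year earlier."""
    return (index[month] / index[same_month_prior_year] - 1.0) * 100.0

if __name__ == "__main__":
    growth = yoy_growth(monthly_index, "2020-09", "2019-09")
    print(f"Year-over-year output growth, September 2020: {growth:+.1f}%")
```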
China’s swift and strong recovery seems to indicate that Chinese firms are more resilient to global shocks than most others. In fact, the value chains Chinese firms are deeply involved in seem to be more resilient. One reason for this might be China’s success in quickly containing the local spread of Covid-19. Another reason could be that the country has more regionalised value chains compared to other countries. China has become a particularly attractive investment destination and trading partner for neighbouring economies over the years, especially the Association of Southeast Asian Nations (ASEAN). It has also focused on building international economic relationships within its own ‘neighbourhood’, through, for example, the Belt and Road Initiative, and the negotiation and conclusion of the Regional Comprehensive Economic Partnership (RCEP).
China’s deeper economic integration with ASEAN countries is evident in its trade data. According to UNCTAD data, the ASEAN bloc has become China’s largest trading partner, surpassing both the US and the EU2 (Figure 6).
Figure 6 Share of major trading partners in Chinese merchandise trade
Note: Merchandise trade refers to the sum of merchandise exports and imports.
Source: UNCTAD, chart taken from Industrial Analytics Platform.
ASEAN as an export target region had been of increasing importance leading up to the pandemic, with year-over-year growth exceeding 20% towards the end of 2019. This growth rate was much higher than that of China’s exports to many other major world markets, including the US, Japan, and the EU (Figure 7).
Although China’s exports to ASEAN were also affected by the containment measures associated with Covid-19 – decreasing by about 5% right at the beginning of 2020 – they were less severely affected than China’s exports to the US, Japan, and the EU. When China’s manufacturing output began recovering from the crisis in March 2020, its exports to ASEAN increased again and grew by more than 5% in March/April 2020, and by more than 10% every month between July and September 2020.
Figure 7 Growth in Chinese exports by destination
Note: Bilateral exports at current prices. Year-over-year changes from September/October 2019 to September/October 2020.
Source: General Administration of Customs of the People’s Republic of China, chart taken from Industrial Analytics Platform.
This apparent regionalisation trend in China’s trade structure is expected to have implications for how global value chains might be recalibrated, with ripple effects for China’s traditional trading partners.
Balancing risks and opportunities
If highly specialised and interconnected global value chains become more spatially dispersed and regionalised, transport costs – as well as vulnerabilities to global risks and supply chain disruptions – may decrease (Javorcik 2020). But strongly regionalised value chains may prevent firms and economies from efficiently allocating their scarce resources, from increasing their productivity or from realising higher potential gains from specialisation. Moreover, greater reliance on a more limited geographical area may reduce manufacturing firms’ flexibility, limiting their ability to find alternative sources and markets when hit by country- or region-specific shocks (Arriola et al. 2020).
Changes in US imports from China can serve as an illustration of this point. Due to US-China trade tensions, US imports from China had been declining up until the first months of 2020. Yet reducing their reliance on China in favour of more regionalised value chains did not shield US firms from the economic shock the pandemic triggered. In fact, March and April 2020 saw a surge in US imports – in particular of medical supplies – from China, as the country scrambled to meet domestic demand (Qi 2020).
Globalisation at the crossroads
In spite of global value chains showing some degree of resilience in the face of the current global economic shock, the temporary (yet extensive) supply disruptions have induced many countries to reconsider the potential benefits of regionalising or even localising their value chains. These recent developments, together with the increasing power of emerging economies relative to advanced economies in trade issues and negotiations, make it difficult to predict how future global value chains can best be recalibrated, restructured, and reorganised. Even though the rollout of effective vaccines in late-2020 and early-2021 may loosen the grip of Covid-19 on the global economy, ongoing protectionist and geopolitical trends suggest that the world is unlikely to see a return to ‘business as usual’. There is still a long and challenging way ahead.
Editor’s note: This column was originally published on 17 December 2020 by UNIDO’s Industrial Analytics Platform (IAP), a digital knowledge hub that combines expert analysis, data visualisations and storytelling on topics of relevance to industrial development. The views expressed in this column are those of the authors and do not necessarily reflect the views of UNIDO or other organisations that the authors are affiliated with.
References
Arriola, C, P Kowalski and F van Tongeren (2020), “Localising value chains in the post-COVID world would add to the economic losses and make domestic economies more vulnerable”, VoxEU.org, 15 November.
Evenett, S J (2020), “Chinese Whispers: COVID-19, Global Supply Chains in Essential Goods, and Public Policy”, Journal of International Business Policy 3: 408–429.
Evenett, S J and J Fritz (2020), “Collateral damage: Cross-border fallout from pandemic policy overdrive”, VoxEU.org, 17 November.
Javorcik, B (2020), “Global supply chains will not be the same in the post-COVID-19 world”, in Baldwin, R and S Evenett (eds) COVID-19 and Trade Policy: Why Turning Inward Won’t Work, CEPR Press.
Meyer, B, S Mösle and M Windisch (2020), “Lessons from past disruptions to global value chains”, UNIDO Industrial Analytics Platform, May 2020.
Michel, C (2020), “‘Strategic autonomy for Europe - the aim of our generation’ - speech by President Charles Michel to the Bruegel think tank”, 28 September.
Miroudot, S (2020), “Resilience versus robustness in global value chains: Some policy implications”, in Baldwin, R and S J Evenett (eds) COVID-19 and Trade Policy: Why Turning Inward Won't Work, CEPR Press.
Qi, L (2020), “Chinese Exports to the U.S. Get a Lifeline From Coronavirus-Related Demand”, The Wall Street Journal, 9 October.
RWI – Leibniz Institute for Economic Research (2020), RWI/ISL-Container Throughput Index.
Seric, A, H Görg, S Mösle and M Windisch (2020), “Managing COVID-19: How the pandemic disrupts global value chains”, UNIDO Industrial Analytics Platform, April.
Endnotes
1 The Global Trade Alert database includes policy interventions such as tariff measures, export subsidies, trade-related investment measures and contingent trade-liberalising/protective measures that may affect foreign commerce.
2 The United Kingdom is excluded from EU statistics in this column. | https://voxeu.org/article/risk-resilience-and-recalibration-global-value-chains |
Reviews, researches and processes vendor invoices and voucher requisitions by adhering to accounting policies and procedures in order to produce accurate financial statements and ensure timely payment of Health System bills.
Position Accountabilities
1. Process invoices and check requests.
- Opens mail on a daily basis.
- Review all check requests to ensure proper documentation and approval is provided by the department prior to processing payment.
- Obtains approval on all capital equipment purchases prior to processing.
- Keys invoices and check requests.
- Matches invoices and check requests to check.
- Shares responsibility for processing check runs.
2. Resolves invoice discrepancies and outstanding issues on vendor statements.
- Sends complete, well-documented invoice discrepancy notes to the appropriate person in materials management.
- Reviews “on hold” listing on a daily basis to keep these items as current as possible.
- Reviews vendor statements and resolves all outstanding issues on a timely basis.
- Takes an active role in meetings with materials management to create a team-like atmosphere between the two departments and to improve procedures.
3. Maintains and monitors multiple entity records by reviewing vendor invoices for taxing requirements in order to maintain compliance with all IRS regulations including 1099 income reporting requirements and taxing laws.
4. Maintains checks and balances in the system by researching and resolving discrepancies with vendors and hospital departments in accordance with established procedures in order to make timely payments and provide accurate financial reporting.
5. Maintains appropriate records by following department procedure for matching checks to documentation in order to meet audit requirements.
6. Performs balancing procedures by verifying invoice batch information entered into the system to system-generated reports. Balancing ensures integrity of payment to vendor and protection of hospital assets.
Position Qualifications
- Minimum Education: High School graduate or equivalent.
- Minimum Experience: 5 years of clerical experience in Accounts Payable.
- Preferred Experience: Experience with Excel and MS Word is preferred. | https://goshenhealthcareers.hctsportals.com/jobs/704546-finance-coordinator |
About This Source - Channel 4 News
Channel 4 News is the news programme from UK Channel 4 television. Channel 4 is a British public-service free-to-air television network headquartered in Leeds, United Kingdom. The channel was established in 1982 to provide a fourth television service to the United Kingdom in addition to the licence-funded BBC One and BBC Two, and the single commercial broadcasting network ITV.
Recent from Channel 4 News:
Channel 4 News published this video item, entitled “Dominic Cummings leaves No 10 after days of public infighting” – below is their description.
Carrying a single box, Dominic Cummings, the prime minister’s chief advisor, has reportedly left Downing Street, days after his right-hand man Lee Cain was also left with no option but to quit. A series of Conservative MPs have welcomed the news as a chance for Boris Johnson to get a fresh start. It was thought the two men would stay on until the end of the year, when the transition period ends and Brexit really begins. But their exit has apparently come now. We spoke to former Justice Secretary David Gauke and to Caroline Slocock, who served as private secretary to Margaret Thatcher – and began by asking her what she made of Dominic Cummings. (Channel 4 News YouTube Channel)
Got a comment? Leave your thoughts in the comments section, below. Please note comments are moderated before publication.
In This Story: Boris Johnson
Boris Johnson has been Prime Minister of the United Kingdom and Leader of the Conservative Party since 2019.
7 Recent Items: Boris Johnson
In This Story: Brexit
Brexit is the name given to the United Kingdom’s exiting the European Union, which happened on 31 January 2020, following a narrow “Leave” referendum result in a June 2016 vote on EU Membership which took place in the country. News items related to Brexit are posted, below, chronologically, with the most recent items at the top, from a variety of outlets.
3 Recent Items: Brexit
In This Story: Dominic Cummings
Dominic Cummings is a British political strategist who has served as senior adviser to Prime Minister Boris Johnson since July 2019.
9 Recent Items: Dominic Cummings
In This Story: Lee Cain
Lee Cain is a British former journalist who has served as Downing Street Director of Communications under Boris Johnson since July 2019. In November 2020, Cain announced that he would resign from the position at the end of the year.
2 Recent Items: Lee Cain
In This Story: Margaret Thatcher
Margaret Thatcher served as Prime Minister of the United Kingdom from 1979 to 1990. | https://theglobalherald.com/news/dominic-cummings-leaves-no-10-after-days-of-public-infighting/ |
Berlin, (Business News Report)|| Germany is suffering from a huge shortage of specialized labor, which reached record levels in the first quarter of this year.
Germany is facing a shortage of specialized labor in light of the huge economic burdens caused by the Coronavirus pandemic and the Russian-Ukrainian war.
The Efficiency Center for Securing Skilled Employment of the German Economic Institute (IW) stated that last March the number of vacant jobs for which there were no suitably qualified unemployed candidates anywhere in Germany rose to a new record high of 558,000 vacancies.
This means that the gap in skilled workers has increased by another 88,000 vacancies in just three months.
According to the study, the growing shortage of skilled labor affects the entire labor market.
The shortage is particularly evident in the areas of health, social affairs, and education, as well as in the fields of construction, architecture, surveying and building technology.
According to the study, six out of ten job opportunities did not find a suitably qualified unemployed person during March in the fields of health, social affairs and education alone.
The shortage of skilled labor is higher than the average in the sectors of raw material extraction, production, manufacturing, natural sciences, geography and information technology, the study said.
The number of vacancies requiring skilled labor in the fields of aviation and energy technology has also increased significantly recently.
A recent report issued by the German Federal Statistical Office warned of a rise in inflation rates in the country, which reached 7.3% last March, while prices rose in April by nearly 7.5%, compared to their level during the same month last year.
Germany faces three major challenges that impede the government’s efforts to try to escape the economic crisis resulting from the Ukrainian war.
The first is that the production sector has been greatly affected by high energy prices and also reduced supply and demand affected by the conditions of political and security tension in Europe in general.
The second challenge is the steep rise in commodity prices in general and the inability of German companies to maintain production rates or reduce costs, as well as the shortage of some basic materials on the market, which is driving unprecedented increases in their prices.
The third challenge relates to weak demand for German products locally and internationally due to their high prices compared with Chinese or other Asian products. | https://www.bnreport.com/en/germany-specialized-labor/ |
I ended my last post, on colors, by speculating that the best explanation for the rise of color vocabulary from 1820 to 1940 might simply be “a growing insistence on concrete and vivid sensory detail.” Here’s the graph once again to illustrate the shape of the trend.
It occurred to me that one might try to confirm this explanation by seeing what happened to other words that describe fairly basic sensory categories. Would words like “hot” and “cold” change in strongly correlated ways, as the names of primary colors did? And if so, would they increase in frequency across the same period from 1820 to 1940?
The results were interesting.
“Hot” and “cold” track each other closely. There is indeed a low around 1820 and a peak around 1940. “Cold” increases by about 60%, “hot” by more than 100%.
“Warm” and “cool” are also strongly correlated, increasing by more than 50%, with a low around 1820 and a high around 1940 — although “cool” doesn’t decline much from its high, probably because the word acquires an important new meaning related to style.
“Wet” and “dry” correlate strongly, and they both double in frequency. Once again, a low around 1820 and a peak around 1940, at which point the trend reverses.
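A minimal way to check whether such paired adjectives really move together is to convert yearly counts into relative frequencies and correlate the two series. The sketch below assumes you already have per-year token counts for each word (from a fiction corpus or an ngram dataset); the numbers are placeholders, not the data behind the graphs above.

```python
# Correlate yearly relative frequencies of a word pair such as "hot" / "cold".
# The counts below are illustrative placeholders, not real corpus data.
from statistics import correlation  # Python 3.10+

years = [1820, 1860, 1900, 1940]
total_tokens = {1820: 1_000_000, 1860: 1_200_000, 1900: 1_500_000, 1940: 1_400_000}
counts = {
    "hot":  {1820: 150, 1860: 230, 1900: 330, 1940: 430},
    "cold": {1820: 300, 1860: 390, 1900: 470, 1940: 490},
}

def rel_freq(word):
    """Occurrences per million words, year by year."""
    return [counts[word][y] / total_tokens[y] * 1_000_000 for y in years]

hot, cold = rel_freq("hot"), rel_freq("cold")
print("hot  (per million):", [round(x, 1) for x in hot])
print("cold (per million):", [round(x, 1) for x in cold])
print("Pearson r:", round(correlation(hot, cold), 3))
```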
There’s a lot of room for further investigation here. I think I glimpse a loosely similar pattern in words for texture (hard/soft and maybe rough/smooth), but it’s not clear whether the same pattern will hold true for the senses of smell, hearing, or taste.
More crucially, I have absolutely no idea why these curves head up in 1820 and reverse direction in 1940. To answer that question we would need to think harder about the way these kinds of adjectives actually function in specific works of fiction. But it’s beginning to seem likely that the pattern I noticed in color vocabulary is indeed part of a broader trend toward a heightened emphasis on basic sensory adjectives — at least in English fiction. I’m not sure that we literary critics have an adequate name for this yet. “Realism” and “naturalism” can only describe parts of a trend that extends from 1820 to 1940.
More generally, I feel like I’m learning that the words describing different poles or aspects of a fundamental opposition often move up or down as a unit. The whole semantic distinction seems to become more prominent or less so. This doesn’t happen in every case, but it happens too often to be accidental. Somewhere, Claude Lévi-Strauss can feel pretty pleased with himself. | https://tedunderwood.com/2010/12/21/the-rise-of-a-sensory-style/ |
Welcome to the heart and soul of our website – our team of passionate writers and hardworking employees!
Here at Coin Gorilla, we cover everything from casino reviews to crypto casinos to the latest industry news. We’re a group of knowledgeable and experienced individuals who are obsessed with all things gambling and cryptocurrency.
Our team of writers is dedicated to providing our readers with accurate and unbiased information. We take pride in our thorough research process and strive to bring you the highest quality content possible.
But it’s not just our writers who make our website what it is – we also have a talented team of employees working behind the scenes. From marketing and customer service to technical support, every member of our team plays a crucial role in bringing you the best online gambling experience possible.
Don’t hesitate to reach out with any feedback or suggestions – we love hearing from our readers! | https://coingorilla.com/about/ |
When art tells a story, audiences, intrigued, will look and contemplate. When art asks you to form the narrative yourself, artworks take on a life of their own. Where will your imagination take you during this solo exhibition?
Opening September 11 at Eau Claire, Wisconsin’s L.E. Phillips Memorial Public Library is an enthralling solo exhibition with narrative and history as its focus. Featuring recent diptych, triptych, and individual paintings by Kristie Bretzke, “Unspoken Narrative” invites the audience to participate fully in the revelation of each artwork’s story.
Kristie Bretzke, “Untitled, LAX Triptych,” (c) Kristie Bretzke 2016
Via the exhibition, “These images encourage a narrative on the part of the viewer. They are commonplace — an unmade bed, an elevator door opening or closing, an ordinary sink bathed in fluorescent light. Some depict real experiences. Some are inventions. Bretzke’s paintings provide the introduction to many ‘unspoken narratives.’”
Bretzke will be present for an opening reception at the library on Thursday September 15. To learn more, visit the L.E. Phillips Memorial Library.
This article was featured in Fine Art Today, a weekly e-newsletter from Fine Art Connoisseur magazine. To start receiving Fine Art Today for free, click here. | https://fineartconnoisseur.com/2016/09/diptychs-triptychs-andn-narratives/ |
Patent Number:
6,166,160
Title:
Process for making deodorized rubbery polymer
Abstract:
There is a need for odorless rubbery polymers that offer high heat resistance, ultraviolet light resistance and low fogging characteristics. For instance, rubbery polymers of this type are needed by the automotive and construction industries. The deodorized rubbery polymers of this invention are of particular value because they can be blended with polyvinyl chloride to make leathery compositions having good heat and ultraviolet light resistance. The present invention more specifically discloses a process for preparing a deodorized rubbery polymer which comprises the steps of (1) polymerizing in a first stage (a) butyl acrylate, (b) at least one member selected from the group consisting of methyl methacrylate, ethyl methacrylate, methyl acrylate and ethyl acrylate, (c) acrylonitrile and (d) a crosslinking agent under emulsion polymerization conditions to produce a seed polymer containing latex, wherein said polymerization is initiated with a redox initiator system, wherein the redox initiator system is comprised of a free radical generator and a reducing agent; (2) adding (a) styrene, (b) additional acrylonitrile, (c) additional crosslinking agent and (d) additional free radical generator to the seed polymer containing latex under emulsion polymerization conditions which result in the formation of an emulsion containing the rubbery polymer; and (3) recovering the rubbery polymer from the emulsion containing the rubbery polymer.
Inventors:
Ngoc; Hung Dang (Limeil Brevannes, FR)
Assignee:
The Goodyear Tire & Rubber Company
International Classification:
C08F 265/00 (20060101); C08F 265/04 (20060101); C08F 220/12 ()
Expiration Date: | http://www.expiredip.com/Search/OpenData.aspx?pn=6,166,160&t=p |
Three years back, when I started preparing for the CAT examination, I came across the word ‘oxymoron’. Initially I thought it would be a word describing a huge aggressive moron, someone like me. The dictionary had some other opinions and said it means conjoining contradictory terms (as in ‘deafening silence’). I never could really understand why one would need to conjoin contradictory terms. A year and a half into the MBA, here are the 10 new oxymorons (for ppl like Irfan and Reens: I know it is a verb, but I don’t know the plural & noun form for oxymoron) I have learned during the “GRIND”.
1- Business School
2- MBA Education
3- Professional Relationship
4- Friendly Group Discussion
5- Cooperative Boss
6- MBA Babes
7- Interesting Subject
8- Great Faculty
9- Dream Company
10- Intelligent Question
11- This one is specially for my SIBM mates- Reena Sharma
Anyone who has a doubt or an argument that any one of these is not an oxymoron may feel free to object in the comments. If you want to call me out, note that I just remembered one more, which speaks of me – Receptive Person.
Vibration is easily attributed to misalignment, a bent shaft or a deficiency with a propeller. Often, though, none of these assumptions are true, and the true source of the problem is engine misfire.
The combustion process is one of the most important functions of a diesel engine. It comprises several physical and chemical reactions and unfolds in four stages.
An engine misfire occurs when one or more of the cylinders inside the engine doesn’t fire correctly, especially when there is an interruption of the air-to-fuel ratio inside the combustion chamber in the cylinder.
Engine misfire causes the power to drop and reduces vessel performance. It also increases emissions and can cause severe vibration throughout the vessel. Misfires can happen constantly or intermittently.
There are actually three categories of misfires: fuel, ignition and engine mechanical. In diesel engines, a misfire is caused by poor ignition, which can take the form of premature, delayed or incomplete combustion.
It’s important to note that misfire doesn’t always occur at all RPMs. It could actually occur at idle.
There are also other scenarios that can make a misfire more likely to occur.
Misfires are one of the most difficult problems to diagnose and correct because there are so many situations that can cause it. When an engine misfires, performance suffers. When performance of a mechanical component suffers, so does fuel economy and power output. This also increases emissions by reducing the efficiency of combustion.
The cylinder cutout test is often used to diagnose a weak or failed injector or a misfire that could be caused by something other than the injector. The test disables each injector and measures the difference in the delivered fuel volume with the injector disabled compared to enabled.
With the cylinder cutout, load on the remaining cylinders is higher and the delivered fuel volume increases to compensate for the disabled cylinder. If a failed injector is cut out, delivered fuel volume on the remaining cylinders will not change. You will get the same results if a cylinder is not firing due to some other mechanical problem, such as leaking valves or piston rings. Performing the cylinder cutout test with a load on the engine tends to give more accurate results.
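As a rough sketch of that comparison logic (the numbers and the 5% threshold are assumptions for illustration, not values from any manufacturer's test procedure), a cylinder whose cutout produces little or no compensating fuel increase on the remaining cylinders would be flagged as suspect:

```python
# Illustrative cylinder cutout comparison.
# delivered_fuel[i] = average fuel volume (mm^3/stroke) on the remaining cylinders
# while cylinder i is disabled; `baseline` is the volume with all cylinders firing.
# All numbers and the 5% threshold are assumptions for illustration only.

baseline = 100.0
delivered_fuel = {1: 112.0, 2: 111.5, 3: 100.8, 4: 112.3, 5: 111.9, 6: 112.1}

THRESHOLD_PCT = 5.0  # minimum expected compensation when a healthy cylinder is cut out

for cyl, volume in delivered_fuel.items():
    increase_pct = (volume - baseline) / baseline * 100.0
    status = "OK" if increase_pct >= THRESHOLD_PCT else "SUSPECT (weak or not firing)"
    print(f"Cylinder {cyl}: +{increase_pct:.1f}% fuel on remaining cylinders -> {status}")
```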
Another tool for diagnosing misfire is vibration analysis. When a general hull and machinery vibration survey is conducted, the engine misfire vibration is picked up as a half-order harmonic of the engine RPM.
This vibration is usually absorbed through the engine’s isolators, but excessive misfire may be transferred through the isolators or exhaust system and into the hull.
A traditional vibration analyzer will not pick up the exact cylinder or cylinders misfiring nor identify the exact nature of the misfire. To more precisely diagnose the misfire, use a diesel engine analyzer that uses advanced crankshaft referenced vibration and ultrasonic measurements across different vibration ranges on each cylinder to identify and detect change and degradation to internal engine components.
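As a rough illustration of what a "half-order harmonic of the engine RPM" means in a spectrum, the short sketch below converts an assumed crankshaft speed into the frequencies of a few engine orders; the RPM value is made up.

```python
# Rough sketch: engine-order frequencies for an assumed crankshaft speed.
# In a four-stroke engine each cylinder fires once every two crankshaft
# revolutions, so misfire-related vibration appears at the half-order line.

def order_frequency_hz(rpm, order):
    """Frequency in Hz of a given engine order at the given crankshaft RPM."""
    return rpm / 60.0 * order

rpm = 1800  # assumed cruising speed
for order in (0.5, 1.0, 2.0):
    print(f"order {order}: {order_frequency_hz(rpm, order):.1f} Hz")
# order 0.5: 15.0 Hz  <- half-order, the misfire signature
# order 1.0: 30.0 Hz  <- once per revolution (e.g. imbalance)
# order 2.0: 60.0 Hz
```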
Identifying specific issues allows them to be fixed before they become unexpected, unplanned and expensive repairs. If left untreated, engine misfire can cause imbalance or overloads to the crankshaft journals and connecting rods, which can be detrimental to an engine's life. This would mean having to conduct complete engine overhauls much earlier than usual and replacing major components.
Today, diesel engines are technically advanced and built to survive in demanding environments. However, these are large machines that have thousands of parts and multiple systems that operate at high speeds. When any one component degrades or fails, it can lead to a host of negative or catastrophic events.
Most engineers follow the prescribed maintenance and routine inspections, but even with that, a significant amount of maintenance expense goes into unexpected failures. Condition monitoring programs with tools such as vibration analysis can help identify problems before they cause failures.
Rich Merhige is owner of Advanced Mechanical Enterprises and Advanced Maintenance Engineering in Ft. Lauderdale (www.AMEsolutions.com). Comments are welcome at [email protected]. | https://www.the-triton.com/2016/09/engine-misfire-largely-to-blame-for-vibration-onboard/ |
Integrated Project Delivery (IPD) is a newly developed project delivery approach for construction projects, and the level of collaboration of the project management team is crucial to the success of its implementation. Existing research has shown that collaborative satisfaction is one of the key indicators of team collaboration. By reviewing the literature on team collaborative satisfaction and taking into consideration the characteristics of IPD projects, this paper summarizes the factors that influence the collaborative satisfaction of an IPD project management team. Based on these factors, this research develops a fuzzy linguistic method to evaluate the level of team collaborative satisfaction effectively, in which the authors adopt 2-tuple linguistic variables and 2-tuple linguistic hybrid average operators to enhance the objectivity and accuracy of the evaluation. The paper demonstrates the practicality and effectiveness of the method through a case study. | https://plosjournal.deepdyve.com/lp/springer-journals/evaluation-on-collaborative-satisfaction-for-project-management-team-a4atBRAuf5 |
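As a rough illustration of the 2-tuple linguistic representation mentioned in the abstract above, the sketch below encodes ratings on an assumed five-term scale and aggregates them with a plain weighted average rather than the paper's hybrid average operator; the term set, weights and ratings are all invented.

```python
# Minimal sketch of the 2-tuple linguistic representation (assumed 5-term scale;
# a plain weighted average stands in for the hybrid average operator).

TERMS = ["very low", "low", "medium", "high", "very high"]  # s0 .. s4
G = len(TERMS) - 1

def to_two_tuple(beta):
    """Delta operator: map beta in [0, G] to (term index, symbolic translation)."""
    i = int(round(beta))
    return i, beta - i           # alpha lies in [-0.5, 0.5)

def from_two_tuple(i, alpha):
    """Inverse Delta operator."""
    return i + alpha

def weighted_average(assessments, weights):
    """Aggregate a list of 2-tuples (index, alpha) with the given weights."""
    total = sum(weights)
    beta = sum(w * from_two_tuple(i, a) for (i, a), w in zip(assessments, weights)) / total
    return to_two_tuple(beta)

# Example: three experts rate one collaborative-satisfaction factor.
ratings = [(3, 0.0), (4, -0.2), (2, 0.1)]   # high, nearly very high, just above medium
weights = [0.5, 0.3, 0.2]
idx, alpha = weighted_average(ratings, weights)
print(TERMS[idx], round(alpha, 3))           # -> high 0.06
```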
Over the past 9 days, V.F. Corporation (NYSE: VFC) stock was observed to have a Historic Volatility of 7.75%. Expanded to the past 100 days, that figure rises to 36.87%. In the last 5 days, this stock’s average daily volume is shown as 1,661,280 shares per day, which is lower than the average of 2,511,604 shares per day measured over the last 100 days. Moving on to the price, the movement in the past 5 days was +0.78, while this stock’s price moved +19.48% in the past 100 days.
A widely used method of evaluating a stock’s price at any given moment is looking at it in relation to its 52-week price range. This stock’s recent dip arrived on a trading day that exhibited lower volume than its usual average of 2.48M (measured over the past 3 months). On March 1st, 2019, volume came to about 1,441,886 shares. During the trading period, the first transaction was recorded at $87.28 per share; the stock had slipped by 0.09% by the closing bell, when the final transaction of the day was recorded at $88.28. At the moment, this stock’s 52-week high is $97.00 and its 52-week low is $67.18.
This publicly traded organization generated a trailing 12-month revenue of $13.68B. Bearing that in mind, the company is experiencing top-line progress, as its year-over-year quarterly revenue has grown by 17.20%. The company’s current market capitalization is $34.67B.
Recently, V.F. Corporation (NYSE: VFC) has caught the attention – and in-depth analysis – of numerous Wall Street analysts. In a research note published on January 22nd, 2019, Telsey Advisory Group reiterated its rating on VFC shares with a price target of $96. Similarly, in a research note sent out on January 16th, 2019, analysts at Pivotal Research Group reiterated a Hold rating on the stock and set a price target of $78. Additionally, in a research note made public on January 14th, 2019, analysts at Telsey Advisory Group reiterated an Outperform rating on VFC common shares, combined with a 12-month price target of $88.
Is Flowserve Corporation a good investment? Let’s take a look at what leading Wall Street experts have to say about this particular stock. For shares of Flowserve Corporation (NYSE: FLS), there are currently ratings available from 12 different stock market analysts who have all given their professional opinions. On average, these analysts currently have a Hold recommendation with a mean rating of 3.33. This is in comparison to the average recommendation from a month ago, which was a Hold with an average rating of 3.31. Similarly, the average rating observed 2 months ago was a Hold with the mean numerical rating of 3.31, and the average rating observed 3 months ago was a Hold with a mean numerical rating of 3.29.
But what do Wall Street experts have to say about how this company is performing behind the scenes? Looking at its overall profits, Flowserve Corporation reported earnings of 0.58 for the quarter ending Dec-18. This compares to the average analyst prediction of 0.58, representing a difference of 0, and therefore a surprise factor of 0.02. For the financial results of the preceding quarter, the company posted earnings of 0.49, in comparison to the average analyst forecast of 0.43 – representing a difference of 0.06 and a surprise factor of 15.02.
Continuing the discussion of current price performance, Flowserve Corporation has a total market value of $6.03B at the time of writing – representing 134.11M outstanding shares. Turning to other widely considered trading data, this company’s half-yearly performance is observed to be negative at -14.20%. The Average True Range for this company’s stock is currently 1.17, and its current Beta is sitting at 1.64.
Now let’s take a look at what’s on the horizon. For the financial results of the current quarter, 9 different Wall Street analysts have so far provided investors with their professional projections for Flowserve Corporation. For net profit, these analysts are collectively forecasting an average estimate of $0.33 per share, versus the $0.27 per share reported in the year-ago quarter. The lowest earnings per share prediction was $0.48 per share, with the highest forecast pointing toward $0.54 per share. Compared to the year-ago period, experts are projecting a growth rate of +22.22%. | https://finbulletin.com/2019/03/04/the-vibes-behind-this-transformation-v-f-corporation-vfc-flowserve-corporation-fls/ |
Artificial Intelligence (AI) has rapidly matured over the years and can be deployed in just about any activity of the health sector, from clinical decision-making to biomedical research and medicine development. It presents great potential to enhance the efficiency and effectiveness of health care systems, yet its successful application requires a deep understanding of its strengths and…
For the second half of 2021 Slovenia will hold the Presidency of the Council of the EU. During the 6-month mandate, Digital Minister Boštjan Koritnik aims to reach an agreement with the other EU countries on Artificial Intelligence (AI), with the hope that negotiations can be continued into 2022.
Today – on World Telecommunication and Information Society Day – the Holomedicine Association becomes open to new members, signifying a huge step towards the advancement of digital health.
Within the next decade the Artificial Intelligence (AI) healthcare market is predicted to be worth more than $61 billion. The COVID-19 pandemic has seen AI used to detect the virus in chest x-rays. In the context of this enormous growth and social salience, the European Commission (EC) has published its proposal for a ‘Regulation Laying Down Harmonised Rules On Artificial Intelligence (Artificial…
The Scottish Government has laid out their plans to become a world leader in the development and use of trustworthy, ethical, and inclusive AI in their newly released AI Strategy, complementing the existing UK Government AI Sector Deal and ahead of the forthcoming UK AI Strategy.
Serbia has leveraged AI to lead the world in its immunization efforts. The small Balkan country is ranked 7th globally and 2nd in Europe in the number of people vaccinated against Covid-19.
As COVID-19 continues to focus the world's minds on global healthcare systems, governments and advisors clamour to attract medical AI research and development.
As the United Kingdom looks set to wind up the Brexit Transition Period in December, the UK has cast back to old allegiances to set new collaborative horizons, with artificial intelligence research and development a central pillar of new economic collaboration.
As part of its long-term response to COVID-19, and its commitment to detecting three-quarters of cancers at an early stage by 2028, the UK Government has announced a further £50 million in funding to support diagnostic centres of excellence in developing AI to diagnose disease.
As innovation in artificial intelligence-based technologies freely progresses in both healthcare and broader society, calls have mounted for regulatory oversight to control, support, and direct AI-based innovations and their application in society.
AI-powered supercomputing has come to the fore in the European fight against COVID-19. The EU-funded, public and private consortium Exscalate4CoV (E4C) has escalated the fight against COVID-19 through supercomputer-driven drug discovery.
As COVID-19 continues to disrupt healthcare delivery across the world, telehealth – the provision of healthcare remotely through telecommunication technology – has taken centre stage to support remote and virtual healthcare.
AI is proving to have newfound use in medical imaging of symptoms in COVID-19 patients across the world. The use of AI in diagnostics has proven bountiful over the last few years, and the COVID-19 pandemic has provided further avenues for the application of AI in healthcare.
London’s Middlesex University has announced its new investment into AI technology as a new teaching tool for nursing students.
The European Commission (EC) has published their new Data and AI strategy, which sets out plans to expand Europe’s technological and AI capacities.
In the UK, a new £140 million Artificial Intelligence award, which aims to bring life-saving innovation to the NHS, has been launched by the Health Secretary Matt Hancock at the ‘Parliament and HealthTech Conference’. | https://holomedicine-association.org/news-publications |
The specific response of individual tissues to a single steroid receptor cannot be explained simply by DNA sequence alone. For example, in the same animal a given steroid receptor is capable of interacting with the nucleus of two different cell types, resulting in unique gene expression despite the presence of a similar genome. Historically, these differences in response to a single type of steroid receptor within target tissues in the same animal have been suggested to occur through different alterations in chromatin structure. However, the molecular mechanisms of this tissue specificity remain unexplained. It is possible that in different cell types the specific three-dimensional organization of the genome and its interplay with cell skeletal elements may vary to accomplish the hormonal regulation of specific gene expression. It is the purpose of this review to address the potential role of cell structure as a central component of hormone action. | https://jhu.pure.elsevier.com/en/publications/the-tissue-matrix-cell-dynamics-and-hormone-action-4 |
"The Global Center of Excellence for Unmanned Systems and Technology"
The AUVSI San Diego Lindbergh Chapter is dedicated to the advancement of Unmanned Systems and Technology in the greater Southwest region. The Chapter enthusiastically engages in initiatives with the public and private sectors for the advancement and advocacy of Unmanned Systems, and is a value added resource for the integration and interoperability of space, air, ground, sea and undersea systems.
Please join us for the AUVSI San Diego Chapter Luncheon for updates on the San Diego FAA UAS Integration Pilot Program (IPP), April 11, 2019, 11am - 1pm, at the Admiral Baker Golf Course Clubhouse, 2400 Admiral Baker Road, San Diego.
AUVSI News
Researchers at MIT say that they have developed a technique that allows robots to quickly identify objects hidden in a three-dimensional cloud of data.
According to the researchers, sensors that collect and translate a visual scene into a matrix of dots help robots “see” their environment. The researchers note, though, that conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.
With the new technique developed by MIT researchers, it takes a robot just seconds from when it receives the visual data to accurately pick out an object that is otherwise obscured within a dense cloud of dots, such as a small animal. This technique can help improve a variety of situations in which machine perception must be both speedy and accurate, the researchers say, including driverless cars and robotic assistants in the factory and the home.
Carnegie Mellon University (CMU) and Argo AI have announced a five-year, $15 million sponsored research partnership that will result in Argo AI funding research into advanced perception and next-generation decision-making algorithms for autonomous vehicles.
CMU and Argo AI will establish the Carnegie Mellon University Argo AI Center for autonomous vehicle research. Through advanced research projects, the center will seek to help overcome the hurdles associated with enabling self-driving vehicles to operate in various real-world conditions.
iRobot Corp. has acquired Root Robotics, the developer of an educational robot called the Root coding robot, which teaches children as young as four years old coding and 21st century problem-solving skills.
With the addition of the Root coding robot to its product lineup, iRobot says that the acquisition of Root Robotics supports its plans to diversify its educational robot product offerings, as it continues to showcase its commitment to make robotic technology more accessible to educators, students and parents.
Why join AUVSI
AUVSI is the world’s largest organization devoted exclusively to advancing the unmanned systems and robotics industries. We provide our members with a unified voice in advocacy for policies and regulations that encourage growth and innovation; we provide education to the public and media on the safe and beneficial uses of unmanned systems; and we enable market growth by providing our members with custom resources to realize their full potential within the industry.
AUVSI Events
AUVSI Unmanned Systems—Defense. Protection. Security. | https://www.auvsi.org/san-diego-lindbergh-chapter |
When it comes to congregational worship, I believe these three things are true:
- Content is the most important characteristic of a sacred song.
- Structure makes content accessible.
- Most members of a congregation are not trained singers, so unfamiliar music hinders them in their worship.
Taken together, these three things indicate that the most useful songs for the congregation contain good content in a highly structured form with music that is as easy to learn as possible. I don’t think it’s any accident that this description matches many of the best traditional hymns.
After all, traditional hymn form didn’t fall from the sky, nor was it defined by the Pope. Instead, it evolved in response to the needs of worshipers. Not surprisingly, hymns bear considerable formal similarity to secular folk song. In both cases, the circumstances demand a lyrical and musical expression that ordinary people can easily sing together.
Consider, for instance, the hymn standard “O Thou Fount of Every Blessing”. Like all hymn texts that have survived from the eighteenth century, the lyrics are in a regular meter. This is critically important. Regular meter allows congregations to sing multiple verses with different content to the same four-phrase tune. Without perfect meter, one of three things would happen to “O Thou Fount”.
- The tune would have to be through-composed, which (given the same three verses of content) would require the congregation to learn three times as much music to worship with equal content. Frankly, why make non-singers jump through that hoop?
- The lyrics would have to contain lots of repetition to allow for musical repetition. As long as you’re singing the same words, you can use the same music (which is why many contemporary praise songs are repetitive). However, past a certain point, repetition limits content, thereby violating Rule 1. If you’re not singing a sacred song for the content, why are you singing it?
- The tune would have to be an imperfect match to different verses with irregular meter. Broken meter (when the meter varies from verse to verse) is kryptonite for congregational singing. It causes problems even in otherwise excellent hymns such as “Follow Me”. When severe, it can make hymns with strong content, such as “The Ninety and Nine”, practically unsingable. Congregations much prefer to worship with hymns with regular meter because regular meter allows them to focus on content rather than rhythm, worship rather than singing.
We see, then, that the simple decision to use regular meter makes “O Thou Fount” economical in its musical demands on the congregation. The tune, NETTLETON, is similarly economical. It’s written in rounded-bar form. In other words, the first, second, and fourth musical phrases are identical, with the third phrase offering a musical variation. As a result, in order to sing a full eight-line hymn (with multiple verses), the congregation only has to learn two musical phrases (one of which repeats itself three times). Again, all other things being equal, a rounded-bar hymn tune will be twice as easy to learn as a tune of similar length with four dissimilar phrases.
As a result, “O Thou Fount” reduces musical demand both with repetition across verses and with repetition within verses. Imagine instead a through-composed, one-verse, 24-line version of “O Thou Fount”. To sing it, the congregation has to learn twelve phrases of music—six times as many as in the version we actually sing. Song-introducers who demand that the congregation learn six times as much music for the same content clearly care more about music than content and have missed the point of worship.
The formula has worked for centuries, and it still works today. Look at the work of Stuart Townend and Keith and Kristyn Getty. What do you see over and over again? “In Christ Alone”? Regular meter, multiple verses, rounded-bar hymn tune. “O Church, Arise”? Regular meter, multiple verses, rounded-bar hymn tune. “How Deep the Father’s Love for Us”? Regular meter, multiple verses, rounded-bar hymn tune. The same thing is true of “Jesus, Draw Me Ever Nearer” (Getty tune, lyrics by Margaret Becker). Regular meter, multiple verses, rounded-bar hymn tune. Congregations love these hymns because they are extremely easy even for non-singers to pick up, and the Townend-Getty circle has been smart enough to spot the pattern and exploit it.
Of course, the right form will get you only so far. As a rule, Townend-Getty hymns have strong, appealing content to go with congregation-friendly form. It’s possible to write vast numbers of hymns with regular meter, multiple verses, and a rounded-bar hymn tune, yet never produce anything the congregation wants to sing. However, the farther one departs from the form, the more likely one is to write a sacred song that the congregation can’t sing (at least easily, sometimes at all), regardless of how much they might want to.
If you’re a lyricist or a composer, write this way, or in another way that is similarly undemanding and economical. If you’re a song-selector who is concerned with congregational singing, look for works written in this way or in a way that is similarly economical. If you don’t, you are unwittingly building barriers between your chosen hymn and its enthusiastic adoption. | https://withgodsword.com/2018/06/01/hymn-form-and-the-congregation/ |
Have you ever complained or felt bad because of poor concentration on a task? Is it because of distractions, or are there other factors involved? The University of Kent has archived a good article on the topic of concentration. It talks about what concentration (or the lack of it) is, the factors that may cause it, the link between your physical state and concentration, how to train it, and how to maintain your concentration. I like the section on maintaining concentration:
Be Active
* Vary your activities to keep your mind from wandering: make notes, highlight, underline, ask yourself questions, prepare questions for discussion, associate new material with old material, visualise a concept, etc.
* Change the subject/topic you are studying every two hours or so to maintain your interest.
Take Regular Breaks
It is important to take a break before you feel tired and lose your concentration completely. Regular breaks at least once an hour help to sustain your concentration. If the work is not going too well and you have difficulty concentrating, you may need a longer break and to come back to it later. Alternatively, you can try working for a shorter period of time, such as 20 minutes, and take more frequent short breaks.
Oxygenate
* When you sit for long periods, gravity draws the blood to the lower part of your body. When you take a break, take a few deep breaths and get more oxygen to your brain: try walking around and doing some light stretching for a few minutes. It will help to release tension in your body, and help your circulation. (Try ‘Focus on Your Breath’ exercise.)
* If you have been working on a computer, relax your eyes by focusing at a distance, and relieve your eyes from the glare of the computer by covering your eyes with the palm of your hands for a moment. | https://www.lifehack.org/articles/lifehack/online-teaching-on-concentration.html |
---
abstract: 'We measure the mass difference, $\Delta m_0$, between the $D^{*}(2010)^+$ and the $D^0$ and the natural line width, $\Gamma$, of the transition $D^{*}(2010)^+\to D^0 \pi^+$. The data were recorded with the detector at center-of-mass energies at and near the $\Upsilon(4S)$ resonance, and correspond to an integrated luminosity of approximately $477 \invfb$. The $D^0$ is reconstructed in the decay modes $D^0 \to K^-\pi^+$ and $D^0 \to K^-\pi^+\pi^-\pi^+$. For the decay mode $D^0\to K^-\pi^+$ we obtain $\Gamma = \left(83.4 \pm 1.7 \pm 1.5\right) \kev$ and $\Delta m_0 = \left(145\,425.6 \pm 0.6 \pm 1.8\right) \kev$, where the quoted errors are statistical and systematic, respectively. For the $D^0\to K^-\pi^+\pi^-\pi^+$ mode we obtain $\Gamma = \left(83.2 \pm 1.5 \pm 2.6\right) \kev$ and $\Delta m_0 = \left(145\,426.6 \pm 0.5 \pm 2.0\right) \kev$. The combined measurements yield $\Gamma = \left(83.3 \pm 1.2 \pm 1.4\right) \kev$ and $\Delta m_0 = \left(145\,425.9 \pm 0.4 \pm 1.7\right) \kev$; the width is a factor of approximately 12 times more precise than the previous value, while the mass difference is a factor of approximately 6 times more precise.'
bibliography:
- 'refs\_bad2523.bib'
---
BABAR-PUB-12/032\
SLAC-PUB-15374\
arXiv:1304.5009\
Introduction {#sec:Introduction}
============
The $D^{*}(2010)^{+}$ ($D^{*+}$) line width provides a window into a nonperturbative regime of strong physics where the charm quark is the heaviest meson constituent [@Becirevic201394; @actapolb.30.3849; @Guetta2001134]. The line width provides an experimental check of models of the $D$ meson spectrum, and is related to the strong coupling of the $D^{*+}$ to the $D\pi$ system, $g_{D^* D \pi}$. In the heavy-quark limit, which is not necessarily a good approximation for the charm quark [@PhysRevC.83.025205], this coupling can be related to the universal coupling of heavy mesons to a pion, $\hat{g}$. There is no direct experimental window on the corresponding coupling in the $B$ system, $g_{B^*B\pi}$, since there is no phase space for the decay $B^* \to B\pi$. However, the $D$ and $B$ systems can be related through $\hat{g}$, which allows the calculation of $g_{B^*B\pi}$. The $B^*B\pi$ coupling is needed for a model-independent extraction of $\left|V_{ub}\right|$ [@PhysRevD.49.2331; @PhysRevLett.95.071802] and is presently one of the largest contributions to the theoretical uncertainty on $\left|V_{ub}\right|$ [@2009PhRvD79e4507B].
We study the $D^{*+}\to D^0 \pi^+$ transition using the $D^0\to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$ decay modes to measure the values of the $D^{*+}$ line width, $\Gamma$, and the difference between the $D^{*+}$ and $D^0$ masses, $\Delta m_0$. The use of charge conjugate reactions is implied throughout this paper. The only prior measurement of the width is $\Gamma = \left(96 \pm 4 \pm 22\right) \kev$ by the CLEO collaboration where the uncertainties are statistical and systematic, respectively [@PhysRevD.65.032003]. That measurement is based on a data sample corresponding to an integrated luminosity of $9 \invfb$ and reconstructed $D^0\to K^- \pi^+$ decays. In the present analysis, we have a data sample that is approximately 50 times larger. This allows us to apply tight selection criteria to reduce background, and to investigate sources of systematic uncertainty with high precision.
The signal is described by a relativistic Breit-Wigner (RBW) function defined by
$$\frac{d \Gamma(m)}{d m} = \frac{m \Gamma_{D^*D \pi}\left(m\right) \, m_0 \Gamma} {\left(m_0^2 - m^2\right)^2 + \left(m_0 \Gamma _{\text{Total}}(m) \right)^2},
\label{eq:rbw}$$
where $\Gamma_{D^*D \pi}$ is the partial width to $D^0\pi^+$, $m$ is the $D^0 \pi^+$ invariant mass, $m_0$ is the invariant mass at the pole, and $\Gamma_{\text{Total}}(m)$ is the total $D^{*+}$ decay width. The partial width is defined by
$$\Gamma_{D^*D \pi}(m) = \Gamma
\left(\frac{\mathcal{F}^{\ell}_{D\pi}(p_0)}{\mathcal{F}^{\ell}_{D\pi}(p)}\right)^2\left(\frac{p}{p_0}\right)^{2\ell+1}\left(\frac{m_0}{m}\right),
\label{eq:partialwidth}$$
where $\mathcal{F}^{\ell = 1}_{D\pi}\left(p\right) = \sqrt{1+r^2 p^2}$ is the Blatt-Weisskopf form factor for a vector particle with radius parameter $r$ and daughter momentum $p$, and the subscript zero denotes a quantity measured at the pole [@blatt; @PhysRevD.5.624]. The value of the radius is unknown, but for the charm sector it is expected to be $\sim 1\gev^{-1}$ [@Albrecht1993435]. We use the value $r = 1.6 \gev^{-1}$ from Ref. [@Schwartz:2002hh] and vary this value as part of our investigation of systematic uncertainties.
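For orientation only, here is a small numerical sketch of the line shape of Eqs. (1) and (2). It is not the analysis code; the masses are approximate PDG values, the GeV unit convention is my own, and the pole parameters below are simply the measured central values quoted later.

```python
# Sketch of the D* -> D0 pi+ line shape of Eqs. (1)-(2); approximate masses in GeV.
import math

M_D0, M_PI = 1.86484, 0.13957   # approximate PDG masses
R = 1.6                          # Blatt-Weisskopf radius in GeV^-1, as in the text

def breakup_momentum(m, m1=M_D0, m2=M_PI):
    """Daughter momentum in the D0 pi+ rest frame at invariant mass m."""
    term = (m*m - (m1 + m2)**2) * (m*m - (m1 - m2)**2)
    return math.sqrt(max(term, 0.0)) / (2.0 * m)

def ff_sq(p, r=R):
    """Squared L=1 Blatt-Weisskopf form factor, F(p)^2 = 1 + r^2 p^2."""
    return 1.0 + (r * p) ** 2

def partial_width(m, m0, gamma0):
    """Gamma_{D* D pi}(m) of Eq. (2), with ell = 1."""
    p, p0 = breakup_momentum(m), breakup_momentum(m0)
    return gamma0 * (ff_sq(p0) / ff_sq(p)) * (p / p0) ** 3 * (m0 / m)

def rbw(m, m0, gamma0):
    """dGamma/dm of Eq. (1), approximating Gamma_Total by Gamma_{D* D pi}."""
    g = partial_width(m, m0, gamma0)
    return m * g * m0 * gamma0 / ((m0 * m0 - m * m) ** 2 + (m0 * g) ** 2)

# Evaluate near the pole: m0 = m(D0) + Delta m0, Gamma ~ 83.3 keV = 83.3e-6 GeV.
m0, gamma0 = M_D0 + 0.1454259, 83.3e-6
for dm in (0.14500, 0.14542, 0.14600):
    print(f"{dm:.5f}  {rbw(M_D0 + dm, m0, gamma0):.3e}")
```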
The full width at half maximum (FWHM) of the RBW line shape ($\approx 100 \kev$) is much less than the FWHM of the almost Gaussian resolution function which describes more than 99% of the signal ($\approx 300 \kev$). Therefore, near the peak, the observed FWHM is dominated by the resolution function shape. However, the shapes of the resolution function and the RBW differ far away from the pole position. Starting $(1.5 - 2.0) \mev$ from the pole position, and continuing to $(5 - 10) \mev$ away (depending on the $ D^0 $ decay channel), the RBW tails are much larger. The signal rates in this region are strongly dominated by the intrinsic line width, not the resolution functions, and the integrated signals are larger than the integrated backgrounds. We use the very different resolution and RBW shapes, combined with the good signal-to-background rate far from the peak, to measure $ \Gamma $ precisely.
The detailed presentation is organized as follows. Section \[sec:detector\] discusses the detector and the data used in this analysis, and Section \[sec:evtsel\] describes the event selection. Section \[sec:matmodel\] discusses a correction to the detector material model and magnetic field map. Section \[sec:fitstrategy\] details the fit strategy, Section \[sec:systematics\] discusses and quantifies the sources of systematic uncertainty, and Section \[sec:combmodes\] describes how the results for the two $D^0$ decay modes are combined to obtain the final results. Finally, the results are summarized in Section \[sec:conclusion\].
The detector and data {#sec:detector}
======================
This analysis is based on a data sample corresponding to an integrated luminosity of approximately $477\invfb$ recorded at and $40 \mev$ below the $\Upsilon\left(4S\right)$ resonance by the detector at the PEP-II asymmetric energy collider [@Lees2013203]. The detector is described in detail elsewhere [@ref:babar; @ref:nim_update], so we summarize only the relevant components below. Charged particles are measured with a combination of a 40-layer cylindrical drift chamber (DCH) and a 5-layer double-sided silicon vertex tracker (SVT), both operating within the $1.5$-T magnetic field of a superconducting solenoid. Information from a ring-imaging Cherenkov detector is combined with specific ionization $(dE/dx)$ measurements from the SVT and DCH to identify charged kaon and pion candidates. Electrons are identified, and photons measured, with a CsI(Tl) electromagnetic calorimeter. The return yoke of the superconducting coil is instrumented with tracking chambers for the identification of muons.
Event selection {#sec:evtsel}
===============
We reconstruct continuum-produced $D^{*+}\rightarrow D^0 \pi_s^+$ decays in the two Cabibbo-favored channels $D^0 \to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$. The pion from the $D^{*+}$ decay is called the “slow pion” (denoted $\pi_s^+$) because of the limited phase space available. The mass difference of the reconstructed $D^{*+}$ and $D^{0}$ is denoted as $\Delta m$ (e.g. $m\left(K^-\pi^+\pi_s^+\right) - m\left(K^-\pi^+\right)$ for the $D^0 \to K^-\pi^+$ channel). The resolution in $\Delta m$ is dominated by the resolution of the $\pi_s^+$ momentum, especially the uncertainty of its direction due to Coulomb multiple scattering. The selection criteria for the individual $D^0$ channels are detailed below; however, both modes have the same $D^{*+}$ requirements. The selection criteria were chosen to enhance the signal-to-background ratio ($S/B$) to increase the sensitivity to the long RBW tails in the $\Delta m$ distribution; we have not optimized the criteria for statistical significance. Because this analysis depends on the RBW tails, we pay particular attention to how the selection criteria affect the tail regions.
The entire decay chain is fit using a kinematic fitter with geometric constraints at each vertex and the additional constraint that the $D^{*+}$ emerges from the luminous region, also referred to as the beam spot. The confidence level of the $\chi^2$ for this fit must be greater than 0.1%. In addition, the confidence level for the $\chi^2$ from fitting the $D^0$ daughter tracks to a common vertex must be at least 0.5%. These confidence level selections reduce the set of final candidates by approximately 2.1%. The beam spot constraint improves the $\Delta m$ resolution by a factor of 2.5, primarily because it constrains the direction of the $\pi^+_s$. If there is more than one $D^{*+}$ candidate in the event, we choose the one with the highest full decay chain confidence level. The reconstructed $D^0$ mass must be within the range $1.86 \gev$ to $1.87 \gev$. The mass difference between the $D^{*+}$ and $D^0$ is required to satisfy $\Delta m < 0.17 \gev$. A large amount of the combinatorial background is removed by requiring $p^*(D^{*+}) > 3.6 \gev$, where $p^*$ is the momentum measured in the center-of-mass frame for the event.
![(color online) Disjoint sets of $D^0 \to K^-\pi^+$ candidates illustrating the candidates that fail the tracking requirements have worse $\Delta m$ resolution. Each histogram is normalized to its peak. The events that populate the narrowest peak are the nominal $D^{*+}$ candidates that pass all selection criteria. The events that populate the intermediate and widest peaks pass all selection criteria except either the slow pion candidates or $D^0$ daughters fail the SVT requirements or fail the DCH requirements, respectively.[]{data-label="fig:tracking_cuts"}](prd_tracking.eps)
To select well-measured slow pions we require that the $\pi_s^+$ tracks have at least $12$ measurements in the DCH and have at least 6 SVT measurements with at least 2 in the first three layers. For both $D^0\rightarrow K^- \pi^+$ and $D^0 \rightarrow K^- \pi^+\pi^-\pi^+$, we apply particle identification (PID) requirements to the $K$ and $\pi$ candidate tracks. To select candidates with better tracking resolution, and consequently improve the resolution of the reconstructed masses, we require that $D^0$ daughter tracks have at least 21 measurements in the DCH and satisfy the same SVT measurement requirements for the slow pion track. Figure \[fig:tracking\_cuts\] illustrates the signal region distributions for three disjoint sets of $D^0 \to K^-\pi^+$ candidates: those passing all tracking requirements (narrowest peak), those otherwise passing all tracking requirements but failing the SVT hit requirements (intermediate peak), and those otherwise passing all tracking requirements but failing the requirement that both $D^0$ daughter tracks have at least 21 hits in the DCH and the $\pi_s^+$ track has at least 12 hits in the DCH (widest peak). The nominal sample (narrowest peak) has better resolution and S/B than candidates that fail the strict tracking requirements. We reduce backgrounds from other species of tracks in our slow pion sample by requiring that the $dE/dx$ values reported by the SVT and DCH be consistent with the pion hypothesis. Figure \[fig:pis\_baddedx\] shows the $\Delta m$ distribution for candidates otherwise passing cuts, but in which the slow pion candidate fails either the SVT or DCH $dE/dx$ requirement. The $dE/dx$ selections remove protons from slow pion interactions in the beam pipe and detector material as well as electrons from the $D^{*0}$ decay chain discussed below. As shown in Fig. \[fig:pis\_baddedx\], while this requirement removes much more signal than background, the S/B ratio of the removed events is distinctly worse than that in the final sample.
![Events with $D^{*+}$ candidates from $D^0\to K^- \pi^+$ that pass all selection criteria, but the slow pion candidate fails the $dE/dx$ requirement.[]{data-label="fig:pis_baddedx"}](prd_pis_baddedx.eps)
![Events with $D^{*+}$ candidates from $D^0\to K^- \pi^+$ that pass all selection criteria, but the slow pion candidate is identified by the algorithms as either a photon conversion in the detector material or a $\pi^0$ Dalitz decay.[]{data-label="fig:pis_badconv"}](prd_pis_badconv.eps)
The Dalitz decay $\pi^0\rightarrow \gamma e^+ e^-$ produces background where we misidentify a positron as a $\pi_s^+$. We eliminate such candidates by reconstructing a candidate $e^+e^-$ pair and combining it with a $\gamma$. If the $e^+e^-$ vertex is within the SVT volume and the invariant mass is in the range $115 \mev< m\left(\gamma e^+ e^-\right) < 155 \mev$, then the event is rejected. Real photon conversions in the detector material are another source of background where electrons can be misidentified as slow pions. To identify such conversions we first create a candidate $e^+e^-$ pair using the slow pion candidate and an identified electron track from the same event and perform a least-squares fit with a geometric constraint. The event is rejected if the invariant mass of the putative pair is less than $60 \mev$ and the constrained vertex position is within the SVT tracking volume. Figure \[fig:pis\_badconv\] shows the $\Delta m$ distribution for candidates otherwise passing cuts, but in which the slow pion candidate is identified as an electron using either of these $\pi^0$ conversion algorithms. As shown in Fig. \[fig:pis\_badconv\], only a small number of $D^{*+}$ candidates pass all other selection criteria but have a slow pion rejected by these algorithms. Again, the S/B ratio of this sample is distinctly worse than that of the final sample.
We identified additional criteria to remove candidates in kinematic regions where the Monte Carlo (MC) simulation poorly models the data. The MC is a cocktail of $q\bar{q}$ and $\ell^+ \ell^-$ sources where $q = u, d, s, c, b$ and $\ell = e, \mu, \tau$. The simulation does not accurately replicate the momentum distributions observed in data at very high and low $D^{*+}$ momentum values, so we require that $3.6 \gev < p^*(D^{*+}) < 4.3 \gev$ and that the laboratory momentum of the slow pion be at least $150 \mev$. In an independent sample of $K_{S}^{0}\to \pi^- \pi^+$ decays, the reconstructed $K_S^0$ mass is observed to vary as a function of the polar angle $\theta$ of the $K_S^{0}$ momentum measured in the laboratory frame with respect to the electron beam axis. We define the acceptance angle to reject events where any of the daughter tracks of the $D^{*+}$ has $\cos \theta \ge 0.89$ to exclude the very-forward region of the detector. This criterion reduces the final data samples by approximately 10%.
The background level in the $D^0 \rightarrow K^-\pi^+\pi^-\pi^+$ mode is much higher than that in $D^0 \rightarrow K^-\pi^+$, and so we require $D^0$ daughter charged tracks to satisfy stricter PID requirements. The higher background arises because the $D^0$ mass is on the tail of the two-body $K^-\pi^+$ invariant mass distribution expected in a longitudinal phase space model; however, it is near the peak of the 4-body $K^-\pi^+\pi^-\pi^+$ invariant mass distribution [@feynman1972]. In addition, there is more random combinatorial background in the 4-track $D^0 \to K^- \pi^+\pi^-\pi^+$ mode than in the 2-track $D^0 \to K^- \pi^+$ mode.
The initial fit to the $D^0 \to K^-\pi^+\pi^-\pi^+$ validation signal MC sample had a bias in the measured value of the $D^{*+}$ width. An extensive comparison revealed that the bias originated from regions of phase space that the MC generator populated more frequently than the data. Evidently, there are amplitudes that suppress these structures in the data, that are neither known nor included in the MC generator. We avoid the regions where the MC disagrees with the data by rejecting a candidate if either $\left(m^2\left(\pi^+ \pi^+\right) < -1.17 \, m^2\left(\pi^- \pi^+\right) + 0.46 \gev^2\right)$ or $\left( m^2\left(\pi^-\pi^+\right)< 0.35 \gev^2\right.$ and $\left.m^2\left(K^-\pi^+\right) < 0.6 \gev^2 \right)$. This veto is applied for each $\pi^+$ daughter of the $D^0$ candidate. Including or excluding these events has no noticeable effect on the central values of the parameters from the data. These vetoes reduce the final candidates by approximately 20%.
There is an additional source of background that must be taken into account for the $K^-\pi^+\pi^-\pi^+$ channel that is negligible for the $K^-\pi^+$ channel. In a small fraction of events ($<1$%) we mistakenly exchange the slow pion from $D^{*+}$ decay with one of the same-sign $D^0$ daughter pions. From the fits to the validation signal MC sample we find that this mistake would shift the reconstructed mass values and introduce an $\mathcal{O}(0.1 \kev)$ bias on the width. To veto these events we recalculate the invariant mass values after intentionally switching the same-sign pions, and create the variables $m' \equiv m\left(K^-\pi^+\pi^-\pi_s^+\right)$ and $\Delta m' \equiv m\left(K^-\pi^+\pi^-\pi^+\pi_s^+\right) - m\left(K^-\pi^+\pi^-\pi_s^+\right)$. There are two pions from the $D^0$ decay with the same charge as the slow pion, so there are two values of $\Delta m'$ to consider. In this procedure the correctly reconstructed events are moved away from the signal region, while events with this mis-reconstruction are shifted into the signal region. Figure \[fig:mdmprime\_corr\] shows the $(m', \Delta m')$ distribution for MC events with correctly reconstructed $D^0$, where the majority of events are shifted past the bounds of the plot and only a small portion can be seen forming a diagonal band. The events with the slow pion and a $D^0$ daughter swapped are shown in Fig. \[fig:mdmprime\_swap\] and form a clear signal. We reject events with $\Delta m' < 0.1665 \gev$. Using fits to the validation signal MC sample, we find that this procedure removes approximately 80% of the misreconstructed events and removes the bias in the reconstructed mass and in the fitted value of the width. The $(m', \Delta m')$ distribution for data is shown in Fig. \[fig:mdmprime\_data\]. Removing the $\Delta m'$ region reduces the final set of $D^0 \to K^-\pi^+\pi^-\pi^+$ candidates by approximately 2%. The phase space distribution of events in MC and data differs slightly, so we expect differences in the efficiency of this procedure.
Material modeling {#sec:matmodel}
=================
In the initial fits to data, we observed a very strong dependence of the RBW pole position on the slow pion momentum. This dependence is not replicated in the MC, and originates in the magnetic field map and in the modeling of the material of the beam pipe and the SVT. Previous analyses have observed similar effects, for example the measurement of the $\Lambda_c^+$ mass [@PhysRevD.72.052006]. In that analysis the material model of the SVT was altered in an attempt to correct for the energy loss and the under-represented small-angle multiple scattering (due to nuclear Coulomb scattering). However, the momentum dependence of the reconstructed $\Lambda_c^+$ mass could be removed only by adding an unphysical amount of material to the SVT. In this analysis we use a different approach to correct the observed momentum dependence and adjust track momenta after reconstruction.
![Sample of $K_{S}^{0}\rightarrow \pi^+\pi^-$ candidates from $D^{*+}\to D^0 \pi_s^+ \to (K_{S}^{0} \pi^-\pi^+)\pi_s^+$ decay where the $K_{S}^{0}$ daughter pions satisfy the same tracking criteria as the slow pions of the $D^{*+}$ analysis.[]{data-label="fig:ksmass"}](prd_ksmass.eps)
We determine correction parameters using a sample of $K_{S}^{0}\rightarrow \pi^+\pi^-$ candidates from $D^{*+}\to D^0 \pi^+$ decay, where we reconstruct $D^0 \to K_{S}^{0} \pi^-\pi^+$. In this study we require that the $K_{S}^{0}$ daughter pions satisfy the same tracking criteria as the slow pions of the $D^{*+}$ analysis. The $K_{S}^0$ decay vertex is required to be inside the beam pipe and to be well-separated from the $D^{0}$ decay vertex. These selection criteria yield an extremely clean $K_{S}^0$ sample (approximately $160000$ candidates, $>99.5\%$ pure), which is shown in Fig. \[fig:ksmass\]. This sample is used to determine fractional corrections to the overall magnetic field and to the energy losses in the beam pipe ($E_{\text{loss}}^{\text{bmp}}$) and, separately, in the SVT ($E_{\text{loss}}^{\text{svt}}$). The points represented as open squares in Fig. \[fig:ksmass\_corr\] show the strong dependence of the reconstructed $K_{S}^0$ mass on laboratory momentum. Adjusting only the estimated energy losses and detector material flattens the distribution, but there is still a remaining discrepancy. This discrepancy is shown by the open squares in Fig. \[fig:ksmass\_corr\] at high momentum and indicates an overall momentum scale problem. These two effects lead us to consider corrections to the laboratory momentum and energy of an individual track of the form
$$\begin{aligned}
p&\rightarrow p\left(1+a\right) \notag \\
E&\rightarrow E+b_{{\text{bmp}}} E_{\text{loss}}^{\text{bmp}} +b_{{\text{svt}}} E_{\text{loss}}^{\text{svt}}
\label{eq:epcorr}\end{aligned}$$
where the initial energy losses are determined by the Kalman filter based on the material model. To apply the correction to a pion track, the magnitude of the momentum is first recalculated using the pion mass hypothesis and the corrected energy as shown in Eq. (\[eq:epcorr\]) where the energy losses ($E_{\text{loss}}^{\text{bmp}}$ and $E_{\text{loss}}^{\text{SVT}}$) are taken from the original Kalman fit. Then, the momentum is scaled by the parameter $a$ shown in Eq. (\[eq:epcorr\]) and the energy of the particle is recalculated assuming the pion mass hypothesis. The order of these operations, correcting the energy first and then the momentum, or vice versa, has a negligibly small effect on the calculated corrected invariant mass. After both pion tracks’ momenta are corrected the invariant mass is calculated. Then the sample is separated into 20 intervals of $K_S^0$ momentum. Figure \[fig:ksmass\_corr\] shows $m(\pi^+\pi^-)$ as a function of the slower pion laboratory momentum and illustrates that the momentum dependence of the original sample (open squares) has been removed after all of the corrections (closed circles). We determine the best set of correction parameters to minimize the $\chi^2$ of the bin-by-bin mass difference between the $\pi^+\pi^-$ invariant mass and the current value of the $K^{0}_S$ mass ($m_{\text{PDG}}\left(K_{S}^0\right)\pm1\sigma_{\text{PDG}} = 497.614 \pm 0.024 \mev$) [@ref:pdg2012].
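A minimal sketch of how the track-level correction of Eq. (3) could be applied is shown below, following the order described above: correct the energy, recompute the momentum magnitude under the pion mass hypothesis, scale it, then recompute the energy. It uses the nominal parameter set from the table that follows; the example track and the function name are invented, and this is not the analysis code.

```python
# Sketch of the track correction of Eq. (3); GeV units, nominal parameters.
import math

M_PI = 0.13957                              # approximate charged-pion mass
A, B_BMP, B_SVT = 0.00030, 0.0175, 0.0592   # nominal correction parameters

def correct_pion(px, py, pz, eloss_bmp, eloss_svt):
    """Return corrected (px, py, pz, E) for a pion track."""
    p = math.sqrt(px*px + py*py + pz*pz)
    e = math.sqrt(p*p + M_PI*M_PI)
    # correct the energy using the Kalman-fit losses in the beam pipe and SVT
    e += B_BMP * eloss_bmp + B_SVT * eloss_svt
    # recompute |p| from the corrected energy under the pion mass hypothesis
    p_new = math.sqrt(max(e*e - M_PI*M_PI, 0.0))
    # apply the overall momentum scale, then recompute the energy
    p_new *= (1.0 + A)
    scale = p_new / p
    return px*scale, py*scale, pz*scale, math.sqrt(p_new*p_new + M_PI*M_PI)

# Invented low-momentum track with typical energy losses of a few MeV.
print(correct_pion(0.10, 0.05, 0.12, eloss_bmp=0.002, eloss_svt=0.003))
```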
To estimate the systematic uncertainty in values measured from corrected distributions, we find new parameter values by tuning the $\pi^+\pi^-$ invariant mass to the nominal $K^{0}_S$ mass shifted up and down by one standard deviation. These three sets of correction parameters are listed in Table \[table:corr\_params\]. The resulting average reconstructed $K_{S}^{0}$ masses after correction are $497.589 \pm 0.007 \mev$, $497.612 \pm 0.007 \mev$, and $497.640 \pm 0.007 \mev$ for target masses $m_{\text{PDG}}(K_{S}^0)-1\sigma_{\text{PDG}}$, $m_{\text{PDG}}(K_{S}^0)$, and $m_{\text{PDG}}(K_{S}^0)+1\sigma_{\text{PDG}}$, respectively. As these average values are so well-separated we do not include additional systematic uncertainties from parameters that could describe the central value. The systematics studies of fit result variations in disjoint subsamples of laboratory momentum remain sensitive to our imperfect correction model.
| Parameter | Nominal, $m_{\text{PDG}}(K_{S}^0)$ | $m_{\text{PDG}}+1\sigma_{\text{PDG}}$ | $m_{\text{PDG}}-1\sigma_{\text{PDG}}$ |
|---|---|---|---|
| $a$ | 0.00030 | 0.00031 | 0.00019 |
| $b_{\text{bmp}}$ | 0.0175 | 0.0517 | 0.0295 |
| $b_{\text{svt}}$ | 0.0592 | 0.0590 | 0.0586 |

  : Correction parameters obtained by tuning the $\pi^+\pi^-$ invariant mass to the nominal $K_{S}^0$ mass and to the mass shifted up and down by one standard deviation.[]{data-label="table:corr_params"}
![(color online) Mass value of the $K_S^0$ obtained by fitting the invariant $\pi^+\pi^-$ mass distribution shown as a function of the slower pion laboratory momentum before (open squares) and after (closed circles) all energy-loss and momentum corrections have been applied. Note that the horizontal scale is logarithmic.[]{data-label="fig:ksmass_corr"}](ks0_mass_correction)
The best-fit value of $a=0.00030$ corresponds to an increase of $4.5$ Gauss on the central magnetic field. This is larger than the nominal $2$ Gauss sensitivity of the magnetic field mapping [@ref:babar]. However, the azimuthal dependence of $\Delta m_0$ (discussed in Sec. \[sec:systematics\]) indicates that the accuracy of the mapping may be less than originally thought.
The momentum dependence of $\Delta m_0$ in the initial results is ascribed to underestimating the $dE/dx$ loss in the beam pipe and SVT, which we correct using the factors $b_{\text{bmp}}$ ($1.8\%$) and $b_{\text{SVT}}$ ($5.9\%$). Typical $dE/dx$ losses for a minimum ionizing particle with laboratory momentum $2 \gev$ traversing the beam pipe and SVT at normal incidence are $4.4 \mev$. The corrections are most significant for low-momentum tracks. However, the corrections are applied to all $D^{*+}$ daughter tracks, not just to the slow pion. The momentum dependence is eliminated after the corrections are applied. All fits to data described in this analysis are performed using masses and $\Delta m$ values calculated using corrected 4-momenta. The MC tracks are not corrected because the same field and material models used to propagate tracks are used during their reconstruction.
Fit method {#sec:fitstrategy}
==========
To measure $\Gamma$ we fit the $\Delta m$ peak (the signal) with a relativistic Breit-Wigner (RBW) function convolved with a resolution function based on a Geant4 MC simulation of the detector response [@geant4]. As in previous analyses [@PhysRevD.65.032003], we approximate the total $D^{*+}$ decay width $\Gamma_{\text{Total}}(m) \approx \Gamma_{D^*D \pi}(m)$, ignoring the electromagnetic contribution from $D^{*+}\rightarrow D^+ \gamma$. This approximation has a negligible effect on the measured values as it appears only in the denominator of Eq. (\[eq:rbw\]). For the purpose of fitting the $\Delta m$ distribution we obtain $d \Gamma(\Delta m)/d \Delta m$ from Eqs. (\[eq:rbw\]) and (\[eq:partialwidth\]) by making the substitution $m = m(D^0) + \Delta m$, where $m(D^0)$ is the current average mass of the $D^0$ meson [@ref:pdg2012].
Our fitting procedure involves two steps. In the first step we model the resolution due to track reconstruction by fitting the $\Delta m$ distribution for correctly reconstructed MC events using a sum of three Gaussians and a function to describe the non-Gaussian component. The second step uses the resolution shape from the first step and convolves the Gaussian components with a relativistic Breit-Wigner of the form in Eq. (\[eq:rbw\]) to fit the $\Delta m$ distribution in data, and thus measure $\Gamma$ and $\Delta m_0$. We fit the $\Delta m$ distribution in data and MC from the kinematic threshold to $\Delta m = 0.1665 \gev$ using a binned maximum likelihood fit and an interval width of $50 \kev$. Detailed results of the fits are presented in the Appendix \[app:fitresults\].
Modeling experimental resolution {#sec:resfit}
--------------------------------
We generate samples of $D^{*+}$ decays with a line width of $0.1 \kev$, so that all of the observed spread is due to reconstruction effects. The samples are approximately 5 times the size of the corresponding samples in data. The non-Gaussian tails of the distribution are from events in which the $\pi_s$ decays to a $\mu$ in flight and where coordinates from both the $\pi$ and $\mu$ segments are used in track reconstruction. Accounting for these non-Gaussian events greatly improves the quality of the fit to data near the $\Delta m$ peak.
We fit the $\Delta m$ distribution of the MC events with the function $$\begin{aligned}
f_{NG}\, S_{NG}\left(\Delta m; q, \alpha \right)
+ (1 - f_{NG}) \big[ f_1\, G\left(\Delta m; \mu_1, \sigma_1\right) &+ f_2\, G\left(\Delta m; \mu_2, \sigma_2\right) \nonumber \\
&+ \left(1 - f_1 - f_2\right) G\left(\Delta m; \mu_3, \sigma_3\right) \big]
\label{eq:respdf}\end{aligned}$$ where the $G\left( \Delta m; \mu_i, \sigma_i \right)$ are Gaussian functions and $f_{NG}, f_1, f_2$ are the fractions allotted to the non-Gaussian component and the first and second Gaussian components, respectively. The function describing the non-Gaussian component of the distribution is $$S_{NG}\left(\Delta m; q, \alpha\right) = \Delta m \, u^q\, e^{\alpha u},
\label{eq:resng}$$ where $u \equiv \left(\Delta m/\Delta m_{\text{thres}}\right)^2 - 1$ and $\Delta m_{\text{thres}} = m_\pi$ is the kinematic threshold for the $D^{*+}\to D^0 \pi^+$ process. For $\Delta m < \Delta m_{\text{thres}}$, $S_{NG}$ is defined to be zero.
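To make the shape of Eqs. (4) and (5) easier to picture, here is an un-normalized numerical sketch of the resolution model: three Gaussians plus the threshold term $S_{NG}$. The parameter values are placeholders, not the fitted values reported in the Appendix.

```python
# Un-normalized sketch of the resolution model: triple Gaussian plus S_NG.
# Placeholder parameters; Delta m and widths in GeV.
import math

DM_THRES = 0.13957   # kinematic threshold, m_pi

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def s_ng(dm, q, alpha):
    """Non-Gaussian tail term of Eq. (5); zero below threshold."""
    if dm < DM_THRES:
        return 0.0
    u = (dm / DM_THRES) ** 2 - 1.0
    return dm * u ** q * math.exp(alpha * u)

def resolution_shape(dm, f_ng, q, alpha, f1, f2, mus, sigmas):
    core = (f1 * gauss(dm, mus[0], sigmas[0])
            + f2 * gauss(dm, mus[1], sigmas[1])
            + (1.0 - f1 - f2) * gauss(dm, mus[2], sigmas[2]))
    return f_ng * s_ng(dm, q, alpha) + (1.0 - f_ng) * core

# Placeholder shape: a narrow core near Delta m ~ 145.4 MeV plus wider components.
print(resolution_shape(0.14543, f_ng=0.01, q=0.5, alpha=-50.0,
                       f1=0.6, f2=0.3,
                       mus=(0.14543, 0.14543, 0.14544),
                       sigmas=(0.00013, 0.00030, 0.00080)))
```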
Figure \[fig:resfits\] shows the individual resolution function fits for the two $D^0$ decay modes. Each plot shows the total resolution probability density function (PDF) as the solid curve, the sum of the Gaussian contributions as the dashed curve, and the $S_{NG}$ function as a dotted curve describing the events in the tails. The resolution functions should peak at the generated value, $\Delta m_0^{MC} = m(D^{*}(2010)^{+}) - m(D^0)$ [@ref:pdg2012]. However, the average value of the $\mu_i$ is slightly larger than the generated value of $\Delta m_0^{MC}$. The $S_{NG}$ function is excluded from this calculation as the peak position is not well defined and $S_{NG}$ describes less than 1% of the signal. We take this reconstruction bias as an offset when measuring $\Delta m_0$ from data and denote this offset by $\delta m_0$. The $\delta m_0$ offset is $4.3 \kev$ and $2.8 \kev$ for the $D^0 \to K^-\pi^+$ and $D^0 \to K^-\pi^+\pi^-\pi^+$ modes, respectively. As discussed in Sec. \[sec:systematics\], although the values of $\delta m_0$ are larger than the final estimates of the systematic uncertainty for $\Delta m_0$, they are required for an unbiased result from fits to the validation signal MC samples. The systematic uncertainty associated with $\delta m_0$ is implicitly included when we vary the resolution shape, as discussed in Sec. \[sec:systematics\]. The parameter values, covariance matrix, and correlation matrix are presented for each decay mode in the Appendix in Tables \[tab:mcres\] - \[tab:mcres\_k3pi\_corr\].
Fit Results {#sec:datafit}
-----------
The parameters of the resolution function found in the previous step are used to create a convolved RBW PDF. In the fit to data, $S_{NG}$ has a fixed shape and relative fraction, and is not convolved with the RBW. The relative contribution of $S_{NG}$ is small and the results from the fits to the validation signal MC samples are unbiased without convolving this term. We fit the data using the function,
$$\begin{aligned}
\mathcal{P}(&\Delta m; \epsilon, \Gamma, \Delta m_0, c) = \nonumber \\
&f_{\mathcal{S}} \frac{\mathcal{S}(\Delta m; \epsilon, \Gamma, \Delta m_0)}{\int{\mathcal{S}(\Delta m)\, d \left(\Delta m\right)}}+(1-f_{\mathcal{S}}) \frac{\mathcal{B}(\Delta m; c)}{\int{\mathcal{B}(\Delta m)\, d \left(\Delta m\right)}}\end{aligned}$$
where $f_{\mathcal{S}}$ is the fraction of signal events, $\mathcal{S}$ is the signal function $$\begin{aligned}
\mathcal{S}(\Delta m) ={}& RBW \otimes (1 - f_{NG}^{MC}) \big[ f_1^{MC}\, G\left(\Delta m; \mu_1^{MC} - \Delta m_0^{MC}, \sigma_1^{MC} \left(1+\epsilon\right)\right) \nonumber \\
&+ f_2^{MC}\, G\left(\Delta m; \mu_2^{MC} - \Delta m_0^{MC}, \sigma_2^{MC}\left(1+\epsilon\right)\right) \nonumber \\
&+ \left(1 - f_1^{MC} - f_2^{MC}\right) G\left(\Delta m; \mu_3^{MC} - \Delta m_0^{MC}, \sigma_3^{MC}\left(1+\epsilon\right)\right) \big] \nonumber \\
&+ f_{NG}^{MC}\, S_{NG}(\Delta m; q^{MC}, \alpha^{MC}),
\label{eq:sigpdf}\end{aligned}$$ and $\mathcal{B}$ is the background function $$\mathcal{B}(\Delta m) = \Delta m \, \, \sqrt{u} \, \, e^{c u},
\label{eq:bkgpdf}$$ where, again, $u \equiv \left(\Delta m/\Delta m_{\text{thres}}\right)^2-1$. The nominal RBW function has a pole position located at $ m = \Delta m_0 + m(D^0) $ and natural line width $ \Gamma $. The Gaussian resolution functions convolved with the RBW have centers offset from zero by small amounts determined from MC, $ \mu_i - \Delta m_0^{MC} $ (see Table \[tab:mcres\] in the Appendix). The widths determined from MC, $\sigma_i^{MC} $, are scaled by $ (1 + \epsilon ) $ where $ \epsilon $ is a common, empirically determined constant which accounts for possible differences between resolutions in data and simulation. As indicated in Eq. (\[eq:sigpdf\]), the parameters allowed to vary in the fit to data are the scale factor $(1+\epsilon)$, the width $\Gamma$, pole position $\Delta m_0$ and background shape parameter $c$. The validation of the fit procedure is discussed in Sec. \[sec:validation\].
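As a rough picture of how the convolved signal plus threshold background of Eqs. (6)-(8) could be built numerically, the toy below constructs the model on a 50 keV grid. A single Gaussian stands in for the triple-Gaussian core, the mean offsets and the small $S_{NG}$ term are dropped, and all parameter values are placeholders; the actual results come from the binned maximum-likelihood fit, not from this sketch.

```python
# Toy construction of the Delta-m model: RBW convolved with a Gaussian resolution
# kernel plus the threshold background of Eq. (8). Placeholder parameters; GeV units.
import numpy as np

M_D0, M_PI = 1.86484, 0.13957

def breakup_p(m):
    t = (m*m - (M_D0 + M_PI)**2) * (m*m - (M_D0 - M_PI)**2)
    return np.sqrt(np.clip(t, 0.0, None)) / (2.0 * m)

def rbw_dm(dm, dm0, gamma, r=1.6):
    m, m0 = M_D0 + dm, M_D0 + dm0
    p, p0 = breakup_p(m), breakup_p(m0)
    g = gamma * (1 + (r*p0)**2) / (1 + (r*p)**2) * (p/p0)**3 * (m0/m)
    return m * g * m0 * gamma / ((m0*m0 - m*m)**2 + (m0*g)**2)

step = 50e-6                                   # 50 keV bins, as in the fit
dm = np.arange(M_PI, 0.1665, step)
sig_rbw = rbw_dm(dm, dm0=0.1454259, gamma=83.3e-6)

eps, sigma = 0.07, 130e-6                      # core width roughly 300 keV FWHM
kern_x = np.arange(-60, 61) * step
kernel = np.exp(-0.5 * (kern_x / (sigma * (1 + eps)))**2)
kernel /= kernel.sum()

signal = np.convolve(sig_rbw, kernel, mode="same")
signal /= signal.sum()

u = (dm / M_PI)**2 - 1.0
background = dm * np.sqrt(u) * np.exp(-2.0 * u)   # Eq. (8) with c = -2
background /= background.sum()

f_sig = 0.9
model = f_sig * signal + (1 - f_sig) * background
print(f"model peaks at Delta m = {dm[np.argmax(model)]*1e3:.3f} MeV")
```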
Figure \[fig:rdfits\] shows the fits to data for both $D^0$ decay modes. The total PDF is shown as the solid curve, the convolved RBW-Gaussian signal as the dashed curve, and the threshold background as the dotted curve. The normalized residuals show the good agreement between the data and the model. Table \[table:kpik3pi\_rd\_summary\] summarizes the results of the fits to data for the two modes. The covariance and correlation matrices for each mode are presented in Tables \[tab:rd\_kpi\_cov\] - \[tab:rd\_k3pi\_corr\] in the Appendix. The tails of the RBW are much longer than the almost Gaussian resolution function. The resolution functions determined from the fits to MC drop by factors of more than 1000 near $\Delta m \approx 147 \mev$ with respect to the peak. At $\Delta m = 148 \mev$ the resolution functions have dropped by another factor of 10 and are dominated by the $S_{NG}$ component. The resolution functions used in fitting the data allow the triple-Gaussian part of the resolution function to scale by $(1+\epsilon)$, but the events observed above $148 \mev$ are predominantly signal events from the RBW tails and background. The signal from a zero-width RBW would approach 3 events per bin (see Fig. \[fig:resfits\]). The observed signal levels are of order 30 events per bin (see Fig. \[fig:rdfits\]). Table \[table:kpik3pi\_rd\_summary\] also shows the fitted $S/B$ at the peak and in the $\Delta m$ tail on the high side of the peak. The long non-Gaussian tail of the RBW is required for the model to fit the data so well.
As the observed FWHM values from the resolution functions are greater than the intrinsic line width, the observed widths of the central peaks determine the values of $\epsilon$. The scale factor, $(1+\epsilon)$, allows the resolution functions to expand as necessary to describe the distribution in real data. As one naively expects, the fitted values of the scale factor are strongly anti-correlated with the values for $\Gamma$ (the typical correlation coefficient is -0.85).
[c@c@c]{}\
\[-1.7ex\] Parameter & $D^0\to K\pi$ & $D^0\to K\pi\pi\pi$\
\
Number of signal events & $138\,536 \pm 383$ & $174\,297\pm 434$\
$\Gamma \,(\kev)$ & $83.3 \pm 1.7$ & $83.2 \pm 1.5$\
scale factor, $(1+\epsilon)$ & $1.06 \pm 0.01$ & $1.08 \pm 0.01$\
$\Delta m_0\,(\kev)$ & $145\,425.6 \pm 0.6$ & $145\,426.6 \pm 0.5$\
background shape, $c$ & $-1.97 \pm 0.28$ & $-2.82 \pm 0.13$\
\
$S/B$ at peak & &\
($\Delta m = 0.14542 \,(\gev)$) & &\
\
\[-1.8ex\] $S/B$ at tail & &\
($\Delta m = 0.1554\, (\gev)$) & &\
\
\[-1.8ex\] $\chi^2/\nu$ & $574/535$ & $556/535$\
\
\[table:kpik3pi\_rd\_summary\]
Systematic Uncertainties {#sec:systematics}
========================
[c@ccc@ccc]{}\
& & $\Gamma\,(\kev)$ & & & $\Delta m_0\,(\kev)$ &\
\
& $K\pi$ & $K\pi\pi\pi$ & Corr. & $K\pi$ & $K\pi\pi\pi$ & Corr.\
\
Disjoint $p$ variation & 0.88 & 0.98 & 0.47 & 0.16 & 0.11 & 0.28\
Disjoint $m\left(D^0_{\text{reco}}\right)$ variation & 0.00 & 1.53 & 0.56 & 0.00 & 0.00 & 0.22\
Disjoint azimuthal variation & 0.62 & 0.92 & -0.04 & 1.50 & 1.68 & 0.84\
Magnetic field and material model & 0.29 & 0.18 & 0.98 & 0.75 & 0.81 & 0.99\
Blatt-Weisskopf radius & 0.04 & 0.04 & 0.99 & 0.00 & 0.00 & 1.00\
Variation of resolution shape parameters & 0.41 & 0.37 & 0.00 & 0.17 & 0.16 & 0.00\
$\Delta m$ fit range & 0.83 & 0.38 & -0.42 & 0.08 & 0.04 & 0.35\
Background shape near threshold & 0.10 & 0.33 & 1.00 & 0.00 & 0.00 & 0.00\
Interval width for fit & 0.00 & 0.05 & 0.99 & 0.00 & 0.00 & 0.00\
Bias from validation & 0.00 & 1.50 & 0.00 & 0.00 & 0.00 & 0.00\
Radiative effects & 0.25 & 0.11 & 0.00 & 0.00 & 0.00 & 0.00\
\
Total & 1.5 & 2.6 & & 1.7 & 1.9 &\
\
\[table:syswithcorr\]
We estimate systematic uncertainties associated with instrumental effects by looking for large variations of results in disjoint subsets. The systematic uncertainties associated with our fit procedure are estimated using a variety of techniques. These methods are summarized in the following paragraphs and then discussed in detail.
To estimate systematic uncertainties from instrumental effects, we divide the data into disjoint subsets corresponding to intervals of laboratory momentum, $p$, of the $D^{*+}$, azimuthal angle, $\phi$, of the $D^{*+}$ in the laboratory frame, and reconstructed $D^0$ mass. In each of these variables we search for variations greater than those expected from statistical fluctuations.
After the corrections to the material model and magnetic field, the laboratory momentum dependence of the RBW pole position is all but eliminated. We find that $\Gamma$ does not display an azimuthal dependence; however, $\Delta m_0$ does. Neither $\Gamma$ nor $\Delta m_0$ displays a clear systematic shape with reconstructed $D^0$ mass.
The uncertainties associated with the various parts of the fit procedure are investigated in detail. We vary the parameters of the resolution function in Eq. (\[eq:respdf\]) according to the covariance matrix reported by the fit to estimate systematic uncertainty of the resolution shape. Changing the end point for the fit estimates a systematic uncertainty associated with the shape of the background function. We also change the background shape near threshold. To estimate the uncertainty in the Blatt-Weisskopf radius we model the $D^{*+}$ as a point-like particle. We fit MC validation samples to estimate systematic uncertainties associated with possible biases. Finally, we estimate possible systematic uncertainties due to radiative effects. All of these uncertainties are estimated independently for the $D^0\rightarrow K^-\pi^+$ and $D^0\rightarrow K^-\pi^+\pi^-\pi^+$ modes, and are summarized in Table \[table:syswithcorr\].
\
\
\
Systematics using disjoint subsets {#sec:disjointsub}
----------------------------------
We chose to carefully study laboratory momentum, reconstructed $D^0$ mass, and azimuthal angle $\phi$ in order to search for variations larger than those expected from statistical fluctuations. For each disjoint subset, we use the resolution function parameter values and $\Delta m_0$ offset determined from the corresponding MC subset.
If the fit results from the disjoint subsets are compatible with a constant value, in the sense that $\chi^2/\nu \leq 1$ where $\nu$ denotes the number of degrees of freedom, we assign no systematic uncertainty. However, if we find $\chi^2/\nu > 1$ and do not determine an underlying model which might be used to correct the data, we ascribe an uncertainty using a variation on the scale factor method used by the Particle Data Group (see the discussion of unconstrained averaging [@ref:pdg2012]). The only sample which we do not fit to a constant is that for $\Delta m_0$ in intervals of azimuthal angle. We discuss below how we estimate the associated systematic uncertainty.
In our version of this procedure, we determine a factor that scales the statistical uncertainty to the total uncertainty. The remaining uncertainty is ascribed to unknown detector issues and is used as a measure of systematic uncertainty according to
$$\begin{aligned}
\label{eq:sys}
\sigma_{\text{sys}} &= \sigma_{\text{stat}} \sqrt{S^2 - 1} \end{aligned}$$
where the scale factor is defined as $S^2 = \chi^2/\nu$. The $\chi^2$ statistic gives a measure of fluctuations, including those expected from statistics, and those from systematic effects. Once we remove the uncertainty expected from statistical fluctuations, we associate what remains with a possible systematic uncertainty.
We expect that $ \chi^2 / \nu $ will have an average value of unity if there are no systematic uncertainties that distinguish one subset from another. If systematic deviations from one subset to another exist, then we expect that $\chi^2/\nu$ will be greater than unity. Even if there are no systematic variations from one disjoint subset to another, $ \chi^2 / \nu $ will randomly fluctuate above 1 about half of the time. To be conservative, we assume that any observation of $ \chi^2 / \nu > 1 $ originates from a systematic variation from one disjoint subset to another. This approach has two weaknesses. If used with a large number of subsets it could hide real systematic uncertainties. For example, if instead of 10 subsets we chose 1000 subsets, the larger statistical uncertainties would wash out any real systematic variation. Also, if used with a large number of variables, about half the disjoint sets will have upward statistical fluctuations, even in the absence of any systematic variation. We have chosen to use only three disjoint sets of events, and have divided each into 10 subsets to mitigate the effects of such problems.
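A minimal sketch of this scale-factor procedure (not the analysis code) is given below. The subset results and their uncertainties are hypothetical, and $\sigma_{\text{stat}}$ is taken here to be the uncertainty of the weighted mean, which is an interpretation rather than a statement from the text.

```python
import numpy as np

def scale_factor_systematic(values, errors):
    """Eq. (sys): fit subset results to a constant, form chi2/nu = S^2, and
    convert any excess scatter into a systematic uncertainty."""
    values, errors = np.asarray(values, float), np.asarray(errors, float)
    weights = 1.0 / errors ** 2
    mean = np.sum(weights * values) / np.sum(weights)
    chi2 = np.sum(weights * (values - mean) ** 2)
    nu = values.size - 1
    sigma_stat = 1.0 / np.sqrt(np.sum(weights))        # uncertainty of the mean
    s_squared = chi2 / nu
    sigma_sys = sigma_stat * np.sqrt(s_squared - 1.0) if s_squared > 1.0 else 0.0
    return mean, sigma_stat, sigma_sys, s_squared

# Hypothetical width results (keV) in 10 disjoint subsets, each with a 5 keV error
gammas = [80.1, 86.2, 79.5, 88.0, 84.3, 81.7, 90.2, 77.9, 85.5, 83.0]
print(scale_factor_systematic(gammas, [5.0] * 10))
```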
We choose the range for each subset to have approximately equal statistical sensitivity. In each subset of each variable we repeat the full fit procedure (determine the resolution function from MC and fit data floating $\epsilon, \Gamma, \Delta m_0,$ and $c$). Figs. \[fig:plabwidth\] and \[fig:plabrbw\] show the fit results in subsets of laboratory momentum for $\Gamma$ and $\Delta m_0$, respectively. Neither $D^0$ mode displays a systematic pattern of variation; however, we assign small uncertainties for each channel using Eq. (\[eq:sys\]). Similarly, Figs. \[fig:mslicewidth\] and \[fig:mslicerbw\] show the results in ranges of reconstructed $D^0$ mass for $\Gamma$ and $\Delta m_0$. While neither mode displays an obvious systematic pattern of variation, the width for the $K^-\pi^+\pi^-\pi^+$ mode is assigned its largest uncertainty of $1.53 \kev$ using Eq. (\[eq:sys\]).
Figures \[fig:azwidth\] and \[fig:azrbw\] show $\Gamma$ and $\Delta m_0$, respectively, in subsets of azimuthal angle. In this analysis we have observed sinusoidal variations in the mass values for $D^0 \rightarrow K^-\pi^+$, $D^0\rightarrow K^-\pi^+\pi^-\pi^+$, and $K_{S}^{0}\rightarrow \pi^+\pi^-$, so the clear sinusoidal variation of $\Delta m_0$ was anticipated. The important aspect for this analysis is that, for such deviations, the average value is unbiased by the variation in $\phi$. For example, the average value of the reconstructed $K_S^0$ mass separated into intervals of $\phi$ is consistent with the mass value integrating across the full range. The width plots do not display azimuthal dependencies, but each mode has $\chi^2/\nu > 1$ and is assigned a small systematic uncertainty using Eq. (\[eq:sys\]). The lack of sinusoidal variation of $\Gamma$ with respect to $\phi$ is notable because $\Delta m_0$ (which uses reconstructed $D$ masses) shows a clear sinusoidal variation. The results for the $D^0 \rightarrow K^-\pi^+$ and $D^0\rightarrow K^-\pi^+\pi^-\pi^+$ datasets are highly correlated, and shift together. The signs and phases of the variations of $\Delta m_0$ agree with those observed for $D^0 \rightarrow K^-\pi^+$, $D^0\rightarrow K^-\pi^+\pi^-\pi^+$, and $K_{S}^{0}\rightarrow \pi^+\pi^-$. We take half of the amplitude obtained from the sinusoidal fit shown on Fig. \[fig:azrbw\] as an estimate of the uncertainty. An extended investigation revealed that at least part of this dependence originates from small errors in the magnetic field from the map used in track reconstruction. There is some evidence that during the field mapping (see Ref. [@ref:babar]) the propeller arm on which the probes were mounted flexed, which mixed the radial and angular components of the magnetic field.
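The azimuthal systematic can be illustrated schematically as follows (a sketch, not the analysis code): fit a sinusoid to the ten azimuthal bins and take half of the fitted amplitude. The $\Delta m_0$ values, uncertainties, and seed numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(phi, amplitude, phase, offset):
    return amplitude * np.sin(phi - phase) + offset

rng = np.random.default_rng(0)
phi = np.linspace(-np.pi, np.pi, 10, endpoint=False) + np.pi / 10      # bin centres
dm0 = 145425.9 + 2.0 * np.sin(phi - 0.3) + rng.normal(0.0, 0.5, 10)    # keV, hypothetical
errors = np.full(10, 0.5)

popt, _ = curve_fit(sinusoid, phi, dm0, p0=[1.0, 0.0, 145426.0], sigma=errors)
amplitude = abs(popt[0])
print(f"fitted amplitude = {amplitude:.2f} keV -> assigned systematic = {amplitude/2:.2f} keV")
```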
\
The FWHM values of the resolution functions vary by about 8% for each decay channel. For $D^0\rightarrow K^-\pi^+$ the FWHM ranges from $275 \kev$ to $325 \kev$ for the 30 disjoint subsets studied. The FWHM of the $D^0\rightarrow K^-\pi^+\pi^-\pi^+$ resolution function ranges from $310 \kev$ to $350 \kev$ for the 30 disjoint subsets studied. Fig. \[fig:sysep\] shows the values of the scale factor corresponding to the values of $\Gamma$ and $\Delta m_0$ shown in Fig. \[fig:sys\].
Additional systematics
----------------------
We estimate the uncertainty associated with the correction parameters for the detector material model and magnetic field by examining the variation between the nominal parameter values and those obtained by tuning to the $m_{\text{PDG}}\left(K_{S}^{0}\right)\pm 1\sigma_{\text{PDG}}$ mass values [@ref:pdg2012]. The width measured from the $D^0\to K^-\pi^+$ mode fluctuates equally around the value from the fit using the nominal correction parameters. We take the larger of the differences and assign an uncertainty of $0.29 \kev$. The value of $\Delta m_0$ for this mode fluctuates symmetrically around the nominal value and we assign an uncertainty of $0.75 \kev$. The width measured from the $D^0\to K^-\pi^+\pi^-\pi^+$ fluctuates asymmetrically around the nominal value, and we use the larger difference to assign an uncertainty of $0.18 \kev$. The value of $\Delta m_0$ for this mode fluctuates symmetrically around the nominal value, and we assign an uncertainty of $0.81 \kev$.
We use the Blatt-Weisskopf radius $r = 1.6 \gev^{-1}$ ($\sim0.3$ fm) [@Schwartz:2002hh]. To estimate the systematic effect due to the choice of $r$ we refit the distributions treating the $D^{*+}$ as a point-like particle ($r=0$). We see a small shift of $\Gamma$, that we take as the estimate of the uncertainty, and an effect on the RBW pole position that is a factor of 100 smaller than the fit uncertainty, that we neglect.
We determine the systematic uncertainty associated with the resolution function by refitting the data with variations of its parametrization. We take the covariance matrix from the fit to MC resolution samples for each mode (see Tables \[tab:mcres\_kpi\_cov\] and \[tab:mcres\_k3pi\_cov\] in the Appendix) and use it to generate 100 variations of these correlated Gaussian-distributed shape parameters. We use these generated values to refit the data, and take the root-mean-squared (RMS) deviation of the resulting fit values as a measure of systematic uncertainty. This process implicitly accounts for the uncertainty associated with the reconstruction offset.
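The procedure can be sketched as follows (Python, illustrative only): the two-parameter covariance matrix and the linear "refit" response below are stand-ins for the full set of resolution parameters of Eq. (\[eq:respdf\]) and the complete re-fit of the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical central values and covariance for two resolution parameters;
# the analysis varies all of the Eq. (respdf) parameters together.
central = np.array([120.0, 210.0])            # e.g. sigma_1, sigma_3 in keV (assumed)
cov = np.array([[0.7, 0.4],
                [0.4, 2.4]])                  # keV^2, assumed values

def refit_width(sigma1, sigma3):
    """Placeholder for re-running the full fit to data with these resolution
    parameters fixed; a linear toy response is used purely for illustration."""
    return 83.3 - 0.02 * (sigma1 - central[0]) - 0.01 * (sigma3 - central[1])

variations = rng.multivariate_normal(central, cov, size=100)
widths = np.array([refit_width(s1, s3) for s1, s3 in variations])
print(f"RMS of refitted Gamma = {widths.std(ddof=1):.3f} keV")
```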
Our choice of fit range in $\Delta m$ is somewhat arbitrary, so we study the effect of systematically varying its end point by repeating the fit procedure every $1 \mev$ from the nominal fit end point, $\Delta m = 0.1665 \gev$, down to $\Delta m = 0.1605 \gev$. Altering the end point of the fit changes the events associated with the RBW tails and those associated with the continuum background. Each step down allows the background to form a different shape, which effectively estimates an uncertainty in the background parametrization. Values below $\Delta m = 0.16 \gev$ are too close to the signal region to provide a reasonable choice of end point. There is no clear way to estimate the associated systematic uncertainty, so we take the largest deviation from the nominal fit as a conservative estimate.
The shape of the background function in Eq. (\[eq:bkgpdf\]) is nominally determined only by the parameter $c$ and the residuals in Figs. \[fig:rdfits\_kpi\] and \[fig:rdfits\_k3pi\] show signs of curvature indicating possible systematic problems with the fits. Changing the end points over the range considered changes the values of $ c $ substantially from $-1.97$ to $-3.57$, and some fits remove all hints of curvature in the residuals plot. We also examine the influence of the background parametrization near threshold by changing $\sqrt{u}$ in Eq. (\[eq:bkgpdf\]) to $u^{0.45}$ and $u^{0.55}$. The value of the fractional power controls the shape of the background between the signal peak and threshold. For example, at $\Delta m = 0.142$ changing the power from 0.5 to 0.45 and 0.55 varies the background function by +18% and -15%, respectively. The RBW pole position is unaffected by changing the background description near threshold while $\Gamma$ shifts symmetrically around its nominal values. We estimate the uncertainty due to the description of the background function near threshold by taking the average difference to the nominal result.
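The size of the near-threshold variation quoted above follows directly from the form of Eq. (\[eq:bkgpdf\]): the $\Delta m$ factor and the exponential cancel in the ratio, so the change is just $u^{a-0.5}$. A small numerical check (assuming $\Delta m_{\text{thres}}$ is the $\pi^+$ mass):

```python
import numpy as np

DM_THRES = 0.13957          # GeV, assumed threshold (pi+ mass)

def bkg(dm, power, c=-2.0):
    u = (dm / DM_THRES) ** 2 - 1.0
    return dm * u ** power * np.exp(c * u)

dm = 0.142                  # GeV
for power in (0.45, 0.55):
    change = bkg(dm, power) / bkg(dm, 0.50) - 1.0
    print(f"u^{power}: {100 * change:+.0f}% relative to sqrt(u)")   # ~ +18% and -15%
```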
In the binned maximum likelihood fits we nominally choose an interval width of $50 \kev$. As a systematic check, the interval width was halved and the fits to the data were repeated. The measured $\Gamma$ and $\Delta m_0$ values for both modes are identical except for the width measured in the $D^0\to K^-\pi^+\pi^-\pi^+$ decay mode. We take the full difference as the systematic uncertainty for the choice of interval width.
Fit Validations {#sec:validation}
---------------
We generate signal MC with $\Gamma = 88 \kev$ and $\Delta m_0 = 0.1454 \gev$. The background is taken from a MC cocktail and paired with the signal in the same ratio as from the corresponding fits to data. Fits to both decay modes describe the validation samples well. The fit results are summarized in Table \[table:valsummary\]. We observe a small bias in the fitted width for the $D^0 \to K^-\pi^+\pi^-\pi^+$ mode. We take the full difference between the fitted and generated value of the width and assign a $1.5 \kev$ error.
We also investigated the uncertainty due to radiative effects by examining the subset of these events generated without PHOTOS [@Barberio1994291]. The values of the RBW pole are identical between the fits to the total validation signal MC sample and the subsets, so we do not assign a systematic uncertainty to the poles for radiative effects. The widths measured in each mode show a small difference to the results from the nominal validation sample. We take half of this difference as a conservative estimate of the systematic uncertainty associated with radiative effects.
[cc@c@c]{}\
Fit value & Generated & $D^0\to K\pi$ & $D^0\to K\pi\pi\pi$\
\
\
$\Gamma [\kev]$ & 88.0 & $88.5 \pm 0.8$ & $89.5 \pm 0.6$\
scale factor, $1+\epsilon$ & 1.0 & $1.003 \pm 0.004$ & $1.000 \pm 0.001$\
$\Delta m_0 [\kev]$ & 145400.0 & $145399.7 \pm 0.4$ & $145399.2 \pm 0.4$\
$\chi^2/\nu$ & – & $613/540$ & $770/540$\
\
\[table:valsummary\]
Determining correlations {#sec:detcorr}
------------------------
The fourth and seventh columns in Table \[table:syswithcorr\] list the correlations between the $D^0\to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$ systematic uncertainties. These correlations are required to use information from both measurements to compute the average. The correlations in laboratory momentum, reconstructed $D^0$ mass, and azimuthal angle disjoint subsets are calculated by finding the correlation between the 10 subsets of $D^0\to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$ for each of the variables. In a similar way we can construct datasets using the sets of correction parameters for magnetic field, detector material model, and the $\Delta m$ fit range. We assume no correlation for the resolution shape parameters and the validation shifts, which are based on the individual reconstructions. Our studies show that the values chosen for the Blatt-Weisskopf radius and interval width affect each mode identically, so we assume that they are completely correlated.
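For the disjoint-subset sources, the correlation coefficient is simply that between the ten paired subset results; schematically (with hypothetical numbers, not the analysis values):

```python
import numpy as np

# Hypothetical width results (keV) for the two modes in the same 10 subsets
gamma_kpi  = np.array([82.1, 84.0, 83.5, 81.9, 85.2, 83.0, 82.6, 84.4, 83.8, 82.9])
gamma_k3pi = np.array([82.5, 84.3, 83.1, 82.0, 85.0, 83.4, 82.9, 84.1, 84.2, 83.3])

rho = np.corrcoef(gamma_kpi, gamma_k3pi)[0, 1]
print(f"correlation between the two modes: {rho:.2f}")
```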
Consistency checks
------------------
In addition to the investigations into the sources of systematic uncertainty, we also perform a number of consistency checks. These checks are not used to assess systematics, nor are they included in the final measurements, but serve to reassure us that the experimental approach and fitting technique behave in reasonable ways. First, we lower the $p^*$ cut from $3.6 \gev$ to $2.4 \gev$. This allows in more background and tracks with poorer resolution, but the statistics increase by a factor of three. Correspondingly, the signal-to-background ratios measured at the peak and in the tails decrease by approximately a factor of three. The fit results for this larger dataset are consistent with the nominal fit results. The second consistency check widens the reconstructed $D^0$ mass window from $10 \mev$ to $30 \mev$. Again, this increases the number of background events and improves statistical precision with central values that overlap with the nominal fit results. Finally, we fix the scale factor in the fit to data to report statistical uncertainties on $\Gamma$ similar to those in the measurement by CLEO [@PhysRevD.65.032003]. Our reported “statistical” uncertainties on $\Gamma$ are from a fit in which $\epsilon$ floats. As expected, there is a strong negative correlation between $\epsilon$ and $\Gamma$ with $\rho\left(\Gamma, \epsilon\right) \approx -0.85$. If less of the spread in the data is allotted to the resolution function then it must be allotted to the RBW width, $\Gamma$. We refit the $D^0\to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$ samples fixing $\epsilon$ to the value from the fit where it was allowed to float. This effectively maintains the same global minimum while decoupling the uncertainty in $\Gamma$ from $\epsilon$. The statistical uncertainty on the width decreases from $1.7 \kev$ to $0.9 \kev$ for the $D^0\to K^-\pi^+$ decay mode and from $1.5 \kev$ to $0.8 \kev$ for the $D^0\to K^-\pi^+\pi^-\pi^+$ decay mode.
Combining results {#sec:combmodes}
=================
Using the correlations shown in Table \[table:syswithcorr\] and the formalism briefly outlined below, we determine the values for the combined measurement. For each quantity, $\Gamma$ and $\Delta m_0$, we have a measurement from the $D^0\to K^-\pi^+$ and $D^0\to K^-\pi^+\pi^-\pi^+$ modes. So, we start with a $2\times2$ covariance matrix
$$\begin{aligned}
\begin{split}
V &= \left(
\begin{array}{cc}
\sigma_{K\pi}^2 & {\text{cov}}(K\pi, K\pi\pi\pi)\\
{\text{cov}}(K\pi, K\pi\pi\pi) & \sigma_{K\pi\pi\pi}^2 \end{array} \right) \\
&=\left(
\begin{array}{cc}
\sigma_{K\pi, {\text{stat}}}^2 + \sigma_{K\pi, {\text{sys}}}^2 & \sum_{i}{\rho_i\, \sigma_{K\pi, i} \,\sigma_{K\pi\pi\pi, i}}\\
\sum_{i}{\rho_i \, \sigma_{K\pi, i}\, \sigma_{K\pi\pi\pi, i}} & \sigma_{K\pi\pi\pi, {\text{stat}}}^2 + \sigma_{K\pi\pi\pi, {\text{sys}}}^2 \end{array} \right)
\end{split}\end{aligned}$$
where $i$ is an index which runs over the sources of systematic uncertainty. In the final step we expand the notation to explicitly show that the diagonal entries incorporate the full systematic uncertainty and that the statistical uncertainty for the individual measurements plays a part in determining the weights. The covariance matrices are calculated using Table \[table:syswithcorr\] and the individual measurements. From the covariance matrix we extract the weights, $w$, for the best estimator of the mean and variance using $w_i = \sum_{k}{V^{-1}_{i k}}/\sum_{j k}{V^{-1}_{j k}}$:
$$w_{\Gamma} = \left( \begin{array}{cc}
w_{K\pi} \\
w_{K\pi\pi\pi}
\end{array}\right)
= \left( \begin{array}{cc}
0.650 \\
0.350
\end{array}\right)$$
$$w_{\Delta m_0} = \left( \begin{array}{cc}
0.672\\
0.328\end{array}\right).$$
The weights show that the combined measurement is dominated by the cleaner $D^0\to K^-\pi^+$ mode. The total uncertainty can be expressed as
$$\begin{aligned}
\begin{split}
\sigma^2 = &\sum_{i=1,2}{\left(w_{i} \sigma_{\text{stat}, i}\right)^2} \\
&+ \sum_{i=1,2}{\left(w_{i} \sigma_{\text{sys}, i}\right)^2} + 2 w_1 w_2 \sum_{j=1,11}{\rho_{j} \sigma^{K\pi}_{\text{sys}, j} \sigma^{K\pi\pi\pi}_{\text{sys}, j}}.
\end{split}
\label{eq:combo_statsys}\end{aligned}$$
The statistical contribution is the first term and is simply calculated using the individual measurements and the weights. The remaining two terms represent the systematic uncertainty, which is the remainder of the total uncertainty after the statistical contribution has been subtracted. The weighted results are $\Gamma = \left(83.3 \pm 1.2 \pm 1.4\right) \kev$ and $\Delta m_0 = \left(145\,425.9 \pm 0.4 \pm 1.7\right) \kev$.
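The mechanics of this combination can be sketched as follows (not the analysis code). The statistical and systematic inputs for $\Gamma$ are taken from the tables above, but the single effective systematic correlation used here (0.13) is an assumption introduced only to keep the example to a $2\times2$ matrix; the analysis instead sums the per-source terms of Table \[table:syswithcorr\].

```python
import numpy as np

def combine(values, stat, sys, rho_sys):
    """BLUE-style weighted average of two measurements with correlated systematics."""
    values, stat, sys = (np.asarray(x, float) for x in (values, stat, sys))
    cov = np.diag(stat ** 2 + sys ** 2)
    cov[0, 1] = cov[1, 0] = rho_sys * sys[0] * sys[1]
    vinv = np.linalg.inv(cov)
    weights = vinv.sum(axis=1) / vinv.sum()
    mean = weights @ values
    sigma_stat = np.sqrt(np.sum((weights * stat) ** 2))      # first term of Eq. (combo_statsys)
    sigma_tot = np.sqrt(weights @ cov @ weights)
    sigma_sys = np.sqrt(sigma_tot ** 2 - sigma_stat ** 2)    # remaining two terms
    return mean, sigma_stat, sigma_sys, weights

# Gamma inputs (keV) from the two modes; rho_sys = 0.13 is an illustrative assumption.
print(combine(values=[83.3, 83.2], stat=[1.7, 1.5], sys=[1.5, 2.6], rho_sys=0.13))
```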
Summary and conclusions {#sec:conclusion}
=======================
We have measured the pole mass and the width of the $D^{*+}$ meson with unprecedented precision, analyzing a high-purity sample of continuum-produced $D^{*+}$ mesons in $e^+e^-$ collisions at a center-of-mass energy of approximately $10.6 \gev$, corresponding to an integrated luminosity of approximately $477 \invfb$ collected by the BaBar detector. The results for the two independent $ D^0 $ decay modes agree with each other well. The dominant systematic uncertainty on the RBW pole position comes from the azimuthal variation. For the decay mode $D^0\rightarrow K^-\pi^+$ we obtain $\Gamma = \left(83.4 \pm 1.7 \pm 1.5\right) \kev$ and $\Delta m_0 = \left(145\,425.6 \pm 0.6 \pm 1.7\right) \kev$ while for the decay mode $D^0\rightarrow K^-\pi^+\pi^-\pi^+$ we obtain $\Gamma = \left(83.2 \pm 1.5 \pm 2.6\right) \kev$ and $\Delta m_0 = \left(145\,426.6 \pm 0.5 \pm 1.9\right) \kev$. Accounting for correlations, we obtain the combined measurement values $\Gamma = \left(83.3 \pm 1.2 \pm 1.4\right) \kev$ and $\Delta m_0 = \left(145\,425.9 \pm 0.4 \pm 1.7\right) \kev$.
The experimental value of $g_{D^*D\pi}$ is calculated using the relationship between the width and the coupling constant,
$$\begin{aligned}
\Gamma &= \Gamma\left(D^{0}\pi^+\right) + \Gamma\left(D^{+}\pi^0\right) + \Gamma\left(D^{+}\gamma\right) \\
&\approx \Gamma\left(D^{0}\pi^+\right) + \Gamma\left(D^{+}\pi^0\right) \\
&\approx \frac{g^2_{D^* D^0\pi^+}}{24\pi m^2_{D^{*+}}} p^3_{\pi^+} + \frac{g^2_{D^* D^+\pi^0}}{24\pi m^2_{D^{*+}}} p^3_{\pi^0}\end{aligned}$$
where we have again ignored the electromagnetic contribution. The strong couplings can be related through isospin by $g_{D^*D^0\pi^+} = -\sqrt{2} g_{D^* D^+\pi^0}$ [@PhysRevD.65.032003]. Using $\Gamma$ and the mass values from Ref. [@ref:pdg2012] we determine the experimental coupling $g_{D^*D^0\pi^+}^{\text{exp}} = 16.92 \pm 0.13 \pm 0.14$. The universal coupling is directly related to the strong coupling by $\hat{g} = g_{D^*D^0\pi^+} f_\pi / \left(2 \sqrt{m_{D}m_{D^*}}\right)$. This parametrization is different from that of Ref. [@PhysRevD.65.032003] and is chosen to match a common choice when using chiral perturbation theory, as in Refs. [@PhysRevC.83.025205; @PhysRevD.66.074504]. With this relation and $f_\pi = 130.41 \mev$, we find $\hat{g}^{\text{exp}} = 0.570 \pm 0.004 \pm 0.005$.
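As a worked check of these relations (a sketch, with approximate PDG-style masses treated as assumed inputs), the measured width reproduces the quoted couplings:

```python
import math

# Approximate PDG-style masses (GeV) and f_pi; treated as assumed inputs here.
M_DSTAR, M_D0, M_DPLUS = 2.01027, 1.86484, 1.86962
M_PIP, M_PI0, F_PI = 0.13957, 0.13498, 0.13041
GAMMA = 83.3e-6                  # GeV, the combined width from this measurement

def breakup_momentum(M, m1, m2):
    """Daughter momentum in the two-body decay M -> m1 m2 (parent rest frame)."""
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

p_pip = breakup_momentum(M_DSTAR, M_D0, M_PIP)
p_pi0 = breakup_momentum(M_DSTAR, M_DPLUS, M_PI0)

# Gamma ~ g^2 / (24 pi M^2) * (p_pi+^3 + p_pi0^3 / 2),
# using the isospin relation g_{D*D0pi+} = -sqrt(2) g_{D*D+pi0}.
g = math.sqrt(24.0 * math.pi * M_DSTAR**2 * GAMMA / (p_pip**3 + 0.5 * p_pi0**3))
g_hat = g * F_PI / (2.0 * math.sqrt(M_D0 * M_DSTAR))
print(f"g_D*D0pi+ ~ {g:.2f}, g_hat ~ {g_hat:.3f}")   # ~16.9 and ~0.57
```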
[c@ccc]{}\
& $\Gamma$ & $R$ & $\hat{g}$\
& & (model) &\
\
\
$D^{*}\left(2010\right)^{+}$ & $96 \pm 4 \pm 22 \kev$ & $143 \kev$ & $0.82 \pm 0.09$\
$D_{1}\left(2420\right)^{0}$ & $18.9^{+4.6}_{-3.5} \mev$ & $16 \mev$ & $1.09 ^{+0.12}_{-0.11}$\
$D_{2}^{*}\left(2460\right)^{0}$ &$23 \pm 5 \mev$ & $38 \mev$ & $0.77 \pm 0.08$\
\
\[table:eichten\]
The paper by Di Pierro and Eichten [@PhysRevD.64.114004] quotes results in terms of a ratio, $R = \Gamma/\hat{g}^2$, which involves the width of the particular state and provides a straightforward method for calculating the corresponding value of the universal coupling constant within their model. The coupling constant should then take the same value for the selected $D^{\left(*\right)}$ decay channels listed in Table \[table:eichten\], which shows the values of the ratio $R$ extracted from the model and the experimental values for $\Gamma$, as they were in 2001. At the time of publication, $\hat{g}$ was consistent for all of the modes in Ref. [@PhysRevD.64.114004]. In 2010, BaBar published much more precise results for the $D_1\left(2420\right)^0$ and $D_2^*\left(2460\right)^0$ [@PhysRevD.82.111101]. Using those results, this measurement of $\Gamma$, and the ratios from Table \[table:eichten\], we calculate new values for the coupling constant $\hat{g}$. Table \[table:zach\_eichten\] shows the updated results. We estimate the uncertainty on the coupling constant value assuming $\sigma_{\Gamma} \ll \Gamma$. The updated widths reveal significant differences among the extracted values of $\hat{g}$.
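Since $R = \Gamma/\hat{g}^2$, the updated couplings in the following table follow from $\hat{g} = \sqrt{\Gamma/R}$; a one-line check using the central values (the $R$ values are from the model, the widths from the measurements quoted below):

```python
import math

R_MODEL    = {"D*(2010)+": 0.143, "D1(2420)0": 16.0, "D2*(2460)0": 38.0}   # MeV
GAMMA_MEAS = {"D*(2010)+": 0.0833, "D1(2420)0": 31.4, "D2*(2460)0": 50.5}  # MeV

for state, r in R_MODEL.items():
    print(f"{state}: g_hat = {math.sqrt(GAMMA_MEAS[state] / r):.2f}")   # 0.76, 1.40, 1.15
```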
[c@ccc]{}\
& $\Gamma$ & $R$ & $\hat{g}$\
& & (model) &\
\
$D^{*}\left(2010\right)^{+}$ & $83.3 \pm 1.2 \pm 1.4 \kev$ & $143 \kev$ & $0.76 \pm 0.01$\
$D_{1}\left(2420\right)^{0}$ & $31.4 \pm 0.5 \pm 1.3 \mev$ & $16 \mev$ & $1.40 \pm 0.03$\
$D_{2}^{*}\left(2460\right)^{0}$ &$50.5 \pm 0.6 \pm 0.7 \mev$ & $38 \mev$ & $1.15 \pm 0.01$\
\
\[table:zach\_eichten\]
After completing this analysis, we became aware of Rosner’s 1985 prediction that the $D^{*+}$ natural line width should be $83.9 \kev$ [@Rosner:1985dx]. He calculated this using a single quark transition model, relating P-wave $K^* \to K\pi$ decays to P-wave $D^* \to D\pi$ decay properties. Although he did not report an error estimate for this calculation in that work, his central value falls well within our experimental precision. Using the same procedure and current measurements, the prediction becomes $(80.5 \pm 0.1) \kev$ [@rosnerPrivate]. A new lattice gauge calculation yielding $\Gamma(D^{*+}) = \left(76 \pm 7\,^{+8}_{-10}\right) \kev$ has also been reported recently [@Becirevic201394].
The order of magnitude increase in precision confirms the observed inconsistency between the measured $D^{*+}$ width and the chiral quark model calculation by Di Pierro and Eichten [@PhysRevD.64.114004]. The precise measurements of the widths presented in Table \[table:zach\_eichten\] provide solid anchor points for future calculations.
Acknowledgments
===============
Fit results {#app:fitresults}
=============================
In this appendix we present the covariance and correlation matrices for the fits described in Secs. \[sec:resfit\] and \[sec:datafit\].
[ccc]{}\
Parameter & $D^0\to K^-\pi^+$ & $D^0\to K^-\pi^+\pi^-\pi^+$\
\
\
$f_{NG}$& $0.00559 \pm 0.00018$ & $0.0054 \pm 0.00016$\
$\alpha$ & $1.327 \pm 0.091$ & $1.830 \pm 0.092$\
$q$ & $-23.04 \pm 1.02$ & $-29.24 \pm 1.07$\
$f_1$ & $0.640 \pm 0.013$ & $0.730 \pm 0.008$\
$f_2$ & $0.01874 \pm 0.00086$ & $0.02090 \pm 0.00069$\
$\mu_1\, (\kev)$ & $145402.36 \pm 0.33$ & $145402.84 \pm 0.24$\
$\mu_2\, (\kev)$ & $145465.37 \pm 9.39$ & $145451.63 \pm 7.83$\
$\mu_3\, (\kev)$ & $145404.58 \pm 0.75$ & $145399.07 \pm 0.81$\
$\sigma_1\, (\kev)$& $119.84 \pm 0.84$ & $112.73 \pm 0.52$\
$\sigma_2\, (\kev)$ & $722.89 \pm 20.6$ & $695.04 \pm 15.75$\
$\sigma_3\, (\kev)$& $212.31 \pm 2.42$ & $209.54 \pm 2.41$\
\
\[tab:mcres\]
[cccccccccccc]{}\
& $f_{NG}$ & $\alpha$ & $q$ & $f_1$ & $f_2$ & $\mu_1$ & $\mu_2$ & $\mu_3$ & $\sigma_1$ &$\sigma_2$ &$\sigma_3$\
\
\
$f_{NG}$& 3.263e-08 &&&&&&&&&&\
$\alpha$ & 1.002e-05 & 8.311e-03 &&&&&&&&&\
$q$ & -1.139e-04 &-8.914e-02& 1.033e+00 &&&&&&&&\
$f_1$ & -7.780e-07& -3.250e-04& 3.662e-03 & 1.581e-04 &&&&&&&\
$f_2$ & 5.671e-08 & 2.336e-05 &-2.627e-04 &-6.724e-06 & 5.761e-07 &&&&&&\
$\mu_1$& 1.064e-13 &-2.634e-11 &-4.741e-10 & 1.426e-10 & -3.353e-12 & 1.081e-13 &&&&&\
$\mu_2$ & -1.998e-10 &-1.059e-07 & 9.350e-07& 2.265e-08& -1.913e-09 & 2.996e-13 & 8.823e-11 &&&&\
$\mu_3$ & -1.016e-11& -3.919e-09 & 4.775e-08 & 1.158e-09 &-6.553e-11 &-1.423e-13 &-1.102e-12 & 5.624e-13 &&&\
$\sigma_1$ & -4.662e-11& -1.949e-08 &2.196e-07 & 1.012e-08 &-3.980e-10& 9.854e-15 & 1.342e-12 & 7.143e-14 &7.072e-13 &&\
$\sigma_2$ & -2.474e-09& -1.035e-06& 1.173e-05& 1.584e-07 &-1.306e-08& 1.144e-14 & 4.486e-11 & 1.887e-12 & 9.422e-12 & 4.260e-10 &\
$\sigma_3$ & -1.756e-10 & -7.341e-08 & 8.275e-07 & 2.942e-08 &-1.469e-09 & 2.487e-14 & 5.008e-12 & 2.302e-13 & 1.818e-12 &3.528e-11 & 5.872e-12\
\
\[tab:mcres\_kpi\_cov\]
[cccccccccccc]{}\
& $f_{NG}$ & $\alpha$ & $q$ & $f_1$ & $f_2$ & $\mu_1$ & $\mu_2$ & $\mu_3$ & $\sigma_1$ &$\sigma_2$ &$\sigma_3$\
\
\
$f_{NG}$& 1.000 &&&&&&&&&&\
$\alpha$ & 0.608 & 1.000 &&&&&&&&&\
$q$ & -0.621& -0.962& 1.000&&&&&&&&\
$f_1$ & -0.343& -0.284& 0.287& 1.000 &&&&&&&\
$f_2$ & 0.414& 0.338& -0.340& -0.705& 1.000 &&&&&&\
$\mu_1$& 0.002& -0.001& -0.001& 0.034& -0.013& 1.000&&&&&\
$\mu_2$ & -0.118& -0.124& 0.098& 0.192& -0.268& 0.097& 1.000&&&&\
$\mu_3$ & -0.075& -0.057& 0.063& 0.123& -0.115& -0.577& -0.156& 1.000&&&\
$\sigma_1$ & -0.307& -0.254& 0.257& 0.958& -0.624& 0.036& 0.170& 0.113& 1.000 &&\
$\sigma_2$ & -0.664& -0.550& 0.559& 0.611& -0.834& 0.002& 0.231& 0.122& 0.543& 1.000 &\
$\sigma_3$ & -0.401& -0.332& 0.336& 0.966& -0.799& 0.031& 0.220& 0.127& 0.892& 0.705& 1.000\
\
\[tab:mcres\_kpi\_corr\]
[cccccccccccc]{}\
& $f_{NG}$ & $\alpha$ & $q$ & $f_1$ & $f_2$ & $\mu_1$ & $\mu_2$ & $\mu_3$ & $\sigma_1$ &$\sigma_2$ &$\sigma_3$\
\
\
$f_{NG}$& 2.746e-08 &&&&&&&&&&\
$\alpha$ & 9.170e-06 &8.565e-03 &&&&&&&&&\
$q$ & -1.076e-04 &-9.539e-02 &1.149e+00 &&&&&&&&\
$f_1$ & -3.981e-07& -1.799e-04& 2.071e-03& 6.953e-05 &&&&&&&\
$f_2$ & 4.133e-08& 1.829e-05& -2.100e-04& -3.847e-06& 4.784e-07 &&&&&&\
$\mu_1$& 1.274e-12& 5.343e-10& -6.776e-09& -1.097e-10& 9.246e-12& 5.648e-14 &&&&&\
$\mu_2$ & -1.434e-10& -7.936e-08& 6.757e-07& 1.332e-08& -1.478e-09& 1.399e-13& 6.134e-11 &&&&\
$\mu_3$ & -1.909e-13& 2.382e-10& 2.094e-09& -6.916e-10& 1.981e-11& -1.016e-13& -1.394e-12& 6.582e-13 &&&\
$\sigma_1$ & -2.191e-11& -9.918e-09& 1.142e-07& 4.099e-09& -2.061e-10& -5.895e-15& 7.264e-13& -4.344e-14& 2.724e-13 &&\
$\sigma_2$ & -1.669e-09& -7.535e-07& 8.781e-06& 7.332e-08& -8.820e-09& -2.122e-13& 2.902e-11& -1.152e-13& 3.967e-12& 2.480e-10 &\
$\sigma_3$ & -1.428e-10& -6.452e-08& 7.441e-07& 1.919e-08& -1.303e-09& -3.679e-14& 4.432e-12& -1.616e-13& 1.084e-12& 2.561e-11& 5.806e-12\
\
\[tab:mcres\_k3pi\_cov\]
[cccccccccccc]{}\
& $f_{NG}$ & $\alpha$ & $q$ & $f_1$ & $f_2$ & $\mu_1$ & $\mu_2$ & $\mu_3$ & $\sigma_1$ &$\sigma_2$ &$\sigma_3$\
\
\
$f_{NG}$& 1.000 &&&&&&&&&&\
$\alpha$ & 0.598 & 1.000 &&&&&&&&&\
$q$ & -0.606 &-0.962 & 1.000&&&&&&&&\
$f_1$ & -0.288& -0.233 & 0.232& 1.000 &&&&&&&\
$f_2$ & 0.361& 0.286& -0.283 & -0.667 &1.000 &&&&&&\
$\mu_1$& 0.032& 0.024& -0.027& -0.055& 0.056& 1.000&&&&&\
$\mu_2$ & -0.110& -0.109& 0.080& 0.204& -0.273& 0.075& 1.000&&&&\
$\mu_3$ & -0.001& 0.003& 0.002& -0.102& 0.035& -0.527& -0.219& 1.000 &&&\
$\sigma_1$ & -0.253& -0.205& 0.204& 0.942& -0.571& -0.048& 0.178& -0.103& 1.000 &&\
$\sigma_2$ & -0.639& -0.517& 0.520& 0.558& -0.810& -0.057& 0.235& -0.009& 0.483& 1.000 &\
$\sigma_3$ & -0.358& -0.289& 0.288& 0.955& -0.782& -0.064& 0.235& -0.083& 0.862& 0.675& 1.000\
\
\[tab:mcres\_k3pi\_corr\]
[ccccccc]{}\
& $\Delta m_0$ & $\epsilon$ & $N_{sig}$ & $N_{bkg}$ & $c$ & $\Gamma$\
\
\
$\Delta m_0$ & 3.181e-13 & &&&&\
$\epsilon$ & 4.060e-10 & 4.909e-05 & &&&\
$N_{sig}$ & 3.782e-06 & 3.533e-01 &1.199e+04 &&&\
$N_{bkg}$ & -3.692e-06 &-3.448e-01 &-8.631e+03 & 1.470e+05 & &\
$c$ & -6.288e-09 &-5.534e-04 &-1.711e+01 & 1.668e+01 & 7.936e-02 &\
$\Gamma$ & -1.017e-13 &-9.965e-09 &-1.084e-04 & 1.058e-04 & 1.779e-07 & 2.920e-12\
\
\[tab:rd\_kpi\_cov\]
[ccccccc]{}\
& $\Delta m_0$ & $\epsilon$ & $N_{sig}$ & $N_{bkg}$ & $c$ & $\Gamma$\
\
\
$\Delta m_0$ & 1.000 & &&&&\
$\epsilon$ & 0.103 &1.000 &&&&\
$N_{sig}$ & 0.061& 0.461& 1.000& &&\
$N_{bkg}$ & -0.017 &-0.128& -0.206& 1.000& &\
$c$ & -0.040& -0.280& -0.555& 0.154& 1.000&\
$\Gamma$ & -0.106& -0.832& -0.579& 0.161& 0.370& 1.000\
\
\[tab:rd\_kpi\_corr\]
[ccccccc]{}\
& $\Delta m_0$ & $\epsilon$ & $N_{bkg}$ & $N_{sig}$ & $c$ & $\Gamma$\
\
\
$\Delta m_0$ & 2.206e-13 & &&&&\
$\epsilon$ & 2.586e-10 & 4.605e-05 & &&&\
$N_{bkg}$ & 3.251e-06 & 4.233e-01 & 2.259e+04 &&&\
$N_{sig}$ & -3.208e-06 & -4.179e-01& -1.313e+04 & 1.874e+05 & &\
$c$ & -1.742e-09& -2.021e-04& -8.226e+00 & 8.095e+00 & 1.678e-02 &\
$\Gamma$ & -6.213e-14 & -8.633e-09 & -1.191e-04 & 1.175e-04 & 6.072e-08 & 2.289e-12\
\
\[tab:rd\_k3pi\_cov\]
[ccccccc]{}\
& $\Delta m_0$ & $\epsilon$ & $N_{bkg}$ & $N_{sig}$ & $c$ & $\Gamma$\
\
\
$\Delta m_0$ & 1.000 & &&&&\
$\epsilon$ & 0.081 &1.000 &&&&\
$N_{bkg}$ & 0.046 & 0.415 & 1.000& &&\
$N_{sig}$ & -0.016 & -0.142 & -0.202 & 1.000 &&\
$c$ & -0.029 &-0.230 &-0.422& 0.144& 1.000 &\
$\Gamma$ & -0.087& -0.841& -0.524& 0.179& 0.310& 1.000\
\
\[tab:rd\_k3pi\_corr\]
| |
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world.
Consequently, what is the Darwin theory of evolution?
Darwinism is a theory of biological evolution developed by the English naturalist Charles Darwin (1809–1882) and others, stating that all species of organisms arise and develop through the natural selection of small, inherited variations that increase the individual's ability to compete, survive, and reproduce.
What is the theory of evolution by natural selection?
Darwin's theory. In 1859, Charles Darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. He defined natural selection as the "principle by which each slight variation [of a trait], if useful, is preserved". | https://answersdrive.com/what-is-the-definition-of-scientific-theory-5330209 |
With all the focus on whether or not to pass New START in a lame duck session before Christmas, very few people noticed that, in the blink of an eye, Congress passed the 1,000 page National Defense Authorization Bill of Fiscal Year 2011 (also referred to as “NDAA”) with no deliberation or amendments. This bill, which usually requires some two weeks of floor debate, affects policy and sets funding levels for all defense programs, including nuclear weapons. This year, the NDAA contained a number of noteworthy provisions related to nuclear weapons force structure, funding, and maintenance. There were also a few provisions that, thankfully, were left on the cutting room floor. I am going to do a couple of short posts on NDAA throughout the week.
I mentioned section 3114 in a previous post. That was the provision based on Nunn-McCurdy that will create cost and schedule requirements if DOE exceeds 125% of its original baseline cost estimate for nuclear weapons programs.
In that post I also mentioned section 1049 of the NDAA, but I provide more information here. Section 1049 requires the National Nuclear Security Administration (NNSA) to develop a methodology and criteria for determining the safety and security features for nuclear weapons.
Over the past several years, NNSA has argued that it needs to make modifications to the nuclear stockpile to increase the safety of U.S. weapons against accidents and their security against theft and unauthorized use. As has been reported in other blogs, NNSA officials have said that the goal is to make nuclear weapons as “safe as a coffee table.” In other words, NNSA’s goal is to make an accidental or unauthorized explosion impossible. While this may sound good, it is simply not feasible for deployed weapons that are intended to be operational at a moment’s notice. However, by using the “coffee table” standard as its baseline, NNSA would likely end up making expensive and endless modifications to warheads in a futile quest for perfect safety and security.
On the other hand, in the past, NNSA sometimes ignored even modest safety standards if they proved inconvenient.
This new legislation will hopefully force DOE to come up with reasonable standards for safety and security, based on realistic criteria for the likelihood of theft or accident, and to stick to them.
The Report language in the Senate version of the NDAA provides some more detail:
For instance, at one point a standard for the nuclear stockpile was to have fire resistant pits in all nuclear weapons. A decision as to whether or not a warhead type actually was designed to have a fire resistant pit was made based on the requirements for the warhead, including the environment in which the warhead would be stored and deployed. While exceptions to the standard were made in the past, exceptions to the new baseline safety and security criteria should be undertaken only with a clear understanding of the risk entailed by such a decision.
In addition to making sensible threat assessments, the legislation also requires NNSA to do a cost/benefit analysis for warhead modifications. The NDAA report states:
While the committee believes strongly that new threats and vulnerabilities should be addressed, the committee also believes that there should be standards established and a review as to how best to meet the standards and address the vulnerabilities even in a constrained budget environment.
This means that NNSA will have to determine if there is a cheaper way of increasing the safety and security of nuclear warheads other than its preferred choice. As a hypothetical example, it might not make sense for the United States to spend a lot of money to modify the B61 warhead to make it slightly less vulnerable to unauthorized use while it is deployed in Europe if it could achieve the same result by spending less money to increase the security of the weapon’s storage and transportation.
This legislation is particularly important because, over the next 10 years, the United States plans to spend around $4 billion modifying the B61 warhead. In setting clear standards and making these assessments, NNSA will hopefully make better decisions about whether warhead modifications are actually needed, or if there are less expensive or intrusive options.
| https://allthingsnuclear.org/nroth/fiscal-year-2011-defense-authorization-bill-and-nuclear
The utility model discloses a hot metal ladle heat-preserving device which is a heat-preserving cover used for covering a hot metal ladle. The hot metal ladle heat-preserving device comprises a cap body and a cover body, wherein the shape of the cap body is identical with the shape of a ladle opening of the hot metal ladle, and the cover body is arranged along the edges of the cap body. Heat-preserving cotton is paved on the surface, located in the cover body, of the cap body, and a lifting lug used for lifting is further arranged on the cap body. Compared with the prior art, the hot metal ladle heat-preserving device is provided with the heat-preserving cover capable of covering the hot metal ladle, and the heat-preserving cotton with the very good heat preservation function is further arranged in the heat-preserving cover, thus, heat in the hot metal ladle does not dissipate out easily, and the good heat preservation function is achieved; after the hot metal ladle undergoes pouring, the temperature of the hot metal ladle can be prevented from decreasing, and when the hot metal ladle is used again, heat can be utilized well; outlet cold ladle melted iron is reduced, the decreasing speed of the temperature of the melted iron is reduced, and pouring temperature is ensured. | |
What Is Stigma?
Stigma is a degrading and debasing attitude of society that discredits a person or a group because of an attribute (such as an illness, deformity, color, nationality, religion, etc.). The resulting coping behavior of the affected person results in internalized stigma. This perceived or internalized stigma by the discredited person is equally destructive whether or not actual discrimination occurs. Stigma destroys a person’s dignity; marginalizes affected individuals; violates basic human rights; markedly diminishes the chances of a stigmatized person achieving full potential; and seriously hampers the pursuit of happiness and contentment.
When stigma is associated with a medical condition or disability it prevents individuals from seeking evaluation and treatment or from disclosing the diagnosis to the people most likely to provide support to them. It also affects individuals from following treatment guidelines. While there are many illnesses that have been severely stigmatized in the past, it is generally agreed that HIV/AIDS is the most stigmatized medical condition in the history of mankind.
While society elevates the status of those receiving treatment for some conditions such as cancer or serious injuries as heroes, those who have acquired HIV are subjected to layers upon layers of stigma with assumptions that these individuals are deserving of punishment for their “assumed behavior that led them to get HIV” and they are often shunned.
Stigma prevents individuals from getting tested for HIV, seeking medical care, disclosing diagnosis and in adhering to treatment and follow up. Fear of social abandonment and losing intimate partners prevents many with HIV from sharing the diagnosis with their loved ones and sexual partners. Stigma has become a major reason why the HIV epidemic continues and millions of people are getting infected and dying with HIV every year.
| |
New research suggests that interbreeding between early humans and Neanderthals was not the rarity scientists had once thought it was, but rather a more regular occurrence over several thousands of years.
In a study published Thursday in the journal Nature Ecology and Evolution, a pair of researchers from Temple University explained how the interbreeding started about 75,000 years ago -- not long after early humans moved out of Africa and into Europe and certain parts of Asia. There, the early humans first encountered Neanderthals. As noted by History, earlier studies had suggested that most modern humans have about 2 percent Neanderthal DNA as a result of interbreeding between the two species.
While a number of papers had suggested in recent years that modern humans, with the exception of those whose ancestors never left Africa, got their Neanderthal DNA from occasional encounters with the extinct hominid species, the researchers behind the new study stated otherwise. Said research team suggested that the early humans who populated Eurasia after leaving Africa interbred with Neanderthals at "multiple points in time" over a span of 35,000 years.
As quoted by the Daily Mail, study co-author Joshua Schraiber explained that there might have been "much more" interbreeding between early humans and Neanderthals in Eurasia over that timeframe.
"Some of the fantastical aspects come from a lack of clear definition of species in this case. It is always very hard to know if an extinct group constituted a different species or not," Schraiber continued.
"My guess is that any time two different human groups lived in the same place at the same time for a while, they probably had some sort of breeding contact."Given the theory that East Asians have about 12 to 20 percent more Neanderthal DNA than Europeans, Schraiber and co-author Fernando Villanea performed computer simulations to determine whether there might have been a substantially greater number of interbreeding episodes between early humans and Neanderthals.
The researchers' AI-based techniques revealed that modern humans have varying percentages of Neanderthal DNA due to the frequent interbreeding that took place between Neanderthals, East Asians, and Europeans -- which Schraiber described as proof of a "more complex" series of interactions involving our ancestors.
As further proof of the frequent interbreeding theorized by the Temple researchers, the early human mandible known as Oase 1 was mentioned as a more recent example. According to Phys.org, this fossil was first discovered in 2002 in Romania -- and was believed to have had a Neanderthal ancestor removed by about four to six generations.
"It had recent Neanderthal ancestry. These fossils are about 37,000 to 38,000 years old -- so at least some interbreeding must have been going on as recently as then," Schraiber said.
The new study, however, comes with its share of limitations. According to GenomeWeb, the Temple researchers' model took the assumption that Neanderthal ancestry in present-day humans is neutral -- as opposed to being deleterious, or harmful -- and did not take into account the possibility that Neanderthal ancestry in today's East Asians might have actually been associated with the Denisovans, a recently discovered species of ancient hominids.
Despite those limitations, Max Planck Institute for Evolutionary Anthropology scientist Fabrizio Mafessoni explained in a commentary on the study that the findings mesh with the "emerging view of complex and frequent interactions" between various hominid groups. | https://www.inquisitr.com/5181062/neanderthals-and-early-humans-interbred-far-more-often-than-once-believed-new-study-claims/ |
O Timothy, guard the deposit entrusted to you. Avoid the irreverent babble and contradictions of what is falsely called “knowledge,” 21 for by professing it some have swerved from the faith. (1 Tim. 6:20-21)
The third problem that the traditional interpretation of Genesis has is that it believes that Moses was correct when he recorded that the creation of vegetation preceded the creation of the sun.
3) Vegetation is created before the sun, making photosynthesis impossible. Had a college atheist biology major bring that one up. If you are going to insist on a 6 literal days and that the light of Day 1 kept the vegetation alive until Day 4 then you run into other problems…
Now, this is probably one of the less interesting of the supposed problems for the traditional interpretation of Genesis 1-2. This way of reading Scripture is inculcated with enlightenment rationalism. The historical-critical method of interpreting Scripture, which came to prominence on the heels of the enlightenment, places human reason in the judgment seat and Scripture in the dock. In the name of scientific objectivity, some are reconsidering the straightforward claims of the Bible. And in this case, something as straightforward as the supernatural creative activity of God and his subsequent supernatural revelation of Scripture become the objects of human judgment. Keep in mind, the issue here is not a matter of grammar. Behind any grammatical questions, there is the question of a reliable account of the creation order as outlined here in Genesis 1.
The Problem of the Impossibility of Photosynthesis without the Sun
It is not surprising that a non-Christian would find it problematic that photosynthesis could occur prior to the Sun being created because, well, that would violate the laws of physics. Photosynthesis, 6CO2 + 6H2O ——> C6H12O6 + 6O2, is required in order for vegetation to survive. Therefore, without the Sun, there is no photosynthesis. And without photosynthesis, there can be no vegetation. The only way such a scenario could present a valid objection to the creation account is if one presupposed that God would have to create or order the universe in accordance with pre-existing laws of nature. As we know, the Christian worldview has historically affirmed the doctrine of creation ex nihilo. That is to say that God created the entire physical universe and everything that is within it by the word of his power, from nothing, in six days and rested on the seventh. (Gen. 2:1-3; Ex. 20:8-11; Heb. 4:4; Col. 1:16; Heb. 11:3)
My claim is that the sun nor photosynthesis as we know it were necessary to sustain vegetation as created on the third day. If God exists, then miracles are possible. Vegetation surviving apart from the laws of physics would be defined after the fact as a miracle. Therefore, vegetation surviving apart from photosynthesis is possible according to the Christian worldview. This leads me to conclude that the claim that the traditional interpretation of Genesis encounters a problem due to this particular order of creation is mistaken.
The issue this raises concerns the appropriateness of applying the laws of nature to the act of supernatural creation. There are two basic commitments set against one another in this objection to the traditional interpretation of creation. Christian belief takes the grammar of the text and its record at face value. There is no ambiguity in the text. It claims that God created vegetation on day three and the sun on day four. The reader has a choice at this point. Believe what the text claims or reject the text. In order for the decision to attain a degree of credibility, it should have some warrant, backing, in other words, there should be some rational grounding whatever the final decision turns out to be.
The decision to reject the Genesis account as either out of order or even to relegate it to myth or saga is not the product of grammatical analysis. The grammar offers no support here. Instead, the decision must be based on something other than grammar. Specifically, the decision seems most likely to be the outworking of philosophical commitments. The real issue is that science, or better, the philosophy of science has displaced the Christian philosophy of revelation and the epistemic authority of Scripture has been subordinated to autonomous human reason. Is it a good practice for Christians to accept secular theories or philosophies of science? Vern Poythress writes, “The particular form that sciences have taken in our time is greatly influenced by a historical development that has contained both good and bad influences. The existing form of sciences therefore cannot serve as a norm for us.” [Poythress, Philosophy, Science, and the Sovereignty of God, 7] In short, the answer is not no. Without a philosophy of science and of the world, scientific method is impossible. J.P Moreland points out one of the most serious problems attaching itself to science: First, there is no definition of science, no set of necessary and sufficient conditions for something to count as science, no such thing as the scientific method, that can be used to draw a line of demarcation between science and nonscience. [Moreland, Christianity and the Nature of Science, 13] If there is no agreed upon definition, then it seems that this criticism of the traditional interpretation of Genesis 1-2 has no grounding, either in exegesis, in philosophy, or in science. Science is the modern tool by which rebellious men seek to control everything, including the divine revelation. Rushdoony observes, “In terms of this evolutionary perspective, science is not so much the understanding of things as the controlling of things. [Rushdoony, The Mythology of Science, 30]
Christians must improve their critical thinking skills if they are going to accurately discern the hidden agenda of the blackened and depraved heart of secular man. A popular expression by the National Science Foundation is displayed in every high school textbook: “Science extends and enriches our lives, expands our imaginations and liberates us from the bonds of ignorance and superstition.” [Berlinski, The Devil’s Delusion, 15] Indeed, it is not just unwise to uncritically bow the knee of Christian theology at the throne of science, it can be reckless and is quite often catastrophic. For instance, the last few months and years have witnessed a number of popular pastors attempting to help Christians determine which parts of the bible they don’t have to accept as true on the ground that they are just too outrageous for modern sensibilities. Andy Stanley’s position on the virgin birth stands out as one example. Mike Licona’s reductionist view that all we have to defend is the resurrection miracle where Christian belief is concerned. Everything else is fair game. The slippery slope argument may be a logical fallacy, but that does not mean it is not a tragic reality of many supposed leaders inside the Christian church.
In addition to its lack of coherence with Christian belief, this objection regarding vegetation being created before the sun has another problem. If detractors are going to employ the laws of nature to determine when a teaching of Scripture is rationally acceptable or not, then we will have to toss out all of the miracles of the Bible. For example, Numbers 22:28 records that Balaam’s donkey literally spoke to him. I have to say that enlightened man simply cannot accept such an outrageous story. Surely donkeys cannot speak and this story cannot be taken literally! Another example is found in 2 Kings 6:6 where Elisha makes an axe head float. This miracle is so trivial there is no reason for us to expect that it is little more than legend, myth, or exaggeration. The laws of physics would not permit an axe head to float any more than they would permit living vegetation without the sun. The number of miracles that would have to be eliminated if Nathan’s objection from photosynthesis were valid, is overwhelming. Once the dominos begin to fall, there is no end to what amounts to the ultimate collapse of anything remotely resembling consistency in Christian belief. The virgin birth, the resurrection, Christ’s walking on the water, etc. If plants cannot survive without naturally occurring photosynthesis, then a donkey didn’t talk, an axe head didn’t float, the Messiah didn’t walk on water, a virgin didn’t become pregnant, and a dead man did not rise from the dead. This small objection logically leads to the end of Christian belief.
In summary then, it seems that the claim that the traditional interpretation of Genesis 1, an interpretation of the text that literally places the creation of vegetation before the creation of the sun, is itself wrought with problems is based on prior commitment to a philosophy of science. Since there is no agreed upon definition for what is and is not science, and since there are numerous philosophies of science to choose from, and since science is often proven to be wrong, it seems that to base one’s criticism of the traditional interpretation of Genesis 1-2 on science is tenuous at best. The ground upon which such an objection rests feel more like quicksand that solid ground. The warrant for this argument is incredibly weak. Surely if God can create something from nothing, and since God is the author of what appears to be natural law to begin, and since God orders the physical properties of the universe in whatever way he pleases, according to His own purpose, any objection to the order of creation based solely on the belief that it somehow violates some existing law of nature reduces to absurdity. | https://reformedreasons.com/2017/05/07/the-battle-for-the-beginning-3-of-12/ |
The invention provides a method for predicting delirium through artificial intelligence, and belongs to the field of critical patient evaluation. The method comprises the steps: related data of a patient and a hospital are included through a structured language; feature variables are screened based on a random forest, and a prediction model is constructed through regularization logistic regression, K nearest neighbor, a support vector machine, the random forest, limit gradient lifting and a deep neural network algorithm; and through prediction probabilities of different methods, ensemble learning is carried out through a limit gradient lifting algorithm to finally predict whether a critical patient has delirium or not, and the occurrence probability of the delirium is further calculated. According to the method, patient data and hospital feature information are utilized as much as possible, and individualized evaluation for predicting the critical patient is carried out, so that prediction bias caused by a certain model is weakened, and prediction accuracy is improved. | |
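A schematic sketch (Python, scikit-learn) of the pipeline described in this abstract is given below. Everything in it is illustrative: the data are synthetic, the hyperparameters are placeholders, scikit-learn's GradientBoostingClassifier stands in for the "limit gradient lifting" (extreme gradient boosting) stage, and a real implementation would use out-of-fold predictions for the stacking step rather than training-set probabilities.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for structured patient and hospital records.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Screen feature variables with a random forest.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))
selector.fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

# 2) Base learners: regularized logistic regression, KNN, SVM, random forest, neural network.
base_models = [
    LogisticRegression(C=1.0, max_iter=1000),
    KNeighborsClassifier(n_neighbors=15),
    SVC(probability=True),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
]
train_probs = np.column_stack(
    [m.fit(X_train_sel, y_train).predict_proba(X_train_sel)[:, 1] for m in base_models])
test_probs = np.column_stack(
    [m.predict_proba(X_test_sel)[:, 1] for m in base_models])

# 3) Ensemble the base-model probabilities with a boosted-tree meta-learner
#    (stand-in for extreme gradient boosting / XGBoost).
meta = GradientBoostingClassifier(random_state=0).fit(train_probs, y_train)
print("predicted delirium probability, first 5 patients:", meta.predict_proba(test_probs)[:5, 1])
```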
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a fixture, and more particularly to a fixture that is mounted on a photosensitive seal machine to provide a locating effect to stamps and may improve the exposure effect of the photosensitive seal machine.
2. Description of Related Art
With reference to FIGS. 13 and 14, stamps 90 may be put on a photosensitive seal machine 80 to form patterns or characters on printing faces 91 of the stamps 90. The photosensitive seal machine 80 has a seat 81, an exposure apparatus 82, and a cover 83. The seat 81 has a chamber 84 and an opening 85. The chamber 84 is formed in the seat 81. The opening 85 is formed through a top surface of the seat 81 and is in communication with the chamber 84. A plate 86 is translucent, is mounted on the top surface of the seat 81, and covers the opening 85. The exposure apparatus 82 is mounted in the chamber 84 of the seat 81. The cover 83 is pivotally mounted on and covers the seat 81.
A transfer paper 70 is put on the plate 86. The transfer paper 70 has multiple exposed areas 71 and multiple dotted lines 72. The exposed areas 71 are formed on the transfer paper 70 at spaced intervals. The dotted lines 72 are formed on the transfer paper 70 and respectively surround the exposed areas 71. The printing face 91 of each stamp 90 is deposited on a respective one of the exposed areas 71 of the transfer paper 70 mounted on the plate 86. After an outer edge of each stamp 90 aligns with the dotted line 72 that surrounds the corresponding exposed areas 71, the cover 83 may be covered on the seat 81 to perform an exposure operation of each stamp 90 on the seat 81.
However, each stamp 90 may not be located well to deposit on the transfer paper 70. Upon shock or while the cover 83 is covering the seat 81, each stamp 90 deposited on the transfer paper 70 is easy to deviate from a corresponding dotted line 72. The printing face 91 of each stamp 90 is easy to move out of the corresponding exposed area 71 of the transfer paper 70. Therefore, the exposure effect of the stamp 90 is bad.
To overcome the shortcomings, the present invention tends to provide a fixture to mitigate or obviate the aforementioned problems.
SUMMARY OF THE INVENTION
The main objective of the invention is to provide a fixture that may locate the stamps for improving the exposure effect.
The fixture for a photosensitive seal machine is deposited on a transfer paper on the photosensitive seal machine to locate at least one stamp, and the fixture has a frame. The frame has a top surface, a bottom surface, multiple locating recesses and multiple retaining faces. The bottom surface is opposite to the top surface. The locating recesses are formed through the top surface of the frame and the bottom surface of the frame at spaced intervals. The retaining faces are formed in the frame and respectively surround the locating recesses of the frame.
The transfer paper is deposited on the photosensitive seal machine. The fixture is deposited on the transfer paper. The transfer paper on the photosensitive seal machine is pressed by the frame of the fixture. The locating recesses of the frame respectively align with the exposed areas formed on the transfer paper. The stamps are respectively deposited into the locating recesses of the frame and are respectively retained by the retaining faces of the frame. The stamps cannot move out of the frame. A printing face of each stamp aligns with a corresponding exposed area on the transfer paper. Therefore, each stamp is located well by the fixture and cannot deviate from a corresponding dotted line. The locating effect of the fixture is good to increase the exposure effect of the stamps and the yield rate of the stamps.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
With reference to FIGS. 1 and 2, a first embodiment of a fixture for a photosensitive seal machine in accordance with the present invention comprises a frame 10. The frame 10 has a top surface, a bottom surface, multiple locating recesses 11, and multiple retaining faces 12. The bottom surface of the frame 10 is opposite to the top surface of the frame 10. The locating recesses 11 are formed through the top surface and the bottom surface of the frame 10 at spaced intervals. The retaining faces 12 are formed in the frame 10 and respectively surround the locating recesses 11 of the frame 10. Furthermore, the fixture has an auxiliary locating face 13. The auxiliary locating face 13 is formed around an outer surface of the frame 10. In the first embodiment of the fixture, the frame 10 is rectangular in shape. The locating recesses 11 are rectangular in shape.
With reference to FIG. 3, in a second embodiment of the fixture in accordance with the present invention, the frame 10 is rectangular in shape. The locating recesses 11 are circular in shape. Furthermore, the fixture has multiple locating notches 14. The locating notches 14 are respectively formed in the retaining faces 12 and are respectively in communication with the locating recesses 11. Stamps (not shown) having circular outer surfaces may be deposited to the locating recesses 11, which are circular. Each one of the circular outer surfaces of the stamps has a retaining protrusion. The retaining protrusion is inserted into a corresponding locating notch 14 of one of the locating recesses 11 of the frame 10 for preventing rotation of the stamp having the circular outer surface. Therefore, the stamp having the circular outer surface is retained securely.
With reference to FIGS. 4, 9, and 10, in an exposure process of the stamps 40, a transfer paper 30 is deposited on the photosensitive seal machine 20 and the fixture is deposited on the transfer paper 30 to locate at least one of the stamps 40.
With reference to FIGS. 5 and 10, the photosensitive seal machine 20 has a seat 21, a plate 24, an exposure apparatus 22, and a cover 23. The plate 24 is translucent and is mounted on a top surface of the seat 21. The exposure apparatus 22 is mounted in the seat 21 below the plate 24. The cover 23 is pivotally connected to and covers the seat 21.
With reference to FIG. 6, the transfer paper 30 is put on the plate 24 that is mounted on the seat 21. The transfer paper 30 has multiple exposed areas 32 and multiple dotted lines 31. The exposed areas 32 are formed on the transfer paper 30 at spaced intervals and have patterns or characters formed on the exposed areas 32. The dotted lines 31 are formed on the transfer paper 30 and respectively surround the exposed areas 32.
With reference to FIG. 7, the frame 10 is deposited on a top surface of the transfer paper 30. The locating recesses 11 of the frame 10 are respectively corresponding to the exposed areas 32 of the transfer paper 30 in shape. The outer edges of the locating recesses 11 respectively align with the dotted lines 31 surrounding the exposed areas 32.
With reference to FIGS. 8 to 10, the stamps 40 are respectively deposited into the locating recesses 11 of the frame 10. The printing faces 41 of the stamps 40 respectively contact the exposed areas 32 of the transfer paper 30. The outer surfaces of the stamps 40 are respectively retained by the retaining faces 12 of the frame 10. The stamps 40 may be positioned securely in the frame 10. Upon shock or vibration or when the cover 23 is covering the seat, the printing faces 41 of the stamps 40 may still contact the exposed areas 32 of the transfer paper 30. The printing faces 41 of the stamps 40 are hard to deviate from the exposed areas 32 of the transfer paper 30.
With reference to FIG. 11, when the cover 23 covers the seat, the printing faces 41 of the stamps 40 are exposed by the exposure apparatus 22 of the photosensitive seal machine 20. The patterns or the characters shown on the exposed areas 32 of the transfer paper 30 are transfer printed on the printing faces 41 of the stamps 40.
With reference to FIGS. 11 and 12, the seat 21 further has a wall 25. The wall 25 is formed on the top surface of the seat 21 above the plate 24. When the frame 10 is mounted on the transfer paper 30, the wall 25 surrounds the auxiliary locating face 13 of the frame 10 for retaining the frame 10.
Accordingly, the fixture may be deposited on the transfer paper 30. The locating recesses 11 of the fixture respectively align with the exposed areas 32 of the transfer paper 30, and then the stamps 40 are further deposited into the locating recesses 11 of the frame 10. The printing faces 41 of the stamps 40 align with the exposed areas 32 of the transfer paper 30 and would not move out of the exposed areas 32 of the transfer paper 30. The adjustment time of positioning each one of the stamps 40 is decreased. After each one of the stamps 40 is aligned with the locating recess 11 of the frame 10, the printing face 41 of each one of the stamps 40 is hard to deviate. Therefore, the locating effect of the fixture is good to increase the exposure effect of the stamps 40 and raise the yield rate of the stamps 40.
For producing a certain mass quantity of the stamps, the times of exposure of the photosensitive seal machine 20 are decreased since the yield rate of the stamps 40 is increased. In addition, within certain times of exposure of the photosensitive seal machine 20, the production quantity of the exposed stamps 40 is increased since the yield rate of the stamps is raised. The fixture for a photosensitive seal machine 20 has an advantage of improving the working efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a perspective view of a first embodiment of a fixture for a photosensitive seal machine in accordance with the present invention;
FIG. 2 is a cross-sectional side view of the fixture in FIG. 1;
FIG. 3 is a perspective view of a second embodiment of a fixture for a photosensitive seal machine in accordance with the present invention;
FIG. 4 is an operational top view of the fixture in FIG. 1 with the photosensitive seal machine, a transfer paper, and multiple stamps;
FIG. 5 is an enlarged top view of the photosensitive seal machine in FIG. 4;
FIG. 6 is an operational top view of the transfer paper in FIG. 4, mounted on the photosensitive seal machine;
FIG. 7 is an operational top view of the fixture in FIG. 1, mounted on the transfer paper;
FIG. 8 is an operational top view of the stamps in FIG. 4, deposited on the fixture;
FIG. 9 is an exploded perspective view of the fixture in FIG. 1 for positioning the stamps;
FIG. 10 is a side view in partial section of the fixture in FIG. 1, deposited on the photosensitive seal machine and not covered by a cover of the photosensitive seal machine;
FIG. 11 is a side view in partial section of the fixture in FIG. 1, deposited on the photosensitive seal machine and covered by the cover of the photosensitive seal machine;
FIG. 12 is an enlarged rear side view in partial section of the fixture along line 12-12 in FIG. 8;
FIG. 13 is an operational top view of a photosensitive seal machine, multiple conventional stamps, and a conventional transfer paper in accordance with the prior art; and
FIG. 14 is a side view in partial section of the photosensitive seal machine, multiple conventional stamps, and a conventional transfer paper in FIG. 13. | 
We thank Chairperson Etta Rosales and two other Commissioners of the CHR (Commission on Human Rights) for the visit on 25 March to find out if our human rights were respected during our arrest.
We emphasized to the CHR commissioners that the most important and serious issues relative to our human rights are our being framed-up, accused of fabricated charge of illegal possession of firearms and explosives and the planting of evidence to justify our illegal arrest and detention and that of our five companions in the house.
This gross violation of human rights completely negates all the efforts of CHR to initiate in the PNP and AFP the respect for human rights. The few and trickled measures to effect in the PNP and AFP the respect for human rights are rendered inutile if the more fundamental right against illegal arrest is wantonly violated by the whole institution and the State. This creates intense cynicism not only among the victims but also in the ranks of the police and soldiers.
The true and serious promotion of the respect for human rights is towards the welfare and aspirations of the people. In view of this, we urge the CHR to launch an earnest campaign against the contempt for human rights of the PNP, AFP and the ruling State.
Reply to Mr. Alex Padilla, and with regards to the violation of the JASIG
First released on 24 March 2014
We reiterate our condemnation of our arrest as an outright violation of the Joint Agreement on Safety and Immunity Guarantees (JASIG) that gives us immunity as national consultants of the National Democratic Front of the Philippines with NDFID No. 978226 (Wilma Austria) and NDFID No. 978227 (Benito with the name Crising Banaag).
The claim of the head of the GPH peace panel, Alex Padilla, that we are not covered by the JASIG is a warped lie. Wilma was openly confirmed as NDFP national consultant by former GRP peace panel head Ambassador Howard Dee, as well as by Pres. Ramos when he ordered the release of Wilma Austria in 1994 as a confidence and goodwill building measure. Meanwhile, Benito was among the original recipients of the NDFP Document of Identification and JASIG immunity, a thing that was expected and indubitable. Whatever pretexts the GPH panel will come up with, its claim that Benito has no role in the peace process and is not covered by the JASIG is utterly farcical.
NDFP consultants, like us, can significantly contribute to the peace process because of our crucial role in the struggle. Our role in the struggle is a requisite to our role in the peace process. The involvement of the movement is inherent in a peace process; hence, the position of a few that the struggle must cease first before we participate in the peace process is unreasonable and will not help. We believe that for the realization of a true, lasting and just peace, the pursuit of the struggle with all our might is not contrary but rather in unity with and corresponds to the peace process.
With regard to the allegation of the AFP that Benito Tiamzon and Wilma Austria are the brains behind the “anti-infiltration hysteria”
First released on 24 March 2014
The allegation made by the Philippine National Police officer (after our statement was read) that the brains behind the anti-infiltration hysteria “kahos” were Benito and Wilma Tiamzon is a stark lie and fabrication. This is contrary to the truth.
The “kahos” was a local campaign which was planned and started in some parts of Mindanao in 1984, at the time that Benito and Wilma were in Luzon and had no direct responsibility of the movement in Mindanao.
When the belated report on the “kahos” was received, this was immediately stopped, investigated and the error, fully rectified.
Benito and Wilma played a crucial role in leading the investigation and rectification, as well as in formulating, approving and disseminating clear and stricter rules on the investigation, trial and judgment.
The movement has clear and strict policies and rules on the respect for human rights and democratic rights of its members and the people. These are systematically and continuously disseminated to all.
The Comprehensive Agreement on Respect for Human Rights and International Humanitarian Law (CARHRIHL) that was signed by the NDFP and GPH reflects the stand and policies of the Party and the movement on the respect for human rights and democratic rights and the humane conduct of the armed struggle.
On the other hand, the human rights abuses of the GPH and its armed forces all the more increased and intensified. Extrajudicial killings, which Palparan grossly propagated, massacres such as the ruthless mass killing of the peasants in Palo, Leyte continue. Militarization, especially in the rural areas, and all the turpitude and abuses linked to it, such as illegal arrest, torture, killing, and dislocation of whole barangays and communities are widespread.
Rights groups slam trumped-up charges vs activists, peace consultants in talks with government
Rights groups held a protest action in front of the Manila Regional Trial Court Tuesday morning, when the court conducted a clarificatory hearing on the multiple murder case filed against former Bayan Muna Rep. Satur Ocampo and peace consultants in talks with the government, including Randall Echanis, Rafael Baylosis, Vicente Ladlad, and recently arrested consultants Benito Tiamzon and Wilma Austria.
“The revival of these trumped up charges against Ocampo and the peace consultants signals the intensification of political persecution under the Aquino administration. These were charges hatched under the auspices of former Pres. Gloria Macapagal Arroyo’s Inter-Agency Legal Action Group (IALAG), which was deemed by United Nations Special Rapporteur Philip Alston as a means by which the government prosecutes and punishes “enemies of the state,” said Cristina Palabay, Karapatan secretary general.
Palabay said that while the IALAG was abolished due to extensive campaign of human rights groups and the international community, the policy and practice of filing trumped-up criminalized charges continues under the Aquino government.
Karapatan has documented 570 cases of illegal arrests and detention from June 2010 to December 2013. The group also documented 427 political prisoners, as of December 2013, including 152 persons arrested under Aquino’s term. Palabay added that almost all of the cases, like those of detained NDFP consultants, are criminal charges spuriously filed based on highly questionable evidence and fabricated testimonies.
“Leaders of people’s organizations in Negros, for instance, are constantly threatened with fabricated criminal charges of the AFP and the PNP. Under the Aquino government, the assault on political dissenters through the filing of trumped-up charges is on the rise. In an attempt to silence opposition, they make up all sort of charges using the wildest of their imagination,” Palabay said.
Organization of ex-political detainees SELDA (Samahan ng Ex-Detainees Laban sa Detensyon at Aresto), of which Ocampo is board member, also condemned the revival of charges against Ocampo and consultants who are performing tasks in the peace talks on the side of the National Democratic Front of the Philippines.
“How the police and military have arrested, demonized and dealt with the latest political prisoners Benito Tiamzon, Wilma Austria and their five companions, and the arrest of the late Ka Roger Rosal’s daughter, is vintage martial law practice. The “planting” of evidence has been a long-standing practice of the police and military, extensively used during the Martial Law period. They use this to justify illegal arrests and detention. They also exploit the use of John and Jane Does, even aliases, to charge anyone as respondents to a case,” Bonifacio Ilagan, vice chairperson of SELDA, said.
Ilagan, who was imprisoned during the Martial Law years, recalled “As early as the 1970s, I remember being accosted with fellow activists after a rally, brought to the police precinct at UN Avenue and slapped with illegal possession of explosives, even if we only carried banners and streamers.”
“Circumstances of arrests and detention are highly anomalous, and the so-called evidences improbable,” said Ilagan, “human rights lawyers call the circumstances cited in trumped-up charges as beyond human experience, like soldiers’ testimonies that they identified the respondents by virtue of seeing their faces in alleged military encounters.”
Karapatan and SELDA joined calls to free all political prisoners, and demanded that the Aquino government stop filing trumped-up charges.
“Trumped-up charges are obviously meant to stifle the freedom of movement of political dissenters. This is the bigger crime. The Aquino government should stop silencing its critics, or his regime is bound to face bigger protests for violating human rights here and there,” Palabay ended.
References:
Cristina Palabay
Karapatan secretary general
Bonifacio Ilagan
SELDA vice-chairperson
———————————————————————
PUBLIC INFORMATION DESK:
[email protected]
———————————————————————
Alliance for the Advancement of People’s Rights
2nd Flr. Erythrina Building
#1 Maaralin corner Matatag Streets
Central District, Diliman
Quezon City, PHILIPPINES 1101
Telefax: (+63 2) 4354146
Web: http://www.karapatan.org
KARAPATAN is an alliance of human rights organizations and programs, human rights desks and committees of people’s organizations, and individual advocates committed to the defense and promotion of people’s rights and civil liberties. It monitors and documents cases of human rights violations, assists and defends victims and conducts education, training and campaign. | https://ichrp.net/statements-of-political-detainees-benito-tiamzon-and-wilma-austria/ |
The COVID-19 pandemic has hastened the adoption of online learning throughout the world. Despite online learning becoming the new normal for many learners, the future of education does not necessarily revolve solely around it. Rather, the answer may lie in that of a hybrid learning model. This is a sentiment echoed by Associate Professor Gabriel Gervais, Director, Online Learning, SUSS.
He states his firm belief that a hybrid learning format, which includes both online learning as well as face-to-face classes, will be a mainstay in years to come. And according to him, the COVID-19 pandemic has actually legitimised online learning, awakening institutions and educators to the need of including online learning elements into their training and education programmes.
There are also problems such as the inaccessibility of online learning for people with low income or adults with disabilities, as well as security issues. In an April 2020 incident, an online class on Zoom was hijacked mid-stream by hackers sharing inappropriate graphic images.
However, the benefits of including online learning, such as facilitating better time management for both learners and educators to name one, make a strong case for its inclusion into part of the course delivery. Crucially, when we look at the bigger picture, it is evident that a hybrid, or blended learning approach, helps build a more resilient education system.
According to Dr Uma Natarajan, a researcher and educator in the field of K-12 education, and Dr N. Varaprasad, former CEO of the National Library Board and currently a partner with the Singapore Education Consulting Group, a blended learning approach provides students, parents and educators with the experience to navigate situations, where they are required to seamlessly transition to a different mode of learning as a result of disruptions.
As one of the early adopters of blended learning, SUSS actively supports educators and learners in adapting to the hybrid learning environment. Prof Gervais cites an example of this by mentioning the efforts of the SUSS online learning unit, which readied programmes in business, logistics, supply chain management and analytics for full online delivery if required. This online learning unit employs the help of learning development specialists as well, who assist in designing and converting face-to-face courses and their material for an online data resource.
Learner-support at SUSS is implemented primarily by academic units interacting with the learners. This data-driven support, which creates effective teaching experiences in the process as well, is facilitated by the institution’s Business Intelligence & Analytics and Teaching & Learning Centre. Additionally, SUSS deploys success coaches who give students personal support to adjust to online learning.
On a more general note, Prof Gervais points at how more institutions are accepting online learning as a component of the same course delivery. Prior to COVID-19, the online portion of a course usually functioned as a supplement to the actual content and delivery of the course. Nowadays, online learning serves to complement the face-to-face delivery of a course instead.
As a result of COVID-19, MOE realised the urgent need to make swift adjustments to the delivery of education at secondary and pre-university levels. Students now learn through a mix of home-based and in-school activities, and can leverage both online and offline approaches. This provides them with more opportunities to study at their own pace and empowers them to take charge of their learning. There will also be an emphasis on student-initiated learning, whereby students may pursue their own interests and learn outside of the curriculum.
A key objective of a hybrid learning approach is to develop students into self-directed learners. But learners also need to adapt to the virtual classes and make the most out of the online learning experience. One way to achieve this is for learners to take greater ownership of online learning. Self-directed learners usually go beyond the call, or the set materials provided to them. With such initiative, these learners excel because they also look out for additional educational materials to supplement the course materials.
Another way to become better learners is to acquire basic skills in adapting to the hybrid learning environment. Students may have different levels of proficiency, but they all can gain the basic knowledge of how to succeed in a hybrid learning environment. They can start learning these basic skills on their own simply by looking into online resources from credible sources like global institutions.
For educators, they need to develop their digital skills and pedagogical effectiveness that are relevant to the hybrid learning approach. Depending on the context of the course or curriculum, the educators need to have the ability to identify the suitability of different forms of hybrid learning. This requires effective professional development of educators and supporting coaches or instructors.
However, one of the main challenges for educators is the assessment of learning outcomes. Prof Gervais adds that teachers constantly need to evaluate student engagement. “Am I being understood? Are the students engaged? Are they interested? These questions cannot be answered without visual cues. So you have to find different techniques to assess whether or not they are learning.”
One such potential assessment strategy is the use of entrance and exit tickets. Using digital quizzes or talks, educators can test the students' background knowledge on the day’s lesson, and their understanding of the topic at the end of the lesson.
As for online platforms like Zoom, where the hacking incidents occurred, security measures have been enhanced to prevent such incidents during online lessons. Zoom has changed default settings for education users, and added passwords for its free basic users, while regularly updating its systems to provide better experiences. But to ensure optimum safety, users should also familiarise themselves with the platform’s security features, update the app and set up the security protection settings.
The COVID-19 pandemic has showcased the value of hybrid learning, and essentially resulted in the largest hybrid learning experiment in history, with everyone scrambling to react to school closures and their gradual partial reopening. But Singapore is perhaps better prepared than others, having previously learned from previous challenges to the education ecosystem. This is evident in the growth of the e-learning market in Singapore even before the pandemic, which was initially estimated to be worth around US$106 million in 2005. Since then, it has been valued at US$792.97 million in 2019 and is forecasted to grow to over US$2 billion by 2027. However, considering the uncertainty of the pandemic, there may well be further challenges down the road for learners and educators, but there is no doubt that hybrid learning is the way to keep up.
Its impact can transcend beyond the education sector as well. With a recent McKinsey survey revealing that 90% of organisations will move towards establishing hybrid workspaces, hybrid learning not only prepares students to enter the workforce of tomorrow by helping them better acclimate to such working arrangements. Its flexibility empowers them to thrive in these environments as well, providing them with opportunities to better pursue on-the-move upgrading and build career resilience.
| https://www.suss.edu.sg/blog/detail/hybrid-learning-a-lesson-in-adaptability
Thursday, December 27, 2007
Shopping in the US
What a long day of shopping! Rhoda, my BFF and I decided to do our "Boxing Day" Shopping in the States even though it had snowed the night before and both of our moms wouldn't let us take one of their cars, we were heading out there anyways! Besides, it wasn't really snowing when we left and we were praying that it would be clear when we got back. Since, I had missed out on Boxing Day yesterday which I heard was super busy and I cannot shop in all the madness, we decided to head down to the states to do some "boxing day" shopping. Wait, ! I did have an early Boxing Day shopping at Aritzia, picked up a pair of Rock and Republics for $175 (normally $235+) cos they started their Boxing Day sale earlier in the week. I still needed to find a pair of denim for my runners and flats cos both my 7s and RRs are 34" inseam which are perfect for my heels not so good with my flats. Our goal was to find a pair of inexpensive designer jeans at Nordstrom Rack in Bellevue! First stop of course was the Seattle Premium Outlet in Tulalip cos Rhoda had to hit a few stores in there, she ended up buying a really beautiful Michael Kors winter coat at a great price plus she's been wanting this jacket for a while and it finally dropped to an affordable price. I didn't find anything in the outlet but that's ok cos I was very focused on what I wanted and needed. We decided to head out after an hour cos we still had a long drive ahead of us. We drove for an hour and decided we should now think about lunch, every time we head to the States, we always lunch at Jack in the Box. We started to head towards {a city where the rich people live, can't remember the name}'s downtown but could not locate the Jack in the Box, after 30 minutes of driving in circles, we decided to find the next thing closest place where they serve food and it was Denny's. Boy was that a mistake! Our tummies did not feel good for the rest of the day! After lunch, we headed our way to the first destination which is the mall that we had gone to last time which has Sephora, Forever 21 and Wet Seal. I picked up a cute knit sweater with a hoodie for $10 USD and Too FacedLash Injection mascara which I have heard can do wonders for Asian eye lashes! After shopping at the mall, we were on our way to Nordstrom Rack in Bellevue. Gosh was it ever super busy!! We headed towards the designer denim rack and I was a bit disappointed to see such a small collection of the smaller sizes. I quickly scanned through it and grabbed an armful of jeans in a few sizes to try on. Right after the jeans, I headed over to the shoe collection. Wow, there were so many designer shoes!! At such low prices!! Marc Jacobs for $120!! But I was focused cos I only brought $150 USD and that is all I'm spending! Luckily I didn't find any shoes that really stood out or maybe I wasn't really looking for anything in particular. I went through the rest of the store and picked up a couple of tops to try on as well. Once inside the changing room, I didn't like the tops but really loved the jeans! I really liked the William Rast pair but they were priced at $80 (normally priced at $240!) and were def. too long which meant I would have to wear them with my heels, the Rich and Skinny pair were perfect but in the wrong size, I needed one size up (which I tried looking for but all of them were gone!!!) and finally ended up buying a really nice pair of Vigoss Studio jeans for $40 USD!! Just within my budget! 
I really like them too, back pockets are set a bit lower than the usual and very dark denim which is what I love! Feeling a little bit bad about leaving Kay at home, I picked up a really cute Vans purse for her which only cost $10USD! Rhoda ended up with a couple of pairs of shoes but in the end decided against them (not in love with them) but she did find a pair of work shoes for Nel. After the Rack, we headed next door to the DSW and another great place for shoe lovers or not (I have a terrible addiction to shoes), Rhoda picked up a beautiful pair of heels by Michael Kors (her fave. designer) and I picked up a cute pair of brown shoes with pom poms for only $25 USD and a pair of leopard red trimmed slippers for $5!! After our shopping, it was def. time to head back and we thought we were doing great time until we realized we were now stuck in the middle of rush hour traffic! Traffic wasn't too bad all the way but we kept getting stuck in it when there was an exit to another suburban neighborhood. As we were getting close to the border, we had the strangest weather, at some points, it was snowing pretty hard and other parts, it was just rain or clear. Very strange. Finally when we were 5km from the border, we were stuck in the traffic to get into Canada, we had to wait an hour before reaching it and lucky for us, the customs officer decided to wave us through even though we spent over the tax free amount! It was a great day and lots of great shopping finds!
1 comment:
I love the outlet mall at Tulalip, and the casino has a fantastic lunch buffet for only $10. What is the name of the mall you went to... would love to check it out sometime. Glad you got some good jeans at a good price! I need to go dark-jeans shopping soon too :)
Who am I?
I.AM. independent, trustworthy, strong willed, stubborn, confident, great listener, honest, opinionated, passionate, charming, sexy, loving, young at heart, silly, techy geek.
I.AM. a mother, daughter, sister, lover, best friend, confidant, therapist, chef, chauffeur, housekeeper, life coach, administrative assistant who enjoys life to its fullest and does not allow it to slow me down.
I surround myself with happy, good and positive people.
L I F E as a cool M.O.M (modern & objective momma).... my BeBe is my life. I live for her each day. Because of her, I am who I am today. At this moment, I am happy and living life one day at a time.
| |
Graduate Urban Designer - BDP
BDP’s Manchester studio has an opportunity for a Graduate Urban Designer to join our Urbanism Group. This is a key appointment to support the wider Urbanism team and will suit someone developing their career in Urban Design.
Working in the Urbanism team, you will contribute to a wide range of urban design solutions; from single sites to whole districts, in existing and proposed urban areas, across the UK. You will work closely with architects, landscape architects, town planners and other consultants who work with BDP to deliver our commissions for both public and private sector clients.
Qualifications and Experience
The position requires a qualification in either urban planning, architecture or landscape architecture, and preferably also urban design. You will have up to 2 years' relevant experience, preferably from a commercial organisation.
Typical aspects of experience should include:
- Urban design analysis to support masterplanning and concept urban design;
- Urban design frameworks and detailed masterplans, covering a range of development scales and land uses.
Responsibilities & Duties
General
- Support the Urban Design team and wider Urbanism team.
- Work in close collaboration with other professions.
- Have an awareness and overview of the full design and delivery process, recognising the need for deliverable design solutions and high quality final products.
- Keep up-to-date with developments in the industry.
- Have a keen interest in urban development, design, and construction.
Technical
- Work closely with planners, architects, landscape architects and engineers particularly at the early project stages, continually looking for creative and innovative urban design solutions, ensuring that the company adds to its reputation for creating places for people.
- Have the skill to convert conceptual ideas into developed design proposals within the context of financial and programme parameters.
- Maintain and develop exceptional skills in appropriate visualisation techniques.
- Comfortable in AutoCAD and Revit, with good 3D skills along with proficiency in use of all Adobe Creative Suite programmes.
The successful candidate will be able to demonstrate:
- Good understanding of urban design issues.
- Well developed graphic skills.
- Ability to work under pressure whilst monitoring goals and communicating with the team.
- An ability to work within a multi-disciplinary team environment and a flexible approach.
- Good communication skills with the ability to articulate and pitch ideas both verbally and in written form.
To apply
Apply here. Attach your current CV and a cover letter, stating why your skills, knowledge and, importantly, your application of these in the workplace demonstrate that you are suitable for the position.
In addition to your application, please provide a portfolio of design work, clearly highlighting your role in relation to any projects and images provided in support of your application.
BDP is an equal opportunities employer
No agency or third party applications will be accepted.
| http://www.udg.org.uk/jobs/yorkshire/graduate-urban-designer-bdp
Hand weaving formed a part of socio-cultural tradition of the peoples of Manipur which has a rich cultural heritage. Handloom industry in the State, which has a legacy of unrivalled craftsmanship, is spread throughout the length and breadth of the State. It is a household cottage industry with decentralized set-up. In terms of employment generation, it is next only to agriculture for the womenfolk of the State. The traditional skill of handloom weaving is not only a status symbol for the women-folk, but it is an indispensable aspect of socio-economic life in Manipur.
The National Handloom Census, 1995-96 reported that Manipur has 4.62 lakh handloom workers (4.25 lakh weavers, 0.29 lakh preparatory workers, and 7,488 dyers and hired workers), which is the 2nd position among the top States of the country; 2.81 lakh looms, the 4th position among the top States; consumption of 12.196 lakh kg of yarn per month, the 7th position among the top States; and production of 96.07 lakh meters of handloom fabrics, also the 7th position among the top States of the country. About 70% of the total weavers are outside the Co-operative fold and the remaining 30% are under the Co-operative fold. Therefore, a large number of weavers are self-earners.
In 2011-12, the Department of Commerce & Industries envisages further broadening and intensifying the development of the handloom industry and safeguarding the health care of the weavers by adding components to the existing programmes/projects while, at the same time, providing adequate funds for the State share contribution required under the Centrally Sponsored Schemes of the Ministry of Textiles.
Almost 100% of the weavers are women, whose health has generally been neglected in most cases and instances. The Govt. of India has given more emphasis on this scheme for safeguarding the health of the weavers. The annual target for enrolment of weavers under the scheme has gradually increased to 50,000. Thus, the contribution of the State share has to be taken care of.
Most of the weavers are the bread earners of their families. Thus, the life of the weaver is vitally important to the weaver's family. The Govt. of India introduced the Mahatma Gandhi Bunkar Bima Yojana, which is implemented through the LIC of India to provide enhanced insurance cover to handloom weavers in the case of natural as well as accidental death and in cases of total or partial disability, and also scholarships to the children of parents who are covered under the scheme. Thus, it is important to create the necessary awareness, publicity, supervision and monitoring in all the procedures, right from enrolment to the settlement of claims, so that the weavers and their families do not suffer unnecessarily.
In view of the growing competitiveness in the textile industry in both the national and international markets, a growing need has been felt for adopting a focused yet flexible and holistic Cluster Approach in the sector to facilitate handloom weavers in meeting the challenges of a globalized environment. Thus, the focus is on the formation of handloom weavers' groups as visible production groups in selected Handloom Clusters so that they become self-sustainable.
Merchandising and marketing have been recognized as being central to the growth and development of the handloom industry. Domestic marketing is important for providing a linkage between the producer and the consumer and for promoting the marketing and sales of handloom products both inside and outside the State. Therefore, strengthening of State Level Handloom Organizations for improvement in the marketing and export of handloom items, and the organization of festive fairs, exhibitions, etc., would facilitate the promotion of marketing of handloom products.
It is a shawl of the Tangkhul tribe of Manipur. The Tangkhul have broadly eleven kinds of hand-woven cloths. In most cases, these cloths are woven on the loin loom (an indigenous loom). Production is now gradually shifting to the frame loom as well. It is used by both men and women. These cloths exhibit ubiquitous characteristics. In most Tangkhul cloths, red occupies the major portion, complemented by a little white and black. These cloths have now been diversified into many other products, like wall hangings, cushion covers and curtains. These cloths are produced with both mercerized and acrylic yarn of 2/32s and 2/34s.
It is also a shawl of the Tangkhul tribe of Manipur. It is among the eleven kinds of hand-woven cloths of the tribe. The cloth is woven on the loin loom with acrylic yarn in both warp and weft. Motifs of animals and insects are hand embroidered. Production is now gradually shifting to the frame loom as well. It is generally used by women. This cloth also exhibits ubiquitous characteristics and has now been diversified to many other purposes. It is woven with mercerized or acrylic yarn of 2/32s or 2/34s.
It is woven with 100% silk (Eri) of 20/22 denier. In most cases, silk saris are woven on both throw-shuttle and fly-shuttle looms. Designs of the saris are a mixture of hill tribal textiles and floral designs, etc. In most cases, designs are woven with a traditional temple design in the border and floral designs in the cross border. Saris are diversified into many other products and purposes, including curtains. | https://dcimanipur.gov.in/handloom.html
Dear friends,
As we entered 2021, our solidarity organisations – in the fields of health and education – remain mobilised to continue to fight Covid-19, supporting educational communities, activists, members, patients and citizens around the world.
This year again, the structuring concerns of our Network will be at the very centre of burning issues: how to support students and education staff, protect their physical and mental health, fight against school dropout? How to strengthen or defend our social security systems, for greater resilience in the face of crises? How to continue to involve young people in social citizenship matters?
If 2020 has disrupted our lives, our habits, and the way we work together, we truly believe one thing: solidarity can overcome these huge challenges. At the heart of our social security systems, or in our schools, solidarity, democracy and cooperation will play a key role in providing responses to the health and climate crisis and rethinking our societies.
This past year has also shown that we have a strong need for spaces and solutions for sharing, support and unity around common struggles and issues, across borders. In that respect, I would like to thank you and your civil society and social and solidarity economy organisations for your continued commitment to the Network.
In 2021, we want to continue to form this united international community, by sharing our ideas, our experiences, and by coming together around concrete initiatives, with the objectives of better living conditions and social justice.
On behalf of all the ESN, we wish you all the best for 2021. May this new year bring you peace, health and success… But above all else, may it allow you to be happy about being together again. | https://www.educationsolidarite.org/en/happy-new-year-2021/ |
Bob Dylan (born Robert Allen Zimmerman) was born on 24 May 1941. The famous American singer, songwriter, artist and writer has been influential in popular music and culture for more than five decades. Much of his most celebrated work dates from the 1960s when his songs chronicled social unrest, although Dylan repudiated suggestions from journalists that he was a spokesman for his generation.
Early songs such as “Blowin’ in the Wind” and “The Times They Are a-Changin” became anthems for the American civil rights and anti-war movements. Leaving his initial base in the American folk music revival, Dylan’s six-minute single “Like a Rolling Stone” altered the range of popular music in 1965. His mid-1960s recordings, backed by rock musicians, reached the top end of the United States music charts.
Dylan’s lyrics have incorporated various political, social, philosophical, and literary influences. He has amplified and personalized musical genres. For 50 years, Dylan has explored the traditions in American song—from folk, blues, and country to gospel, rock and roll, and rockabilly to English, Scottish, and Irish folk music, embracing even jazz and the Great American Songbook. Dylan plays guitar, keyboards, and harmonica. He has toured steadily since the late 1980s on what has been dubbed the Never Ending Tour. He is successful as a recording artist and performer, but his greatest contribution is considered to be his songwriting.
Since 1994, Dylan has published six books of drawings and paintings, and his work has been exhibited in major art galleries. As a musician, Dylan has sold more than 100 million records, making him one of the best-selling artists of all time; he has received numerous awards including Grammy, Golden Globe, and Academy Awards. | https://spirossoutsos.com/bob-dylan-painting-portrait-poster/
It’s safe to assume that each of us will experience some type of fall in our lifetime. Every fall impacts body function, movement, mechanics, and efficiency.
Different types of falls injure the body in distinct ways. Once you understand the mechanism behind a fall, you can predict what areas of the body will typically be prone to pain. Forward falls onto an outstretched hand can cause injury to your wrist, elbow, and shoulder. However, the impact force travels up the arm and exits in the cervical spine (neck) and thoracic spine (upper back) similar to a whiplash type injury. Residual delayed symptoms may appear, which include headaches, neck pain, muscle spasm, tingling or numbness in the arm, and pain between the shoulder blades. Backward falls on the buttocks cause trauma to the spine, pelvis, hips, and head. Concussions are extremely common in backward types of falls due to the sudden whipping motion of the head. The tailbone portion of the spine is often bruised or fractured from the impact velocity of the backwards fall. The energy transfer through the spine exits at the top of the head, leading many people to complain of severe headaches and neck pain. Severe symptoms might not appear for several days or weeks following the fall. Falls from a height landing on the feet may injure the ankles, knees, hips, pelvis, and spine. Hairline fractures are often a side effect of foot landing falls, particularly in the shin bone and pelvis. Lower back pain is the most common spinal complaint after a foot landing fall due to the compressive forces of the impact.
All falls cause mechanical and functional damage to the body, leading to inefficient movement and compensations. These neurological compensations are part of your nervous system's hardwired survival mechanism to avoid pain at all costs by taking the path of least resistance. This mechanism involves adaptation of muscles, connective tissue (fascia), bones, joints, ligaments, and nerves. Postural changes are ingrained in your movement patterns to protect and guard you from future injury. Common chronic side effects from traumatic falls include: arthritis, muscle spasm and tightness, soreness, spinal disc degeneration, disc herniations, and visual postural distortions. You may notice one shoulder becomes higher than another, rounded shoulders, the neck far out over the shoulders, or that the hips become tight and you walk with a foot flare. These dysfunctional movement patterns manifest as pain and injury years after the trauma. Everything in your health history contributes to the possibility of future injury. Even that fall you had off the swing on the playground when you were a kid.
So what can and should you do after a fall to help minimize injury? First and foremost is to determine the seriousness of the injury. If severe headaches, dizziness, nausea, slurred speech or sleepiness are present immediately seek emergency medical attention for these are common sign of a concussion(impact injury to the brain). Anticipate the onset of symptoms in the next several days following a fall. For swelling, inflammation, and muscle spasm apply ice for the first 72 hours. Heat is best used for chronic injuries and over muscles. Ice tends to be a more effective alternative for joint related pain to reduce swelling.
Pain is the warning signal from your body that something is wrong. Do not ignore the pain message and hope things resolve without professional intervention. It is essential to visit a skilled clinician in manual therapy, such as a chiropractor, to ensure proper alignment of the spine and joint systems of the body. A doctor of chiropractic is an expert in the assessment and treatment of acute and chronic musculoskeletal injuries with programs of preventive medicine.
Chiropractors will work in conjunction with your primary healthcare provider to ensure you receive the most effective care program for your type of injury. Once pain symptoms have improved your chiropractor will put you on a corrective exercise program involving strengthening and stretching for balance. This will train your body with proprioception (balance) to help improve your chances of catching yourself before falling in the future.
Stay aware of your surroundings, especially your footing. Try to keep your hands free for balance and rest if you are getting tired from being on your feet too long.
Stay active, but be careful. | http://drkmatheson.com/has-a-fall-got-you-down/ |
Charge-density analysis of a protein structure at subatomic resolution: the human aldose reductase case.
The valence electron density of the protein human aldose reductase was analyzed at 0.66 angstroms resolution. The methodological developments in the software MoPro to adapt standard charge-density techniques from small molecules to macromolecular structures are described. The deformation electron density visible in initial residual Fourier difference maps was significantly enhanced after high-order refinement. The protein structure was refined after transfer of the experimental library multipolar atom model (ELMAM). The effects on the crystallographic statistics, on the atomic thermal displacement parameters and on the structure stereochemistry are analyzed. Constrained refinements of the transferred valence populations Pval and multipoles Plm were performed against the X-ray diffraction data on a selected substructure of the protein with low thermal motion. The resulting charge densities are of good quality, especially for chemical groups with many copies present in the polypeptide chain. To check the effect of the starting point on the result of the constrained multipolar refinement, the same charge-density refinement strategy was applied but using an initial neutral spherical atom model, i.e. without transfer from the ELMAM library. The best starting point for a protein multipolar refinement is the structure with the electron density transferred from the database. This can be assessed by the crystallographic statistical indices, including Rfree, and the quality of the static deformation electron-density maps, notably on the oxygen electron lone pairs. The analysis of the main-chain bond lengths suggests that stereochemical dictionaries would benefit from a revision based on recently determined unrestrained atomic resolution protein structures.
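For orientation, the Pval and Plm parameters mentioned above belong to a multipolar pseudo-atom expansion of the electron density; a widely used form is the Hansen-Coppens model, reproduced here as a reference point (the exact parameterisation used in MoPro and the ELMAM library may differ in detail):

\rho_{\mathrm{atom}}(\mathbf{r}) = \rho_{\mathrm{core}}(r) + P_{\mathrm{val}}\,\kappa^{3}\rho_{\mathrm{val}}(\kappa r) + \sum_{l=0}^{l_{\max}} \kappa'^{3} R_{l}(\kappa' r) \sum_{m=0}^{l} P_{lm\pm}\, y_{lm\pm}(\theta,\varphi)

where Pval is the refined valence population, the Plm± are the multipole populations, κ and κ' are expansion-contraction parameters, Rl are radial functions, and ylm± are real spherical harmonics.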
| |
In this activity, students explore how ethical frameworks can be used to guide decision-making on an ethical issue. The activity forms part of the unit plan Ethics of transgenic cows.
Purpose
This activity will help students identify ethical issues raised by transgenic animals and find out more about the 5 common ethical frameworks used to guide ethical decision-making – Consequences, Rights and responsibilities, Autonomy and the right to choose for oneself, Virtue ethics, Multiple perspectives.
The activity will encourage students to discuss the issue of genetically modifying animals to help treat human diseases.
An update to the transgenic cows research: changes in funding mean that AgResearch is no longer active in biomedical research projects. In its 2017 report to the EPA, AgResearch noted that it still has around 40 transgenic cows in its Waikato containment facility. Most of these cows are for casein and beta-lactoglobulin (BLG) research. Although some research mentioned in the transgenic cows story has ceased, the ethical issues involved in genetic modification of animals remain.
Keywords
Ethics, transgenics, ethical frameworks, genetic modification. | https://www.sciencelearn.org.nz/resources/861-ethical-frameworks-and-transgenics |
Environmental studies is a multi-disciplinary science because it comprises various branches of study such as chemistry, physics, medical science, life science, agriculture, public health, sanitary engineering, etc. It is the science of physical phenomena in the environment.
Why is environmental education considered multidisciplinary?
Environmental studies deals with every issue that affects an organism. It is essentially a multidisciplinary approach that brings about an appreciation of our natural world and human impacts on its integrity.
What is multidisciplinary nature of environment?
Multidisciplinary Nature of Environmental Studies: The study of environmental components is multidisciplinary in nature, since it includes disciplines such as science, humanities, commerce, meteorology, climatology, geography and other fields.
Why are environmental problems multidisciplinary?
These four systems interact at physical and chemical phases and are in constant change. … Because of the interaction between these different components, in addition to the influence of human activities, the environment and the ensuing problems involve several disciplines and thus are multidisciplinary in nature.
What makes environmental science an interdisciplinary science?
Environmental Science is an interdisciplinary field of study, which combines ideas and information from the natural sciences (biology, chemistry, geology) and the social sciences (economics, ethics, politics). … It provides a real and direct connection for students between the study of science and the world around them.
What are the different multidisciplinary of environmental science?
Zoology, biology, mineralogy, oceanology, physics, chemistry, plant science, limnology, soil science, geology, physical geography, and atmospheric science are all among the disciplines it draws upon. Since Environmental Science consists of knowledge based on numerous subjects, it is known as a multidisciplinary field.
Why is Environmental Studies considered a multidisciplinary subject? Explain the importance of this course in providing solutions to our environmental problems.
It helps us for establishing standard, for safe, clean and healthy natural ecosystem. It also deals with important issues like safe and clean drinking water, hygienic living conditions and clean and fresh air, fertility of land, healthy food and development.
Why is a multidisciplinary approach important?
One of the benefits of a multidisciplinary approach in education is you get a more holistic understanding of the world. Rather than looking at individual departments and their subject matters separately, a multidisciplinary approach integrates parts of each department into the study programs of the other.
Why is environmental science important?
Environmental science is important because it enables you to understand how these relationships work. For example, humans breathe out carbon dioxide, which plants need for photosynthesis. … Plants are sources of food for humans and animals. In short, organisms and humans depend on each other for survival.
What are the scope and importance of multidisciplinary nature of environmental studies?
The scope of the Multidisciplinary Nature of Environmental Studies includes biological, cultural, social, and physical elements. It is also linked to science, geography, economics, statistics, health, technology, population, and ecology.
Why is the multidisciplinary approach helpful in solving various environmental problems?
Multidisciplinary approaches are required to address the complex environmental problems of our time. Solutions to climate change problems are good examples of situations requiring complex syntheses of ideas from a vast set of disciplines including science, engineering, social science, and the humanities. | https://enrichedearth.org/ecosystems/why-is-environmental-science-considered-multidisciplinary.html |
- Electricity Detailed Survey-Level Files: The Form EIA-861 and Form EIA-861S (Short Form) data files include information such as peak load, generation, electric purchases, sales, revenues, customer counts and...
- NEPAssist | National Environmental Policy Act | US EPA: EJSCREEN allows users to access high-resolution environmental and demographic information for locations in the United States, and compare their selected locations to the rest of...
- Alternative Fuels Data Center: Data related to alternative fuels and advanced vehicles. Internet Archive URL: https://web.archive.org/web/2019*/http://www.afdc.energy.gov/data_download/
- Form EIA-411 Data: The EIA 411 report, aka "Coordinated Bulk Power Supply and Demand Program Report", collects electric reliability information from the Nation’s power system planners about the...
- SciTech Connect: Your connection to science, technology, and engineering rese...: SciTech Connect includes technical reports, bibliographic citations, journal articles, conference papers, books, patents and patent applications, multimedia, software, and data...
- EIA Open Data: The U.S. Energy Information Administration is committed to enhancing the value of its free and open data by making it available through an Application Programming Interface... (a hedged example of querying such an API appears after this list)
- EIA Data Tools & Models: The US Energy Information Administration is committed to making its data available through an Application Programming Interface (API) to better serve our customers. APIs allow...
- Open Data Catalogue: The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science...
- Energy.gov Congressional Testimony: The Office of Congressional and Intergovernmental Affairs is dedicated to its mission of providing guidance on legislative and policy issues, informing constituencies on energy...
- Renewable & Alternative Fuels: Find statistics on renewable energy consumption by source type, electric capacity and electricity generation from renewable sources, biomass and alternative fuels. Internet...
- Monthly Energy Review: A publication of recent and historical energy statistics. This publication includes statistics on total energy production, consumption, and trade; energy prices; overviews of...
- Sea Level Rise and Storm Surge Effects on Energy Assets: The U.S. Department of Energy’s (DOE) Office of Electricity Delivery & Energy Reliability (OE) undertook this study to assess the potential sea level rise (SLR) and storm...
- Preliminary Monthly Electric Generator Inventory (based on Form EIA-860M as a...): The monthly survey Form EIA-860M, ‘Monthly Update to Annual Electric Generator Report’ supplements the annual survey form EIA-860 data with monthly information that monitors the...
- Database of State Incentives for Renewables & Efficiency: DSIRE is the most comprehensive source of information on incentives and policies that support renewables and energy efficiency in the United States. Established in 1995, DSIRE...
- Energy Footprint Tool with Sample Data: Developed by the U.S. Department of Energy, the Energy Footprint Tool can help manufacturing, commercial and institutional facilities to track their energy consumption, factors...
- EV Everywhere: Charging on the Road | Department of Energy: Gives information on electric charging stations in the U.S. Downloaded underlying data as a CSV from the Alternative Fuels Data Center website.
- EERE: Clean Energy in My State Home Page: The U.S. Department of Energy Office of Energy Efficiency and Renewable Energy's Clean Energy in My State site provides state-specific renewable energy and energy efficiency... | https://www.datarefuge.org/dataset?_res_format_limit=0&groups=data-rescue-events&_organization_limit=0&res_format=JSON&organization=department-of-energy&res_format=ZIP
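The EIA Open Data entry above notes that EIA data is exposed through an Application Programming Interface. As a rough illustration of what a client request against such an API might look like, here is a minimal Python sketch. The base URL, route path ("electricity/retail-sales/data/"), query parameter names ("frequency", "data[0]") and the API key are assumptions for illustration only and are not taken from the catalog text; consult EIA's official API documentation before relying on any of them.

```python
import requests

# Hypothetical example: the route and query parameters below are assumptions,
# not taken from the catalog entries above; check the official EIA API docs.
API_KEY = "YOUR_EIA_API_KEY"          # placeholder; register with EIA for a key
BASE_URL = "https://api.eia.gov/v2"   # assumed base URL for the open-data API

def fetch_series(route, params):
    """Issue a GET request against an assumed EIA Open Data route and return JSON."""
    query = {"api_key": API_KEY}
    query.update(params)
    response = requests.get(f"{BASE_URL}/{route}", params=query, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Assumed route and fields for monthly retail electricity sales.
    payload = fetch_series(
        "electricity/retail-sales/data/",
        {"frequency": "monthly", "data[0]": "sales"},
    )
    # Print whatever top-level metadata the service returns, if any.
    print(payload.get("response", {}).get("total"))
```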
The Corporate Strategist will have a passion for innovation and creativity that can drive the goals and objectives of the bank as well as a mindset for world-class results.
S/he will shape the strategic agenda, providing relevant and timely internal and external analysis, identifying strategic opportunities and making recommendations to the Executive.
Support the CEO and the leadership team in driving the development of corporate and SBU strategic plans.
Analyze market trends and competitor actions to understand implications and recommend potential changes to the bank’s strategy.
Review the strategic plan and advise changes (as necessary) towards the next plan period.
Plan and execute projects/initiatives aimed at strengthening the framework for operations across the bank’s business units.
Continuously critique the bank’s business model to ensure it aligns with the bank's mission and values based on internal and external needs assessment.
Collaborate with Human Resources and other key stakeholders across business units to develop people strategies and execute tasks relating to the delivery of the bank’s strategic initiatives.
Develop integrated marketing strategies and tactical plans to meet and exceed the marketing and business goals and objectives.
Bachelor's degree in Economics, Business Management, Engineering or a related discipline; an MBA or relevant professional qualification/certification is an added advantage.
Not less than 7 years' work experience, including in a financial institution, preferably in any of the following areas: strategic planning, loan origination and operations risk management.
Proven experience in planning, execution and leadership of new business strategies.
Excellent written and verbal communication skills; articulate and persuasive communicator with excellent presentation skills as well as proven analytical and problem solving abilities.
Excellent leadership skills, ability to motivate colleagues and inspire potential clients. | https://www.jobgurus.com.ng/jobs/view/corporate-strategist |
Police’s DNA backlog: It has ‘catastrophic consequences for the criminal justice system’ – DA
- The DA will ask for an urgent debate on the police’s DNA backlog.
- FF Plus leader Pieter Groenewald said it was “treasonous” that the police hadn’t processed any DNA in January and February.
- Police Minister Bheki Cele admitted that he only learned “by chance” that no DNA was processed during that period.
The DA and FF Plus are not backing off on the matter of the police’s DNA backlog, as well as the information system for tracking and tracing DNA samples being offline since June last year.
The chairperson of the Portfolio Committee on Police, Tina Joemat-Pettersson, also laid down the law to the police at Wednesday’s meeting.
The DA will write to the Speaker of the National Assembly, Thandi Modise, to request a debate of national importance on these matters, said DA MP Andrew Whitfield on Thursday.
FF Plus leader Pieter Groenewald said he, too, will continue to raise the issue.
At Wednesday’s meeting, Police Minister Bheki Cele admitted he only learned “by chance” last week that the police haven’t processed any DNA evidence samples in January and February.
It also emerged that the police were, on two occasions, ready to pay for a system by Forensic Data Analysts (FDA) to keep track of evidence, the Property Control and Exhibit Management System (PCEM), but Cele blocked it on both occasions.
This led to the system being switched off in June and eight million pieces of evidence can subsequently not be found.
Last week, the committee heard from Major-General Edward Ngokha, head of the National Forensic Science Laboratories (NFSL), that they have not done any processing during January and February. The backlog is now over 172 000 cases.
In a statement released on Thursday, Joemat-Pettersson said: “It is our considered view that the turnaround strategy doesn’t deal decisively with the challenges faced by the division, mainly the issue of contract management, especially in relation to information technology.
“As a result, we have recommended that SAPS implement an agreement they have with FDA, in order to enable the functioning of the PCEM system.
“This resolution was mainly premised on the need to ensure functionality of the NFSL and ensure effective prosecution of gender-based violence suspects.”
In a statement, Whitfield said the collapse of the police’s DNA processing capabilities has catastrophic consequences for the criminal justice system.
“Thousands of violent criminals are being let loose on the streets to torment their victims and commit new crimes. Murderers and rapists have been given a license to commit violent crime, without impunity, by this incapable state institution with a DNA backlog fast approaching 200 000 case exhibits,” he said.
“The fact that the minister and the national police commissioner, General Khehla Sitole, seems to be locked in a power struggle is undermining SAPS, and our communities and citizens are left to deal with the fall-out.”
Groenewald said, in a statement released after Wednesday’s meeting, that Cele’s admission that there are, at present, no chemicals available to test DNA samples in any of the police laboratories across the country “comes down to treason against the people of South Africa”.
“The minister’s nonchalant statement that he ‘accidentally’ found out about it, while visiting the scene of a crime, is indicative of the tension and conflict between the minister and the police commissioner, which is to the detriment of South Africa,” Groenewald said.
At Wednesday’s meeting, Groenewald proposed that contracts with FDA must be accepted at once, so that the PCEM system can be activated again.
The committee agreed and this was one of Joemat-Pettersson’s rulings.
She ordered the police to immediately stop their legal wrangling with FDA – the police have lost nine court cases against FDA – and have the system up and running when they again meet with the committee next Wednesday.
The committee also resolved that an investigation must be instituted to understand how the ICT impasse has taken so long to resolve.
Failure to institute such an investigation will result in the committee writing to Modise to seek alternatives to instituting the necessary investigation.
Whitfield welcomed Joemat-Pettersson’s decision in this regard.
Furthermore, the committee resolved that the police and the State Information Technology Agency (SITA) must present a plan of action, with timelines, to the committee by next week Wednesday. This is to ensure that solutions are urgently found to reverse the unacceptable situation.
Lastly, the committee will receive a detailed report from SAPS, SITA and National Treasury on their engagements with FDA, with suggested solutions.
“Ultimately, what the committee is interested in is ensuring a criminal justice system that serves the people of this country,” reads Joemat-Pettersson’s statement. | https://read.newspages.co.za/polices-dna-backlog-it-has-catastrophic-consequences-for-the-criminal-justice-system-da/ |
According to nonperturbative QCD, quarks and gluons don't exist, and in nonperturbative QED with two spinors (e.g. proton and electron) hydrogen isn't composed of a proton and an electron. If you look at the history of our natural-science knowledge about matter, there are two ways of investigating the world in physics. One is to figure out the tinier and tinier building blocks of matter: starting from condensed matter, extracting molecules and atoms, stripping off the electrons, finding the nucleus, splitting it into protons and neutrons, and finally finding out that these themselves consist of quarks, or quarks and gluons, which according to today's knowledge seem to be the fundamental building blocks of all matter (together with the electrons forming the neutral atoms, molecules and matter around us).
Since their introduction in the 1960s, drugs categorized as benzodiazepines, which include diazepam (Valium) and alprazolam (Xanax), have been widely prescribed to treat anxiety and insomnia, alcohol withdrawal, and other conditions. Although they are highly effective for their intended uses, these medications must be prescribed with caution because they can be addictive. Now, work by NIDA-funded researchers has established that benzodiazepines cause addiction in a way similar to that of opioids, cannabinoids, and the club drug gamma-hydroxybutyrate (GHB). The discovery opens the door to designing new benzodiazepines that counteract anxiety but are not addictive.
Dr. Christian Lüscher and colleagues at the University of Geneva, Switzerland, studied benzodiazepines as part of a larger project to identify the point of convergence for all neurobiological pathways to drug addiction. Their findings strongly suggest that this juncture occurs when dopamine surges in response to drug taking initiate a change in synaptic plasticity in dopamine-producing cells.
Mechanisms of Benzodiazepine Addiction (figure caption). (Left Image) Both inhibitory interneurons (labeled GABA) and dopaminergic neurons (labeled DA) are subject to the restraining influence of the inhibitory neurotransmitter GABA. A key difference, however, is that GABA influences the inhibitory interneurons largely via the alpha-1 subset of GABA A receptors and the dopaminergic neurons largely via the alpha-3 subtype. (Right Image) Benzodiazepines currently on the market do not interact strongly with alpha-3 GABA A receptors on dopaminergic neurons and so have no direct impact on dopamine release. However, the drugs do interact strongly with alpha-1 GABA A receptors, thereby curtailing inhibitory interneurons’ release of GABA into synapses with dopaminergic neurons. The net result is a lessening of GABA restraint on the dopaminergic neurons and an increase in dopamine release.
Text description: This illustration provides diagrams of neurotransmitter release at synapses in the presence or absence of benzodiazepines. The first diagram shows that, in the absence of benzodiazepines, GABA released from an axon of a neuron earlier in the pathway binds to an alpha-1 GABA A receptor on an inhibitory interneuron. Synapses on the axon of that interneuron then release GABA that binds to an alpha-3 GABA A receptor at a synapse of a dopamine neuron. The binding reduces that neuron’s dopamine release at its axonal synapse. The second diagram shows that benzodiazepines currently on the market bind to the alpha-1 GABA A receptor on an inhibitory interneuron, reducing GABA binding there and subsequent GABA release by that neuron. Without the normal GABA influence, the dopamine neuron releases more dopamine than in the first diagram.
From Receptor Activation to Dopamine Surge
The pleasurable sensations that make addictive drugs disastrously attractive for vulnerable individuals occur when dopamine levels in the brain’s reward area abruptly surge. Researchers had worked out how most addictive drugs, but not benzodiazepines, precipitate these surges. Dr. Lüscher and colleagues have now demonstrated that benzodiazepines weaken the influence of a group of cells, called inhibitory interneurons, in the brain’s ventral tegmental area (VTA). These neurons normally help prevent excessive dopamine levels by downregulating the firing rates of dopamine-producing neurons. Two negatives make a positive, so when benzodiazepines limit the interneurons’ restraining influence, the dopamine-producing neurons release more dopamine.
The Swiss researchers traced benzodiazepines’ effect on VTA interneurons to the drugs’ activation of a subset of GABA A (gamma-aminobutyric acid type-A) receptors on the interneurons. Although benzodiazepines typically activate multiple subtypes of GABA A receptors, their activation of the alpha-1 subtype is decisive for their impact on VTA interneuron behavior. These interneurons are highly sensitive to such activation because they carry abundant numbers of these receptors. By staining brain tissue, the researchers showed that 81 percent of VTA interneurons carry GABA A receptors that contain the alpha-1 subunit.
To prove that activation of alpha-1 GABA A receptors underlies benzodiazepines’ dopamine effect, the researchers administered a typical benzodiazepine, midazolam, to two groups of mice. The results supported the researchers’ proposed mechanism: In normal animals, the firing rate of interneurons decreased in response to the drug, while that of dopamine-producing neurons increased. In contrast, in animals that were genetically altered to prevent benzodiazepines from potentiating alpha-1 GABA A receptors, the drug had little or no impact on neuron firing.
A behavioral finding completed the chain of proofs linking benzodiazepines’ stimulation of alpha-1 GABA A receptors to their rewarding effects. When given the option of drinking sugar water or a sweetened solution of midazolam, normal mice imbibed roughly three times as much drug-laced as drug-free liquid. Mice with altered alpha-1 GABA A receptors, however, drank equal amounts of each, thereby exhibiting no evidence of finding one drink more rewarding than the other.
When benzodiazepines limit the interneurons' restraining influence, the dopamine-producing neurons release more dopamine.
Benzodiazepines’ newly discovered mechanism for producing reward is comparable to those of opiates, cannabinoids, and GHB. Each of the four drugs reduces an inhibitory influence on dopamine-producing cells, thereby promoting dopamine spikes.
From Surge to Addiction
Dopamine surges are transient events, but addictive drugs cause long-lasting changes in the reward system. Among the earliest of these along the path from voluntary to compulsive drug use and addiction is the migration of certain AMPA receptors (i.e., GluA2-lacking receptors) from the interior to the surface of the dopamine-producing neurons. These receptors render the cell more susceptible to stimulation by the excitatory neurotransmitter glutamate, and as a result, the cells respond to future drug exposures with larger dopamine surges that produce even more intense pleasure. Scientists also have evidence that these special AMPA receptors initiate a series of changes in neural transmission that cumulatively give rise to the range of addictive symptoms.
Dr. Lüscher and colleagues showed that benzodiazepines induce AMPA receptor migration via the alpha-1 GABA A receptors. In these experiments, brain tissue from normal mice exhibited GluA2-lacking AMPA receptors after a single injection of midazolam, but tissue from mice with benzodiazepine-insensitive alpha-1 GABA A receptors did not. Recordings of intracellular electrical currents confirmed synaptic changes of dopamine-producing neurons in the normal mice and not the altered mice. To pin down the relationship further, the researchers injected mice with two other compounds, one (zolpidem) that preferentially activates only the alpha-1 GABA A receptors, and one (L-838417) that antagonizes these receptors. GluA2-lacking AMPA receptors were expressed in dopamine-producing neurons following a treatment with zolpidem, but not with L-838417.
Conclusive Proof
The Swiss researchers hypothesize that although different addictive drugs produce dopamine surges by various mechanisms, the subsequent chain of effects is the same. Consistent with this idea, they showed that even in the absence of any drug, artificial stimulation of the dopamine-producing neurons is sufficient to induce the appearance of GluA2-lacking AMPA receptors.
In this experiment, the researchers introduced a virus containing a light-activated protein, channelrhodopsin, into the dopamine-producing cells of mice. When exposed to light pulses from an optical fiber inserted into the animals’ VTA, the channelrhodopsin stimulated neuron firing in bursts similar to those produced by addictive drugs. The result was an increase in GluA2-lacking AMPA receptors comparable to that seen following exposure to addictive drugs.
“This was a nail-in-the-coffin study to show that activity of dopaminergic neurons leads to synaptic adaptation that is involved in addiction,” says Dr. Lüscher. “This is why addiction is so difficult to treat. Even if you clear the drug from the body, there are long-lasting changes in brain architecture.”
Toward Better Benzodiazepines
Taken together, the data from the studies show that the activation of alpha-1-containing GABA A receptors by benzodiazepines calms inhibitory interneurons, increasing dopaminergic neuron firing, and leads to the strengthening of excitatory synapses that favor addictions. Dr. Roger Sorensen of NIDA’s Functional Neuroscience Research Branch says, “This is the first demonstration that acute benzodiazepine use can increase dopamine release, supporting its addictive potential.”
“Now that we know that it’s the alpha-1-containing GABA A receptor that is responsible for benzodiazepine addiction, we can design benzodiazepines that do not touch those particular receptors,” says Dr. Lüscher. Drugs that bind only to alpha-2-containing GABA A receptors, he adds, might relieve anxiety nonaddictively. “Such substances already exist for research purposes,” Dr. Lüscher says. “It’s possible that we can also create them for clinical use.”
Sources
Brown, M.T.C., et al. Drug-driven AMPA receptor redistribution mimicked by selective dopamine neuron stimulation. PLoS One. 5:12: e15870, 2010. Full Text Available (PDF,2.2MB)
Riegel, A.C., and Kalivas, P.W. Neuroscience: Lack of inhibition leads to abuse. Nature 463: 743–744, 2010. Abstract Available
Reiteration of The End is Near, Here first created in 2016 for Emirates Fine Arts Society Annual Showcase at Sharjah Art Museum.
The End is Near, Here (2018) is a video piece with dual-channel audio in English and Chinese. The piece was originally created for the Emirates Fine Arts Society Annual Showcase with the theme of apocalypse.
The narrative of the video moves through the thoughts and feelings one would encounter when facing the end of life. It is written from the perspective of a person excluded from the safe haven of Noah’s ark and facing the apocalyptic doom of the world. The first part of the video shows the destruction and chaos of the world, symbolized by the spread of red coloring in a clear bowl of water, and the realization of fear growing more and more intense as the heart beats louder and faster. The visual of the second half of the video is a rewind of the first, which symbolizes going back to the status quo of the world before the existence of everything. The video is paradoxical in that the end is shown as a return to the beginning, but in fact, with the apocalyptic end of life, one can never return to the beginning of life. The return to the beginning is symbolic of the lament of one wishing that life would start all over again. The narration of the second half of the video shows that despite all the suffocating fear, one would feel a sudden moment of calm right before the imminent end.
This iteration was presented at Art Education Expo, Shanghai Expo Center, Shanghai, China, April 16-18, 2018 and at Shenzen Design Week, Shenzen, China, April 27 - 28, 2018.
Voice by Jingyi Sun & Adley Kim and translation from English to Chinese by Jingyi Sun. | http://jiwonshin.com/create/end2018.html |
Legislation has directed schools to convene school councils that typically address issues related to curriculum, instruction, budget, and governance as one means to improve schooling. However, the expectation for improved schools through this involvement remains a challenge. The study examined issues connected to council operation in two large Kentucky school districts. Seventy-six former council members responded to twenty-nine items on a mail-out questionnaire. The areas investigated included training, support, and member effectiveness from the perspective of community members and teacher members. The findings include suggestions to improve council effectiveness and new emphasis for principal and member training.
A Nation At Risk (National Commission on Excellence in Education, 1983) called for substantial educational reform in the areas of expectations for student achievement, assessment, use of instructional time, and curriculum. Of the many recommendations in the report the authors emphasized an increase of citizen involvement in oversight of school reform efforts and in school governance.
Decentralization of decision making was a central tenet of education reform. Decentralization of authority was not new to school restructuring proposals and was influenced by societal drive to decentralization in business and government (Rallis, 1990). School based-decision making refers specifically to the decentralization of authority from the district to the school-level, including teachers, parents and administrators (Riesgraf, 2002). Hallinger, Murphy, and Hausman (1991) identified four things that have to change during restructuring: (a) decentralize, both administratively and politically, (b) empower those closest to the students, (c) create new roles and responsibilities for all the stakeholders, and (d) restructure the teaching-learning process. From this cauldron emerged the concept of the locally controlled schools, so that decisions that most affect the local school are actually made at the school level. Most of these local governance structures at the schools consist of the principal, teachers, and parents. The Chicago school system initiated one of the first efforts to pursue a major decentralization. In describing the Chicago initiative Hess (1991) stated that Local School Councils (LSC) were established at each school site to make decisions regarding the goals of this law and to utilize allocated resources to support school improvement.
In 1985, sixty-six of the one-hundred seventy-six school districts in Kentucky filed a lawsuit claiming that the finance system for education violated the state constitution because it did not provide an efficient system of education for all students in the commonwealth. In 1989, in Rose v. Council for Better Education, 790 S. W. 2d 186, the Kentucky Supreme Court ruled that the state's school system was unconstitutional. The court ordered the General Assembly to reform the property tax system and to provide an adequate education for every child. In defining an adequate education, the court specified learning goals.
Publication information: Article title: Support and Resources for Site-Based Decision-Making Councils: Perceptions of Former Council Members of Two Large Kentucky School Districts. Contributors: Schlinker, William R. - Author, Kelley, William E. - Author, O'Phelan, Mary Hall - Author, Spall, Sharon - Author. Journal title: Florida Journal of Educational Administration and Policy. Volume: 2. Issue: 1 Publication date: Fall 2008. Page number: 29+. © Florida Journal of Educational Administration and Policy. COPYRIGHT 2008 Gale Group. | https://www.questia.com/read/1G1-188276576/support-and-resources-for-site-based-decision-making |
Watch this series examining top tips from leading innovators in the financial industry on how to make innovation a success. This week: Interview with Tim Bosco, Brown Brothers Harriman.
The innovation team at Brown Brothers Harriman (BBH) is charged with advancing the firm’s Investor Services strategy through the discovery, incubation and delivery of new and differentiated products. “We believe that for any commercial entity, the number one goal of an innovation strategy should be to create value - whether that’s to generate new revenue opportunities, protect market share, or uncover material cost savings,” says Tim Bosco, Senior Vice President at BBH.
The key components to BBH’s innovation framework include cultivating a diverse and active portfolio of initiatives, limiting scope in the right way, investing in rapid prototype capabilities, and directly engaging with the FinTech ecosystem. “It’s important to keep a pulse on new technologies but also put yourself in a position to potentially be an early adopter and gain a competitive advantage in the market,” Bosco says.
It’s important to keep a pulse on new technologies but also put yourself in a position to potentially be an early adopter and gain a competitive advantage in the market.
Disclaimer: The positions expressed in the interview and video are those of the author. This material should not be construed as legal or tax advice. Brown Brothers Harriman & Co. does not monitor or maintain any of the information made publicly available on e-paying.info nor represent or guarantee that such information is accurate or complete, and it should not be relied upon as such. | http://e-paying.info/news-events/news/innovation-essentials_in-conversation-with-tim-bosco |
Time Limit: 1 second(s) | Memory Limit: 32 MB
Alice and Bob are playing a game of Misère Nim. Misère Nim is a game played on k piles of stones, each pile containing one or more stones. The players alternate turns; in each turn a player selects one of the non-empty piles and removes as many stones from that pile as desired, but must remove at least one stone. Alice starts first. The player who removes the last stone loses the game.
Input starts with an integer T (≤ 200), denoting the number of test cases.
Each case starts with a line containing an integer k (1 ≤ k ≤ 100). The next line contains k space separated integers denoting the number of stones in each pile. The number of stones in a pile lies in the range [1, 109].
For each case, print the case number and 'Alice' if Alice wins otherwise print 'Bob'. | http://www.lightoj.com/volume_showproblem.php?problem=1253 |
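The problem above is the classic misère variant of Nim, for which a well-known result applies: if every pile contains exactly one stone, the first player (Alice) wins exactly when the number of piles is even; otherwise the game behaves like normal Nim, and Alice wins exactly when the XOR of the pile sizes is non-zero. Below is a minimal Python sketch of a solution based on that result. The exact output format (assumed here to be "Case X: Alice/Bob") is an assumption, since the statement only says to print the case number and the winner.

```python
import sys

def winner(piles):
    # If every pile has exactly one stone, players have no choice but to
    # take one stone per turn; the player forced to take the last stone
    # loses, so Alice wins exactly when the number of piles is even.
    if all(p == 1 for p in piles):
        return "Alice" if len(piles) % 2 == 0 else "Bob"
    # Otherwise misère Nim reduces to normal Nim: the first player wins
    # exactly when the XOR (nim-sum) of the pile sizes is non-zero.
    nim_sum = 0
    for p in piles:
        nim_sum ^= p
    return "Alice" if nim_sum != 0 else "Bob"

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    pos = 1
    for case in range(1, t + 1):
        k = int(data[pos]); pos += 1
        piles = [int(x) for x in data[pos:pos + k]]; pos += k
        print(f"Case {case}: {winner(piles)}")

if __name__ == "__main__":
    main()
```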
DELEGATING tasks can achieve a number of results, namely a decrease in workload of the manager and an increased sense of participation and involvement in the employee.
So why do so many managers complain about being stressed and too busy, and yet are reluctant to delegate? There are two main reasons:
It takes some time and effort up front to organise your workload before delegation, and at a subconscious level, you fear someone else may do something better than you.
The trouble with delegating
When considering delegating work, do you experience any of the following symptoms of “delegatitis” (my term for having difficulty in delegating)? Here they are:
• I can do it better and faster myself;
• I can’t trust my colleagues or subordinates to do it;
• I don’t have time to explain what needs to be done;
• They already have enough to do;
• I don’t want to give this task up as I enjoy doing it;
• They messed up last time so there is no point asking them to do this; and
• I am the only person who knows how to do this.
Your degree of delegatitis varies from mild with one yes, to critical with five or more. The good news is, there are a number of great antidotes to delegatitis.
Six key points for effective delegation
• Consolidate: Decide exactly what you want to delegate. Remember that delegating is different to assigning tasks that are already part of an individual’s job description. Also, it is not an opportunity to pass on mundane activities you do not want to do. Give employees something different and stimulating every so often. When you delegate, you are still responsible for the outcome.
• Eliminate: Get rid of all the tasks that don’t need to be done. If you don’t need to do a task yourself, consider if it needs to be done at all. As management guru Peter Drucker puts it: “Do the first things first and the second not at all.”
• Interrogate: Ask yourself what you can delegate, as not all tasks are suitable. Never delegate sensitive projects. If it is confidential in any way, the work should not be outsourced. If you were assigned the task due to your specific expertise, be careful when delegating. Ensure you understand the task as you will not be able to delegate an assignment if you do not know how to accomplish it.
• Abdicate: Resist the temptation to dictate how the task should be done. You are delegating the objective, not your own methodology. Be sure to delegate the autonomy along with the responsibility. Avoid the “Let me show you” syndrome, as you will end up doing the task yourself. Ask for progress reports and don’t look over the shoulder of the person you have delegated the work to every step of the way.
• Candidate: Once you know what a particular task entails, find the best person for the job. You may find that initially he takes longer to do it than you, but that is because you are the expert. Be patient. If you have chosen the best person and clarified the objectives, he will quickly become competent and reliable. Don’t always give tasks to the same people. Share responsibilities and involvement.
• Elucidate: Explain clearly what the task involves and make the instructions as clear as possible. Set a time frame for the assignment and schedule update meetings to monitor progress and determine any need for assistance. Ask the person who is doing the work to give you his understanding of both the task and the goals. If there is a mismatch in his answers and your expectations, review the matter in detail again.
When carried out properly, delegation is a win-win situation as it allows you to make the best use of your time and skills as well as help to grow and develop other team members to reach their full potential. You will be surprised how taking on additional responsibility or new tasks can motivate and stimulate employees. | https://www.stjobs.sg/articles/123867 |
As you become more established in running your own organization and generating more business, it will be necessary to delegate functions and tasks to other employees. The main purpose of bringing on team members is to lighten your own workload, but if you fail to select capable people or do not clearly lay out your expectations, you may find yourself spending more time micromanaging and less time focused on building your business.
Douglas R. Conant, the recently retired CEO and president of Campbell Soup Company, recently shared some wise words about team building in a piece for the Harvard Business Review blog. He said “building effective teams isn’t rocket science, but it’s just as hard.”
Just as you would not try to build a rocket without a plan, only to check the schematics after the launch was a failure, you would not simply hire someone when you need the extra help and then try to figure out later why they are not meeting all of your expectations. Both efforts demand strategic planning, allocation of resources, calculations of how one system (or person) will work with another and constant reassessment of what is or is not working and why.
Before creating any new positions and hiring team members, you must understand a few basic principles of managing, empowering and engendering trust. Front-loading the work of designing clear job positions and selecting talented people will save you hours later on. You will need to seek out top talent that is capable of fulfilling your expectations and tackling the functions you create in your organization. As hires begin work, equip them with the tools to do their jobs and a clear roadmap of what you want them to accomplish. If you assign employees functions instead of tasks, requiring them to constantly report back to you becomes less of a necessity, freeing up your days to manage the business instead of individuals.
Do you have any tips for establishing trust within an organization? What kinds of team-building exercises do you use to engage employees? Have you created a hierarchy that gives everyone in the company guidance on the roles they fill and how these relate to those of their colleagues? | https://covenantgroup.com/create-teams-that-can-run-themselves/ |
Sixth Grade students have a unique opportunity this year in their new Make It: 3D Design class. Taught by David Wells, Director of Maker Programming at the New York Hall of Science, and Katie Rocker, Upper Division Director, students are exploring the field of design and the emerging field of additive manufacturing, also known as 3D printing.
While today’s commercially available 3D printers are typically used to make toys, the same techniques are used in almost all modern design workshops, from engineering to fashion design, to rapidly prototype objects, or to manufacture lower density parts used in everything from rocket engines to medical prostheses. As these techniques are refined, more products produced using these methods are available on the market.
"It is important for students to explore emerging technologies, learn design skills, and explore ways to apply the design thinking process to their daily lives,” said Ms. Rocker. “Understanding the dialogue between high- and low-tech materials and tools, and then creatively using them to achieve desired results, are integral steps to building a mindset of life-long learning."
In their first class, students hand-bound their own design notebooks, used throughout the rest of the class to record their thoughts and ideas, as well as to sketch designs for their projects. Next, students began to explore the 3D design process, not on computers, but with clay by creating coil pottery.

What do ancient pottery methods have to do with some of the most advanced manufacturing robots available today? Both use a process of building up material in layers to create shapes not possible by other methods. Coil pottery uses a long snake of clay to build up the sides of the vessel, layering the clay on top of itself. 3D printing uses almost exactly the same technique, except instead of clay, it uses a long line of thermoplastic filament, melted and cooled to form the object out of hundreds of layers of material.
After gaining an understanding of how their designs would be printed, students began exploring the software they would use to create their designs. Using simple shapes on a flat plane, their first task was to design a nametag with their name on it.
As the end of the year approaches, students are now brainstorming their final designs, which they will select one of to render on the computer and print using the school’s two 3D printers. By the end of the year, students will have not only an object that they designed themselves, but a deeper understanding of design thinking and the process by which objects are created and produced. | https://www.sthildas.org/welcome/news/school-news-post/~board/latest-news/post/students-explore-emerging-technologies-in-make-it-3d-design |
Why Beacon?
Complete Extended Research
Return to a favorite project, broaden its scope, reconsider its findings, or dive into additional data, reports, or criticism.
Find A Mentor
Get guidance from an enthusiastic faculty member who has expertise in your topic's field to enhance your questions, ideas, arguments, and presentation.
Get Academic Writing Experience
Refine, revise, expand, and meticulously document sustained academic prose.
Present To An Enthusiastic Audience
Present your work to a judge, other panel presenters, and a mix of faculty, family, and friends, all listening closely and able to ask questions about your work.
Receive Recognition
Panel presenters have submitted the best, most interesting papers, and they compete for Panel Awards that include monetary prizes and publication.
Get Conference Experience
Participate in this academic and professional conference, presenting your ideas and listening to others' ideas, and gain a range of experiences that encourage further development of valuable skills that transfer into many other contexts. | https://beaconconference.org/ |
Review is the crucial step in Eli Review, designed to help writers get the feedback they need to take an initial Writing Task and improve it in Revision Tasks. Eli is designed to help writers give and get better feedback through review to help improve the quality of their revisions.
Reviews can be very simple or very complex, depending on how instructors design them, but the Eli review process generally works like this: writers complete a Writing Task, reviewers read and respond to the drafts they are assigned, and writers then use the feedback they receive to plan and complete Revision Tasks.
Each student receives a report after they’ve completed a review that includes all of the feedback they received from their reviewers, intended to help them improve their writing, and a breakdown of how the feedback they gave was received by the writers they reviewed, intended to help them learn to give more helpful feedback.
To begin a review, click on the review name from your Course Page. You’ll be taken to an overview of Instructions and Progress where you’ll see instructions for how to respond and your overall progress toward completion.
When you’re ready to begin responding, click on the name of one of the writers you’ve been assigned to review. This will take you to the review display where you’ll be able to read and respond to classmates. Note that if your instructor chose to keep reviews anonymous, instead of names you’ll see "Classmate #1", "Classmate #2", etc.
The main review display is where you’ll prepare responses to send to the writers you’ve been asked to review.
Your progress will be saved incrementally as you review – if you accidentally close your browser window, or if you have to stop in the middle of a review, Eli will have your work saved for you. The “Save” button is just an extra precaution.
In some cases, particularly with scaled responses, instructors may ask you to explain your response, describing for the writer why you rated their writing the way you did, as well as (hopefully) ideas on what they might do differently.
Once you’ve finished your reviews, you’ll be able to access a report of what happened in the review. This report has two sections, a “Responses to Your Writing” tab that details all of the feedback you received from the review, and “Responses to Your Reviews,” which details all of the feedback you gave during the review.
This section of the review report gives detailed feedback about writing. It will help you gauge your performance, allow you to rate the feedback you got, and begin making revision plans.
This section of the review report gives detailed feedback about your performance as a reviewer. It will show you how your classmates rated the feedback you gave them and if your instructor endorsed any of your feedback.
Instructor Endorsement: some instructors may choose to “endorse” individual comments. While the reasons for doing this will vary, most instructors use this feature to send a “thumbs up” both to the writer, saying “you should consider using this in your revision,” and to the reviewer, saying “this is a great comment, more like this please!”
Revision is where better writing happens. Eli exists to help students learn to revise effectively by using high-quality feedback to plan those revisions. The Revision Plan is one of Eli’s most important features, and writers utilize it by adding feedback they receive from their reviewers.
In the review report, each comment has an “Add to Revision Plan” link. Clicking this link allows you to add that piece of feedback to any revision plan for any piece of writing, so you can use one comment to revise many pieces of writing. You’ll be able to edit the comment and add your own notes.
To learn more about revision plans, see the Revision Task section of the user guide. | https://elireview.com/support/guides/student/review-tasks/ |
On Martin Luther King Jr.’s birthday, it is a timely and important matter to distinguish between what is and what is not in fact activism, and what is simply performance.
After the 2020 resurgence of human rights activism due to the killing of George Floyd, the notion that actions speak louder than words seems to be a long-lost concept for some in the music industry.
Without a doubt, there were many effective activist approaches to the demand for justice and accountability. Yet another far less action-based approach also took hold, which in some cases took away from the functional activist approaches by leaders, organizers and well-equipped protestors.
In the following months, as someone who has worked in the audio forensics field sparingly and has taught others, I decided to assist pro bono lawyers with audio forensics analysis and enhancement in cases where protestors were met with police brutality. I fell into this field in the late 90’s, although there was no such name for it at that time.
The idea was not a publicity stunt, but was intended first to be an offline conversation between the lawyers of the victims, the victims themselves, myself and my colleague unrelated to the UPA. However, once the cat was out of the bag the idea resonated across the field of music production and sound engineering and became the fuel for social media performative activism by those who were not in fact knowledgeable of audio forensics.
I purposefully left out colleagues in the field of professional audio forensics here in the United States, as they work with law enforcement (their bread and butter), and their relationship to law enforcement is essential to the world as well. While I have worked in this field primarily for journalists in the Middle East and elsewhere, I had to take full responsibility for the action with another colleague abroad, and not involve even my own students, for fear of tainting their careers if they choose to pursue audio forensics, because of my own social activism – a choice I make and do not place upon others unless there is a real will to participate.
Once the action went public on its own, I determined how to make the best use of the situation to gain more traction with actual protestors and those in need. Instead we received hundreds of emails from producers, engineers and artists who asked to be involved – only two of them were in fact equipped to commit. Although many were capable engineers in terms of music production and sound design, they were not aware of the field and needed to have a clear understanding of the protocols of audio forensics to avoid important evidence becoming inadmissible in court.
Instead of providing help, as these well-intentioned individuals most definitely meant to do, the offers created more work that distracted from the issue itself. As a result, I was forced to give free consultations to try to keep well-intentioned individuals from corrupting important evidence, which then wasted more time, as this is a field that must be studied and practiced for years in order to be useful and effective.
We are led to believe that due to the arts being intertwined with performance, activism should also be a part of that artistic practice. If so, an artist who has developed their craft in performance can do much for the state of affairs that we live in.
But then there is another side which kicks in, especially with people who are not involved on a deep and personal level in the actual struggle. The statement “art imitates life” becomes a truly unhelpful form of activism.
Things are only made worse (or better, depending on strategy and organization) when the main stage becomes social media, especially during a pandemic. It becomes far too easy to drop a meme and feel good about one’s level of activism, but, just as the word 'producer' refers to being productive, so the word 'activism' refers to actual action. Whether it be simple or elaborate, without strategy it becomes only a reaction to a need to participate, inevitably leaving both the performer with good intentions and their audience feeling hopeless.
Perhaps the most overlooked danger in performative activism is a lack of organization, which only makes the problem itself seem unorganized - and that is furthest from the truth when it comes to systematic racism in America and across the globe. The history of racism and why it continues has everything to do with organization, which the phrase "systematic racism" most definitely brings home.
Although it makes sense in a field like the arts where attention is needed for survival without any funding as we see in our country, these aspects of our desires for the stage prevent artists from remaining relevant in the long run. It affects the originality of their work and deprives their art of integrity and truth. | https://media.upa.nyc/writings/performativeactisiminthearts/ |
Great powers die more often by suicide than murder. Political paralysis and short-term thinking rob them of the ability to keep interests and commitments in balance, to innovate militarily and economically, and to maintain their competitive edge. See the history of the Roman Empire, the Habsburg Empire, the Ottoman Empire, the Romanov dynasty, the various Chinese dynasties, and too many others to count. Is the United States the next great power slated to suffer this melancholy fate?
That was the question on my mind as I read a blockbuster report from that rare blue-ribbon panel that has something significant and compelling to say — and manages to say it in a way that even nonexperts can understand. The National Defense Strategy Commission, led by former undersecretary of defense Eric Edelman and retired Adm. Gary Roughead, was tasked by Congress to review the United States’ defense posture to determine whether it will keep us safe. The answer is a resounding and alarming no. The commission writes that “America has reached the point of a full-blown national security crisis.”
“If the United States had to fight Russia in a Baltic contingency or China in a war over Taiwan, Americans could face a decisive military defeat,” the report warns. “These two nations possess precision-strike capabilities, integrated air defenses, cruise and ballistic missiles, advanced cyberwarfare and anti-satellite capabilities, significant air and naval forces, and nuclear weapons — a suite of advanced capabilities heretofore possessed only by the United States.”
Even if the United States were able to prevail, it would face “harder fights and greater losses than at any time in decades.” Air superiority, which the United States has taken for granted since World War II, is no longer assured. And, without control of the skies, U.S. ships and soldiers would be vulnerable in ways that are difficult to imagine.
How did we get to the point where the United States would be hard-put to win even one major war — much less the two that it had planned to fight simultaneously during the Cold War? The commission pins the blame on the partisan gridlock which, starting in 2011, produced the sequestration process that automatically reduced the defense budget to cut the deficit. The commission’s report says that, in constant 2018 dollars, U.S. defense spending fell from $794 billion in fiscal year 2010 to $586 billion in 2015 — “the fastest drawdown since the years following the Korean War.” The result is that, by 2017, “all of the military services were at or near post-World War II lows in terms of end-strength, and all were confronting severe readiness crises and enormous deferred modernization costs.”
The Republican-controlled Congress, with President Trump’s support, provided a temporary boost by raising defense spending this year to $716 billion. But Trump is already tweeting about his desire to cut defense spending, with the White House suggesting that the defense budget will be reduced to $700 billion next year. If that were to happen, it would make it impossible to fund the ambitious modernization program called for by the Defense Strategy Commission. The commission’s call for annual 3 to 5 percent increases in defense spending above inflation would necessitate a defense budget of between $752 billion and $766 billion in fiscal year 2020. In absolute terms, we can afford the extra spending — defense is currently only 3.1 percent of gross domestic product and 15 percent of the federal budget. In 1958, by comparison, defense was 10.2 percent of GDP and 56.8 percent of the budget. But that was before the rise of entitlement spending.
It is difficult to imagine that Congress — especially a Democratic-controlled House — will pony up the needed defense funds at a time when the budget deficit is heading toward $1 trillion a year, in large part, because of $1.5 trillion in unnecessary tax cuts passed under Trump. The commission notes that even though the United States has been at war since the Sept. 11 terrorist attacks, we have repeatedly cut rather than raised taxes, placing this country in an unsustainable fiscal situation. “No serious effort to address growing debt can be made,” the commission writes, “without either increasing tax revenues or decreasing mandatory spending — or both.”
Yet it is impossible to imagine our political system, as currently constituted, either raising taxes or cutting spending on programs such as Social Security, Medicare and Medicaid. The only way out of this fiscal quagmire is the kind of bipartisan agreement that President George H.W. Bush and Democratic leaders forged during marathon negotiations at Andrews Air Force Base in 1990. They enacted a package of tax increases and spending cuts that set the country on a path toward a balanced budget by 1998. But the odds of such compromise in our polarized political climate are slim — especially with Republicans under Trump having given up any pretense of fiscal conservatism.
So we’re in deep trouble. We are losing the military edge that has underpinned our security and prosperity since 1945. And we have no one to blame but ourselves. Political paralysis and partisanship are sabotaging American power. | |
The Viviry watershed, located at the western edge of the Montreal urban area, has a mix of agricultural, forest, residential and recreational land cover. In the last 80 years this peri-urban region has seen extensive change. The historical landscape of agriculture, forest and sparse human settlement has become a forested residential area - a change in land use that has increased flows in the Viviry River. Compared to forests and farmland, developed areas with more impermeable surfaces increase runoff for a given precipitation event. Hence, forests, wetlands and agricultural fields provide a significant ecosystem service in this peri-urban landscape: flood mitigation. There is a direct tradeoff between development within the watershed and increased flood risk. The purpose of this study is to quantify the difference in peak flows between historical, present and future land use patterns, therefore quantifying the ecosystem service provided by natural spaces within the watershed. The scenarios analyzed are 1) historical land use (1933), 2) present day land use, 3) approved future development plans and 4) densification of development, with certain green spaces set aside for conservation under proposed local development plans.
Results/Conclusions
We calculated peak flows under the four land use scenarios and found that they have increased since 1933, and will continue to increase in the future if current development plans are followed. Historical peak flows are 27% lower than present day. This change can largely be attributed to development at the expense of forests and agricultural fields. While approved future development plans (scenario 3) will not significantly increase peak flows (1% increase), longer term development plans for the region of Montreal (scenario 4) leave forests vulnerable to development and result in increased peak flows (22%). Without thoughtful planning, the ecosystem service of flood mitigation provided by the undeveloped areas within the watershed will be reduced considerably. This analysis demonstrates the need to consider ecosystem services and watershed boundaries when planning for the conservation of green spaces. | https://eco.confex.com/eco/2015/webprogram/Paper54219.html |
NTT Software Innovation Center will not only contribute proactively to the open source community but also promote research and development through open innovation, creating innovative software platforms and computing platform technologies that support the evolution of IoT/AI services as a professional IT group. We will also contribute to reducing CAPEX/OPEX for IT and to the strategic use of IT, drawing on our accumulated technologies and know-how in software development and operation.
The main mission of NTT Software Innovation Center is to build computing systems that support various services and applications through the power of software.
As our organization is called a "Center" instead of a "Laboratories," we place importance on the practical use of technologies. Moreover, as suggested by "Software Innovation," we aim to transform the world through the power of software.
In the field of software engineering, we are working on developing engineers who can apply the software development method best suited to each system, such as waterfall or agile, and on research into software development frameworks and methodologies. As a result of these efforts, we have achieved results such as papers being accepted at top-level international conferences. One of our most distinctive activities is open source development, and we have major developers of the world's leading open source software (OSS). Within NTT Software Innovation Center, the "OSS Center" is a group of engineers who support the NTT Group's use of OSS.
By utilizing our software engineering technologies and open source software, we aim to benefit the world by creating useful computing systems, such as high-speed and advanced data processing systems. To truly contribute to society, it is necessary to think broadly about "software," including how to leverage hardware and how to create value from the perspective of users, rather than thinking about software in a narrow sense. We will take on the challenge of transforming the world by creating new value, with people from not only within NTT R&D but also around the world.
Here we post the address, map, and access information for NTT Software Innovation Center. | https://www.rd.ntt/e/sic/overview/index.html |
7 Reasons To Consider A Meditation Practice
Meditation is simply directed concentration and involves learning to focus your awareness and direct it toward something specific: your breath, a phrase or word repeated silently, a memorized inspirational passage, or an image in the mind’s eye. The benefits of meditation are numerous, and include:
- Helping lower blood pressure
- Decreasing heart and respiratory rates
- Increasing blood flow
- Enhancing immune function
- Reducing perception of pain and relieving chronic pain due to arthritis and other disorders
- Maintaining level mood
- Bringing awareness and mindfulness to everyday aspects of life
A simple form of meditation that can be practiced by anyone is to walk or sit quietly in a natural setting and allow your thoughts and sensations to occur, observing them without judgment. There are now a number of guided meditation apps that can be used on your smartphone or tablet. They may offer specific focuses for your practice, like happiness or creativity. Give them a try and see if they make starting meditation more enjoyable and more regular for you.
When you think of etiquette, you likely think of things like keeping your elbows off the table or not talking with your mouth full. As a society, we have certain rules or conventions we’ve agreed to regarding the proper way to behave in certain settings.
Well, business etiquette is the same thing: It’s about how we behave or interact with others in our work environment. And while most of the personal rules we tend to agree on apply in a work setting as well (nobody needs to see your lunch while you’re talking about your fundraising goals for next quarter), here are three tips specific to business etiquette and growing your nonprofit. (And yes, like your mama told you, manners DO matter.)
1. Be responsive. Whether it’s with the people you serve, your volunteers, your funders, or your critics, show them you’re listening. It’s great to be passionate about your mission. It’s better still to make sure what you’re offering aligns with the needs of your stakeholders. Are you paying attention when your clients tell you what they need? Are you delivering on promised deliverables to grant funders? Are you communicating—both when things are going well and when you hit an inevitable obstacle or challenge?
You want buy-in from others if you’re looking to grow your nonprofit. An easy way to get that? Let them know their feedback matters.
2. Be collaborative. There are lots of nonprofits out there, each doing important work. And the reality is that you will often find yourself competing against other nonprofits for finite resources. Despite that, resist the urge to develop an “us against them” attitude. Instead, be open to partnerships and collaborations. Together, you can pool resources and, more importantly, amplify your collective impact.
3. Be generous. Thank your donors. Recognize your volunteers. Celebrate your team members. As a nonprofit, you’ll likely always be worried about funding. But lucky for you, praise, attention, and time are free. Share them generously with those who support the work you do.
If you’re interested in growing your nonprofit (and even if you’re not), don’t underestimate the value being responsive, collaborative, and generous can have on the work you do. | https://www.powerhouseplanning.com/manners-matter-when-growing-your-nonprofit/ |
An arabesque is an elaborate design of intertwined floral drawings or complex geometrical patterns. This ornamental design is mainly used in Islamic art and architecture. The term can also refer to an ornate musical composition, especially for the piano, or to a dance position in which the dancer stands on one leg with the other raised backwards and the arms outstretched.
From French arabesque, from Italian arabesco, from arabo (“Arab”).
The arabesques and geometric patterns of Islamic art are often said to arise from the Islamic view of the world. The depiction of animals and people is generally discouraged, which explains the preference for merely geometric patterns.
The Arabesque used as a term in European art, including Byzantine art, is, on one definition, a decorative motif comprising a flowing and voluted formalistic acanthus composition.
"The Hollywood Hallucination introduces Parker Tyler’s critical arabesques, elaborated in his later books, concerning Mae West, Mickey Mouse, the Good Villain and the Bad Hero"
There has been some debate over the meaning of Poe's terms "Grotesque" and "Arabesque" in his short story collection Tales of the Grotesque and Arabesque. Both terms refer to a type of art used to decorate walls. It has been theorized that the "grotesque" stories are those where the character becomes a caricature or satire, as in "The Man That Was Used Up." The "arabesque" stories focus on a single aspect of a character, often psychological, such as "The Fall of the House of Usher."
Von Arabesken, a text by Johann Wolfgang von Goethe on arabesques.
Unless indicated otherwise, the text in this article is either based on Wikipedia article "Arabesque" or another language Wikipedia page thereof used under the terms of the GNU Free Documentation License; or on original research by Jahsonic and friends. See Art and Popular Culture's copyright notice.
Our assembly song this week is a little different because it’s not a patriotic or military song. It’s the first song found in the green, Level Three Share The Music textbooks, called “I’d Like to Teach the World to Sing”. This is a pop song from 1971 that was originally used as a Coca-Cola commercial.
But the lyrics were revised, and the song was recorded by a British pop group called The Seekers. The message of the song is to inspire peace, hope and love throughout the universe. As you learn the song, feel free to try the motions you’ll see in the video.
Click below to hear our song of the week.
Listening Example: "My Guitar Gently Weeps"
This week we will be hearing music by Jake Shimabukuro. [shim-a-BOO-ku-ro] Jake Shimabukuro is an ukulele [ook-koo-le-le] virtuoso, a music arranger and composer. He is known for his fast and complex finger work. His music combines elements of jazz, blues, funk, rock, bluegrass, classical, folk, and flamenco.
Ukuleles [ook-koo-le-les] are like small guitars, but they have only four strings. They actually are made in eight different sizes.
But only four sizes are used most of the time: the small soprano, the concert, the tenor, and the large baritone. Their overall length ranges from 21 inches to 30 inches.
Shimabukuro has made arrangements of all kinds of popular music, from “Somewhere Over the Rainbow” from The Wizard of Oz, to “Bohemian Rhapsody” by the rock group Queen. Today we’re going to hear the song that first made him famous. Shimabukuro became known internationally in 2006, when one of his friends posted a video of him playing it on YouTube without his knowledge. The video was an instant hit, causing his career to take off. It’s the beautiful song written by George Harrison of the Beatles, “While My Guitar Gently Weeps”.
The first two minutes introduce the gentle theme. The next two minutes show Jake really getting down on the uke, rock style. In the last two minutes, he shows his virtuoso skills as he returns to the main theme.
"While My Guitar Gently Weeps". Duration 6:00 minutes.
Compare Shimabukuro's version to that sung by Beatle Paul McCartney and Eric Clapton. Duration 5:50 minutes.
Jake plays "Bohemian Rhapsody" by Queen. Duration 10:15 minutes.
Jake Shimabukuro was born and raised in Honolulu, Hawaii. His life has always centered on the ukulele [OOK-u-le-le]. He started playing at the age of four, urged by his mother who also played.
Jake’s playing surpasses people’s expectations of how ukulele music sounds. No slow strumming on the beach for him!
Jake is known for his energetic strumming on the ukulele. His performances contain thoughtful, sophisticated arrangements and spontaneous, improvised passages. He also makes use of electronic devices associated with electric guitars.
Our featured video is called “Dragon” from the 2005 album of the same name. It won Shimabukuro two Hawaiian music awards, including Favorite Entertainer of the Year.
Jake creates a variety of moods in this piece, from slow, thoughtful and shimmering to hard rock style. At the beginning, he uses a pedal device to create special effects.
Pedal technology is used to create the sounds of an electric guitar. Duration 6:57 minutes.
"Kawika" is a pop-rock style video. Duration 4:03 minutes.
"Let's Dance" sounds like a Spanish flamenco guitar. Duration 3:24 minutes.
Documentary on playing techniques. Duration 4:25 minutes.
Listening Example: "Hula Girls Theme"
Jake Shimabukuro is a 5th generation Japanese-American. Once his mother began teaching him ukulele as a child, he practiced many hours a day. It takes a great amount of love, self-discipline and dedication to acquire skills on an instrument the way Jake has.
Jake has written many original compositions, including the entire soundtracks for two Japanese films. His music for Hula Girls won a Best Film Score award in 2007.
Shimabukuro’s concert performances, work with legendary musicians, media appearances, and music productions have snowballed since then. In 2012, an award-winning documentary was released about his life, career and music. It is titled Jake Shimabukuro: Life on Four Strings. This documentary has aired repeatedly on PBS and been released on DVD.
Our listening example is the theme music from Hula Girls, a film which featured hula dancing and a Hawaiian spa resort as its primary theme and setting.
The music creates an image of gentle breezes, soft waves upon the sand, and the graceful movements of the hula dancers.
This theme from Hula Girls has beautiful Hawaiian scenery to accompany the music. Duration 3:48 minutes.
Jake's brother, Bruce, is also an accomplished player. Here they play a duet.
Live stage performance of "Hula Girls". Duration 4:16 minutes.
Listening Example: "The Good, the Bad and the Ugly"
Hawaii is not the only place ukuleles are popular. The United Kingdom has a professional ukulele ensemble called the United Kingdom Ukulele Orchestra. They perform classical, pop and rock arrangements on their instruments, including some special effects and lots of humor.
One of the most entertaining is the theme from an old Clint Eastwood western called The Good, the Bad and the Ugly.
"The Good, the Bad, and the Ugly" is based on music from a Clint Eastwood film of the same name. Duration 5:06 minutes.
"Born to Be Wild". Duration 3:10 minutes. | https://www.weinerelementary.org/jake-shimabukuro.html |
A basic tenet of organizational theory is that, whenever the formal structures are inadequate, other structures emerge to compensate. And that, in a sentence, may explain why KDE Neon has emerged.
KDE Neon is a project organized by Jonathan Riddell, the ex-community leader of Kubuntu. Its goal is to provide a repository for the latest KDE packages.
That sounds like a simple goal, but it was questioned almost immediately. In particular, podcaster and openSUSE marketer Bryan Lunduke criticized KDE Neon as being basically Kubuntu under another name. Lunduke also seized on a comment that KDE Neon was based on Ubuntu because it was "the best technology" as an insult to other Linux distributions. KDE Neon violated the usual distinction between desktops and distros, he suggested, and that blurring could cause trouble.
I suspect, however, that it's far too late to worry, because that blurring has already occurred. If it wasn't already happening, it started when Ubuntu withdrew from GNOME development to develop Unity. Ubuntu did much the same in developing Upstart and Mir, preferring to build its own backend tools and become a creator rather than staying with its more traditional role as a packager of software.
Both GNOME and KDE have flirted with the idea of their own distros, and for several years prominent KDE developers did their best to develop a free-licensed tablet -- a goal that Ubuntu is about to make a reality.
An argument can be made that free software projects should stick to what they know best for the sake of efficiency. However, after some six years of this blurring of functions, I suspect that anyone who cares is more or less used to it, whether it's a good idea or not. For instance, far from being insulted by KDE Neon's use of Ubuntu, I suspect contributors to other distributions will take the comment about "the best technology" to mean no more than the obvious fact that Neon's builders are most familiar with Ubuntu, and prefer to work with it.
Organization Compensating for Organization
As for KDE Neon, it has become necessary because KDE no longer uses a single release number for its software. Instead, since 2014, KDE divides its releases into KDE Frameworks, KDE Plasma (the desktop) and KDE Applications.
This division is convenient for developers. For example, instead of Plasma developers trying to sync their work with that of developers who work on various applications, everyone can go their own way without rushing to catch up with one sub-project, or waiting for another.
In the last few years, when KDE was transitioning to Qt5 and Plasma 5, I imagine that the arrangement made sense, especially with everyone eager to avoid the misunderstandings that surrounded the release of the KDE 4 release series in 2008. The arrangement makes it easier to prevent distributions from packaging software that is not meant to be in the hands of general users yet.
For users, however, the three different numbering systems are nothing but confusing. Should they care about an announcement of a new KDE Frameworks or Plasma? More importantly, how do they know that they're getting the latest software? Having three sets of version numbers makes answers difficult to come by.
The situation is especially confusing for reviewers or anyone outside KDE who wants or needs to keep up to date about developments. In theory, they can download the latest code and compile it for themselves, but often these people have no time for compiling. They may also be unsure how to compile without looking up the procedure.
For such people, KDE Neon offers a relatively straightforward alternative. In 20-30 minutes, they can download and burn a CD, and start looking at the latest software.
Returning to the old numbering system might be even easier, but I see no sign of KDE considering that solution. Besides, returning to a single version number would cause more confusion while the release numbers were re-synced, and users would still have to wait for their distribution of choice to install the latest versions.
By contrast, so long as KDE Neon meets its goals, users who need or want the latest of everything should be able to get what they want. Meanwhile, developers in any of the three sections of KDE development can continue to enjoy the advantages of this arrangement, the problem having been solved by the addition of another organization.
Very few, I suspect, will ever install KDE Neon on their hard drive -- or, perhaps, burn its latest image to DVD. Instead, many will simply install it on a virtual machine on a regular schedule, furiously taking notes all the way.
| https://www.linux-magazine.com/index.php/Online/Blogs/Off-the-Beat-Bruce-Byfield-s-Blog/Compensating-with-Neon |
A Transportation Innovation Zone (TIZ) is a geographic area that hosts testing of transportation and public realm approaches and technologies in a real-world environment. A TIZ has been established at Exhibition Place. The City is also developing a Transportation Innovation Challenge program to invite and manage Trials at that site and, in the future, in other areas of the city.
By establishing a controlled public testing site with transparent monitoring and evaluation, and inviting third parties to conduct trials, we can learn about emerging technologies while supporting local innovation initiatives. The program will last until at least 2025.
The City is developing the TIZ and the Transportation Innovation Challenges to learn about emerging transportation technologies and approaches and how they could meet some of Toronto’s transportation needs. The program will facilitate trials by industry and academic actors in the real-world environment of Toronto’s streets. With careful design, this program can support research and development, grow local economic activity and talent, and advance the City’s mobility-related goals such as Vision Zero, accessibility, and TransformTO.
Also referred to as “testbeds” and “living labs,” the concept of providing space in a city to trial new innovations is being explored in many places around the world. Models range from purpose-built test tracks (Singapore) and city-wide test zones (Torino, Italy) to challenge-based partnerships with participation funding (TransLink in Vancouver) and testing on City-owned assets (Calgary).
Technologies and approaches that could be tested include those that use streets and sidewalks for the movement of people and goods, improve transportation operations, or improve the streets and sidewalks themselves. They will be demonstrated, monitored and evaluated in a controlled, real-world environment to understand potential benefits and impacts to key variables like safety, accessibility and privacy.
City Council asked staff to develop the TIZs in October 2019 as part of the Automated Vehicles Tactical Plan. In July 2020, City Council asked staff to work with Exhibition Place to establish a TIZ at Exhibition Place. City staff will be reporting back to council on the design of the program in early 2021.
In November 2020, the City posted a TIZ Draft Framework (PDF & Accessible Word Document) for review and comment by stakeholders. This Framework will form the basis of the forthcoming report to City Council and the design of the program.
The City of Toronto and Exhibition Place (an agency of the City) are working together to establish the first, flagship TIZ at Exhibition Place.
Exhibition Place is Canada’s largest entertainment and business events venue, attracting over 5.5 million visitors a year. The 192-acre urban waterfront site is an integral component of Toronto and Ontario’s economy. This landmark destination in Toronto’s western downtown combines urban parkland with business events, sports and entertainment, and has been a hallmark of innovation for over a century.
As a testing site, the buildings and infrastructure offer a dynamic urban environment with a range of infrastructure including roads, local and regional transit, sidewalks, cycling lanes, intersections, parking areas, indoor facilities, electric vehicle chargers and more.
Exhibition Place’s new Master Plan outlines a vision for Exhibition Place as a place of innovation, inspiration and economic development, alongside other pillars in the plan.
Collaborating and co-producing knowledge with stakeholders and the public is at the core of the TIZs program.
In mid-September 2020, the City hosted four online workshops to gather feedback and recommendations from stakeholders on the proposed Transportation Innovation Framework.
The workshops had a total of 93 participants representing a wide range of organization types and interests and generated over 400 questions, suggestions and ideas. The majority of suggestions were prioritized by participants through collaborative idea rating.
Below are the reports summarizing the Stakeholder workshops:
All results were considered in detail, with key suggestions directly influencing, and reflected in, the draft framework document.
The TIZ concept was first developed in 2018-2019 as part of the Automated Vehicles Tactical Plan, which included consultation with academic institutions, community stakeholders and non-profits, automotive and technology industry members and associations, international experts, and the public.
The specific proposal for a TIZ in Exhibition Place has been supported as part of Phase 1 of the Exhibition Place Master Plan (Next Place), which included public and stakeholder consultation.
The City has also received public input about the use of digital technology through the City’s Digital Infrastructure Plan.
In total, these consultation processes recorded the opinions of thousands of residents and hundreds of organizations, contributing to the core values, principles and general concept for the Transportation Innovation Framework.
In December 2020, the City issued a Call for Applications and Application Form for organizations developing new technologies to clear snow or apply salt to sidewalks. This Automated Sidewalk Winter Maintenance Challenge would be a first-of-its-kind pilot Transportation Innovation Challenge at the new TIZ at Exhibition Place. Applications closed on December 18, 2020 and the City is working with applicants to determine next steps.
Type (don’t copy and paste) your email address into the box below and then click “Subscribe” to receive bi-monthly updates about the Transportation Innovation Zones program. You will receive an email with instructions to confirm your subscription. | https://www.toronto.ca/services-payments/streets-parking-transportation/transportation-projects/transportation-innovation-zones/ |
PURPOSE: To provide a steel for soft-nitriding excellent in machinability as hot-rolled or as forged and furthermore excellent in soft-nitriding properties.
CONSTITUTION: A steel having a steel compsn. contg. 0.05 to 0.30% C, ≤1.20% Si, 0.60 to 1.30% Mn, 0.70 to 1.50% Cr, ≤0.10% Al, 0.006 to 0.020% N, 0.05 to 0.20% V, 0 to 1.00% Mo, 0 to 0.0050% B, 0 to 0.060% S, 0 to 0.20% Pb and 0 to 0.010% Ca and satisfying 0.60≤C+0.1Si+0.2Mn+0.25Cr+1.65V+0.55Mo+8B≤1.35, and the balance Fe with inevitable impurities is subjected to hot rolling or hot forging and is thereafter cooled to regulate its structure into a bainitic one having 200 to 300 Hv core part hardness or a mixed structure of ferrite-bainite having <80% ferritic fraction without executing heat treatment. Thus, the reduction of the soft-nitriding time and the reduction of the cost are made possible.
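Purely as an illustration (not part of the patent text), the composition window above can be read as a single alloying index. The sketch below, with made-up example values in weight percent, checks both the individual element ranges and the combined 0.60 to 1.35 constraint; the function and variable names are my own.

```python
# Illustrative check of the claimed composition window (weight %).
# Index mirrors: 0.60 <= C + 0.1Si + 0.2Mn + 0.25Cr + 1.65V + 0.55Mo + 8B <= 1.35
RANGES = {
    "C": (0.05, 0.30), "Si": (0.0, 1.20), "Mn": (0.60, 1.30),
    "Cr": (0.70, 1.50), "Al": (0.0, 0.10), "N": (0.006, 0.020),
    "V": (0.05, 0.20), "Mo": (0.0, 1.00), "B": (0.0, 0.0050),
    "S": (0.0, 0.060), "Pb": (0.0, 0.20), "Ca": (0.0, 0.010),
}

def alloy_index(wt):
    return (wt["C"] + 0.1 * wt["Si"] + 0.2 * wt["Mn"] + 0.25 * wt["Cr"]
            + 1.65 * wt["V"] + 0.55 * wt["Mo"] + 8.0 * wt["B"])

def within_claim(wt):
    in_ranges = all(lo <= wt.get(el, 0.0) <= hi for el, (lo, hi) in RANGES.items())
    return in_ranges and 0.60 <= alloy_index(wt) <= 1.35

# Hypothetical analysis, for illustration only:
steel = {"C": 0.12, "Si": 0.25, "Mn": 1.0, "Cr": 1.1, "Al": 0.03, "N": 0.012,
         "V": 0.10, "Mo": 0.10, "B": 0.0015, "S": 0.03, "Pb": 0.0, "Ca": 0.002}
print(within_claim(steel), round(alloy_index(steel), 3))  # True 0.852
```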
COPYRIGHT: (C)1995,JPO | |
Nintendo’s official home for The Legend of Zelda. Find out all about The Legend of Zelda, Link, and the kingdom of Hyrule. Games, videos, and more.
Ancient Origins articles related to legends in the sections of history, archaeology, human origins, unexplained, artifacts, ancient places and myths and legends.
Aqhat Epic, ancient West Semitic legend probably concerned with the cause of the annual summer drought in the eastern Mediterranean. The epic records that Danel, a sage and king of the Haranamites, had no son until the god El, in response to Danel’s many prayers and offerings, finally granted him a child, whom Danel named Aqhat.
The Ancient Greek Hero (PDF): Ancient Greek religion encompasses the collection of beliefs, rituals, and mythology originating in ancient Greece, in the form of both popular public religion and cult practices.
Legend of the Ancient Hero - Benjamin Yeo (Score) - Download as PDF File (.pdf), Text File (.txt) or read online. | http://dev-network.com/ontario/legend-of-the-ancient-hero-pdf.php |
If a blank PING response is received during partial testing, there can be several reasons for it, and they usually stem from Step 2 of the API Configuration.
When using the Change Test Lead Data option for date testing, enter the date only in PX format (yyyy-MM-dd).
Otherwise, the test will not work properly and the PING response area will be blank.
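As a quick illustration (not a PX feature), converting a date from a common lead format into the required yyyy-MM-dd pattern looks like this; the incoming MM/dd/yyyy format is only an assumed example.

```python
from datetime import datetime

# Assumed incoming format MM/dd/yyyy; PX expects yyyy-MM-dd for date testing.
raw = "07/23/2019"
px_date = datetime.strptime(raw, "%m/%d/%Y").strftime("%Y-%m-%d")
print(px_date)  # 2019-07-23
```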
There are buyers that require sending the expiration date of the current insurance policy. However, sometimes leads do not contain this field, even if the person is insured. In this case, no value is specified in the mandatory Expiration Date field. This causes an error on the buyer API, and testing shows a blank PING response.
To handle this, select Currently not insured in the Check value dropdown menu.
According to these configurations, the PX system checks whether the person from the lead is insured. If yes, the Expiration Date field is sent in the request. If no, the field is not sent.
Save your mappings and test them on Step 4 for Insured and Not Insured leads to check that the request is accepted by the buyer API and everything works properly.
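The behaviour those mappings implement can be summarized in a few lines of Python. This is only a sketch of the logic described above, not PX's actual implementation, and the field names are illustrative.

```python
def build_ping_request(lead, base_fields):
    """Include Expiration Date only when the lead is currently insured."""
    request = dict(base_fields)
    if lead.get("currently_insured"):  # the check configured on the mapping
        request["ExpirationDate"] = lead["insurance_expiration_date"]  # yyyy-MM-dd
    # For "Currently not insured" leads the field is omitted entirely,
    # so the buyer API no longer rejects the request over an empty value.
    return request

insured = {"currently_insured": True, "insurance_expiration_date": "2020-01-31"}
not_insured = {"currently_insured": False}
print(build_ping_request(insured, {"State": "TX"}))
print(build_ping_request(not_insured, {"State": "TX"}))
```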
Check that all values for the fields entered into the Template Editor are filled in on the Mapping Panel. Each buyer field must be mapped to the most appropriate PX field. Otherwise, the PING request can be rejected by the buyer API.
Also check that all Default Value and Custom Value For Non Collected Leads fields (mainly used for User Agent and TCPA Text fields) are filled in on the Mapping Panel.
Check that all Index tags are configured inside Cloning tags, no matter what format (XML, HTTP, JSON or SOAP) is used. Otherwise, the PING area can be blank when testing the API Configuration on Step 4.
All closing tags for Perform Check exist and are not duplicated.
All opening Perform check tag numbers correspond to closing tag numbers.
If you receive an Unmapped Error message during Complete testing, there can be two reasons for that.
1. All or some of the Regular Expressions on Step 3 are configured incorrectly. Go back to the previous step and follow the Regular Expression Templates document to check your regular expression configurations (a simple illustration of what such an expression does is sketched after the second point below).
2. All or some of the Status Codes on the Error Mapping tab on Step 3 are not mapped properly.
Usually, the buyer specification includes all errors that should be mapped to the appropriate PX fields. Otherwise, if the buyer has a testing environment, you can test your API configurations on test leads until all errors are found. If there is no testing environment, it is recommended to contact the buyer and ask for the necessary information.
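As a rough illustration of what a Step 3 regular expression typically does, the sketch below pulls a status and message out of a hypothetical XML PING response. The response layout and tag names are assumptions for the example, not a real buyer's format.

```python
import re

# Hypothetical buyer response; real layouts vary by buyer.
response = "<response><status>Rejected</status><message>Missing Field: ExpirationDate</message></response>"

status = re.search(r"<status>(.*?)</status>", response)
message = re.search(r"<message>(.*?)</message>", response)
print(status.group(1))   # Rejected
print(message.group(1))  # Missing Field: ExpirationDate
```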
All field values are properly mapped on the Mapping Panel in Step 2.
Date formats are specified correctly in Step 2.
If the request mappings are checked and everything is correct on the PX side, it is recommended to contact the buyer on this matter.
If a Missing Field error occurs on a Complete test, the PING RESPONSE code area contains a Message tag where the missing field is specified. Return to Step 2 and map this field and its values according to the buyer specification.
Not all buyers return such an error; sometimes there will be just a Rejected response without any explanation. In this case, check once more that all fields and their values mentioned in the buyer specification are mapped properly and sent in the request.
If everything is configured properly, it is recommended to contact the buyer. | https://support.px.com/hc/en-us/articles/115005953207-Testing-API-Configuration-Troubleshooting-Guide- |
Measurements are a part of the daily routine on construction sites, and this simple-looking task can be quite painful when you have to do the same thing again & again. By that I mean that first you have to take the measurements, and then you have to type those same measurements into a system to calculate the quantities. The measurements are usually taken on paper or in a triplicate book to ensure that all parties have a record of the actual measurements. This triplicate system does provide a sense of security, but if the works & measurements are too many it becomes a pain. The sheer volume of measurements can cause errors & ultimately lead to poor quality of work, leading to distrust & spoiled relations between the parties involved in the construction work.
|Image Credit: https://archiparti.co/how-to-take-site-measurements-diy-5-simple-steps/|
Solution,
Getting Hi-tech!
Google Forms can come to your aid in such an instance!
Here's the story...
I was working as an Engineer at a site in a remote area; my job included management of site works, leveling, taking measurements & coordinating activities. I used to take measurements with the company engineer at site for different items such as excavation, PCC, stub columns, etc. The weather was extremely bad, and the hot & humid climate added to the difficulty. Initially I used to take measurements in a book; while taking them I used to have two laborers to hold the measuring tape for linear measurements or the level staff for the vertical levels. After taking these readings I used to copy them into a triplicate book so that the engineer could keep a copy, the contractor could keep one & one could stay safe at site. Once the readings were copied, it was time to compute the quantities, and oh god, this was the most painful task. I had to manually type all the values into the calculator to get the quantities. I used to program the calculator, but even then it seemed a never-ending task; somehow I used to complete it. This is a workable mechanism, but it conflicts with my lazy nature, for a person like me who wants to automate everything & put in the least effort. I had to come up with something to end this donkey work & enhance my productivity on site.
|Measurements of Excavation & Plinth Works|
Then I realized that I had used Google Forms for such tasks while I was a student at an engineering college. The purpose behind making such forms was to collect data such as name, class, etc. Isn't it the same thing? I figured that the same logic could be used on site as well. The next time, I made a Google Form & carried my phone along while taking the readings. Sounds simple? It was difficult to explain to the engineer that it was trustworthy & that he could see what values were being entered. I opened the spreadsheet that pulls in values from the form & he could monitor those along with me. Within a couple of minutes he gained confidence & was happy that I had thought of such a technique. One plus point was that he could actually read it; believe me, my handwriting is even worse than a doctor's, and sometimes even I can't interpret what I've written. Another plus point was that once the values were available in spreadsheet format, there was no need for me to manually compute them. I just used Google Spreadsheets, & by placing proper formulas everywhere I was able to compute the quantities within minutes of taking the readings. What a relief it was! The work that used to take me almost two days was completed within 2 hours. Such is the power of technology. Later I discovered that I could use the Google Spreadsheets app to log the readings directly, but I scrapped that idea because I had to navigate, zoom & pan, which made it quite time consuming, & it was also not possible for the engineer to monitor the real-time inputs in such a sheet, so I continued using Google Forms instead. I was fortunate that the place where my site was had a good internet connection. The only problem was that there was no electricity to power my laptop, so almost everything had to be done on my phone itself. I made the form on my phone & also computed the quantities on the phone; you see, apps these days make life extremely easy.
So this is how Google Forms saved me a lot of effort & increased accuracy on site. It is as if I created a piece of software for myself that pings data back to a server where I can use it for processing later. Nowadays, I don't visit the site but instead just send links to the supervisors there, who, in coordination with the engineer, take the measurements & complete the task for me. And yes, just to verify that everything is correct, I do ask them to click & email me snaps of the work so that I can do a random sample check.
An example:
I had prepared this form as I had to take measurements of excavation pits at the site; as you can see, the form contains fields such as
Column No. - to type in the column number
HI - to mention the HI of the AutoLevel
AG, BG, CG, DG - These were the fields to enter the staff readings for the ground surface
AS, BS, CS, DS - These were the fields to enter the staff readings for the excavated surface
The form that I used is as below, you can go ahead & fill it with sample values to test how the output works.
So the form is like the input terminal to your database & spreadsheets are the place where you collect the data & analyse it as per your requirement.
Here is the spreadsheet
As you can see, the data from the form is logged directly into the spreadsheet, to which I've hooked a few formulas at the end that help me calculate the average ground level & the average excavation level, along with their reduced levels & the depth. This makes the task of quantity surveying very simple & easy.
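For anyone curious what those spreadsheet formulas amount to, here is the same arithmetic as a small Python sketch. It assumes the usual height-of-collimation relationship (reduced level = HI minus staff reading) and the corner readings described above; the pit plan dimensions and numbers are made-up example values.

```python
# One form submission: HI of the auto level plus corner staff readings (metres).
HI = 101.250
ground = {"AG": 1.320, "BG": 1.280, "CG": 1.350, "DG": 1.310}   # ground surface
excav  = {"AS": 2.540, "BS": 2.510, "CS": 2.560, "DS": 2.530}   # excavated pit bottom

def reduced_level(staff_reading, hi=HI):
    return hi - staff_reading                      # RL = HI - staff reading

avg_ground_rl = sum(reduced_level(r) for r in ground.values()) / len(ground)
avg_excav_rl  = sum(reduced_level(r) for r in excav.values()) / len(excav)
depth = avg_ground_rl - avg_excav_rl               # average depth of excavation

pit_length, pit_width = 1.5, 1.5                   # assumed pit plan size (m)
volume = pit_length * pit_width * depth            # excavation quantity for this pit

print(round(avg_ground_rl, 3), round(avg_excav_rl, 3), round(depth, 3), round(volume, 3))
# 99.935 98.715 1.22 2.745
```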
So this is how I used & continue to use Google Forms for much of the automation work; I would be glad to learn about how you'd implement such tools. Leave your ideas in the comments block below... | http://www.iecivil.com/2018/10/using-google-forms-on-construction-sites.html |
What Are the Most Common Manufacturing Work Injuries?
Workplace accidents can happen no matter what industry you work in. However, some industries are more dangerous than others. Factory workers in the manufacturing industry, for example, are commonly at high risk for suffering severe and even fatal injuries.
According to the National Institute for Occupational Safety and Health (NIOSH), over 13 million manufacturing workers are at risk for fatal and nonfatal injuries. Of all industries in the United States, 8% of workplace fatalities occur in the manufacturing industry.
If you are injured as a factory worker in Mississippi, you may be eligible to receive workers’ compensation benefits. Applying for workers’ compensation can sometimes be a hassle, and in some cases, your claim may get denied, or you may not be awarded the full benefits you deserve. However, it is your right as an injured worker to receive the benefits you need while recovering. Working with an experienced Mississippi workers’ compensation attorney can ensure your claim is approved so you can get the full benefits you deserve.
Common Manufacturing Industry Injuries in Mississippi
Manufacturing workers come in contact with various elements daily that can put them at risk, such as heavy machinery, hazardous materials, and high noise levels. Typically, the manufacturing industry has high safety standards and protocols in place to protect workers, but unfortunately, accidents still happen. Some of the most common injuries that occur in the manufacturing setting include the following:
Slips and Falls
Often, factory work requires accessing raised platforms, ladders, or other elevated structures. For this reason, injuries from slips and falls are the most reported in the manufacturing workplace setting. A fall from a great height can cause severe injuries such as broken or crushed bones, damage to internal organs, and traumatic head and brain injuries. In many cases, fall injuries are fatal.
Overexertion
Manufacturing work often requires intense physical labor under extreme conditions. As a result, it is common for workers in this line of work to strain or overexert themselves while performing their daily tasks. When this occurs, the body may be pushed past its physical limits, leading to exhaustion, muscle fatigue, strains, and back injuries.
Repetitive Motion Injuries
Workers in factories also often perform the same type of tasks repeatedly on a daily basis which can lead to repetitive strain. The lower back, shoulders, knees, and other joints are commonly affected by repetitive strain.
Thermal and Chemical Burns
Working with high heat, chemical substances, and other combustible materials can lead to chemical and thermal burns in the factory setting. Burns and shock can also occur when coming into contact with electricity from exposed wires or faulty machinery and equipment.
Exposure Illnesses
Workers not only suffer burns from hazardous material exposure, but they can also develop exposure illnesses that develop over time. The longer you are exposed to noxious chemicals, the sicker you can become.
Contusions, Fractures, Punctures, Lacerations, and Amputations
Coming into contact with various objects such as heavy machinery and equipment in the manufacturing industry can lead to a number of different injuries. Workers can sustain contusions, punctures, and lacerations if they get hit by an object or come into contact with dangerous equipment. Severe lacerations, fractured bones, and crushed limbs that lead to amputations can also occur when workers get caught in heavy machinery.
Connect with an Experienced Mississippi Workers’ Compensation Attorney
Factory environments can be very dangerous and put workers at high risk for injury and illness. If you or a loved one are injured or become sick as a result of manufacturing work, you may be entitled to receive workers’ compensation benefits. These benefits can help you cover the cost of medical expenses and lost wages while you recover from your injuries.
However, it is not uncommon for workers’ comp claims to get denied or for workers to receive less compensation than they deserve. A professional workers’ compensation attorney can help guide you through the application process to ensure no mistakes are made. And if your claim does get denied, they can help you file an appeal to make sure you get the full amount of benefits you are owed.
For a free consultation with an experienced workers’ compensation lawyer in Mississippi, contact Lunsford, Baskin, and Priebe, PLLC. After-hours visits are available. | https://www.lunsfordbaskin.com/what-are-the-most-common-manufacturing-work-injuries/ |
The CoSMo Company announced, on 12th December 2011, that it had joined the BioPreDyn European project, bringing its software modeling platform for biotechnology.
The BioPreDyn project
Eight European academic laboratories and three industrial partners join forces in the "BioPreDyn" project to develop new computational tools and methods, based on a very innovative software platform, to integrate into models the massive amount of data arising in biotechnology.
The final goal of this three-year project funded within the 7th Framework Program of the European Commission is to improve biotechnological processes leading to in-silico simulations.
The kick-off meeting for BioPreDyn was held on the 12th and 13th of December 2012 at the CRG in Barcelona, the project coordinator's site.
Integrating and making sense of the biological data
Biological systems involve an incredibly large diversity of molecules, reactions and interactions. 21st Century technology allows us - for the first time - to measure and obtain biological data on large scale at many different scales and levels: from molecules to whole organisms, and from tiny bacteria to humans.
However, these massive data sets are often incomplete, and of very diverse nature. Our brain is not able to deal with such complexity on its own, and the challenge ahead is to integrate and make sense of the data in order to understand and predict biological processes and their applications.
Computational modeling and simulation is absolutely essential for this daunting task.
The BioPreDyn consortium aims to develop innovative solutions by embracing collective expertise and synergies in interdisciplinary areas such as database development, scientific visualization methods, statistics, machine learning, mathematical modeling and simulation, and biotechnological engineering.
The new modeling tools
In the long term, the new modeling tools will allow the design and optimization of biotechnological production processes in a reliable, predictive and quantitative way.
The CoSMo Company will bring its unique software platform designed to represent the individual components of a biological system with a multi-scale approach to support the modeling process in its entirety.
Additionally, The CoSMo Company will enable a widespread application of this approach, both in the context of the academic research community and the private sector.
The other two participating companies will immediately benefit from the platform and the synergies with the academic modelers:
- Fluxome will improve their production processes for dietary supplements, and
- Insilico Biotechnology will optimize its pipeline for microbial biotechnological processes in the food and healthcare industry.
Statements
Julio R. Banga, CSIC researcher, in Vigo and one of the scientific coordinators of the project, emphasizes that "BioPreDyn presents a holistic approach to model building in bioinformatics and systems biology, targeting both fundamental theory and real-world applications".
Johannes Jaeger, a coordinator of the Project from the CGR, in Barcelona, points out that the project is indispensable as it "aims at creating an integrated suite of robust and solid methods to empower data-driven modelling for the systems biology and biotechnology of the future, shortening the lag time 'from ideas to the market'".
Dr. Eric Boix, the CSO of the CoSMo Company, said "We are really thrilled to have the opportunity to collaborate with BioPreDyn partners and to provide, with our software platform, the means for succeeding the challenging model coupling phase at different levels of a living organism. Several public private research partnerships have been or are addressing ambitious biological problems using the CoSMo Company software solution for modeling of morphogenesis, HIV epidemiology and immunoregulation. This project is exciting as it leads us to real-world applications".
Hugues de Bantel, CEO of The CoSMo Company said "There is a pressing need to make sense of the growing amount of data available to biologists to understand and manipulate complex biological systems. "In silico" modeling and simulation in biotechnology is a very exciting challenge for us, not only by its scale but also by the possible outcomes and positive impact on global healthcare".
Find out more
The CoSMo Company, based in Lyon (France), specializes in developing the next generation simulation software to understand, predict and study complex systems.
The Company works with academic institutes on problems in a variety of domains, including biology, urban planning, and sustainability.
Its partners seek to control the complexity of their field through dynamic scenarios to predict their future behavior and make better decisions.
In biotechnology, The CoSMo Company simulation platform allows the integration of the specific knowledge of pharmaceutical companies and laboratories into models, leading to the resolution of concrete biological problems via computational simulation.
In the future, screening potential drugs in virtual biosystems and evaluating them in virtual patients will save time and cost in the development of new drugs. | http://www.business.greaterlyon.com/news/the-cosmo-company-software-modeling-platform-selected-by-the-biopredyn-consortium-713.html |
In recent testimony to the Joint Finance Committee of the Wisconsin Legislature, WisDOT Secretary Mark Gottlieb accused “certain groups” of cherry-picking traffic numbers to show that VMT on the I-94 corridor in Milwaukee has gone down over the last decade. It is likely that he was referring to 1000 Friends’ recent analysis of traffic counts across the state, which shows consistent declines in driving on almost every single major highway.
Secretary Gottlieb's statement is simply false. Our analysis took DOT's own data and looked at traffic numbers for every single year over the last decade – and found that there was an 8% decrease in volume on the corridor, which is consistent with state and national trends. On the subject of cherry-picking, however, we found that WisDOT, in their draft environmental impact statement for the project, used counts from just one year, in one location, to establish baseline traffic volumes. Even worse – those numbers are very different from those provided to us by a project engineer, which were considerably less than what was used in the DEIS.
In their haste to expand the highway WisDOT is ignoring science that has shown repeatedly that increasing highway capacity never reduces congestion and almost always worsens it. A study by economists from the University of Toronto found that increasing capacity by 10% leads to an increase in driving by 10% – a perfect one-to-one relationship. This phenomenon known as induced demand leads to congestion on highways remaining the same or increasing after expansion. The metrics used to justify expansion are relics of twentieth century transportation planning that only seek to reduce delay for cars, while ignoring other indicators like accessibility to businesses and the quality of travel for transit, bikes and pedestrians.
The hugely disproportionate investments we make in highways come at the expense of other modes. The more supply there is of a free commodity, the more people will use it. When highways are overbuilt, we have people who normally would have lived closer to work or were likely to use public transportation, getting into their cars and driving instead. This leads to more congestion – and further calls to expand the highway.
Instead of being bogged down by this vicious and expensive cycle of highway construction and expansion, we should look at putting different transportation options on a level playing field. There are some enlightened departments of transportation that have recognized their mobility goals being at complete odds with delivering a high quality of life for their citizens. California now has an active target of reducing driving as a means to fighting congestion. The state of Oregon is carrying out cost benefit analyses of different types of projects – for example, weighting transit investments against highway expansions to see how they stack up against each other. This isn’t a liberal phenomenon – even the strongly conservative state of Tennessee is better integrating land use planning into their transportation models to prevent schools and new developments being built on empty, far away land that will need large investments in roads, electricity wires and sewer lines.
Unfortunately, Secretary Gottlieb’s statements show that our own DOT is stuck in a mentality of decades past. If we don’t invest in transportation choice and instead focus single mindedly on highway expansions, Wisconsin will be left behind by states like Minnesota, who are seeing increased economic development by funding other modes more equitably. | https://1kfriends.org/wisdot-ignores-science-cherry-picks-numbers/ |
Job Summary:
At Disney, we believe it takes great people to create the memorable experiences, products and services our consumers know and love. When we grow and develop our people, we're investing in our future. As Human Resources, we are the champions of this investment.
The Walt Disney Company's Human Resources team does so much more than just support our employees…we engage our people, drive change and help our businesses achieve results.
How do we do this? By shaping the employee experience on all levels. By partnering with our clients to create and implement innovative business strategies. By being a true thought partner and proactively anticipating business cycles. By fostering leadership success and being experts in change management. By enabling smart decisions by leveraging business data, metrics and external market insights.
In your role as the Associate HR Business Partner, you will support the lead HR Business Partners to deliver HR programs and solutions in support of both short-term and long-term business goals for our newest segment, Direct-to Consumer & International.
Responsibilities:
• Demonstrates a working knowledge of the business and takes a consultative approach in anticipating, assessing, and providing creative HR solutions to business priorities
• Implements and administers HR policies and procedures and their dissemination through various employee communications.
• Provides consultation to clients by partnering on moderately complex HR processes, procedures, precedents and initiatives integrating appropriate centers of excellence (COE) partners as necessary.
• Provides day to day performance management guidance
• Assists with key talent processes including performance calibrations, talent review, succession planning, leadership development and talent assessment.
• Assists with consulting on organizational design/change/culture
• Utilizes reports and talent dashboards that measure talent efforts and provide predictive analytics for future changes or decisions.
• Works closely with BP team, managers and employees to improve work relationships, build morale, increase productivity and retention.
• Participates in special projects at both the HRBP team and client level
• Run the intern programs globally
• Handles correspondence with clients including high level executives
Basic Qualifications:
• Minimum of 2 years human resources generalist experience in progressively responsible roles
• Experience and exposure to a variety of HR facets including organization development, employee relations, talent acquisition, learning & development and compensation
• Proven ability to build strong relationships
• Excellent consulting and conflict management skills
• Strong use of judgment to identify and anticipate client needs and make recommendations for implementation
• Ability to effectively interact with all organizational levels in a multicultural environment and build trusted relationships
• Excellent analytical skills and the ability to interpret data, identify trends and recommend multiple solutions
• Excellent interpersonal and communication skills
• Strong organizational, motivational, communication and problem solving skills
• Ability to manage multiple conflicting priorities.
• Ability to function independently with minimal supervision
• Function in a matrixed, fast-paced environment
• Basic knowledge and application of federal and state employment laws
Preferred Qualifications:
• Experience directly supporting clients
• SAP experience
• Project management experience
Required Education
• Bachelor's degree from an accredited college in Human Resources, Business Administration or related field or combination equivalent work experience.
About Direct-to-Consumer and International:
Comprised of Disney's international media businesses and the Company's various streaming services, the Direct-to-Consumer and International segment aligns technology, content and distribution platforms to expand the Company's global footprint and deliver world-class, personalized entertainment experiences to consumers around the world. This segment is responsible for The Walt Disney Company's direct-to-consumer businesses globally, including the ESPN sports streaming service, programmed in partnership with ESPN; the upcoming Disney-branded direct-to-consumer streaming service; and the Company's ownership stake in Hulu. As part of the Direct-to-Consumer and International segment, Disney Streaming Services, developer of the ESPN and Disney-branded streaming platforms, oversees all consumer-facing digital technology and products across the Company.
About The Walt Disney Company:
The Walt Disney Company, together with its subsidiaries and affiliates, is a leading diversified international family entertainment and media enterprise with the following business segments: media networks, parks and resorts, studio entertainment, consumer products and interactive media. From humble beginnings as a cartoon studio in the 1920s to its preeminent name in the entertainment industry today, Disney proudly continues its legacy of creating world-class stories and experiences for every member of the family. Disney's stories, characters and experiences reach consumers and guests from every corner of the globe. With operations in more than 40 countries, our employees and cast members work together to create entertainment experiences that are both universally and locally cherished.
This position is with Disney Streaming Operations, LLC, which is part of a business segment we call Direct-to-Consumer and International.
Disney Streaming Operations, LLC is an equal opportunity employer. Applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Disney fosters a business culture where ideas and decisions from all people help us grow, innovate, create the best stories and be relevant in a rapidly changing world. | https://www.entertainmentcareers.net/the-walt-disney-company/associate-hr-business-partner/job/272679/ |
Young people tend to leave the Alpine space because they lack personal and professional fulfilment. Furthermore a majority of decision-makers remain unaware of the benefits a young active population brings to society.
In cooperation with eight partners from five different alpine countries, GaYA aims to increase the quality of democratic processes in the Alpine space by enhancing the involvement of young people in regional governance and by developing new approaches for decision makers.
Therefore, new democratic methods are collected throughout the project and made available in order to help overcome the challenges of territorial cohesion and of involving young people in everyday political action. Implementing well-governed forms of participation has great potential for making decision-making more sustainable and fair.
Topic: How to do a swot analysis.
Author: Evie Follett.
Published: Tue, Jan 1 2019, 1 PM.
Format: jpg/jpeg.
Such lists may cover new employees, materials to be packed, or products scheduled to arrive. To-do lists describe standard business processes; they help you set out the activities to be completed within a certain time, for instance the items to be packed before an outing.
If you are looking for help in creating checklists, the templates mentioned here would be best for you. From checklists for house cleaning to project management to event management, you have list templates for every major kind of tasks and programs here. All of them can be downloaded and modified easily.
Do pay attention to the appearance of the checklist! Remember, you will be using the template for tasks and shopping purposes. None of these can be classified as fun activities. But since you have to do it anyway, why not make it as fun and enjoyable as possible? You can do so with attractive checklist templates.
Once you’ve completed the search, you can start shortlisting templates from the collection. Choose Word Templates on the basis of utility more than appearance. Once you’ve made the choice, click on the download button and download the template. Finally, you can start editing the details on the template which have been provided already and start incorporating your own details. | https://perezzies.com/business-strengths-and-weaknesses-checklist/business-analysis-checklist-business-analysis-forms-pinterest-7/ |
ACT I: CREATION
In his meager atelier an artist puts the finishing touches to his latest work, and so the story begins. A story with a beginning but without an end. The artist sells the painting to a nobleman who uses it as a decoration surrounding the everyday life of his young family. Years pass while the painting mutely witnesses the life of the family in happy and sad times. And in the darkest hour it is discarded in a dusty attic, as its only purpose has become to serve as a reminder of the past bright days.
ACT II: APPRECIATION
Hundreds of years and a thick layer of dust later, the painting is rediscovered by a stranger. It finds its way into the hands of an expert who appreciates it as a highly valuable masterpiece. The painting is auctioned to a white-suited philanthropist, owner of the most famous galleries. He unveils the painting for public appreciation and it is admired by the people, living in happy and glamorous times. Years go by as people come every day to see the painting for themselves. The gallery becomes an emptier and ever more oppressive place. The final visitor comes and goes.
ACT III: INQUISITION
Times slowly change and a new order sets in. The forgotten gallery is intruded upon by coarse uniformed imbeciles following specific orders. They take away all the beautiful treasures of art and put them in new types of galleries, where the common people are forced to renounce them. It is the new form of appreciation. While there is no obvious hope, has man really turned his back on his finest creations? Who will be the next admirer? What will the next generation of people be like? How will the story go on into the Future?
The picture represents a collage of different times, different values, different moods as it objectively depicts the whole manifestation of us. | http://mkomitova.info/portfolio/directo/infinito-synopsis-crew-and-cast/infinito-detailed-synopsis |
Support for technological development in the state of Arkansas has significantly increased over the past decade, thanks to factors such as conscious efforts to bolster STEM education opportunities and resources in schools and recognition of the positive economic impact of technological ventures throughout the region.
Grassroots organizations with the purpose of supporting entrepreneurial development and innovation through funding, networking and education have sprouted across the state and are steadily growing into lifelines for researchers and innovators, present and future.
The Arkansas Research and Technology Park (ARTP) in Fayetteville, operated by the University of Arkansas, is the blueprint for technological research and innovation in the state. And the tech park is blazing trails, sometimes literally.
A strategically planned “innovation system,” the park consists of three multi-tenant facilities and three multi-disciplinary research facilities occupying more than 120 acres. Each of the six facilities plays a unique role in helping to advance new technologies to the marketplace.
The Innovation Center is a key component to the flow of operations within the park. Featuring minimalist design elements such as clean lines, lofted ceilings, and plenty of natural light thanks to the building’s impressive window wall, the Innovation Center dresses the part as the touchstone of the tech park.
The actual building was the first LEED certified building in the state of Arkansas, as David Hinton, ARTP’s interim director, will proudly tell you. Hinton is also the acting executive director of Technology Ventures, a university department that commercializes and manages the UA’s intellectual property portfolios.
Hinton speaks of the purpose and accomplishments of each organization and project like a proud father.
“The Arkansas Research and Technology Park is now being recognized as a destination where startups and innovative companies want to be,” he said. “It is really a bustling innovation ecosystem in south Fayetteville where companies have the opportunity to start and grow while collaborating with the University of Arkansas. The park is a unique environment where companies and university labs co-exist resulting in many synergistic interactions even as you are just walking down the hall.”
One of those companies is Catalyze H2O, a water treatment startup. But its purpose goes much deeper than that.
“We primarily work on novel water remediation techniques,” research engineer Aaron Ivy said. “We’re currently working with the Army to remediate explosives out of wastewater without leaving behind any solid waste. So, what we do is try to come up with new methods of filtering water that won’t leave behind any dangerous after products.”
Catalyze H2O’s researchers will tell you that clean water underpins economic vibrancy.
“We make it our business to understand the clean water needs of industries. We target treatment goals to enable economic vitality and water sustainability.”
Catalyze H2O shares the lab space with award-winning biopolymer technology company, CelluDot, spun out of the New Venture Development course in the department of Strategy, Entrepreneurship and Venture Innovation at the UA’s Walton College of Business.
Hinton teaches the course with Sarah Goforth, director of the UA’s Office of Entrepreneurship and Innovation. They teach a structured methodology called the lean startup approach, which includes customer discovery and business-model creation and how to pitch to investors. CelluDot was started in this class, wound up winning a Small Business Innovation Research (SBIR) grant from the National Science Foundation and moved into the tech park.
“I feel fortunate to be part of the journey and to have helped them along the way,” Hinton said.
The cutting-edge companies at the tech park have earned an impressive number of federal research grants and know how to put them to good use, according to Weston Waldo of Technology Ventures.
“The companies that are within this park generate roughly 54 percent of the state’s total SBIR and Small Business Technology Transfer Research [STTR] federal grant winnings that we receive,” he said. “ARTP is a research powerhouse.”
Waldo’s official title is Venture Development Program Manager. A technical description of Waldo’s role within the tech park would be working directly with inventions and inventors who represent potential startups and licensing opportunities and serving as a liaison between the startup ecosystems of the UA and the region.
However, Waldo’s enthusiasm for what he does goes far beyond any of the technicalities. A Texas native, Waldo previously was a program manager at Texas Tech. He’s all Arkansas now.
“I think if you ask anyone here, everyone would say that they love their jobs and love what they’re doing,” he said. “I feel extremely grateful for the opportunity to be here. It sounds cliché to say that, but it’s true. I really love my job. There’s so many layers and dimensions to it, and I love it all.”
For startups, making it to the tech park can represent quite a journey.
“There doesn’t seem to be a linear path,” Waldo said. “Sometimes individuals or groups are able to move forward a little faster because maybe they met a venture capitalist or are somehow able to fund the full thing from the beginning. But normally, it starts off with someone who has an idea, and they’re thinking, ‘What do I do with this?’
“They may go to one of David’s classes and participate in some of the early-stage programs, and they start to formulate a team and solidify that idea. Then they may write a business plan and seek out grant funding or early-stage seed funding that helps them to develop some minimal viable product. Then they get their first customer, and they test that all the way through, and they grow in scale from there. That’s kind of the traditional path.
“Generally, I would say the starting point is usually when someone is willing to say, ‘Yes, I want to try it.’ Because most of the entrepreneurs that I work with are first-time entrepreneurs. Many of them begin with no business background or experience with this kind of thing, but they say, ‘I am willing to be uncomfortable and to give this a try.’”
The Innovation Center also serves as the home office of the University of Arkansas Technology Development Foundation, a university-affiliated nonprofit that manages the park and provides essential technology transfer functions to corporate partners.
Dr. David Snow is the UA’s interim vice chancellor for economic development and president of the foundation.
“[The tech park] is inviting with a live, work and play atmosphere,” he explained. “It’s incredible how this is an epicenter for innovation for the state, and it’s continually growing. We’re almost full — we’ve got about 300,000 square feet of leasable space and about 10,000 square feet of that left. And some of that just got claimed the other day, so we’re having some really exciting conversations about the next move.
“We’re currently having some conversations about trail development in the city that will improve accessibility in the area. And there are a lot of other projects like that which are keeping us busy.”
The park is poised to continue its path as a growing community of thinkers and planners, thriving on enthusiasm from its leadership and a solid network of support from scientists, educators and entrepreneurs working together to help each other prosper. Waldo stressed there’s always room to grow that network.
“I think sometimes when people hear the word, ‘entrepreneur,’ or the details of starting a business, it sounds foreign to them,” he said. “They automatically think of Elon Musk or Bill Gates, and there’s no in between. I really wish more people knew that there is a seat at the table for them, no matter where you are in life, whether you’re retired or a college student or other professional. There are so many opportunities to get involved. | https://armoneyandpolitics.com/uas-tech-park/ |
We curate gifts that are inspired by nature and handcrafted by our in-house artisans, to suit your requirements.
SUPPORTING COMMUNITY OF LOCAL ARTISTS
Cooke & Kelvey carries the 160 year old legacy of silversmiths that produce ethically handcrafted products. The intricate designs are handcrafted by talented local artisans who have been the foundation of our brand since inception. We at Cooke & Kelvey encourage, support and provide for the artisans and their families by promoting them and creating employment locally. | https://www.cookeandkelvey.com/ |
Baby Birds at the South Florida Wildlife Center (SOURCE: CBS4)
FORT LAUDERDALE (CBSMiami) – Love is in the air at the South Florida Wildlife Center as this time of year, also known as “baby season,” the nursery is packed with baby birds.
“On any given day we may receive up to a hundred or more baby mammals and baby birds coming through our doors in need of assistance,” said Sherry Schlueter of the South Florida Wildlife Center.
The nursery is filled with baby birds that are brought in by people who find them alone. The baby birds are then handfed and the ones that are ill are nursed back to health.
“We definitely do our best and provide the best food and cage as possible,” said Jessica Sayre who feeds the animals, “but definitely mom knows best, parents know best.”
What many people don’t realize is that a lot of the birds they find alone are actually not abandoned.
“Sometimes well-intentioned people swoop in too quickly concerned about a baby that they believe to be alone,” said Schlueter.
“Not every bird found on the ground needs to be brought in,” said wildlife veterinarian Dr. Renata Schneider. “This is part of the process of them learning to fly. They get a little shove out of the nest as fledglings.”
Schlueter recommends that if you see a baby bird alone on the ground to give it some time before attempting to rescue it. Look around to see if its mother is nearby. If you have questions, call for advice.
“When a person takes a baby animal from the wild like that, without observing first, without getting good advice from our wildlife rehabilitators, they, in fact, may be kidnapping that baby away from his or her parents, when the parents may be there trying to raise the baby,” Schleuter added.
If you happen upon a baby bird, and are in a rush, don’t be afraid to pick it up and put it in a higher place to protect it from predators. It is great that people are concerned for wild creatures, but it’s also important to keep in mind that in some cases, the best help is to simply let nature work its magic.
“All of us should be looking out for young, whether they’re feathered, furred or on two legs like human beings.”
| |
Policy outreach and communications - what works for improving food security and nutrition at the country level?
The challenge
Technical notes, research reports, policy briefs, etc. on food security and nutrition (FSN) are often targeted at decision-makers and aim at contributing to evidence-based policy making. However the degree to which the FSN information produced is actually used by decision makers, and influences policy making, remains unclear.
Through this forum we would like to explore the factors that contribute to our evidence and knowledge actually being used in policy making processes – in particular at the country and regional level.
We have prepared an optional template for capturing your success story which we encourage you to use.
The purpose of this discussion: collect case studies and concrete examples of successful policy outreach
We would like to gather concrete examples of how the FSN information produced by your organization has been used by policy makers and influenced policy dialogue and decisions in your country or region. We would like you to think of factors such as:
- What innovative strategies and channels have you used to reach policy makers and get feedback on their emerging needs?
- Have you ever significantly changed your communication or policy outreach strategy? How did you change it? Did you get better results?
- What role do intermediaries (the media, “champions” in the government, etc.) play in helping you communicate your recommendations to policy makers?
- If you are a policy maker or user of FSN information, how do you communicate your information needs to the information producers? What should information producers consider when trying to increase the use of their evidence by policy makers?
More than anything we would like to hear your success stories about what actually worked in terms of your information being used by policy makers!
How we will use the information that comes out of this forum discussion
After the discussion closes we will compile concrete examples and a list of recommendations for making sure the FSN information we produce contributes to evidence based policy making. This document will be available through this website and sent to forum participants.
Looking forward to meeting you online!
Facilitators of the discussion:
Denise Melvin (Communications and Outreach Officer, FAO), | http://www.fao.org/fsnforum/ar/node/2412 |
Technical Field
Description of the Prior Art
The Invention
Preferred Embodiments of the Invention
Brief Description of the Drawings
Possible Laboratory and Industrial Applications
Example 1
Example 2
Example 3
Example 4
Example 5
Example 6
The present invention concerns the quantitative determination of hydrogen peroxide in water media and of substrates capable of being oxidized enzymatically with the formation of H2O2, as well as the enzymes which are involved in such reaction and also the peroxidase which catalyzes the oxidation by H2O2 of other substrates.
The measuring of hydrogen peroxide concentration in aqueous systems is important in many cases such as, for instance, when using H2O2 as an oxidant for the treatment of effluents (see W. H. KIBBEL, Peroxide Treatment of Industrial Waste Waters, Industrial Water Engineering, August-September 1976) and in medical diagnostic analysis. Thus, in connection with the medical aspect, many analytical techniques exist based on enzymatic oxidation reactions which involve the quantitative production of H2O2. Such techniques are particularly valuable because of the highly selective and sensitive behavior of some enzymatic systems toward specific substrates.
Thus, it is well-known to use an "oxidase" for catalyzing the quantitative oxidation of a substrate with the formation of a corresponding amount of hydrogen peroxide. This H2O2 can be thereafter measured by another enzymatic reaction in which an indicator dye is oxidized by this H2O2 in the presence of a peroxidase, whereby the intensity of the color developed is a measure of the amount of the H2O2 present in the system and, consequently, a measure of the amount of the substrate originally present which generated the said H2O2 upon oxidation.
The enzymes "oxidase" suitable for such determination are usually named from the type of substrate they can act upon; thus, glucose oxidase specifically catalyzes the oxygen oxidation of glucose into gluconic acid with liberation of H2O2; cholesterol oxidase acts similarly toward cholesterol, etc.
Examples of oxidases can be found in the literature familiar to those skilled in the art, namely in "Enzyme Nomenclature Recommendations" (1972), International Union of Biochemistry, Elsevier Scientific Publishing Co.
There exist also several different peroxidases which are suitable for catalyzing such oxidations by H2O2, among which the peroxidases of horse-radish and of Japanese-radish are well-known. Various peroxidases are described by BOYER et al. in "The Enzymes" Vol. 8 (1963), Academic Press. Moreover, hemoglobin, or rather some of its constituents, can also act as a peroxidase in some cases.
Besides the method mentioned heretofore, several other enzymatic routes can be used for determining substrates in biological fluids, for instance glucose in blood or urine. Thus, besides titrating the H2O2 formed by the glucose oxidase catalyzed oxidation reaction by quantitatively oxidizing a dye in the presence of peroxidase, the amount of said H2O2 can also be measured polarographically by means of an electrode equipped with a semi-permeable membrane. Otherwise, instead of measuring the H2O2 formed, the oxygen consumed in the oxidation of glucose can also be measured with an electrode provided with a gas permeable membrane, e.g. an electrode such as the well-known "CLARK" electrode.
However, the above analytical procedures are not free from some drawbacks. For instance, the colorimetric method requires that the sample be inherently colorless and not turbid, otherwise significant measurement errors may result. Also, in connection with the electrometric methods involving membranes, the latter require careful maintenance, otherwise bacterial contamination may occur with consecutive spoiling of the electrodes. It is also desirable to have a general system and method enabling the determination of either organic substrates that will generate H2O2 upon enzymatic oxidation, or the H2O2 formed during this oxidation, or even the enzymes themselves that act as catalysts in such reactions.
The present invention proposes to remedy the above drawbacks and provide such a versatile analytical method and system. It is based on the known phenomenon that some fluoro-compounds are oxidized by H2O2 in the presence of peroxidase with the quantitative splitting of the fluorine atom into fluoride ion. This is described in "Inorganic Biochemistry", Vol. 2, chapter 28, pages 1000-1001, (1964) Elsevier Scientific Publ. Co., in connection with some investigations on the peroxidase oxidation of organic halogen compounds having stable C-F, C-Br and C-I bonds.
One compound which is particularly suitable in connection with this reaction is p-fluoroaniline. Now, the present inventor has found that the F- ions generated during this reaction can be easily and precisely measured with an F- selective electrode, for instance of the following type: "96-09" made by ORION RESEARCH Inc., Cambridge, Mass., USA. Hence, the method of the invention, which is devised for quantitatively determining, in aqueous media, either one of the two following constituents, H2O2 or peroxidase, comprises reacting said medium with an excess of an organic fluoro-compound the C-F bond of which is splittable by the peroxidase catalyzed action of H2O2 with consecutive liberation of fluoride anion, and electrochemically measuring the amount of the latter by means of a fluoride selectively sensitive electrode. Or, in other words, the rate of F- production is proportional to the amount of H2O2, keeping the concentration of peroxidase constant, or this rate is proportional to the amount of peroxidase present if the amount of H2O2 present is large enough for allowing the quantity consumed during the measurement reaction to be neglected. Therefore, the conditions of analysis must be adapted for either one or the other type of measurement according to the means known to people skilled in the art.
The F- concentration with time is measured electrometrically with an electrode which is very sensitive to F- ions but very insensitive to the other substances present in the sample. This is one of the advantages of the method. Another advantage is that such electrode is inherently sturdy, relatively insensitive to shock and easy to maintain and to calibrate which makes it significantly more easy to operate than the electrodes of the prior art.
Thus, the method of the invention is based on the electrochemical measurement of the F- ion produced by the peroxidase-catalyzed reaction of the fluoro-compound with H2O2, referred to hereinafter as reaction (1) (the formula is not reproduced here).
Naturally, the same basic technique can be applied to the case when the hydrogen peroxide is formed in situ by some enzymatic oxidation of a substrate. Thus, in the case where glucose is oxidized in the presence of glucose oxidase with simultaneous formation of H2O2, the latter or the glucose oxidase used can be measured by the above method. Consequently, the present invention also deals with the measuring of glucose and, alternatively, of glucose oxidase, according to the following set of reactions: the glucose oxidase reaction (1a), in which glucose is oxidized by oxygen into gluconic acid with formation of H2O2 (formula not reproduced here), and reaction (1) just mentioned above.
Similarly, the method of the invention also applies to the determination of other substrates the catalytic oxidation of which involves the quantitative formation of hydrogen peroxide. This is for example the case of cholesterol which oxidizes in the presence of cholesterol oxidase.
The general principle of the method and its applications can be explained briefly as follows: The basic system of the invention consists in having present together, in a buffer, suitable quantities of H2O2, a peroxidase and the fluoro-compound. H2O2 is the oxidant and the peroxidase acts as the catalyst in the oxidation of the p-fluoroaniline, which behaves as the acceptor and is used in relatively large excess for avoiding its actual consumption to influence the rate equation. Therefore, the reaction rate will be related to the respective concentrations of H2O2 and peroxidase. If peroxidase (the catalyst) is kept constant for a given set of measurements, the rate will be related to the amount of H2O2, and various quantities of the latter can be measured by measuring the corresponding rate of F- formation resulting from the p-fluoroaniline oxidation. Further, if rate measurements are taken over a very short period during which the change in concentration of the H2O2 can be neglected, the reaction becomes pseudo-zero order, which facilitates computing the rate results. When it is wished to measure peroxidase instead of H2O2 with the present system, allowance is made for having a large excess of H2O2 relative to peroxidase (saturating H2O2), whereby the consumption of said hydrogen peroxide can be practically neglected and the measured rate of F- formation is proportional to the amount of peroxidase.
When the above system is to be used for the determination of precursor systems involving the oxidation of an organic substrate in the presence of the corresponding oxidase and formation of H2O2, the approach is similar to what is explained above. Thus, if the system involved concerns, for instance, the oxidation of glucose by oxygen in the presence of glucose oxidase and it is wished to measure the glucose contained in a sample, the conditions will be adapted for having the H2O2 produced by the oxidation of glucose be consumed by the peroxidase catalyzed reaction at a rate much faster than that of the formation of said H2O2 itself. Hence, the liberation rate of F- will then be a measure of H2O2 formation in the oxidation of glucose. Such conditions can be achieved because the peroxidase reaction is, per se, much faster than the oxidase reaction and because the relative amount of the respective enzymes in the system can be adapted, from case to case, to maintain such conditions valid by the usual means familiar to people skilled in the art.
Thus, in the presently examplified situation, either the glucose can be determined in the presence of a suitable and fixed amount of oxidase, or the oxidase itself can be determined in the presence of a "saturating" quantity of glucose. The general scheme is adaptable, from case to case, to other systems involving organic substrates and corresponding oxidases.
Naturally, other electrodes than the type mentioned earlier are also suitable for being used in the method of the invention provided they are specifically adapted for the determination of F- ions in the presence of other dissolved substances. When these conditions are met, the electrode can be combined with a reference electrode of a classical type and the electrode system can be connected to any suitable reading device for recording the measured electrochemical parameters (amplifier, meter, recorder, etc.), as will be seen hereinafter in more detail.
On a practical standpoint the general analytical scheme, illustrated for example in the case where H2O2 is measured, is as follows: The sample to be measured (solution of H2O2) is mixed with a reagent solution containing the peroxidase and an excess of p-fluoroaniline, and the rate of F- liberation is measured at constant temperature (room or any other suitable controlled temperature) with the electrode system. If the analytical sample does not already contain the H2O2 to be measured, i.e. if the analysis concerns the measuring of a substrate generating the H2O2 upon enzymatic oxidation (for instance glucose), the reagent solution will also contain the corresponding oxidase. If it is wished to measure glucose oxidase instead of glucose, then the system will contain an excess of glucose. As mentioned earlier, in this case the generation of H2O2 is the rate determining step and the actual observed rate of F‾ liberation will then correspond to the rate of oxidase catalyzed oxidation of glucose.
In order to determine the rate of F- liberation for an unknown sample, reference to a calibration curve should preferably be made. A calibration curve can be obtained by measuring, as described above, a series of samples containing known concentrations of H2O2. For each sample the rate of F- is recorded and the slope of the rate curves, at a time (which is of course the same for each sample) where the rate curves are about straight, is measured. Then, these slopes are plotted against H2O2 concentration, thus providing the standard reference curve. The measured electrometric parameters to be used in determining the rate curves can be the voltage readings of the electrode system (mV) or, better, the corresponding [F-] values as calculated from the Nernst equation, which in this case has the form E = E' - S·log10[F-], where E is the recorded potential and E' is a constant inherent to the system which is determined experimentally and which involves the activity factors and the liquid junction potentials. S is the "Nernst slope", which is also constant and equals approximately 59 mV for a tenfold change in the concentration of F-, where the latter is expressed in moles/l. If the [F-] values calculated from the above relation are used in the rate curves instead of the mV values, straighter rate curves are obtained, the slope of which is easier to determine and which permit drawing more accurate reference graphs.
Thus, the present analytical process and system, which makes use of any type of existing F- selectively sensitive electrode and any type of organic fluoro-compound susceptible of quantitatively generating F‾ ions by the catalytic oxidation thereof with H2O2, but preferably of p-fluoroaniline which is enzymatically oxidized by H2O2 in the presence of a peroxidase, enables the quantitative determination of a variety of chemical or biochemical constituents usually present in biological fluids such as blood, urine and saliva and which can be enzymatically oxidized (by air or O2) with the formation of H2O2.
Fig. 1a illustrates the experimental calibration curves taken from Example 1 which show the potential (mV) variation of the electrode plotted against time for samples of different H2O2 concentrations.
Fig. 1b is a graph obtained by plotting the initial slopes dV/dt of the rate curves of Fig. 1a versus the corresponding H2O2 concentrations.
Fig. 2a is taken from the results of Example 2 and shows, as in Fig. 1 a, the F- formation rate curves (mV versus time) measured in the case of the glucose oxidase oxidation of different samples of glucose solutions.
Fig. 2b shows as in Fig. 1 b the plot of the slope d(mV)/dt of the curves of Fig. 2a versus the corresponding concentrations in glucose of the samples.
Fig. 3a illustrates experimental rate curves (mV versus time) similar to those of Fig. 2a but concerning the oxidation of cholesterol in the presence of cholesterol oxidase with formation of H2O2 and the corresponding liberation of F- ions by the quantitative oxidation of p-fluoroaniline with said H2O2.
Fig. 3b is a standard curve obtained by plotting the dV/dt values of the curves of Fig. 3a against the cholesterol concentrations of the corresponding samples.
Fig. 4a is a graph similar to that of Fig. 2a. Besides the interrupted line representing the time-versus-millivolt curve for the analysis of a 2 g/l glucose solution (whose catalyzed oxidation provides the H2O2 necessary to liberate a corresponding amount of F-), it shows a rate curve (full line) in which the concentration [F‾] of the fluoride anion, in µg/l, calculated as shown above from the Nernst equation, is plotted versus time.
Fig. 4b is a graph showing the change, versus glucose concentration (as measured from a series of known samples), of the slope d(mV)/dt (interrupted curve) and of the slope of the corresponding calculated values d[F‾]/dt.
Fig. 5a is a graph showing the kinetics of F- formation in systems used for measuring variable amounts of peroxidase in the presence of an excess of H2O2.
Fig. 5b is a plot of the slopes at 30 sec of the curves of Fig. 5a versus the corresponding peroxidase content of the samples.
Fig. 6a is a plot of [F-] versus time for the reaction involving the measuring of glucose oxidase in the presence of an excess of glucose.
Fig. 6b is a plot of the slopes dV/dt at 30 sec of the curves of Fig. 6a versus glucose oxidase concentration.
Figs. 7 to 10b are discussed in Examples 7 to 17.
The following examples will better illustrate the practical aspects of the invention with reference to the annexed drawings.
(i) Enzyme solution S_E: 10 mg of horse-radish peroxidase (600 U) were dissolved in 10 ml of acetate buffer 0.05 M, pH 6.4.
(ii) Solution of organic fluoro-compound S_CF: 1.7 ml of glacial acetic acid and 1.74 g of NaCl were dissolved in 20 ml of H2O, to which were thereafter added enough water and 5 M NaOH solution to give 30 ml of acetate buffer at pH 5-5.5. Then, there were added 0.3 g of TWEEN 20 (polyethylene-oxide sorbitan monolaurate, ICI) and 0.2 g of sodium lauryl sulfate at a temperature of 60°C; then, there was added a solution of 0.3 g of p-fluoroaniline in 20 ml H2O and the mixture was stirred at 60°C until a homogeneous solution was obtained. Then the solution was allowed to cool, after which it was made up to 100 ml by adding some more water.
(iii) Reagent solution S_R: This was prepared just before use and involved the mixing of 5 ml of the solution S_CF and 0.1 ml of the solution S_E.
These solutions having been prepared, the analysis was carried out as follows.
Thus, 5.1 ml of the S_R solution were placed in a plastic beaker and a F- selectively sensitive electrode (Type 96-09, ORION RESEARCH) was introduced therein. This electrode had the reference electrode combined therewith, but any other F- selectively sensitive electrode with separate complementary electrode can also be used in the circuit. The electrode was connected to a suitable potentiometer, in this case to a recording voltmeter of type "CORNING EEL 112 Digital Research pH meter" from CORNING SCIENTIFIC INST., MEDFIELD, Mass. USA. The readings were in mV (relative). When the reading was stable, 0.1 ml of a calibrating solution of H2O2 was added under magnetic stirring. The calibrating H2O2 solution was any one of 0.4, 0.8 and 1.6 g/l H2O2 aqueous solutions. Then, the voltage readings began to change and were recorded automatically with time on the recording chart of the voltmeter. Fig. 1a shows the three curves which were recorded for the above three samples of H2O2. It can be seen that the initial part of said curves is reasonably straight. The slope of these curves was then plotted against the corresponding H2O2 concentration, which provided the graph of Fig. 1b. It can be noted that the curvatures of the rate curves of Fig. 1a are normal rate curvatures, although a certain slowing-down of the reaction rate occurs with time due to some extent of poisoning of the enzymatic oxidation by the F- anions. The standard curve dV/dt of Fig. 1b is sufficiently straight for being used as a reference in the determination of unknown samples of H2O2. For such determination the unknown sample is treated exactly as described above, the kinetic curve is recorded, the slope at the proper point is measured and the corresponding concentration of the sample is determined by using the standard curve of Fig. 1b. Naturally, the standard curve can be extended to values below 0.4 g/l or beyond 1.6 g/l of H2O2 by using other calibrating samples of the desired concentrations.
(i) Enzyme solution S_E: Such a solution was prepared by dissolving 3000 U of glucose oxidase and 600 U of horse-radish peroxidase in 7 ml of 0.05 M acetate buffer (pH 6.4) and making it to 10 ml exactly with more buffer. The solution was stored at 3°C before use.
(ii) Solution of p-fluoroaniline S_CF: This was identical with the corresponding solution of Example 1, with the only difference that 0.12 g of EDTA (ethylenediamine tetraacetic acid) were further added to complex metal ions possibly present in the samples and which might block some F- ions and vitiate the measurements.
This Example concerns the analysis of glucose in aqueous solutions (see reactions (1a) and (1) referred to above).
Then the preparation of a reference graph was performed as described in Example 1. 5 ml of solution S_CF and 0.1 ml of solution S_E were placed in a polythene beaker, to which were added, under agitation, 0.1 ml of a standard solution of glucose. Such standards were water solutions containing 0.5, 1, 2, 3, 4 and 5 g/l of glucose dissolved in a 1 g/l benzoic acid solution. The benzoic acid was used as a preservative. Then the rate curves were run and the recorded values reported on Fig. 2a, after which the slopes, determined on the straightest part of the curves, were used to make the standard graph of Fig. 2b.
The standard curve of Fig. 2b was then used to determine the glucose concentration of an unknown sample by mixing 0.1 ml of said sample with 5 ml of the S_CF solution and 0.1 ml of the S_E solution and running the rate curve exactly as described above. The sample was a commercial control serum sample (Type P, Lot X-2739) from HOFFMANN-LA ROCHE & CO., Basel, Switzerland. This material was bought in lyophilized form and was diluted as directed by the data sheet. The results of the test are illustrated by curve CS on Fig. 2a. The slope of this curve was 0.348 mV/sec and corresponded to 2.18 g/l of glucose according to the chart of Fig. 2b. According to the manufacturer of the sample, other methods of the prior art for glucose determination had given the following analytical values:
Thus, the value measured by the means of the present invention fits well within the above-mentioned interval.
It should however be noted that, in order to improve the accuracy of the standard curve and, consequently, of the measurements, it may be advantageous to utilize, instead of the mV values directly furnished by the voltmeter, the corresponding [F-] values as calculated from the Nernst equation mentioned above. If a graph is prepared by plotting the calculated concentrations [F-] versus reaction time t (see Fig. 4a), a line is obtained, the slope of which is practically constant over a long period (about 6 to 100 sec); this is so because the curvature due to the existence of a log type correlation in the rate curve has been eliminated. It is therefore easier now with the curve of Fig. 4a to determine accurately the parameters governing the kinetics of the F- production than with the voltage rate curves of Fig. 1a or 1b. In general, it is advantageous to measure said slope at the time t = 30 sec, zero time being the moment when the electrode system has reached equilibrium. In this connection, it is also useful to mention that the addition of a small quantity of fluoride to the reaction mixture ([F‾] ≅ 10⁻⁵ M) is beneficial since it reduces the equilibration period to about 60 sec as compared to about 6 min without the additional fluoride. These improvements are further commented upon hereinbelow in connection with Example 4 and Fig. 4b, in which the slopes measured as per Fig. 4a are plotted against the corresponding calibrating sample concentration.
(i) Enzyme solution S_E: This solution was prepared by dissolving, in 5 ml of 0.1 M phosphate buffer at pH 6.0, 100 U of cholesterol oxidase, 600 U of horse-radish peroxidase and 5 mg of TRITON X-100 (isooctyl-phenoxy-polyethyleneglycol, ROHM & HAAS, USA). Then, further phosphate buffer was added to make 10 ml. The solution was stored between 1 and 5°C before use.
(ii) Solution of p-fluoroaniline S_CF: This was prepared as in the previous Examples by dissolving at 60°C, in about 50 ml of 0.1 M phosphate buffer, 0.3 g of TRITON X-100, 0.1 g of sodium cholate, 0.58 g of NaCl, then 0.3 g of p-fluoroaniline under stirring. When the solution was well homogeneous, it was allowed to cool and was completed to 100 ml with phosphate buffer.
(iii) Cholesterol standard samples S_CH: These were made by dissolving cholesterol (quality "PRECISET" from BOEHRINGER, MANNHEIM, Germany) in water, thus making samples at 1, 2, 3 and 4 g/l. Analyses were carried out on aliquots of 0.01 ml.
This Example deals with the analysis of cholesterol according to the cholesterol oxidase reaction (oxidation of cholesterol with formation of H2O2; the formula is not reproduced here) and the reaction (1) discussed above.
For the analysis itself, the same procedure described in the previous Examples was used: 5 ml of solution S_CF and 0.1 ml of solution S_E were agitated in a plastic beaker while measuring the potential by means of the F- sensitive electrode system described above. When equilibrium was reached, 0.01 ml of one of the samples S_CH was added and the voltage change with time was recorded. The curves obtained are shown on Fig. 3a. Then the slopes of the curves were measured as in the previous Examples and the slope values plotted against the corresponding cholesterol concentration of the samples. This is shown on the graph of Fig. 3b, which was thereafter used for determining the cholesterol in unknown solutions, aliquots of such solutions having been subjected to the same operational procedure as the calibrating samples described above.
(i) Enzyme solution S_E: A 0.05 M acetate buffer was prepared according to usual procedures and a solution was prepared by dissolving 600 U of horse-radish peroxidase and 300 U of glucose oxidase in 10 ml of the buffer (pH = 6.4). The S_E solution was stored at 3°C before use.
(ii) Solution of p-fluoroaniline S_F: This solution was made by dissolving, in 199 ml of 0.5 M acetate buffer, pH 5.5, 0.18 g of EDTA, 0.1 g of p-fluoroaniline and 1 ml of a 10⁻³ M solution of NaF.
This Example is particularly intended for illustrating the preparation of a standard graph by plotting the d[F―]/dt values versus the concentrations of the samples under analysis.
Then, the analysis was carried out as in the previous Examples, working with 4.8 ml of solution S_F, 0.1 ml of solution S_E and 0.1 ml of glucose solutions (see Table below), and operating under magnetic stirring in a polyethylene beaker. The voltage versus time curves were recorded and the mV readings at t = 30 sec after equilibrium were converted to the corresponding [F-] values by the above-discussed Nernst equation (µg/l units). The results are collected in the following Table.
These figures were used to prepare the graph of Fig. 4b. The dotted line on this graph represents dV/dt plotted versus glucose concentration whereas the full line represents d[F―]/dt versus glucose concentration. The drawing clearly shows that the full line is straight within measurement errors. It was used as a standard for the evaluation of unknown glucose concentrations according to the procedure of the invention.
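A minimal numerical sketch of this procedure is given below in Python: a straight line is fitted to initial-rate values measured on calibrating samples, and the fitted line is then inverted to estimate the concentration of an unknown sample. The concentrations and rate values are invented placeholders (the Table mentioned above is not reproduced here), so only the structure of the calculation is meaningful.

# Least-squares fit of a linear standard curve (initial rate versus
# concentration) and its inversion for an unknown sample.
calib_conc = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]        # g/l, hypothetical standards
calib_rate = [0.09, 0.18, 0.37, 0.55, 0.72, 0.91]  # d[F-]/dt at t = 30 sec, hypothetical

n = len(calib_conc)
mean_x = sum(calib_conc) / n
mean_y = sum(calib_rate) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(calib_conc, calib_rate)) / \
        sum((x - mean_x) ** 2 for x in calib_conc)
intercept = mean_y - slope * mean_x

def concentration_from_rate(rate):
    # Invert the fitted line to estimate the concentration of an unknown sample.
    return (rate - intercept) / slope

print(concentration_from_rate(0.40))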
(i) Solution of p-fluoroaniline S_CF: This solution was prepared as described in the previous Examples and consisted in a 100 ml solution of 0.3 M acetate buffer containing 0.5 g of p-fluoroaniline and 1 µmole of NaF (10⁻⁵ M F- solution).
(ii) Hydrogen peroxide stock: This was a 10⁻² M solution, that is, containing 0.34 g of pure H2O2 in 1 liter of solution.
(iii) Peroxidase standard solutions S_E: Such solutions were prepared which contained, respectively, 1.2, 2.4, 3.6, 4.8 and 6.0 U of enzyme per ml.
This Example reports the measuring of peroxidase in the presence of an excess of hydrogen peroxide.
Then, the analysis was carried out as in the previous Examples with 4.8 ml of the S_CF solution and 0.1 ml of the H2O2 solution; to this were added, under stirring, at 20°C (± 0.1°C), 0.1 ml of one of the S_E solutions. Zero time for the measurement was the moment of the addition of the peroxidase. Then, the kinetic development of F- was recorded and the values at t = 30 sec were used as previously as the characteristic parameters. Fig. 5a shows the plot of time (sec) versus [F―] (µmoles/l) calculated from the mV values as described previously. Fig. 5b shows the slope d[F―]/dt plotted versus peroxidase concentration. The chart of Fig. 5b was thereafter used for measuring unknown amounts of peroxidase in immunology tests.
It is useful to note that in the present test, S_CF solutions containing as low as 0.05 g/l of p-fluoroaniline worked as well as those containing 5 g/l.
It is also useful to remark that 0.1 ml (test quantity) of the H2O2 stock solution contains actually 5 µmoles of H2O2. Since, according to accepted standards, 1 unit of peroxidase consumes 1 µmole of H2O2/min, it is easily seen that in the case of the highest peroxidase concentration tested above (0.6 U of peroxidase involved) the amount of H2O2 consumed after 30 sec is about 1/20 of the available quantity. Therefore, the change in concentration can be considered negligible.
(i) Solution of p-fluoroaniline and glucose S_CF: This solution was prepared as usual by dissolving 5 mg of p-fluoroaniline, 3 g of glucose and 1 ml of 10⁻³ M NaF solution in 0.3 M, pH 5.3, buffer (Tisals) and making up to 100 ml.
(ii) Peroxidase solution: This solution contained 60 U/ml.
(iii) Glucose oxidase standard solutions S_E: A set of 5 solutions was prepared which contained 0.1, 0.25, 0.5, 1 and 2 U/ml, respectively.
This Example refers to the measurement of glucose oxidase in a medium containing an excess of glucose.
The analysis was conducted as before. 4.8 ml of S_CF and 0.1 ml of the peroxidase solution (6 U) were agitated magnetically in a polyethylene beaker and, at t0, 0.1 ml of the S_E solution was added. The response of the F- sensitive electrode (which had been calibrated earlier with known NaF solutions) was recorded for one minute and a graph of the rate curves (time versus [F―]) was prepared. This is shown in Fig. 6a. Thereafter, the slope at 30 sec of each curve was plotted against glucose oxidase concentration to give the chart of Fig. 6b. This chart was then used successfully to compute the results from further measurements of the same kind with samples containing unknown amounts of glucose oxidase.
It should be kept in mind that the above Examples have not yet been optimized and that the above directions are possibly not the best way of carrying out the invention.
As stated hereinbefore, the invention can serve to achieve many analytical measurements of many substrates capable of being enzymatically (or otherwise catalytically) oxidized by oxygen (or air) with the quantitative production of hydrogen peroxide. The following Table summarizes some of the possibilities in this field and provides a list of enzymes, the substrates they may act upon as oxidation catalysts and the products resulting from said oxidation.
It should also be noted that other organic fluoro-compounds can be used in the present invention which may split their C-F bond with H2O2 in the presence of a peroxidase, the rate of this splitting being in proportion to the amount of H2O2. Also the selection of the other reagents involved in the present analysis (buffers, surfactants, preservative, etc.) and the relative quantities of such ingredients can be varied according to the needs. Adaptations and modifications can thus be made to the present procedure by any skilled technician.
As a modification of the present rate reading technique, it is also possible to contemplate using, as a key parameter, the equilibrium potential reached for each sample after a given reaction time. This time shall be determined experimentally as the most suitable for reproducible results. This technique will therefore be based on a static type of determination related to the reaching of a fixed equilibrium of the enzymatic reaction, this being according to the well-known methods called "end point determination". For carrying out such modification, the equilibrium potential V_e will be measured with each sample and the relationship between V_e and the corresponding concentration of the samples will be established. Such modification is well suited for being adapted to automatic measurement systems. Thus, such a system would comprise a memory for storing the above-mentioned relationship data, an automatic circulating and mixing device for taking up the samples and contacting them with the reagents, an electrometric cell for measuring the potentials and a computing unit for comparing the measured values with the stored data, thus directly and automatically providing the desired results.
Moreover, the general procedure described in the present Examples, and particularly in regard to Figs. 4a and 4b, could also be automated since calibrating rate data can also be processed electronically and stored in a memory (e.g. the dV/dt or d[F-]/dt parameters), after which the actual rate measurement for unknown sample could be computerized against said stored data in order to automatically furnish the desired analytical results.
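By way of illustration only, the comparison of a measured value against stored calibration data could be as simple as a linear interpolation in a look-up table; the Python sketch below uses invented numbers and function names and is not part of the patent.

# Stored calibration table: measured parameter (equilibrium potential V_e,
# or a rate value) paired with the known concentration of each standard.
CALIBRATION = [(0.10, 0.5), (0.19, 1.0), (0.37, 2.0), (0.56, 3.0), (0.75, 4.0)]

def concentration_from_measurement(measured):
    # Linear interpolation between the two neighbouring calibration points.
    points = sorted(CALIBRATION)
    if measured <= points[0][0]:
        return points[0][1]
    if measured >= points[-1][0]:
        return points[-1][1]
    for (x0, c0), (x1, c1) in zip(points, points[1:]):
        if x0 <= measured <= x1:
            return c0 + (c1 - c0) * (measured - x0) / (x1 - x0)

print(concentration_from_measurement(0.45))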
Further Examples Nos 7 to 17 illustrate the use of fluoro-compounds other than p-fluoroaniline. The behavior of some of these compounds (7 to 10) in the analysis of glucose in the presence of glucose oxidase and peroxidase is also illustrated in correspondingly numbered Figs. 7a to 10a and 7b to 10b. The Figures with subscript a all concern rate curves obtained by measuring 0.1 ml samples containing 1 g/l, 3 g/l and 5 g/l of glucose, respectively, in the presence of 6 U of peroxidase, 30 U of glucose oxidase and 4.8 mg of the fluoro-compound, the analysis being carried out in a pH 5.3 acetate buffer in the presence of 10⁻⁵ M NaF, whereas the figures with subscript b concern the dV/dt data at 30 sec plotted against glucose concentration. Also, such fluoro-compounds are listed in the following table which indicates, all analytical conditions being the same, the relative behavior of such compounds compared to that of p-fluoroaniline. Therefore, the results recorded in the table show, among such compounds, which are suitable (+) for the measurements according to the invention, which are not (-) and which are questionable (+-).
Brandy Luczywo has spent over nine (9) years in senior management with a Fortune 50 company and has been a driving force for First 2 Aid, LLC since its inception. During her time in management, she has managed the relationships between staff and clients as well as the company’s relationships with vendors and the sales team. She continues to work alongside Chris during many EMS stand-by events and continues to volunteer as an Admin Lead for the Central Florida Medical Disaster Coalition. Most recently she led the administrative team responsible for a 60-bed field hospital during the Electric Daisy Carnival in Orlando.
Chris LUCZYWO
COO
For the past three (3) years, Christopher Luczywo has been the face of First 2 Aid, LLC, working daily as its CEO and onsite manager. Prior to the inception of First 2 Aid, LLC, Chris was a Firefighter / Paramedic for over a decade, leaving the fire service to become a critical care flight paramedic. During his time as a medic, Chris has been responsible for the care of thousands of patients and has managed both ambulance and flight crews throughout the world. Chris has advanced training in Critical Paramedicine, Incident Command, Safety Planning, and Mass Casualty Incidents, to name a few.
SANJAY PAREKH
CEO
Sanjay Parekh is the Founder of Amson Consulting, a Management Consulting firm focused on helping small & medium companies that want to grow, optimize their business performance, maximize growth, create value & increase profits by developing effective growth strategies & support them with its implementation.
Sanjay has 30 years of successful global business leadership experience, which includes Strategic Planning, Sales and Marketing, Team-Building and Global Manufacturing. Sanjay’s background ranges from his work as a Director in a family-owned business to heading a Multinational Corporation in Asia. | https://first2aidems.com/our-team/ |
What do researchers think led to the gradual decline and final defeat of the Khmer of Angkor empire?
The cause of the Angkor empire’s demise in the early 15th century long remained a mystery. But researchers have now shown that intense monsoon rains that followed a prolonged drought in the region caused widespread damage to the city’s infrastructure, leading to its collapse.
Who defeated the Khmer empire?
Suryavarman deposed the Cham king in 1144 and annexed Champa in the following year. The Chams, under a new leader, King Jaya Harivarman I, defeated Khmer troops in a decisive battle at Chakling, near Phan Rang, in southern Vietnam.
How old is Khmer Empire?
The Khmer Empire was established by the early 9th century. Sources refer here to a mythical initiation and consecration ceremony to claim political legitimacy by founder Jayavarman II at Mount Kulen (Mount Mahendra) in 802 CE.
How did the change in religion contribute to the Khmer empire’s decline?
Some historians believe that the mass conversion to Theravada Buddhism—by undermining the Hindu and Mahayana Buddhist institutions underpinning the state and by encouraging through its doctrines a more-individualistic attitude among believers—contributed to the decline and gradual abandonment of Angkor, which certainly …
How long did the Khmer empire last?
The Khmer empire was a powerful state in South East Asia, formed by people of the same name, lasting from 802 CE to 1431 CE. At its peak, the empire covered much of what today is Cambodia, Thailand, Laos, and southern Vietnam.
What religion was the Khmer empire?
When the Khmer Empire came to power in the ninth century AD, Hinduism was the official religion. It had been the case in that part of the world for generations. Rulers of the great empire worshipped Hindu gods such as Vishnu and Shiva, and dedicated the 12th-century temple of Angkor Wat to these beliefs. | https://vietnamcarrentaldeals.com/sightseeing/what-do-researchers-think-led-to-the-gradual-decline-and-final-defeat-of-the-khmer-or-angkor-empire.html |
Lean Six Sigma (LSS) is a combination of two quality improvement concepts. Lean focuses on eliminating waste and improving work flow, while Six Sigma focuses on eliminating unnecessary variation. Combined, LSS encourages sustainable, long-lasting improvements without forcing organizations to choose between quality and financial savings. LSS projects often result in positive financial impact as well as improved quality. Projects focus on data and on establishing measurement systems to define success and provide a way to track future performance.
LSS uses a five-phased approach for finding permanent solutions to difficult business problems. The phases are: Define, Measure, Analyze, Improve, and Control (DMAIC). By following the DMAIC road map, difficult problems are solved in a way that ensures long-lasting sustainability.
To enhance the life, health, and safety of our community, RiverStone Health uses Lean Six Sigma for process improvement.
Since 2013, Organizational Innovation, RiverStone Health’s LSS program, has used LSS methods and tools to find creative ways to optimize value and service.
Its goals are to:
- Improve our patient/customer’s experience
- Improve the working environment for our staff
- Optimize our available resources
LSS relies on the martial arts system of “belts” to describe levels of training and competence. Our Organizational Innovation structure includes two full-time Black Belts as well as staff members who devote part of their time to LSS projects and have varying levels of training. With the support of Black Belts, staff members who are trained in LSS identify waste within their work area. Waste is defined as anything that does not add value for our customers. Waste includes steps that do not change the information or product; are not done right the first time; or things the customer does not care about or is not willing to pay for.
Projects are selected when they:
- Support the mission, vision and values of RiverStone Health
- Improve services to our patients and clients or benefit our staff
- Have the potential of creating a positive financial impact for the organization.
Organizational Innovation Successes
Project: Increasing the number of uninsured/low income women & men receiving cancer screening
Based on FY 2014 outcomes, the Montana Cancer Screening Program at RiverStone Health fell short of meeting its goal of screening 925 people for breast, cervical and colorectal cancers during the designated grant year. Using LSS methods, the team gathered data to determine a baseline, completed process mapping, and used tools to understand where sources of waste existed. Then they worked to improve the process. Since the project’s completion, the cancer screening program has been on track to exceed screening goals.
Project: Decreasing first-year turnover
First year staff turnover is very expensive for healthcare organizations. Costs range from $10,000 to $20,000 for each first-year employee who leaves an organization. Understanding the impact of first-year turnover, RiverStone Health Hospice Services wanted to improve the orientation process and improve mentoring of new staff members. The team defined the problem by using focus groups. They measured and analyzed the data, and implemented creative ideas to improve staff retention. Within one year of the project’s implementation, first-year turnover for Hospice Services decreased by 80 percent. The success has been sustained for two years since the completion of the project. | https://riverstonehealth.org/our-organization/organizational-innovation/ |
The very different subjects of Bayesian data analysis and the foundations of quantum mechanics are related to each other by a common definition of probability based on logic. The foundations of Bayesian statistics are based on this approach and some additional assumptions. At the same time, in the framework of Relational Quantum Mechanics, defining probability through logic makes it possible to reduce the number of postulates necessary for the formulation of quantum mechanics from three to two.
Nested_fit : a data analysis program based on Bayesian statistics
Nested_fit is a program based on Bayesian statistics [1–3]. It provides not only the usual outputs of standard fitting programs based on maximization of the likelihood function or minimization of the chi-square, but also the complete probability distribution for each parameter and the joint probability of pairs of parameters. More importantly, it provides the Bayesian evidence, a quantity required to compare different models (i.e. hypotheses, such as the presence or absence of additional peaks or the choice of the peak shape). In the case of several equiprobable models, Nested_fit outputs can be used to extract the probability distribution of a parameter common to the different models (e.g. the position of a main spectral component) without having to single out one spectrum model. The evidence calculation is based on the nested sampling algorithm presented in the literature (Sivia and J. Skilling, Data Analysis: A Bayesian Tutorial, 2006, Oxford University Press), which reduces an n-dimensional integral (the integral of the likelihood function over the n-parameter space) to a one-dimensional integral. The Nested_fit code is written in Fortran90 with complementary Python routines for visualizing the output results and for automatic analysis of data. Recently, a machine-learning algorithm for cluster analysis (mean shift) has been implemented to treat difficult cases where several local maxima of the likelihood function are present. It has been used for the analysis of spectra of different kinds: X-ray emission spectra from heavy highly charged ions and pionic atoms [1,4–6], photoemission spectra from nanoparticles [7,8] and nuclear decays.
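To make the idea concrete, here is a minimal, illustrative Python sketch of the nested sampling scheme described above. It is not the actual Fortran90 Nested_fit implementation, and the names log_likelihood and prior_sample are placeholders for user-supplied functions.

import numpy as np

def nested_sampling(log_likelihood, prior_sample, n_live=400, n_iter=4000):
    # Initial set of "live" points drawn from the prior.
    live = [prior_sample() for _ in range(n_live)]
    live_logl = np.array([log_likelihood(p) for p in live])
    log_z = -np.inf    # running estimate of the log-evidence
    log_x_prev = 0.0   # log of the remaining prior volume (starts at 1)
    for i in range(n_iter):
        worst = int(np.argmin(live_logl))
        # The prior volume shrinks roughly geometrically: X_i ~ exp(-i/n_live).
        log_x = -(i + 1) / n_live
        # Weight of the discarded likelihood shell: w_i = X_{i-1} - X_i.
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        # Accumulate Z += L_worst * w_i, working in log space.
        log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
        # Replace the worst point with a new prior draw satisfying L > L_worst.
        # (Real implementations explore the constrained prior with MCMC or
        # slice sampling; plain rejection sampling is used here for brevity.)
        while True:
            candidate = prior_sample()
            cand_logl = log_likelihood(candidate)
            if cand_logl > live_logl[worst]:
                break
        live[worst] = candidate
        live_logl[worst] = cand_logl
        log_x_prev = log_x
    # The residual contribution of the final live points is omitted here.
    return log_z

Running such a sketch for two competing models gives two log-evidence values, and their difference (the logarithm of the Bayes factor) is what allows the model comparison and the weighting of equiprobable models mentioned above.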
Figure 1. Left: Profile curves corresponding to the likelihood maxima of the models with different numbers of peaks. Right: Probability distribution of the main peak position from the single probabilities of the models (figures from Ref. ).
Publications
M. Trassinelli, Bayesian data analysis tools for atomic physics, Nucl. Instrum. Methods B 408, 301-312 (2017)
M. Trassinelli, The Nested_fit Data Analysis Program, Proceedings 33, 14 (2019)
M. Trassinelli and P. Ciccodicola, Mean Shift Cluster Recognition Method Implementation in the Nested Sampling Algorithm, Entropy 22, 185 (2020)
M. Trassinelli, D.F. Anagnostopoulos, G. Borchert, A. Dax, J.P. Egger, D. Gotta, M. Hennebach, P. Indelicato, Y.W. Liu, B. Manil, N. Nelms, L.M. Simons, and A. Wells, Measurement of the charged pion mass using X-ray spectroscopy of exotic atoms, Phys. Lett. B 759, 583-588 (2016)
M. Trassinelli, D.F. Anagnostopoulos, G. Borchert, A. Dax, J.-P. Egger, D. Gotta, M. Hennebach, P. Indelicato, Y.-W. Liu, B. Manil, N. Nelms, L.M. Simons, and A. Wells, Measurement of the charged pion mass using a low-density target of light atoms, EPJ web conf. 130, 01022 (2016)
J. Machado, G. Bian, N. Paul, M. Trassinelli, P. Amaro, M. Guerra, C.I. Szabo, A. Gumberidze, J.M. Isac, J.P. Santos, J.P. Desclaux and P. Indelicato, Reference-free measurements of the 1s 2s 2p 2PO1/2,3/2 → 1s2 2s 2S1/2 and 1s 2s 2p 4P5/2 → 1s2 2s 2S1/2 transition energies and widths in lithiumlike sulfur and argon ions, accepted for publication in Phys. Rev. A (2020)
I. Papagiannouli, M. Patanen, V. Blanchet, J.D. Bozek, M. de Anda Villa, M. Huttula, E. Kokkonen, E. Lamour, E. Mevel, E. Pelimanni, A. Scalabre, M. Trassinelli, D.M. Bassani, A. Lévy, and J. Gaudin, Depth Profiling of the Chemical Composition of Free-Standing Carbon Dots Using X-ray Photoelectron Spectroscopy, The Journal of Physical Chemistry C 122, 14889-14897 (2018)
M. De Anda Villa, J. Gaudin, D. Amans, F. Boudjada, J. Bozek, R. Evaristo Grisenti, E. Lamour, G. Laurens, S. Macé, C. Nicolas, I. Papagiannouli, M. Patanen, C. Prigent, E. Robert, S. Steydli, M. Trassinelli, D. Vernhet, and A. Lévy, Assessing the Surface Oxidation State of Free-Standing Gold Nanoparticles Produced by Laser Ablation, Langmuir 35, 11859-11871 (2019)
F.C. Ozturk, B. Akkus, D. Atanasov et al., New test of modulated electron capture decay of hydrogen-like 142Pm ions : Precision measurement of purely exponential decay, Phys. Lett. B 797, 134800 (2019)
Born’s rule (and Quantum Mechanics formalism) from two postulates
Relational Quantum Mechanics (RQM) is an approach to the foundations of Quantum Mechanics based on only three postulates. Initially formulated by Rovelli in 1996, RQM starts from the limited amount of information that can be extracted from the interaction of different systems, with a third postulate introduced to define the properties of the probability function. We demonstrate that, from a rigorous definition of the conditional probability for the possible outcomes of different measurements, the third postulate is unnecessary and Born's rule naturally emerges from the first two postulates by applying Gleason's theorem. We demonstrate in addition that the probability function is uniquely defined for classical and quantum phenomena. The presence or absence of interference terms is, in fact, related to the correct formulation of the conditional probability, where the distributive property on its arguments cannot be taken for granted.
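For reference, the standard statement invoked here can be written as follows (this is the textbook formulation, not a reproduction of the paper's own derivation). Gleason's theorem states that, for a Hilbert space of dimension at least three, any probability measure over projectors $E$ must take the form

P(E) = \mathrm{Tr}(\rho E)

for some density operator $\rho$. For a pure state $\rho = |\psi\rangle\langle\psi|$ and a projector $E_a = |a\rangle\langle a|$ onto a measurement outcome, this reduces to Born's rule:

P(a \mid \psi) = \langle\psi| E_a |\psi\rangle = |\langle a|\psi\rangle|^2 .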
Investigation of pairs of unrelated persons mismatched for a particular HLA‐DQB1 or ‐DPB1 gene on the induction of cytotoxic T lymphocytes (CTL) revealed that HLA‐DQ and HLA‐DP antigens provided a slight proliferative stimulus which was, however, sufficient for the generation of CTL. Monomorphic anti‐DQ and anti‐DP monoclonal antibodies abrogated the induction of cytotoxic response. The results indicate that the HLA‐DQ and HLA‐DP antigens play a similar role to HLA‐DR specificities in clinical bone marrow transplantation. | https://kyushu-u.pure.elsevier.com/en/publications/the-role-of-hla-class-ii-antigens-in-the-induction-of-cytotoxic-t |
There are many challenging and expensive winter road maintenance decision problems that can be addressed using operations research techniques. A key operation is the spreading of chemicals and abrasives on the road network, which is conducted on a regular basis in almost all rural and urban regions that experience significant snowfall or roadway icing. The importance of winter road maintenance operations is obvious from the magnitude of the expenditures required to conduct them, as well as from the indirect costs of lost productivity due to decreased mobility and of the effects of chemicals (especially salt) and abrasives on infrastructure, vehicles and the environment. In the US alone, 70% of the population and 74% of the roads are in snowy regions, and state and local government agencies spend over US $2.3 billion per year for snow and ice control activities (Federal Highway Administration [FHWA], 2010; Pisano, Goodwin, & Stern, 2002). Indirect costs (e.g., for environmental degradation, economic losses and mobility reductions) are thought to be several times larger; for example, the costs of weather-related freight delays in the US have been estimated at US $3.4 billion per year (Nixon, 2009).
Recent developments in winter road maintenance technologies and operations improve efficiency, reduce resource (materials, equipment and personnel) usage, and minimize environmental impacts (Shi et al., 2006; Transportation Research Board [TRB], 2005, 2008; Venner Consulting and Parsons Brinkerhoff, 2004). These developments include use of alternative deicing materials, anti-icing methods, improved snow removal equipment, more accurate spreaders, better weather forecasting models and services, road weather information systems, vehicle-based environmental and pavement sensors, etc. These new technologies, and their growing use by state and local government agencies, have improved the effectiveness and efficiency of winter maintenance operations, benefiting government agencies, users, and the general public.
While new winter road maintenance technologies are being developed and deployed on a broad basis, implementations of optimization models for winter road maintenance vehicle routing remain very limited. Most agencies continue to design vehicle routes based on manual approaches derived from field experiences and most agencies rely on static weather forecasts (Fu, Trudel, & Kim, 2009; Perrier, Langevin, & Campbell, 2007a, 2007b). As Handa, Chapman, and Yao (2005) note, “In practice [route] optimization has traditionally been a manual task and is heavily reliant on local knowledge and experience” (p. 158). The limited deployment of optimization models for winter road maintenance vehicle routing is especially surprising given the documented successes in other areas of arc routing, perhaps most notably for waste management (Sahoo, Kim, Kim, Kraas, & Popov, 2005). Thus, winter road maintenance vehicle routing optimization would appear to offer the promise of significant cost savings, along with a reduction in negative environmental and societal impacts. | https://www.igi-global.com/chapter/vehicle-routing-models-algorithms-winter/58515 |
The majority of athletes use some process to warm-up. A warm-up is usually performed before participating in technical sports or exercise and this process prepares an athlete for performance. A good warm-up has been thought to increase neuromuscular performance as well as mental performance and motivation. Perhaps the biggest reason athletes warm-up is to prevent injury. Although most athletes incorporate some sort of warm-up routine, there has been no real confirmation on the effectiveness of various methods. A recent study was conducted to measure the impact of including functional exercises for the trunk muscles in a warm-up program prior to sprint performance.1
The study consisted of 121 elite youth soccer players (ranging from 13 to 18 years old) from two German professional sport clubs that were divided into two groups. One group performed a normal soccer warm-up first and then, four days later, performed the same warm-up supplemented with functional exercises for the trunk muscles. The normal soccer warm-up consisted of nonspecific running, coordination exercises, stretching, and acceleration runs. The second group performed those warm-ups in the reverse order.
A scaffold is an elevated, temporary work platform used by workers to elevate themselves, materials, and equipment. In the United States, roughly 2.3 million construction workers (around 65% of the construction industry) work on scaffolds.
If you haven’t used a scaffold, you’ve probably seen the complicated system of metal or aluminum pipes from a distance and been thankful you weren’t balancing on it. In reality, scaffolds can be safe when used properly.
Unfortunately, they’re not always used properly — and when mistakes are made the consequences can be deadly.
Scaffolding accident statistics
According to the United States Bureau of Labor Statistics (BLS), scaffold-related accidents result in roughly 60 deaths and 4,500 injuries every year. Falls from scaffolds account for roughly 25% of fatal falls from all working surfaces.
In all, employers lose almost $90 million in workdays lost as a result of scaffold accidents and injuries every year.
These statistics show how important it is — for both workers and employers — to take workplace safety seriously.
Common causes of scaffold accidents
According to a recent BLS study, 72% of scaffold accidents can be attributed to 1 of the following 3 causes:
- Scaffold support or planking gives way due to defective equipment or improper assembly
- Slipping or tripping while on a scaffold due to factors such as slippery surfaces or lack of guardrails
- Falling objects hitting either a worker on a scaffold or those below
As for the other 28%, scaffold accidents can be caused by:
- Electrocution as a result of scaffolds and equipment being too close to power or utility lines
- Environmental conditions, such as wind, rain, and the presence of hazardous substances
- Inadequate fall protection
- Collapse of scaffold due to overloading
Common scaffold accident injuries
Due to the fact that scaffolds are used to get access to heights that are otherwise too high to reach, most scaffolding accidents result in serious injuries or death. The most common scaffold injuries include:
- Traumatic brain injuries
- Spinal cord injuries
- Amputations
- Broken bones
- Lacerations
Of course, it’s not just workers who can be injured in scaffolding accidents. Pedestrians can be seriously hurt or killed when scaffolding collapses.
Scaffolding safety standards
The Occupational Safety and Health Administration (OSHA) publishes regulations on scaffold safety, including how to construct, maintain, and use scaffolds.
In addition, many private organizations publish scaffold safety standards. Violations of these standards can be used as evidence of negligence in personal injury cases resulting from scaffold accidents.
Preventing scaffold accidents
Despite how dangerous scaffolding might look, serious injuries and fatal falls can be prevented. The National Institute for Occupational Safety and Health (NIOSH) recommends taking the following safety measures:
- Comply with current and proposed OSHA regulations for working with scaffolds.
- Assure that the design and construction of scaffolds conform with OSHA requirements.
- Keep scaffold suspension ropes and body belt or harness system drop lines (lifelines) protected from hot or corrosive substances.
- Wear personal fall protection equipment.
- Inspect all scaffolds, scaffold components, and personal fall protection equipment before each use.
- Use structurally sound portions of buildings or other structures to anchor drop lines for body belt or harness systems and tiebacks for suspension scaffold support devices.
- Follow scaffold manufacturers’ guidance regarding the assembly, rigging, and use of scaffolds.
What to do if you’re injured in a scaffold accident
If you’ve been injured in a scaffold accident — whether as a worker or a bystander — you’ll want to consider filing an injury claim so that you can be reimbursed for any damages you suffered. Depending on the circumstances of the accident, you have 2 main options:
- Workers’ compensation claim. Workers’ compensation is a type of insurance that provides benefits to employees injured during the course of employment. Though workers’ compensation laws differ from state to state, the vast majority of employers are required to carry workers’ compensation insurance. If you’re a construction worker and you slip on a scaffold and injure yourself, a workers’ compensation claim is your most likely recourse. One of the major benefits of a workers’ compensation claim (as opposed to a personal injury lawsuit) is that workers’ compensation is a no-fault insurance system. This means you don’t have to prove that anyone was at fault for your injury in order to receive compensation.
- Personal injury claim. If the scaffold you used was defective, and the defect caused your accident, then you may be able to sue the manufacturer of the equipment. This is a form of personal injury lawsuit known as a product liability lawsuit. A personal injury lawsuit would also be appropriate if you’re a bystander injured as a result of a scaffold accident. In such a case, you would have to show that the company (or individual) using the scaffold was negligent —in other words that they were careless and their carelessness caused your injuries.
The specific damages you can recover will depend on the nature of your accident, as well as the laws of the state in which you file your lawsuit or workers’ compensation claim. However, in most cases, you’ll be able to recover economic damages (medical expenses, lost wages, etc.) and non-economic damages (pain and suffering, loss of consortium, etc.).
If you or a loved one has been injured in a scaffold accident, consider using our free online directory to locate an attorney in your area.
See our guide Choosing a personal injury attorney. | https://www.enjuris.com/construction-accidents/scaffold-accidents-injuries-deaths/ |
Part of: Health and social care
ISBN: 9781785440748
The document provides a framework for Allied Health Professionals in the implementation of standards to support person-centred musculoskeletal pathways.
Glossary
Advanced Nurse Practitioner
A registered nurse who has acquired the expert knowledge base, complex decision-making skills and clinical competencies for expanded practice, the characteristics of which are shaped by the context and/or country in which s/he is credentialed to practice. A Master's degree is recommended for entry level.
Allied Health Professional Advanced Musculoskeletal Practitioners
Advanced practitioners are experienced professionals who have developed their skills and theoretical knowledge to a very high level which is supported by evidence. They perform a highly complex role and continually develop practice within Musculoskeletal Services.
Allied Health Professionals
Allied Health Professionals are a specific group of Health and Social Care professionals regulated by and registered with the Health and Care Professions Council. They include the following professions: Arts Therapies (including Art Therapy, Drama Therapy, Music Therapy), Diagnostic Radiography, Dietetics, Occupational Therapy, Orthoptics, Orthotics, Paramedics, Physiotherapy, Podiatry, Prosthetics, Speech and Language Therapy, Therapeutic Radiography.
Extended Scope Practitioners (ESPs)
Expert physiotherapy practitioners trained and competent to work in their specialised clinical area.
General Practitioners (GP)
General Practitioners (GP) are qualified medical practitioners who treat acute and chronic illnesses and provides preventive care and health education to patients.
Musculoskeletal Conditions
Musculoskeletal conditions include a diversity of complaints and diseases localised in joints, bones, cartilage, ligaments, tendons, tendon sheaths, bursae and muscles.
Healthcare Professional
In this document, "Healthcare Professional" refers to a medically trained doctor registered with the General Medical Council, an Allied Healthcare Professional registered with the Health and Care Professions Council, or a nurse registered with the Royal College of Nursing.
NHS 24
NHS 24 is the name of the national confidential health advice and information service provided by NHSScotland.
NHS Inform
NHS Inform provides a co-ordinated, single source of quality assured health and care information for the people of Scotland.
Occupational Therapists
Occupational therapists take a whole-person approach to both mental and physical health and wellbeing, enabling individuals to achieve their full potential. Occupational therapy provides practical support to enable people to facilitate recovery and overcome any barriers that prevent them from doing the activities (occupations) that matter to them. This helps to increase people's independence and satisfaction in all aspects of life.
Orthopaedic Surgeons
Orthopaedic surgeons provide both elective and trauma care. In trauma their work includes treating fractures following accident in the home, on the road, at sport and those related to falls in the elderly, often associated with osteoporosis. Their elective work includes treating patients with arthritis of bones and joints and the soft tissues, and congenital, hereditary, developmental and metabolic disorders that affect the musculoskeletal system. Surgeons are able to replace worn-out joints, repair torn ligaments, remove abnormal or damaged tissue and stiffen those joints that are severely damaged.
Orthotists
Orthotists design and fit orthoses (braces etc) which provide support to part of a patient's body to compensate for paralysed muscles, provide relief from pain or prevent physical deformities from progressing.
Pain Management
Pain management is a growing multidisciplinary specialty dedicated to treating acute, sub-acute, and chronic pain. The goal of pain management is to improve quality of life and help patients return to everyday activities without surgery.
Physiotherapists
Physiotherapists are concerned with human function and movement and maximising potential. Physiotherapy uses physical approaches to promote, maintain and restore physical, psychological and social wellbeing, taking account of variation in health states.
Podiatrists
Podiatrists assess, diagnose and treat foot and ankle pathologies to maintain and enhance locomotion function of the feet and legs, to alleviate pain, and to reduce the impact of disability. Specialist roles are developing in biomechanics/musculoskeletal care, surgical podiatry in the foot and rheumatology.
Radiology
Radiology provides diagnostic imaging services to assist doctors and other healthcare professionals in both diagnosis and deciding upon the best management of a patient's problems. When appropriate radiologists use minimally invasive methods to treat disease.
Rheumatology
Rheumatology is multidisciplinary branch of medicine that deals with the investigation, diagnosis and management of patients with arthritis and other musculoskeletal conditions. This incorporates over 200 disorders affecting joints, bones, muscles and soft tissues, including inflammatory arthritis and other systemic autoimmune disorders, vasculitis, soft tissue conditions, spinal pain and metabolic bone disease. A significant number of musculoskeletal conditions also affect other organ systems.
Self-Referral
A system of access that allows patients to refer themselves to an AHP service directly, without having to see or be prompted by another healthcare practitioner.
SIGN Guidelines
The Scottish Intercollegiate Guidelines Network (SIGN) develops evidence-based clinical practice guidelines for the National Health Service (NHS) in Scotland.
Suggested Self-Referral
A system of access that allows patients to refer themselves to an AHP service directly, having been prompted by another healthcare practitioner. | https://www.gov.scot/publications/allied-health-professional-musculoskeletal-pathway-minimum-standards-framework-action-2015-2016/pages/3/ |
Montana is the fourth largest U.S. state by area, behind Alaska, Texas and California, but with an average of just six people per square mile, it is one of the country’s least densely populated states. Although the name Montana is derived from the Spanish montaña (“mountain” or “mountainous region”), it has an average elevation of only 3,400 feet, the lowest among the Rocky Mountain states. Montana is home to the Little Bighorn Battlefield National Monument, which memorializes the historic 1876 battle between the Sioux tribe and U.S. Army, often referred to as “Custer’s Last Stand.” Yellowstone National Park, located in southern Montana and northern Wyoming, was the first national park established in the United States.
Carved by glaciers more than 10,000 years ago, Flathead Lake is the largest freshwater lake between the Mississippi River and the Pacific Ocean. It is 28 miles long, between 5 and 15 miles wide and encompasses nearly 200 square miles.
The National Bison Range was established in 1908 in western Montana to preserve wild bison from extinction. In addition to elk, deer, antelope, bears and other animals, roughly 500 bison live in the wildlife refuge.
The world’s first International Peace Park was established in 1932 when Glacier National Park in Montana and Waterton Lakes National Park in Alberta, Canada, were combined. In 1995, UNESCO listed the two parks as a joint World Heritage Site for their diverse and plentiful plant and wildlife species, and outstanding scenery.
The coldest temperature in the 48 contiguous states ever recorded was -70 degrees Fahrenheit in Rogers Pass on January 20, 1954. In January of 1972, Loma, Montana, broke the national record for the greatest temperature change within a 24-hour period by recording a 103-degree climb from -54 degrees Fahrenheit to 49 degrees Fahrenheit.
In 2000, 50 of Montana’s 56 counties were designated “frontier counties” by the National Center for Frontier Communities using a matrix that measures population density as well as distance and travel time to a service/market center. In 2010, Montana was home to an average of 6.8 people per square mile.
Eleven tribal nations live on seven Indian reservations in Montana. A twelfth tribe, the Little Shell Band of Chippewa, lives within the state without its own land.
Montana’s large gold and silver mines gave rise to its nickname, the Treasure State, and its state motto, “Oro y Plata” (Spanish for “Gold and Silver”). | https://www.history.com/topics/us-states/montana |
62,507 retail outlets listed in Paris: changes and trends.
A new survey of retail outlets in Paris was undertaken in March and April, 2017, at the instigation of the City of Paris, the Paris Chamber of Commerce and Industry and Apur. The study gives an overall picture of the state of play of the retail sector and how it has changed and is changing, both since 2014, date of the last survey, and long term, as the first surveys date from 2000. Apur has also developed three interactive maps, so that finding out about shops in Paris is more enjoyable.
In 2017, there were 62,507 shops and commercial services in Paris, which translates into a very high commercial density compared to that observed in provincial city centres and also compared to the 11 other Public Territorial Establishments which make up the Grand Paris Metropolis.
Transformations in Parisian society and its modes of consumption are reflected in the way retail outlets are evolving. High population density and the fact that people move around the city mostly on foot ensure the survival of a variety of small, local shops. Some shops have become more upmarket – a shift made possible by the rise in the inhabitants' spending power.
Technological changes are also affecting how commercial outlets evolve. The use of the internet for shopping, information and leisure is turning some customers away from newsagents, travel agencies, video clubs, photographic and electronic supply shops… These shops are in decline and are making room for new businesses.
Compared to the previous survey, new trends are emerging: small, traditional food shops are on the increase after a long period of decline; wholesalers are still declining fast; commercial services linked to well-being are continuing to grow; repair and recycling businesses are developing as a result of the crisis and the desire for a more sustainable society; and finally, the rate of vacant premises is rising slowly after a fairly marked decrease between 2011 and 2014. Vacant premises now account for 9.3% of total ground-floor premises, as against 9.1% in the previous period.
2017 data for commercial activity in Paris are available on Open Data. | https://www.apur.org/en/our-works/changes-commercial-activity-paris-2017-inventory-shops-and-2014-2017-developments |
Iris takes our responsibility to all employees in our supply chain seriously and expects all factories and suppliers to abide by the Fair Labor Association's Code of Conduct. In addition to abiding by this Code, all entities producing products for Iris are subject to inspections and financial audits to ensure best practices and that all local labor and safety laws are followed. For suppliers wholly or partially owned by Iris, the same policies apply with the enforcement of more frequent financial and safety inspections. For any supplier found in violation, Iris will cancel all existing and pending orders and promptly terminate all future business relationships with the guilty party and its subsidiaries.
Iris is also mindful of the environmental impact the apparel industry has on our planet. From the conception of our designs, we are prioritizing sustainable raw materials and methods. We are also taking steps within our offices and factories to reduce energy and waste. We believe that every little action taken today to reduce our environmental footprint leads to bigger actions and more impactful results tomorrow. | https://irisbasic.com/pages/social-responsibility |
ECSSA is a professional society representing pre-hospital emergency care workers in South Africa, pursuing the wellbeing of patients in this environment and the professional interests of those caring for them.
Who and what does ECSSA represent?
ECSSA advocates for the advancement of pre-hospital emergency care in South Africa and intends to serve as a representative organisation for the profession. The Society strives to achieve this ideal by addressing the needs of the profession through engagement with stakeholders at various levels in both the public and private sectors. ECSSA also has a very important role to play in the scientific and continuous professional development of pre-hospital emergency care by collaborating with other national and multi-national professional bodies.
ECSSA is not a trade union created to protect and enforce the employment rights of Emergency Medical Services personnel. Although the Society certainly has an indirect interest in labour law and employment conditions, norms and standards as they apply to this particular profession, the direct objective is not to provide relief or representation in cases where any such rights or norms have been violated.
ECSSA is not a lobby group or committee in pursuit of a single particular objective or cause. The Society has a long-term strategic vision giving effect to a variety of interests having a bearing on pre-hospital emergency care.
ECSSA is not a recruitment agency. Although we will in future be advertising career vacancies within the EMS sector, our goal is not to place individuals and serve in the capacity of a recruitment agent.
ECSSA is not a statutory authority like the Health Professions Council of South Africa. The role of the Society is not to enforce law or standards, nor is it to regulate the Profession but rather to advance and represent pre-hospital emergency care.
To ensure the well being, safety and proper medical treatment of the patient.
To identify, communicate and promote the general wellness of the patient.
To identify, communicate, and promote the general wellness, professional interests and the honour of emergency care providers.
To work with the medical profession and other professional groups in furthering emergency care.
To encourage research in the field of emergency care in South Africa.
To co-operate, in partnership with the medical profession, government, employers, practitioners and representatives of education, in the delineation of minimum standards for and accreditation of educational programmes for emergency care providers.
To maintain a secured central registry of members meeting the defined competency standards (whether qualified by virtue of completion of an accredited program, by certificate or examination or other means specified by the Society).
To establish or have representations on bulletins, newsletters and professional journals and to establish a Code of Ethics as required to assist professional dialogue and the continuing professional development of members; to establish or co-operate in conferences for the same purpose.
To assist in the promotion of measures designed to improve standards of emergency care in the interest of the public.
To promote the interests of the Society and to advocate on its behalf both nationally and internationally.
The emblem for the Emergency Care Society of South Africa is a Baobab tree with a torch in the background and a star of life. The Baobab tree is a tropical African tree, reaching up to 25 meters tall and known to live for several thousands of years. In the wet months water is stored in its thick, corky and fire-resistant trunk for the dry months ahead. The Baobab symbolises strength, endurance and life. The star of life has a well-known association with pre-hospital emergency care, and the torch represents knowledge and light, bringing all pre-hospital emergency care workers together under one society. | http://ecssa.org.za/about.aspx |
There was no earthly morality associated with early religion. The point was to placate the spirits so you were guaranteed a good place in the afterlife. Virtually all early religions centered on death and the things that you had to do in order to go into the sky to become a respected and loved spirit. By the time of the earliest dynastic Egyptian civilization, after-death survival was only guaranteed for the nobility, and like their ancient counterparts, cultural “morality” was limited only to preferences of the pharaohs. Almost everything about their religion was about the various gods you met when you died, and what you had to do when you met them. Burial rites, as most of us know, were of immense importance to the Egyptians.
The first known code of secular law was written in Babylon by its king, Hammurabi, in about 1750 BC. This code was inscribed on a basalt stele discovered in 1901 in present-day Iran. The code of Hammurabi was widely known during Zoroaster’s lifetime. Zoroaster turned Hammurabi’s secular laws into religious laws. Thus Zoroastrianism was the first widespread religion to have a unified code of morality that was said to have been received through divine revelation. The divine origin of this moral code implied that a person’s earthly behavior had a bearing on his or her spiritual future.
The insertion of a god-inspired personal morality into religion had a huge effect on the emerging Persian civilization and the many other civilizations that were spawned in the middle east after Zoroaster. A mandated personal morality had the power to modify people’s behavior in the absence of a secular watchdog. If a person violated the moral code, he or she would be punished for it after their death even if no one ever discovered their misbehavior. It had an added benefit as well. The secular authorities could now use the religion to bolster their authority, as well as to maintain order within the society.
Today, nearly all civilizations are bound together by commonly held moralities. Morality may best be defined as an internalized set of values that form the foundations of a person’s habitual behavior and places firm limits on its boundaries. As long as the majority of the citizens maintain these moral boundaries, the civilization generally remains stable because most of its citizens have an agreed upon standard of right and wrong. The underlying religious foundations (churches, mosques, synagogues, etc.) of those civilizations not only set the moral standards, but also remain as institutions that maintain those moral standards. Moral and religious standards can evolve over time, but they cannot be entirely abandoned, especially in a democracy in which people are expected to use their free will in a responsible manner.
All great civilizations, however, come to an end. As large swaths of citizens abandon a civilization’s foundational religion, their children are no longer schooled in what were once the common national moral standards. As they grow to adulthood, each one begins to set his or her own individual behavioral standards and eventually, the society begins to splinter into separate “tribes”, each one entirely convinced of the righteousness of its own causes. What was once a stable and comfortable civilization begins a descent into economic chaos followed by anarchy. | https://thestructureofheaven.com/the-birth-of-morality/ |
MARYLOU MCCORMACK: Can you tell me about yourself and how you came to be at the Universal Service Fund (USF)?
HAARIS M. CHAUDHRY: I started my career with Citibank New York; I also worked with ABN-Amro in Pakistan and Barclays Capital in Dubai. When I moved back to Pakistan in 2012, USF was not on my horizon, although fintech was always an area of interest. When I came across the position of Head of Finance at USF, I found their scope of work both interesting and close to my heart; in other words, providing telecommunications services to marginalised communities, connecting them with the rest of the world and bringing convenience to their lives. Technology has always been a passion for me, so the job was interesting, and then last year I was made CEO of USF.
MM: What is the USF and what is its core objective?
HMC: USF was initiated in 2006 by the International Telecommunications Union (ITU), to create a fund (contributed to by licenced telecom operators), aimed at providing telecom infrastructure to rural populations not covered by telecom services due to their not being commercially viable. The fund works through a reverse auction process whereby the telecom operator with the lowest bid wins the auction.
MM: How do you define ‘rural areas’ in Pakistan for your scope of work?
HMC: We have two categories: unserved and underserved. Unserved refers to areas where there is a population of 100 and above and where there are no telecommunications services. Underserved are areas that have coverage (maybe it is patchy) but do not have data coverage. Currently, we are providing connectivity to over 62 districts. Very few of these districts are unserved, as most have some telecom coverage; they are mostly underserved or a combination of the two.
MM: In the last decade, what have been the USF’s major contributions to these communities?
HMC: Over 10,000 mouzas (administrative districts) have been covered through the Fund and, as we speak, we are committing approximately Rs 85 billion for the development of the telecommunication infrastructure in those areas. So far, 15 million people have been provided with telecom coverage.
MM: Can you outline some of the achievements of the Fund over the last 10 years?
HMC: Most of the work has been done in the last two years. Since its inception, USF has laid down approximately 8,000 kilometres of fibre across Pakistan, and in the last two years we have awarded contracts for an additional 5,000 kilometres of fibre, of which approximately 1,500 kilometres has already been laid. We are aggressively pursuing fibre as the backbone infrastructure and this financial year, we are targeting another 5,000 kilometres and next year we will target a little more than that. In the last two years, the subsidies committed to the Fund have been worth approximately Rs 35 billion. So if we look at the 10 years, from 2007 to 2019, the total subsidy committed was Rs 65 billion, 60% of which was committed in the last two years. So the projects have seen almost 400% growth.
MM: What are these projects?
HMC: We have three types of projects. The first one is fibre, whereby we are connecting all the Union Councils (UCs) in Pakistan that do not have fibre yet. The second is the highways and motorways project, where we have provided 1,800 kilometres of connectivity, including 3G and 2G services on the Makran Coastal Highway and we are targeting another 500 kilometres on the M3 and M5 highways in southern Punjab this financial year. The third is high-speed mobile broadband for rural areas, where we put up towers and cover the population with 3G and 4G services – which is where we have covered 15 million people in 10,000 mouzas and 62 districts. This includes all of interior Sindh and southern Punjab, the majority of Balochistan and many districts of KPK.
MM: How challenging is digital adoption in areas where telecom services are now available but people have not necessarily caught up due to literacy or financial limitations?
HMC: There are challenges in terms of digital adoption and there are many reasons for this, not least because access to smartphones may not always be financially viable. However, telecommunications connectivity has made a significant impact on lives in terms of access to information, e-education and financial inclusion. A lot of people have learned to use a smartphone or to use connectivity in a very positive way.
MM: What do you hope to achieve in the next three to five years?
HMC: My vision is to provide connectivity to marginalised communities in a very affordable and efficient way. The larger vision is to establish USF as a leading public sector entity able to compete with any corporate organisation. The three pillars to achieving this are merit, excellence and discipline. Once these pillars are in place, we have the core values: D for diversity, I for integrity, G for growth, I for innovation, and T for teamwork. By bringing all of them together and making them part of our daily lives, I am very confident we will achieve this objective. In terms of connectivity, my target is that at the end of three years, at least 35 million people will have access to connectivity. | https://aurora.dawn.com/news/1144299/technology-has-always-been-a-passion-for-me |
By Erica Patino
How does your child react when things don’t go as planned?
Perhaps there’s trouble with a friend. Or maybe your child got a bad grade after studying hard for a test. Does your child get mad or shrug off the incident without giving much thought to what happened? Or does she think about what occurred, how her actions affected the outcome, and what she might do differently in the future to be more successful?
If it’s the second reaction, that’s called self-reflection or introspection. It’s a great skill for all kids to work on. It can also help those with learning and attention issues do better academically and socially.
Self-reflection might seem like something that’s more for adults—thinking about problems, brainstorming ideas on how to do better. But self-reflection is also important for kids, even young children.
As your child grows up, she’ll face different kinds of challenges in school and life. As she gets older, she’ll be expected to think more independently (with less intervention from you) and be responsible for her actions.
Kids with learning and attention issues may experience frustration at school and elsewhere. This could cause them to feel as if there’s nothing they can do to improve. Self-reflection can help your child keep from doing the same things over and over if she isn’t having success.
Say your child comes home and is upset because her friend Jill ignored her at school and wouldn’t sit with her at lunch. You know this isn’t a new issue. According to your child, Jill has been distancing herself since last week.
Rather than simply complain and get mad another time, your child could use self-reflection skills to figure out if she might be responsible for the change. Did she say something that hurt Jill’s feelings? Did she ignore Jill when Jill was trying to get her attention?
Or is it that they’re simply starting to drift apart? Maybe Jill is now rehearsing for the school play, and your child is on the soccer team.
Even if your child can’t figure out what caused the rift, self-reflection can help her decide what to do about it. She might decide to ask Jill what’s wrong. Or she might decide that she’d rather spend more time with kids from the soccer team rather than seek out Jill.
With self-reflection, your child can consider different options and pick the one that seems best to her. That can help her feel that she has some control over what’s happening in her life.
Self-reflection can help to foster success in your child. That’s because self-reflection isn’t just helpful when things aren’t going well. It’s good for kids to reflect on what’s going right, too.
Acknowledging when she’s successful can boost your child’s self-esteem. Finding different ways to approach challenges and work through them is a powerful way to build self-esteem and help your child grow emotionally. Self-reflection can help kids with learning and attention issues acknowledge their challenges without being overly focused on them.
Children with learning and attention issues may have a harder time learning this skill. You can assist your child by helping increase her self-awareness, whether she’s in grade school, middle school or high school.
Erica Patino is an online writer and editor who specializes in health and wellness content.
Molly Algermissen, Ph.D., is an associate professor of medical psychology at Columbia University Medical Center and clinical director of PROMISE. | https://www.understood.org/en/friends-feelings/empowering-your-child/self-awareness/the-importance-of-self-reflection
Biochemical and genetic analysis of a child with cystic fibrosis and cystinosis.
We have studied a child with cystic fibrosis (CF), nephropathic cystinosis, and manifestations of Bartter syndrome, a finding reported previously in both of these diseases (CF and cystinosis). The chance of an individual inheriting a mutant allele for both CF and cystinosis from each of his parents by independent segregation is very small. Therefore, other mechanisms of inheritance were investigated, including whether his diseases were caused by a chromosome deletion or rearrangement that caused defects in both genes, whether his phenotype was caused by a new mutation or variant of either disease, or whether both diseases were inherited together due to inheritance of 2 copies of the same chromosome from one of the parents (uniparental disomy). An investigation was made of whether having mutations for both CF and cystinosis resulted in a different phenotype for either disease and whether the child was a heterozygote rather than a homozygote for one of the mutations. The results suggest that neither disease influenced the expression of the defect in the other and that this child inherited a mutant allele for both diseases independently from each parent.
Evaluation of hospital readmissions in surgical patients: do administrative data tell the real story?
The Centers for Medicare & Medicaid Services has developed an all-cause readmission measure that uses administrative data to measure readmission rates and financially penalize hospitals with higher-than-expected readmission rates. To examine the accuracy of administrative codes in determining the cause of readmission as determined by medical record review, to evaluate the readmission measure's ability to accurately identify a readmission as planned, and to document the frequency of readmissions for reasons clinically unrelated to the original hospital stay. Retrospective review of all consecutive patients discharged from general surgery services at a tertiary care, university-affiliated teaching hospital during 8 consecutive quarters (quarter 4 [October through December] of 2009 through quarter 3 [July through September] of 2011). Clinical readmission diagnosis determined from direct medical record review was compared with the administrative diagnosis recorded in a claims database. The number of planned hospital readmissions defined by the readmission measure was compared with the number identified using clinical data. Readmissions unrelated to the original hospital stay were identified using clinical data. Discordance rate between administrative and clinical diagnoses for all hospital readmissions, discrepancy between planned readmissions defined by the readmission measure and identified by clinical medical record review, and fraction of hospital readmissions unrelated to the original hospital stay. Of the 315 hospital readmissions, the readmission diagnosis listed in the administrative claims data differed from the clinical diagnosis in 97 readmissions (30.8%). The readmission measure identified 15 readmissions (4.8%) as planned, whereas clinical data identified 43 readmissions (13.7%) as planned. Unrelated readmissions comprised 70 of the 258 unplanned readmissions (27.1%). Administrative billing data, as used by the readmission measure, do not reliably describe the reason for readmission. The readmission measure accounts for less than half of the planned readmissions and does not account for the nearly one-third of readmissions unrelated to the original hospital stay. Implementation of this readmission measure may result in unwarranted financial penalties for hospitals.
Last week, you read the list of what I view as my “must do” and “must haves” before starting your first year as a teacher. As promised, my Google Form Survey is linked under the “Resources” tab. Check it out/make a copy for your own classroom!
While this week’s content may not be as exciting, it is of IMMENSE importance, especially when beginning a career in education. Here, I share what I view as the Hallmarks of Digital Ethics and Reputation for Educators.
I have comprised this list based on what I have learned in both undergraduate and graduate courses, reviewing case studies, and through my own, personal experience. I will never forget what a past professor told us while in undergrad. She said that educators are quite literally held to a higher standard than the law. This statement scared me at first, but after having a year under my belt, makes complete sense to me. I now understand that she was telling the truth and simply wanted us to understand that one little lapse in judgment or carelessness could jeopardize our entire career. No matter how long you have been in the profession, I encourage you to do your own research and identity exploration to develop a similar list of digital ethics for yourself!
- Always check privacy settings and make sure all social media accounts are set to the highest level of privacy of content
- After doing one large “clean up” of social media photos and posts, frequently do checkups to make sure everything aligns with your identity as a teacher and leader in the community
- Never add students as friends or followers on social media platforms while they are still your students. (talk to a teacher mentor about how you feel when they are no longer your students/if there is a purpose to let them follow you on social media platforms?) *I do not include teacher pages (i.e. websites/educator blogs) as social media platforms.
- Think before you post anything and understand digital citizenship
- Establish your own identity, stay true to that identity but realize in this profession, there is typically no need to overshare
- Know the rules of your own district in regards to digital reputation
- Use social media in a positive light instead of a place to complain and spread negativity/draw negative attention towards yourself
- Always ask yourself, “Is it true? Is it kind? Is it necessary to post about?”
- NO offensive/inappropriate posts/photos
- Keep your opinions offline unless deemed safe to post by your own, professional judgment
*Be on the lookout for next week’s post! As always, I would love to hear from YOU. Comment your own experience/input regarding individual educator’s digital reputation. Let’s connect! | https://meetmeinthemiddle.school/2021/08/01/this-ones-for-the-1st-year-teachers-part-2/ |
1: Do thinner or thicker BATTER heads cause more, or less unwanted overtones (the after ring that we try to tune out/tame or muffle completely)?
2: Do Thinner or thicker RESONANT heads cause more, or less overtones?
3: What combo of batter and reso thickness is best to prevent over tones?
4: How do double ply heads vs single, apply to the above?
Can someone give me (and anyone else who is in my position) a definitive guide to this?
You really can't make hard and fast rules about overtones. What overtones are present, and to what extent, depends on much more than just head thickness and number of plies. The overall tension of the heads (high, medium, low), how they're tuned in relation to each other (same pitch, certain interval), how hard they're struck, the design of the bearing edge, the type of hoop, the dimensions of the drum, the shell's material and design, etc., all affect the presence of overtones.
If you want to reduce overtones, at the lower tunings commonly found in rock, then the trick is to get the top and bottom heads cooperating so that they produce a strong fundamental note. IMHO, the type of drum head is less important than the tuning relationship between top and bottom heads. Tuned a certain way, double ply heads can produce more overtones than single ply, and vice versa.
And if you find it difficult to judge the pitch of a drum head, and their interval relationship, by ear (as many drummers do), invest in a Tune Bot.
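A quick aside on why that after-ring sounds so unmusical in the first place: the overtones of an ideal circular drumhead are not whole-number multiples of the fundamental, so they never line up into a tidy chord the way a string's partials do. Below is a minimal sketch, assuming Python with scipy is available, that computes the ideal mode ratios; real heads deviate from these numbers because of the shell, bearing edges, air loading, and the second head.

```python
# Overtone ratios of an ideal circular membrane (no shell, no air loading).
# Mode (n, k) vibrates at a frequency proportional to the k-th zero of the
# Bessel function J_n; dividing by the fundamental's zero gives the ratio.
from scipy.special import jn_zeros

fundamental = jn_zeros(0, 1)[0]  # mode (0,1): the pitch you actually tune to

for n in range(3):                                       # nodal diameters
    for k, zero in enumerate(jn_zeros(n, 2), start=1):   # nodal circles
        print(f"mode ({n},{k}): {zero / fundamental:.2f} x fundamental")

# Prints ratios like 1.00, 2.30, 1.59, 2.92, 2.14, 3.50 -- an inharmonic
# series, unlike the neat 2x, 3x, 4x partials of a string or an air column.
```

That built-in inharmonicity is why the advice in this thread focuses on taming overtones (tuning relationships, dampening) rather than trying to tune them into consonance.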
I think tuning is a critical aspect when it comes to limiting unwanted overtones. There's also the bearing edge: sometimes it's difficult to get a particular drum tuned up because the edge isn't true or the drum isn't round.
Another thing that affects overtones is the tuning relationship between the batter and reso head. Each individual head can be tuned well to itself, but if the interval between the two heads is bad, it will cause ugly overtones too.
With that in mind, I'll echo what brentcn said - invest in a Tune Bot. My drums (really just my toms - I don't really use it for snare or kick) have been very consistent since I got mine because I use the recommended tuning settings for the batter and reso heads, which ensures a proper interval between the two, and that goes a long way toward eliminating those overtones.
Also, in my somewhat limited experience, it seems that coated heads, and heads with a built-in dampening ring of some kind, don't ring quite as much, which helps to reduce some of the higher frequencies that contribute to those overtones.
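To make the interval idea concrete: in equal temperament each semitone multiplies frequency by 2^(1/12), so once you settle on a batter pitch you can work out where the reso should sit for a given interval. A small sketch follows, assuming Python; the 150 Hz starting pitch is purely illustrative, and the specific settings a Tune Bot recommends for each drum size are not reproduced here.

```python
# Given a batter-head fundamental, find the reso-head pitch for a chosen
# interval in semitones. Equal temperament: one semitone = 2 ** (1/12).
def reso_frequency(batter_hz: float, semitones: float) -> float:
    return batter_hz * 2 ** (semitones / 12)

batter_hz = 150.0  # hypothetical tom batter pitch, purely for illustration
for name, semis in [("unison", 0), ("minor 3rd up", 3), ("perfect 4th up", 5)]:
    print(f"{name:>15}: tune the reso near {reso_frequency(batter_hz, semis):.1f} Hz")

# A reso tuned below the batter is just a negative interval,
# e.g. reso_frequency(150.0, -3) for a minor 3rd down.
```

Landing on exact numbers matters less than keeping the batter/reso relationship consistent from drum to drum, which seems to be what following the recommended Tune Bot settings accomplishes.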
I chased this bunny for several years in an effort to find the sound I like.
First off, the drums can be separated into snare, bass, and toms. Each of these has different batter/reso requirements.
For great articulation and brightness, a single-ply batter over a 300 snare-side reso works very well. The 200 reso reduces fullness in the drum’s sound, and the 500 reso mutes the wires too much (they don’t respond well). The heavier the batter head, the lower the harmonic overtones; that kills the brightness of the snare, and the drum loses its cut.
A jazz bass drum is tuned very differently from a rock or country bass drum. I’ve explored rock and funk. Funk needs a tighter sound: a very fast punch with very short decay. Everything I’ve tried has brought me to a single-ply head with a damping ring and tiny vent holes around the perimeter (old-style EQ4), tuned slightly above JAW (just above wrinkle). I use an EMAD ported reso with a dampening ring around the perimeter, and a couple of pillows inside reduces resonance further. I found that heavier batter heads lose the “snap” of the beater on the head (I use a wood beater, not felt).
I’ve tried Evans Reso 7 and Genera (10 mil) reso heads. The Reso 7 lost some low-end tone, so after one week I replaced them with Genera. For batters I’ve tried clear G1, G14, G2, EC1, and EC2, all tuned for maximum resonance. I’ll use Moon Gel or tape to shorten the decay; this way I can keep the tone of my toms consistent. Going from G1 to G2 loses the brightness of the stick attack on the head and reduces the high-frequency harmonics, which in turn reveals more of the lower tones. The EC (Edge Control) series is supposed to reduce overtones, but neither type (1-ply or 2-ply) produced a noticeable difference for me, and I prefer to manipulate my sound with gel or tape.
I tune my toms for max resonance, but some guys want a dead, dull thump with a lower or higher tone (e.g., Dave Grohl in Nirvana). If this is the desired sound, a coated head might work best.
I’ve not experimented enough with shell thickness to know how it plays into the tone, decay & harmonics. I had a Sonor birch kit for about 10 years and loved the floor toms but never liked the tone/decay of the mounted toms.
I agree with what others here have said.
But I generally get more overtones from thinner batter heads. I use coated 10-mil batter heads on my snare drums to get the most open, loudest sound. If the overtones are excessive, I put a little piece of gaffer's tape on the corner of the head. Usually by the second set of a live performance the band is louder (those darn deaf guitar players), and at that point I remove the tape.
Thanks for the replies, guys; some great tips and info. I realise there is a whole host of other variables to take into account, but I just wanted some general ideas, all else being equal.
Do Tune Bots work really well?
Very very cool post - this is kind of what I've noticed anecdotally, but you've really put in the time and effort to dial it in and figure out just what affects what.
It also sounds like you and I have similar preferences for how we like drums to sound.
Would any of you agree with this chart?
Grohl used a Tama bell brass snare (at least from various videos I've seen). It was one of the loudest drums at the time. He tuned it very low and beat the crap out of it. The guy didn't go for tone like, say, Weckl does. If you're after that old Nirvana tone, find a bell brass snare like this one, put a coated G2 head on the batter and start fussing.
A big difference between drums and other instruments is the cost of experimenting with different heads. You can buy them, but once you try them you cannot return them. A 4-tom experiment with just the batter heads is gonna run ~$50.
They do when the head & drum resonate. When the drum is tuned flabby, they aren't as accurate.
Thanks, trickg. I don't like a choked sound, but have been known to put tape all over the reso head to kill resonance.
Heads with unequal tension at the rods = whining. Loosening one rod = the old-timers’ lower-the-pitch trick for jazz, not for rock-style tom pounding.
Go back to the Gaddsen(?) tuning videos to see the cure for unequal tuning, and the cross patterns for tuning different-sized toms. | http://www.drummerworld.com/forums/showthread.php?s=6f7f5009662b1a526682854c68cf8ff6&p=1631443 |