According to American socio-biologist Edward O. Wilson, “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.”
This might explain why some of us can’t help but hoard our belongings (new and old), despite the relative abundance of our environment. From old iPhone boxes and bills to more miscellaneous documents and files, we’ve all had that sense of ‘I should hold on to that’ even though we haven’t looked at whatever ‘that’ is in years.
The same goes for organizations and data storage. Businesses’ digital storage cabinets are bursting at the seams with a plethora of sensitive data – from floating files and old images to old passwords and ex-employee details.
In her bestselling book and hit Netflix show, organizational expert Marie Kondo taught the world about “the life-changing magic of tidying up”, and that if you properly simplify and organize your home once, you’ll never have to do it again. While we can’t quite say the same about data, organizations can certainly get a lot closer to achieving ‘zero touch data protection’ in 2021 if they adopt a similar mindset.
An exponential challenge
One of the biggest challenges facing CIOs and CISOs is the sheer scale of data. After all, humans are somehow managing to generate 2.5 quintillion bytes of data every day, and more than half of all organizations have doubled their online data over the last two years.
Without proper visibility, IT management teams are unable to optimize storage costs, enforce data retention policies, or respond quickly to data discovery requests. This situation, if left unchecked, will only get worse as more data is created and expose the organization to incredible amounts of risk. With millions of workers now dispatched to home offices and kitchen tables all over the world, organizations have had to abandon slower digital transformation plans and shift their operating models almost overnight.
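To make one of those tasks concrete, here is a minimal sketch of a retention-policy sweep that flags files older than a retention window. It is an illustration only, not any particular vendor’s tooling, and the directory and retention period are hypothetical assumptions:

```python
import os
import time

RETENTION_DAYS = 7 * 365      # hypothetical seven-year retention window
ROOT = "/srv/backups"         # hypothetical storage root

def expired_files(root: str, retention_days: int):
    """Yield paths whose last-modified time falls outside the retention window."""
    cutoff = time.time() - retention_days * 86400
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

for path in expired_files(ROOT, RETENTION_DAYS):
    print("retention candidate:", path)
```

A production policy engine would add legal-hold exclusions and audit logging before anything is archived or deleted, but the underlying check is this simple.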
Cloud migrations and the adoption of virtual collaboration solutions have rapidly sped up to ensure the newly digital workforce has access to the tools and information they need to stay productive and secure. Consequently, data is being produced, shared and stored in more ways than before. This is creating cybersecurity and compliance challenges for companies as they navigate this new normal.
While this is tough enough, ineffective approaches for storing data are exacerbating these challenges. Data sources are increasingly spread across a variety of systems and services, such as endpoints, physical and virtual servers, SaaS applications and AWS infrastructure. Protecting this data has traditionally required multi-vendor solutions, resulting in backup data silos, administrative complexity, and increased costs. Also, as organizations begin to transition workloads and services to the cloud, IT teams are left with a mix of on-premises and cloud data silos, which creates data protection gaps and strains already burdened resources.
All of which is why 2021 is the year businesses must strike a better balance between storing data that’s critically important to the organization, and as Marie Kondo preaches, letting the rest go.
The benefits of evolution
The competitive advantages of upgrading your organization’s approach to data management in the cloud are massive. According to more than 700 IT leaders Druva surveyed across the US and UK, the benefits include increased usability of and accessibility to data, greater cost savings, and the ability to drive economies of scale.
Specifically, 79 percent of IT decision makers now see data management and protection as a key competitive advantage. 73 percent rely more heavily on data for business decisions and 33 percent believe the value of data has permanently increased since the pandemic began.
If the message for IT leaders regarding their pre-pandemic data strategies was to ‘get going’, in 2021 the message is ‘we need to go faster!’. Organizations must evolve to make data a pervasive part of everything from operational decision-making to customer experiences. Rapid availability and analysis will be vital. If businesses can do this successfully, the improved visibility into backup data will reduce administrative complexity and enable greater, more in-depth data insights. Simplified data protection with centralized policy management will also improve data governance, helping address legal and compliance demands. All of which has a material impact on the bottom line.
This kind of advanced cloud storage infrastructure-as-a-service (IaaS) adoption in the enterprise reduces both CAPEX and OPEX, while increasing scalability and flexibility. In fact, our own findings indicate it’s possible to achieve up to 50 percent lower total cost of ownership (TCO) without the cost and complexity of onsite hardware, software, and dedicated resources.
What must change in 2021
If businesses are to successfully embrace cloud technology for cleaning up data storage, some things will have to change. For example, the increased use of private and public cloud increases data usability and accessibility, but also requires everyone in the organization to have a stronger sense of shared responsibility for data protection and security. Cloud computing service providers must ensure availability of their infrastructure, but users must ensure they fulfil their end of the shared responsibility model. This requires clear company policies around data protection, alongside staff training and education.
Lastly, organizations should look for ways they can embrace as-a-service models, including data protection as a service (DPaaS). Instead of trying to build internal expertise around evolving data compliance requirements, security threats, and cloud backup management, you can now partner with a team of experts that can help you protect your most valuable assets.
If you can make progress in these key areas, in the words of Marie Kondo, your organization will be that much closer to sparking joy. With the right approach, CIOs and CISOs in 2021 can soon unlock the competitive advantage of simplified data management and achieve zero touch data protection.
- W. Curtis Preston is Chief Technical Evangelist at Druva.
https://computerbusinessworld.com/how-data-storage-and-management-must-evolve-how-data-storage-and-management-must-evolve/
Spoken Vs. Written History.
Brief Evaluation
Cruikshank describes how, while history is everywhere, only certain stories gain purchase. How does that claim apply to Ridington’s description of the writing of Native American spiritual traditions and Deloria’s view on definitions of science?
Cruikshank claims that only certain stories from history gain purchase, and that this is due to their representation and reinforcement of culturally appropriate behaviour. Cruikshank also argues that extensive written accounts from administrative authorities are able to outweigh local oral narrative, despite the fact that these written accounts do not, as is often perceived, contain a fuller or more objectively reliable account. Both written and oral narrative act to convey conventional social ideals of perfect behaviour; only the context of interpretation differs, as each invokes its own social system. The stories which gain purchase are those best able to display the ideals of the society in which they are told.
Ridington’s investigation of written accounts of Native American spiritual traditions comes to a conclusion similar to Cruikshank’s. Ridington finds that spiritual traditions are indigenous to the land and the peoples, creating a cosmic order within which the world realises its meaning. This is nearly impossible to convey in written form, as it requires a complete understanding of the world and culture that seems possible to gain only through experience. Western styles of storytelling use a monologue; Judeo-Christian creation stories, for example, present a single given story repeated almost identically over time. Native Americans, however, use a more dialogic style: traditions flow from discourse and alter slightly over time, rather than following the Judeo-Christian single-minded tradition of right and wrong that advocates conversion to its dogma. Ethnographic studies inevitably convey the writer’s bias and are always framed in Western ideas of religion, science and spirituality. These ethnographies are also invariably written as monologues, a form that fails to convey the communication between cultures necessary for understanding.
Deloria further claims that Western ideas of correct scientific method actually fail to consider many useful aspects of, and possible information about, the surrounding world. The Western belief that human societies seek knowledge devoid of superstition seems overruled by modern tribal societies. These societies do not wish to remove themselves and their knowledge from nature; instead they seem aware of rhythms that scientific people cannot understand. Deloria argues that tribal ways represent a complete, logical alternative to Western science, not a lesser or more primitive means. Tribal methods can even be considered superior in that no data is wasted: while Western science discards failed studies, tribal knowledge continues to grow and learn from all of these experiences. Tribal societies also manage to mix facts which Western science would divide into distinct categories.
Generally, it seems that tribal understandings and portrayals of the world need to be better respected and understood. Western ideals are not unequivocally the correct or most useful means of recording and gathering knowledge, and this needs to be recognised.
https://hubpages.com/education/Oral-Vs-Written-commentaries
Soil water repellency (SWR) limits the soil’s ability to wet up. In severe cases, water will pond on the surface and the underlying soil will remain dry. As a result, SWR can decrease the emergence and growth of agricultural crops. Maximum SWR is commonly thought to occur when the soil is at its driest (0% water content) and following drying at high temperatures, >40˚C. However, there are instances when maximum repellency has been observed at low, but not completely dry, soil water contents (~4% water content for sands). This phenomenon is not fully understood, and my thesis will explore mechanisms to explain the maximum SWR at low water content. As soils contain dissolved organic carbon (DOC), which has been found to induce SWR, my research will also investigate the interactive effect of DOC and low soil water content on SWR. Scientific research has demonstrated that soluble organic compounds may either increase or decrease SWR. DOC could therefore control the severity of SWR, while appropriate farming practices may influence soil DOC concentrations and, in turn, SWR.
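To illustrate how such a maximum might be located in practice, the sketch below fits a curve to repellency measurements (water drop penetration time, a common severity index) taken across a range of water contents. The numbers are invented for illustration, not experimental data:

```python
import numpy as np

# Hypothetical data: gravimetric water content (%) vs. water drop
# penetration time (s); longer times indicate more severe repellency.
water_content = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
penetration_time = np.array([120.0, 180.0, 260.0, 340.0, 210.0, 90.0, 20.0])

# Fit a quadratic through the points around the peak and take its vertex
# as an estimate of the water content at which repellency is greatest.
a, b, _c = np.polyfit(water_content[1:6], penetration_time[1:6], 2)
peak = -b / (2 * a)
print(f"Estimated water content of maximum repellency: {peak:.1f}%")
```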
Why my research is important
Globally and nationally, SWR is known to restrict soil water infiltration and limit agricultural production. Furthermore, according to CSIRO Marine and Atmospheric Research (http://www.cmar.csiro.au/e-prin/open/projections2001.pdf), by 2030 Australia is forecast to experience warmer temperatures and declining annual rainfall, so the incidence and severity of SWR may further increase. In Western Australia, SWR is estimated to decrease agricultural grain production by $250M to $330M per annum. Some mitigation and amelioration strategies have been developed to address the issue, such as soil inversion, claying and the use of surfactants, but they have practical limitations or are expensive. A fundamental understanding of the effects of low water content and dissolved organic carbon on SWR may provide an opportunity to develop effective alternative strategies to decrease the incidence of SWR and improve its management.
https://www.uwa.edu.au/science/Postgraduate-Profile?link_id=EAAAAKYNfxyI1RTGZeYCqM3syhDrvwt53j09cBlHVVLjcyqsxcMfzj3t8gfyEjMQuIIUSA==
Fair Seas is an Irish non-profit organisation seeking to protect, conserve and restore Ireland’s unique marine environment. This report presents 16 Areas of Interest for marine protected area (MPA) designation in Irish waters. An area of interest (AOI) is a critical biodiversity hotspot for one or more species of conservation interest. The proposed AOIs aim to accelerate the Irish government’s efforts to designate an MPA network in Irish waters and expand it to cover 30% of those waters by 2030.
The document is a data-heavy scientific report with numerous maps, charts and graphs in each section. The goal was to make it as accessible as possible while maintaining its scientific integrity. A multi-grid system was designed to organise the information and provide flexibility in laying out the diverse range of data: a 3-column grid for the AOI sections and a 2-column grid for the other sections.
Each AOI has an introductory spread—its name, qualifying interests, maps identifying its location, spatial extent and habitat type on the verso and a photograph of the area or species type on the recto. The following pages include the primary reasons for site importance, a description of features including Habitats, Seabirds, Elasmobranchs, Cetaceans and commercially exploited species, and a range of maps showing species densities.
https://www.100archive.com/projects/revitalising-our-seas
Education Strategies Adopted During COVID-19 Are Not Inclusive: UNESCO Report
A lack of preparedness and infrastructure, together with poor policymaking, has made online learning during COVID-19 disadvantageous for poor students, according to the UNESCO Global Education Monitoring Report 2020.
Education strategies adopted by countries across the world during the COVID-19 outbreak are promoting exclusion among students, according to the Global Education Monitoring Report 2020 by the United Nations Educational, Scientific and Cultural Organization, or UNESCO. “The challenge was too large for any education system to respond effectively. School closures placed unprecedented challenges on governments, teachers, students, and parents aiming to ensure learning continuity,” according to the report.
Countries across the world, including India, adopted online modes of education to continue the teaching-learning process during the global pandemic. However, a lack of preparedness and infrastructure has made distance-learning arrangements during COVID-19 “imperfect substitutes for classroom instruction”, especially for economically disadvantaged students, according to the report.
Poor policymaking has massively affected the education infrastructure of lower-income countries and contributed to the exclusion of underprivileged students in middle and higher-income countries, says the UNESCO report.
Technology In Education During COVID-19
According to the report, the use of technology in education depends on a country’s income. “74 per cent of lower-middle-income countries used television programmes in primary education, compared with 36 per cent of low-income countries,” the report said.
Though many lower- and middle-income countries, including India, could use television to broadcast programmes, the figure is significantly lower than in richer countries, where adoption of online methods reached 93 per cent for both primary and secondary education.
However, no current learning continuity solution ensures “learning for all”. The prevailing digital divide and lack of preparedness and resources are limitations of online education, according to the report.
“A survey in the United States found that only 43 per cent of teachers felt prepared to facilitate remote learning and just 1 in 5 said school leaders provided guidance,” the report added.
Students With Disabilities
The report further highlights that online education has been more difficult for differently-abled students. Students with mild learning disabilities, such as attention deficit hyperactivity disorder, might have struggled to work independently in front of a computer, the GEM report said.
“Overall, about 40% of low and lower-middle-income countries have not supported learners at risk of exclusion during the COVID-19 pandemic, such as those living in remote areas, the poor, linguistic minorities and learners with disabilities,” the report said.
“By increasing social isolation, the pandemic also increased the risk of marginalized students disengaging further from education and leaving school early,” the report added.
https://www.ndtv.com/education/education-strategies-adopted-during-covid-19-are-not-inclusive-unesco-report
If there’s one thing I’m certain of, it’s that I’m an experiential learner. Everything makes more sense when it’s tested, and the more senses I can use, the better. Somewhere between visual and kinesthetic, it seems I have a particular aversion to textbooks and homework. It’s not that I can’t do that stuff, I just feel trapped in someone else’s mind (usually an ageing Edwardian man), and it’s uncomfortable.
Going through school, subjects I was once passionate about became hell; a good example of this is math, or better, science. As a kid, I loved collecting samples, experimenting with the elements, excavating, and playing with light. The funny part is that in high school I nearly failed Geology, dropped out of Biology, argued my way out of Chemistry, but then took AP Physics, for which I did extracurricular work and got an A. Why is this? Because through real-life trial and error (using my own body), I could apply theory and test its relevance. This was not only incredibly satisfying, it grounded my understanding of what would have otherwise been very abstract and difficult material.
The same goes for wine. As an infinitely complex subject, it’s easy to get lost in the fluff of overworked, exhausting language, overlapping histories, tasting prerogatives, appellations and vintages. With all the confusion of memorizing this and that, there’s a major gap between reciting the material and KNOWING the wine. Really, don’t forget the basics. Here’s a classic example that gives me anxiety, as quoted from a French wine scholar:
“The textbook indicates that there are 24 Regional AOPs in Bourgogne (plus 44 Communal and 33 Grand Cru for a total of 101). However Appendix A lists only 23, and www.bourgogne-wines.com likewise states that there are 23. What is the missing AOP, or how else does one account for this discrepancy?”
While the scholar is focused on solving the discrepancy between 23 and 24 Burgundy AOPs, a more fruitful discussion might include questions like: what does it take to be labeled an AOP? Is AOP actually important for distinguishing good wines? What makes good wine? Why? And most importantly, if I was poured a [insert Regional AOP Burgundy] Pinot Noir blind in a glass, would I know? These questions would facilitate independent thinking on the scholar’s part, perhaps even offering solutions to the discrepancy they describe (or, better yet, deeper insights that push the scholar past this question altogether).
Being a human is complicated. We are often our own worst enemies, and find ourselves hard to understand or connect to. Especially in our modern world, we are trained to operate externally, objectively, and over time we begin to lose our subjective muscles: our sensory perceptions. When you look down at your hands, what do you see? Do you see your hands, or do you feel them? Do you see yourself feeling your hands? Or does seeing your hands stimulate your awareness of your sense of touch? If you can answer yes to this last question, you are able to connect one sense to another. Bravo!
Connecting our senses is a big part of what makes us humans intelligent. The better we are at understanding ourselves through our minds AND bodies, the better we can process insane amounts of information, share it, and have fun doing it. So instead of lugging around a wine encyclopedia, pick up a glass and give it a whirl without even knowing what it is. Challenge yourself, see how many senses you can use to relate (or not) to the wine. What does the smell remind you of? What does it feel like on your tongue? Does it taste weird? How? Most importantly, do you like it? There are no wrong answers here.
Of course, reading is awesome and everyone should do it. But in order to really absorb written material, make it come to life for YOU. Start with the subject. Get interested, then do research, not the other way around***. Questions that stem from your own curiosity lead to answers that really stick in your mind and your sensory perceptions. Reading about wine with no objective is like sailing at sea with no compass: you will get lost in the doldrums or be sunk by an iceberg. Like anything, it takes a certain amount of conscious re-training of your brain’s habits, but the results will blow your mind.
***BONUS POINTS to teachers and educational programs who choose to adopt this, or already engage their students in this manner.
https://terroirisms.com/2014/02/19/un-knowing-wine-the-importance-of-blind-tasting/
Guru Purnima 2013
Rosemary and I spent a wonderful 20 hours in Trenton, NJ this past Sunday. We drove up from Annapolis that morning to join our friends from Colorado to celebrate the Guru, and in particular, Sri Karunamayi Vijayeshwari Devi. We have been followers of Sri Karunamayi, Amma, for years and visit her as often as we can as she travels the US on an annual tour.
The July full moon is the occasion of the celebration of the Guru and is known as Guru Purnima. It is an incredible festival in honor of all Gurus through time and our individual Gurus through our spiritual lives. We have been to several of these now and each celebration, held in a different city each year, seems grander than the last.
The Guru Purnima this year included a rare, once-in-a-life-time puja, or special honoring and offering for deities. This year Amma offered the Sahastra Linga Maha Puja as part of the Guru Purnima celebration. This Puja is conducted only once every 60 years and we were privileged enough to be in Trenton to experience it. This Puja is an offering to Shiva; 1,116 clay Shiva Lingas were used in the ritual. Words cannot describe the experience. But imagine a ritual that can clear away all karma, not only of this life-time but of all life-times!
Am I now clean, clear and ready for Enlightenment? Well, all I know is something happened in that honoring and celebration of Shiva and the Gurus. It was a magical time and the transformation energy was at a peak at the conclusion of the Puja and as we then entered the honoring ceremony. As we honored beloved Amma I also conjured all of my spiritual teachers and gurus that have helped me on my way, my path to spiritual awakening. I have been blessed by many. And they all seemed to join me during this event and were there with the 400 gathered souls to recognize and honor all Teachers.
Why is any of this important? Yesterday my comment on Rosemary’s Exploration on Self-Protection included a suggestion that we protect ourselves from misguided choices. The prescription to avoid these choices is to rely on our intuitive powers to make the choices that lead us onward and upward, on our paths to awakening. Intuition, from the Latin ‘intueri’, means “to look inside.” To use our intuition is to seek a priori knowledge to inform our choices. And this brings me to the key teaching of Amma, repeated yet again in her “Guru Purnima Message”. I have included some of her words here because they are beautiful and they are highly instructive:
Your body is not the external body, the mental body, or the intellectual body. It is the fourth body, which is the eternal Self, ultimate pure spiritual consciousness. You have to connect with your Self to unleash the light within you. Self-realization should be the ultimate goal of every human birth. How can you make progress towards that goal? Meditate.
The Holy rishis and even the gods are seen often in meditative posture. What are they meditating on? They are meditating on their own Self, the Atman, the soul. All the divine souls are merged into one. There is no discrimination for the divine soul.
Meditation is the only path to connect with the Self and attain eternal bliss. That is the path Amma wants all of Her children to follow. Once you start meditating, pride will give way to liberality; I-ness and Me-ness will give way to We-ness; selfishness will give way to selflessness. You will love everyone and hate no one. You will not criticize others but accept everyone for who they are. You will start developing faith in all religions. You will come out of your small boundaries and will be on your way to ultimate spiritual consciousness.
Meditate. Look inside. Access intuition. From that place you are completely protected from poor choice. And you step on the path to eternal bliss!
Jai Karunamayi!
https://menandthegoddess.com/2013/07/26/guru-purnima-2013/
In the world of art, the origin of an artifact, i.e. the fact that each artifact can be assigned to a particular artist, is enormously important, both to determine actual authorship and to judge the value of an artifact, and also from an art historian’s point of view. The artist, as the author or creator of the artifact, owns comprehensive rights in and to this artifact that are not lost even after the artifact has been sold. Furthermore, art historians are also interested in the true origin of a work of art, since in many cases, in particular with older works of art, the artist who created them can no longer be determined beyond doubt. This can be due either to a lack or a forgery of a signature, or to the fact that eminent masters had a great number of diligent students whose task it was to finish parts or even whole works of art, which were then still signed by the masters, Rembrandt being an excellent case in point. In these cases, investigative techniques based on the highest technological standard are used to verify the artifacts. It is thus quite common to X-ray pictures, and infrared radiation is also often used to identify the watermarks of paper mills. Materials irradiated with UV light emit fluorescent light in various colors, which may thus be used to identify the material on which the work of art is based; this allows conclusions to be drawn as to the artist in question. Further methods for the identification of forgeries are thermoluminescence analysis, complex chemical analyses and the examination of the work of art under a microscope.
These methods are very complex and expensive, however, and also have to be carried out in suitable locations, usually laboratories, i.e. locations that are designed in such a manner that the radiation does not cause damage to the health of personnel, or rooms with a controlled climate, so that due care is given to maintaining the condition of the picture.
On the other hand, art experts can often determine the authorship of an artifact, in particular a picture, solely with their expert eye. In the course of their training and professional experience they have acquired the ability to recognize an artist as the author of a particular picture. The application of paint, linework and brushwork are some characteristics that are salient for an artist and facilitate identification of his or her works. Brushwork is generally a term used for the linework in painting. It can express the importance of individual areas of the image for the artist in question. For example, the linework of a picture can be finer when it is used for the face, than the linework the artist has used for clothing. This is an indication that the artist is particularly interested in working out the facial expression or the face itself. The brushwork is thus like the artist's personal trademark. Amongst other characteristics, it can also encompass the brushstrokes, the linework, the lightness and the strength of the brush pressure. An artifact, in particular a picture, can be identified with the aid of the artist's, and thus the author's, personal trademark, which then allows the assignment of proper authorship. Currently this is only possible by the personal examination of the picture by a professional's expert eye. For this purpose, the art expert has to study the painting to the smallest detail and has to examine it thoroughly. This process is time consuming and usually very expensive, since art experts are highly qualified professionals.
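The patent itself does not disclose an implementation, but the brushwork comparison it describes can be sketched with standard image-texture tools. The choice of feature (local binary patterns), the file names, and the nearest-neighbour matching are illustrative assumptions, not the claimed apparatus:

```python
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern

def brushwork_signature(path: str, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Summarise an image's linework texture as a normalised LBP histogram."""
    gray = io.imread(path, as_gray=True)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Hypothetical reference works of known authorship.
references = {
    "artist_a": brushwork_signature("artist_a_sample.png"),
    "artist_b": brushwork_signature("artist_b_sample.png"),
}
query = brushwork_signature("unattributed_picture.png")

# Attribute the unknown picture to the closest texture signature.
best = min(references, key=lambda name: np.linalg.norm(query - references[name]))
print("closest brushwork match:", best)
```

A serious attribution system would combine many such features and far more reference material, but the sketch shows the basic idea of turning ‘brushwork’ into a comparable signature.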
It is thus the object of the present invention to provide an apparatus and a method which assign a given picture to the artist, from a set of likewise predetermined artists, who created the picture.
Arts Council England’s draft ten-year strategy aims to ensure its own bureaucratic survival and reveals the thinking of an organisation that imagines it is in charge. It is not a strategy for the cultural organisations it is meant to serve, says Robert Hewison.
With its draft strategy “Shaping the next ten years”, it seems Arts Council England (ACE) intends to achieve a transformation from a country where ACE exists to help the arts to one where the arts exist to help the Arts Council.
This process has been going on for some time, but the document makes only one fleeting reference to ACE being an “arm’s-length” body, while proudly presenting the funder as “the national development agency for creativity and culture”. This reveals the thinking of an organisation that imagines it is in charge – subject, that is, to government approval.
The first page of the document states: “we are looking to shape a country”. Cultural organisations “must evolve”, “must” become more collaborative, and ACE will “require” them to prove their worth. Written in the first-person plural, this document is the voice of the bureaucratic imperative. This may be a consultation, but the document’s phrasing suggests that ‘you will have to do what we decide’.
‘Art’ means ‘funded art’
But who is this “we” and who chooses them? Although it is “accountable to the Department for Digital, Culture, Media and Sport”, the legitimacy of the Arts Council rests, as it always has done, on its own circular definition of what the arts are: the arts are what “we” fund. Because ACE acquired responsibility for museums and libraries in 2011, it has redefined the activities it supports as “culture”. To the embarrassment of its own name, it has discovered that many people are “uncomfortable with the label ‘the arts’”. Though it acknowledges that most people lead creative and cultural lives of their own making, helping people to access “high-quality culture” is what the document is concerned with. The Arts Council’s practice of self-definition, whether of ‘art’ or ‘culture’, remains the same.
This document is the voice of the bureaucratic imperative… ‘you will have to do what we decide’
If it accepts this definition of its purpose, ACE has to admit that its previous strategy, “Great Art and Culture for Everyone” (2010), has failed. Not everyone has benefitted. The active audience for the arts is still lamentably low or, as the document puts it, there are “widespread socio-economic and geographic variances”. Children and young people have unequal access across the country. The lack of diversity in the creative industries and publicly funded cultural organisations is as shameful as ever.
The art that the last strategy was aiming to achieve does not seem to have been that great either. ACE research reveals “a retreat from innovation, risk-taking and sustained talent development” among creative practitioners and cultural organisations. Publicly funded organisations are fragile and lacking in flexibility. Despite being accountable to the DCMS, and in “partnership” with the greatest obstacle to cultural development, the Department for Education, ACE has almost nothing to say about the real reason for this hesitation and fragility: cuts to government funding that are unlikely to be restored.
Weasel words
So what is to be done? ACE’s response to its lacklustre level of achievement has been to invent three “outcomes” so inoffensive that no one would disagree with them: “creative people” – more emphasis will be put on helping individual artists – “cultural communities” – encouraging local collaboration – and “a creative and cultural country”. Are we striving for an uncreative and philistine country? Surely not.
These outcomes will be achieved by the exercise of three cunningly matched “investment principles”; in other words, this is what you will have to do to get the money. You must show “ambition and quality”. It would be surprising if you were expected to be unambitious, but quality is a weasel word, just as “excellence” was in the previous strategy.
However, ACE seems to be moving away from excellence as it was previously understood, because the next pair of requirements are “inclusivity and relevance”. “Inclusivity” rightly means diversity, but from what we know of ACE’s thinking, “relevance” means that it will no longer be enough to do excellent work; the work will also have to be “valued” by the local community. In practice, relevance will turn out to be as relative a term as excellence or quality. Finally, you will have to show “dynamism and environmental sustainability”. Dynamism, loosely defined in ACE’s new strategy, appears to be something to do with building “successful businesses” that deploy “a richer data culture”.
Imprecise as these principles are, ACE seems confident that it can translate them into that dreaded term from the New Labour years – “targets”. Although the document pays lip service to partnership, the bureaucratic imperative is clear. “If these targets are not met” organisations will be held to account, ACE warns – without saying what the targets or the accountability measures will be. The message gleaned from this is ‘you work for us’ and must meet “our” outcomes, priorities, investment principles and performance measures.
ACE says it cannot continue to operate in the ways it always has done, but that is exactly what it intends to do. It will retain its current structure, and there is no suggestion that it will question its own governance or reduce the accounting burden that its desire for data has imposed on those fortunate enough to be funded. ACE claims that “Shaping the next ten years” is only a draft strategy. In reality, it is a strategy for its own bureaucratic survival. It is not a strategy for the cultural organisations ACE is meant to serve.
https://www.artsprofessional.co.uk/magazine/article/strategy-self-preservation
Megan McCarthy, RN, is the essence of a great nurse.
I have worked in oncology for nearly 30 years and have come across many wonderful and caring professionals. Megan McCarthy is one of the best. Her energy and passion for the work of caring for cancer patients is simply unmatched. She is a caring and considerate colleague to her nursing team and can be counted on to elevate any care setting she enters.
The neighborhood where Megan practices in the borough of Queens, New York, is one of the most culturally and linguistically diverse in the world. Her approach to each patient is sensitive and inquiring, always seeking to understand the person in front of her. Her skills are unmatched, both clinically and psychologically.
In the world of COVID-19, patients are understandably worried and fearful, especially in a city that has seen the devastating consequences of the pandemic over the past year.
Not only did Megan help patients navigate the vaccination process during treatment, but she eased fears by demonstrating strict procedural boundaries for everyone’s comfort and safety.
Megan acts as a mentor and instructor in her offices, displaying a natural leadership ability well beyond her years of experience. Those in her sphere admire her, and she is a go-to person for answers as well as for the development of those around her.
I submit a sincere and enthusiastic nomination for this wonderful nursing professional. The quality and compassion with which she provides care deserve recognition. Thank you for considering her for this award.
https://gwiazdy.org/unparalleled-energy-and-passion-for-cancer-care/
East Delhi District consists of three major localities: Gandhi Nagar, Vivek Vihar and Preet Vihar. As per Census 2011, its population is 17,09,346, comprising 9,07,500 males and 8,01,846 females. The district’s growth rate for the period 2001-2011 was 16.8%. East Delhi District has a sex ratio of 884 females for every 1,000 males. The total number of households in the district is 3,58,937.
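As a quick consistency check, the sex ratio and average household size follow directly from the quoted census figures; the sketch below simply re-derives them (numbers rewritten with Western digit grouping):

```python
males, females = 907_500, 801_846   # 9,07,500 and 8,01,846 in Indian grouping
households = 358_937                # 3,58,937
population = males + females        # 17,09,346

sex_ratio = round(females / males * 1000)  # females per 1,000 males
print(sex_ratio)                           # -> 884

print(round(population / households, 1))   # -> about 4.8 persons per household
```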
Geography of East Delhi
The main language spoken is Hindi.
East Delhi’s average altitude (elevation) is 214 meters above sea level.
http://www.indiagrowing.com/Delhi/East_Delhi
Will Harry Styles Leave One Direction? 'This Is Us' Director Morgan Spurlock Thinks Another Band Member Will Exit
Could another member of One Direction be plotting his exit? According to “One Direction: This Is Us” director Morgan Spurlock, it’s possible. The filmmaker says he believes frontman Harry Styles will follow in Zayn Malik’s footsteps and branch out on his own. After five years with the group, Malik left One Direction March 25. In an official statement he said he had “to do what feels right in my heart.”
Spurlock told TMZ’s cameras he thinks Liam Payne, Niall Horan, Louis Tomlinson and Styles will “stay together for a little while” as a foursome, but that it won’t last long. He predicts that in the not-so-distant future, Styles will be ready to focus on a career as a solo artist.
“I think they’ll stay together for a little bit longer and then I think at some point Harry will probably leave,” he said.
Spurlock also said he wasn’t at all surprised by Malik leaving the band as he believes he was “the most talented guy in that group.” The director said all the members of One Direction have musical gifts, but none quite parallel Malik's. He added that he supports Malik's decision to focus on his own career and thinks he chose a good time to give it a shot.
“He’s an incredible singer. All of them are talented singers -- that’s why they were all put together -- but to make the decision to leave and go off on your own and chase a solo career in the midst of being as famous as you are, as successful as you are – you can’t fault someone for that,” Spurlock told TMZ.
He’s not the only one who saw the split coming. Simon Cowell, who has been a mentor for One Direction since creating the group on "The X Factor," told “The Late Late Show” host James Corden during an appearance April 16 that he and Malik had discussed his leaving. Cowell said he tried to talk Malik into staying a bit longer, but “could see he was a bit stressed and he needed some time off.” In the end, Cowell also believes this will be for the best, adding that 1D's next album will likely be their best.
Since quitting the popular boy band, Malik has been spotted outside a recording studio several times. It is believed he is working with British music producer Naughty Boy on a solo project. Naughty Boy released a preview of a love song the pair created, “I Won’t Mind,” but did not say if or when fans should expect to hear more from them.
The 22-year-old singer recently received his first solo award, for Outstanding Contribution to Music, at the Asian Awards in London. In his acceptance speech Malik thanked the members of One Direction, calling them “four of the best guys that I ever met whilst being in a band.” He ended by saying, “Cheers to the future,” but gave no indication what that may include.
https://www.ibtimes.com/will-harry-styles-leave-one-direction-us-director-morgan-spurlock-thinks-another-band-1889008
Canada is a cooperating state within the European Space Agency (ESA), which affords its science and industry communities access to both science and industrial opportunities. ESA has recently released a science opportunity for the Lunar Gateway that is open to Canadians.
This week the Canadian Space Agency sent out a notice from ESA for the following Lunar Gateway science opportunity.
“The Canadian Space Agency (CSA) would like to inform you that the European Space Agency (ESA) has published a solicitation for the selection of international science teams for the European Radiation Sensor Array (ERSA) and Internal Dosimeter Array (IDA) payloads on the Lunar Gateway. This is the second of two solicitations that serve to establish the Gateway International Heliophysics and Space Radiation Science Working Group; the first of these solicitations, for the creation of science teams for the Heliophysics Environmental and Radiation Measurement Experiment Suite (HERMES), was published by NASA on November 17, 2020.”
“The CSA, ESA, JAXA, and NASA are collaborating in the development and scientific utilization of the Lunar Gateway. The CSA wishes to encourage Canadian participation in the Gateway program to help realize its potential as a platform for research into the lunar space environment. Although Canada is an ESA Exploration Envelope Programme participating state, for the scope of this research announcement, Canadian scientists may participate in proposals to ESA, but cannot be financially supported by ESA.”
“The deadline for submissions is November 12, 2021.”
“Further information about ERSA and the ESA announcement of opportunity may be found here.”
About Marc Boucher
Boucher is an entrepreneur, writer, editor and publisher. He is the founder of SpaceQ Media Inc. and CEO and co-founder of SpaceRef Interactive Inc. Boucher has spent 20 years working in various roles in the space industry and a total of 28 years as a technology entrepreneur, including creating Maple Square, Canada’s first internet directory and search engine.
On Friday, Judge Wendy Beetlestone of the Eastern District of Pennsylvania issued an injunction against Trump's attempt to overturn a provision in the Affordable Care Act, forcing employers — including religious groups like Baltimore's Little Sisters of the Poor — to provide insurance coverage for contraception and abortifacients in their healthcare plans.
In October, the president announced employers would be exempt from the Obamacare provision if it violates their religious or moral beliefs — a sweeping victory for the pro-life movement.
But led by Pennsylvania and California, a handful of blue states — Delaware, Maryland, Massachusetts, New York, Virginia and Washington — fought the move, filing suit against the administration.
On Thursday, pro-abortion Pennsylvania Attorney General Josh Shapiro argued before the Federal District Court in Philadelphia that Trump's rollback would imperil the state's 2.5 million women.
The next day, Judge Wendy Beetlestone decided against the administration.
“The Commonwealth’s concern is that absent available cost-effective contraception, women will either forego contraception entirely or choose cheaper but less effective methods — individual choices which will result in an increase in unintended pregnancies,” Beetlestone wrote in her 44-page decision. “That, in turn, will inflict economic harm on the Commonwealth because unintended pregnancies are more likely to impose additional costs on Pennsylvania’s state-funded health programs.”
"It is difficult to comprehend a rule that does more to undermine the Contraceptive Mandate or that intrudes more into the lives of women," added Beetlestone.
Shapiro applauded the ruling, saying, "This is just the first step, but today is a critical victory for millions of women and families and for the rule of law."
Department of Justice Spokesperson Lauren Ehrsam responded by saying the administration "disagrees with the court's ruling and [we] are evaluating next steps," adding, "The administration is committed to defending the religious liberty of all Americans, and we look forward to doing so in court."
This is a shameful ruling that seeks to continue the Obama-era assault on conscience rights and religious liberty. ... Why should Catholic nuns who care for the elderly poor be forced by the government to provide abortion-inducing drugs in their healthcare plans? Moreover, moral objectors like my own pro-life organization, SBAL, should not have to pay for life-ending drugs that are antithetical to our mission. There is absolutely no "compelling state interest" in forcing pro-life employers to violate their consciences to provide abortion-inducing drugs. We thank President Trump for standing up for conscience rights and religious liberty and are confident the administration will fight this ridiculous ruling.
In the lead-up to Friday's decision, Mark Rienzi, senior counsel for the Becket Fund for Religious Liberty — the firm representing the Little Sisters of the Poor in their fight to overturn the contraceptive mandate — denounced Shapiro and California Attorney General Xavier Becerra for trying to retain it.
"Sadly Josh Shapiro and Xavier Becerra think attacking nuns is a way to score political points," he said. "These men may think their campaign donors want them to sue nuns, but our guess is most taxpayers disagree. No one needs nuns in order to get contraceptives, and no one needs these guys reigniting the last administration's divisive and unnecessary culture war."
"These states are specifically targeting religious groups. Pennsylvania has never required anyone to provide contraceptives," he noted.
"Now, Pennsylvania is claiming religious groups have to provide these services. Legally there is no good argument for what these states are doing," he concluded.
PROBLEM TO BE SOLVED: To obtain a fine and straight carbon fiber having 1 nm to 50 nm diameter by using an inexpensive catalyst in a high yield.
SOLUTION: In this method for producing a fine carbon fiber, the method comprises (i) a thermal decomposition process for introducing a vapor-phase mixture composed of (a) a hydrocarbon, (b) a catalytic metal-containing organic complex, (c) sulfur or a sulfur compound and (d) hydrogen for a carrier into a reaction zone heated to a temperature to thermally decompose the catalytic metal-containing organic complex and the hydrocarbon and thermally decomposing the catalytic metal-containing organic complex and the hydrocarbon to form a fine carbon fiber, (ii) a thermal decomposition product discharging process, (iii) a cooling process for cooling the thermal decomposition product to a temperature lower than the decomposition temperature of the hydrocarbon and (iv) a separation process for separating the cooled product into the fine carbon fiber and an exhaust gas comprising hydrogen as a main component, the ratio of flow rate of the hydrocarbon to hydrogen and the reaction time of the hydrocarbon are controlled to form the fine carbon fiber having 1 nm to 50 nm average particle diameter.
COPYRIGHT: (C)2002,JPO
Myst V : End of Ages
Myst V: End of Ages is the final episode of the Myst series, published by Ubisoft and developed by Cyan Worlds. Released on September 20, 2005, this adventure game allows fans of the saga to learn more about the D’ni civilization and its history.
A brief summary
The Myst saga takes place in an imaginary universe made up of an immensity of parallel worlds called “Ages” that are interconnected. The main world of the saga is the world of the D’ni. It is a very old civilization with advanced technological knowledge through what is called “the art of writing”. This art is one of the central elements of the saga since it is a science that makes it possible to create the famous parallel worlds called “Ages”.
About Myst V: End of Ages
History of the saga
In this final opus, the player’s mission is to decide the fate of the D’ni civilization through many possible choices and riddles. Depending on the actions you take as you progress, the fate of this empire will change, ranging from saving it to destroying it.
Gameplay
Myst V: End of Ages offers immersive gameplay with a first-person view that allows players to truly immerse themselves in the world of Myst. A new feature of this opus is real-time movement. You can move your character using the point & click system, i.e. by clicking in a direction to move. It is also possible to opt for freer movement, in the same way as in an FPS.
Thus, for this final opus, Cyan Worlds has sought to evolve the Myst saga while keeping its founding principles.
https://www.logitheque.com/en/windows/myst-v-end-of-ages-60781
Not for Publication in West's Federal Reporter
Citation Limited Pursuant to 1st Cir. Loc. R. 32.3
United States Court of Appeals
For the First Circuit
No. 05-1336
SAMNANG SEN,
Petitioner,
v.
ALBERTO R. GONZALES, ATTORNEY GENERAL,
Respondent.
ON PETITION FOR REVIEW OF AN ORDER OF
THE BOARD OF IMMIGRATION APPEALS
Before
Lynch, Circuit Judge,
Campbell, Senior Circuit Judge,
and Lipez, Circuit Judge.
Martin J. McNulty on brief for petitioner.
Hillel R. Smith, Attorney, Office of Immigration Litigation,
Peter D. Keisler, Assistant Attorney General, and Terri J. Scadron,
Assistant Director, on brief for respondent.
October 26, 2005
Per Curiam. This is a petition for review of the denial
of asylum.1 Petitioner, a native and citizen of Cambodia had
sought asylum on the ground that he had been persecuted in the past
on the basis of his membership in particular social groups, namely,
the Khmer People National Liberation Front ("KPNLF"), the
Democratic Liberal Buddhism Party ("DLBP"), and the Sam Rainsey
Party ("SRP"), all of which opposed the incumbent Hun Sen regime,
and that he feared that, if he returned to Cambodia, he would be
immediately arrested and probably killed because of his active
membership in the SRP. Where, as here, the Board of Immigration
Appeals ("BIA") has summarily affirmed the decision of the
Immigration Judge ("IJ"), we review the findings and conclusions of
the IJ. Keo v. Ashcroft, 341 F.3d 57, 60 (1st Cir. 2003). Finding
no error in the IJ's decision that the petitioner is statutorily
ineligible for asylum, we deny the petition.
To be entitled to asylum, an applicant bears the burden
of proving that he is a "refugee." See 8 U.S.C. §§ 1101(a)(42)(A),
1158(b)(1); 8 C.F.R. § 1208.13(a); see also Xu v. Gonzales, 424
F.3d 45, 48 (1st Cir. 2005). To do that, the applicant must
1
The Immigration Judge also denied petitioner's claims for
withholding of removal and for relief under the Convention Against
Torture ("CAT"). However, because petitioner did not press those
claims on appeal to the Board of Immigration Appeals and did not
raise his CAT claim or develop his withholding of removal claim in
his brief to this court, those claims are therefore waived. See Un
v. Gonzales, 415 F.3d 205, 210 (1st Cir. 2005); Harutyunyan v.
Gonzales, 421 F.3d 64, 65 (1st Cir. 2005).
-2-
demonstrate either a well-founded fear of future persecution on
account of a protected ground, such as membership in a particular
social group, or past persecution on account of such a ground,
which entitles the applicant to a presumption of a well-founded
fear of future persecution. See 8 C.F.R. § 1208.13(b); see also
Xu, 424 F.3d at 48.
Establishing past persecution is a "daunting task," Diab
v. Ashcroft, 397 F.3d 35, 39 (1st Cir. 2005). To make such a
showing, the applicant must provide "conclusive evidence" that he
has suffered persecution on a protected ground, Romilus v.
Ashcroft, 385 F.3d 1, 6 (1st Cir. 2004); conclusory allegations are
not sufficient, Negeya v. Gonzales, 417 F.3d 78, 83 (1st Cir.
2005). Because "persecution" is not a defined term, "it is in the
first instance the prerogative of the Attorney General, acting
through the BIA, to give content to it." Bocova v. Gonzales, 412
F.3d 257, 262 (1st Cir. 2005). Although the BIA does so on a case-
by-case basis, id. at 263, it is clear that, "'to qualify as
persecution, a person's experience must rise above unpleasantness,
harassment, and even basic suffering,'" Ziu v. Gonzales, 412 F.3d
202, 2204 (1st Cir. 2005) (quoting Nelson v. INS, 232 F.3d 258, 263
(1st Cir. 2000)).
Where an applicant has not proved past persecution and
therefore does not benefit from a presumption of future
persecution, the applicant must satisfy both a subjective and an
-3-
objective test to prove a well-founded fear of future persecution.
Diab, 397 F.3d at 39. "'The subjective test requires the applicant
to prove his fear is genuine, while the objective test requires a
showing by credible and specific evidence that this fear is
reasonable.'" Id. (quoting Mukamusoni v. Ashcroft, 390 F.3d 110,
119 (1st Cir. 2004)).
We review administrative asylum decisions under the
"highly deferential 'substantial evidence' standard. Harutyunyan
v. Gonzales, 421 F.3d 64, 67 (1st Cir. 2005). Where, as here, "the
agency's determination . . . is based, not on substantial evidence
supporting its decision, but on a petitioner's failure to provide
evidence that would support a holding his favor. . ., 'substantial
evidence' review permits [the court] to reverse 'only if the
petitioner's evidence would compel a reasonable factfinder to
conclude that relief was warranted.'" Xu, 424 F.3d at 48 (quoting
Settenda v. Ashcroft, 377 F.3d 89, 93 (1st Cir. 2004)).
Under those deferential standards, we must deny the
petition. The record evidence does not compel the conclusion
either that petitioner was persecuted in the past or that he has an
objectively reasonable fear of being persecuted in the future.
As to past persecution, the IJ reasonably concluded that
the incidents cited by the petitioner do not rise to the level of
"persecution" for purposes of asylum eligibility. Although the
petitioner cites three incidents of having a gun pointed at him and
-4-
one incident of receiving a threatening letter, he was not harmed
in any of those confrontations, and the threat was not particularly
credible and was never fulfilled even though petitioner remained in
Cambodia for two and a half years after receiving it. See Ziu, 412
F.3d at 205 (finding that two episodes of physical abuse not
resulting in serious injury and various threats "do not amount to
persecution under our case law"); cf. Un, 415 F.3d at 209-10, 210
n.3 (holding that verbal death threat could amount to persecution,
where a credible friend confirmed that the threateners were
planning to kill the petitioner). Petitioner's general allegations
that he was often threatened by Hun Sen's followers and followed by
the secret police are not sufficiently specific to fulfill his
burden of providing "conclusive" and "specific" evidence of
persecution. See Topalli v. Gonzales, 417 F.3d 128, 132 n.5 (1st
Cir. 2005).
Nor does the evidence compel a finding that the incidents
where petitioner was held at gunpoint were motivated by his
political party membership. At the first such confrontation, the
soldiers stated that they had stopped petitioner's van because of
a curfew violation; and at the second, they stated that petitioner
and others had violated "local law." In the third incident, no
reason was given. Given the absence of even circumstantial
evidence of motive, the IJ was free to reject petitioner's
-5-
speculation that the incidents were motivated by his party
membership. Ziu, 412 F.3d at 205.
As to future persecution, even assuming, as the IJ
implicitly did, that petitioner's fear of future persecution is
subjectively sincere, the evidence does not compel a finding that
petitioner's fear is objectively reasonable. The IJ reasonably
concluded that petitioner would not be viewed as a party activist,
based on petitioner's own testimony that he had no rank or position
in the SRP but was merely a member and had ceased his active
involvement in party activities after the 1998 elections, and on the
fact that he was viewed as a government employee, which facilitated
his movement around the country, even after he had left his
government position. Furthermore, the 2002 State Department
Country Condition Report indicates that political parties normally
were able to conduct their activities freely without government
interference; that political violence associated with the February
2001 local elections was directed primarily at party activists and
candidates rather than mere members, see Khem v. Ashcroft, 342 F.3d
51, 54 (1st Cir. 2003); that the Government took action against
many alleged perpetrators of election-related violence; and that
the SRP won 1,346 local council seats, thus gaining some control
over the country's local governance, id. The IJ properly relied on
that report in concluding that petitioner's fear of future
persecution was not objectively reasonable. Negeya, 417 F.3d at
84-85. That conclusion is further supported by the fact that
petitioner reported no further problems after March 1998 although
he remained in Cambodia for more than two years after that.
Accordingly, the petition is denied.
Osteoporosis is a condition characterized by a decrease in bone density that reduces bone strength, leaving bones so weak and brittle that a fall or even mild stress can cause a fracture. Men and women of any age are susceptible to the disease; however, older white and Asian women past menopause are at the greatest risk.
Osteoporosis is often called a silent disease because one can’t feel bones weakening. Breaking a bone is often the first sign of osteoporosis; alternatively, a patient may notice that they are getting shorter or that their upper back is curving forward. If you are experiencing height loss or your spine is curving, be sure to consult your doctor immediately.
A bone density test is the only test that can diagnose osteoporosis before a broken bone occurs. It estimates the density of your bones and your chance of breaking one. The test uses a machine to measure the amount of bone in your hip, spine and, sometimes, other bones.
Who do we recommend for a Bone Density Test?
Your bone density test results are reported using T-scores. A T-score shows how much higher or lower your bone density is than that of a healthy 30-year-old adult.
A T-score of -1.0 or above is normal bone density.
A T-score between -1.0 and -2.5 means you have low bone density or osteopenia.
A T-score of -2.5 or below is a diagnosis of osteoporosis. | https://www.fortismumbai.com/osteoporosis-bone-density-test-mumbai/ |
The present invention pertains generally to a joint prosthesis. In particular, the present invention relates to absorbing forces in a joint prosthesis.
The human body includes several joints such as the knee, the hip, the shoulder and the elbow. These joints are vulnerable to damage from various injuries, wear and disease. When the joints have been severely damaged, partial or total joint replacement may be the only viable solution. In many joint replacements, a prosthetic structure is inserted into the joint. Typically, the prosthetics include a base member secured to a bone to allow normal joint articulation.
The human knee is the single largest joint of the human body, but due to its structure, is arguably the most vulnerable to damage. The leg consists principally of a lower bone called a tibia and an upper bone known as the femur. The tibia and femur are hinged together at the knee joint. The knee joint includes several femoral condyles supported in an engagement with crescentic fibrocartilages that are positioned on the upper end of the tibia and receive the femur. The joint is held together by numerous ligaments, muscles and tendons. The patella is a similarly supported bone positioned in front of the knee joint and acts as a shield for it.
In addition to providing mobility, the knee plays a major role in supporting the body during static and dynamic activities. The knee works in conjunction with the hip and ankle to support the body weight during static erect posture. The knee is also heavily loaded because of its location connecting the two longest bones in the human body. Body weight, inertia and ground reaction forces often produce large moments at the knee. Dynamically, the knee joint must transmit extremely high forces needed for powerful movement of the lower extremity, while damping out impulsive shock loads to the spine and head. Furthermore, the knee must provide major stability to the lower extremity as well as fulfill major mobility roles during movement.
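To make the loading concrete, consider a rough, illustrative estimate of the moment produced at the knee by the ground reaction force. The figures below are hypothetical and chosen purely for illustration; peak ground reaction forces during running are commonly estimated at roughly two to three times body weight:

$$M = F \times d \approx (3 \times 700\ \mathrm{N}) \times 0.05\ \mathrm{m} \approx 105\ \mathrm{N\,m},$$

where $F$ is the ground reaction force for a person weighing about $700\ \mathrm{N}$ and $d$ is an assumed moment arm of $5\ \mathrm{cm}$ between the line of action of the force and the joint center. Sustained over the millions of load cycles a prosthesis experiences in vivo, moments of this magnitude illustrate why force absorption is a central design concern.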
In current knee replacement prosthetic designs, the tibia is resected to form a flat, horizontal platform known as a tibial plateau. A tibial platform is secured to the tibial plateau with posts or anchors fixed normal, or perpendicular, to the tibial plateau. The anchors provide additional support to the tibial platform when the joint is subjected to the shear, tipping and torque forces present under normal knee articulation.
A femoral component, comprising a curved convex semi-spherical shell, covers the femoral condyles and slidably engages a concave tibial bearing insert. On the side opposite the femoral component, the tibial insert is substantially flat and slidably engages the tibial platform. Interaction of the opposing surfaces of these three elements (the femoral component, the tibial insert and the tibial platform) allows the prosthesis to function in a manner equivalent to a natural knee joint.
Current prosthetic designs are relatively inflexible and inelastic, especially when reacting to forces produced on the knee joint. When a prosthesis is placed in vivo, it experiences a large number of force cycles that can ultimately lead to its failure. As a result, a prosthesis is needed that can absorb these forces and resist failure over a large number of force cycles.
The Melancholic Atmosphere Of Death's Door Is My Favorite Part Of The Game So Far
Death's Door is just oozing with atmosphere, its visuals, soundtrack, and storytelling all working in tandem to construct an experience that I enjoy so far. There's a distinctive melancholic stagnation to Death's Door and every piece of the game seemingly upholds that feeling, creating an experience that feels very complete. It reminds me a lot of last year's Hades (and not just because both games present death through the lens of a business office), in that everything in the game is working towards producing a specific vibe.
An isometric action game, Death's Door sees you play as a young crow whose job to collect the souls of the dead hits a bit of a snag when an elderly crow steals the giant soul you were charged with collecting. In order to complete your assignment, you'll need to find a way to get the giant soul back, sending you on an adventure across a world filled with folks seeking a means to avoid death.
Death's Door has an enjoyable combat loop of slashing, dodging, and firing off magical arrows. Despite its appearance, Death's Door is neither a roguelike nor a Souls-like--when you die, you respawn at the last checkpoint with all your currency intact. So there's no frustrating reset of your progress or tense journey back to where you died. Instead, I fought my way through Death's Door's opening levels at a pretty steady pace, ensuring each environment or enemy I encountered didn't become stale over time.
And I think all of that supports my original point: Death's Door is an atmospheric adventure. And that atmosphere is preserved and appreciated because you're constantly pushing forward--you're seeing the world twist on itself and hearing the music from one area flow into the next. Plus, venturing into optional areas to find pieces of lore is more enticing when you know that dying isn't a huge reset to your progress.
It's a pretty intimate cast of characters in the opening hours too. Each stands out in their own unique way--my favorite is Pothead, a man cursed by a witch to have a pot of soup for a head. (He gives me major Dark Souls' Siegmeyer vibes.) There's not much dialogue in Death's Door and the main character is a silent protagonist, but a remarkable amount of each character's personality is conveyed via their body language. So even though I know very little about these characters' histories, I love many of them. I can't wait to see what else Death's Door has in store.
After my two hours of hands-on playtime with Death's Door, I reached out to programmer, writer, and animator Mark Foster and producer, designer, composer, and sound designer David Fenn to ask them about how the atmosphere for the game's world was created. They emailed me back responses. Death's Door is scheduled to launch for Xbox and PC on July 20.
I noticed the music for certain areas seems to fade into each other; they're different but similar enough to sound like different parts of the same song--why structure the soundtrack to play out this way as opposed to making harder cuts in the music to further differentiate each individual track/location?
We wanted to give the world of Death's Door a distinctive vibe throughout, so we went for a more unified soundtrack with a fairly consistent atmosphere, focusing on the feeling of melancholy and tiredness of a world stagnating without death and a lack of new life being born into it. There are also themes which are used for certain characters which are woven into the areas of the world that those characters have had an impact on, so certain chapters might feel quite blended together for that reason. That said, we're still pretty confident the soundtrack will take you to plenty of interesting places and bring a few surprises!
I really like how the world's colors transition from the grey World of Doors to the more vibrant areas you go to on your adventure--what was the inspiration behind that stylistic choice? Why do something like that (beyond making me go, "Whoa, that's cool")?
One element to this is the idea that the crow's office world is this noir office space, like a 1930s detective film. This felt like it fit well with the crows being black and grey themselves.
The other core reason for this is we wanted there to be a clear juxtaposition between this space, which acts as a hub world, and the outside world where most of the game takes place.
The main game world is set in a land untouched by death, intended to feel bleak yet beautiful, and the dreary office space is a darker greyscale environment, a reflection of the monotony of office work amplified by this grim world that has spent many centuries growing more and more devoid of life.
Despite the frantic nature of the combat, Death's Door has--so far anyway--had a rather chill vibe. With one exception: When you die and hear that loud noise and DEATH flashes across the screen. In a game that handles death with a calm atmosphere, why is the player's death regarded as this obtrusive event?
We wouldn't say that deaths are handled with a calm atmosphere in the game. The main targets of the player are those who are avoiding Death at all costs, and in a way the crows are no exception--they carry out their jobs knowing that if they don't, they will die. Failing on the job would not be a pleasant or casual occurrence for a Reaper. Not to mention, dying would involve a lot of paperwork for the other office crows...
Deaths are important moments in games like this. They're moments to be remembered and reflected upon so next time you tackle the same situation, you have new experience to wield and overcome the obstacles that previously took you down.
I've been finding items that detail the history of Death's Door's world and characters--does picking them up change how the story plays out and how certain characters may react to me or are they solely there for fleshing out the lore?
There's no consistent rule for how the items work. Some just flesh out lore, some provide a more direct purpose for exploration and some will impact the player's interactions with the NPCs of the world. There are also some unique features to some of the shiny collectables you find that we will leave players to discover on their own. Won't say any more on that to avoid spoilers!
Why tell so much of the story through optional items as opposed to more traditional cutscenes and expositional dialogue between characters?
We do actually make sure we tell a full and non-ambiguous story through all of the main character interactions and cutscenes in the game, which felt like a bold step for us because as indies it's often more common to opt for a more open-to-interpretation "story without words" (which is a style we still love and have used before!).
We put a lot of thought into the pacing, making sure you're never slowed down by having lore dumped on you. For that reason, many of the character interactions and cutscenes are fairly concise. You then have the option of digging deeper into the lore through optional interactions, items and secrets for players who want to unravel more of the mysteries, and find deeper secrets hidden in the world.
Why a silent, unnamed protagonist? We don't even get to know how they're feeling or what they're thinking beyond their body movements.
For the most part, the player can impart their feelings and reactions to the events of the game onto the protagonist like a player avatar, and use this to feel more connected to the game world. That said, we do prescribe a little bit of personality onto the protagonist, in that they are a young and inexperienced Reaper with a sense of innocent curiosity. Having the player character not speak serves this personality further and also plays into the idea that the Crows generally do what they are told without questioning it.
Sara Friedman is a reporter/producer for GCN, covering cloud, cybersecurity and a wide range of other public-sector IT topics.
Before joining GCN, Friedman was a reporter for Gambling Compliance, where she covered state issues related to casinos, lotteries and fantasy sports. She has also written for Communications Daily and Washington Internet Daily on state telecom and cloud computing. Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.
Friedman can be contacted at [email protected] or follow her on Twitter @SaraEFriedman.
NSF, Energy Department invest in AI research
The new funding opportunities will distribute approximately $170 million to advanced research projects powered by artificial intelligence.
Building a cloud-ready workforce
After earning an associate degree in cloud computing at Northern Virginia Community College, students can take courses at George Mason University toward a bachelor of applied science degree in cloud computing.
Air Force, MIT launch AI accelerator
The joint artificial intelligence accelerator program will focus on fast-tracking AI technologies through fundamental research in computational intelligence, reasoning, decision-making and autonomy.
Cybersecurity
Filling the federal cyber talent pipeline
The Cybersecurity Talent Initiative aims to jumpstart cyber careers by placing university graduates in federal agencies.
Emerging Tech
HHS investigates blockchain for securing log files
The HHS Division of Acquisition is working on a pilot to secure system logs required by the Continuous Diagnostics and Mitigation program.
ARL taps universities for battlefield AI research
Carnegie Mellon University will lead a consortium of universities developing solutions to support autonomous processing and highly effective human-machine teams.
Emerging Tech
CFTC chairman calls for light touch on blockchain regulation
Today's rapidly changing financial environment calls for a light regulatory touch when it comes to distributed ledger technologies, says the chief of the Commodity Futures Trading Commission.
NSF rolls out financial management bots
The National Science Foundation is working with the Bureau of Fiscal Service to automate payment processes.
Emerging Tech
FDA explores blockchain to secure drug supply chain
A Food and Drug Administration pilot is looking for technologies to verify and trace prescription drugs.
Emerging Tech
How blockchain-secured voting works
The West Virginia Secretary of State’s office explains how the first statewide blockchain voting pilot for overseas voters came together.
Cloud & Infrastructure
DISA investigates blockchain as a service
The Defense Information Systems Agency is considering adding blockchain as a service to its infrastructure platform.
Emerging Tech
Marriage certificates sealed by blockchain
Hunter Halcomb, the head of IT for the Washoe County Recorder’s Office, spoke with GCN about the challenges of putting marriage certificates onto a blockchain-based system.
Emerging Tech
Blockchain makes new inroads into public sector
Many of the efforts are still nascent, but all levels of government have started to explore blockchain’s capabilities through pilot projects, legislation and research.
Emerging Tech
HHS expects to roll out blockchain acquisition solution
The Department of Health and Human Services is making a big push to have the first blockchain-based program in the federal government that gains an authority to operate.
Emerging Tech
NSF explores blockchain for grants management
The Grants Community Blockchain would allow agencies and universities to track funding opportunities.
Cybersecurity
DHS looks to blockchain to deter counterfeit certificates
The Department of Homeland Security wants to prevent forgeries in the certificates and licenses issued by Customs and Border Protection, Citizen and Immigration Services and the Transportation Security Administration.
Emerging Tech
GSA expands blockchain procurement pilot
The General Services Administration is investigating how blockchain can be applied to the Multiple Award Schedules program.
Cybersecurity
How West Virginia brought blockchain-secured voting to Election Day
The West Virginia Secretary of State estimates 144 voters in 30 different countries used the app to cast secure, anonymous ballots.
Cybersecurity
CDC tests blockchain for outbreak response
The Centers for Disease Control and Prevention wants to use blockchain to get health responders into crisis zones faster.
Emerging Tech
CBP tests augmented reality, blockchain to better protect intellectual property
Customs and Border Protection is using emerging tech to help agents enforce intellectual property rights. | https://gcn.com/voices/sara-friedman/18809/?oref=gcn-post-author
AEN-LLC collaborates with experts representing a diverse consortium of knowledge in crafting solutions for our clients. The result of our collaborative approach is that we are able to deliver cutting-edge, state-of-the-art solutions to the challenges presented to us by our clients.
MULTIPLE DISCIPLINES
Members of our teams have extensive backgrounds in multiple disciplines, which positions them to bring their unique perspectives to crafting solutions. | http://www.aen-llc.com/digital/partners.php |
I started out as a developmental biologist, but over the past two decades my lab has come to focus more and more on questions of evolution. We investigate these both from a genomic perspective and starting from phenotypes, primarily, but not exclusively, in Arabidopsis thaliana. The most recent addition is a strong investment in plant-microbe interactions, including natural microbiomes; see our Pathodopsis project! I am an elected member of the US National Academy of Sciences, the Royal Society, the German National Academy of Sciences and EMBO. I have served or am serving on many advisory and editorial boards and have co-founded three biotech start-ups. Download a recent CV here.
Karasov, T. L., Almario, J., Friedemann, C., Ding, W., Giolai, M., Heavens, D., Kersten, S., Lundberg, D., Neumann, M., Regalado, J., Neher, R. A., Kemen, E., Weigel D.
Cell Host Microbe (2018), 24(1) 168-179.
Chae E., Bomblies K., Kim S. T., Karelina D., Zaidem M., Ossowski S., Martin-Pizarro C., Laitinen R. A., Rowan B. A., Tenenboim H., Lechner S., Demar M., Habring-Müller A., Lanz C., Rätsch G. and Weigel D.
Weigel D., Jürgens G., Küttner F., Seifert E. and Jäckle H.
Gene expression in hybrid plants.
Phenotypic variation within species is often observed at both the morphological and the physiological level. While genetic variation contributes to structural alterations in proteins, differences in gene expression can provide an additional means of varying cellular composition and cell responses. I am using next generation sequencing to study the transcriptomes of several natural Arabidopsis thaliana accessions with full genome information, and also manually generated F1 hybrids. My main focus is on detecting cis and trans factors that contribute to differences in spatial and temporal expression, particularly those with roles in local adaptation, such as responding to pathogen pressures.
Evolutionary rates and signatures. Experimental evolution. Spatial distribution of genetic variation.
How can some species thrive close to the Arctic Circle as well as in the Sahara Desert? And what are the consequences for genetic diversity? I address these questions with the cosmopolitan plant model, Arabidopsis thaliana. I combine genomic data from the 1001 Genomes Project with ecological experiments using seeds originating from all over the world. Specifically, I study how populations with low genetic diversity could colonise a large part of North America within only a few centuries, and what role de novo mutations played in that process. In the native species range, Eurasia, I investigate how different genotypes respond to challenging climatic conditions by performing common garden experiments at extreme altitudes or under simulated drought. With these experiments, I aim to characterise the natural selection regimes that have acted on the different genetic variants, and how they can explain patterns of local adaptation as well as the geographic distribution of genetic diversity.
Exposito-Alonso, M., Becker, C., Schuenemann, V. J., Reitter, E., Setzer, C., Slovak, R., Brachi, B., Hagmann, J., Grimm, D. G., Jiahui, C., Busch, W., Bergelson, J., Ness, R. W., Krause, J., Burbano, H. A., Weigel, D. PLoS Genet (2018), 14(2): e1007155.
Exposito-Alonso, M., Vasseur, F., Ding, W., Wang, G., Burbano, H. A., Weigel, D.
Nat Ecol Evol (2018), 2: 352-358.
Exposito-Alonso, M., Rodríguez, R. G., Barragán, C., Capovilla, G., Chae, E., Devos, J., Dogan, E. S., Friedemann, C., Gross, C., Lang, P., Lundberg, D., Middendorf, V., Kageyama, J., Karasov, T., Kersten, S., Petersen, S., Rabbani, L., Regalado, J., Reinelt, L., Rowan, B., Seymour, D. K., Symenoidi, E., Schwab, R., Tran, D. T. N., Venkataramani, K., Van de Weyer, A.-L., Vasseur, F., Wang, G., Wedegärtner, R., Weiss, F., Wu, R., Xi, W., Zaidem, M., Zhu, W., García-Arenal, F., Burbano, H. A., Bossdorf, O., Weigel, D.
Co-evolution of Arabidopsis and its oomycete pathogen Hyaloperonospora.
Co-evolution of pathogens and their host shape the diversity of both partners. In contrast to many well-studied crops, wild plant populations usually provide extensive genetic diversity to counteract pathogen pressure. My work focusses on the other side of the coin: using Arabidopsis thaliana and its natural pathogen Hyaloperonospora arabidopsidis (Hpa), I use genetic, genomic, and molecular tools to characterize how plant diversity shapes that of its pathogen. In combination with directed infection assays, association of phenotype and genotype will help me pinpoint genomic regions involved in pathogen adaptation.
Metagenomics. Plant-Microbe Interactions. Immune Response Analysis in A. thaliana.
I am a trained plant breeder working as a technical and scientific assistant. My work concerns plant-microbe interactions in local populations. In one project, we sequence A. thaliana rosette microbiomes to study factors influencing the composition of these microbial communities. In a second, I assess phenotypic diversity of immune responses in North American A. thaliana populations challenged with Albugo candida and Hyaloperonospora arabidopsidis.
DNA sequencing and all associated challenges.
I’m heading the Genome Center at the institute. Apart from making sure all our instruments run smoothly and continuously, establishing and advancing protocols for Next Generation Sequencing libraries is my prime focus. Over the last years, I have experienced the tremendous advancement in sequencing technologies, and moved from classical Sanger sequencing to highly parallelized short-read sequencing on a series of Illumina instruments. Our latest acquisition, the PacBio Sequel, adds single-molecule long-read sequencing to our repertoire and, as the other instruments before it, gives us the opportunity to master numerous exciting challenges.
Bolger, A., Scossa, F., Bolger, M. E., Lanz, C., Maumus, F., Tohge, T., Quesneville, H., Alseekh, S., Sorensen, I., Lichtenstein, G., Fich, E. A., Conte, M., Keller, H., Schneeberger, K., Schwacke, R., Ofner, I., Vrebalov, J., Xu, Y., Osorio, S., Aflitos, S. A., Schijlen, E., Jimenez-Gomez, J. M., Ryngajllo, M., Kimura, S., Kumar, R., Koenig, D., Headland, L. R., Maloof, J. N., Sinha, N., van Ham, R. C., Lankhorst, R. K., Mao, L., Vogel, A., Arsova, B., Panstruga, R., Fei, Z., Rose, J. K., Zamir, D., Carrari, F., Giovannoni, J. J., Weigel, D., Usadel, B., Fernie A. R.
Chae, E., Bomblies, K., Kim, S.-T., Karelina, D., Zaidem, M., Ossowski, S., Martin Pizarro, C., Laitinen, R. A. E., Rowan, B. A., Tenenboim, H., Lechner, S., Demar, M., Habring-Müller, A., Lanz, C., Rätsch, G., Weigel, D.
Cao, J., Schneeberger, K., Ossowski, S., Gunther, T., Bender, S., Fitz, J., Koenig, D., Lanz, C., Stegle, O., Lippert, C., Wang, X., Ott, F., Muller, J., Alonso-Blanco, C., Borgwardt, K., Schmid, K. J., Weigel, D.
Hanikenne, M., Talke, I. N., Haydon, M. J., Lanz, C., Nolte, A., Motte, P., Kroymann, J., Weigel, D., Krämer, U.
Rendulic, S., Jagtap, P., Rosinus, A., Eppinger, M., Baar, C., Lanz, C., Keller, H., Lambert, C., Evans, K. J., Goesmann, A., Meyer, F., Sockett, R. E., Schuster, S. C.
GxG interactions. Flowering time variation. Herbicide resistance.
I am interested in deciphering the genetic basis of adaptation and phenotypic plasticity of flowering time of plants, especially in response to changing environmental conditions. I use natural variation as a resource of genetic diversity to identify new alleles of known genes to better understand gene function, regulation, and interaction. Employing CRISPR/Cas9 technology, I investigate GxG interaction by testing a deleterious genetic deletion in a large set of natural Arabidopsis genetic backgrounds. I also work on herbicide resistance of Alopecurus myosuroides (black-grass), a plant species classified as a weed. Several next generation sequencing technologies and methods are applied to study the dynamics of resistance dispersion and the population structure of local and nationwide samples.
Natural haplotypes of FLM non-coding sequences fine-tune flowering time in ambient spring temperatures in Arabidopsis.
Lutz, U., Nussbaumer, T., Spannagl, M., Dienter, J., Mayer, K. F., Schwechheimer C.
Modulation of Ambient Temperature-Dependent Flowering in Arabidopsis thaliana by Natural Variation of FLOWERING LOCUS M.
Lutz, U., Posé, D., Pfeifer, M., Gundlach, H., Hagmann, J., Wang, C., Weigel, D., Mayer, K. F., Schmid, M., Schwechheimer C.
A set of Columbia-0-specific single nucleotide polymorphism markers for the genetic analysis of natural variation in Arabidopsis thaliana.
Evolution and regulation of ribosomal RNA genes. Herbicide resistance.
Epistatic and allelic interactions control expression of ribosomal RNA gene clusters in Arabidopsis thaliana.
Rabanal, F. A., Mandáková, T., Soto-Jiménez, L. M., Greenhalgh, R., Parrott, D. L., Lutzmayer S., Steffen, J. G., Nizhynska, V., Mott, R., Lysak, M. A., Clark, R. M. and Nordborg, M.
Unstable Inheritance of 45S rRNA Genes in Arabidopsis thaliana.
Rabanal, F. A., Nizhynska, V., Mandáková, T., Novikova, P. Y., Lysak, M. A., Mott, R. and Nordborg, M.
1,135 genomes reveal the global pattern of polymorphism in Arabidopsis thaliana.
Long, Q., Rabanal, F. A., Meng, D., Huber, C. D., Farlow, A., Platzer, A., Zhang, Q., Vilhjálmsson, B. J., Korte, A., Nizhynska, V., Vronin, V., Korte, P., Sedman, L., Mandáková, T., Lysak, M. A., Seren, Ü., Hellmann, I. and Nordborg, M.
Co-evolution in natural A. thaliana – oomycete pathosystems.
I am a plant pathologist interested in understanding co-evolutionary patterns in host-pathogen interactions. In the long term, I intend to apply the knowledge I acquire to the development of durable and sustainable crop disease management strategies. In the Weigel lab, I am studying phenotypic and molecular evolution in North American A. thaliana populations and the co-existing oomycete pathogens H. arabidopsidis (downy mildew) and A. candida (white rust). Important questions are whether new pathogen recognition capabilities in the host more often evolve through mutation or introgression from divergent lineages, and whether virulence factors in the pathogens are co-evolving in response.
Shirsekar G. S., Vega-Sanchez M. E., Bordeos A., Baraoidan M., Swisshelm A., Fan J., Park C. H., Leung H. and Wang G. L.
Mol Plant Microbe Interact (2014), 27(6) 528-36.
Park C. H*., Chen S*., Shirsekar G*., Zhou B*., Khang C. H., Songkumarn P., Afzal A. J., Ning Y., Wang R., Bellizzi M., Valent B. and Wang G. L.
Liu J., Li W., Ning Y., Shirsekar G., Cai Y., Wang X., Dai L., Wang Z., Liu W. and Wang G. L.
Plant Physiol (2012), 160(1) 28-37.
My project is focused on understanding how epigenetic modifications can impact gene expression at the molecular level. I am employing transgene-induced epi-mutations as well as natural epigenetic variants in Arabidopsis thaliana, and primarily focus on alterations in DNA methylation. Monitoring genome-wide changes in chromatin architecture and gene expression in varying methylation backgrounds helps me distinguish the processes at play and their temporal roles.
GxG interactions. Diversity in flowering and immunity genetic architectures.
I am interested in natural genetic variation, which is found in all organisms, but often particularly obvious in plants. Two paradigms for extreme phenotypic variation in a single plant species are flowering time and immune responses in Arabidopsis thaliana. I am taking a systematic approach to determine gene x gene (GxG) interactions, by testing the effects of the same genetic mutation on a diverse set of genetic backgrounds. To this end, I am using CRISPR/Cas9 technology to knock out the flowering regulator FLOWERING LOCUS C (FLC) and the immune regulator ISOCHORISMATE SYNTHASE 1 (ICS1) in dozens of natural accessions.
Dimitriadis E., Alexiou G. A., Tsotsou P., Simeonidi E., Stefanaki K., Patereli A., Prodromou N. and Pandis N.
J Neurooncol (2013), 113(3) 353-8.
Evolutionary genomics of herbicide-resistant weeds.
The evolution of herbicide-resistant weeds is a predictable consequence of natural selection. Although resistance alleles bestow a fitness advantage in the presence of herbicide, resistance may come at a fitness cost in the absence of herbicide selection. This would influence the spread of herbicide-resistance-conferring alleles in natural populations.
I am evaluating the costs associated with a prominent herbicide-resistant mutation in 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), the target enzyme of glyphosate, by modelling herbicide resistance in A. thaliana using CRISPR-Cas9 technology. High resolution phenotyping and competition experiments are employed to assess fitness and changes in allele frequency.
In addition, I am conducting a genome-wide association study investigating the alleles underlying the differential response of A. thaliana accessions to varying glyphosate doses.
I am a trained gardener and help with plant care in various projects. Apart from the standard lab pet Arabidopsis thaliana, I am also in charge of other plant species that require more specialized growth conditions.
Impact of tissue-specific regeneration and stress priming in different A. thaliana accessions and different plant species.
Heritable phenotypic variation in plants is mostly due to differences in DNA sequence. A number of studies have, however, shown that epigenetic changes can also result in phenotypic alteration. Furthermore, it has been suggested that epigenetic diversity could positively impact a population’s productivity, as well as its tolerance of environmental stresses.
I am developing a novel methodology for the robust production of epigenetically stable plant variants, in order to increase epigenetic and phenotypic diversity. Special focus is on the propagation of otherwise tissue-specific epigenomes, as well as on natural Arabidopsis accessions. High throughput phenotyping and sequencing methods will be applied to characterize the resulting plant variants.
Wibowo, A., Becker, C., Marconi, G., Durr, J., Price, J., Hagmann, J., Papareddy, R., Putra, H., Kageyama, J., Becker, J., Weigel, D., Gutierrez-Marcos, J.
Belfield, E. J., Gan, X., Mithani, A., Brown, C., Jiang, C., Franklin, K., Alvey, E., Wibowo, A., Jung, M., Bailey, K., Kalwani, S., Ragoussis, J., Mott, R., Harberd, N. P.
Mahfouz, M. M., Li, L., Shamimuazzaman, M., Wibowo, A., Fang, X., Zhu, J. K.
Plant evolutionary genomics. Evolutionary dynamics of gene content in bacterial and plant genomes. Co-evolution of plant-microbe interactions.
Ashkenazy, H., Sela, I., Levy Karin, E., Landan, G., Pupko, T. Systematic Biology (2018).
Ashkenazy, H. Abadi, S., Martz, E., Chay, O., Mayrose, I., Pupko, T., Ben-Tal, N. Nucleic Acids Res (2016), 44: W344-350.
Sela, I.*, Ashkenazy, H.*, Katoh, K., Pupko, T. Nucleic Acids Res, (2015), 43: W7-14 (* Equal contribution).
Ashkenazy, H., Cohen, O., Pupko, T., Huchon, D. Genome Biol Evol, (2014), 6(12): 3199-3209.
Automated Phenotyping. Pipelines for processing of short and long read sequencing data.
Bezrukov, I., Schmidt, H., Gatidis, S., Mantlik, F., Schafer, J. F., Schwenzer, N. and Pichler, B. J.
Bezrukov, I., Schmidt, H., Mantlik, F., Schwenzer, N., Brendle, C., Schölkopf, B. and Pichler, B.
Hofmann, M., Bezrukov, I., Mantlik, F., Aschoff, P., Steinke, F., Beyer, T., Pichler, B. J. and Schölkopf, B.
Plant-microbiome-interaction. Comparative genomics of phyllosphere bacteria.
I am a bioinformatician with a wide range of interests, having worked in areas such as plant and organelle evolution, bacterial genomics, genome and transcriptome assembly and companion animal microbiome.
Deusch, O., O'Flynn, C., Colyer, A., Morris, P., Allaway, D., Jones, P. G., Swanson, K. S.
Davis, I. J., Wallis, C., Deusch, O., Colyer, A., Milella, L., Loman, N., Harris, S.
Gruenheit, N., Deusch, O., Esser, C., Becker, M., Voelckel, C., Lockhart, P.
Deusch, O., Landan, G., Roettger, M., Gruenheit, N., Kowallik, K. V., Allen, J. F., Martin, W., Dagan, T.
Patterns of structural variation in genomes.
My main goal is to describe structural variation in plant genomes, and understand its contribution to natural selection and adaptation. To do so, I use Next Generation Sequencing data, importantly sequencing reads obtained with long-read technology, as well as comparative genomics.
Array enrichment for targeted NGS analyses. Species-wide spectrum of NB-LRR (NLR) genes.
I am interested in using computational biology to decipher the molecular mechanisms leading to genetic diversity, adaptive evolution and speciation. My first project focuses on natural variation in resistance (R) genes of Arabidopsis thaliana. The main class of R-genes encodes nucleotide binding site-leucine rich repeat (NLR/NB-LRR) proteins and mediates effector-triggered immunity (ETI) in plants. My work will help us understand how variation in these genes enables plants to resist pathogen pressures.
Evolution of transcriptional networks. Transcriptional regulation. Plant evolutionary genomics.
Plants adapting to distinct ecological niches need to cope with variable antagonistic factors, such as limited nutritional resources, competitors, pathogens, and adverse climate. Gene expression is expected to change accordingly, hence, plants adapted to separate environments will transcribe their genome differently even when grown under the same controlled conditions. I am interested in how transcription regulation is moulded by the environment and how the regulatory genome facilitates these changes, focusing on natural strains of A. thaliana.
Voichek, Y.*, Mittelman, K.*, Gordon, Y., Bar-Ziv, R., Lifshitz Smit, D., Shenhav R., Barkai, N.
Voichek, Y.*, Bar-Ziv, R.*, Barkai, N.
Bar-Ziv, R.*, Voichek, Y.*, Barkai, N.
Fuchs, G.*, Voichek, Y.*, Benjamin, S., Gilad, S., Amit, I., Oren, M.
Microbial evolution. Plant evolutionary genomics. Plant pathogen interactions.
Successful pathogen colonization of a host requires not only that the pathogen successfully surpass the host immune system, but also that it survive and proliferate in the presence of other microbes. My research focuses on the evolution of pathogen and plant in the context of a larger microbial community.
Karasov T. L., Kniskern J. M., Gao L., DeYoung B. J., Ding J., Dubiella U., Lastra R. O., Nallu S., Roux F., Innes R. W., Barrett L. G., Hudson R. R.and Bergelson J. Nature (2014), 512(7515) 436-40.
Karasov T. L., Horton M. W. and Bergelson J.
Curr Opin Plant Biol (2014), 18 24-30.
Karasov T., Messer P. W. and Petrov D. A.
PLoS Genet (2010), 6(6) e1000924.
Map distribution of microbial genotypes onto that of plant genotypes in a wild field. Associate plant and microbial genotypes in nature while tracking plant growth and transcriptional output. Test predictions about the microbial community and fitness using targeted genome engineering of locally adapted plants.
Nature exposes plants to large communities of microbes that are also interacting amongst themselves. Because maintaining an immune system is costly, successful plants must streamline their defense capabilities. I am investigating the influence of locally-adapted plant genetic variation on the colonization of wild microbes in the field. This will help understand the pressures that drive the evolution of the plant immune system, and to identify plant genes and key microbes that significantly contribute to microbial colonization patterns.
Wagner M. R., Lundberg D. S., Coleman-Derr D., Tringe S. G., Dangl J. L. and Mitchell-Olds T.
Ecol Lett (2014), 17(6) 717-26.
Lundberg D. S., Yourstone S., Mieczkowski P., Jones C. D. and Dangl J. L.
Nat Methods (2013), 10(10) 999-1002.
Lundberg D. S., Lebeis S. L., Paredes S. H., Yourstone S., Gehring J., Malfatti S., Tremblay J., Engelbrektson A., Kunin V., del Rio T. G., Edgar R. C., Eickhorst T., Ley R. E., Hugenholtz P., Tringe S. G. and Dangl J. L.
Flowering time regulation in Arabidopsis. INTACT technology.
I am currently on maternity leave - back at the bench in 2017.
Pathodopsis. Geographical patterns of microbial diversity on Arabidopsis. Plant-microbe co-evolution.
I am interested in how plants deal with their associated pathogens in the wild. My primary aim is to identify signatures of co-evolution between host and pathogens, and I do so mainly in the context of the Pathodopsis project. Pathodopsis focuses on two important pathogens of Arabidopsis thaliana, the oomycete Hyaloperonospora arabidopsidis and bacteria of the genus Pseudomonas, which we sampled from all over Europe. Combining DNA sequencing data, available metadata, as well as infection assays, I investigate local adaptation of host as well as pathogens.
Classification of plant microbiome samples. GPU computational tools.
My main research focuses on bioinformatics, computational biology, and algorithm development. Currently, I am working with metagenomics to understand the plant microbiome, aiming to develop new ways to analyze microbial communities using machine learning and empirical inference techniques. A special tool I am interested in is the use of GPU computation for massively parallel algorithms. This way, complex problems can be approached without the use of sophisticated IT infrastructure. Other interests include next generation sequencing technology and plant-pathogen evolution.
Genetic factors determining the specificity of plant-microbe interactions.
Plants serve as a rich habitat for microbes. Numerous macro and micro niches, both inside the plant and on its surface, allow a diverse set of microbes to thrive and propagate.
Using Arabidopsis thaliana and the wide-spread gram-negative bacterium Pseudomonas syringae as a model system, I aim to understand the interaction of host and microbe genotypes, and thus the selectivity of colonization. I am investigating both pathogenic and mutualistic Pseudomonas strains, and use genomic tools to decipher the dynamics of gene repertoires enabling or preventing plant colonization - both from a plant and a microbe perspective.
Barda, O., Shalev, O., Alster, S., Buxdorf, K., Gafni, A. and Levy, M.
Hybrid incompatibility. Evolution of immune receptors. Arabidopsis arenosa.
The combination of two genomes in hybrid plants can lead to detrimental effects, often caused by the interaction of incompatible immune genes. NLR-type immune receptors play major roles in this interaction, and to gain a better understanding of the molecular mechanisms involved in their activation, I am investigating the complexity and variability of NLR-type immune receptor gene clusters in Arabidopsis thaliana. I also study Arabidopsis arenosa to find out whether hybrid incompatibility is more or less prevalent in this naturally outcrossing species, compared to the closely related selfer Arabidopsis thaliana.
I'm a research technician in the department.
Hybrid incompatibility. Evolution of disease resistance.
My work as a technician in the group focuses on molecular and plant projects aimed at identifying hybrid incompatibilities and characterizing the underlying causal genes. I have been coordinating the generation and maintenance of the 80x80 diallel of Arabidopsis thaliana accessions. Furthermore, I am generating molecular tools for the analysis of RPW8 and RPP8, two of the genes we are analyzing more closely now. Aside from research, I am the lab safety person and have several organizational tasks that help the department run smoothly.
Chae E., Bomblies K., Kim S. T., Karelina D., Zaidem M., Ossowski S., Martin-Pizarro C., Laitinen R. A., Rowan B. A., Tenenboim H., Lechner S.,Demar M., Habring-Muller A., Lanz C., Ratsch G. and Weigel D.
Biochemical characterization of immune protein complexes. Plant-pathogen interaction. Hybrid incompatibility.
Plants have evolved sophisticated mechanisms to recognize non-self molecules, allowing them to deploy effective immune reactions against a myriad of pathogens. Resistance proteins play an important role in pathogen defense, autoimmunity, and hybrid incompatibility. My research focuses on the activation mechanism of plant resistance proteins.
Li, L.*, Kim, P.*, Yu, L.*, Cai, G., Chen, S., Alfano, J. R., Zhou, J. M.
Cell Host Microbe. (2014), 20(4):504-14.
Li, L.*, Li, M.*, Zhou, Z., Liang, X., Liu, Z., Cai, G., Gao, L., Zhang, X., Wang, Y., Chen, S., Zhou, J. M.
Cell Host Microbe. (2014), 15(3):329-38.
Sun, Y.*, Li, L.*, Macho, A. P., Han, Z., Hu, Z., Zipfel, C., Zhou, J. M., Chai, J.
The molecular mechanism of heterosis.
I am a plant population geneticist trained to exploit state-of-the-art molecular, genomic, and statistical tools to address evolutionary questions. My current research focus is on hybrid plants, and I use systematically acquired growth trajectories and transcriptome profiles from Arabidopsis thaliana hybrids and their parents to apply GWAS, TWAS, and eQTL analyses that will allow me to uncover the relationship between genetic variation (available from the 1001 Genomes resource) and hybrid phenotype. My research will systematically evaluate whether heterosis is mainly dictated by overall heterozygosity in hybrid genomes, or rather by the expression/interactions of a core set of genes.
Yuan, W., Flowers, J. M., Sahraie, D. J., Ehrenreich, I. M., Purugganan, M. D.
Natural modifiers of a natural ACD6 allele. Modulation of immune response in natural populations.
I am interested in understanding the genetic and molecular mechanisms involved in the trade-off between growth and defense in plants. To this end, I am dissecting genetic variation in Arabidopsis thaliana accessions adapted to distinct geographical regions across the world. Combining genome-wide association analysis (GWAS), quantitative genetic analysis and genome editing (the CRISPR/Cas9 system), I aim to understand how plants optimize their fitness in response to resource limitation and environmental fluctuation.
Zhu W.*, Hu, B., Becker, C., Dogan, E. S., Berendzen, K. W., Weigel, D. and Liu C*.
Grimm, D. G., Roqueiro, D., Salomé, P. A., Kleeberger, S., Greshake, B., Zhu, W., Liu, C., Lippert, C., Stegle, O., Schölkopf B., Weigel, D., Borgwardt, K. M.
The Plant Cell (2017). 29(1): 5-19.
Sanchez-Bermajo E.*, Zhu W.*, Tasset C., Eimer H., Sureshkumar S., Singh R., Sundaramoorthi V., Colling L. and Balasubramanian S.
Zhu W*., Ausin I*., Seleznev A., Méndez-Vigo B., Picó F. X., Sureshkumar S., Sundaramoorthi V., Bulach D., Powell D., Seemann T., Alonso-Blanco C. and Balasubramanian S.
The dynamics of past plant-pathogen epidemics. Identification and timing of key events in crop domestication. Colonization of new ecological niches by invasive or introduced species.
I am very interested in evolutionary biology and chose genomics to study the forces and processes that influence evolution. I combine the use of modern and historic samples from both herbaria and archaeological remains. My group performs computational analysis of large datasets resulting from high-throughput DNA sequencing with a special emphasis on the particularities of DNA retrieved from historic samples: ancient DNA (aDNA). I have been extensively involved in the study of hominid evolution and currently focus mainly on plant and plant-pathogen evolution.
Weiss, C. L., Schuenemann, V.J., Devos, J., Shirsekar, G., Reiter, E. Gould, B.A., Stinchcombe, J.R., Krause J., Burbano, H.A.
Weiss, C. L., Dannemann, M., Pruefer K., Burbano H. A.
Yoshida K., Burbano H. A*., Krause J., Thines M., Weigel D*. and Kamoun S*.
Burbano H. A., Hodges E., Green R. E., Briggs A. W., Krause J., Meyer M., Good J. M., Maricic T., Johnson P. L., Xuan Z., Rooks M., Bhattacharjee A., Brizuela L., Albert F. W., de la Rasilla M., Fortea J., Rosas A., Lachmann M., Hannon G. J. and Paabo S.
I am currently on maternity leave - back at the bench in 2018.
Genetic diversity trailing global change. Natural variation in microRNA pathways.
I am interested in different aspects of adaptation and evolution, from small-scale (molecules) to large-scale (populations) processes. To complete my skill repertoire for looking at this big picture of evolution, I am now, after a molecular biology PhD on the biogenesis and function of miRNAs, doing a postdoc in ancient genetics and evolution. My project focuses on the effects of global change on genetic variation in plants: where, how and with which outcome do plants react (adapt?) to environmental changes? To address these questions, I am developing RAD sequencing and analysis methods for five non-model plant species, to process ~1000 contemporary samples and ~300 historic samples up to ~200 years old that I collected in forests and herbaria. The project is part of the German Biodiversity Exploratories, a large-scale, long-term DFG-funded effort for biodiversity and ecosystem research.
Ancient genomics. Evolution of methylation in key plant species. Dynamics of host - pathogen interactions through history.
Historical and ancient samples, such as dried plants in herbaria, can tell us about the genetic makeup of a species at the time of collection and allow us to directly study, rather than infer, changes in DNA sequences over historical time spans. Advances in sequencing technologies and extraction methods have allowed us to interrogate not only very small amounts of material, but also epigenetic modifications on the DNA, such as cytosine methylation. I am analyzing those methylation patterns in ancient and modern plant samples to study changes across recent history, which will allow us to investigate the evolution of epigenetically controlled traits.
Environmental occurrence of arsenic in Colombia: a review.
Alonso, D. L*., Latorre, S*., Castillo, E., Brandão, P.F.
Analyses of ancient DNA retrieved from herbarium specimens. Co-evolution of plants and their pathogens, combining ancient and modern DNA from infected plant samples.
I am interested in using Next Generation Sequencing (NGS) to address questions regarding the co-evolution of plants and their pathogens. The sequencing of ancient DNA (aDNA) is especially powerful in this regard, since it enables us to look back into the genomes of pathogens that triggered past epidemics. Desiccated plants that are stored in herbaria provide a great resource, since they sometimes also carry infectious organisms, and I am using herbarium samples to understand host pathogen interactions on a genomic level, over a timescale of several hundred years. | https://weigelworld.org/people.html |
It looks like I should use the result from part 3 for part 4. However, if so, how should I use the initial values?
I'm confused about what to do when 2b) asks us to solve for v and then gives us v. I feel like I'm missing something obvious, but I can't figure it out. Can anyone help me?
I'm also confused by this. Could it possibly mean "Solve for u using v = ..."?
for the last part, can we just assume the same solution as c) but state a few assumptions instead? it's because I don't think the general solution of the equation would vary since no other IC were stated.
should the $\phi$ in eq. 6 read $\phi(r+ct)$ rather than $\phi(x+ct)$?
How can I make it continuous at r=0 when I have 1/2r as part of the general solution?
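A minimal sketch of the standard reduction, in case it helps; this assumes the problem is the radial 3D wave equation $u_{tt}=c^2\left(u_{rr}+\frac{2}{r}u_r\right)$, which isn't quoted in this thread, so adjust if your equation differs. Setting $v=ru$ turns it into the 1D wave equation $v_{tt}=c^2v_{rr}$, and d'Alembert then gives
$$u(r,t)=\frac{1}{2r}\bigl[\phi(r+ct)+\phi(r-ct)\bigr]+\frac{1}{2cr}\int_{r-ct}^{r+ct}\psi(s)\,ds,$$
with $\phi(r)=r\,u(r,0)$ and $\psi(r)=r\,u_t(r,0)$. If $\phi$ and $\psi$ are extended to negative arguments as odd functions, both numerators vanish as $r\to 0$, and L'Hôpital's rule gives the finite limit
$$u(0,t)=\phi'(ct)+\frac{1}{c}\,\psi(ct).$$
The $\frac{1}{2r}$ factor therefore causes no discontinuity at the origin; the odd extension is exactly what makes it work.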
I'm even more confused after your response to Thomas' comment - why is part b) the only part that has x as a variable now?
Could you make it clear which version you change first, so I know if I should always check the html version instead of the pdf version for changes?
Usually changes come first to html.
In this question $r$ is always positive, right (since it's the distance to the origin)? Should $u(r,0)$ and $u_t(r,0)$ be even functions of $r$? I guess we need additional information about $u(r,0)$ and $u_t(r,0)$ so that $v$ can be extended to negative values. | http://forum.math.toronto.edu/index.php?amp;topic=28.0
Hyundai's rear reflectors don't meet safety standards and Renault has some defective airbags.
South Korea's top automaker Hyundai Motor and the local unit of French automaker Renault SA will recall almost 150,000 cars due to a manufacturing defect and a safety problem, officials said on April 25.
The Ministry of Land, Transport and Maritime Affairs said rear reflectors installed in Hyundai's mid-size YF Sonata sedan and its Tucson sport-utility vehicle did not meet safety standards.
Hyundai will offer free repair services to the owners of 19,211 YF Sonatas produced between March 30 and May 17 last year, and 8,050 Tucson SUVs rolled out between February 27 and April 17.
Renault Samsung will begin recalling 55,648 SM5 cars produced between August 12, 2009 and October 29, 2010 due to defective airbags.
It will also call back 65,157 SM3 compact sedans produced between April 23, 2009 and August 10, 2010 because of faulty rear reflectors and air bags, the ministry said. | https://www.industryweek.com/companies-amp-executives/hyundai-renault-samsung-recall-cars |
While searching our database we found one possible solution for the Guaranteed (RNUSEDE) jumble. This word was last seen on August 9 2021 in the popular Jumble Crossword puzzle. The solution we have for Guaranteed (RNUSEDE) has a total of 7 letters: ENSURED.
Iambic pentameter refers to the pattern or rhythm of a line of poetry or verse and has to do with the number of syllables in the line and the emphasis placed on those syllables. William Shakespeare’s works are often used as great examples of iambic pentameter.
Understanding Iambic Pentameter
When we speak, our syllables are either stressed (stronger emphasis) or unstressed (weaker emphasis). For example, the word "remark" consists of two syllables. "Re" is the unstressed syllable, with a weaker emphasis, while "mark" is stressed, with a stronger emphasis.
In poetry, a group of two or three syllables is referred to as a foot. A specific type of foot is an iamb. A foot is an iamb if it consists of one unstressed syllable followed by a stressed syllable, so the word remark is an iamb.
Penta means five, so a line of iambic pentameter consists of five iambs – five sets of unstressed and stressed syllables.
Iambic Pentameter in Poetry and Verse
Iambic pentameter is the most common type of meter used in poetry and verse. One writer in particular, William Shakespeare, was famed for using it, although he was not the first; Chaucer used it to good effect before him, as you'll see in these iambic pentameter examples:
Her vestal livery is but sick and green
And none but fools do wear it; cast it off.
- William Shakespeare, Romeo and Juliet
And I do love thee: therefore, go with me;
I'll give thee fairies to attend on thee,
And they shall fetch thee jewels from the deep,
And sing while thou on pressed flowers dost sleep
- William Shakespeare, A Midsummer Night’s Dream
Now is the winter of our discontent
Made glorious summer by this sun of York;
And all the clouds that lour'd upon our house
In the deep bosom of the ocean buried.
- William Shakespeare, Richard III
Henceforth be earls, the first that ever Scotland
In such an honour named. What's more to do,
Which would be planted newly with the time,
As calling home our exiled friends abroad
That fled the snares of watchful tyranny;
Producing forth the cruel ministers
Of this dead butcher and his fiend-like queen,
Who, as 'tis thought, by self and violent hands
Took off her life; this, and what needful else
That calls upon us, by the grace of Grace,
We will perform in measure, time and place:
So, thanks to all at once and to each one,
Whom we invite to see us crown'd at Scone.
- William Shakespeare, Macbeth
O that this too too solid flesh would melt,
Thaw, and resolve itself into a dew!
Or that the Everlasting had not fix’d
His canon ’gainst self-slaughter! O God! O God!
- William Shakespeare, Hamlet
If music be the food of love, play on;
Give me excess of it, that, surfeiting,
The appetite may sicken, and so die.
That strain again! it had a dying fall:
O, it came o'er my ear like the sweet sound,
That breathes upon a bank of violets,
Stealing and giving odour! Enough; no more:
'Tis not so sweet now as it was before.
- William Shakespeare, Twelfth Night
When I do count the clock that tells the time,
And see the brave day sunk in hideous night;
When I behold the violet past prime,
And sable curls all silver’d o’er with white;
- William Shakespeare, "Sonnet 12"
Whan that aprill with his shoures soote
The droghte of march hath perced to the roote,
And bathed every veyne in swich licour
Of which vertu engendred is the flour;
Whan zephirus eek with his sweete breeth
Inspired hath in every holt and heeth
Tendre croppes, and the yonge sonne
Hath in the ram his halve cours yronne,
And smale foweles maken melodye,
That slepen al the nyght with open ye
(so priketh hem nature in hir corages);
Thanne longen folk to goon on pilgrimages,
And palmeres for to seken straunge strondes,
To ferne halwes, kowthe in sondry londes;
And specially from every shires ende
Of Engelond, to Caunterbury they wende,
The hooly blisful martir for to seke
That hem hath holpen, whan that they were seeke.
- Geoffrey Chaucer, Canterbury Tales
Batter my heart three-personed God, for you
as yet but knock, breathe, shine and seek to mend.
That I may rise and stand o'erthrow me and bend
Your force to break, blow, burn and make me new.
- John Donne, "Holy Sonnet XIV"
Of Man's first disobedience, and the fruit
Of that forbidden tree whose mortal taste
Brought death into the World, and all our woe,
With loss of Eden, till one greater Man
Restore us, and regain the blissful seat,
Sing, Heavenly Muse, that, on the secret top
Of Oreb, or of Sinai, didst inspire
That shepherd who first taught the chosen seed
In the beginning how the heavens and earth
Rose out of Chaos: or, if Sion hill
Delight thee more, and Siloa's brook that flowed
Fast by the oracle of God, I thence
Invoke thy aid to my adventurous song,
That with no middle flight intends to soar.
- John Milton, "Paradise Lost"
Set the Pace
Iambic pentameter has been used in poetry and verse since medieval times, as we see in the example from Chaucer, and gained popularity in Elizabethan times through Shakespeare and his peers. It is an ideal rhythm for the spoken word, as it sets a good pace and a natural, clear intonation. | https://examples.yourdictionary.com/examples-of-iambic-pentameter.html
For many artists and bands, visual imagery plays a large role in heavy metal. In addition to its sound and lyrics, a heavy metal band’s image is expressed in album cover art, logos, stage sets, clothing, design of instruments, and music videos.
20. Thin Lizzy: Thin Lizzy are an Irish rock band formed in Dublin in 1969. Their leader Phil Lynott was composer or co-composer of almost all of the band's songs, and the first black Irishman to achieve commercial success in the field of rock music. Thin Lizzy's most successful songs include "Whiskey in the Jar" and "Jailbreak".
They were stuck with CBS, so they had to rein in the dark side a touch. I think the difference between Sad Wings of Destiny and Sin After Sin shows the compromise of more radio-friendly songs CBS demanded. Good thing Rob Halford has one of the great voices in rock music, not just metal. Everyone from King Diamond to Dream Theater owes a great debt to the mighty Priest. Power metal is the Halford imitators club.
Slash feat. Myles Kennedy & The Conspirators will return later this year with a new album: http://teamrock.com/news/2018-03-23/slash-feat-myles-kennedy-the-conspirators-return-with-new-album
Critics disagree over who can be thought of as the first heavy metal band. Most credit either Led Zeppelin or Black Sabbath, with American commentators tending to favour Led Zeppelin and British commentators tending to favour Black Sabbath, though many give equal credit to both. A few commentators—mainly American—argue for other groups including Iron Butterfly, Steppenwolf or Blue Cheer. Deep Purple, the third band in what is sometimes considered the “unholy trinity” of heavy metal (Black Sabbath, Led Zeppelin, and Deep Purple), despite being slightly older than Black Sabbath and Led Zeppelin, fluctuated between many rock styles until late 1969 when they took a heavy metal direction.
In 1968, the sound that would become known as heavy metal began to coalesce. That January, the San Francisco band Blue Cheer released a cover of Eddie Cochran’s classic “Summertime Blues”, from their debut album Vincebus Eruptum, that many consider the first true heavy metal recording. The same month, Steppenwolf released its self-titled debut album, including “Born to Be Wild”, which refers to “heavy metal thunder” in describing a motorcycle. In July, the Jeff Beck Group, whose leader had preceded Page as The Yardbirds’ guitarist, released its debut record: Truth featured some of the “most molten, barbed, downright funny noises of all time,” breaking ground for generations of metal ax-slingers. In September, Page’s new band, Led Zeppelin, made its live debut in Denmark (billed as The New Yardbirds). The Beatles’ White Album, released the following month, included “Helter Skelter”, then one of the heaviest-sounding songs ever released by a major band. The Pretty Things’ rock opera S.F. Sorrow, released in December, featured “proto heavy metal” songs such as “Old Man Going” and “I See You”. Iron Butterfly’s 1968 song “In-A-Gadda-Da-Vida” is sometimes described as an example of the transition between acid rock and heavy metal or the turning point in which acid rock became “heavy metal”, and both Iron Butterfly’s 1968 album In-A-Gadda-Da-Vida and Blue Cheer’s 1968 album Vincebus Eruptum have been described as laying the foundation of heavy metal and greatly influential in the transformation of acid rock into heavy metal.
Do I, Ryan, need to explain why this song needs to be ranked higher? One of the best songs to ever capture the feel of the Vietnam-war-era angst without ever saying that's what it was. It applies to every tragedy. One of the best songs ever released. Let's vote it up there.
10/25/14: Musicians Carlos and Salvador Santana perform the national anthem prior to Game 4 of the 2014 World Series.
Just the best! The pioneers of hard rock. With Robert Plant's vocals and Jimmy Page's heavenly guitar, with Bonham on drums and JP Jones on bass, none of even today's greatest have a chance against them. Led Zep forever!
Should easily be in the top twenty, probably the top 10 of all time. Tom Petty's career, the people he played with, and those who loved him and mourned his death are a who's who of the greatest in rock of all time. Plus the Heartbreakers are all fantastic musicians and a fantastic band. RIP legend.
Here is a 2013 list of the 71 Best Rock Dance Songs for weddings or any other party where you want to dance and you want to ROCK! These are fast-dance, booty-shaking songs from oldies to 80s rock to new rock dance songs and classic rock dance songs. Good luck! – All Around Raleigh DJ Company
It has been argued that heavy metal has outlasted many other rock genres largely due to the emergence of an intense, exclusionary, strongly masculine subculture. While the metal fan base is largely young, white, male, and blue-collar, the group is “tolerant of those outside its core demographic base who follow its codes of dress, appearance, and behavior”. Identification with the subculture is strengthened not only by the group experience of concert-going and shared elements of fashion, but also by contributing to metal magazines and, more recently, websites. Attending live concerts in particular has been called the “holiest of heavy metal communions.”
28. Foo Fighters: Foo Fighters are an American rock band, formed in Seattle in 1994. The band was founded by Nirvana drummer Dave Grohl after the death of Kurt Cobain and the dissolution of his previous band. The band consists of Dave Grohl (lead vocalist, rhythm guitar, main songwriter), Pat Smear (rhythm guitar), and Nate Mendel (bass).
Classic rock is a radio format which developed from the album-oriented rock (AOR) format in the early 1980s. In the United States, the classic rock format features music ranging generally from the mid-1960s to the late 1980s, primarily focusing on commercially successful hard rock popularized in the 1970s. The radio format became increasingly popular with the baby boomer demographic by the end of the 1990s. | http://60sbands.com/english/free-classic-rock-apps-classic-rock-band-concerts-2018/
---
abstract: 'Natural Language Inference is an important task for Natural Language Understanding. It is concerned with classifying the logical relation between two sentences. In this paper, we propose several text-generative neural networks for generating text hypotheses, which allow the construction of new Natural Language Inference datasets. To evaluate the models, we propose a new metric – the accuracy of the classifier trained on the generated dataset. The accuracy obtained by our best generative model is only 2.7% lower than the accuracy of the classifier trained on the original, human-crafted dataset. Furthermore, the best generated dataset combined with the original dataset achieves the highest accuracy. The best model learns a mapping embedding for each training example. By comparing various metrics we show that datasets that obtain higher ROUGE or METEOR scores do not necessarily yield higher classification accuracies. We also analyze the characteristics of a good dataset, including the distinguishability of the generated datasets from the original one.'
address: 'Jožef Stefan Institute and Jožef Stefan International Postgraduate School, Jamova 39, 1000 Ljubljana, Slovenia'
author:
- Janez Starc
- Dunja Mladenić
bibliography:
- 'nli\_paper.bib'
title: Constructing a Natural Language Inference Dataset using Generative Neural Networks
---
natural language inference, natural language generation, machine learning, dataset construction, generative neural network, recurrent neural network
Introduction {#sec:nli_intro}
============
The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails, contradicts, or is neutral with respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we are focused on the following natural language generation task based on NLI: given the premise, the goal is to generate a stream of hypotheses that comply with the label (*entailment*, *contradiction* or *neutral*). In addition to reading capabilities, this task also requires language generation capabilities.
The Stanford Natural Language Inference (SNLI) Corpus [@snliemnlp2015] is an NLI dataset that contains over half a million examples. The size of the dataset is sufficient to train powerful neural networks. Several successful classification neural networks have already been proposed [@rocktaschel2016reasoning; @wang2015learning; @cheng2016long; @parikh2016decomposable]. In this paper, we utilize SNLI to train generative neural networks. Each example in the dataset consists of two human-written sentences, a premise and a hypothesis, and a corresponding label that describes the relationship between them. A few examples are presented in Table \[tab:entail\_orig\_ex\].
The proposed generative networks are trained to generate a hypothesis given a premise and a label, which allows us to construct new, unseen examples. Some generative models are built to generate a single optimal response given the input. Such models have been applied to machine translation [@sutskever2014sequence], image caption generation [@xu2015show], or dialogue systems [@serban2016building]. Another type of generative model is the autoencoder, which generates a stream of random samples from the original distribution. For instance, autoencoders have been used to generate text [@bowman2015generating; @li2015hierarchical] and images [@goodfellow2014generative]. In our setting we combine both approaches to generate a stream of random responses (hypotheses) that comply with the input (premise, label).
| Premise | Hypothesis | Label |
|---------|------------|-------|
| A person throwing a yellow ball in the air. | The ball sails through the air | entailment |
| A person throwing a yellow ball in the air. | The person throws a square | contradiction |
| A person throwing a yellow ball in the air. | The ball is heavy | neutral |

*Table \[tab:entail\_orig\_ex\]: Examples from the SNLI dataset.*
But what is a good stream of hypotheses? We argue that a good stream contains *diverse*, *comprehensible*, *accurate* and *non-trivial* hypotheses. A hypothesis is comprehensible if it is grammatical and semantically makes sense. It is accurate if it clearly expresses the relationship (signified by the label) with the premise. Finally, it is non-trivial if it is not trivial to determine the relationship (label) between the hypothesis and premise. For instance, given a premise ”A man drives a red car” and label *entailment*, the hypothesis ”A man drives a car” is more trivial than ”A person is sitting in a red vehicle”.
The next question is how to automatically measure the quality of generated hypotheses. One way is to use metrics that are standard in text generation tasks, for instance ROUGE [@lin2004rouge], BLEU [@papineni2002bleu], and METEOR [@meteor2014]. These metrics estimate the similarity between the generated text and the original reference text. In our task they can be used by comparing the generated and reference hypotheses that share the same premise and label. The main issue with these metrics is that they penalize diversity, since generated hypotheses that are dissimilar to the reference hypothesis receive low scores. An alternative is to use an NLI classifier to test whether the input label is correct for the generated hypothesis with respect to the premise. A perfect classifier would not penalize diverse hypotheses and would reward accurate and (arguably to some degree) comprehensible hypotheses. However, it would not reward non-trivial hypotheses.
Non-trivial examples are essential in a dataset for training a capable machine learning model. Furthermore, we make the following hypothesis.
> A good dataset for training a NLI classifier consists of a variety of *accurate, non-trivial and comprehensible* examples.
Based on this hypothesis, we propose the following approach for evaluation of generative models, which is also presented in Figure \[fig:nli\_approach\]. First, the generative model is trained on the original training dataset. Then, the premise and label from an example in the original dataset are taken as the input to the generative model to generate a new *random* hypothesis. The generated hypothesis is combined with the premise and the label to form a new unseen example. This is done for every example in the original dataset to construct a new dataset. Next, a classifier is trained on the new dataset. Finally, the classifier is evaluated on the original test set. The accuracy of the classifier is the proposed quality metric for the generative model. It can be compared to the accuracy of the classifier trained on the original training set and tested on the original test set.
The generative models learn solely from the original training set to regenerate the dataset. Thus, the model learns the distribution of the original dataset, and the generated dataset is just a random sample from the estimated distribution. To determine how well the generative model learned the distribution, we observe how *closely* the accuracy of the classifier trained on the generated dataset approaches the accuracy of the classifier trained on the original dataset.
Our flagship generative network <span style="font-variant:small-caps;">EmbedDecoder</span> works in a similar fashion to encoder-decoder networks, where the encoder is used to transform the input into a low-dimensional latent representation, from which the decoder reconstructs the input. The difference is that <span style="font-variant:small-caps;">EmbedDecoder</span> consists only of the decoder, and the latent representation is learned as an embedding for each training example separately. In our models, the latent representation represents the mapping between the premise and the label on one side and the hypothesis on the other side.
Our main contributions are i) a novel *generative neural network*, which consists of a decoder that learns a mapping embedding for each training example separately, ii) a procedure for generating NLI datasets automatically, and iii) a novel *evaluation metric* for NLI generative models – the accuracy of the classifier trained on the generated dataset.
In Section \[sec:nli\_related\] we present the related work. In Section \[sec:nli\_models\] the considered neural networks are presented. Besides the main generative networks, we also present classification and discriminative networks, which are used for evaluation. The results are presented in Section \[sec:nli\_results\], where the generative models are evaluated and compared. From the experiments we can see that the best dataset was generated by the attention-based model <span style="font-variant:small-caps;">AttEmbedDecoder</span>. The classifier trained on this dataset achieved an accuracy of $78.5\%$, which is $2.7\%$ less than the accuracy achieved on the original dataset. We also investigate the influence of latent dimensionality on the performance, compare different evaluation metrics, and provide deeper insights into the generated datasets. The conclusion is presented in Section \[sec:nli\_conclusion\].
*Figure \[fig:nli\_approach\]: The evaluation approach. The generative model learns from the original dataset and generates a new dataset. The original classifier learns from and is tested on the original dataset; the new classifier learns from the generated dataset and is tested on the original dataset, and the accuracies of the two classifiers are compared.*
Related Work {#sec:nli_related}
============
NLI has been the focal point of Recognizing Textual Entailment (RTE) Challenges, where the goal is to determine if the premise entails the hypothesis or not[^1]. The proposed approaches for RTE include a bag-of-words matching approach [@glickman2005web], a predicate-argument-structure matching approach [@maccartney2006learning] and a logical inference approach [@bos2006logical; @tatu2006logic]. Another rule-based inference approach was proposed by @barhaim2015. This approach allows generation of new hypotheses by transforming parse trees of the premise while maintaining entailment. @hickl2006recognizing proposes an approach for constructing training datasets by extracting sentences from news articles that tend to be in an entailment relationship.
After the SNLI dataset was released, several neural network approaches for NLI classification emerged [@rocktaschel2016reasoning; @wang2015learning; @cheng2016long; @parikh2016decomposable]. The state-of-the-art model [@parikh2016decomposable] achieves $86.6\%$ accuracy on the SNLI dataset. A similar generation approach to ours was proposed by @kolesnyk2016generating, whose goal is to generate entailment inference chains, using only examples with the entailment label.
Natural Language Generation (NLG) is the task of generating natural language from a structured form such as a knowledge base or logic form [@wen2015semantically; @mairesse2010phrase; @belz2008automatic]. The input in our task is unstructured text (the premise) and a label. On the other side of this spectrum, there are tasks that deal solely with unstructured text, like machine translation [@koehn2009statistical; @bahdanau2014neural; @luong2014addressing], summarization [@clarke2008global; @rush2015neural] and conversational dialogue systems [@serban2016building; @banchs2012iris]. Another recently popular task is generating captions from images [@vinyals2015show; @socher2014grounded].
With the advancement of deep learning, many neural network approaches have been introduced for generating sequences. The Recurrent Neural Network Language Model (RNNLM) [@mikolov2010recurrent] is one of the simplest neural architectures for generating text. The approach was extended by [@sutskever2014sequence], which uses an encoder-decoder architecture to generate a sequence from the input sequence. The Hierarchical Recurrent Encoder-Decoder (HRED) architecture [@serban2016building] generates sequences from several input sequences. These models offer very little variety of output sequences; it is obtained only by modeling the output distribution of the language model. To introduce more variety, models based on the variational autoencoder (VAE) [@kingma2013auto] have been proposed. These models use stochastic random variables as a source of variety. In [@bowman2015generating] a latent variable is used to initialize the RNN that generates sentences, while the variational recurrent neural network (VRNN) [@chung2015recurrent] models the dependencies between latent variables across subsequent steps of the RNN. The Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) [@serban2016hierarchical] extends the HRED by incorporating latent variables, which are learned similarly to the VAE. The latent variables are, like in some of our models, used to represent the mappings between sequences. Conditional variational autoencoders (CVAEs) [@yan2015attribute2image] were used to generate images from continuous visual attributes. These attributes are conditional information that is fed to the models, like the discrete label is in our models.

As recognized by [@reiter2009investigation], the evaluation metrics of text-generating models fall into three categories: manual evaluation, automatic evaluation metrics, and task-based evaluation. In evaluation based on human judgment, each generated textual example is inspected manually. The automatic evaluation metrics, like ROUGE, BLEU and METEOR, compare human texts and generated texts; [@elliott2014comparing] shows METEOR has the strongest correlation with human judgments in image description evaluation. The last category is task-based evaluation, where the impact of the generated texts on a particular task is measured. This type of evaluation usually involves costly and lengthy human involvement, like measuring the effectiveness of smoking-cessation letters [@reiter2003lessons]. On the other hand, the task in our evaluation, NLI classification, is automatic. In [@hodosh2013framing] ranking was used as an automatic task-based evaluation for associating images with captions.
Models {#sec:nli_models}
======
In this section, we present the neural networks used in the experiments. We start with variants of Recurrent Neural Networks, which are essential layers in all our models. Then, we present classification networks, which are needed in the evaluation of the generative neural networks presented in the following section. Next, we present how the generative networks are used to generate hypotheses. Finally, we present discriminative networks, which are used for evaluation and analysis of the hypotheses.
The premise $W^p = w^p_1 w^p_2 \ldots w^p_M$ and hypothesis $W^h = w^h_1 w^h_2 \ldots w^h_N$ are represented with word embeddings $X^p = x^p_1 x^p_2 \ldots x^p_M$ and $X^h = x^h_1 x^h_2 \ldots x^h_N$ respectively. Each $x$ is a $e$-dimensional vector that represents the corresponding word, $M$ is the length of premise, and $N$ is the length of hypothesis. The labels (entailment, contradiction, neutral) are represented by a $3$-dimensional vector $Y^l$ if the label is the output of the model, or $L$ if the label is the input to the model.
Recurrent Neural Networks
-------------------------
Recurrent Neural Networks (RNNs) are neural networks suitable for processing sequences. They are the basic building block in all our networks. We use two variants of RNNs – the Long Short-Term Memory (LSTM) network [@hochreiter1997long] and an attention-based extension of the LSTM, the mLSTM [@wang2015learning]. The LSTM tends to learn long-term dependencies better than vanilla RNNs. The input to the LSTM is a sequence of vectors $X=x_1x_2 \ldots x_n$, and the output is a sequence of vectors $H=h_1h_2 \ldots h_n$. At each time point $t$, input gate $i_t$, forget gate $f_t$, output gate $o_t$, cell state $C_t$ and one output vector $h_t$ are calculated.
$$\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
C_t &= f_t \odot C_{t-1} + i_t \odot tanh(W_c x_t + U_c h_{t-1} + b_c) \\
h_t &= o_t \odot tanh(C_t),\end{aligned}$$
where $\sigma$ is a sigmoid function, $\odot$ is the element-wise multiplication operator, $W \in \mathbb{R}^{d \times e}$ and $U \in \mathbb{R}^{d \times d}$ are parameter matrices, $b \in \mathbb{R}^{d}$ are parameter vectors, $e$ is the input vector dimension, and $d$ is the output vector dimension. The vectors $C_0$ and $h_0$ are set to zero in the standard setting; however, in some cases in our models, they are set to a value that is the result of previous layers.
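To make the gating equations concrete, here is a minimal NumPy sketch of a single LSTM step. The parameter layout (the `W`, `U`, `b` dictionaries) is our own illustrative choice, not the layout of the paper's Keras/Theano implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, C_prev, params):
    """One LSTM time step, following the gating equations above.

    params["W"][g] is d x e, params["U"][g] is d x d, params["b"][g] is (d,)
    for each gate g in {"i", "f", "o", "c"}.
    """
    W, U, b = params["W"], params["U"], params["b"]
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate
    # old cell state gated by f_t, candidate update gated by i_t
    C_t = f_t * C_prev + i_t * np.tanh(W["c"] @ x_t + U["c"] @ h_prev + b["c"])
    h_t = o_t * np.tanh(C_t)
    return h_t, C_t
```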
The mLSTM is an attention-based model with two input sequences – the premise and the hypothesis in the case of NLI. Each word of the premise is matched against each word of the hypothesis to find the soft alignment between the sentences. The mLSTM is based on the LSTM in such a way that it remembers the important matches and forgets the less important ones. The input to the LSTM inside the mLSTM at each time step is $x_t^\prime = [a_t, x_t^h]$, where $a_t$ is an attention vector representing the weighted sum of the premise sequence, the weights express the degree to which each token of the premise is aligned with the $t$-th token of the hypothesis $x_t^h$, and $[*,*]$ is the concatenation operator. More details about the mLSTM are presented in [@wang2015learning].
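The construction of the mLSTM input $x_t^\prime$ can be sketched as follows, continuing the NumPy sketch above. The bilinear scoring via `W_att` is a simplification we introduce purely for illustration; the actual match computation of the mLSTM [@wang2015learning] is more involved:

```python
def attention_input(H_p, x_h_t, h_prev, W_att):
    """Build x'_t = [a_t, x_t^h] from the premise states H_p (M x d).

    h_prev is the previous mLSTM hidden state; W_att (d x d) is a scoring
    matrix introduced only for this sketch.
    """
    scores = H_p @ (W_att @ h_prev)      # one alignment score per premise token
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax over premise positions
    a_t = alpha @ H_p                    # attention vector: weighted sum of H_p
    return np.concatenate([a_t, x_h_t])  # concatenation [a_t, x_t^h]
```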
Classification model {#sec:classifier}
--------------------
The classification model predicts the label of the example given the premise and the hypothesis. We use the mLSTM-based model proposed by [@wang2015learning].
The architecture of the model is presented in Figure \[fig:class\_model\]. The embeddings of the premise $X^p$ and hypothesis $X^h$ are the input to the first two LSTMs to obtain the hidden states of the premise $H^p$ and hypothesis $H^h$.
$$H^p = \mathit{LSTM}(X^p) \quad \quad
H^h = \mathit{LSTM}(X^h)$$
All the hidden states in our models are $d$-dimensional unless otherwise noted. The hidden states $H^p$ and $H^h$ are the input to the mLSTM layer. The output of the mLSTM is a sequence of hidden states $H^m$, although only the last state $h^m_N$ is used further. A fully connected layer transforms it into a 3-dimensional vector, on top of which the softmax function is applied to obtain the probabilities $Y^l$ of the labels.
$$Y^l = Softmax(\mathit{Dense}_3(h^m_N)),$$
where $\mathit{Dense}_x$ represents the fully connected layer, whose output size is $x$.
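Putting the layers together, the forward pass of the classifier can be sketched as below, reusing `lstm_step` and `attention_input` from the sketches above (again an illustrative reconstruction, not the paper's actual code):

```python
def run_lstm(X, params, h0=None, C0=None):
    """Run an LSTM over a sequence of input vectors, returning all hidden states."""
    d = params["b"]["i"].shape[0]
    h = np.zeros(d) if h0 is None else h0
    C = np.zeros(d) if C0 is None else C0
    H = []
    for x_t in X:
        h, C = lstm_step(x_t, h, C, params)
        H.append(h)
    return np.stack(H)

def run_mlstm(H_p, H_h, params, C0=None):
    """mLSTM: an LSTM whose input at step t is [a_t, h_t^h] (size 2d),
    so its W matrices must be d x 2d."""
    d = params["b"]["i"].shape[0]
    h = np.zeros(d)
    C = np.zeros(d) if C0 is None else C0
    H_m = []
    for h_h_t in H_h:
        x_prime = attention_input(H_p, h_h_t, h, params["W_att"])
        h, C = lstm_step(x_prime, h, C, params)
        H_m.append(h)
    return np.stack(H_m)

def classify(X_p, X_h, params):
    """Label probabilities Y^l from premise/hypothesis embeddings."""
    H_p = run_lstm(X_p, params["lstm_p"])
    H_h = run_lstm(X_h, params["lstm_h"])
    H_m = run_mlstm(H_p, H_h, params["mlstm"])
    logits = params["W_out"] @ H_m[-1] + params["b_out"]  # Dense_3 on h^m_N
    e = np.exp(logits - logits.max())
    return e / e.sum()                                    # softmax
```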
*Figure \[fig:class\_model\]: Architecture of the classification model. Two LSTMs process the premise embeddings $X^p = x^p_1 x^p_2 \ldots x^p_M$ and the hypothesis embeddings $X^h = x^h_1 x^h_2 \ldots x^h_N$ into hidden states $H^p$ and $H^h$, which are fed into the mLSTM; the last mLSTM state $h^m_N$ is transformed into the label probabilities $Y^l$.*
Generative models {#sec:gen_models}
-----------------
The goal of the proposed generative models, is to generate a diverse stream of hypotheses given the premise and the label. In this section, we present four variants of generative models, two variants of <span style="font-variant:small-caps;">EmbedDecoder</span> model presented in Figure \[fig:embed\_decoder\], and two variants of <span style="font-variant:small-caps;">EncoderDecoder</span> model presented in Figure \[fig:encoder\_decoder\].
*Figure \[fig:embed\_decoder\] (left) and Figure \[fig:encoder\_decoder\] (right): Architectures of the generative models. In <span style="font-variant:small-caps;">EmbedDecoder</span>, the decoder generates the hypothesis from the premise, the label, the shifted hypothesis (0-Hypo) and the latent vector $Z$. In <span style="font-variant:small-caps;">EncoderDecoder</span>, the latent vector $Z$ is instead produced by an encoder that reads the premise, the hypothesis and the label.*
All models learn a latent representation $Z$ that represents the mapping between the premise and the label on one side, and the hypothesis on the other side. The <span style="font-variant:small-caps;">EmbedDecoder</span> models learn the latent representation by learning an embedding of the mapping for each training example separately. The embedding for the $i$-th training example $Z^{(i)}$ is a $z$-dimensional trainable parameter vector. Consequently, $Z \in \mathbb{R}^{n \times z}$ is a parameter matrix of all embeddings, where $n$ is the number of training examples. In the <span style="font-variant:small-caps;">EncoderDecoder</span> models, on the other hand, the latent representation is the output of the encoder.
The <span style="font-variant:small-caps;">EmbedDecoder</span> models are trained to predict the next word of the hypothesis given the previous words of the hypothesis, the premise, the label, and the latent representation of the example. $$\theta^\star, Z^\star = \operatorname*{arg\,max}_{\theta, Z} \sum_{i=1}^n \sum_{k=1}^{d(W^{h^{(i)}})} \log p(w_k^{h^{(i)}} | w_{k-1}^{h^{(i)}}\ldots w_1^{h^{(i)}}, W^{p^{(i)}}, L^{(i)}, Z^{(i)}, \theta )$$ where $\theta$ represents the parameters other than $Z$, and $d(W^{h^{(i)}})$ is the length of the hypothesis $W^{h^{(i)}}$.
The <span style="font-variant:small-caps;">AttEmbedDecoder</span>, presented in Figure \[fig:gen\_model\], is the attention-based variant of <span style="font-variant:small-caps;">EmbedDecoder</span>. The same mLSTM layer is used as in the classification model. However, the initial cell state $C_0$ of the mLSTM is constructed from the latent vector and the label input. $$C_0 = \mathit{Dense}_d([Z^{(i)}, L])
\label{eq:reduction}$$ For the sake of simplifying the notation, we dropped the superscript $(i)$ from the equations, except in $Z^{(i)}$, where we explicitly want to state that the embedding vector is used.
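The distinctive part of <span style="font-variant:small-caps;">EmbedDecoder</span> is that $Z$ is an ordinary trainable parameter matrix indexed by the training-example id, so each optimization step updates only the rows belonging to the examples in the current batch. A minimal sketch of this bookkeeping (sizes, initialization and helper names are illustrative):

```python
n, z, num_labels = 500_000, 8, 3      # illustrative sizes
Z = 0.01 * np.random.randn(n, z)      # one mapping embedding per training example

def decoder_initial_state(i, label_onehot, W_dense, b_dense):
    """C_0 = Dense_d([Z^(i), L]): the initial mLSTM cell state for example i."""
    inp = np.concatenate([Z[i], label_onehot])   # [Z^(i), L], length z + 3
    return W_dense @ inp + b_dense
# During training, the gradient of the loss touches only row Z[i]
# of the embedding matrix for each example i in the batch.
```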
The premise and the hypothesis are first processed by LSTMs and then fed into the mLSTM, like in the classification model; however, here the hypothesis is shifted. The first word of the hypothesis input is the empty token $<$null$>$, symbolizing the empty input sequence when predicting the first word. The output of the mLSTM is a hidden state $H^m$, where each $h^m$ represents an output word. To obtain the probabilities for all the words in the vocabulary $y_k^h$ for the position $k$ in the output sequence, $h_k^m$ is first transformed into a vocabulary-sized vector, then the softmax function is applied. $$y_k^h = softmax(\mathit{Dense}_V(h_k^m)),$$ where $V$ is the size of the vocabulary. But, due to the large size of the vocabulary, a two-level hierarchical softmax [@goodman2001classes] was used instead of a regular softmax to reduce the number of parameters updated during each training step. $$y_k^h = hsoftmax(h_k^m)$$ In the training step, the last output word $y^h_{N+1}$ is set to $<$null$>$, while in the generating step, it is ignored.
In the <span style="font-variant:small-caps;">EmbedDecoder</span> model without attention, <span style="font-variant:small-caps;">BaseEmbedDecoder</span>, the mLSTM is replaced by a regular LSTM. The input to this LSTM is the shifted hypothesis, and the premise is provided through the initial cell state $C_0$. Specifically, the last hidden state of the premise is merged with the class input and the latent representation, then fed to the LSTM.
$$C_0 = \mathit{Dense}_{d^\prime}([Z^{(i)},L, h_M^p])
\label{eq:reduction2}$$
In order not to lose information, $d^\prime$ was picked to be equal to the sum of the sizes of $Z^{(i)}$, $L$ and $h_M^p$. Thus, $d^\prime = z + 3 + d$. Since the size of $C_0$ is $d^\prime$, the output vectors of the LSTM are also of size $d^\prime$.

We also present two variants of <span style="font-variant:small-caps;">EncoderDecoder</span> models: a regular one, <span style="font-variant:small-caps;">BaseEncoderDecoder</span>, and a regularized one, <span style="font-variant:small-caps;">VarEncoderDecoder</span>, which is based on the Variational Bayesian approach. As presented in Figure \[fig:encoder\_decoder\], all the information (premise, hypothesis, label) is available to the encoder, whose output is the latent representation $Z$. On the other hand, the decoder is provided with the same premise and label, but the hypothesis is shifted. This forces the encoder to learn to encode only the missing information – the mapping between the premise-label pair and the hypothesis. The encoder has a similar structure as the classification model in Figure \[fig:class\_model\], except that the label is connected to the initial cell state of the mLSTM
$$C_0 = \mathit{Dense}_d(L),$$
and the output of mLSTM $h_N^m$ is transformed into latent representation $Z$
$$Z = \mathit{Dense}_z(h_N^m).$$
The decoder is the same as in <span style="font-variant:small-caps;">EmbedDecoder</span>.
The <span style="font-variant:small-caps;">VarEncoderDecoder</span> model is based on the Variational Autoencoder from [@kingma2013auto]. Instead of using single points for the latent representation as in all previous models, the latent representation in <span style="font-variant:small-caps;">VarEncoderDecoder</span> is presented as a continuous variable $Z \sim \mathcal{N}(Z_\mu, Z_\sigma)$. Thus, the mappings are presented as soft elliptical regions in the latent space, instead of single points, which forces the model to fill up the latent space [@bowman2015generating]. Both $Z_\mu$ and $Z_\sigma$ are calculated from the output of the encoder using two different fully connected layers.
$$\begin{aligned}
Z_\mu = \mathit{Dense}_z(h_N^m), \quad Z_\sigma= \mathit{Dense}_z(h_N^m).\end{aligned}$$
To sample from the distribution the reparametrization trick is applied
$$Z = Z_\mu + Z_\sigma \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbb{I})$$
During training, a single sample of $\epsilon$ is drawn per example to generate $Z$.
As in [@kingma2013auto], the following regularization term is added to the loss function $$\frac{1}{2} ( 1 + log(Z_\sigma^2) - Z_\mu^2 - Z_\sigma^2).$$
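A minimal sketch of the sampling step and the regularization term, mirroring the equations above. Predicting $Z_\sigma$ directly follows the notation of the paper; many VAE implementations predict $\log Z_\sigma^2$ instead for numerical stability:

```python
def sample_latent(h_m_N, W_mu, b_mu, W_sigma, b_sigma):
    """Reparametrization trick: Z = Z_mu + Z_sigma * eps, eps ~ N(0, I)."""
    Z_mu = W_mu @ h_m_N + b_mu            # Dense_z for the mean
    Z_sigma = W_sigma @ h_m_N + b_sigma   # Dense_z for the standard deviation
    eps = np.random.randn(Z_mu.shape[0])
    Z = Z_mu + Z_sigma * eps
    # regularization term added to the loss, as in Kingma & Welling (2013)
    reg = 0.5 * np.sum(1 + np.log(Z_sigma ** 2) - Z_mu ** 2 - Z_sigma ** 2)
    return Z, reg
```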
Generating hypotheses
---------------------
In the generation phase, only the decoder of a trained generative model is used. It generates a hypothesis given the premise, the label, and a randomly selected latent vector $Z^{(*)}$. A single word is generated in each step, and it becomes the hypothesis input in the next step.
$$x^h_k = \mathit{embedding}(\operatorname*{arg\,max}{y_k^h})
\label{eq:max_y}$$
We also used beam search to optimize hypothesis generation. Similarly to [@sutskever2014sequence], a small number of hypotheses are generated given a single input, and then the best one is selected. In $k$-beam search, in each time step the $k$ best partial hypotheses are expanded by all the words in the vocabulary, producing $kV$ partial hypotheses, out of which the $k$ best are selected for the next step according to the joint probability of each partial hypothesis. Thus, when $k$ is $1$, the procedure is the same as the one presented in Eq \[eq:max\_y\]. The generation ends when the $<$null$>$ symbol is encountered or the maximum hypothesis length is reached[^2]. The random latent vector $Z^{(*)}$ is selected randomly from a normal distribution $\mathcal{N}(0, \sigma)$, where $\sigma$ is the standard deviation of $Z$.
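The beam search procedure can be sketched as follows; `decoder_log_probs`, which stands in for one decoder step conditioned on the premise, label and latent vector, is a hypothetical helper:

```python
def beam_search(decoder_log_probs, null_id, k, max_len):
    """k-beam search over the decoder's word distributions.

    decoder_log_probs(prefix) -> array of log-probabilities over the vocabulary
    for the next word, given the partial hypothesis `prefix` (hypothetical API).
    """
    beams = [([], 0.0)]                        # (partial hypothesis, joint log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, logp in beams:
            if prefix and prefix[-1] == null_id:
                candidates.append((prefix, logp))     # finished hypothesis
                continue
            log_probs = decoder_log_probs(prefix)
            for word, lp in enumerate(log_probs):
                candidates.append((prefix + [word], logp + lp))
        # keep the k best partial hypotheses by joint probability
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return max(beams, key=lambda c: c[1])[0]
```

With $k=1$ this reduces to the greedy decoding of Eq \[eq:max\_y\].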
*Figure \[fig:gen\_model\]: Architecture of <span style="font-variant:small-caps;">AttEmbedDecoder</span>. The premise $X^p = x^p_1 x^p_2 \ldots x^p_M$ and the shifted hypothesis $X^h = \text{\footnotesize{$<$null$>$}} x^h_1 x^h_2 \ldots x^h_N$ are processed by LSTMs into $H^p$ and $H^h$ and fed into the mLSTM, whose initial cell state is constructed from the example embedding $Z^{(i)}$ and the label input $X^l$. A hierarchical softmax over the mLSTM outputs $H^m = h^m_1 h^m_2 \ldots h^m_{N+1}$ produces the word probabilities $Y^h = y^h_1 y^h_2 \ldots y^h_{N+1}$.*
Discriminative model
--------------------
The discriminative model is used to measure the distinguishability between the original human-written sentences and the generated ones. A higher error rate of the model means that the generative distribution is similar to the original distribution, which is one of the goals of the generative model. The model is based on Generative Adversarial Nets [@goodfellow2014generative], where in a single network the generative part tries to trick the discriminative part by generating images that are similar to the original images, and the discriminative part tries to distinguish between the original and generated images. Due to the discreteness of words (the output of our generative model) it is difficult to connect both the discriminative and generative part in a single differentiable network, thus we construct them separately. The generative models have already been defined in Section \[sec:gen\_models\]. Here we define the discriminative model.
The discriminative model $D$ takes a sequence $X$ and processes it with an LSTM and a fully connected layer
$$D(X) = \sigma(\mathit{Dense}_1(\mathit{LSTM}(X)))$$
In the training step, one original sequence $X_\mathit{original}$ and one generated sequence $X_\mathit{generated}$ are processed by the discriminative model. The optimization function maximizes the following objective $$\log(D(X_\mathit{original})) + \log(1 - D(X_\mathit{generated}))$$ In the testing step, the discriminative model predicts correctly if $$D(X_\mathit{original}) > D(X_\mathit{generated})$$
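In code, the training objective and the test criterion amount to the following sketch, where `d_original` and `d_generated` are the outputs of $D$ for the two sequences:

```python
def discriminator_loss(d_original, d_generated):
    """Negative of the objective above, suitable for gradient descent."""
    return -(np.log(d_original) + np.log(1.0 - d_generated))

def discriminator_correct(d_original, d_generated):
    """At test time the model is correct if the original sequence scores higher."""
    return d_original > d_generated
```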
Dataset Generation
==================
To construct a new dataset, first a generative model is trained on the training set of the original dataset. Then, a new dataset is constructed by generating new hypotheses with the generative model. The premises and labels from the examples of the original dataset are taken as the input for the generative model, and the generated hypotheses replace the original hypotheses in the new dataset.
Next, the classifier, presented in Section \[sec:classifier\], is trained on the generated dataset. The accuracy of the new classifier is the main metric for evaluating the quality of the generated dataset.
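The regeneration step can be sketched as a loop over the original training set; `generate_hypothesis` and `sample_random_latent` are hypothetical helpers standing in for the trained decoder and the latent sampling described above:

```python
def regenerate_dataset(original_train, generate_hypothesis, sample_random_latent):
    """Replace each original hypothesis with one sampled from the generative model."""
    new_dataset = []
    for premise, _, label in original_train:
        Z_star = sample_random_latent()                   # Z* ~ N(0, sigma)
        hypothesis = generate_hypothesis(premise, label, Z_star)
        new_dataset.append((premise, hypothesis, label))  # premise and label are kept
    return new_dataset
```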
Experiment details
------------------
All the experiments are performed on the SNLI dataset. There are 549,367 examples in the dataset, divided into training, development and test sets. Both the development and the test set contain around 10,000 examples. Some examples are labeled with '-', which means there was not enough consensus on them. These examples are excluded. Also, to speed up the computation we excluded examples which have a premise longer than 25 words or a hypothesis longer than 15 words. This left $92.5\%$ of the examples. Both premises and hypotheses were padded with $<$null$>$ symbols (empty words), so that all premises consisted of 25 tokens and all hypotheses consisted of 15 tokens.
We use 50-dimensional word vectors[^3] trained with GloVe [@pennington2014glove]. For words without pretrained embeddings, the embeddings are randomly selected from the normal distribution. Word embeddings are not updated during training.
For optimization, the Adam method [@kingma2014adam] was used with the suggested hyperparameters[^4].
Classification models are trained until the loss on the validation set does not improve for three epochs. The model with the best validation loss is retained.
Generative models are trained for 20 epochs, since it turned out that none of the stopping criteria were useful. With each generative model a new dataset is created. The new dataset consists of a training set, which is generated using examples from the original training set, and a development set, which is generated from the original development set. The beam size for beam search was set to 1; the details of this decision are presented in Section \[sec:ld\_eval\].
Some datasets were constructed by filtering the generated datasets according to various thresholds. Thus, the generated datasets were constructed to contain enough examples, so that the filtered datasets had at least as many examples as the original dataset. In the end, all the datasets were trimmed down to the size of the original dataset by selecting samples sequentially from the beginning until the dataset had the right size. The datasets were also filtered so that each of the labels was represented equally. All the models, including the classification and discriminative models, were trained with the hidden dimension $d$ set to 150, unless otherwise noted.
Our implementation is accessible at <http://github.com/jstarc/nli_generation>. It is based on libraries *Keras*[^5] and *Theano*[@theano].
Results {#sec:nli_results}
=======
First, the classification model <span style="font-variant:small-caps;">OrigClass</span> was trained on the original dataset. This model was then used throughout the experiments for filtering the datasets, comparison, etc. Notice that we have assumed <span style="font-variant:small-caps;">OrigClass</span> to be the ground truth for the purpose of our experiments. The accuracy of this model on the original test set was $81.3\%$, which is less than the $86.1\%$ attained by the *mLSTM (d=150)* model in [@wang2015learning]. Both models are very similar, including the experimental settings; however, ours was trained and evaluated on a slightly smaller dataset.
Preliminary evaluation {#sec:ld_eval}
----------------------
Several <span style="font-variant:small-caps;">AttEmbedDecoder</span> models with various latent dimensions $z \in [2, 4, 8, 16, 32, 147\footnote{Latent dimension $z=147$ is the largest dimension so that there is no reduction in dimensionality in Equation~\ref{eq:reduction}, therefore $z + c = d$, where $c$ is the number of labels.}]$ were first trained and then used to generate new datasets. A couple of generated examples are presented in Table \[tab:enatil\_ex\].
**Premise: A person throwing a yellow ball in the air.**

| $z$ | Label | Generated hypothesis |
|-----|-------|----------------------|
| 2 | neutral | Someone is playing basketball. |
| 2 | contradiction | A person is sleeping in a chair. |
| 2 | entailment | A person is throwing a ball |
| 8 | neutral | The person has a yellow ball going to the game. |
| 8 | contradiction | The person is sitting in the bleachers. |
| 8 | entailment | A person is playing with a ball. |
| 147 | neutral | A person is trying to get home from give a ball. |
| 147 | contradiction | A person is reading a bank from london. |
| 147 | entailment | A person is throwing a ball up. |

**Premise: Two women in bathing suits climb rock piles by the ocean.**

| $z$ | Label | Generated hypothesis |
|-----|-------|----------------------|
| 2 | neutral | Two women are climbing rocks in the ocean on a sunny day. |
| 2 | contradiction | The women are playing basketball. |
| 2 | entailment | Two women are climbing. |
| 8 | neutral | Two young women in bathing suits are friends |
| 8 | contradiction | Two women naked. |
| 8 | entailment | The girls looking at the water. |
| 147 | neutral | Two women are looking at the lagoon in front of a calm shore. |
| 147 | contradiction | Two women are gossiping on a sandy beach. |
| 147 | entailment | A group of women are climbing wood in the ocean. |

*Table \[tab:enatil\_ex\]: Hypotheses generated by <span style="font-variant:small-caps;">AttEmbedDecoder</span> models with different latent dimensions $z$ for two premises.*
Figure \[fig:data\_acc\] shows the accuracies of the generated development datasets evaluated by <span style="font-variant:small-caps;">OrigClass</span>. The maximum accuracy of $64.2\%$ was achieved by <span style="font-variant:small-caps;">EmbedDecoder</span> (z=2), and the accuracy decreases with the number of dimensions in the latent variable. The analysis for each label shows that the accuracy of the contradiction and neutral labels is quite stable, while the accuracy of the entailment examples drops significantly with latent dimensionality. One reason for this is that the hypothesis space of the entailment label is smaller than the spaces of the other two labels. Thus, when the dimensionality is higher, more creative examples are generated, and these examples less often comply with the entailment label.
Since none of the generated datasets’ accuracies is as high as the accuracy of the <span style="font-variant:small-caps;">OrigClass</span> on the original test set, we used <span style="font-variant:small-caps;">OrigClass</span> to filter the datasets subject to various prediction thresholds. The examples from the generated dataset were classified by <span style="font-variant:small-caps;">OrigClass</span> and if the probability of the label of the example exceeded the threshold $t \in [0.0, 0.3, 0.6, 0.9]$, then the example was retained.
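Concretely, the filtering step keeps an example only if <span style="font-variant:small-caps;">OrigClass</span> assigns its label a probability above the threshold. A sketch, with a hypothetical `predict_probs` API:

```python
def filter_dataset(examples, predict_probs, t):
    """Keep examples whose label probability under OrigClass exceeds t.

    predict_probs(premise, hypothesis) -> mapping from label to probability
    (hypothetical stand-in for the trained OrigClass model).
    """
    kept = []
    for premise, hypothesis, label in examples:
        probs = predict_probs(premise, hypothesis)
        if probs[label] > t:
            kept.append((premise, hypothesis, label))
    return kept
```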
For each filtered dataset a classifier was trained. Figure \[fig:gen\_class\_acc\] shows the accuracies of these classifiers on the original test set. Filtering out the examples that have incorrect labels (according to the <span style="font-variant:small-caps;">OrigClass</span>) improves the accuracy of the classifier. However, if the threshold is set too high, the accuracy drops, since the dataset contains examples that are too trivial. Figure \[fig:gen\_class\_dev\_acc\], which represents the accuracy of classifiers on their corresponding generated development sets, further shows the trade-off between the accuracy and triviality of the examples. The classifiers trained on datasets with low latent dimension or high filtering threshold have higher accuracies. Notice that the training dataset and test dataset were generated by the same generative model.
![Accuracies of the unfiltered generated datasets classified by <span style="font-variant:small-caps;">OrigClass</span>. A dataset was generated for each generative model with different latent dimension $z \in [2, 4, 8, 16, 32, 147]$. For each dataset the examples were classified with <span style="font-variant:small-caps;">OrigClass</span>. The predicted labels were taken as the ground truth and were compared to the labels of the generated dataset to measure its accuracy. The accuracies were measured for all the labels together and for each label separately.[]{data-label="fig:data_acc"}](data_acc.pdf){width="70.00000%"}
![Accuracies of classifiers trained on the generated datasets, evaluated on the original test set (Figure \[fig:gen\_class\_acc\], left) and on their corresponding generated development sets (Figure \[fig:gen\_class\_dev\_acc\], right). A dataset was generated for each generative model with different latent dimension $z \in [2, 4, 8, 16, 32, 147]$. From these unfiltered datasets, new datasets were created by filtering according to various prediction thresholds (0.0, 0.3, 0.6, 0.9), which also represent the chart lines. A classifier was trained on each of the datasets; each point represents the accuracy of a single classifier.[]{data-label="fig:gen_class_acc_all"}](gen_class_acc.pdf "fig:"){width="49.00000%"} ![](gen_class_dev_acc.pdf "fig:"){width="49.00000%"}
The unfiltered datasets have been evaluated with five other metrics besides classification accuracy. The results are presented in Figure \[fig:other\_eval\]. The figure shows the effect of the latent dimensionality of the models on the different metrics. Its main purpose is not to show absolute values for each of the metrics, but to compare the metrics' curves to the curve of our main metric, the accuracy of the classifier.
The first metric – Premise-Hypothesis Distance – represents the average Jaccard distance between the premise and the generated hypothesis. Datasets generated with low latent dimensions have hypotheses more similar to the premises, which indicates that the generated hypotheses are more trivial and less diverse than hypotheses generated with higher latent dimensions.
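For reference, the token-level Jaccard distance can be computed as follows (a sketch; the paper does not specify its tokenization, so simple whitespace splitting is assumed):

```python
def jaccard_distance(premise, hypothesis):
    """1 - |intersection| / |union| over the two sets of tokens."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    union = p | h
    return 1.0 - len(p & h) / len(union) if union else 0.0
```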
We also evaluated the models with standard language generation metrics ROUGE-L and METEOR. The metrics are negatively correlated with the accuracy of the classifier. We believe this is because the two metrics reward hypotheses that are similar to their reference (original) hypothesis. However, the classifier is better if trained on more diverse hypotheses.
The next metric is the log-likelihood of the hypotheses in the development set. This metric is the negative of the training loss function. The log-likelihood improves with dimensionality, since it is easier to fit the hypotheses in the training step with more dimensions. Consequently, the hypotheses in the generating step are more confident – they have a lower negative log-likelihood.
The last metric – the discriminative error rate – is calculated with the discriminative model. The model is trained on the hypotheses from the unfiltered generated dataset on one side and the original hypotheses on the other side. The error rate is calculated on the (generated and original) development sets. A higher error rate indicates that it is more difficult for the discriminative model to distinguish between the generated and the original hypotheses, which suggests that the original generating distribution and the distribution of the generative model are more similar. The discriminative model detects that low-dimensional generative models generate more trivial examples, as also indicated by the distance between premise and hypotheses. On the other hand, it also detects the hypotheses of high-dimensional models, which more frequently contain grammatical or semantic errors.
There is a positive correlation between the discriminative error rate and the accuracy of the classifier. This observation led us to an experiment where the generated dataset was filtered according to the prediction probability of the discriminative model. Two disjoint filtered datasets were created: one with hypotheses that had a high probability of coming from the original distribution, and one with a low probability. However, the accuracies of classifiers trained on these datasets were very similar to the accuracy of the classifier trained on the unfiltered dataset. A similar test was also done with the log-likelihood metric; the examples with higher log-likelihood performed similarly to the ones with lower log-likelihood. This also led us to set the size of the beam to 1. Moreover, the run time of generating a hypothesis is $\mathcal{O}(b)$, where $b$ is the beam size, so with lower beam sizes many more hypotheses can be generated.
![Comparison of unfiltered generated datasets using various metrics. Each dataset was generated by a model with a different latent dimension, then each metric was applied on each dataset. For metrics other than classifier accuracy and discriminator error rate, the metric was applied on each example and the average was calculated for each dataset.[]{data-label="fig:other_eval"}](other_eval.pdf){width="100.00000%"}
To support the hypothesis from Section \[sec:nli\_intro\], we have shown that a quality dataset requires *accurate* examples: filtering the dataset with the original classifier improves the performance (Figure \[fig:gen\_class\_acc\]). Next, we have shown that *non-trivial* examples are also required: if the filtering threshold is set too high, these examples are excluded, and the accuracy drops. Also, more trivial examples are produced by low-dimensional models, as indicated by lower premise-hypothesis distances and a lower discriminative error rate (Figure \[fig:other\_eval\]). Finally, a quality dataset requires *comprehensible* examples. The high-dimensional models produce less comprehensible hypotheses, which are detected by the discriminative model (see the discriminator error rate in Figure \[fig:other\_eval\]).
Other models
------------
We also compared the <span style="font-variant:small-caps;">AttEmbedDecoder</span> model to all the other models. Table \[tab:nli\_all\_models\] presents the results. For all the models the latent dimension $z$ is set to 8, as it was previously shown to be one of the best dimensions.
For all the models the number of total parameters is relatively high; however, only a portion of the parameters gets updated each time. The <span style="font-variant:small-caps;">AttEmbedDecoder</span> model was the best model according to our main metric – the accuracy of the classifier trained on the generated dataset. The hidden dimension $d$ of <span style="font-variant:small-caps;">BaseEmbedDecoder</span> was selected so that the model was comparable to <span style="font-variant:small-caps;">AttEmbedDecoder</span> in terms of the number of parameters $\theta_*$. The accuracies of classifiers trained on datasets generated by <span style="font-variant:small-caps;">BaseEmbedDecoder</span> are still lower than those of classifiers trained on datasets generated by <span style="font-variant:small-caps;">AttEmbedDecoder</span>, which shows that the attention mechanism helps the models.
| Model | $z$ | $d$ | total params $\theta_{total}$ | updated params $\theta_*$ | acc@0.0 | acc@0.6 | acc-data | nll | disc-er |
|-------|-----|-----|-------------------------------|---------------------------|---------|---------|----------|-----|---------|
| <span style="font-variant:small-caps;">EncoderDecoder</span> | 8 | 150 | 6.4M | 1.1M | 43.4 | 72.4 | **57.5** | 1.00 | 0.01 |
| <span style="font-variant:small-caps;">VaeEncoderDecoder</span> | 8 | 150 | 6.4M | 1.1M | 58.6 | 77.9 | 48.0 | 0.77 | 1.9 |
| <span style="font-variant:small-caps;">BaseEmbedDecoder</span> | 8 | 226 | 13M | 580K | 65.0 | 77.7 | 56.3 | 0.73 | 14.0 |
| <span style="font-variant:small-caps;">AttEmbedDecoder</span> | 8 | 150 | 11M | 581K | **65.7** | **78.5** | 56.8 | **0.69** | **14.8** |

*Table \[tab:nli\_all\_models\]: Comparison of the models ($z = 8$). acc@0.0 and acc@0.6 are the accuracies of classifiers trained on datasets filtered at thresholds 0.0 and 0.6; acc-data is the accuracy of the unfiltered generated dataset according to <span style="font-variant:small-caps;">OrigClass</span>; nll is the negative log-likelihood; disc-er is the discriminative error rate.*
Table \[tab:nli\_all\_datasets\] shows the performance of the generated datasets compared to the original one. The best generated dataset was generated by <span style="font-variant:small-caps;">AttEmbedDecoder</span>. The accuracy of its classifier is only 2.7% lower than the accuracy of the classifier trained on the original human-crafted dataset. The comparison of the best generated dataset to the original dataset shows that the datasets had only $0.06\%$ identical examples. The average length of the hypothesis was $7.97$ in the original dataset and $8.19$ in the generated dataset. In another experiment, the generated dataset and the original dataset were merged to train a new classifier; the merged dataset thus contained twice as many examples as the other datasets. The accuracy of this classifier was 82.0%, which is 0.8% better than the classifier trained solely on the original training set. However, the lowest average loss is achieved by the classifier trained on the original dataset.
| Dataset | Loss | Accuracy (%) |
|---------|------|--------------|
| <span style="font-variant:small-caps;">EncoderDecoder</span> | 1.563 | 72.4 |
| <span style="font-variant:small-caps;">VaeEncoderDecoder</span> | 1.174 | 77.9 |
| <span style="font-variant:small-caps;">BaseEmbedDecoder</span> | 1.095 | 77.7 |
| <span style="font-variant:small-caps;">AttEmbedDecoder</span> | 0.970 | 78.5 |
| Original Dataset | **0.475** | 81.2 |
| Original Dataset + <span style="font-variant:small-caps;">AttEmbedDecoder</span> | 0.486 | **82.0** |

*Table \[tab:nli\_all\_datasets\]: Performance of classifiers trained on each dataset and evaluated on the original test set.*
Qualitative evaluation
----------------------
We also did a qualitative evaluation of the generated hypotheses. The hypotheses are mostly grammatically sound. Sometimes the models incorrectly use indefinite articles, for instance "*an phone*", or possessive pronouns, "*a man uses her umbrella*". This may be due to the fact that the system must learn the right indefinite article for every word separately. On the other hand, the models sometimes generate hypotheses that showcase more advanced grammatical patterns. For instance, the hypothesis "*The man and woman have a cake for their family*" shows that the model can correctly use the plural in a non-trivial setting. Generative neural networks have a tendency to repeat words, which sometimes makes sentences meaningless, like "*A cup is drinking from a cup of coffee*", or even ungrammatical, like "*Several people in a car car*".
As shown previously, the larger the latent dimension, the more creative the generated hypotheses. However, with more creativity semantic errors emerge. Some hypotheses are correct, just unlikely to be written by a human, like "*A shirtless man is holding a guitar with a woman and a woman*". Others present improbable events, like "*The girls were sitting in the park watching tv*", or even impossible events, for instance "*The child is waiting for his wife*". This type of error arises because the models have not learned enough common-sense logic. Finally, there are hypotheses which make no sense, for instance "*Two women with grassy beach has no tennis equipment*". On the contrary, the models are able to generate some non-trivial hypotheses. From the original premise "*A band performing with a girl singing and a guy next to her singing as well while playing the guitar*", the model has generated some hypotheses that do not contain concepts explicitly found in the premise. For instance, "*People are playing instruments*" (entailment), "*The band was entirely silent*" (contradiction), or "*The girl is playing at the concert*" (neutral).
Regarding the compliance of the hypotheses with the label and premise, we observed that many generated hypotheses do not comply with the label, although they would be very good examples under a different label; for instance, a generated hypothesis may represent an entailment instead of a contradiction. This also explains why the accuracy of the generated dataset measured by the original classifier is low in Figure \[fig:data\_acc\]. On the other hand, the models generate examples that are more ambiguous and not as clear-cut as those in the original dataset. These examples are harder to classify, even for a human. For instance, the relationship between the premise “*A kid hitting a baseball in a baseball field*” and the hypothesis “*The baseball player is trying to get the ball*” can be interpreted either as an entailment, if the verb *get* is interpreted as *not to miss*, or as a contradiction, if *get* is interpreted as *possess*. For deeper insight into the generated hypotheses, more examples are presented in \[sec:more\_examples\].
The gap between the discriminative error rates (disc-er) of <span style="font-variant:small-caps;">EncoderDecoder</span> models and <span style="font-variant:small-caps;">EmbedDecoder</span> models in Table \[tab:nli\_all\_models\] is significant. To investigate further, the same experiment was performed again, by a human evaluator and by the discriminative model, this time on a sample of 200 examples. To recap, both the model and the human were asked to select the generated hypothesis when given a random original hypothesis and a generated one, without knowing which was which.
Human evaluation confirms that <span style="font-variant:small-caps;">AttEmbedDecoder</span> hypotheses are more difficult to separate from the original ones than the hypotheses of <span style="font-variant:small-caps;">VaeEncoderDecoder</span>. Table \[tab:discriminator\] presents the results. The discriminative model discriminates better than the human evaluator. This may be because the discriminative model has learned from a large training set, while the human was not shown any training examples. Human evaluation has shown that generated hypotheses are positively recognized if they contain a grammatical or semantic error. But even if a generated hypothesis contains no such errors, it sometimes reveals itself by not being as sophisticated as the original example. The discriminative model, on the other hand, does not always recognize these discrepancies; it relies more on the differences in distributions learned from a big training set. The true number of non-distinguishable examples may be even higher than indicated by the human discriminator error rate, since the human may have correctly guessed some of the examples they could not distinguish.
| Gen. Model | Disc. Model (full test set) | Human (full test set) | Disc. Model (200-example sample) | Human (200-example sample) |
|---|---|---|---|---|
| <span style="font-variant:small-caps;">AttEmbedDecoder</span> | 14.0 | - | 14.0 | 22.5 |
| <span style="font-variant:small-caps;">VaeEncoderDecoder</span> | 1.9 | - | 2.0 | 11.5 |
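For reference, the discriminative error rate reported in the table reduces to a few lines of code under the protocol described above. This is a sketch with invented names; `judge` can be the discriminative model or a lookup of a human evaluator's recorded choices:

```python
import random

def discriminator_error_rate(pairs, judge):
    """pairs: (original_hypothesis, generated_hypothesis) tuples.
    judge(a, b) returns whichever of a, b it believes is generated.
    Presentation order is shuffled so position carries no signal."""
    errors = 0
    for original, generated in pairs:
        a, b = random.sample([original, generated], 2)  # random order
        if judge(a, b) != generated:
            errors += 1
    return errors / len(pairs)
```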
Conclusion {#sec:nli_conclusion}
==========
In this paper, we have proposed several generative neural networks for generating hypotheses using an NLI dataset. To evaluate these models we propose the accuracy of a classifier trained on the generated dataset as the main metric. The best model achieved $78.5\%$ accuracy, which is only $2.7\%$ less than the accuracy of the classifier trained on the original human-written dataset, while the best generated dataset combined with the original dataset achieved the highest accuracy of all. This model learns a decoder and a mapping embedding for each training example. It outperforms the more standard encoder-decoder networks. Although more parameters need to be trained, fewer are updated on each batch. We have also shown that the attention mechanism improves the model. The analysis has confirmed our hypothesis that a good dataset contains accurate, non-trivial and comprehensible examples. To further examine the quality of the generated hypotheses, they were compared against the original human-written hypotheses. The discriminative evaluation shows that in $22.5\%$ of cases the human evaluator incorrectly distinguished between the original and the generated hypothesis; the discriminative model was actually better at distinguishing them. We have also compared the accuracy of the classifier to other metrics. The standard text generation metrics ROUGE and METEOR do not indicate whether a generated dataset is good for training a classifier.
To obtain higher accuracies from the generated datasets, they need to be filtered, because the generative models produce examples whose labels are not always accurate. Thus, for future work we propose incorporating the classifier into the generative model, in a fashion similar to what was done on images by [@lamb2016discriminative]. This network could also include the discriminative model, in order to generate examples from a distribution that is more similar to the original training distribution. Finally, constructing a dataset requires a lot of intensive manual work, which mainly consists of writing text with some creativity. To extend the original dataset, human users could simply validate or correct the generated examples. On top of that, we would like to develop active learning methods to identify the incorrect generated examples that would most improve the dataset if corrected.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (ICT-STREP-288342) and XLime (FP7-ICT-611346).
References {#references .unnumbered}
==========
More Examples {#sec:more_examples}
=============
In this section more generated hypotheses are presented. Each example starts with the original example data. Then, several hypotheses generated from the original example with our best model are displayed.
**Premise:** a little boy in a blue shirt holding a toy.
**Hypothesis:** boy dressed in blue carrying toy to a toybox.
**Label:** neutral

Generated hypotheses:

- the little boy is caucasian.
- a boy in blue holds a toy for the first time.
- a child in a blue shirt holds a toy outside.
- a little boy watches the toy to his friend.
- a little boy is playing with a toy in the kitchen.
- the boy likes to his toy.
- the boy is in the store.

**Premise:** a little boy in a blue shirt holding a toy.
**Hypothesis:** boy dressed in red lighting things on fire.
**Label:** contradiction

Generated hypotheses:

- the little boy is dancing.
- a boy is jumping into the pool.
- a boy is watching television.
- the boy is sleeping on the floor.
- a boy is watching television indoors.
- a boy is holding a kitten.
- the little boy is standing in the pool.

**Premise:** a little boy in a blue shirt holding a toy.
**Hypothesis:** boy dressed in blue holds a toy.
**Label:** entailment

Generated hypotheses:

- the boy is holding a toy.
- a kid is holding a toy.
- a boy with a toy is riding a toy.
- little little boy holding a toy.
- a boy is riding on a toy.
- a boy is with an object.
- the little boy is wearing a shirt.

**Premise:** one woman with a dark coat on walking by herself in a city.
**Hypothesis:** the woman is walking on a sidewalk.
**Label:** neutral

Generated hypotheses:

- a woman at a city street has a sunny day.
- the woman is shopping.
- a woman is walking to the store.
- the woman is asian.
- the lady with the coat walking by her way to a store in the city.
- the woman is afraid of the woman.
- a woman was watching to go to a party

**Premise:** one woman with a dark coat on walking by herself in a city.
**Hypothesis:** the woman is standing in a building.
**Label:** contradiction

Generated hypotheses:

- the woman is sitting on a couch.
- a woman is sitting on the ground.
- a woman in a coat and a coat is sitting next to her way
- a woman is watching the movie at the theater.
- a man is walking on the street
- the men are asleep
- the dog is wearing a pink coat.

**Premise:** one woman with a dark coat on walking by herself in a city.
**Hypothesis:** the woman is outdoors.
**Label:** entailment

Generated hypotheses:

- a female is walking
- there are people present.
- woman walk.
- a woman in a coat walking through a city.
- there is a woman and a man in a crosswalk.
- woman are walking by a city.
- the woman is talking to each other.

**Premise:** two young boys of opposing teams play football, while wearing full protection uniforms and helmets.
**Hypothesis:** boys scoring a touchdown
**Label:** neutral

Generated hypotheses:

- the boys in uniform are trying to win the game.
- the brothers are wearing a uniforms for the football game.
- the boys were waiting to play the game.
- two boys are fighting with their team.
- two teams wearing uniforms being finished for the game
- two young boys are having fun.
- the boys are wearing helmets and are standing nearby.

**Premise:** two young boys of opposing teams play football, while wearing full protection uniforms and helmets.
**Hypothesis:** dog eats out of bowl
**Label:** contradiction

Generated hypotheses:

- the two boys are watching tv.
- the boys are playing baseball.
- the man is on a trampoline.
- two young girls are dressed in tuxedos.
- two kids playing with the ball at the beach.
- two kids are in the ocean.
- the boys are chasing an elephant

**Premise:** two young boys of opposing teams play football, while wearing full protection uniforms and helmets.
**Hypothesis:** boys play football
**Label:** entailment

Generated hypotheses:

- a trio of young boys wearing uniforms.
- two kids are being photographed for a football player.
- the boys have uniforms.
- two boys are wearing uniforms.
- the boys are dressed
- two boys are performing.
- the boys all in uniforms for their team.
[^1]: The task is designed as a 2-class classification problem, while the SNLI dataset is constructed for a 3-class classification task.
[^2]: In beam search mode the process stops when all $k$ hypotheses reach the end-of-sequence symbol or the maximum hypothesis length is reached.
[^3]: http://nlp.stanford.edu/data/glove.6B.zip
[^4]: As suggested in [@kingma2014adam], $\beta_1$ is set to 0.9 and $\beta_2$ is set to 0.999.
[^5]: <http://keras.io>
Inflation risk has been called the silent risk because it erodes the value of every investor's portfolio, regardless of the investment choices made. In simple terms, inflation is the sustained rise in the price of goods and services, and inflation risk is the chance that an investment won't grow quickly enough over time to meet an investor's objectives.
As an investor, you must be aware of your actual (real) rates of return. Unfortunately, most investors only examine their nominal rates of return and forget about their purchasing power altogether. The nominal rate of return is the stated growth rate of your money, while the actual, or real, rate of return is the growth rate of your purchasing power. For example, if a money market fund earns 4% (the nominal rate of return) but inflation is running at 2%, your actual rate of return is only about 2%: your purchasing power has increased by just 2%.
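The 4% minus 2% above is the usual shorthand; the exact figure comes from the Fisher relation. A minimal sketch (the function name is ours, purely illustrative):

```python
def real_rate_of_return(nominal, inflation):
    """Exact Fisher relation; nominal - inflation is the common approximation."""
    return (1 + nominal) / (1 + inflation) - 1

# The 4% fund with 2% inflation from the example above:
print(round(real_rate_of_return(0.04, 0.02) * 100, 2))  # 1.96 -- roughly the 2% cited
```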
Please review the attached charts so you can better understand how inflation erodes your purchasing power and affects the rates of return on the Dow Jones Industrial Average and the Waverley Research Selections. | http://waverleyresearch.com/investing_inflation.htm |
1. Field of the Invention
The invention relates to fiber optic sensor systems. More particularly, the invention relates to a fiber optic sensor housing which is capable of introducing a desired initial pre-load. The housing arrangement allows for a low-pressure response in a fiber optic Bragg grating.
2. State of the Art
Fiber optic sensor technology has developed concurrently with fiber optic telecommunication technology. The physical aspects of optical fibers which enable them to act as wave guides for light are affected by environmental influences such as temperature, pressure, and strain. These aspects of optical fibers which may be considered a disadvantage to the telecommunications industry are an important advantage to the fiber optic sensor industry.
Optical fibers, whether used in telecommunications or as environmental sensors, generally include a cylindrical core, a concentric cylindrical cladding surrounding the core, and a concentric cylindrical protective jacket or buffer surrounding the cladding. The core is made of transparent glass or plastic having a certain index of refraction. The cladding is also made of transparent glass or plastic, but having a different, smaller, index of refraction. The ability of the optical fiber to act as a bendable waveguide is largely determined by the relative refractive indices of the core and the cladding.
The refractive index of a transparent medium is the ratio of the velocity of light in a vacuum to the velocity of light in the medium. As a beam of light enters a medium, the change in velocity causes the beam to change direction. More specifically, as a beam of light travels from one medium into another medium, the beam changes direction at the interface of the two media. In addition to changing direction at the interface of two media, a portion of the incident beam is reflected at the interface such that the energy of the beam traveling through the second medium is diminished (the sum of the energy of the refracted and reflected beams must equal the energy of the incident beam). The angles of reflection and refraction can be predicted using Snell's law if the refractive indices of both media are known.
By altering the indices of refraction of two adjacent media, the angle of refraction and the angle of reflection of a beam traveling toward the interface of the two media can be altered such that the intensity of the light entering the second medium approaches zero and substantially all of the light is reflected at the interface. Conversely, for any two transparent media, there is a critical angle of incidence at their interface at or above which substantially all of the incident light will be reflected, provided the light travels from the medium of higher refractive index toward the medium of lower refractive index. This phenomenon, known as total internal reflection, is applied in choosing the refractive indices of the core and the cladding in optical fibers so that light may propagate through the core of the fiber with minimal power loss.
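Both relationships are easy to state numerically. The sketch below applies Snell's law and the critical-angle condition; the refractive indices in the example are typical silica-fiber values assumed for illustration, not figures from this disclosure:

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Returns None past the critical angle (total internal reflection)."""
    s = (n1 / n2) * math.sin(math.radians(incidence_deg))
    return None if abs(s) > 1 else math.degrees(math.asin(s))

def critical_angle_deg(n_core, n_cladding):
    """Smallest incidence angle (measured from the normal) giving total
    internal reflection; requires n_core > n_cladding."""
    return math.degrees(math.asin(n_cladding / n_core))

print(critical_angle_deg(1.48, 1.46))  # ~80.6 degrees for these assumed indices
```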
Many other factors affect the propagation of light through the fiber optic core, including the dimensions of the core and the cladding, the wavelength of the light, the magnetic field vectors of the light and electrical field vectors of the light. In addition, many of the physical laws used to determine the ideal propagation of light through a wave guide (optical fiber) assume an “ideal” waveguide, i.e. a straight wave guide with perfect symmetry and no imperfections. For example, the diameter of the core and the wavelength of the light transmitted through it will determine whether the fiber optic is “single mode” or “multimode”. The terms single mode and multimode refer to the dimensional orientation of rays propagating through the fiber. Single mode fibers have a core with a relatively small diameter (2-12 microns) and support only one spatial mode of propagation. Multimode fibers have a core with a relatively large diameter (25-75 microns) and permit non-axial rays or modes to propagate through the core. The so-called single mode fibers are actually two mode fibers in the sense that there are two different states of optical polarization that can be propagated through the core. In an ideal, straight, imperfection-free fiber with perfect circular symmetry, the propagation velocity of light is independent of the direction of polarization.
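The dependence of single-mode versus multimode behavior on core size and wavelength can be made concrete through the normalized frequency, or V-number, of a step-index fiber: V = (2πa/λ)·NA with NA = sqrt(n_core² − n_clad²), and the fiber is single mode when V < 2.405. A small sketch; the numeric values are assumed illustrative figures, not values from this disclosure:

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_cladding):
    """Normalized frequency (V-number) of a step-index fiber."""
    numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)
    return (2 * math.pi * core_radius_um / wavelength_um) * numerical_aperture

# Assumed values: 4 um core radius, 1.55 um light, low-contrast silica indices.
v = v_number(4.0, 1.55, 1.450, 1.447)
print(round(v, 2), "-> single mode" if v < 2.405 else "-> multimode")  # ~1.51 -> single mode
```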
A fiber with an elliptical core will have two preferred directions of polarization (along the major axis and along the minor axis). Linearly polarized light injected into the fiber at any other direction of polarization will propagate in two separate modes that travel at slightly different velocities. This type of fiber is said to have a “modal birefringence”. In a real fiber of this type, even ideally polarized light will couple into the other mode due to imperfections in the core-cladding interface, index of refraction fluctuations, and other mechanisms. Static and dynamic changes in polarization may occur along the entire length of the fiber. Over a given distance, the phases of the two modes will pass through an entire cycle of being in phase and out of phase. This distance is known as the “beat length”. A long beat length is associated with a small birefringence and a short beat length is associated with a large birefringence. Birefringent optical fibers are also known as “polarization preserving fibers” or “polarization maintaining (PM) fibers”. Birefringence is achieved by providing a core with an elliptical cross section or by providing a circular core with a cladding which induces stress on the core. For example, the cladding may be provided with two parallel stress members having longitudinal axes which lie in the same plane as the axis of the core.
As mentioned above, fiber optic sensors employ the fact that environmental effects can alter the amplitude, phase, frequency, spectral content, or polarization of light propagated through an optical fiber. The primary advantages of fiber optic sensors include their ability to be lightweight, very small, passive, energy efficient, rugged, and immune to electromagnetic interference. In addition, fiber optic sensors have the potential for very high sensitivity, large dynamic range, and wide bandwidth. Further, a certain class of fiber optic sensors may be distributed or multiplexed along a length of fiber. They may also be embedded into materials.
State of the art fiber optic sensors can be classified as either “extrinsic” or “intrinsic”. Extrinsic sensors rely on some other device being coupled to the fiber optic in order to translate environmental effects into changes in the properties of the light in the fiber optic. Intrinsic sensors rely only on the properties of the optical fiber in order to measure ambient environmental effects. Known fiber optic sensors include linear position sensors, rotational position sensors, fluid level sensors, temperature sensors, strain gauges, fiber optic gyroscopes, and pressure sensors.
One type of fiber optic pressure sensor takes advantage of the fact that ambient pressure places a strain on the jacket of an optical fiber which strains the cladding, thereby straining the core and changing the birefringence of the fiber. When a force is applied transversely to the fiber, the birefringence of the fiber changes, which changes the beat length and thus the intensity of light viewed by an intensity detector. Another type of fiber optic sensor utilizes intra-core Bragg fiber gratings as disclosed in U.S. Pat. No. 5,380,995 to Udd et al., the complete disclosure of which is incorporated by reference herein. Intra-core Bragg gratings are formed in a fiber optic by doping an optical fiber with material such as germania and then exposing the side of the fiber to an interference pattern to produce sinusoidal variations in the refractive index of the core. Two presently known methods of providing the interference pattern are by holographic imaging and by phase mask grating. Holographic imaging utilizes two short wavelength (usually 240 nm) laser beams which are imaged through the side of a fiber core to form the interference pattern. The bright fringes of the interference pattern cause the index of refraction of the core to be “modulated” resulting in the formation of a fiber grating. Similar results are obtained using short pulses of laser light, writing fiber gratings line by line through the use of phase masks. By adjusting the fringe spacing of the interference pattern, the periodic index of refraction can be varied as desired. Another method of writing the grating on the fiber is to focus a laser through the side of the fiber and write the grating one line at a time. Specialized fiber Bragg grating sensors can also be made from this process. These sensors include side air hole fibers with Bragg gratings, polarization fibers (PM) with gratings, long period gratings, pi-shifted gratings, chirped gratings, and gratings inside of holey fibers.
When a fiber optic is provided with a grating and subjected to transverse strain, two spectral peaks are produced (one for each polarization axis) and the peak to peak separation is proportional to the transverse strain. Spectral demodulation systems such as tunable Fabry-Perot filters, acousto-optical filters, interferometers, or optical spectrum analyzers coupled to the fiber detect the two spectral outputs. The spectral outputs are analyzed and the transverse strain is determined by measuring the peak to peak separation. Depending on how the fiber optic is deployed, the transverse strain may be related to temperature, pressure, or another environmental measure.
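The quantities involved follow directly from the Bragg condition λ_B = 2·n_eff·Λ, applied once per polarization axis. A minimal numerical sketch; the constants below are assumptions chosen for illustration, not values from the patent:

```python
def bragg_wavelength_nm(n_eff, grating_period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * grating_period_nm

def peak_separation_nm(grating_period_nm, birefringence):
    """Peak-to-peak split between the two polarization axes for a
    birefringence delta_n, from the Bragg condition per axis."""
    return 2.0 * grating_period_nm * birefringence

print(bragg_wavelength_nm(1.447, 535.0))  # ~1548 nm, in the telecom C-band
print(peak_separation_nm(535.0, 5e-4))    # ~0.54 nm split for an assumed delta_n = 5e-4
```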
There are two shortcomings of this type of sensor system. First, dual peaks are only discernible in ordinary single mode fiber when there is considerable transverse strain, e.g. at very high pressure. Various structures are known for mechanically influencing the fiber such that isotropic forces are converted to anisotropic forces to produce birefringence and to magnify the effect of transverse strain on birefringence. Exemplary structures are disclosed in previously incorporated U.S. Pat. No. 5,841,131 and U.S. Pat. No. 6,218,661. Nevertheless, these mechanical structures can only do so much to enhance the sensitivity of fiber optic sensors.
Previously incorporated U.S. Pat. No. 6,363,180 discloses methods for enhancing dynamic range, sensitivity, accuracy, and resolution in fiber optic sensors which include manipulating the polarization characteristics of the light entering a fiber optic sensor and/or manipulating the polarization characteristics of the light exiting the sensor before it enters the light detection system. While these methods are effective, they do require additional equipment.
Most of the known structures used to enhance the sensitivity of fiber optic pressure sensors suffer from several disadvantages. They often require complicated construction with many parts. They react adversely to thermal changes. They are relatively large. They require the use of an o-ring which reacts adversely to high temperature. They only operate in one mode. By one mode, it is meant that the sensor operates either in a forward mode, a reverse mode, or a differential mode. In a forward mode, increased pressure causes increased strain on the fiber optic. In a reverse mode, increased pressure causes a decrease in the strain on the fiber optic. In a differential mode, the fiber optic is exposed to two different pressures and the strain on the fiber optic is indicative of the difference between the pressures. It will be appreciated that it would be desirable to provide a sensor which operates in multiple modes. Thus, as used herein, the term “multiple mode” refers to a fiber optic sensor which can operate in multiple modes and should not be confused with the term “multimode” used above to refer to a particular type of fiber optic.
| |
The Life Cycle of Software Products, included in the GREENSOFT Model, is a product life cycle inspired by Life Cycle Thinking that can also be characterized as “cradle-to-grave”. Its objective is to enable stakeholders to assess impacts on Sustainable Development (abbr. SD).
Effects of ICTs on Sustainable Development [1,2,3]
Basically, “Green and Sustainable” software should be as sustainable as possible. This means that the economic, societal, and ecological impacts, as well as the impacts on human beings, that result from the product over its whole life cycle should be as small as possible.
Most obvious are the first-order effects (or: effects of ICT supply), like performance requirements, network bandwidth, hardware requirements, and product packaging that directly demand energy or natural resources.
The second-order effects (or: effects of ICT usage) evolve from using the services offered by ICTs on the life cycle of other products or services. Today, software plays a significant role in the life cycles of many other products or services: software can be used to optimize product design, production processes, the end-of-life treatment, or the usage of other products or services. Unfortunately, these second-order effects are not as obvious as the first-order effects.
Even harder to predict or analyze are third-order effects (or: systemic effects of ICT), because of the manifold systemic interdependencies, which require experienced knowledge from examiners.
One example is rebound effects, which may occur if a specific optimization frees up resources that can then be used to produce more products, which in turn creates additional demand for these resources. This may overcompensate the initially achieved savings.
The Development Phase accounts for impacts on SD that directly result from activities involved in software development, as well as from indirectly involved activities, e.g. the proportional impacts of common corporate departments.
Environmental impacts to be considered include, for example, the electrical energy that is necessary to power the workstations of software developers and other employees, the electrical energy and natural resources that are necessary to operate the IT infrastructure (e.g. networking devices, servers, and storage), the energy that is necessary for heating and air conditioning, the electrical energy that is necessary for office lighting, and the energy used for transportation, like long-distance business trips for meetings with customers and the development team, and even the employees’ daily commute to work.
Social impacts can include the working conditions and payment of offshore workers (e.g. developers, typesetters), which have consequences for the workers and their families.
Some of these impacts can be mitigated by introducing teleworking and teleconferencing, or by replacing material products with adequate immaterial substitutes (second-order effects). This in turn may induce, e.g., changes in organizations, software development methods, or lifestyles (third-order effects).
The Distribution Phase accounts for impacts on SD that result from distributing the software product. This includes environmental impacts, e.g. of printed manuals (type of paper and ink), the chosen means of transport, the type and design of the retail and transport packaging (e.g. plastic, polyurethane foam, biodegradable material), or the data medium (e.g. CD/DVD, USB memory stick). Furthermore, if the software product is offered as a download, then its download size should be considered, as well as the electrical energy and material resources that are necessary to operate the required IT infrastructure.
The Disposal Phase accounts for impacts on SD that result from the disposal and recycling of the aforementioned material sub-products.
The Usage Phase considers impacts that result from deploying, using, and maintaining the software product.
Here, maintaining means that administrators are in charge of installed software and support users in their organization. Thus, maintaining includes, e.g. the installation of software patches or updates, the configuration of software and computer systems, and the training of employees in regards to proper software usage.
Beside these effects, software usage has several first-order effects regarding environmental sustainability: In order to deliver its offered services, a computer program requires processing time, which in turn consumes electric energy. This may also require the consumption of services offered by other servers (consider e.g. Data Base Management Systems, Enterprise Resource Planning systems, or simply the WWW service), which causes additional power consumption.
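At this first-order level, a rough usage-phase energy estimate reduces to average power draw multiplied by active time. A deliberately simple sketch with assumed numbers, just to make the orders of magnitude concrete:

```python
def usage_energy_kwh(avg_power_watts, active_hours):
    """First-order usage-phase energy: average power draw times active time."""
    return avg_power_watts * active_hours / 1000.0

# Assumed figures: a service consuming 30 W of server capacity, 8 h/day, one year.
print(round(usage_energy_kwh(30, 8 * 365), 1))  # ~87.6 kWh per year
```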
State-of-the-art software systems usually require up-to-date and more powerful hardware than older software systems or previous versions. As a result, this causes hardware replacements in organizations as well as at home, when a new software product is introduced.
On the one hand, new hardware is typically more power-efficient than older hardware; on the other hand, it has to be taken into account that the production of the new hardware and the disposal of the old hardware cause vast amounts of resource and energy consumption.
Mining the necessary ores, e.g. in developing countries where social and environmental standards are very low, leads to considerable social and environmental impacts, which sometimes even culminate in armed conflicts. There are also reports of old and even non-functional hardware being exported from industrial countries to developing countries, where it is reused, but more often recycled under doubtful circumstances in so-called backyard facilities, or simply deposited on waste disposal sites, causing damage to the environment and people’s health.
The second- and third-order effects on SD that result from the usage phase depend on the purpose of the software product. Some software products directly promote sustainability aspects, like resource or energy efficiency, because it is their intended purpose: software that enables smart heating, smart lighting, smart logistics, paper-free offices, teleconferencing, etc. In these cases second-order effects can be easily assessed. However, there is also multipurpose software, like word processors, spreadsheets, or graphics software. For these, it is nearly impossible to assess second- or third-order impacts, because software manufacturers usually do not know for which purposes their software product is used.
Finally, in order to avoid rebound effects, newer (and even more resource-efficient) versions and updates of software should be developed in such a way, that they do not enforce the procurement of newer hardware.
If a software product is taken out of service, it is mostly necessary to convert the available data to a format that can be processed by the succeeding software product, or to make it accessible in some other ways. If the data cannot be converted easily, e.g. because it is stored in a proprietary format, this may have an impact on economic sustainability of an organization.
In this phase, even the backup size of data matters, e.g. if legal regulations require long-term storage of data. | https://www.umwelt-campus.de/en/research/projekte/green-software-engineering/reference-model/the-life-cycle-of-software-products |
With pandemic tension still on the rise and escalating vandalism continuing to plague Downtown Portland, live music is no longer something we just take for granted. It is an absolute necessity to keeping sane in this bizarre dystopic multi-verse. But through this cataclysmic lens there is still a light that continues to shine brightly.
Hailing from the island of Oahu, Ron Artis II has been making waves that reach far beyond those Hawaiian shores. As one of 11 talented siblings raised in a supportive musical household surrounded by love & creativity, it was only a matter of time before he’d cross the ocean and find his way into hearts across the nation. With Portland’s Jack London Revue reopening with limited capacity, Ron Artis II & the Truth quickly sold out the venue and, due to audience demand, scheduled 4 more shows throughout the week as well.
It feels as if a lifetime has passed since I last walked down that dimly lit staircase and into that underground temple of music. A month of anticipation boiled down to this one climactic moment. The few tables that were spaced throughout the venue floor created a certain exclusivity that would be our welcome back into the world. The first notes that cried out from Ron’s six strings gave rise to a raw energy that would only be equaled by the soulful voice that accompanied it. At his left, Ernie Ecraela, smiling ear-to-ear, closed his eyes and let his bass speak for him, while drummer Jarod Manis brought the audience into a rhythmic trance.
The band focused on a selection of songs from their latest album, Soul Street. Before breaking into an impassioned rendition of “Searching for Answers”, Ron asked the crowd to search deep within themselves and find the light within. This theme of positivity flowed from song to song and rose like the tide at noon, slowly reclaiming a sandy shore. The band continued to turn up the heat with a breathtaking performance of “The Truth”, with each of the trio showcasing their individual mastery of their instruments as well as their unspoken, kinetic bond through the rhythm.
Their emotional performance of the ballad “Photograph of Love”, from his album Julia, brought the audience to tears as Ron emphasized the love for his wife and that true strength is having the ability to express your feelings to the one you love without question. Before performing their latest single, “This is America”, Ron announced that a young guitar prodigy by the name of TJ Edwards would be joining them on stage, and after witnessing his virtuosity, there was no question why. They continued into the upbeat “Walk That Walk”, and after a blistering solo and a laugh, Ron announced to the crowd, “If you’re not breaking stuff you’re not having a good time!”
Ron Artis II & the Truth’s music is born with unquestionable faith and fueled by the Aloha spirit. Through the bright times and the struggles, their lesson is clear: “If you want to make a difference, you have to live it.” So be a little light in the world. Be safe, stay positive and catch these incredible musicians the next time they’re passing through a town near you. | https://hunnypotunlimited.com/hunnypot-does/item/3298-concert-review-ron-artis-ii-the-truth-jack-london-revue-portland-4-11 |
6 comments:
That's going to be fun to watch: Dexter taking such drastic action in front of all of his colleagues. They may treat him like a hero. Just from these pictures, it seems like Dexter recognizes Beth Dorsey. That, or he just recognizes that she means to cause harm. Anyway, it looks like Dexter will run from his lab area and push Beth into the interview room to save Debra and his colleagues. I think Beth will only poison herself.
- Anonymous, December 10, 2011 at 1:00 PM
To Nicholas: I really hope that you're right. That would be awesome.
- Anonymous, December 10, 2011 at 3:08 PM
That's exactly what I was thinking would happen. I think Deb is finally going to see Dexter take a life. And I can't wait!
- Anonymous, December 10, 2011 at 3:19 PM
There doesn't seem to be any plausible way that Dexter (but NOT Miami Metro) would know who this woman is. It's not like he ID'd the guy he killed on the boat, whom he could then track as this woman's boyfriend/husband.
The only way anyone's finding out who she is would be through knowing that Batista went to her house. In that case, the whole department should know who she was, not just Dexter.
- Anonymous, December 10, 2011 at 3:29 PM
@3:19 - he doesn't necessarily have to have prior knowledge of her. As far as his colleagues are concerned, he noticed her acting suspiciously and reacted faster than anyone else.
Looking at the other promo shot, though, it looks like Dexter is trying to hold the door closed and Deb is trying to pull him away, which would have made sense if the spoiler about Quinn had been true, but now it turns out it isn't, so who knows?
This blog is part of a series of youth-authored essays on Health futures: digital first health systems for and with young people. The views expressed in this blog are those of the author and not of the Lancet and Financial Times Commission on Governing Health Futures 2030: Growing up in a digital world.
Over the last decade, there have been rapid developments in the use of digital health interventions to address health disparities. Such interventions have transformed from primarily one-way SMS messaging to now including features that promote feedback and interactivity through two-way messaging, peer networks, captivating multimedia (e.g. audio content, rich visual interfaces), and social media. As digital platforms have evolved, so has the popularity of digital health programmes targeting a diverse array of end-users. Among youth globally, there has been a dramatic increase in smartphone ownership, creating prime opportunities to leverage digital platforms to address and promote health outcomes amongst youth. In the U.S., there has been a plethora of digital health interventions addressing sexual and reproductive health outcomes among youth. Although digital health offers numerous advantages, many of these interventions neglect to address the needs of Black youth, specifically young adult Black women, who experience disproportionate sexual and reproductive health burdens including HIV and sexually transmitted infections (STIs). There is a dearth of evidence on generating relevant tools for Black women living in high-income countries such as the U.S., who may be underserved with less access to health-related resources.
Oftentimes, research that has focused on developing digital health programmes for Black youth fails to incorporate their unique experiences including contextual and cultural nuances. Consequently, this leads to a severe lack of engagement, uptake, and sustainability of digital health interventions within this community. A systematic review conducted by the authors which assessed sexual and reproductive health digital health interventions for Black women in the U.S. found high rates of loss to follow-up and attrition; with participants noting that one of the reasons for not completing digital health programmes was due to not viewing the intervention as being helpful towards their health needs and concerns. Studies have additionally cited that lack of access to the internet and limited data plans function as barriers to engagement in digital health programs within Black communities. These findings allude to the disparities certain populations may face in access to digital health- in certain instances, digital technologies in public health settings may benefit groups that are privileged, and inadvertently potentiate health disparities.
In aiming to alleviate HIV disparities for Black women in the U.S., our research team decided to develop a mobile health application (app), SavvyHER (Sexual/HIV Health Electronic Empowerment Resource), which aimed to reduce HIV risk for young adult Black women. To ensure that SavvyHER was developed using a community-centred approach, our research team conducted extensive formative work with target communities through focus groups, interviews, and human-centred design workshops to understand what their health and social priorities were and whether an HIV prevention app would be beneficial for them. In conducting this work, we came to realise that while HIV was somewhat of a health concern in the community, it was not the primary health concern. Communities voiced needing access to mental health resources, primary care resources, contraception and family planning, alongside intimate partner violence resources. Many of the community members we interviewed described how they would not be motivated to download a digital health app that solely focused on HIV prevention. They would, however, download a digital health app that included comprehensive aspects of health and wellness in addition to HIV prevention. We provide examples of feedback obtained from participants who were interviewed.
This participant described how features such as a menstrual tracker, coupled with content pertaining to contraception and general sexual health, would be beneficial. They stated that many of their peers, despite engaging in sex, did not have adequate access to health information:“The app should have maybe a menstrual tracker for people who have an abnormal cycle or whose cycle come on too frequently… [the app should include] questions about birth control, status of the birth control, sexual health... of course condom [use] and everything else. It [the content on the app] also depends on the age that you’re looking to go towards…a lot of people, a lot of women…i've talked to some of my friends …they haven't had a sex talk with their parents. They haven’t had the sex education course in school. They didn't know things about sex until they start having sex or they start doing things. We have to educate …we as women have to educate other women about their bodies.”
This participant, who was postpartum, voiced wanting more access to mental health content and resources to support her during the postpartum period as opposed to needing solely HIV prevention content: “I would like more on mental health… just more on care for the mom, you know, because during that time, when you have the baby, it's like a lot you get a plethora of information about caring for your baby and you know, like making sure that they're good… but it's like sometimes you get only four or five little things about moms and mental health and postpartum… so I feel like having more information about postpartum, you know, because a healthy mommy means a healthy baby.”
In analysing this feedback, our research team realised that in order for SavvyHER to be effective in reducing HIV disparities, it would have to contain information beyond HIV prevention. Including information on other health topics would motivate users to download the app, and through accessing health content relevant to them, end-users would also encounter content pertaining to HIV prevention. In realising this, our research team made extensive changes to the content, features, and design of SavvyHER to ensure that the app reflected the needs of the community. This realisation substantiates the objectives provided in The Lancet and Financial Times Commission Report on putting youth at the centre of digital health innovation, and adapting innovations so that they are responsive to the needs and priorities of young people.
There is no doubt that digital health can offer innovative solutions for addressing adverse health outcomes. However, in communities that experience competing health, social, and economic burdens, it is critical for programmes to be employed strategically, in a manner that aligns with community priorities. As health care becomes more accessible and personalised through technological advances, there is a need to incorporate youth- and community-centred approaches that adequately acknowledge and reflect the specific needs and health concerns of priority communities. This process will require collaborating with communities, ensuring joint ownership, and eliciting their input regarding desired content and features within all implementation settings globally. In doing so, public trust around digital health and the digital health ecosystem can be boosted, leading to further uptake of digital health interventions as recommended in The Lancet and Financial Times Commission Report. There must be measurable value in implementing such programmes, backed by robust data, and in working alongside communities to ensure that an intervention is developed in a manner that is relevant and relatable, as opposed to using digital health solely as a “shiny new object” due to its increasing popularity within public health. As illustrated in our case example, incorporating user-centred design frameworks will be essential in ensuring that the content, design, and features of digital health interventions are relevant to individuals’ lived experiences, with the goal of fostering positive health outcomes.
This article was co-authored by Rasheeta Chandler, Natalie Hernandez, Sherilyn Francis and Andrea Parker.
Dominique Guillaume
Dominique Guillaume is a PhD candidate at Johns Hopkins University and a Jhpiego Global Women’s Health Fellow. She manages the care of patients living with HIV as an infectious disease nurse practitioner, and collaborates on research addressing HIV and cervical cancer disparities within immigrant communities in the U.S. and in sub-Saharan Africa. | https://www.governinghealthfutures2030.org/centering-community-perspectives-in-the-development-of-digital-health-interventions-a-case-example-of-an-hiv-prevention-application/ |
Effect of Lepidium meyenii (Maca), a root with aphrodisiac and fertility-enhancing properties, on serum reproductive hormone levels in adult healthy men.
Lepidium meyenii (Maca) is a Peruvian hypocotyl that grows exclusively between 4000 and 4500 m in the central Andes. Maca is traditionally employed in the Andean region for its supposed aphrodisiac and/or fertility-enhancing properties. This study was a 12-week double-blind, placebo-controlled, randomized, parallel trial in which active treatment with different doses of Maca Gelatinizada was compared with a placebo. The study aimed to test the hypothesis that Maca has no effect on serum reproductive hormone levels in apparently healthy men when administered in doses used for its aphrodisiac and/or fertility-enhancing properties. Men aged between 21 and 56 years received 1500 mg or 3000 mg of Maca. Serum levels of luteinizing hormone, follicle-stimulating hormone, prolactin, 17-alpha hydroxyprogesterone, testosterone and 17-beta estradiol were measured before and at 2, 4, 8 and 12 weeks of treatment with placebo or Maca (1.5 g or 3.0 g per day). Data showed that, compared with placebo, Maca had no effect on any of the hormones studied, nor did the hormones show any changes over time. Multiple regression analysis showed that serum testosterone levels were not affected by treatment with Maca at any of the times studied (P, not significant). In conclusion, treatment with Maca does not affect serum reproductive hormone levels.
Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy query load. However, most machine learning frameworks and systems only address model training and not deployment. In this paper, we introduce Clipper, a general-purpose low-latency prediction serving system. Interposing between end-user applications and a wide range of machine learning frameworks, Clipper introduces a modular architecture to simplify model deployment across frameworks and applications. Furthermore, by introducing caching, batching, and adaptive model selection techniques, Clipper reduces prediction latency and improves prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. We evaluate Clipper on four common machine learning benchmark datasets and demonstrate its ability to meet the latency, accuracy, and throughput demands of online serving applications. Finally, we compare Clipper to the Tensorflow Serving system and demonstrate that we are able to achieve comparable throughput and latency while enabling model composition and online learning to improve accuracy and render more robust predictions. | https://dancrankshaw.com/project/clipper/ |
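Two of the techniques named in the abstract, prediction caching and batching, can be sketched in a few lines. This is not Clipper's actual interface; the class and method names below are invented purely for illustration:

```python
class ServingLayer:
    """Toy sketch of caching + bounded-size batching in a prediction server.
    Inputs must be hashable so results can be cached."""
    def __init__(self, predict_batch, max_batch=32):
        self.predict_batch = predict_batch  # model function: list[x] -> list[y]
        self.max_batch = max_batch
        self.cache = {}

    def predict_many(self, xs):
        # Deduplicate while preserving order, then keep only cache misses.
        misses = [x for x in dict.fromkeys(xs) if x not in self.cache]
        # Serve misses in bounded-size batches to amortize per-call model overhead.
        for i in range(0, len(misses), self.max_batch):
            batch = misses[i:i + self.max_batch]
            for x, y in zip(batch, self.predict_batch(batch)):
                self.cache[x] = y
        return [self.cache[x] for x in xs]

# Usage with a trivial stand-in model:
serve = ServingLayer(lambda batch: [len(x) for x in batch])
print(serve.predict_many(["a", "bb", "a"]))  # [1, 2, 1] -- "a" computed once
```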
Simulating scenario thinking is essential for any organisation wanting to improve its forward planning. In this interview, we discuss with King’s College London neuroscientist Dr. Kris De Meyer how his Shutdown simulation, created for the Cabinet Office, uses immersive learning to improve people’s ability to imagine different scenarios and plan for them.
Transcript
Kris De Meyer: So in shutdown, we wanted to help the Cabinet Office to find out questions that they had about how the public would respond if the national grid goes out for about a week. If there is a national power failure for a week and that's a really difficult issue because you can't just ask people how they will behave in that situation, because most of us have had no experience of that for a long time. And therefore, we cannot answer questions about situations or not easily answer questions about situations we don't have experience of. So the way the other organizations could be using the insights that we built into shut down is by taking the questions we have about the future and turning them into judgments about things that happened in the past.
Because our brains may not be very good in imagining the future, but we are very good at judging things and other people that happened in the past. That's one aspect to it. The second is by combining storytelling and scenario thinking with discussion and decision making, because what we are doing in shutdown is really switching back and forth between the two where we have a little bit of story that focuses the minds on a particular situation or question. And then we asked people to make decisions and to have discussions about the questions that we want to answer to. A third aspect that is crucial to what we did with shutdown is that if you really want to learn from the experience, if you want experience based learning, you don't only create an immersive experience. You also need to create the conditions for reflection on the experience. And that's what we usually do in something that we call a debrief, where after the experience we take people into sometimes even a different room and we get them to talk about what it felt like to be in that experience of disagreement or a difficult decision making. And so on and so forth. | https://www.crforum.co.uk/research-and-resources/scenario-thinking-interview-with-dr-kris-de-meyer/ |
Pamela Myers - a female celebrity - born on Tuesday, July 15th 1947, in Hamilton, Ohio, USA.
Pamela Myers has a compelling sense of herself as a spiritual being, a searcher and seeker of truth. As such, Pamela's life is devoted to investigations into the unknown and to finding the answers to the mysteries of life. Monumental as that mission is, Myers is well-equipped to handle it. She enjoys a fine mind and is an analytical thinker, capable of great concentration and theoretical insight. Pamela Myers enjoys research and putting the pieces of an intellectual puzzle together, and once she has enough pieces in place, Pamela is capable of highly creative insight and practical solutions to problems.
Pamela Myers enjoys her solitude and prefers to work alone. She needs time to contemplate her ideas without the intrusion of other people's thoughts. She is a lone wolf, a person who lives by her own ideas and methods. As a result, close associations are difficult for Pamela to form and keep, especially marriage. Myers needs her space and privacy, which, when violated, can cause her great frustration and irritation.
When her life is balanced, however, Pamela Myers is both charming and attractive. She can be the life of a party, and enjoys performing before an audience. Pamela loves displaying her wit and knowledge, which makes her attractive to others, especially the opposite sex. It should just be remembered that because she associates peace with the unobtrusive privacy of her world, intimacy is difficult for Pamela Myers. It is Myers' challenge to avoid shutting out the love of others, which would keep her from experiencing the true joy of friendship and close companionship.
With her abilities to learn, analyze, and seek out answers to life's important questions, Pamela Myers has the potential for enormous growth and success in life. By the time she reaches middle age, Pamela will radiate refinement and wisdom.
More flavors to Pamela's personality
Entrepreneurial and progressive, Pamela Myers is ever striving and heading for the top, with an enterprising, ambitious and determined personality, a drive to do things well, and an unyielding dedication to her plan until the goals are achieved.
She bounces back easily from setbacks and can overcome any adversities or obstacles thrown in her way.
There is a danger, however, that her determination and dedication will shift into stubbornness, making Pamela cling to ideas and projects well past their fruitful season. It is a good idea for her to keep a fresh pipeline of ideas, to make it easier to replace outdated plans with new and better ones.
Pamela Myers is a practical and down-to-earth person who can think through any situation or problem.
Her good health and physical stamina allow her to work long and hard for anything that she believes in, and quite often for material rewards.
Pamela utilizes her practical skills, strong physical dexterity, strength, and good health to work with her hands in either a practical or artistic way, such as being a sculptor or craftsman who possesses both physical and mental talents and strives for high-quality end results.
She is easy to get along with, and she has a natural ability to save money and pool economic resources.
April 22nd 2019 combines responsibilities with financial opportunity for Pamela Myers.
It is a day of emotional strength for Pamela, and she may find herself playing the role of counselor and comforter in domestic affairs and among her friends, up to the point of self-sacrifice.
Only when nothing is said about you and you are merely named, are you recognized as you. As soon as something is said about you, you are only recognized as that thing… – Max Stirner
It’s amusing how often people confuse identity with individuality. Identity traces back to a Latin word meaning “sameness.” And sameness implies the existence of something with which I can be the same.
It is certainly possible to conceive of individuals as identical atoms bashing into each other—marxists like to assume that this is what individualists are talking about—but even atoms only become identical when you or I conceive of them as atoms, giving them an identity. Atomization is a process that has its basis in the denial of my unique individuality, and identification plays a part in this process.
Stirner referred to you and I, i.e., to any individual in the flesh at this moment, as “the unique” (der Einzige). In Stirner’s Critics, he explains that this is merely a name, nothing more. To speak, to write, he had to use a name. But, he wrote, “The unique … has no content; it is indeterminacy in itself…” To give it content before I live it out in my world, before you live it out in your world, is to give it an identity, a sameness, to destroy it as unique. To give a conceptual content to the unique is to make it an absurdity.
But even as unique, I am forced to contend with identity. There are the banalities of having to identify myself, for example, when entering a tavern, or when cashing a check, or when stopped by the cops. In every one of these instances, someone has been delegated a certain legal authority to make sure that I am the same as something required by their rules. Am I the same as someone old enough to drink? Am I the same as the one authorized to cash the check? Am I the same as a person with no outstanding warrants? Each of these identities are concepts that I am supposed to live up to. And if I fail, I suffer the consequences. But, in fact, no one is ever the same as any of these things. Even if I can meet each of these challenges to get what I want (some drinks, some needed cash, some distance from the pigs), I am not any of those things. And those who impose these tests on me are my enemies in that they impose abstractions onto my unique self, forcing a conformity to their rules and to a social requirement for personal consistency. They seek to undermine my ownness and with it my uniqueness.
In addition, every ruling social order is set up only to process individuals in terms of categorical identities: race, gender, nationality, sexuality, etc. Though these are all fictions, they affect people physically and mentally. These categories have served as justifications for enslaving individuals, excluding individuals, placing restrictions on individuals, beating and killing individuals, etc., ad nauseam. It makes sense that those who have experienced abuse based on such categorical identities would unite to fight against this abuse and those who carried it out. What doesn’t make sense to me is that most of those who unite for this purpose don’t base their unity on their shared desire to eradicate the abuse, but rather on the categorical identity that has served to justify this abuse. In other words, they choose to unite not as enemies of an order they aim to destroy, but as victims of an order from which they want recognition and justice. A social order can only recognize categories, not unique individuals. Justice can only deal with what can be measured and weighed, i.e., what can be compared and equated. Identity, sameness, belonging to a group: different ways of expressing the requirement for social recognition and justice. I, as an egoist aware of my uniqueness, respond differently, as an enemy, aiming to destroy categorical identity and those who benefit from it immediately as I experience them here and now. If I unite with others, they will be those whose aims and powers enhance my own. Not identity politics, but the destruction of identity and politics, in favor of myself and my associations.

But I am not a moralist. I may well find uses for identity in some sense, even while recognizing that it is always a lie. In fact, I use identity whenever I say “I.” In this word, I identify myself here and now, my immediate concrete self, with my concept of myself in the past. As unique (i.e., as I exist concretely here and now), I am not the same as that, but I choose to unite myself with that, even to the extent of identifying with it, because it gives me a significant power in relating to my world and in interacting with others, just as identifying others with the past forms of these others that I have encountered enhances that power. So here, identity can become my tool. However, here as well, I am not talking about categorical identity, but about personal identity, equations that I make for myself, knowing full well that they are nothing more than conceptual tools for my use, for enhancing my self-enjoyment. If I take them to be myself, I am deluding myself.
Recently, I have come across communiqués from individuals (apparently acting in small groups) who describe themselves as individualist-nihilists and egoist-nihilists, laying claim to various attacks against the ruling order. Anyone who rebels and attacks the ruling order for themselves is certainly my comrade. I feel a kinship with her even if I don’t agree with all of his decisions about how he goes about her action. But I wonder why someone who’s acting for himself, from his own life, feels the need to lay claim to her action at all, let alone by using a group name, creating a group identity. If I choose to attack the ruling order or to act against the law in any other way, this choice springs from the immediacy of my life here and now, and I owe no one an explanation. Nor do I need the inspiration of other actions to move me. It is my own life and my own opportunities that move me. It’s true that a rebellious act may move the rebel with passion so that she wants to express her rage and joy. Then he might write to claim his act, but there is no need to do so and a great deal of wisdom in not doing so. But what I question most in this is that individuals who claim an act in this way are taking on an identity. This is why they have to name themselves (and as beautiful and poetic as some of these names are, they remain labels for an identity). The signed communiqué replaces the immediate fleeting meaning of the action for the unique individuals who carried it out with a permanent meaning intended to explain the action to an audience. With permanent meanings come permanent identities and the unique individuals disappear into this crystallized form. A unique individual, acting for herself, is nameless. She is nameless, because her existence is too immediate and fleeting for any name that is not completely empty of meaning or thought to express him. If he chooses to act, it makes sense for him to act anonymously, without an identity. If she chooses to talk about her act, to make it a matter for conversation or debate, or to let others know that they are not alone in their rebellion, it makes sense for her to do this anonymously as well. It isn’t difficult to figure out how. The individual, acting from his uniqueness, has no need to identify with his action; she was completely in that action at the moment that she did it. In any case, the full implications of claiming one’s acts should be a matter for ongoing debate without taking away from the solidarity and kinship one feels with those who in their rebellion make different choices.
Identity is about defining what you are. As I said, there are moments when playing with such definitions may make sense (or give pleasure). But these definitions, these identities can never be me. They can, however, become prisons locking me into the cell of a role or a set of roles. And if I am not to be a slave, I have to reject these roles, except as occasional masks I may don when it serves my interests. Of course, when I don’t conform to roles, I become unpredictable, I become fleeting, I become unintelligible to the institutions and to those with institutional ways of viewing their worlds. Stirner says, in Stirner’s Critics, that he “names the unique and says at the same time that ‘names don’t name it’…” Precisely as a unique individual I am nameless, precisely as such I have no identity. I am simply myself here and now.
The District, a fully developed, urban city, still hosts many green spaces, with a variety of vegetation types. There are large natural areas covered by undeveloped forests, avenues lined by majestic willow oaks, neighborhood parks with shade trees and community gardens, wetlands filled with wild rice and wood ducks, grassy athletic fields, and native plant gardens that provide habitat for birds and butterflies in the heart of the city. DOEE is working to restore and improve all of these vegetated areas through a number of programs: | https://doee.dc.gov/service/vegetation-trees-flowers-plants |
© 2019, American College of Neuropsychopharmacology. Attention-deficit/hyperactivity disorder (ADHD) is one of the most prevalent neurodevelopmental disorders in childhood and persists into adulthood in 40–65% of cases. Given the polygenic and heterogeneous architecture of the disorder and the limited overlap between genetic studies, there is a growing interest in epigenetic mechanisms, such as microRNAs, that modulate gene expression and may contribute to the phenotype. We attempted to clarify the role of microRNAs in ADHD at a molecular level through the first genome-wide integrative study of microRNA and mRNA profiles in peripheral blood mononuclear cells of medication-naive individuals with ADHD and healthy controls. We identified 79 microRNAs showing aberrant expression levels in 56 ADHD cases and 69 controls, with three of them, miR-26b-5p, miR-185-5p, and miR-191-5p, being highly predictive for diagnostic status in an independent dataset of 44 ADHD cases and 46 controls. Investigation of downstream microRNA-mediated mechanisms underlying the disorder, which was focused on differentially expressed, experimentally validated target genes of the three highly predictive microRNAs, provided evidence for aberrant myo-inositol signaling in ADHD and indicated an enrichment of genes involved in neurological disease and psychological disorders. Our comprehensive study design reveals novel microRNA–mRNA expression profiles aberrant in ADHD, provides novel insights into microRNA-mediated mechanisms contributing to the disorder, and highlights promising candidate peripheral biomarkers. | https://portalrecerca.uab.cat/en/publications/epigenetic-signature-for-attention-deficithyperactivity-disorder- |
Situated at the Clinical Genetics department of the Amsterdam Medical Center (NL), Dr Henneman (PI) is leading a research group studying the epigenetics of complex and non-complex diseases in order to detect novel associated genetic and epigenetic loci. Moreover, Dr. Henneman is closely involved in follow-up translational studies of newly detected loci. Three topics are of particular interest to Dr. Henneman: (1) Early developmental environmental factors and their epigenetic contribution to complex diseases, e.g. food allergy, Type II diabetes mellitus and stress disorders like post-traumatic stress disorder (PTSD). (2) Epigenetics in intervention and translational studies focusing on e.g. PTSD and Crohn’s disease. (3) Clinical diagnostic applications of epigenetic signatures in rare syndromes. In relation to the latter, Dr. Henneman is affiliated with the dept. of clinical genetics, genome diagnostics, where he is closely involved in the implementation and application of DNA-methylation signatures of rare syndromes under the flag of EpiSign, an international collaboration with Canadian (London Health Sciences Centre, Canada, EpiSign, Dr. Bekim Sadokovic) and USA partners. In total, Dr. Henneman’s research group comprises 9 PhD students, 2 bioinformaticians and 1 postdoctoral researcher. In 2017 his first PhD student defended her thesis on complex traits and epigenetic analyses within a large cohort of Ghanaian migrants (promotor Prof. dr. Charles Agyemang). In 2018 Dr. Henneman received substantial funding (in collaboration with Prof. dr. G.R.A.M. D’Haens) from the Helmsley charity foundation (USA) to study therapeutic response to anti-inflammatory therapy in patients suffering from Crohn’s disease in order to develop a personalized treatment protocol. Within this project, Dr. Henneman’s group has also applied single-cell technology in this cohort in order to study the context and origin of the detected differential DNA-methylation profiles (unpublished).
In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This person’s work contributes towards the following SDG(s): | https://researchinformation.amsterdamumc.org/en/persons/peter-henneman |
This study is based on a comprehensive review of psychosocial factors in alcohol dependency presented by 214 African American and white patients at Grady Memorial Hospital. The purpose of this study was to extend and elaborate descriptive research on psychosocial differences in order to create and sustain appropriate prevention and treatment strategies. This research therefore relies on the Biopsychoecological Integrative Contextual Model, which not only permits competing explanations of etiology and epidemiology but also supports prevention and treatment planning. The Prevention Research Institute (PRI), George L. Engel, Jerrold S. Maxmen, Nicholas G. Ward and others have presented models of drinking vulnerabilities that utilize this model. In turn, this study has confirmed the validity of the model and structured its theoretical and empirical approaches within its framework. The model serves as a foundation for generating hypotheses and general research questions set in the context of ethnic experiences among the races and their psychosocial and environmental factors. The contextual information drawn from the model has therefore guided this research to the following question: will there be any relationship and/or difference between the races (African American and white) regarding psychological (self-esteem, attitude, personality, stress, and logical thinking) and social (lifestyle and legal status) factors in alcohol dependency levels (use, abuse, psychological, and physical)? Furthermore, the null hypotheses postulate that there will be no relationship and/or difference between the races regarding their psychosocial factors in alcohol dependency. The rationale for these hypotheses is derived from the literature, which indicates that acculturation strains, conflicts, gaps, stressors, and altered psychological or interpersonal circumstances are related to social disorganization and personal dysfunction, and that these processes are therefore conducive to a wide variety of social and behavioral deviances that carry a high risk of increasing alcohol use and abuse among the races. The findings of this study not only support this rationale but also mitigate perceptions of risk related to cultural, racial, and ethnic factors that have been attributed to African Americans. Patients' races and alcohol dependency levels (dependent variables) enabled this study to address eight cross-ethnic comparisons and cross-behavioral issues (independent variables). This study utilized the Statistical Package for the Social Sciences (SPSS).
Momtahan, Mohammad M., "Psychosocial factors affecting African Americans in alcohol dependency" (2003). ETD Collection for AUC Robert W. Woodruff Library. 3542. | http://digitalcommons.auctr.edu/dissertations/3542/ |
I’m concerned about the accuracy and reliability of some of the ‘expert’ psychiatric evidence given in some recent cases.
In the recent R v Nelson EWCA Crim 1615 case dealing with Sec 45a - Paragraph 39 says:
*"Dr Cumming gave evidence that those released under section 37 and 41 MHA hospital and restriction orders were likely to reoffend in 4 per cent of cases.”
This is, er, questionable.
Recent academic evidence shows a very high rate of recidivism by restricted patients. A long-term study of patients released from the Arnold Lodge medium secure unit in Leicester found 48% (368 patients) were re-convicted after release, 30% (109 patients) for grave offences.
Another study of patients released from high secure psychiatric hospitals found 38% were re-convicted, 26% for serious offences.
And not all offending results in conviction, as these types of offenders are often recalled or diverted away from the criminal justice system. Offending rates by restricted patients will be significantly higher than convictions alone suggest.
A further study of offenders with schizophrenia released from a medium secure psychiatric unit in South East London found two thirds of the total – 67% – reoffended violently.
Dr Cumming’s figures may come from the MoJ stats for 2010-2016, which claim a re-offending rate of 5.7% but only count offending during the first year after release. The MoJ report itself says their figures must be treated with caution.
This is not the first incident I’m aware of where judges have been given partial or inaccurate information by ‘expert’ witnesses. (One expert recently thought the Home Secretary was involved in the release of restricted patients.)
How can Judges ensure they are being given accurate and reliable information?
Is there a mechanism to notify them after the fact when this has happened? | https://forum.mentalhealthlaw.co.uk/t/reliability-accuracy-of-expert-witnesses/265 |
If you would like more informal documentation of myself, please refer to the victor page. Also, please take a look at my resume.
Summary
My name is Victor [vi], and I'm a multidisciplinary designer and developer residing both in Montreal and Tokyo.
I design, develop, and produce static and interactive media. I work with graphics (2D graphics, editing, UI, branding, web, 3D modeling, 3D rendering, animation), audio (music and sound effect production), and programming (fullstack web, game). Using this skillset, my goal is to create unique, innovative, aesthetically powerful, and engaging media to materialize a project's vision - to tell a story.
For businesses, I offer branding consultation, as well as brand design and platform development.
For studios, I offer 2D and 3D asset production (textures, models, rigs, environments, concept art, user interfacing), audio production (effects and music), as well as art direction, game design, and interaction programming.
As the digital art industry has adopted more mixed-media production techniques in the past decades, my skills have become increasingly relevant. My abilities allow me both to work comfortably in a single medium when necessary and to combine multiple media in order to work significantly more rapidly and efficiently. All the while, communication between different experts remains seamless, given my experience in varied fields. Ultimately, these skills are a means to an end: to tell a story in its most impactful form.
With that said, I look forward to putting my skillset towards making our vision into a reality.
Experience
I studied in Montreal, graduating with first class honors from Interactive Media Arts in Dawson College. Currently, I study Computation Arts at Concordia University in Montreal, as well as Waseda University in Tokyo.
What follows is my work experience.
Technical lead and instructor for the sixth installment of the Skins Workshop, in collaboration with Abtec / IFF / Obx Labs.
There, I acted as a technical lead and instructor in the field of game design and audio. I was also involved in teaching programming in C# in Unity over the course of the 1 month workshop. I created and taught the curriculum for audio production, narrative design, and a portion of the programming section. I also helped students (~15 in total) collectively create a game that they had conceptualized. My help ranged from 3D work in Blender, to audio production, to programming in C#, to version control management.
Programming teaching assistant at Concordia University for the Computation Arts program.
There, I helped teach university students how to code in Java and work on personal game development projects.
Research assistant at Obx Labs working on virtual reality and typography.
There, I conceptualized and produced experimental typography projects using both Unity and Unreal Engine for use with the HTC Vive.
Game design teacher at Dawson College.
There, I created and taught a 2-week full-time curriculum centered around familiarizing students with game design, and helping each student produce a personal game using Construct 2.
Web designer at Arthem.co.
There, I dealt with client requirements and front-end design of websites.
Links
- LinkedIn: Professional profile containing work experience, education, skills, and other work-related information.
- GitHub: Code projects repository.
- Bandcamp: Released albums and featured music repository.
- [email protected]: Email address.
Projects
Below is a collection of curated projects that showcase previous work, some of which were for clients, and others personal projects.
For a full list of projects, please visit the projects page.
Visual - 2D and 3D graphics.
Audio - Music and ambient production.
Code - Web platforms, interactive media, and games.
A new study shows that, in addition to species richness, plant evolutionary history plays a critical role in regulating year-to-year variation of biomass production in grasslands. In the face of climate change, understanding the causes of variability in key ecosystem services such as biomass production is essential. A team of researchers led by Dr. Dylan Craven from the University of Göttingen has published the results in the new issue of Nature Ecology & Evolution. They show that multiple factors, including biodiversity and climate, jointly reduce annual variation in grassland productivity.
Biodiversity is much more than just counting species; it also includes diversity in how plants function and in the history of how they have evolved. Despite the growing appreciation for biodiversity and its role in buffering the impacts of vital ecosystem services, these other aspects of biodiversity are frequently overlooked.
An international team of researchers examined how multiple facets of biodiversity contribute to year-to-year variation in grassland biomass production. The researchers measured biomass, which is the dry weight of plant matter (including grasses and other species).
“We show that grassland communities with high species richness and high diversity in evolutionary history show reduced variation in biomass production,” said Dylan Craven, lead author of the study and postdoctoral researcher at the University of Göttingen.
“Our results suggest that greater diversity in evolutionary history makes biomass production in grasslands more stable because these communities are less vulnerable to herbivore attacks or pathogen outbreaks.”
The researchers also found that biomass production of plant communities dominated by slow-growing species typically varied less. Peter Manning, the senior author of the study, said that “we were surprised by these results because we had expected that communities with a greater diversity in characteristics related to plant growth rates would have more stable biomass production but actually species richness as measured by evolutionary history and genetic diversity were better predictors.”
However, the researchers caution that lower year-to-year variation does not imply that grasslands will be more productive, and that measures of stability that consider over- and under-production may be more relevant for agroecological applications.
Dr. Dylan Craven
University of Göttingen – Department of Biodiversity, Macroecology and Biogeography
Büsgenweg 1 – 37077 Göttingen
Phone: +49-(0)551-3910443
E-Mail: [email protected]
www.uni-goettingen.de/en/585428.html
Craven, D. et al. Multiple facets of biodiversity drive the diversity-stability relationship. Nature Ecology & Evolution (2018). doi: 10.1038/s41559-018-0647-7
Thomas Richter | idw - Informationsdienst Wissenschaft
Traffic Control: the Need for Change
As the population of the United States dramatically increases and the number of vehicles on the nation's roads and highways skyrockets, new methods of traffic control and organization have become necessary, whether by utilizing new modes of transportation or by revising the current system. In the past 15 years, the number of vehicles on American roads has increased 41.9%, the number of licensed drivers has increased 29.3%, but the size of the general population has only risen 15.9% (Clark 387-404). Between the years 1975 and 1985, the number of miles driven by Americans rose 34.6%, but the number of miles of roads increased by only 4.4% (Doan 64).
Cars and other vehicles are an enormous cost to society, costing between $300 billion and $700 billion per year. These expenses are caused mainly by traffic accidents, traffic jams, and the environmental hazards created by the large number of vehicles on the road.
Traffic accidents are one of the major reasons that current traffic control techniques need revision. Traffic jams, along with broken-down cars and the lack of alternate routes, account for one half of the traffic congestion in the United States (Clark 387-404). Although the number of traffic accidents in the United States has slowly decreased over the past several years, it is still alarmingly high. In 1990, approximately 7 deaths occurred for every 10,000 people in the United States due to traffic accidents (Wallich 14).
In addition, traffic jams also demonstrate the need for better methods of traffic management. Due to both the increase of women in the work force and the expansion of businesses to the suburbs, traffic jams have increased dramatically over the past few years (Koepp 55). As a consequence of traffic jams, the American population was delayed 722 million hours in 1985 (55), costing the average citizen approximately $800 (Doan 64). In 1984, drivers, while waiting in their cars during traffic jams, used three billion gallons of gasoline (Koepp 55). This figure represents four percent of the total amount of gasoline used during that year (55).
Highways themselves cause a large number of traffic jams in America today. Of the 3.88 million miles of roads in the United States, 92% of them were built before 1960 (Koepp 54). The government has failed to increase the number of roads and highways proportional to the extraordinary increase of vehicles on the road. On major highways in Los Angeles, the most congested city in the United States, the average highway speed is 37 miles per hour, and is expected to drop to 17 miles per hour by the year 2000 (Doan 65).
Problems with traffic congestion arise not only in the United States but also in Europe. In the spring of 1992 on the Nuremberg-Berlin motorway, a 70 kilometer traffic jam occurred during a holiday weekend ("Jam tomorrow" S15-S17). At a standstill for up to 18 hours, many drivers fell asleep and had to be awakened by police officers when the traffic jam began to disperse (S15-S17).
Hazards to the environment also prove the necessity for more worthwhile methods of traffic administration. As many more vehicles make use of United States roads, the amount of poisonous hydrocarbons released into the atmosphere steadily increases. On the average, one out of every four Americans has problems breathing during the peak summer months due to the excess of smog in the atmosphere (Carpender 69). Studies have shown that automobiles produce the majority of this smog (69). This dramatic increase was one of the major reasons for the creation of the Environmental Protection Agency (EPA) in 1970, the same year that the Clean Air Act passed through Congress (Clark 387-404). As the number of automobiles kept increasing, the emissions standards became more rigid in 1980, especially in California (387-404). Governments even passed laws requiring large businesses to provide better and more environmentally safe methods...
Gastronomic consumption practices are one of the main ways of forming the corporeal human identity, a marker of his/her cultural and social status. The specificity of food as a material medium of symbols and signs, which are assimilated at the level of the most direct corporeal experience, determines its consumption as a complex system of communication links. Today, the practice of food consumption in the form of fast food is especially relevant and registers new forms and ways of communication, reflecting not only gastronomic but also power and new gender relations. It can be argued that it is in the form of fast food (Mac-food) that the symbolic content of food has overcome and absorbed its physical and technical aspects.
The paper describes manipulation techniques we face in our everyday life, why manipulation works, and how we can overcome it so as to avoid negative consequences.
This important new book offers the first full-length interpretation of the thought of Martin Heidegger with respect to irony. In a radical reading of Heidegger's major works (from Being and Time through the ‘Rector's Address' and the ‘Letter on Humanism' to ‘The Origin of the Work of Art' and the Spiegel interview), Andrew Haas does not claim that Heidegger is simply being ironic. Rather he argues that Heidegger's writings make such an interpretation possible - perhaps even necessary.
Heidegger begins Being and Time with a quote from Plato, a thinker famous for his insistence upon Socratic irony. The Irony of Heidegger takes seriously the apparently curious decision to introduce the threat of irony even as philosophy begins in earnest to raise the question of the meaning of being. Through a detailed and thorough reading of Heidegger's major texts and the fundamental questions they raise, Haas reveals that one of the most important philosophers of the 20th century can be read with as much irony as earnestness. The Irony of Heidegger attempts to show that the essence of this irony lies in uncertainty, and that the entire project of onto-heno-chrono-phenomenology, therefore needs to be called into question.
The Eastern or Crimean War (1853–1856) phenomenon is a reflection of the fundamental conflicts of the era: the clash of empires’ interests and emerging centers of capital – financial elites. The Crimean War can be regarded as a proto-world war, if only by considering the number of participants. The participants were not united by a common interest, but rather by a common rival. With the commencement of military actions, the common rival became a common enemy. Wars of such a scale usually occur in transitional phases of history, for example, a period of transition from political stability to political fragmentation, or vice versa. The Crimean War was related to the first type of phase: it destroyed international political stability – the Vienna system – and opened the gate to political instability. The war had a chronocultural meaning, and this is one of the Crimean War’s secrets.
The article is concerned with the notions of technology in essays of Ernst and Friedrich Georg Jünger. The special problem of the connection between technology and freedom is discussed in the broader context of the criticism of culture and technocracy discussion in the German intellectual history of the first half of the 20th century. | https://publications.hse.ru/en/articles/103537811 |
At this year’s conference, we are providing the opportunity for members of the RSE community to present their work using posters. Posters allow presenters to present their work in an engaging way and to engage in discussions with conference attendees on a one-to-one basis.
Additionally, conference attendees will be able to vote for their favourite poster and there will be prizes for the best poster, as well as runner-up prizes. The prizes will be announced in the run-up to the conference.
What are we looking for?
We are particularly interested in “RSE Stories” – posters that focus on RSE research, process, and tool development, or projects where an RSE has produced software that has enabled the research or where software has had a significant impact on the project.
Planning your Submission
Please consider the following information when preparing your poster proposal submission.
Required Information
When you submit your proposal, you will be required to provide the following information:
- Title – Should describe what your poster is about. (50 words or less)
- Abstract – A brief and attention-grabbing summary of the content of your poster. (250 words or less)
- Expertise Level – what level of expertise will your poster be aimed at?
- Audience – describe how your submission is of interest to the RSE community and your specific target audience.
Publication
The title and abstract will be published on the conference website as soon as your proposal has been accepted and will be used as part of the advertisement for the poster session.
Poster Guidelines
We want to make sure that all posters are as engaging and accessible as possible. If your proposal is accepted, please consider the following guidelines when designing your poster.
Format
Posters should be:
- A0 sized (841 x 1188 mm)
- Portrait Orientation Preferred
Audience and Outcomes
You should also consider the following:
- Your Target Audience
- Would your audience require any prerequisite knowledge to understand your poster?
- What information could you include to make your poster accessible to as wide an audience as possible?
- Core Message
- What is the core message of your poster?
- Learning Outcomes
- What do you expect your audience to gain/learn from viewing your poster?
Readability Tips
- Please use an accessible, easily readable font
(e.g. Tahoma, Calibri, Helvetica, Arial, Verdana)
- For legible text, we recommend:
- 24pt+ for text
- 48pt+ for titles
- Use an accessible colour palette that makes your content clear and easy to read for everyone. (Google “accessible colour palettes” or check out http://colorsafe.co/)
At the Conference
Posters will be displayed in a designated area, which will be open to all attendees for the duration of the conference. Successful applicants will be asked to meet the Poster Chair in the designated space on the morning of Day 1 of the Conference to set up the posters.
- You will need to print your poster and bring it to the conference venue.
- We will provide boards, hanging materials etc.
Poster Events
Additionally, the following events will take place:
Lightning Talks
Lightning Talks will take place during one of the plenary sessions. Each presenter will present a brief, two (2) minute summary of the content of their poster to conference attendees. You do not need to prepare slides for the lightning talks.
Poster Session
There will be a dedicated Poster Session, where each presenter will be expected to stand beside their posters for the duration of the session to answer questions and discuss the content of the poster. This session is expected to last for approximately one (1) hour.
Judging and Prizes
All conference attendees will be allowed to vote for their favourite poster, and prizes will be awarded to the poster with the highest score. | https://rsecon2022.society-rse.org/posters/ |
I have started my Stitch Your Stash class and can already say this class is awesome! I love the layout and how Charlotte presents each section. One section that really got my attention, and made me think about what I already have, is Build Your Stash. In this section Charlotte explores how to build your stash through repurposing, such as using wool garments for felting, using vintage linens and more. Building a stash is the best part of this creative endeavor and one I can easily do.
I have been a collector of vintage linens for many years and it’s very hard for me to pass them up when hubby and I are out antiquing. This past weekend I found this beautiful hinged vintage tin (made in Holland) that I decided would be perfect for one of the projects in this class, a sewing kit. I also found a few other pieces to add to my stash, the lovely yellow finger towel and an appliquéd piece that I intend to repurpose.
For a bit of slow stitching, I picked up two denim pieces, a vest and a jacket, and plan to try my hand at Sashiko as a decorative addition to these very plain-Jane pieces. I haven’t really tried this form of stitching, so I have enrolled in the Creativebug online class taught by Lisa Solomon. I have been rather enthralled with Sashiko, as well as the many slow stitching techniques I have been seeing online lately, and have been enjoying reading Mending Matters by Katrina Rodabaugh, as well as Make & Mend by Jessica Marquez. Both books are excellent and a wealth of inspiration. I am very excited about Sashiko and can’t wait to try it!
Any academic researching intelligence has to rely on archives. In Britain, the primary repository is the National Archives in Kew, London – a wealth of hidden treasures nestled amongst its millions of records.
Unfortunately, researchers are faced with two problems: secrecy and fragmentation. The overwhelming majority of material on intelligence remains classified. The Secret Intelligence Service (or MI6) has not released any of its own records. The Security Service (MI5) and GCHQ have declassified some material but generally only up to the early 1950s. Issues of national security are exempt from the Public Records Act. Indeed, researchers will be all too familiar with the dreaded red stamp of exemption: “classified under article 3 section 4”. Government weeders can be very thorough, leaving whole documents covered in thick black redactions.
Researchers therefore have to rely on a fragmented archival record. We have to search for titbits of information tucked away inside broader files.
On the one hand this is a frustrating side of studying intelligence. On the other hand, it can be a fun intellectual challenge – to beat the system.
How can historians researching intelligence escape from the twin constraints of secrecy and fragmentation?
1. Patience
As mentioned above, patiently wading through reams of dusty files can provide useful clues or pieces of a larger puzzle. It pays to know the labyrinthine archives well and to understand where gems may lie. I have found MI5 counter-terrorism files from as late as the 1970s in the Maritime and Transport Department of the Foreign Office. The first ever Joint Intelligence Committee paper was found in the India Office records.
2. Freedom of Information
Tony Blair publicly regretted bringing in the Freedom of Information Act, describing himself as a ‘nincompoop’. It can be a useful tool for researchers if we know what to ask for and are able to make specific requests. Although I’ve had many rejections, I’ve had some successes too. A recent batch of documents sent to me from the Foreign Office about launching large-scale deniable operations and lying to parliament and the United Nations felt like Christmas!
3. Interviews
These can take the intrepid researcher all over the world. They offer an exciting chance to meet some interesting characters and add vital flesh to the archival skeleton. My research has taken me from telephone calls with Chris de Burgh’s mother to visits to sprawling estates in the Scottish highlands; from Oxford colleges to dark basement offices in central London. Researching my first book, interviews provided insight into the personalities involved (the feuds, the sharp minds, and the weak links), alongside details on a shady organisation overseeing covert action.
4. More archives
Local archives are a fantastic resource containing the private papers of notable individuals from Prime Ministers to intelligence personnel. The Rhodes House archives in Oxford, for example, hold the diaries of one colonial High Commissioner who was unafraid to lambast Whitehall’s central intelligence assessors. He described officials in London who did not share his views on the threat to Aden as having an “old womanish” attitude. Similarly, the Julian Amery papers in Churchill College Archives in Cambridge give fascinating insight into a man who essentially became an unofficial minister for covert action, from inciting rebellion in Albania to secretly meeting opposition leaders in the South of France to plan the overthrow of the Egyptian government.
Intelligence is one of the few academic disciplines where the subject of study controls the sources. This creates a host of problems, but it also presents an exciting challenge.
Rory Cormac’s book on intelligence and counterinsurgency is out now. He is currently writing a history of British approaches to covert operations. You can follow Rory on Twitter @RoryCormac.
The Lakeside Arts Centre’s exhibition on Secret Intelligence and Hidden Evidence from the University’s archive is now open. | http://nottspolitics.org/2013/09/26/how-to-uncover-state-secrets-through-archival-research/ |
ATTAIN HOUSING is a group of positive, high impact life changers dedicated to empowering others. We are located in downtown Kirkland and maintain 37 apartment units of transitional housing tucked in among neighborhoods in Kirkland and Bellevue. We have celebrated 30 years of helping homeless families become stable.
ATTAIN HOUSING’s staff team are supportive, committed and effective at delivering stable housing, case management and education/life skills. ATTAIN HOUSING’s Board, staff, volunteers and donors are innovative partners who use the power of community to create the opportunity for self-sufficiency. ATTAIN HOUSING is recognized as an effective resource for helping homeless families in King County make positive life changes that lead to self-sufficiency. We offer stability to families in crisis, breaking the cycle of generational poverty and homelessness.
ATTAIN HOUSING partners with individuals and families to provide safe, stable, low cost housing for families facing homelessness. Our professional case managers provide emotional support, financial counseling, life planning, goal setting, and a structured program with the support and related resources needed to help families achieve their goals. A primary goal for all of our families is finding permanent housing, and our case managers are instrumental in supporting this. We create a community network that ensures every family has adequate food, transportation, help with utilities, relevant health and community service referrals, and the education, life skills training, and ongoing support needed to achieve self-sufficiency.
We’re transforming lives. Strengthening communities.
ATTAIN HOUSING’s Vision: A safe and supportive community, where opportunity exists for all families to achieve stable housing and self-sufficiency. ATTAIN HOUSING’s Mission: Bringing our community together to partner with homeless families by providing affordable housing, empowering them with practical case management and hope.
No openings at this time. | https://www.attainhousing.org/who-we-are/join-our-staff/ |
BACKGROUND TO THE INVENTION
SUMMARY OF THE INVENTION
0001 This invention relates to a method of, and apparatus for, allocating bandwidth in a wireless LAN, and in particular to a method of, and apparatus for, adaptive bandwidth allocation in a wireless LAN using any one of the family of 802.11 standards.
0002 In a communications system, such as one operating using 802.11 wireless technology, a hot spot is an area of high bandwidth connectivity, that is to say an area in which high bandwidth connections can be made.
0003 The aim of the invention is to provide a method of, and apparatus for, monitoring and managing the deployment of a wireless LAN, particularly in a hot spot.
0004 The present invention provides a method of allocating bandwidths in a wireless LAN having a plurality of access points each using the same wireless technology for data communication with users, the method comprising the steps of:
0005 a) continuously monitoring bandwidth usage by each of the access points; and
0006 b) re-allocating bandwidth from a low bandwidth usage access point to a high bandwidth usage access point.
0007 Preferably, the access points each use the 802.11 wireless technology.
0008 In a preferred embodiment, the 802.11 wireless technology uses direct-sequence spread spectrum radio (DSSS). In this case, step b) may be such as to re-allocate a first sub-bandwidth of DSSS associated with the low bandwidth usage access point to complement a second sub-bandwidth of DSSS associated with the high bandwidth usage access point, and the method further comprises the step of expanding the coverage of a third access point using the third sub-bandwidth of DSSS for data communication with the users of the access point previously operating under the first sub-bandwidth of DSSS.
0009 Alternatively, the 802.11 wireless technology operates under frequency-hopping spread spectrum radio (FHSS). In this case, step b) may be such as to re-allocate at least one FHSS bandwidth channel from the low bandwidth usage access point to the high bandwidth usage access point.
0010 The invention also provides a wireless LAN constituted by a plurality of access points each using the same wireless technology for data communication with users, wherein the LAN is provided with means for continuously monitoring bandwidth usage by each of the access points, and for re-allocating bandwidth from a low bandwidth usage access point to a high bandwidth usage access point.
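As a rough illustration of steps a) and b), the following Python sketch shows the kind of control loop the control means might run. It is a minimal sketch only: read_usage and move_bandwidth are hypothetical stubs standing in for the access-point polling and reconfiguration interfaces, and the polling interval and imbalance threshold are assumed values, none of which are specified in the patent.

```python
import random
import time

def read_usage(ap):
    """Hypothetical stub: poll an access point for its current bandwidth usage (0-1)."""
    return random.random()

def move_bandwidth(source, target):
    """Hypothetical stub: push a revised channel allocation out to the access points."""
    print(f"re-allocating bandwidth from {source} to {target}")

def control_loop(access_points, poll_interval=5.0, imbalance=0.25):
    while True:
        # step a): continuously monitor bandwidth usage by each access point
        usage = {ap: read_usage(ap) for ap in access_points}
        low = min(usage, key=usage.get)    # low bandwidth usage access point
        high = max(usage, key=usage.get)   # high bandwidth usage access point
        # step b): re-allocate bandwidth when the imbalance is large enough
        if usage[high] - usage[low] > imbalance:
            move_bandwidth(source=low, target=high)
        time.sleep(poll_interval)

# control_loop(["A", "B", "C", "D"])  # runs indefinitely, like the control means described below
```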
BRIEF DESCRIPTION OF THE DRAWINGS
0011 The invention will now be described in greater detail, by way of example, with reference to the drawings, in which:
0012 FIG. 1 is a schematic representation of a hot spot which utilises DSSS technology; and
0013 FIG. 2 is a schematic representation of a hot spot using FHSS technology.
DESCRIPTION OF PREFERRED EMBODIMENTS
0014 Referring to the drawings, FIG. 1 shows a hot spot having four access points A, B, C and D, the ranges of the access points being indicated by the circles A, B, C and D. The access points A to D use the 802.11 wireless technology and operate under DSSS. In DSSS, a data signal at the sending station is combined with a higher data rate bit sequence, or chipping code, that divides the user data according to the spreading ratio. The chipping code is a redundant bit pattern for each bit that is transmitted, which increases the signal's resistance to interference. If one or more bits in the pattern are damaged during transmission, the original data can be recovered as a result of the redundancy of the transmission. A DSSS system spreads the power of the 2.4 GHz frequency band using mathematical coding functions. In practice, DSSS splits the total bandwidth of 802.11 into three equal sub-bandwidth channels.
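To make the chipping-code idea concrete, here is a minimal Python sketch of DSSS spreading and despreading using the 11-chip Barker sequence that 802.11 employs at its 1 and 2 Mbps rates. The helper names and the particular chips flipped to emulate interference are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# 11-chip Barker sequence used by 802.11 DSSS at the 1 and 2 Mbps rates
BARKER11 = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1])

def spread(bits):
    """Map bits 0/1 to symbols -1/+1, then multiply each symbol by the chipping code."""
    symbols = 2 * np.asarray(bits) - 1
    return np.concatenate([s * BARKER11 for s in symbols])

def despread(chips):
    """Correlate each 11-chip block with the code; the sign of the sum recovers the bit."""
    blocks = chips.reshape(-1, len(BARKER11))
    return ((blocks @ BARKER11) > 0).astype(int)

bits = [1, 0, 1, 1, 0]
rx = spread(bits)
rx[[3, 17, 40]] *= -1                   # damage a few chips (narrowband interference)
assert despread(rx).tolist() == bits    # redundancy lets the original data be recovered
```

Because each bit is carried by eleven chips, a flipped chip only dents the correlation peak rather than destroying the bit, which is the redundancy the passage above describes.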
0015 In the hot spot of FIG. 1, each of the access points A, B and C is allocated one of the three sub-bandwidth channels, for example, the access point A may be allocated the first sub-bandwidth channel, the access point B the second sub-bandwidth channel and the access point C the third sub-bandwidth channel. The user within range of each of the access points A, B and C will, therefore, communicate with the relevant access points over the respective sub-bandwidth channel. Where the ranges of adjacent access points A, B and C overlap, users can communicate with one or more of the access points. Users within range of the access point D also communicate with that access point using the first sub-bandwidth channel rather than the second or third sub-bandwidth channel. This is because, as shown, the range of access point D overlaps the ranges of access points B and C, but does not overlap the range of the access point A. Consequently, there is no danger of interference from access point A for users within range of the access point D.
0016 The hot spot is controlled by a control means associated either with one of the access points A to D or provided separately (as indicated by the reference M). The control means M is preferably associated with a server S, to which the access points A to D are connected, conveniently by hard wiring. The control means M continuously monitors the consumption of the bandwidth channels in all areas, and will increase or decrease the allocation in one or more areas in dependence on the number of users within those areas. For example, if the number of users within range of access point A increases substantially, and the number of users within range of the access point B reduces substantially, the second sub-bandwidth channel would be re-allocated to the access point A, and the access point C would be reconfigured by expanding its range to cover the users previously within range of the access point B.
0017 As a DSSS system spreads the power out over a wider frequency band using mathematical coding functions, the widespread signal is correlated into a stronger signal at a receiver, so that any narrow band noise is spread widely. Thus, a system operating under DSSS is susceptible to interference from, for example, noise from microwaves. DSSS has, however, the advantage of a high throughput, and hence a high quality of service (QoS).
0018 The arrangement described above with reference to FIG. 1 could be modified, for example by adding a fifth access point E (shown in dotted lines). This access point would operate using the second sub-bandwidth channel, as access points C and D use the first and third sub-bandwidth channels.
0019 FIG. 2 is a schematic representation of a hot spot similar to that shown in FIG. 1, the hot spot having three access points X, Y and Z whose ranges are indicated by the lines X, Y and Z. In this case, each of the access points operates using 802.11 technology operating under FHSS. This is a technique that uses a time-varying narrow band signal to spread the radio frequency (RF) energy over a wide band. In practice, FHSS divides the 802.11 bandwidth into a large number of smaller bandwidth channels, and the system works by jumping from one frequency (bandwidth channel) to another in a random pattern, a short burst of data being transmitted at each of the frequencies. The technique reduces interference because a signal from a narrowband system will only affect the spread spectrum signal if both are transmitting at the same frequency at the same time. If transmitter and receiver are synchronised properly, a simple logical channel is maintained. The transmission frequencies are determined by a spreading, or hopping, code; the receiver must be set to the same hopping code and must listen to the incoming signal at the right time and correct frequency in order to receive the signal properly.
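A small sketch of the synchronization point made above: if both ends derive their hop order from the same shared code, the receiver knows which channel to listen on at each moment. The seed value is an arbitrary assumption, and the 79-channel figure reflects the hop set 802.11 FHSS uses in the 2.4 GHz band.

```python
import random

def hop_sequence(shared_code, n_channels=79, length=10):
    """Derive a pseudo-random channel order from a shared hopping code (seed)."""
    rng = random.Random(shared_code)
    return [rng.randrange(n_channels) for _ in range(length)]

tx_hops = hop_sequence(shared_code=0x5EED)
rx_hops = hop_sequence(shared_code=0x5EED)
assert tx_hops == rx_hops  # receiver listens on the same channel the sender transmits on, slot by slot
```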
0020 In the hot spot of FIG. 2, the access point X may be allocated four FHSS bandwidth channels f1 to f4, the access point Y may be allocated four bandwidth channels f5 to f8, and the access point Z may be allocated four bandwidth channels f9 to f12. In practice, each of the access points X, Y and Z would be allocated more bandwidth channels, but this system will be described as using only twelve channels for the sake of simplicity.
0021 The hot spot is controlled by control means associated with one of the access points X-Z or provided separately (as indicated by the reference N). The control means N is preferably associated with a server T, to which the access points X, Y and Z are connected, conveniently by hard wiring. The control means N continuously monitors the consumption of the bandwidth channels in all areas, and will increase or decrease it in one or more areas in dependence on the numbers of users within those areas. For example, if the number of users within range of the access point X increases substantially, and the number of users within the range of the access point Y reduces substantially, the control means N will re-allocate one or more of the bandwidth channels associated with that access point to the access point X. For example, bandwidth channels f7 and f8 may be re-allocated to the access point X. It should be noted that bandwidth channels adjacent to those associated with the access point X should not be re-allocated, as they are more likely to cause interference with the bandwidth channels already being deployed by the access point X. If further bandwidth is required in the area covered by the access point X, this could be accomplished by re-allocating, for example, bandwidth channels f9 and f10 from the access point Z.
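The adjacency caution in the passage above suggests a simple selection heuristic. The sketch below is one possible reading, under the assumption that the control means skips any donor channel adjacent to one the recipient already holds and prefers the channels farthest from the recipient's block; the patent does not spell out an exact selection rule.

```python
def reallocate_fhss(alloc, donor, recipient, n=1):
    """Move up to n channels from donor to recipient, never picking a channel
    adjacent to one the recipient already holds (interference risk), and
    preferring channels farthest from the recipient's existing block."""
    blocked = {c + d for c in alloc[recipient] for d in (-1, 0, 1)}
    movable = sorted(
        (c for c in alloc[donor] if c not in blocked),
        key=lambda c: min(abs(c - r) for r in alloc[recipient]),
        reverse=True,
    )
    for ch in movable[:n]:
        alloc[donor].remove(ch)
        alloc[recipient].add(ch)

# The twelve-channel example from the text: X holds f1-f4, Y holds f5-f8, Z holds f9-f12.
alloc = {"X": {1, 2, 3, 4}, "Y": {5, 6, 7, 8}, "Z": {9, 10, 11, 12}}
reallocate_fhss(alloc, donor="Y", recipient="X", n=2)
print(sorted(alloc["X"]))  # [1, 2, 3, 4, 7, 8] -- f7 and f8 move, matching the example
```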
0022 The system described above with reference to FIG. 2 has advantages over that described with reference to FIG. 1 in that it gives greater flexibility, it being possible to allocate extra bandwidth in smaller, more discrete amounts than in the DSSS system. The FHSS system of FIG. 2 also suffers less from problems with noise, but it does have the disadvantage of a smaller throughput and reduced QoS when compared with the DSSS system of FIG. 1. The choice of which system (DSSS or FHSS) to use is, therefore, dependent upon the requirements for throughput, QoS, flexibility and noise.
0023 The choice of wireless technology used will depend upon the requirements of the LAN concerned. Thus, 802.11b can operate at up to 11 Mbps over a relatively wide area of coverage, and 802.11a can operate at up to 54 Mbps but over a narrower range of coverage. Moreover, with 802.11b, DSSS modulation allows up to the full data rate of 11 Mbps, whereas FHSS modulation allows a data rate of only 2 Mbps.
0024 By continuously monitoring bandwidth using the control means M or N, smart allocation of bandwidth can be accomplished. This use of a centralised control system cuts down on the amount of on-air signalling traffic requesting varying amounts of bandwidth, and so increases the amount of on-air available bandwidth. Inevitably, the channel is asymmetric, that is to say a mobile device will usually be the requester of information (such as a web page), so that the amount of uplink traffic is small, but the downlink channel is large. Consequently, the control means M or N is better placed to reserve bandwidth efficiently for the mobiles in its coverage area. | |
The Russian wheat aphid, Diuraphis noxia (Mordvilko) (Homoptera: Aphididae), has been a major economic pest of small grains in the western United States since its introduction in 1986. Recently, a new Russian wheat aphid biotype was discovered in southeastern Colorado that damaged previously resistant wheat, Triticum aestivum L. Biotype development jeopardizes the durability of plant resistance, which has been a cornerstone for Russian wheat aphid management. Our objective was to assess the relative amount of biotypic diversity among Russian wheat aphid populations collected from cultivated wheat and barley, Hordeum vulgare L. We conducted field surveys from May through June 2002 and August 2003 from seven counties within Texas, Kansas, Nebraska, and Wyoming. Based upon a foliar chlorosis damage rating, three new Russian wheat aphid biotypes were identified, one of which was virulent to all characterized sources of Russian wheat aphid resistance. The future success of Russian wheat aphid resistance breeding programs will depend upon the continual monitoring of extant biotypic diversity and determination of the ecological and genetic factors underlying the development of Russian wheat aphid biotypes.
Local benefit sharing is an important aspect of hydropower projects’ support for the development aspirations of project-affected people and communities - beyond direct impact mitigation. This publication features in-depth case studies on how local benefit sharing approaches are put into practice. The case studies provide an inside look at the opportunities and challenges for hydro project developers as they strive to incorporate benefit sharing into their operations. The experiences of projects detailed in this report serve to underscore the complexities associated with identifying an optimal path - and the importance of attending to the unique circumstances of the individual project and affected communities. The case studies draw from different regions around the world, with an eye toward demonstrating the ways in which national and regional norms influence the approach.
Details
- Author: Wojczynski, Eduard
- Document date: 2021/07/27
- Document type: Report
- Report number: 162164
- Volume number: 1
- Total volume(s): 1
- Disclosure date: 2021/07/27
- Disclosure status: Disclosed
- Document name: Case Studies on Local Benefit Sharing in Hydropower Projects
-
Idioma
Inglês
-
Tópicos históricos
Water, Education, Hydropower and Dams, Environment, Natural Resources and Blue Economy, Health,
-
Tópicos históricos
Indústria, Desenvolvimento Rural, Educação, Energia, Recursos Hídricos,
-
Subtópicos históricos
Hydrology, Energia renovável, Water and Energy, Hydro Power, Rural and Renewable Energy, Health Care Services Industry, Educational Sciences,
-
Unidade proprietária
AS - Sustainable Infrastructure (CNGIA)
-
Version Type
Final
-
Series Name
Downloads
COMPLETAR RELATÓRIO
Versão oficial do documento (pode conter assinaturas, etc.)
- PDF oficial
- TXT*
- Total Downloads** :
- Download Stats
-
*A versão do texto é um OCR incorreto e está incluído unicamente em benefício de usuários com conectividade lenta. | https://documents.worldbank.org/pt/publication/documents-reports/documentdetail/757781627388178782? |
(This is an essay I wrote for my environmental ethics class in Fall of 2016. I hope you enjoy!)
Over the summer, I was a camp counselor at Lutheridge, an Evangelical Lutheran Church camp located right outside of Asheville, North Carolina. The camp is located on a forested “mountain” (it’s more of a glorified hill), and campers and counselors sleep in small cabins located throughout different areas of the property. The area is just nature-y enough to make you feel like you can connect with the environment without distractions, but not so nature-y that campers complain. (Unless you have middle school girls who have to shower every day no matter what, but that’s a different story.) During my first week of camp, I had a group of third grade girls who were so excited to be away from home and couldn’t wait to experience everything Lutheridge had to offer. One of these girls was Emily, a short, beach-blonde, energetic child (arguably my favorite camper of the summer) who loved animals. Emily loved all animals — cute ones, ugly ones, big and small animals, and, especially, insects. She had the courage to pick up any crawling thing that made its way into our little cabin and subsequently place it back outside where it belonged. One day, however, we had an incident. Addie, another camper who was not a big fan of nature, spotted a spider in the cabin. “Kill it!” she screamed from her top bunk. “No!!! Don’t kill it, it’s just a spider! It’s not going to hurt you!” Emily shouted back. Taking her Bible from behind her pillow, Addie dropped the heavy book on the spider.
Emily burst into tears.
Taking Emily outside, we sat on the gravel and talked about what happened and why she was so upset. Emily explained that after reading Charlotte’s Web, she couldn’t understand why anyone would want to kill any animal or insect ever. She said that they deserved to be treated the same as humans, and that humans should protect them at all costs.
Emily, my eight-year-old camper, is an animal liberationist.
Careful not to step on Emily’s toes, I told her how her passion for animals made me so happy. However, I explained to her that lots of animals eat spiders, and even though that harms the spider, it benefits the other animals. I then went through a brief explanation of natural selection and the food web and pointed out that she seemed to really enjoy the ham sandwich she had eaten earlier that day. I told her that even though we might want to save every individual spider from every source of harm, it’s just not practical; however, we can make choices that benefit the ecosystem as a whole.
That is how to explain environmental holism to an eight-year-old girl. But that’s not what this essay is about.
As much as I’d love to fill the rest of this paper with stories from camp, that would quickly turn into a book. The point of this anecdote is to show that while initially we may think that an individualistic approach to environmental ethics is the best choice, we quickly see that it has its flaws. In this essay, I will argue that a holistic approach to environmental ethics, rather than an individualistic one, is the most logical way to prevent ecological destruction and improve the standard of living for all human and non-human life.
Holism is the idea that systems (in this case, ecological) and their properties should be viewed as wholes rather than as a collection of parts. Put simply, environmental holism focuses on the ecosystem rather than the individuals of that ecosystem. This can mean the entire biosphere; however, it can also refer to an entire species or a particular ecosystem within the biosphere. Holism argues that the system is morally considerable, and that the system itself has moral priority over the parts of the system. According to holism, the parts are relatively insignificant; the survival of an individual deer does not hold as much value as the survival of the deer species. In his book A Sand County Almanac, Aldo Leopold describes the land ethic as an ethic that “enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land” (Leopold). He also states that humans are plain members of the ecological community and that we should take a biocentric approach towards the biosphere rather than an anthropocentric one. Being plain citizens of the land-community “implies respect for his fellow-members, and also respect for the community as such” (Leopold).
Individualism, unsurprisingly, argues against the idea of holism. This is a type of ethic or worldview where only individuals are valued and the whole system is considered to be no more than the “aggregate of the parts.” In this case, the spider itself holds more value than the spider species. There are many types of individualist theories, such as the animal liberationist theory and biocentric individualism; however, they both hold the idea that the individual being, whether it is a non-human animal or a shrub, has moral priority over the species or ecosystem it is a part of.
To understand the differences between holism and individualism and why holism is a more practical method for ecological preservation, we’ll look at the economic, speciesist, and natural implications of each of these theories.
In his essay “Animal Liberation and Environmental Ethics,” Mark Sagoff argues that “the liberationist must morally require society to relieve animal suffering where it can and at a lesser cost to itself, whether in the chicken coop or in the wild” (Sagoff). My camper, Emily, if she knew what all those words meant, would most likely agree with this concept. However, if we as a society were to adopt this ethic, we would be required to go out of our way to protect all animal life from death, especially in nature, where animals are subject to “predation, starvation, disease, parasitism, [and the] cold” (Sagoff). The economic cost of this proposal would be astronomical. Peter Singer, an animal liberationist, suggests that animals might be fed contraceptive chemicals so that “fewer will fall short to an early and horrible death” (Sagoff). Considering that contraceptive care for women isn’t covered by most health insurance companies, finding money to provide birth control for non-human animals might prove to be somewhat difficult. Because holistic environmental ethics isn’t concerned with individuals, holists are not concerned with relieving individual animal suffering; they are concerned instead with the preservation of species. Nature, if left to its own devices, will preserve itself. Species A will never overexploit species B because A’s survival depends on B. Therefore, a holistic environmentalist will not seek to interfere in natural selection on account of an individual animal’s suffering; humans are plain members of the land-community and must respect the rest of nature.
There are other economic consequences to the individualist perspective; Tom Regan, for example, is against all slaughter of animals for human consumption because by doing so we use animals as a means to our own end. Policy following Regan’s ideals would result in shutting down not only all of the corporate-owned farms but family-owned and local farms as well. This would cause an economic catastrophe that would be felt by every individual throughout the United States. Also, where would these liberated animals go? Assuming they would be released into the wild, they would be subject to the violent causes of death described earlier and could potentially cause a disturbance in pre-existing ecosystems. Instead of shutting down all slaughter of non-human animals, a holistic environmentalist might suggest breaking up large corporate farms into smaller entities, enforcing sustainable and environmentally-friendly farming practices, and increasing education for farmers and the general public about these farming techniques and their importance. This would provide economic benefits (sharing wealth and increasing jobs), ecological benefits (less stress on animals, surrounding ecosystems, and the land), and social benefits (more people see farming as an accessible and profitable career).
Finally, in his essay “The Conceptual Foundations of the Land Ethic,” J. Baird Callicott argues that the individualistic basis for moral standing isn’t ethical. The egotist assumption that many individualists base their theories on is the idea that “I should treat others well because I want others to treat me well.” We find some feature in ourselves that we believe deserves moral standing and then test this feature on other members of ecological society to see if they deserve moral standing. Callicott believes this is a flawed basis of ethics. He seeks to take the so-called “measuring stick of moral standing” away from humans, claiming instead that ethics rests upon sentiments, emotional reactions towards an action by someone or something else. Callicott claims that all plain members of the land-community have sentiments and therefore have a basis for ethics. He also argues that while humans are still plain members of the community, we have removed ourselves from nature but still have moral obligations to it due to our existing sentiments. This interpretation of Leopold’s land ethic allows us to benefit the entire ecological community while still acting as plain members of its society.
If I had attempted to explain environmental holism like this to Emily, it would have gone right over her little head. Admittedly, it can be easier to explain an individualistic ethic to a child because children tend to think mainly of themselves, the individual. They aren’t used to thinking of others and how their actions will affect the group; rather, they want to do what they want to do, and that doesn’t always line up with what everyone else wants to do. But when we teach children that we shouldn’t harm animals because “we wouldn’t want to be treated the same way,” they don’t see themselves as part of the biosphere, and it can cause them to think some actions towards the ecosystem are okay because they would be okay if it happened to them. If environmental conservation and preservation start with education, then we must start educating children the right way. We need to teach children that they are plain members of the ecological community and that we have a moral obligation to treat it with respect. They need to understand that environmentalism isn’t just about saving individual spiders from the wrath of screaming third-graders (or college students, for that matter) but rather making significant changes in our community that benefit the entire biosphere.
Crystal Ball Senior Columnist Alan Abramowitz’s discussion of his Senate elections model not only confirms how close we expect the 2014 Senate campaign will be, but it’s also quite timely, as the Republicans yesterday expanded their list of credible Senate targets. Rep. Cory Gardner (R, CO-4), after previously declining to challenge Sen. Mark Udall (D-CO), is now reportedly entering the contest. Assuming he advances to the general election, Gardner should give Udall a stiffer challenge than the other Republicans in the field. This news moves the Colorado Senate race from Likely Democratic to Leans Democratic. — The Editors
The battle for control of the U.S. Senate is where the action is this year in American politics. Right now all signs point to a near standoff in the U.S. House elections. Barring a major change in the political environment in the next few months, the 114th House is expected to closely resemble the 113th House with a slightly larger or slightly smaller Republican majority. In contrast, party control of the next Senate is definitely up for grabs this year.
The main reason why Democrats are at risk of losing control of the Senate in November is not because of public discontent with the Affordable Care Act, continued weakness in the economy or President Obama’s mediocre approval ratings. All of these issues may have an impact on the Senate elections. But the Democrats’ biggest problem this year is that they were so successful in the 2008 Senate elections. While Barack Obama was capturing the White House in 2008, Democrats gained eight net seats in the Senate, winning 20 of the 35 seats at stake.
Now Democrats must defend all of the seats that they won six years ago, including several in states that usually support Republicans. Of the Democratic seats up for grabs this year, seven are in states that were carried by Mitt Romney in 2012, including six that Romney won by a double-digit margin. In contrast, Republicans are only defending one seat in a state that was carried by Obama in 2012 — Sen. Susan Collins’ seat in Maine. And Collins is so popular that she isn’t a credible Democratic target.
A simple model based on a large body of political science research allows us to make fairly accurate predictions of seat swing in midterm U.S. Senate elections. This model is almost identical to one that I have used to accurately forecast seat swing in midterm House elections. The three predictors are the results of the generic congressional ballot question in national polls in early September; the difference between the number of Republican seats and the number of Democratic seats at stake in the election; and a dummy variable for the president’s party.
The generic ballot question provides a measure of the mood of the electorate two months before the election. Even though it asks specifically about House voting intentions, the generic ballot question predicts Senate voting as well. The seat difference variable is a measure of party exposure in the election: The more seats a party has at risk relative to the opposing party, the more seats it tends to lose. Finally, the dummy variable for the party of the president reflects the tendency of the president’s party to lose seats in midterm elections even after controlling for the other predictors in the model.
Table 1: Results of regression analysis of Senate seat swing in midterm elections
Source: Data compiled by author
Table 1 above presents estimates for the effects of the three predictors on change in Republican Senate seats based on data for all 17 midterm Senate elections since World War II. The estimated coefficients for all three predictors are in the expected direction and highly statistically significant. Moreover, the model explains an impressive 75% of the variance in Senate seat swing. This is substantially less than the 90% of variance in seat swing explained by the House forecasting model. However, that is to be expected given the much smaller number of Senate seats at stake in each election and the larger proportion of competitive contests in Senate elections. There is simply more random variation in Senate seat swing because of the effects of campaigns and candidate quality. The only way to produce a more accurate forecast of the outcome in November is to factor in these seat-specific characteristics based on in-depth information from on-the-ground sources — the sort of information used by the Crystal Ball in making its exceptionally accurate final predictions.
Even at this early stage, however, estimates for two of the predictors in our model make it clear that 2014 is likely to be a difficult year for Democrats in the Senate. First, based on the fact that there are six more Democratic than Republican seats at stake this year, Democrats would be expected to lose 2.4 Senate seats compared with an election in which there were equal numbers of Democratic and Republican seats at stake. Second, the fact that there is a Democrat in the White House means that Democrats can expect to lose about four more seats than they would expect to lose in a midterm election with a Republican in the White House.
When we add together the effect of the seat exposure variable and the effect of the midterm dummy variable, and factor in the intercept of -0.9 in the regression equation, Republicans start off the 2014 campaign with an expected gain of 5.7 seats. That is very close to the six seats that Republicans need to regain control of the Senate.
The only predictor that is not yet set for 2014 is the generic ballot variable, and that will not be known for several months — results much earlier than the late summer or early fall have little or no predictive validity. However, at this early date we can make conditional predictions of seat swing in the 2014 Senate elections depending on the results of the generic ballot question in early September. Those conditional predictions are found in Table 2.
Table 2: Conditional forecasts of change in Republican Senate seats
Source: Data compiled by author
The conditional forecasts in Table 2 above make clear once again that the fundamentals in 2014 are very favorable for Republicans in the Senate elections. Even if Democrats have a 10-point lead in generic ballot polling in early September, Republicans would still be expected to gain between three and four Senate seats because of the GOP advantage on the seat exposure and midterm party variables. However, a Democratic lead of five or more points in generic ballot polling would give Democrats a better than 50/50 chance of retaining control of the Senate. On the other hand, a Republican lead of five or more points would almost ensure a GOP majority in next year’s Senate.
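To make the arithmetic behind these conditional forecasts concrete, here is a minimal sketch in Python that rebuilds the point forecast from the figures quoted in the text. The intercept (-0.9), the seat-exposure effect (2.4 seats for this year's six-seat gap, roughly 0.4 seats per seat of exposure difference), and the midterm effect (about 4.2 seats) are taken from the prose above; the generic-ballot coefficient is a hypothetical placeholder, chosen only to be broadly consistent with the ranges discussed, since the regression table itself is not reproduced here.

```python
# Minimal sketch of the Senate seat-swing model described above.
# Coefficients for the intercept, seat exposure, and midterm dummy are
# taken from the article's prose; GENERIC_BALLOT_COEF is a hypothetical
# value, since the underlying regression table is not reproduced here.

INTERCEPT = -0.9            # regression constant, from the text
EXPOSURE_COEF = 0.4         # GOP seats gained per extra Democratic seat at stake (2.4 / 6)
MIDTERM_EFFECT = 4.2        # extra GOP gain in a midterm under a Democratic president
GENERIC_BALLOT_COEF = 0.22  # hypothetical: GOP seats per point of GOP generic-ballot lead

def forecast_gop_seat_change(seat_exposure_gap, gop_ballot_lead, dem_midterm):
    """Predicted change in Republican Senate seats.

    seat_exposure_gap: Democratic seats at stake minus Republican seats at stake.
    gop_ballot_lead: GOP generic-ballot margin in points (negative = Democratic lead).
    dem_midterm: True for a midterm with a Democrat in the White House.
    """
    return (INTERCEPT
            + EXPOSURE_COEF * seat_exposure_gap
            + GENERIC_BALLOT_COEF * gop_ballot_lead
            + (MIDTERM_EFFECT if dem_midterm else 0.0))

# 2014 baseline: six more Democratic seats at stake, Democratic midterm.
for lead in (-10, -5, 0, 5):
    print(f"GOP ballot lead {lead:+d}: {forecast_gop_seat_change(6, lead, True):+.1f} seats")
```

With a tied generic ballot this reproduces the 5.7-seat GOP baseline; a ten-point Democratic lead still leaves an expected Republican gain of between three and four seats, matching the pattern described above.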
The 2014 elections and the future of the Senate
Because there is a Democrat in the White House and Democrats have 21 seats at stake compared with only 15 seats for Republicans, the GOP is almost guaranteed to make gains in the 2014 Senate elections. Based on the generic ballot forecasting model, Democrats will need a cushion of at least five points on the generic ballot question in early September in order to have a better than 50/50 chance of keeping control of the Senate. The good news for Democrats is that any Republican majority in the 114th Senate is almost certain to be very narrow, and the Senate seats that are up for grabs in 2016 would give them an excellent chance of regaining control of the Senate in that year.
Regardless of which party is in the majority next year, the long-term outlook for the Senate is for a continuation of narrow majorities with party control at stake every two years for the foreseeable future. This is a situation that is likely to encourage more of the intense partisan conflict that has characterized the Senate in recent years. Moreover, the next Senate is almost certain to be even more ideologically polarized than the current Senate. The replacement of several moderate Democrats with conservative Republicans will increase the already wide ideological divide between the parties. This may lead to further erosion of the filibuster rule and other Senate practices that allow the minority party to obstruct, delay or defeat decisions supported by the majority party. For better or worse, the end result of these trends is likely to be the transformation of the Senate into a body that much more closely resembles the House of Representatives.
Since the conclusion of the Republican and Democratic national conventions last month, pundits, political reporters, and ordinary Americans have, for understandable reasons, been preoccupied with developments in the presidential campaign. And the contest between Hillary Clinton and Donald Trump has certainly provided plenty of material for serious political observers as well as late night comics. With the presidential contest getting so much coverage in the national media, however, much less attention has been devoted to the critical battle for control of the next Congress. Regardless of the outcome of the presidential election, whether Republicans or Democrats control the House and Senate will have enormous consequences for the direction of the country and the ability of the next president to carry out his or her agenda.
At present, Republicans hold a 247 to 186 seat majority in the House of Representatives (with vacancies in two formerly Democratic-held seats that the party will easily hold onto). All 435 House seats and 34 of the 100 Senate seats are up for election this year. In reality, however, only around 50 House seats and perhaps a dozen Senate seats are really in play — the rest are completely safe for one party or the other. Nevertheless, there are enough seats in play that there is some uncertainty about which party will end up in control of the House and a great deal of uncertainty about which party will end up in control of the Senate.
So what should we expect in the House and Senate elections this year? A simple forecasting model based on three predictors — the number of Republican seats at stake in the election, support for the two major parties on the “generic ballot” question in national polls, and whether it is a midterm election under a Democratic or Republican president — yields fairly accurate predictions of seat swing in the House and Senate. The number of Republican seats at stake is a measure of exposure to risk: the more seats Republicans have at stake, the more seats they are likely to lose. The generic ballot, based on polls asking voters whether they would prefer a Democratic or Republican candidate for the U.S. House without naming the actual candidates, is an indicator of the national political climate. Finally, the midterm election variable indicates whether an election is a Republican or Democratic midterm. There is a strong tendency for the party holding the White House to lose seats in midterm elections. However, 2016 is a presidential election year, so the midterm variable is not relevant.
Table 1: Results of regression analyses of House and Senate election outcomes, 1946-2014
Notes: Dependent variable is change in Republican seats; SEE = standard error of estimate. No generic ballot data available for 1948.
Sources: Data compiled by author
Prediction equations for 2016 are presented in Table 1. The estimated weights for the three predictors and intercepts (constants) are based on OLS regression analyses of data on U.S. House and Senate elections between 1946 and 2014. The results indicate that for both House and Senate elections, all three predictors have substantial and highly statistically significant effects. As expected, seat exposure has large and highly significant negative effects on seat swing in both House and Senate elections, the generic ballot variable has substantial and highly significant effects in both types of elections, and the midterm variable has large and highly significant effects in the expected direction in both types of elections. However, while all three predictors have strong and statistically significant effects for both types of elections, the model is much more accurate for House elections than for Senate elections, explaining about 82% of the variance in House seat swing compared with only about 63% of the variance in Senate seat swing. This is exactly what one would expect given the much smaller number of Senate seats at stake in each election and the larger proportion of competitive Senate races.
Conditional Forecasts
Two of the three predictors in this model are already set for the 2016 election. We know that Republicans will be defending 247 of 435 House seats and 24 of 34 Senate seats this year, and we know that 2016 is a presidential election year so the midterm variable does not favor either party. The only unknown for 2016 is the value of the generic ballot variable. Table 2 therefore presents conditional forecasts for House and Senate seat change based on values for the generic ballot variable ranging from a two-point Republican lead to a 14-point Democratic lead.
Table 2: Conditional forecasts of change in Republican seats in House and Senate
Sources: Table 1 and data compiled by author
The results in Table 2 indicate that for almost any conceivable values of the generic ballot variable, Democrats are likely to make gains in both the House and Senate. That is largely due to the fact that, as a result of their successes in the 2010 and 2014 midterm elections, Republicans are defending unusually large numbers of seats in both chambers this year. However, the results indicate that in order for Democrats to gain the minimum of four seats they need to regain control of the Senate (if there is a Democratic vice president to break a 50-50 tie), they would probably need a lead of at least two or three points on the generic ballot; to gain the minimum of 30 seats they need to regain control of the House, they would probably need a lead of at least 13 points.
According to HuffPost Pollster, results of recent national polls give Democrats an average lead of five points on the generic ballot. If that lead were to hold up until the week after Labor Day, the traditional cutoff date for the generic ballot forecast, Democrats would be expected to gain about 16 seats in the House and about four seats in the Senate — not enough to flip control of the House but enough to flip control of the Senate if Clinton wins the presidential election.
Of course any forecasts based on a statistical model are subject to a margin of error. In this case, the results in Table 1 indicate that if Democrats maintain a five-point lead in the generic ballot, they would be very likely to pick up between six and 26 seats in the House and between two and six seats in the Senate. They would have about a 50% chance of regaining control of the Senate (if there is a Democratic vice president) but less than a 15% chance of regaining control of the House.
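To see how a point forecast and its standard error translate into the probabilities quoted above, the sketch below assumes normally distributed forecast errors. The standard-error values are hypothetical placeholders (Table 1 reports the actual standard errors of estimate, which are not reproduced here); the point forecasts are the +4 Senate and +16 House figures from the preceding paragraphs.

```python
# Minimal sketch: converting a point forecast plus a standard error into the
# probability of reaching a seat threshold, assuming normal forecast errors.
# SENATE_SE and HOUSE_SE are hypothetical placeholders; the article's Table 1
# reports the actual standard errors of estimate, which are not shown here.
from statistics import NormalDist

def prob_at_least(point_forecast, threshold, se):
    """P(actual seat gain >= threshold) under a normal error model."""
    return 1.0 - NormalDist(mu=point_forecast, sigma=se).cdf(threshold)

SENATE_SE = 2.0   # hypothetical, in seats
HOUSE_SE = 8.0    # hypothetical, in seats

# With a five-point Democratic generic-ballot lead: +4 Senate, +16 House.
print(round(prob_at_least(4.0, 4.0, SENATE_SE), 2))   # ~0.50: Senate control (with a Dem VP)
print(round(prob_at_least(16.0, 30.0, HOUSE_SE), 2))  # ~0.04: consistent with "less than 15%"
```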
Sciatica caused by injury is a common diagnostic conclusion when symptoms follow a traumatic event. However, this verdict is not always accurate, since sciatica might be blamed on a perceived injury when no actual anatomical damage has occurred, or unfairly linked to an actual injury that is merely coincidental to the symptoms. Injury is a great scapegoat in the back and neck pain sector of healthcare, with many patients suffering terrible mind and body effects due to the overuse of the term and the lack of evidence proving that any lasting damage has been done.
Back and neck injuries can surely happen. Injuries occur every day on a massive scale. People can hurt themselves or be hurt in endless ways, and some of these traumas are severe and will produce enduring symptoms. However, the body is designed to heal, and heal it will, in all but the most extreme circumstances. Healing should bring an end to pain that is a direct result of the injury; in fact, pain will cease long before full healing has occurred. However, the statistics of sciatica defy these anatomical facts. Why does sciatica persist far longer than clinical expectations in virtually every patient?
This vital essay discusses the role of injury in the generation of sciatica symptoms. We will discuss actual injury leading to sciatica, perceived injury leading to sciatica and the misdiagnosis of sciatica as a direct result of either injury or perceived injury. Hold on; this will be a winding road, but is a must-read for anyone who blames their sciatica pain on an injury.
Injury can occur to the spine or to the sciatic nerve itself. Either way, sciatica symptoms can result. The sciatic nerve is formed by five nerve roots located between L4 and S3 in the lower reaches of the vertebral column. This lumbosacral spinal area is the zone that degenerates more than any other location in the spine and is also an area that is relatively easy to injure, since it bends and flexes frequently and considerably, often under great stress loads.
Many factors can cause injury to the backbone and this injury can hasten and worsen the type and degree of structural deterioration in the lumbosacral spine. Injury can occur directly to the spinal nerve roots, such as when severed, lacerated or otherwise damaged by trauma. However, more commonly, injury will affect some other structure that will subsequently affect the spinal nerve roots, such as a herniated disc or fractured vertebra that compresses one or more of these nerves.
Injury might affect the spinal cord far above the lumbosacral spine, potentially damaging the tracts of cells that will eventually be involved in messaging between the sciatic nerve and the brain. Injury can also affect the fully formed sciatic nerve, such as in the case of compression, laceration or penetration of the large nerve in the buttock or leg.
Some of the most common causes of direct injury to the sciatic nerve or its component nerve roots include motor vehicle collisions, falls, sporting traumas, acts of violence and surgical interventions.
Not all "injuries" actually involve anatomical damage of any significant degree. Some injuries produce very transient pain and the impression of being hurt, even though there are no lingering consequences. We see many patients who are in pain and believe that they hurt themselves doing some particular activity or during some specific event. However, there is no evidence of any bodily injury, despite the patient complaining that the event brought on pain that has not stopped since.
The mind is very powerful and the suggestion of injury is enough to produce pain in all of us. We have been conditioned to expect long-lasting pain when we are injured, through exposure to opportunistic doctors, medical advertisements and a very misinformed legal system. The nocebo effect of continuing warnings about the effects of injury throughout life has most humans expecting to be injured at the slightest traumatic happening. All of this is a complete fabrication. The human body is unbelievably strong and capable. One only needs to look at the diversity of accomplishments we collectively achieve on a daily basis with our bodies to understand how durable we actually are. We are not easily injured; instead, we are naturally resistant to injury.
This is not to say that injuries do not occur, since they do with regularity. However, in most instances where an "injury" is reported, the actual damage is not noteworthy and certainly nothing that could ever produce a chronic pain syndrome, such as sciatica, through anatomical factors alone. This is indisputable scientific fact.
If we had a dollar for each time a patient has informed us that their sciatica started after an innocent event, we would be the richest people on Earth. Some of the commonly cited explanations for decades of pain include lifting a minor weight (under 20 pounds), opening a window, sneezing, coughing, getting up from a seated position or working around the home. Do these sound like the ideal circumstances for real injury to occur?
Regardless of whether actual injury occurs or not, we often take our cue to blame pain on a traumatic event from our doctors. If a doctor diagnoses us with an injury that has caused us to suffer sciatica, we tend to believe them. We also tend to hold on to their diagnosis, even if it becomes quite apparent that the verdict is completely mistaken, as in cases where 20 years of intractable pain are blamed on normal disc degeneration said to have been caused by bending to pick up a 10-pound box. This is preposterous.
Many injuries will leave anatomical evidence, but are not necessarily the direct cause of sciatica. They are incidental and completely non-influential occurrences to the sciatica symptoms. Of course many traumas do not produce actual and verifiable evidence of injury, but are still blamed by doctors as the origin of sciatica. Misdiagnosis of the root causes of sciatica is the single most widespread and significant problem facing patients today.
It is vital to remember that the body has a main directive imparted by millions of years of natural engineering. This primary function is to heal. The body heals each day it is alive and this imperative is so strong that much of the body continues to heal for a short time even after death. We know all this as fact, but we still subscribe to the idea that minor injuries or even nonexistent perceived injuries are the main reasons to explain the horrific types of chronic sciatica pain blamed upon them. Are you ready to think outside the box and get to the heart of why you really have sciatica? We can help. Start by reading our comprehensive section on sciatica causes.
The simplest fault detector employed in electrical apparatus is the simple impedance relay which continuously calculates the ratio of voltage across the apparatus to current through it. When a fault occurs, the impedance of the apparatus changes and the fault detector registers a fault. The problem with this protection system is that it is not very sensitive and when applied to air core reactors simply cannot detect the small faults which can occur in these devices.
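As a rough illustration of the principle (not of any particular relay product), the sketch below computes the apparent impedance from measured voltage and current and flags a fault when it leaves a tolerance band around the nominal value. The nominal impedance and tolerance are illustrative assumptions; note how a small turn-to-turn fault shifts the impedance only slightly and stays inside the band, which is precisely the insensitivity described above.

```python
# Illustrative sketch of a simple impedance relay: track Z = V / I and flag
# a fault when the apparent impedance leaves a band around its normal value.
# NOMINAL_OHMS and TOLERANCE are illustrative assumptions, not values from
# the text.

NOMINAL_OHMS = 50.0  # normal apparatus impedance (illustrative)
TOLERANCE = 0.10     # +/-10% band before the relay operates (illustrative)

def impedance_fault(voltage, current):
    """Return True when V/I deviates from the nominal impedance beyond tolerance."""
    if current == 0.0:
        return True  # no current at applied voltage: treat as an open-circuit fault
    apparent_z = voltage / current
    return abs(apparent_z - NOMINAL_OHMS) / NOMINAL_OHMS > TOLERANCE

print(impedance_fault(5000.0, 100.0))  # Z = 50.0 ohms -> False (healthy)
print(impedance_fault(5000.0, 102.0))  # Z = 49.0 ohms -> False (small turn-to-turn fault missed)
print(impedance_fault(5000.0, 140.0))  # Z = 35.7 ohms -> True (gross fault detected)
```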
Differential protection systems have been applied very successfully to iron cored electrical apparatus like generators, transformers and iron cored reactors. Current transformers at either end of the apparatus compare the currents entering and leaving the winding. When a ground fault occurs, the current leaving is not equal to the current entering the winding and the detector registers a fault. Winding to ground faults cannot easily occur on an air cored winding and therefore the system is not useful for air cored reactors.
In another known differential relaying system, useful to protect electrical apparatus in which the winding comprises two identical halves connected in parallel, current transformers continuously compare the currents in two halves of the winding and when a fault occurs in either winding the resulting imbalance in currents produces a detector signal which signifies that a fault has occurred. The difficulty with this scheme when applied to any air core reactor is that it is unable to detect a turn to turn fault in many reactors, particularly in those reactors which consist of a very large number of windings in parallel.
In a variant of the preceding, a single detector is used to detect a fault in any one phase of a three phase system. It works in essentially the same manner as the preceding system, but in this arrangement a single detector is able to detect when a fault occurs in any one of the three windings of a three phase device. When applied to air cored reactors, the system suffers from the same limitations as the preceding system, namely that it is not sensitive enough to detect turn to turn faults in many air cored reactors even though these turn to turn faults can quickly cause extensive damage to the reactor and often to other devices to which the reactor is connected.
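The parallel-path comparison underlying these differential schemes can be sketched as follows; the pickup threshold is an illustrative assumption. In a reactor built from a very large number of parallel windings, a single shorted turn changes each branch current only marginally, so the imbalance can remain below any practical pickup setting, which is the limitation just described.

```python
# Illustrative sketch of differential protection for a winding made of two
# identical parallel halves: a healthy winding splits the current evenly, so
# the detector operates on the imbalance. PICKUP_AMPS is an illustrative
# assumption, not a value from the text.

PICKUP_AMPS = 5.0  # imbalance required before the detector operates (illustrative)

def split_winding_fault(current_half_a, current_half_b):
    """Return True when the two half-winding currents diverge beyond pickup."""
    return abs(current_half_a - current_half_b) > PICKUP_AMPS

print(split_winding_fault(100.0, 100.2))  # balanced halves -> False (healthy)
print(split_winding_fault(100.0, 99.0))   # slight shift from a turn-to-turn fault -> False (missed)
print(split_winding_fault(100.0, 80.0))   # gross fault in one half -> True
```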
The system to be described in the next section overcomes at least some of these limitations and is able to detect the smallest of faults in air core reactors, and furthermore has the decided advantage that the detector current is directly proportional to the severity of the fault that has occurred.
- The number of books on a shelf can be described by a single cardinal number.
书架上图书的数量是一个基数。
- The seed is the base number used for the generation of pseudo-random numbers.
种子是用于生成伪随机数的基数。
- The unknown number can be calculated from the data.
通过已知数可以算出未知数。
- You should number the pages of the thesis.
你应该计算一下论文的页数。
- In a floating-point representation, the numeral that is multiplied by the exponentiated implicit floating-point base to determine the real number represented.
Several people – over Thanksgiving break, and again this morning – have posed questions to me about whether a focus on getting more technology in the classroom is well-justified. After all, tying technology infusion in the classroom directly to student achievement, for instance, is very difficult (although the same is true of virtually any initiative, the effects of which are difficult or impossible to fully disentangle from other factors impacting students, teachers, and the learning environment).
While I’ve written in other articles about the basic justification for some of these things, I thought I’d post – for those of you with insomnia, perhaps, and nothing to do for an hour or so – a lengthy portion of something that I wrote a few months ago that details some of the more specific needs for technology access in schools, along with some of the research related to the success of educational technology implementations. Warning: what follows is long and potentially boring, and is not well-formatted for a blog post. Skimming is encouraged.
Introduction
Technology was once considered a luxury in the classroom, useful for rewards and supplementation at the fringes of the curriculum (Ertmer, Addison, Lane, Ross, & Woods, 1999). This secondary emphasis could be seen in the typical manifestation of educational technology in schools, which frequently included a handful of Apple II computers, and – in larger or wealthier schools – a small computer lab for business applications (Becker, 1991). Just as the A Nation at Risk report (Gardner, 1983) had spurred the nation’s interest in investing in educational reform, particularly in terms of improving competitiveness in math, science, and social studies (Maranto, 2015), the dotcom explosion of the 1990s served as a wakeup call to school and economic leaders that the United States had fallen behind in terms of technology education (Blackley & Howell, 2015). In 1996, President Bill Clinton unveiled a technology modernization program aimed at putting a “computer in every classroom,” which ushered in an era where computers were seen as invaluable tools to support the learning process through universal connections to online resources and availability of digital tools (Coley, Cradler, & Engel, 1997).
President Clinton’s classroom connectivity initiative coincided with a dramatic increase in the percentage of homes in the United States that had a computer, and perhaps more importantly, were connected to the internet (Morris & Ogan, 1996). Prior to 1994, internet was only widely available in institutions, such as universities, hospitals, and research facilities (Hargittai, 1999). With the launch of consumer internet services such as America Online, Prodigy, and CompuServe, home internet access became a reality, and the amount of content available online began to increase at an exponential rate (Coffman & Odlyzko, 2002). By 2003, over 54% of U.S. households had a device that was capable of connecting to the internet, up from 18% in 1997 (File, 2013). In 2015, 84% of Americans reported using the internet, including 51% of high school students who carry smartphones capable of accessing internet resources on a daily basis (Cavanagh, 2013; Perrin & Duggan, 2015).
As the ubiquity of internet access increased, so did the degree to which individuals and organizations relied upon internet resources. Print, telephone, and software-based encyclopedias, application forms, government policies, and financial tools were rapidly replaced with online equivalents. The necessity of technology and internet access in modern society now extends to social and economic aspects of our daily lives. In 2015, 65% of American adults were active on some form of social media – such as Facebook, Twitter, or Instagram – and teenagers in the United States spent over nine hours per day viewing or interacting with digital media (Perrin, 2015; Sharif, Henry, & Nelson, 2015). Applications for college or jobs will generally be completed online, and the new jobs being created for which recent graduates will apply are disproportionately in STEM-related fields – science, technology, engineering, and math (Hanson & Slaughter, 2015). Basic computer skills are often required to complete employment-related functions even in positions not related to technology, such as accessing email or an online records system to retrieve a paystub or submit benefits information.
In higher education, one-third of college students took an online course in 2013, a percentage that has been increasing since 2005 (Lederman, 2013). Retention of students enrolled in online courses lags behind retention of students taking in-person courses, a factor that has been partially attributed to difficulties students face in navigating the technological requirements for online coursework (Estes, 2015). Many universities require that all enrolled students own a computer, and the majority of college courses include an online component via a learning management system (LMS) such as Blackboard or Moodle (Fathema, Shannon, & Ross, 2015). Despite these requirements, few colleges provide information literacy or introductory training programs (Badke, 2016).
Despite the prevalence of and need for technology, access to and awareness of technology is not evenly distributed among members of different social groups in the United States. The device availability and internet access rates among adults with incomes below $30,000 per year is substantially lower – 74% vs 95% in 2015 – than among adults with household incomes above $50,000 per year (Perrin & Duggan, 2015). Young people from impoverished households are also more likely to drop out of school (Ceballos & Sheely-Moore, 2015), be convicted of a crime (Males, 2015), and to be unemployed as adults (Ullucci & Howard, 2015). This digital divide – whereby poorer members of society are less likely to be able to access online and other technological resources that can contribute to development of academic, social, and economic skills and credentials – contributes to maintenance and expansion of achievement and employment gaps, and disproportionately impacts ethnic minorities (Ladson-Billings, 2013).
Technology Access in Schools
Given the level at which technology is integrated into and required for basic social functions, access to technology in primary and secondary education is no longer equivalent to provision of a tool to support learning in general. Rather, schools are in a position to provide students with the access, experience, and skills needed to use technology efficiently and safely. This is generally addressed through school technology initiatives, which may originate at the federal level – such as the previously-mentioned “computer in every classroom” initiative in 1996 – or at state, district, or school building levels (Coley et al., 1997).
Technology initiatives can take many forms, from the deployment of a single software application – such as a LMS or an online productivity tool such as Google Drive – to a one-to-one initiative, wherein every student is provided with a device, such as a computer or tablet, that can be used both at school and at home. As such, the term “technology initiative” refers to both small-scale initiatives that are generally narrowly-focused and inexpensive, to large-scale initiatives that have the potential to consume vast amounts of resources and alter the instructional and learning paradigms within a school (Donovan, Hartley, & Strudler, 2007).
One-to-one initiatives represent large-scale technology initiatives, and have become increasingly prevalent as a mechanism by which schools can provide for standardized student access to technology both within and outside of the classroom, regardless of a student’s economic background or previous technology experience (Harper & Milman, 2016). These initiatives are generally designed to improve outcomes in the areas of academic performance, equity, digital literacy, digital citizenship, and student engagement (Penuel, 2006). Beyond one-to-one, other large-scale educational technology deployment models exist – classroom-assigned sets of computers, specialty labs, and bring your own device (BYOD) initiatives, for instance – some of which can be implemented in concert with a one-to-one approach, and most of which share some or all intended outcomes with one-to-one implementations (Song, 2014). For the purposes of this study, the focus on technology initiatives will be confined to those initiatives that can be broadly classified as one-to-one deployment models, where all students in – at minimum – an entire grade level are issued devices by the school or district, which can be used by the student both within and outside of school.
Evaluating Program Success
The relative success of a one-to-one initiative can be conceptualized and operationalized in a variety of ways, depending upon the program and/or research goals. As with a number of areas of educational research, change in student performance – on standardized tests, particular assessments, or in terms of mastery of specific skills – forms the basis for implementation effectiveness evaluations in many cases (Zheng, Warschauer, Lin, & Chang, 2016). This method offers advantages in terms of establishment of a direct correlation between the technology initiative and short-term student outcomes, although the difficulty or impossibility of conducting controlled experimentation within an active educational environment makes it difficult to identify causal relationships between a particular technology initiative and student performance, independent of the impacts of other educational or environmental factors (Wood, Lawrenz, Huffman, & Schultz, 2006).
Another mechanism that has frequently been used has been to analyze usage rates of the technology being implemented, where high – or increased – usage is seen as evidence of program success. Quantifying usage can be accomplished by tracking the frequency and duration of software use by students, or by quantifying the amount of time that student-assigned devices are actively utilized within the classroom, for instance (Penuel, 2006). There are strengths to this approach – high levels of use, especially over a long period of time, can be shown to correlate with factors such as student and teacher buy-in, training effectiveness, and technology-related skill development – in addition to the intuitive conclusion that high usage levels of these technologies – that are intended to improve student academic performance and build technology fluency – are preferable to low usage rates (Penuel, 2006). A weakness, however, is the risk that a utilization-driven evaluation of program success will skew towards programs wherein use is mandated, or where use may be uncorrelated with intended educational or social outcomes, such as allowing computer games as a reward for good behavior (Cuban, Kirkpatrick, & Peck, 2001).
A third approach to operationalization of technology initiative success takes the form of stakeholder satisfaction ratings, in terms of the effectiveness of the initiative as a whole, as well as particular components of the initiative (Donovan et al., 2007). High levels of satisfaction have been shown to correlate with high levels of usage, improved student performance, and reinvestment in and longevity of the initiative (Donovan et al., 2007; Penuel, 2006). As with student performance data, however, it can be difficult to establish causal relationships given challenges related to controlling for environmental and other factors (Cuban et al., 2001).
A fourth approach to assessment of one-to-one implementation success is analysis of factors that comprise or correlate to student engagement. A variety of studies have shown that dropout and absentee rates are reduced following the implementation of one-to-one initiatives, and active student engagement during class is increased (Fishman, Penuel, Hegedus, & Roschelle, 2011; Penuel, 2006). Student engagement has consistently been shown to directly influence achievement measures, which provides a theoretical basis for association of demonstrated increases in engagement with corresponding increases in academic achievement (Fredricks, Blumenfeld, & Paris, 2004). Again, a challenge in terms of establishing a direct relationship between technology initiatives and student engagement is related to the fact that schools generally implement multiple initiatives to increase student engagement simultaneously, making it difficult to isolate the impact of any single initiative.
Factors Impacting Technology Initiative Success
Given the personnel and financial costs associated with educational technology initiatives, educational leaders, school board members, and community stakeholders often expect that the results of major technology initiatives, such as one-to-one device programs, will be immediately apparent. To the extent that devices are made available, software is installed, and services are delivered, the program outputs – direct results of the program implementation – are often immediate and tangible. That said, teachers and students do not develop instantaneous familiarity with the new technology at their disposal, and interaction with the technology and all that it offers is likely to be superficial at the onset. Given that the broader success of technology initiatives cannot be meaningfully evaluated during – or even shortly after – program deployment, it is crucial that school leaders understand the factors that have been shown to contribute to program effectiveness, as defined through long-term outcomes such as improved academic performance, student engagement, or students’ entrance into STEM (science, technology, engineering, and mathematics) career paths.
It is often the case that negative impacts are apparent during early stage implementations, due to the fact that time that might have otherwise been spent on academic work, lesson plan development, or training related to other educational initiatives is instead being spent on getting acquainted with the hardware, software, and management aspects of the new program and associated technologies (Mouza, 2008; Zucker & Hug, 2008). Long-term, and even short-term outcomes of one-to-one implementations often do not become apparent until years after the device deployment, and even then are often realized only when an initiative has been implemented with fidelity. A 2007 study concluded that a full educational technology initiative implementation takes five to eight years, a time period that would span multiple generations of hardware and software (Silvernail & Gritter, 2007).
Regular Technology Integration
Technology usage levels in the classroom have been shown to relate directly to the effectiveness of teacher training programs supporting the technology implementation (Bebell & Kay, 2010; Lemke & Martin, 2004; Shapley, Sheehan, Maloney, & Caranikas-Walker, 2008; Zucker & Hug, 2008). Further, regular integration of technology during classes is shown to be a necessary condition for obtaining significant benefits from an educational technology initiative, whether in terms of academic performance, or secondary benefits such as decreased absenteeism, increased collaboration between students, and improved editing skills (Keengwe, Schnellert, & Mills, 2012).
Implementation Model
Another important factor to consider as it relates to technology integration is the program’s implementation model. Most one-to-one initiatives, for instance, are based upon a concentrated model, wherein students have access to a laptop at school and are also able to take the laptop home (Zheng et al., 2016). Other implementations, however, range from students only having access to laptops at school to having access in only one classroom (albeit one-to-one access in that classroom). Several studies have compared outcomes resulting from these differing approaches to one-to-one computing, and have concluded that the model where students have access to laptops at school and at home is the most effective, largely due to the fact that computer access – and with it, the ability to use software, to collaborate digitally with other students, and to gain experience and familiarity with the technology – is not limited to a particular time or place (Chan et al., 2006; Muir-Herzig, 2004; Rockman, 2004).
Teacher and Administrator Training
Across all studies, technology-related professional development for teachers recurs as a critical component of educational technology initiative success. It is important that teachers have time to work with the technology both in terms of becoming familiar with basic operational characteristics and procedures, as well as to explore how the technology can effectively supplement the curriculum (Bebell & O’Dwyer, 2010; Martin et al., 2010; Muir-Herzig, 2004; Russell, Bebell, O’Dwyer, & O’Connor, 2003). Further, teachers must be exposed to teaching practice modifications geared towards accommodation of student-centered learning strategies – such as decentralized classrooms, flipped instruction, and project-based learning – which have shown to be highly correlated with technology program effectiveness (Keengwe et al., 2012; Lemke & Martin, 2004; Lowther, Inan, Ross, & Strahl, 2012).
Teachers who had not received adequate technology training generally used the technology less frequently and spent a great deal of time dealing with technology issues rather than curricular issues in instances where they did design a lesson that made use of the technology (Lowther et al., 2012). Specifically, professional development opportunities must be focused on a connection between the technology and the curriculum (Fishman et al., 2011), whereby teachers spend more time working on lesson planning with the technology in mind than on technology assistance (Martin et al., 2010).
Operational Technology Support and Planning
While it may seem intuitive that technology support must be readily available in order for teachers and students to feel confident integrating technology into their work, the widespread adoption of 1:1 laptop initiatives in school districts has resulted in many districts being unable to provide adequate maintenance and repair service (Holcomb, 2009). Research has shown that even in schools where technology resources are prevalent, inadequate support response times or repair rates result in a dramatic reduction of technology use in classrooms, compared to expected values based upon other criteria (Topper & Lancaster, 2013). Beyond the basics of technology support, it is critical that network and internet functionality be robust in terms of availability, and sufficient in terms of bandwidth (speed). If internet service is inadequate or unreliable, teachers will avoid making use of technology – even given adequate training, hardware, and software – since the most valuable and accessible resources are often found online (Sundeen & Sundeen, 2013).
Administrative Leadership
As with many educational initiatives, it is important that school administrators positively and proactively engage with teachers regarding technology initiatives in schools (Anderson & Dexter, 2005). Building administrators are generally responsible for prioritization of professional development time and development of instructional goals, in addition to being responsible for teacher evaluation and other areas of oversight with relation to the school’s educational program (Neumerski, 2013). With adequate administrative support, adequate technology-related professional development can be established as a priority, with clear expectations regarding the use of the technologies that contribute to regular and effective integration in the school’s curriculum (Bebell & Kay, 2010). Further, school leaders who themselves make use of the technologies at hand for communication and collaboration with their faculty and take steps to understand the impact of educational technology in the classrooms have significant impacts on technology outcomes (Anderson & Dexter, 2005).
Given that building administrator engagement has been shown to be important to the success of a technology initiative, it is worth considering the factors that contribute to high levels of technology leadership engagement on the part of building principals and assistant principals. A logical conclusion may be that technology fluency on the part of the educational leader is directly correlated with level of engagement in terms of technology initiatives. This potential effect is easy to conceptualize, as an educational leader who struggles with technology is less likely to have personal experiences, expertise, and an understanding of the potential for technology implementations. Research that establishes a connection between technical expertise and administrative engagement would support this conclusion.
Beyond the concept that expertise breeds engagement, however, the influence of other factors can be conceptualized and analyzed. Attendance at and participation in educational technology conferences, for instance – or more broadly, interaction with the educational technology community – could contribute to administrator engagement, regardless of the administrator’s level of technology expertise (Dolle, Gomez, Russell, & Bryk, 2013). Seeing a product demonstration in the context of a second grade classroom, for instance, may spur a principal to encourage their teachers and the school’s technology integration staff to implement the product in their school, regardless of whether the principal possesses the expertise to personally use or implement the technology. Research relating to the social basis of leader decision-making, especially where factors considered include participation in professional communities, would help establish this as a relevant factor.
Another potentially critical condition for leader engagement is the presence of strong district-provided instructional and technical support for the initiative. Regardless of technical expertise on the part of the building administrator, a lack of strong implementation support could dissuade promotion of or engagement with a technology-focused project or implementation (Ertmer, Ottenbreit-Leftwich, Sadik, Sendurur, & Sendurur, 2012). This factor may be somewhat difficult to operationalize, as it will require a research basis for establishing criteria for technology support and integration effectiveness, as well as a mechanism to tie that support effectiveness not to classroom technology implementations, but to administrative engagement itself.
The fourth additional factor that is worthy of consideration is administrator involvement or influence in planning the technology implementation. While an edict from the highest levels of the district may fall on deaf ears, an initiative that an administrator was either directly involved in crafting or to which they appointed representatives in the planning process may be more likely to receive the attention of the educational leader (Harris, 2013). Research focusing on collaborative decision-making and buy-in could be valuable in establishing this factor as worthy of consideration.
A fifth factor to consider – particularly relevant for technology implementations, but also analogous, for instance, to changes in state assessment mandates – is the perceived longevity of the initiative. An implementation that is thoroughly planned in such a way as to guide future years’ implementations, and that involves funding and other resource commitments, may be more likely to spur administrative engagement than an apparently one-shot program that an educational leader may expect to be supplanted after a short time (Davis, 2003). While this factor is likely influential in terms of administrative engagement – school administrators have limited available time, and are unlikely to expend resources on something that they don’t expect to still be around a short while later – one of the keys will be the development of a strategy to promote not only program stability, but also the perception of program stability.
This final point speaks to the question of the ways, if any, in which district, building, and technology leaders can facilitate increased engagement with technology initiatives on the part of the relevant leaders. This question is all the more pressing given the limited budgets and competing priorities of school districts. While it may be that all of the aforementioned factors are significant, addressing each factor to an optimal level vis-à-vis the engagement of each school leader is likely not possible. Attendance at conferences and participation in professional communities require time and money, as does professional development designed to improve educational leaders’ technical expertise. Staffing and resources available for school technical and technology integration support are typically far below industry standards in the private sector (McLeod & Richardson, 2013), and – especially in large districts – committee work and other planning processes can grind to a halt if meaningful, direct influence is given to too large a group of people (Van de Ven & Delbecq, 1971). Finally, schools operate within a funding paradigm where budgets are generally set on a year-by-year basis, and long-term commitments – even those using multi-year levies and bond issues – are often undermined by changes in previously-unrelated funding streams (Howell & Miller, 1997).
Given the investment in technology resources in schools and districts throughout the United States, it is crucial that districts be able to plan technology initiatives with an understanding of the factors that will contribute to program success. Further, districts must have some guidelines against which the costs and benefits of initiative support mechanisms can be weighed. Research that addresses the two questions posed above has the potential to provide this resource to districts in terms of spurring leader engagement, which has almost universally been shown to be an important factor in overall technology program success.
References
Anderson, R. E., & Dexter, S. (2005). School technology leadership: An empirical investigation of prevalence and effect. Educational Administration Quarterly, 41(1), 49-82.
Badke, W. (2016). Why information literacy is invisible. Information and Data Literacy: The Role of the Library, 137.
Bebell, D., & Kay, R. (2010). One to one computing: A summary of the quantitative results from the Berkshire Wireless Learning Initiative. The Journal of Technology, Learning and Assessment, 9(2).
Bebell, D., & O’Dwyer, L. M. (2010). Educational outcomes and research from 1:1 computing settings. Journal of Technology, Learning, and Assessment, 9(1), n1.
Becker, H. J. (1991). How computers are used in United States schools: Basic data from the 1989 I.E.A. Computers in Education survey. Journal of Educational Computing Research, 7(4), 385-406. Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ439850
Blackley, S., & Howell, J. (2015). A STEM narrative: 15 years in the making. Australian Journal of Teacher Education, 40(7). Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ1069533
Cavanagh, S. (2013). Smartphones a standard for majority of students by high school, survey finds. Digital Education. Retrieved from http://blogs.edweek.org/edweek/DigitalEducation/2013/05/more_than_half_of_students_car.html
Ceballos, P. L., & Sheely-Moore, A. I. (2015). Addressing the needs of low-income school-age children: A teaching model. Journal of Counselor Leadership and Advocacy, 2(2), 131-143.
Chan, T.-W., Roschelle, J., Hsi, S., Kinshuk, Sharples, M., Brown, T., Norris, C. (2006). One-to-one technology-enhanced learning: An opportunity for global research collaboration. Research and Practice in Technology Enhanced Learning, 1(01), 3-29.
Coffland, D. A., & Strickland, A. W. (2004). Factors related to teacher use of technology in secondary geometry instruction. The Journal of Computers in Mathematics and Science Teaching, 23(4), 347.
Coffman, K. G., & Odlyzko, A. M. (2002). Internet growth: Is there a “Moore’s Law” for data traffic? In Handbook of massive data sets (pp. 47-93). Springer.
Coley, R., Cradler, J., & Engel, P. K. (1997). Computers and classrooms: The status of technology in U.S. schools. Policy information report.
Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813-834.
Davis, N. (2003). Technology in teacher education in the USA: What makes for sustainable good practice? Technology, Pedagogy and Education, 12(1), 59-84.
Dolle, J. R., Gomez, L. M., Russell, J. L., & Bryk, A. S. (2013). More than a network: Building professional communities for educational improvement. National Society for the Study of Education Yearbook, 112(2), 443-463.
Donovan, L., Hartley, K., & Strudler, N. (2007). Teacher concerns during initial implementation of a one-to-one laptop initiative at the middle school level. Journal of Research on Technology in Education, 39(3), 263-286.
Drayton, B., Falk, J. K., Stroud, R., Hobbs, K., & Hammerman, J. (2010). After installation: Ubiquitous computing and high school science in three experienced, high-technology schools. The Journal of Technology, Learning and Assessment, 9(3).
Ertmer, P. A., Addison, P., Lane, M., Ross, E., & Woods, D. (1999). Examining teachers’ beliefs about the role of technology in the elementary classroom. Journal of Research on Computing in Education, 32(1), 54. Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=2780718
Ertmer, P. A., Ottenbreit-Leftwich, A. T., Sadik, O., Sendurur, E., & Sendurur, P. (2012). Teacher beliefs and technology integration practices: A critical relationship. Computers & Education, 59(2), 423-435.
Estes, J. S. (2015). The pivotal role of faculty in online student engagement and retention. Handbook of Research on Strategic Management of Interaction, Presence, and Participation in Online Courses, 65.
Fathema, N., Shannon, D., & Ross, M. (2015). Expanding the technology acceptance model (TAM) to examine faculty use of learning management systems (LMSs) in higher education institutions. Journal of Online Learning & Teaching, 11(2).
File, T. (2013). Computer and internet use in the United States. Current Population Survey Reports, P20-568. US Census Bureau, Washington, DC.
Fishman, B. J., Penuel, W. R., Hegedus, S., & Roschelle, J. (2011). What happens when the research ends? Factors related to the sustainability of a technology-infused mathematics curriculum. Journal of Computers in Mathematics and Science Teaching, 30(4), 329.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of educational research, 74(1), 59-109.
Gardner, D. P. (1983). A nation at risk. Washington, DC: The National Commission on Excellence in Education, US Department of Education.
Hanson, G., & Slaughter, M. (2015). High-skilled immigration and the rise of STEM occupations in U.S. employment. In Education, skills, and technical change: Implications for future U.S. GDP growth. University of Chicago Press.
Hargittai, E. (1999). Weaving the Western Web: Explaining differences in internet connectivity among OECD countries. Telecommunications Policy, 23(10/11), 701. Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=2697570
Harper, B., & Milman, N. B. (2016). One-to-one technology in K–12 classrooms: A review of the literature from 2004 through 2014. Journal of Research on Technology in Education (Routledge), 48(2), 129-142. doi:10.1080/15391523.2016.1146564
Harris, A. (2013). Distributed leadership friend or foe? Educational Management Administration & Leadership, 41(5), 545-554.
Holcomb, L. B. (2009). Results & lessons learned from 1:1 laptop initiatives: A collective review. TechTrends: Linking Research & Practice to Improve Learning, 53(6), 49-55. doi:10.1007/s11528-009-0343-1
Howell, P. L., & Miller, B. B. (1997). Sources of funding for schools. The future of children, 39-50.
Keengwe, J., Schnellert, G., & Mills, C. (2012). Laptop initiative: Impact on instructional technology integration and student learning. Education and Information Technologies, 17(2), 137-146.
Ladson-Billings, G. (2013). “Stakes is high”: Educating new century students. The Journal of Negro Education, 82(2), 105-110.
Lederman, D. (2013). Growth for online learning. Inside Higher Ed.
Lemke, C., & Martin, C. (2004). One-to-one computing in Indiana: A state profile (preliminary report). Retrieved May, 23, 2005.
Lowther, D. L., Inan, F. A., Ross, S. M., & Strahl, J. D. (2012). Do one-to-one initiatives bridge the way to 21st Century knowledge and skills? Journal of Educational Computing Research, 46(1), 1-30.
Males, M. (2015). Age, poverty, homicide, and gun homicide. SAGE Open, 5(1), 2158244015573359.
Maranto, J. H. (2015). The effect of standardized testing on historical literacy and educational reform in the U.S. Academic Leadership Journal in Student Research, 3. Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ1062724
Martin, W., Strother, S., Beglau, M., Bates, L., Reitzes, T., & McMillan Culp, K. (2010). Connecting instructional technology professional development to teacher and student outcomes. Journal of Research on Technology in Education, 43(1), 53-74.
McLeod, S., & Richardson, J. W. (2013). Supporting effective technology integration and implementation. Principal, 2, 249-272.
Morris, M., & Ogan, C. (1996). The internet as mass medium. Journal of Communication, 46(1), 39-50. Retrieved from http://proxy.lib.uiowa.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=EJ540740
Mouza, C. (2008). Learning with laptops: Implementation and outcomes in an urban, under-privileged school. Journal of Research on Technology in Education, 40(4), 447-472.
Muir-Herzig, R. G. (2004). Technology and its impact in the classroom. Computers & Education, 42(2), 111-131.
Neumerski, C. M. (2013). Rethinking instructional leadership, a review what do we know about principal, teacher, and coach instructional leadership, and where should we go from here? Educational Administration Quarterly, 49(2), 310-347.
Penuel, W. R. (2006). Implementation and effects of one-to-one computing initiatives: A research synthesis. Journal of Research on Technology in Education, 38(3), 329-348.
Perrin, A. (2015). Social media usage: 2005-2015. Pew Research Center: Internet, Science & Tech. Retrieved from http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015
Perrin, A., & Duggan, M. (2015). Americans’ internet access: 2000-2015. Pew Research Center: Internet, Science & Tech. Retrieved from http://www.pewinternet.org/2015/06/26/americans-internet-access-2000-2015
Rockman, S. (2004). Getting results with laptops. Technology & Learning, 25(3), 1-12.
Rosen, Y., & Beck-Hill, D. (2012). Intertwining digital content and a one-to-one laptop environment in teaching and learning: Lessons from the time to know program. Journal of Research on Technology in Education, 44(3), 225-241.
Russell, M., Bebell, D., O’Dwyer, L., & O’Connor, K. (2003). Examining teacher technology use implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297-310.
Schaffhauser, D. (2015). 3 reasons Chromebooks are shining in education. THE Journal (Technological Horizons In Education), 42(3), 22.
Shapley, K., Sheehan, D., Maloney, C., & Caranikas-Walker, F. (2008). Evaluation of the texas technology immersion pilot: Outcomes for the third year (2006-07). Texas Center for Educational Research.
Sharif, M. Z., Henry, K. A., & Nelson, B. B. (2015). Current state of child health in the united states. Child Health: A Population Perspective, 31.
Shiu, E., & Prakash, S. (2015). System challenges and hardware requirements for future consumer devices: From wearable to chromebooks and devices in-between. Paper presented at the VLSI Technology (VLSI Technology), 2015 Symposium on.
Silvernail, D. L., & Gritter, A. K. (2007). Maine’s middle school laptop program: Creating better writers. Gorham, ME: Maine Education Policy Research Institute.
Song, Y. (2014). “Bring your own device (BYOD)” for seamless science inquiry in a primary school. Computers & Education, 74, 50-60. doi:10.1016/j.compedu.2014.01.005
Sundeen, T. H., & Sundeen, D. M. (2013). Instructional technology for rural schools: Access and acquisition. Rural Special Education Quarterly, 32(2), 8.
Topper, A., & Lancaster, S. (2013). Common challenges and experiences of school districts that are implementing one-to-one computing initiatives. Computers in the Schools, 30(4), 346-358. doi:10.1080/07380569.2013.844640
Ullucci, K., & Howard, T. (2015). Pathologizing the poor implications for preparing teachers to work in high-poverty schools. Urban Education, 50(2), 170-193.
Van de Ven, A. H., & Delbecq, A. L. (1971). Nominal versus interacting group processes for committee decision-making effectiveness. Academy of Management Journal, 14(2), 203-212.
Wood, N. B., Lawrenz, F., Huffman, D., & Schultz, M. (2006). Viewing the school environment through multiple lenses: In search of school‐level variables tied to student achievement. Journal of Research in Science Teaching, 43(3), 237-254.
Zheng, B., Warschauer, M., Lin, C.-H., & Chang, C. (2016). Learning in one-to-one laptop environments: A meta-analysis and research synthesis. Review of Educational Research, 86(4), 1052-1084.
Zucker, A. A., & Hug, S. T. (2008). Teaching and learning physics in a 1:1 laptop school. Journal of Science Education and Technology, 17(6), 586-594. | https://innovation.iowacityschools.org/2016/11/29/technology-access-initiative-success/
A renowned French artist of the 18th Century, Chardin was well known for his still-life works and genre paintings. His refined and realistic style had a lasting influence on some of the greatest artists of the 19th and 20th Centuries, including Henri Matisse (1869 – 1954) and Paul Cézanne (1839 – 1906). His depictions were of simple subjects, but masterful in their execution, as exemplified by his works now in the Uffizi Gallery, Boy Playing Cards and Girl with Racket and Shuttlecock.
He trained under the French history painters Pierre Jacques Cazes and Noël-Nicolas Coypel (1690 – 1734), though he learned mostly from his own studies. According to the Getty Museum biography of Chardin, the painter got his start executing signposts for tradesmen and detailing the works of other artists. He was documented in 1724 as a member of the Academy of Saint Luc in Paris, and was later discovered by the portrait painter Nicolas de Largillière (1656 – 1746). Largillière recommended Chardin’s entry into the Royal Academy of Painting (Académie Royale de Peinture et de Sculpture) in 1728.
The works that garnered his acceptance into the Academy were La Raie (The Ray) and The Buffet, which displayed his realistic representations and established his nickname as the “painter of animals and fruit.” From there Chardin developed his still-life skills further and grew to love painting genre works. His reputation earned him gainful patronage, including a pension from King Louis XV (1710 – 1774). His works continually evolved from simple still-life painting into highly detailed representations of everyday life in French society. | https://www.virtualuffizi.com/jean-baptiste-sim%C3%A9on-chardin.html
Robotics in Wichita, KS continues to grow. As advancements in the field develop and industries such as manufacturing and aviation depend on robots to do things that humans cannot, there is a higher demand for a skilled robotics workforce. Earning a degree in robotics or robotics engineering can lead to rewarding, in-demand career opportunities.
Robotics
Robotics is a branch of engineering that involves the conception, design, construction, operation and application of robots. The robotics field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bioengineering. | https://wsutech.edu/articles/robotics/ |
Joint Statement by Prime Minister of Japan Junichiro Koizumi and President of the Russian Federation Vladimir Putin Concerning the Adoption of a Japan-Russia Action Plan
The Prime Minister of Japan and the President of the Russian Federation,
Sharing the view that, in the current situation in which the vigorous process of forming new types of inter-state cooperation based on the principles of freedom and democracy is advancing in the international community, the roles and responsibilities of Japan and the Russian Federation in the world are increasing, and further enhancement of the cooperation between the two countries is vital;
Understanding the importance of intensifying joint efforts towards the resolution of urgent global issues including the issues of enhancing international peace and security and tackling international terrorism based on the purposes and principles of the Charter of the United Nations;
Standing upon the position that the enhancement of cooperation between Japan and the Russian Federation on the global and regional level would promote the development of bilateral relations and also be beneficial to the international community as a whole;
Confirming that Japan and the Russian Federation, which share the universal values of freedom, democracy, rule of law and respect for fundamental human rights, are resolved to continue to make efforts towards the creation of a creative partnership that is consistent with their strategic and geo-political interests;
Recognizing that in a situation in which structural and positive changes are occurring in relations between the Russian Federation and other major countries, through the vitalization of bilateral relations in various areas, there is a need to boost cooperation to a level concomitant with its potential, and that this would be of strategic significance for the Asia-Pacific region and the world as a whole;
Aspiring to open up a new horizon for broad Japan-Russia partnership, by ultimately overcoming difficult legacies from the past between the two countries, and affirming their determination to conclude a peace treaty as soon as possible through the solution of the issue of where the islands of Etorofu, Kunashiri, Shikotan and Habomai belong and therefore accomplish complete normalization of Japan-Russia relations through vigorous negotiations based on the agreements that have been achieved to date, including the Joint Declaration of Japan and the USSR in 1956, the Tokyo Declaration on Japan-Russia Relations in 1993, the Moscow Declaration on Establishing a Creative Partnership between Japan and the Russian Federation in 1998, the Statement by the Prime Minister of Japan and the President of the Russian Federation on the Issue of a Peace Treaty in 2000 and the Irkutsk Statement by the Prime Minister of Japan and the President of the Russian Federation on continuation of future negotiations on the issue of a Peace Treaty in 2001;
Indicating the highly evaluated role the Program to Stimulate the Basic Direction of Cooperation in Trade and Economic Areas, which was signed in September 2000, has had in developing bilateral trade and economic relations, and the necessity for the realization of the potential possibilities of broader and more effective cooperation in these areas;
Indicating the significance of Japanese technical assistance to promote the transition to market economy in the Russian Federation, including technical assistance through the Japan Centers, and affirming that it would be to the benefit of the international community as a whole for the Russian Federation to become a fully integrated member of the international economic system, including the accession of the Russian Federation to the World Trade Organization (WTO), which Japan is supporting;
Being convinced that through the series of Japanese cultural events to be held in the Russian Federation in 2003 and through similar Russian cultural events to be held in Japan in the future, the expansion of human exchange among people from all walks of life in both countries will further promote friendship, trust and mutual understanding between the people of Japan and the Russian Federation; and
Emphasizing the importance of adopting specific measures to secure dynamic and comprehensive development in Japan-Russian Federation cooperation;
Adopted the Japan-Russia Action Plan attached herein, and expressed both countries' intention to undertake joint work to ensure the steady implementation of the Action Plan.
| |
Participatory Spirituality for the 21st Century
The symposium was intense and I think caused a shift in integral thinking. There were integral theorists in one group, Roy Bhaskar and his students of Critical Realism in another group, and Gary Hampson and I played the role of meta-theoretical referees of sorts. I had the benefit of having read just about all of Bhaskar’s work before I went, so the discourse was very rich and meaningful for me, where others seemed to be struggling with basic concepts (on both sides). I will try to outline some of the main ideas that I took away from it.
Bhaskar talks about philosophy as “under-laboring” different disciplines or fields. Critical Realism’s focus is to under-labor science (physics, biology, sociology)… which means that it can point out the hidden assumptions or embedded frameworks which “under-lay” the science, thereby pointing to inconsistencies or falsities of theories that are created from within that discipline. To this effect, Critical Realism (CR) has a broad criticism of empiricism, which says that within any given empirically-driven theory, there is a set of assumptions outside of which the theory doesn’t work. In other words, all the truths that arise from empirical science are constrained by the framework from which the science is conducted, and therefore the “truths” are relative, not universal. One easy example of this is that if you go to an Indian reservation and do “empirical science” you can “prove” that Indians are lazier, less intelligent, more prone to alcoholism and crime, more degenerate, etc… than the general white population. It is easy to see that these “facts” appear only because an Indian reservation has a history that explains why these are contingencies of that history, not “facts” about Native Americans. CR says all empirical science has this kind of blind spot, and that the role of philosophy is to contextualize what is “outside” the purview of the science; the role of science is then to advance its theory to include what was previously outside. This is a never-ending process, and puts philosophy right at the side of science (which is very cool I think.)
The method that CR uses is a dialectical method that has 3 major “steps” (there are several more steps in his system, I am generalizing). First there is what he terms “immanent critique”. This is where you critique the system from within the system’s own understanding. An important part of immanent critique is pointing out what is absent from the system/theory. The second step is “explanatory critique” which explains the system’s inconsistencies by pointing to what is left out and explains why the “truths” that arise within the system are merely “apparent truths” when the greater view is taken into account. The third step is an emancipatory “leap” -- which is an axiological step, or relates to values. The emancipatory leap asks what has to be “absented” from this line of reasoning to redress oppression or to transform the system toward greater liberation. This last step relies on principles Bhaskar calls “co-presencing” or “transcendental identification.”
When we applied this type of analysis to Integral Theory (IT), we got the following key problem areas:
[An interesting aside is that Wilber writes from a spiritual wisdom of the absolute unity of consciousness, and then goes on to fashion a theory that emphasizes the discordance of the world; whereas Bhaskar tells us how he began with this notion of critical naturalism, and ended up with the philosophy of meta-reality and its spiritual notions of transcendental co-presence. Many of the core constituents of Critical Realism denounce Bhaskar’s spiritual turn, and ridiculed him for attending a symposium with IT.]
Bhaskar's three realities have a lot in common with the developed Advaitin theory of illusion (not the archaic naive version that speaks only of maya and brahman). However, I fail to see the actual line separating the actual world from the demi-real. The developed Advaita theory distinguished transcendental illusion from what we normally think of as illusory, such as the rope/snake or silver/nacre. The rope/snake theory is just too simplistic to account for transcendental illusion. Advaitins also talk about illusions being causally effective (they need to, since avidya is in some sense creative for them). The classic example is the hot dream chick who appears in a man's dream and causes him to ejaculate. In any case, much of what Bhaskar calls the demi-real, Advaitins might say belongs to the realm of transcendental illusion.
Given all of this, both parties still seem to be indebted to Kantian philosophy. OOO would probably call both correlationist. I have serious doubts and reservations about any way of thought that starts to divide the world up into multiple universes.
The distinction between real and actual is interesting. And the terms themselves, actually – since the point of distinction is to shift the distinction (to include integrated perspectives? There's the difference between integrated perspectives and perspectives all the way up and all the way down, and the need to account for separate world views, but that's another matter).
Perhaps a distinction is to eliminate an unacknowledged boundary, at which point the distinction becomes irrelevant. The point here though is the overview, the usefulness of loops or an intrinsic non-linearity irrespective of context. A loop could underlabor the real and find itself where the real underlabors the actual. The loop will obviously have to be a continuum, otherwise a linear imposition presents itself.
Continuums, like events, could have an independence from context and experience in their path. I digress a bit here, but it's related to the search for context-transcendent meaning – in different instances, there is a tendency for thinning contexts if not absence of them, the tendencies having a potential of arrival.
I like the term actual because *act* implies an enacted paradigm, or is related to performative acts. Since that is too narrow a premise for the real, the emphasis is likely more on energy expended for action than action itself. Cause sort of assumes precedence over action in the context of the real, but closer to home, it does so within the usefulness of context as such. Like how distinction is useful to negate itself. So there is a deconstruction of cause, enacted paradigms and context. Action though emerges as progressive, free of the performative perspective. Free of perspectives, action and cause are less problematic. I am crossing threads a bit, but the proximity of cause and action may be a lead….
Yes, I like the distinction, too. Bhaskar uses the phrase "when it is important to disambiguate" -- in other words, he knows there is no actual distinction between the actual and the real (well, actually, the distinction exists only in the actual, not the real... haha)... anyway, he says this is a tool that is useful when it becomes necessary to disambiguate what we are talking about. The same he says of the world and the known world -- the epistemic fallacy is only relevant when it becomes useful to disambiguate the epistemic from the ontological. This is much more post-metaphysical than Wilber's tetra-arising quadrants, which reek of ontic-ally real structures.
bonni
Which reek of ontic-ally real structures
Ha ha, that feels right……. If the epistemic fallacy is a defence of perspectives all the way up and down, there is the emergence of an ontological fallacy in defence of the reeky - as a way out of the former, and of correlationism at that.
With a movement in distinction between the actual and real – we have a realm of distinctions and a realm which is free of distinctions. The real is also subject to distinction in the sense that it has to have validation. Is validation key to causality, ontology and experience/existence et al? A question in the other thread too. Anyway, about a realm without distinctions: I want to say something about the metaphysical and formlessness charge and the nature of the collapse of distinctions, which is misunderstood. This is a shift to dimensional entry, which is an axiological access for form, which makes form sharper, exponentially definitive kind of works – say infinite definition of the indefinable – description of it might fall short :) but it is easy enough to observe (it is the regress of effort really). Analogously, a collapse of distinction is a leap in distinctivity.
This issue has come up any number of times; I think UL perspectivalism is a misnomer. Perspectives have interiors. They are barely interior, if at all. I'm suggesting it's just a matter of location, quantity all the way.
The Integral Research Center has posted a summary of the Critical Realism / Integral Theory conference that took place last year. Two audio files, one of Bhaskar's introduction to CR and the other of Sean's introduction to IT, are available for download at the bottom of the page.
Sean has a blog called critical realism 101 here.
Ah, thanks, I was going to post about that today. There are two articles on CR and IT linked on the blog; I'll link to one of them here.
In re-reading this thread, Bonnie's statement below struck me: she has no idea what the OOO crowd has said on this topic. It is obvious to anyone who has read them that Bryant, for example, using Bhaskar no less, is well aware of this distinction between the real and the actual. And he has written quite eloquently and accurately on it. And yet she makes statements like this, quite willing to pigeonhole the entire field with little to no knowledge of the subject.
bonnitta roy said:
maybe there was a confusion. Bhaskar DEFINITELY says there is only one real world.
he *might* warn the OOO that "their world" is the actual world (which is an incomplete description of the real) and CR would be happy to "underlabor" them for that reason (supply either an immanent critique pointing out the internal inconsistency, or an explanatory critique pointing to the incompleteness -- which of course is how good science proceeds, anyway)
b
theurj said:
But on the other side, we have only one real world.
And that is where the likes of Bryant and perhaps Morton (OOO thread) might disagree with Bhaskar.
Following up on the last post, here's Bryant from The Democracy of Objects, section 1.5:
"Here it is necessary to clarify what the epistemic fallacy is and is not about. A critique of the epistemic fallacy and how it operates in philosophy does not amount to the claim that epistemology or questions of the nature of inquiry and knowledge are a fallacy. What the epistemic fallacy identifies is the fallacy of reducing ontological questions to epistemological questions, or conflating questions of how we know with questions of what beings are. In short, the epistemic fallacy occurs wherever being is reduced to our access to being. Thus, for example, wherever beings are reduced to our impressions or sensations of being, wherever being is reduced to our talk about being, wherever being is reduced to discourses about being, wherever being is reduced to signs through which being is manifest, the epistemic fallacy has been committed.
"We have seen why this is so, for our experimental practice is only intelligible based on a series of ontological premises and these ontological premises cannot be reduced to our access to being. They are ontological in the robust sense. These ontological premises refer not to what is present or actual to us. Indeed, they refer, as we will see, to beings that are radically withdrawn from any presence or actuality. And as such, they are genuinely ontological premises, not epistemological premises pertaining to what is given.
"In recognizing that the epistemic fallacy emerges from foundationalist aspirations on the part of philosophers, Bhaskar hits the mark. It is the desire for a secure and certain foundation for knowledge that leads philosophy to adopt the actualist stance and fall into the epistemic fallacy. These decisions, in turn, ultimately lead to correlationism. In raising the question, “how do we know?” and seeking an argument that would thoroughly defeat the skeptic or sophist, the philosopher concludes that only what is present or given can defend against the incursions of the skeptic. But what is present or given turns out either to be mind or sensations. Therefore the philosopher finds himself in the position of restricting all being to what is given as actual in sensations. From here a whole cascade of problematic consequences follow that increasingly lead to the dissolution of the world as a genuine ontological category.
"However, once these foundationalist aspirations are abandoned, the nature of the problem changes significantly and we no longer find ourselves tied to the actualist premise that generates all of these issues. And indeed, these aspirations should be abandoned, for foundationalism is premised on the possibility of absolute presence, absolute proximity, the absence of all absence, and we have now discovered that it is being itself that is split between generative mechanisms or objects and the actual. Difference, deferral, absence, and so on are not idiosyncracies of our being preventing us from ever reaching being, but are, rather, ontological characteristics of being as such. Moreover, this split at the heart of all beings is not simply characteristic of those objects that we would seek to know, but are also characteristics of the peculiar object that we are. We ourselves are split. If, then, this split is a general ontological feature of the world, then the dream of presence required for any form of foundationalism is a priori impossible. We are then left with two paths: to persist in the correlationist thesis that would reduce ontological questions to epistemological questions and which is itself implicitly premised on the ontotheological assumption of actualism, or to investigate the split in being in a post-humanist, realist fashion that is genuinely ontological. It is the second of these two paths that I here attempt."
http://quod.lib.umich.edu/o/ohp/9750134.0001.001/1:5/--democracy-of...
Continuing from the FB discussion on the above fallacy, part of the problem with the epistemic fallacy is actualism. Now Wilber does account for the non-actual via the timeless, changeless Causal realm which subsists the actual, but then turns around in the next breath and asserts we can directly access the Causal via a non-dual meditative state (aka satori), the Absolute side of the equation. Which is exactly what Bryant is criticizing: the claim that we can directly and accurately 'know' not just this state, but that this state directly accesses that Causal realm underlying the actual.
So yes, it's a fixation on enacting the interior state(s), because this 'consciousness per se' IS the metaphysical foundationalism of ALL, "for foundationalism is premised on the possibility of absolute presence." Bryant (and Morton) have the good sense to carefully read and understand Derrida on this metaphysics of presence.
In a way it's akin (aken?) to the magical thinking of New Age positive thinking, in that our thoughts, or mystical states in this case, are all we need to effect societal change. Which is of course a bastardization of even Buddhism, for most adherents are required to directly engage with the world of suffering to alleviate it. And not just by teaching people to achieve a state of equanimity, but by feeding, clothing and sheltering the poor, etc. Only in America can we think that just attaining a special state handles everything.
I'm reminded of the back and forth between Joe Corbett and Jeff Salzman on exactly this lack in AQALingus. The AQALack? If it quacks like a lack... | http://integralpostmetaphysics.ning.com/forum/topics/report-from-critical-realism-integral-theory-symposium?xg_source=activity&id=5301756%3ATopic%3A33101&page=3 |
The invention of the process known as DNA purification is probably not one of the most celebrated developments in the history of science, but it is certainly one of its most important. The first successful isolation of DNA was performed by Friedrich Miescher in 1869. Since then it has become a standard technique in forensic studies and biochemistry. The DNA strands are isolated by washing with a detergent solution and then passed into a glass vial containing agarose or another DNA-binding reagent. To avoid damage to the DNA strands, the liquid is incubated at low temperature for around 30 minutes before being placed in a separate storage case.
This is the most common method by which DNA samples are processed. There are now, however, many other ways in which DNA processing can be carried out. One method is known as DNA microarray testing. This involves taking DNA samples from a crime scene, preserving them until later, and then examining the differences between the purified DNA and the unknown DNA from which the samples were taken. Although this technique can be successful, it has often been found that it also results in DNA contamination from sources outside the crime scene. If DNA contamination is severe, then it will be necessary to carry out DNA purification before processing.
A more invasive method of DNA purification is lysis followed by gel electrophoresis. This method involves using a strong alkaline liquid such as buffer water combined with a detergent (such as sodium sarcosinate) in order to remove cellular debris such as digested blood cells, viral DNA, and cysts formed as a result of yeast or bacterial growth. The final step of the lysis procedure involves extraction using a DNA extractor, a high-pressure water filter similar to a centrifuge, and then adjustment of the amount according to the needs of the test. | https://newinnovation-tech.com/2021/07/25/genetics-purification/
My approach is based on listening to the person in front of me, working with them to create a sustainable nutrition plan that meets their needs, whilst incorporating daily schedules and family demands.
I take care not to overload clients with too much information or change in one go, as I believe that adapting eating habits takes time, and should be embedded slowly to be truly effective.
Ultimately I believe that balance, variety and wholefoods are key to optimal health and weight. I strive to help each person I see find a way to incorporate these nutritional elements into their life, learning how to nourish themselves and use food to their advantage.
I aim to inspire change in those who find the process difficult, by explaining why food is such a critical route to good health. Often minor changes that are carefully targeted can make a huge difference to how you feel.
I work in a holistic way and address these key areas:
- Fluid intake to ensure the body is hydrated
- Good antioxidant status
- Dietary sources of anti-inflammatory foods
- Vitamins, minerals, enzymes and phytonutrients
- Healthy gut flora
- Excretion of toxins
- Adequate fibre intake
- Balanced lifestyle
- Stress management
"Nutrition Classes in London" "Nutrition Classes in London"
"Nutrition Classes in London" "Nutrition Classes in London" | https://www.nutrition-for-health.org/my-philosophy |
Journey Manager (JM), previously known as Transact Manager (TM). | System Manager / DevOps | 19.05: This feature was updated in 19.05.
Journey Manager defines user accounts to authenticate and authorize users to log on to the Journey Manager Home Dashboard and one or more form spaces to access assigned tasks, or Workspaces. This is controlled by assigning users to form spaces, where the Manager dashboard is one of the form spaces.
There are two types of the users that can access forms:
For more information, see Authenticated vs Unauthenticated Transactions.
The diagram below illustrates various user types and what they can access in the Journey Platform.
Each user account can belong to one or more roles. Roles are collections of user accounts with similar access rights. Similarly, user accounts can belong to any number of groups, which group user accounts into non-permission based collections, for example, administrators who are interested in receiving certain email notifications.
An administrator is a user account that is associated with the Manager dashboard form space. A form space user is a user account that is associated with at least one form space. Note that these categories are not mutually exclusive: an administrator can also be associated with arbitrarily many form spaces, and can have global access or be restricted to data associated with one or more specific organizations.
Manager controls access to certain content or actions using permissions. When a user requests restricted content, Manager allows access only if the required permissions are present. Permissions can be granted to one or more roles, and a role is characterized by the permissions it contains. If a user account belongs to several roles, it has all permissions that are set for at least one of those roles.
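To make the union semantics concrete, here is a minimal sketch of how effective permissions could be computed. The class, role, and permission names are hypothetical illustrations, not Journey Manager's actual API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative model only: a user's effective permissions are the union
// of the permissions granted to every role the account belongs to.
public class PermissionCheck {

    record Role(String name, Set<String> permissions) {}

    record UserAccount(String username, List<Role> roles) {}

    static Set<String> effectivePermissions(UserAccount user) {
        Set<String> effective = new HashSet<>();
        for (Role role : user.roles()) {
            effective.addAll(role.permissions());
        }
        return effective;
    }

    // Allow the action only if at least one of the user's roles grants it.
    static boolean hasPermission(UserAccount user, String required) {
        return effectivePermissions(user).contains(required);
    }

    public static void main(String[] args) {
        Role designer = new Role("Form Designer", Set.of("form-view", "form-edit"));
        Role helpDesk = new Role("Help Desk", Set.of("txn-view"));
        UserAccount alice = new UserAccount("alice", List.of(designer, helpDesk));

        System.out.println(hasPermission(alice, "form-edit")); // true, via Form Designer
        System.out.println(hasPermission(alice, "user-admin")); // false
    }
}
```

The key design point is that permissions are purely additive across roles: belonging to an extra role can never remove a permission granted elsewhere.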
As users make changes to Manager's configuration, these changes are tracked and written to the audit log. This makes it possible to view an object's history and review a specific user's changes.
A user account is part of the Journey Manager security management system, which allows for fine-grained control based on user's functions, roles, groups, and access permissions. To provide flexibility around the security configuration, Manager supports one or multiple security managers. A security manager contains such settings as how users are authenticated, lockout and password settings, session timeout, SSO configuration and user self-registration settings.
Each security manager contains one or more authentication managers responsible for authenticating a user during login. Each form space can be associated with its own security manager or use the default security manager defined in the system.
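The relationship between form spaces, security managers, and authentication managers can be pictured with a short sketch. This is an assumed in-memory model for illustration; the type names and lookup logic are not Journey Manager's real configuration API:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative model: a form space may name its own security manager,
// otherwise the system default applies; the chosen manager consults its
// authentication managers in order during login.
public class SecurityManagerResolution {

    interface AuthenticationManager {
        boolean authenticate(String username, String password);
    }

    record SecurityManager(String name, List<AuthenticationManager> authManagers) {
        boolean login(String username, String password) {
            // Accept the login if any configured authentication manager
            // (e.g., local accounts, LDAP, SSO) accepts the credentials.
            return authManagers.stream().anyMatch(m -> m.authenticate(username, password));
        }
    }

    static SecurityManager resolve(Map<String, SecurityManager> byFormSpace,
                                   SecurityManager systemDefault,
                                   String formSpace) {
        return Optional.ofNullable(byFormSpace.get(formSpace)).orElse(systemDefault);
    }

    public static void main(String[] args) {
        AuthenticationManager local = (u, p) -> "alice".equals(u) && "secret".equals(p);
        SecurityManager defaults = new SecurityManager("default", List.of(local));
        Map<String, SecurityManager> overrides = Map.of(); // no per-form-space override

        SecurityManager chosen = resolve(overrides, defaults, "web-plugin");
        System.out.println(chosen.login("alice", "secret")); // true, via the default manager
    }
}
```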
Manager provides an easy-to-use interface to create, modify, and deactivate user accounts, as well as manage access permissions, user roles and groups. It also comes with bulk import functionality to create multiple users, including user details, roles, groups and access configurations, which significantly simplifies the user provisioning task.
Manager doesn't impose any limits on the number of user accounts you can have. However, as customers handle user authentication in different ways, there might be some limit in the security managers they use, such as SSO, LDAP and so on, or a physical limit in the database. Most customers only authenticate their own users, but some might also authenticate form applicants raising transactions. In this case, they could theoretically have thousands of users.
Manager doesn't have any built-in functionality to purge old or inactive user accounts automatically, so you should periodically audit user accounts to ensure their number doesn't grow beyond a point that impedes system performance. For more information, see data retention management.
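Since there is no built-in purge, the periodic audit suggested above could be sketched as a small job that flags stale accounts for review. The one-year threshold, field names, and in-memory data source below are all assumptions for illustration, not Journey Manager behavior:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Illustrative audit: flag active accounts whose last login is older
// than a retention threshold, for manual review rather than deletion.
public class InactiveAccountAudit {

    record Account(String username, Instant lastLogin, boolean active) {}

    static List<Account> findStale(List<Account> accounts, Duration maxIdle, Instant now) {
        return accounts.stream()
                .filter(Account::active)
                .filter(a -> a.lastLogin().isBefore(now.minus(maxIdle)))
                .toList();
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2021-06-01T00:00:00Z");
        List<Account> accounts = List.of(
                new Account("alice", Instant.parse("2021-05-20T00:00:00Z"), true),
                new Account("bob", Instant.parse("2019-01-10T00:00:00Z"), true));

        // Anything idle for more than a year is flagged for review.
        findStale(accounts, Duration.ofDays(365), now)
                .forEach(a -> System.out.println("Review for deactivation: " + a.username()));
    }
}
```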
Before users can log in to a form space, which requires authentication, they must have valid user accounts. Manager allows an administrator to create a user and grant access to one or more form spaces. The user receives an email with login credentials, such as a username and password, to access forms hosted on the form space.
Manager allows authenticated users to log in to a form space using credentials by delegating authentication and authorization to a 3rd-party system, such as Active Directory, LDAP (Lightweight Directory Access Protocol) or SSO (single sign-on). Manager can also maintain local users, such as contractors that are not in the organization's LDAP system.
The authenticated users can be:
Authenticated users have access to the advanced features of the secured portal such as the Form, Task and History pages.
Anonymous (unauthenticated or public) users can access forms if their form space is configured to allow anonymous access. An example of such a form space is the Web Plug-in form space. Members of the public click on a link from a company website, which opens an unsecured form page in the portal. The available list of forms is manually maintained by the company on their website.
Using anonymous users has the following advantages:
Anonymous users don't have access to the advanced features of the secured portal such as the Form, Task and History pages.
An anonymous user can still use the forms, but with the following limitations:
Anonymous users must provide their email address. Optionally, a security question can be set to open saved forms and tasks.
You can grant standard roles to users to control their access to form spaces and define their capabilities there. Each form space has a set of roles that can be granted to a user. You can create a new role as needed.
A typical form user must only be granted access to a form space where forms are hosted. This allows self-registered users immediate access to forms. However, most users must be granted access to:
When working with user accounts, you should consider the environment that users are accessing, which can be a production, development, or testing environment. Manager is designed to be customizable to accommodate an enterprise's security configurations and policies. The following users require access to a production environment:
The following users require access to development and test environments:
Users can only log into a form space if their user accounts are assigned to that form space. It is not possible to assign a form space to a user account when the form space is only accessed anonymously.
The following users should have access to form spaces:
A standard user account contains the following information:
Manager enforces some extra security rules to ensure that users cannot access restricted content, such as organizations, groups, users, forms, transactions and collaboration jobs. | 19.05 This feature was introduced in 19.05.
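The organization restriction mentioned above can be pictured as a filter applied to every query a scoped user makes. The sketch below is a hypothetical illustration, not Journey Manager's schema or implementation:

```java
import java.util.List;
import java.util.Set;

// Illustrative org scoping: a user restricted to specific organizations
// only ever sees records that belong to those organizations.
public class OrgScopedAccess {

    record User(String name, boolean globalAccess, Set<String> orgs) {}

    record Txn(String id, String org) {}

    static List<Txn> visibleTo(User user, List<Txn> all) {
        if (user.globalAccess()) {
            return all; // global administrators see every organization's data
        }
        return all.stream().filter(t -> user.orgs().contains(t.org())).toList();
    }

    public static void main(String[] args) {
        List<Txn> all = List.of(new Txn("t1", "acme"), new Txn("t2", "globex"));
        User scoped = new User("carol", false, Set.of("acme"));
        System.out.println(visibleTo(scoped, all)); // only t1 (acme) is visible
    }
}
```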
Next, learn how to view user accounts. | https://docs.avoka.com/Users/UserAccountsOverview.htm |
The GitHub Trust & Safety team is looking for a full-time team member based remotely in EMEA.
Trust & Safety is a distributed team whose overall mission is to help maintain a safe, welcoming, and productive environment for GitHub’s diverse community of users. The Trust & Safety team investigates reports of abuse, processes content removal requests, and works closely with other internal teams to help develop and maintain GitHub’s user-facing site policies and internal policies and procedures related to user-generated content moderation.
This role requires working one weekend day a week and may include non-standard working hours, including holidays. In this role, you will review content that may be graphic, violent, sexual, or otherwise distressing.
Responsibilities:
Capably and consistently maintain multiple task queues covering different fields of knowledge.
Investigate and respond to reports of abuse on GitHub.com, including categories such as:
Malware, cryptocurrency abuse, and other reports of fraudulent activity
Spam
Harassment
Impersonation
Privacy violations
Objectionable speech and content such as hate speech and pornography
Organize and triage the team’s ticket queue on a daily basis
Review and process requests to remove sensitive data and personal information
Catalog workflow, log noteworthy events, and apply data to improve procedures
Minimum Qualifications:
1+ years of experience with malware and malicious code
Basic understanding of coding languages such as Java and Python
Excellent organizational, analytical, and written communication skills
Operationally minded, with excellent attention to detail
A strong sense of empathy and the ability to advocate for others
Interest in user-generated content moderation and the policy behind it
Self-starter who works well without direct supervision
Team player with excellent collaboration skills
Comfortable with ambiguity and making decisions with incomplete or conflicting information
Tech-savvy, and have no problem learning and becoming proficient in new tools
Preferred Qualifications:
Previous Trust & Safety experience
Experience with the open source community
Previous experience as a legal assistant or other law or policy-based profession
Experience communicating with users/customers
Experience working remotely
Who We Are:
GitHub is the developer company. We make it easier for developers to be developers: to work together, to solve challenging problems, and to create the world’s most important technologies. We foster a collaborative community that can come together—as individuals and in teams—to create the future of software and make a difference in the world.
Leadership Principles:
Customer Obsessed - Trust by Default - Ship to Learn - Own the Outcome - Growth Mindset - Global Product, Global Team - Anything is Possible - Practice Kindness
Why You Should Join:
At GitHub, we constantly strive to create an environment that allows our employees (Hubbers) to do the best work of their lives. We've designed one of the coolest workspaces in San Francisco (HQ), where many Hubbers work, snack, and create daily. The rest of our Hubbers work remotely around the globe. Check out an updated list of where we can hire here: https://github.com/about/careers/remote
We are also committed to keeping Hubbers healthy, motivated, focused and creative. We've designed our top-notch benefits program with these goals in mind. In a nutshell, we've built a place where we truly love working, and we think you will too.
GitHub is made up of people from a wide variety of backgrounds and lifestyles. We embrace diversity and invite applications from people of all walks of life. We don't discriminate against employees or applicants based on gender identity or expression, sexual orientation, race, religion, age, national origin, citizenship, disability, pregnancy status, veteran status, or any other differences. Also, if you have a disability, please let us know if there's any way we can make the interview process better for you; we're happy to accommodate!
Please note that benefits vary by country. If you have any questions, please don't hesitate to ask your Talent Partner. | https://www.jointheambitious.com/remote-job-europe/malware-specialist/github |
Students reflect on their life after high school. In groups, they are tasked with finding facts about various Post-Secondary Education options through a virtual scavenger hunt. Acting like recruiters, students present their key information to the class. They then consider which school best matched their goals and, as a group, reflect on what appealed to them.
Essential Questions
What is college life like?
What do I need to know about a college before deciding to go there?
Learning Goals
Analyze personal goals for life after high school
Share important elements to consider about college life
Reflect on and share what you are looking for in a college
Materials List
Lesson Slides (attached)
Group Scavenger Hunt (attached; print front/back, one per group)
Campus Tweet Up (attached; one per student)
Blank paper (one per student)
Engage
20 Minute(s)
Introduce the lesson using slides 1-4. Move to slide 5 and hand out pieces of blank copy paper. Ask students to draw what their life will be like after high school. Ask them to include as much detail as possible. Give students time to work and then pair them up to share. Come together as a whole group and ask students: "What are you planning on for life after high school?" Have students answer the question and explain their drawings. Ask if anyone saw something in their partner’s picture that they hadn’t thought of themselves.
Explore
40 Minute(s)
Place students in groups of 3-4 or allow them to select groups themselves. Either assign a Post-Secondary Education (PSE) campus to each group, or allow groups to pick from the list you generated. Slide 6 includes expectations for completing the scavenger hunt. Give each group a copy of the Group Scavenger Hunt handout. Groups use keyword searches to visit their assigned campuses' websites and fill out their scavenger hunt handout. This is a good time to highlight looking for the .edu and .org extensions when gathering information about PSEs. Give students time to gather their information.
Explain
60 Minute(s)
Move to slide 7, which includes presentation expectations. Share with students that they are now the classroom experts on their campus. At a campus, the person who knows a lot about the culture of the school and prepares to share that with others is called an admissions recruiter. This person needs to highlight all that is great about the school to make others want to attend.
Explain to students that they are now the admissions recruiter for their PSE and need to create a presentation to share essential information about their PSE with the rest of the class. They need to make their school look as good as they possibly can. Challenge groups to limit their presentations to six slides. They need to give a lot of thought to how to share all of their great information from their scavenger hunt in a short amount of time. Students should include what they consider the key information on the scavenger hunt sheet, as well as key additional information the students find interesting or important about the PSE. Remind the students they are trying to recruit their peers to attend their PSE options. Students will vote at the end of the presentations on which PSE they would like to attend in the Extend phase.
Extend
40 Minute(s)
Display slide 8. Hand out a copy of the Campus Tweet Up handout to students. Tell students to use the Tweet Up strategy to take notes on the PSE presentations to inform their final decisions on which school they would most likely attend. Remind students that to remain fair, they should evaluate the schools based on the presentations and not their own outside information.
Groups present their assigned PSE presentation as if they are admissions recruiters for their PSE. Give each group about 5 minutes to present, depending on the class size and the number of groups. Remind the student audience to consider what they might "Tweet out." Have the students in the audience record their feedback on the Campus Tweet Up handout after each presentation using the following format: @CollegeName Reflection on the School #PositiveTake-Away.
Evaluate
10 Minute(s)
After all presentations, have students vote on which PSE they would most like to attend as a student. Using the Sticky Bar strategy, give each student three separate colored sticky notes. Each color corresponds with either third, second, or first place. Students should place their individual sticky notes on their top three choices using the appropriate sticky note colors.
Ask students to share out what influenced their choices.
In their presentation small groups, have students reflect on whether there were things they should have highlighted about their PSE but left out of their presentation. Remind students that we may not learn everything about a school in one search or even one in-person visit, so it is important to continue learning about a school and making sure it matches the goals they drew for themselves earlier in the lesson.
Follow-Up Activities
40 Minute(s)
To extend to individual interests, have students complete the College Board's College Search to identify potential PSEs based on their individual interests. https://bigfuture.collegeboard.org/college-search
Once students see the results based on their interests, they can research those PSEs to see if any would be a school to pursue after high school.
Research Rationale
College can be a life-altering experience for students, and not only academically. Here are just a few of the ways in which college can change students' lives for the better:
Earning a bachelor's degree allows students to earn, on average, $1 million more than high school graduates over the course of their careers (Abel & Deitz, 2014).
College offers students an opportunity to build relationships with mentors and peers that can benefit them throughout their careers (Campbell, Smith, Dugan, & Komives, 2012).
College graduates tend to have more job satisfaction, jobs that offer a greater sense of accomplishment, more independence and opportunities for creativity, and more social interactions in their jobs than noncollege graduates (Oreopoulos & Petronijevic, 2013).
College graduates increase their chance of employment: over the last 20 years, the unemployment rate for college graduates has been approximately half that of high school graduates (Bureau of Labor Statistics, 2018).
College helps students develop skills that prepare them for careers in the tech-driven economy, including nonroutine, abstract skills that aid in problem solving, multitasking, and creativity (Oreopoulos & Petronijevic, 2013).
Resources
Abel, J. R., & Deitz, R. (2014). Do the Benefits of College Still Outweigh the Costs? Current Issues in Economics and Finance, 20(3).
Bureau of Labor Statistics. (2018). Labor force statistics from the current population survey. Retrieved from https://www.bls.gov/cps/cpsaat07.htm
Campbell, C. M., Smith, M., Dugan, J. P., & Komives, S. R. (2012). Mentors and college student leadership outcomes: The importance of position and process. The Review of Higher Education, 35(4), 595-625.
College Board. (2022). College Search. Big Future. https://bigfuture.collegeboard.org/college-search
K20 Center. (n.d.) Sticky Bars. Strategies. https://learn.k20center.ou.edu/strategy/129
K20 Center. (n.d.) Tweet Up. Strategies. https://learn.k20center.ou.edu/strategy/130
Oreopoulos, P., & Petronijevic, U. (2013). Making college worth it: A review of the returns to higher education. The Future of Children, 23(1), 41-65.
The Cell Cycle Lab # 5
Presented by Kami Dykes
Rebecca
What is each stage of the Cell Cycle?
Interphase
Anaphase
Prophase
Telophase
Metaphase
Cytokinesis
http://www.biology.arizona.edu/cell_bio/tutorials/cell_cycle/cells3.html
The ALLIUM (onion root tip) is a region of fast-growing tissue where many cells are dividing at any given moment. Furthermore, these plant cells have very nicely defined cell borders (cell walls) and, unlike many animal cells, are easy to distinguish.
http://millette.med.sc.edu/Lab%203%20pages/cell_structure%20II_Lab3.htm
Some of the mitotic figures marked are
a.
Anaphase
b.
Prophase
c.
Interphase
This is late TELOPHASE at the arrow. Why is this a TELOPHASE figure?
In plants, telophase begins the moment you can first detect a cell plate beginning to form, and it does so from the middle or inside out, rather than pinching from the outside inward as in animal cells.
This shows a PROPHASE nucleus at (a). What structure do you think is shown at (b)? Can you find another prophase figure? How does it compare with that seen at (a)?
The nucleus; the nuclear envelope has dispersed.
This shows an ANAPHASE cell at the arrow. What stage of mitosis is the cell next to it exhibiting? In what stage are most of the cells seen here?
Metaphase; most of the cells are in prophase.
What is each phase of the Blastula (white fish)?
http://biog-101-104.bio.cornell.edu/BioG101_104/tutorials/cell_division/wf_review_fs.html
PROPHASE
is characterized by well-formed chromosomes and a nuclear envelope that is dispersing
METAPHASE
is characterized by chromosomes lined up in single file in the center of the cell.
ANAPHASE
is characterized by two groups of chromosomes moving towards the opposite poles of the cell.
TELOPHASE
is characterized by nuclei that have reached the poles of the cell and the onset of cytokinesis.
Can you label the parts?
Do you feel like it's long past time to totally transform information literacy instruction? If so, this indispensable new book by Joan Kaplowitz has everything you need to help you..
Transforming Information Literacy Programs
ISBN 10 : 083898603X
ISBN 13 : 9780838986035
The book raises a broad scope of themes including the intellectual, psychological, cultural, definitional and structural issues that academic instruction librarians face in higher ..
Worldwide Commonalities And Challenges In Information Literacy Research And Practice
ISBN 10 : 3319039199
ISBN 13 : 9783319039190
This book constitutes the refereed proceedings of the European Conference on Information Literacy, ECIL 2013, held in Istanbul Turkey, in October 2013. The 73 revised full papers p..
Information Literacy In The Workplace
ISBN 10 : 1783301325
ISBN 13 : 9781783301324
This book explains how information literacy (IL) is essential to the contemporary workplace and is fundamental to competent, ethical and evidence-based practice. In today’s infor..
Critical Literacy For Information Professionals
ISBN 10 : 1783300825
ISBN 13 : 9781783300822
This edited collection explores critical literacy theory and provides practical guidance to how it can be taught and applied in libraries. Critical literacy asks fundamental questi..
Library And Information Science
ISBN 10 : 0838911854
ISBN 13 : 9780838911853
This unique annotated bibliography is a complete, up-to-date guide to sources of information on library science, covering recent books, monographs, periodicals and websites, and se..
Information Literacy Lifelong Learning And Digital Citizenship In The 21st Century
ISBN 10 : 3319141368
ISBN 13 : 9783319141367
This book constitutes the refereed proceedings of the European Conference on Information Literacy, ECIL 2014, held in Dubrovnik, Croatia, in October 2014. The 93 revised full paper..
Ways Of Experiencing Information Literacy
ISBN 10 : 1780633246
ISBN 13 : 9781780633244
This book has two aims: firstly to present an investigation into information literacy by looking at how people engage with information to accomplish tasks or solve problems in pers..
Handbook Of Research On Transformative Digital Content And Learning Technologies
ISBN 10 : 1522520015
ISBN 13 : 9781522520016
Technology is constantly evolving and can now aid society with the quest for knowledge in education systems. It is important to integrate the most recent technological advances int..
The Fortuitous Teacher
ISBN 10 : 0081002408
ISBN 13 : 9780081002407
The Fortuitous Teacher: A Guide to Successful One-Shot Library Instruction discusses how librarians have become accidental teachers in the academic university setting. It covers ho..
Early Childhood Activities
Many states are now supporting all-day kindergarten programs. There are mixed views on the role that technology should and can play with this age group. There are many preschool software programs available covering almost any content area, especially mathematics and reading. The following lessons will demonstrate how technology can be integrated into the lesson. When creating early childhood lessons that integrate technology, the teacher needs to consider the same factors as the previous content lesson plans. The availability and location of computer hardware certainly affect the way activities will be presented and whether students will work individually or in small interest center groups. When using educational early childhood software, read the product's suggestions for maximizing the learning experience. It is a good idea to review these suggestions carefully before presenting activities. Some of the early childhood software comes with special keyboards or speaking devices. They may take several weeks to arrive after they are ordered. Make sure to test them before class use.
Teachers may also benefit from talking with other teachers about their early childhood technology integration experiences. For example, teachers who have used educational software before can judge how much knowledge—both of computer usage and of the subject being presented—is required for the student to operate the program effectively. Will special orientation sessions be needed before the children complete the assigned tasks? Are some activities in the program more interesting, more useful, or more challenging to students than others? Experienced software users can answer these and other questions that the new user may have. Planning is the key to integrating technology with this age group. Also, although many of the children have experienced some sort of technology, including electronic games, digital phones, and computers, be sure not to assume that all are technology savvy. Teachers are strongly urged to do some research of their own to see that the specific software is age appropriate and that it will work with the computers in the classroom.
Early Childhood Mathematics
Title of Activity: Animals using Beanie Babies Basics
In this activity, students use Beanie Babies to calculate amounts, sort and classify, and work cooperatively on a project. They use technology resources; they count and record, practice their problem-solving and communication skills, and illustrate their results, thoughts, and ideas.
In this activity students address the following National Educational Technology Standards (NETS) for Students: 1—Basic operations and concepts; 2—Social, ethical, and human issues; 3—Technology productivity tools; 4—Technology communication tools; 5—Technology research tools; and 6—Technology problem-solving and decision-making tools.
Description. This learning activity capitalizes on students’ fascination with Beanie Babies. Students bring their Beanie Babies to school (or other popular toys) to count, classify, tally, and graph according to student-selected categories such as “clothed or unclothed,” “feathers or fur,” “real or imaginary,” color, number of legs, and animal family.
Students create new Beanie Babies electronically, using their creations for mathematical comparisons and technological excursions and discoveries. After completing this study, students electronically contact another class and compare Beanie Babies data by email.
Note: Although this activity is constructed around the notion of Beanie Babies, any seasonal or popular toy can be used.
Activity Preparation. Have students bring in their favorite Beanie Baby or whatever object you decide to use.
Activity Procedure. Adjust this procedure to include age-appropriate activities, such as an introduction to animals and their characteristics using stuffed animals. Have students weigh and measure their Beanie Babies using a balance scale and either rulers (standard measurement) or Unifix cubes (nonstandard measurement). Record the data in a database and compare results.
Compare the Beanie Babies to the real animals they represent. Have students investigate each real animal through a multimedia encyclopedia to find the animal's length, weight, and other characteristics. After all comparisons have been made, the children can rank the real animals by size. Classify animals again by real size.
Design Beanie Babies using geometric shapes. Students may use software such as Shape Up from Sunburst Communications. These stylized Beanie Babies provide an opportunity to use geometric language and creative expression to name each new animal and write a story about it.
Contact another class by email to compare data for the Beanie Babies. Have the class share its data on Beanie Babies. Check the Beanie Babies website for input options. Use a search engine to connect to other resources about Beanie Babies.
Option: Have students apply knowledge about animal characteristics through KidPix and the online game Critter Characteristics, as well as by having students create quizzes.
Assessment. Entering characteristics of a Beanie Baby into a database will help assess students' abilities to identify characteristics as well as classify and sort by specific criteria.
Using the students’ completed graphs, assess students on their ability to construct the graph and interpret the results correctly by comparing data. Observe students working in groups and individually.
Keep anecdotal notes on students explaining their thinking. Evaluate students on their ability to explain solutions and discuss alternative ideas and approaches.
Comments. Various versions of this learning activity have been done using teddy bears, matchbox cars, stuffed animals, and so on. The phenomenon of Beanie Babies and the Internet, however, has added an entirely new dimension to the project. Children have seen parents buying Beanie Babies as investments, thus making them aware of the increasing value of Beanie Babies as posted on the Internet. Even young children have followed their Beanie Babies' increasing value. Some teachers have had their students graph the value of a specific Beanie Baby over time, speculating on its ultimate value when sold at a fictitious sale at the end of the school year. It may be helpful to have a parent or aide assist in visiting Beanie Babies Internet sites.
Tools and Resources
Software
KidPix, Internet, Database (e.g., Tabletop), drawing or painting, graphing (e.g., Tabletop, Graph Club, GraphPower), geometry (e.g., Shape Up by Sunburst Communications), multimedia encyclopedia (e.g., Encarta), CD-ROMs about animals
Other
- Beanie Babies
- Balance scales
- Rulers
- Unifix cubes
Websites
Beanie Babies Collection Birthday Roster: www.ohio-usa.com/beaniebabies/birthday.html
Finding keypals/project partners; epals Classroom Exchange: www.epals.com
Global Schoolhouse: www.gsn.org
Intercultural E-Mail Classroom Connections: www.iecc.org
Kids’ Space: www.kids-space.org
Ty Company: www.ty.com
Credits
Susan Nothwehr, Spencer Community School District, Spencer, Iowa ([email protected])
Frada Boxer, Evanston/Skokie School District, Evanston, Illinois ([email protected])
Source: Thomas, L., & Bitter, G. G. (2000). (Eds.). National Educational Technology Standards for Students: Connecting Curriculum and Technology. Eugene, OR: International Society for Technology in Education and The Digital Video Library (2005). Technology Based Learning & Research (TBLR), Arizona State University, Tempe, AZ.
Early Childhood Language Arts
Title of Activity: Awesome Authors
In this activity students use spoken, written, and visual language to communicate effectively. In addition, they use a variety of technological and information resources to gather and synthesize information, and create and communicate knowledge.
In this activity students address the following National Educational Technology Standards (NETS) for Students: 1—Basic operations and concepts; 2—Social, ethical, and human issues; 3—Technology productivity tools; 4—Technology communication tools; 5—Technology research tools; and 6—Technology problem-solving and decision-making tools.
Description. In this activity, students work in small groups to learn about plot development (beginning, middle, and end), character development, story structure, and creating parallel stories. Students are introduced to illustrations and associated copyright issues. They then write and illustrate their own stories. Students edit, revise, and publish their stories electronically. Parents and other students are encouraged to read and respond to students' published stories. Videotaped author interviews are conducted with students questioning their peer authors.
Activity Preparation. Adjust this activity to age-appropriate activities.
- Meet with the school library media specialist to identify an author to be studied.
- Identify available resources on or about the author (e.g., video, websites, print, CD-ROMs, software, audiotapes, laser discs).
- Locate and highlight the author’s website (if available).
- Assemble a list or a collection of the author's books to use as examples.
- Discuss illustrators and copyright issues.
- Identify style, composition, color, and media used by the author or illustrator.
Activity Procedure. Adjust this procedure to include age-appropriate activities.
- Introduce the class to Maurice Sendak. In the library, find books and other media about the selected author. Let students select their own books to read, but encourage them to select one by the author.
- Help students read about the author and become familiar with the author's life and work.
- Read and view a variety of stories by the selected author. Work in small groups to identify story plots and character features.
- Facilitate a brainstorming session for students to describe what they have learned about the author and what they still need to learn. Record what students have learned about the author and his work.
- Have students research Maurice Sendak on the Internet.
- Have students select a character from one of the stories they have read and include the character in a short story of their own. Students use electronic-publishing software to write and illustrate a story that parallels the author's story structure. Have students or adult helpers print copies of the stories for friends, family, and the class.
- Divide students into teams of three to develop questions and make plans to interview a peer author. Create a situation where students assume the role of a famous author. Set the scene, such as a talk show or book signing. Students take turns trying on the following roles: author being interviewed about a story, interviewer, and cameraperson who is making sure that the camera angles are correct and that the interview is properly recorded on video. (Ask for assistance from another adult or from an older student.) Compile all videos onto a single tape for distribution to parents and families.
Tools and Resources
Software. Internet, Concept-mapping (e.g., Expression, Inspiration), multimedia-authoring and presentation (e.g., HyperStudio, KidPix Studio), desktop publishing (e.g., Easy Book, Kid Works Deluxe)
With very young children, it might be a good idea to provide illustrations that spark ideas and lend a coherent plot to the story. An interesting variation of this lesson is for the teacher to create a form in which students can fill in their own names and other information to create a story about themselves. Most major software companies have specific writing software programs that allow students to write and illustrate their own storybooks. The Key Caps accessory allows for foreign-language stories. Some programs are also available in a second language. Students should be encouraged to decorate their book covers further with their own artwork. This lends color and creativity to the book. Giant George and Ruby Robot, BIG and Little (Sunburst Communications) requires a Muppet Slate but also allows students to create and design various publications of differing sizes up to 5 feet tall. Clifford's Big Book Publisher (Scholastic New Media) provides graphics, clip art, and fonts for younger students to write their own big books. Other related programs are Print Shop Deluxe (Broderbund), Hyperstudio (Roger Wagner Publishing), Kids Works Deluxe (Davidson & Associates, Inc.), My Own Stories (The Learning Company), Children's Writing and Publishing Center (The Learning Company), and The Multimedia Workshop (Davidson & Associates, Inc.).
Hardware
- Video camcorder
- TV
- VCR
Other
- Library reference materials, both print and nonprint (books, videos, CD-ROMs)
Websites
Aaron Shepard’s RT Page (reader’s theater): www.aaronshep.com/rt
Authors and Illustrators on the Web (guide): www.acs.ucalgary.ca/~dkbrown/authors.html
Carol Hurst’s Literature Site (reviews): www.carolhurst.com/
Celebrating Cultures with Tomie de Paola: http://7-bar.aps.edu/library/cultures.html
Children’s Literature (reviews): www.childrenslit.com/home.htm
Index to Internet Sites–Children’s and Young Adults’ Authors and Illustrators: http://falcon.jmu.edu/~ramseyil/biochildhome.htm
New York Public Library: www.nypl.org/chat
Read In!: www.readin.org
Scholastic Network: http://teacher.scholastic.com
Author Site
Maurice Sendak: http://www.pbs.org/wnet/americanmasters/database/sendak_m.html and http://falcon.jmu.edu/~ramseyil/sendak.htm
Other Author Sites
Mike Artell: www.mikeartell.com
Judy Blume: http://judyblume.com
Jan Brett: http://www.janbrett.com
Marc Brown’s Arthur site: www.pbskids.org/arthur
Eric Carle: www.eric-carle.com
Robert Quackenbush: www.rquackenbush.com
Janet Stevens: www.janetstevens.com
Dr. Seuss: www.seussville.com
Assessment. Were students able to formulate and type in original sentences?
Were they interested in the story?
Did the story have basic elements such as plot, setting, and characters? Assess students on:
- Their ability to work cooperatively in small groups
- Their contribution to the short story.
- Their participation and contribution to the online author research.
Develop a rubric to assess individual student stories. The rubric should cover age-appropriate mechanics, content, voice, grammar, spelling, characterization, plot, and the effective use of writing and multimedia-authoring software.
Comments. The notion of communities is no longer limited to just a neighbor- hood. Children bring to school perceptions of the world they have gained from tele- vision, networks, and computers. Technology opens the door for children to participate in information exchanges with children from various cultures and dif- ferent parts of the world. In this unit, students learn about various aspects of their communities, beginning with the family and extending to the greater community. Students then compare their communities with the communities of other children around the world. | https://learn-u.com/lesson/using-technology-in-the-classroom/early-childhood-activities/ |
Mbilo, Céline and Kabongo, Jean-Baptiste and Pyana, Pati Patient and Nlonda, Léon and Nzita, Raymond Williams and Luntadila, Bobo and Badibanga, Badivé and Hattendorf, Jan and Zinsstag, Jakob. (2019) Dog ecology, bite incidence, and disease awareness : a cross-sectional survey among a rabies-affected community in the Democratic Republic of the Congo. Vaccines, 7 (3). p. 98.
Abstract
Despite the existence of safe and efficacious human and animal rabies vaccines, millions of people remain at risk of exposure to this deadly zoonotic disease through bites of infected dogs. Sub-Saharan African countries, such as the Democratic Republic of the Congo (DRC), bear the highest per capita death rates from rabies, as dog vaccination and lifesaving post-exposure prophylaxis (PEP) are scarce there. Mass dog vaccination is the most cost-effective and sustainable approach to prevent human rabies deaths. We conducted a cross-sectional household survey in a rabies-affected community in Matadi, DRC, to estimate the size of the owned dog population and dog bite incidence and assess knowledge and practices regarding rabies, as preparation for future mass dog vaccination campaigns. Our study revealed that the owned dog population in Matadi was almost ten times larger than assumed by local veterinary officials, with a large proportion of free-roaming unvaccinated dogs. The annual dog bite incidence of 5.2 per 1000 person-years was high, whereas community rabies knowledge was low, resulting in poor practices. Given these findings, human rabies deaths are likely to occur in this community. Lack of disease awareness could negatively affect participation in future mass dog vaccination campaigns. A public sensitization campaign is needed to promote appropriate rabies prevention (washing bite wounds and PEP) and control (dog vaccination) measures in this community.
Please read this page in its entirety!
JOB DESCRIPTION & PHYSICAL REQUIREMENTS
LICENSED PRACTICAL NURSE
DESCRIPTION:
The LPN works under the immediate supervision of the Director of Nursing in optimizing health and wellness, and implementing nursing care for residents of Carobell.
DUTIES:
- Provide for the health and safety of individuals who reside at Carobell, in compliance with Standards of D.D. Nursing Practice and ICF/MR Guidelines; adhere to all nursing licensure rules. Attend to individuals participating in day programs & camps as needed.
- Conduct daily health inspection of each resident. Report acute illness and infectious diseases to the Director of Nursing. Report changes in seizure patterns, behavioral changes and other problems to the Director of Nursing.
- Document in each individual's Interdisciplinary Notes at least monthly, or more frequently as determined by acuity, illness and/or health activities. Complete the Quarterly Nursing Physical Exam and Health Summary, submitting them to the RN for review and co-signature.
- Verify accuracy and completion of information in individual Program Books, including Hab Technicians' Daily Notes, BM & seizure documentation, etc. Obtain physician's signature on Standing Orders semi-annually and other orders in a timely manner according to the policies and procedures.
- Order medications and review medications received in a timely manner. Monitor medication administration by Medication Technicians. Ensure proper documentation of regularly scheduled and PRN meds on the MARs as needed, and documentation in Controlled Substance Count sheets. Monitor storage and disposal of expired medications.
- Report any problems to the Director of Nursing.
- Monitor health care performance of Direct Care Technicians and provide hands-on training/instruction as needed. Inform the Director of Nursing of such needs/activities. Assist the Director of Nursing in reinforcing adherence to Exposure/Infection Control Procedures.
- Write thoroughly in the house Nurse/Staff Communication Book, noting diagnosis of new illness, treatment plan, medical orders from the Physician, new medications' possible effects, any changes made to the MAR, and any other pertinent information.
- Maintain adequate nursing/health-care supplies while preventing overstock/waste of medications and supplies.
- Ensure cleanliness, working order and adequate supplies for medical equipment such as feeding drip pumps, suction and nebulizer machines and oxygen tanks. Ensure proper storage and disposal.
- Assume on-call responsibilities on a rotating basis with the other nursing staff, as directed by the Director of Nursing.
- Attend doctor's appointments with the individual, and meet their needs. Communicate professionally and accurately regarding the individual's relevant medical history, medications and current needs. Ensure understanding of Physician orders before leaving the MD office, ED or hospital.
- Accurately transcribe MD/Dentist orders onto the Medication Administration Record, and ensure staff understanding of new orders in person with staff at hand, and also in the Nurse/Staff Communication Book. Communicate same to the Director of Nursing.
- Attend IHP, Team and Psychotropic Drug Review meetings, as directed by the Director of Nursing.
- Maintain professional, harmonious environment/ communication style with internal and external customers, including Hab Techs, Individuals and their Guardians/ Family, Residential Managers, QDDP’s and all other Carobell staff, as well as Physician & Dental office staff, Lab, Radiology, Emergency Depts & Inpatient areas, our delivery Pharmacy and back-up Pharmacy, etc.
- Perform other duties as assigned by the Director of Nursing.
Licensed Practical Nurse
Page #2
Recruitment Standard:
The position requires a current license issued by the North Carolina Board of Nursing and two years of experience. The employee must have a valid driver's license with no restrictions.
Kim Kardashian started this trend of matching her clothes with her cars last month, when she wore a neon green outfit, and even a wig, to match the neon green vehicles she drove during her family trip to Miami.
Now the same trend is being followed by her sister Kylie Jenner, who has channeled her 37-year-old sister Kim Kardashian's fashion choices by matching her clothes to her cars.
Both sisters are giving us new fashion trends to follow, and even making us match our fashion to our daily routines.
There are two distinct kinds of retributive justice. The classical definition embraces the idea that the amount of punishment must be proportional to the amount of harm caused by the offense. A more recent version, supported by Michael Davis, discards this idea and replaces it with the idea that the amount of punishment must be proportional to the amount of unfair advantage gained by the wrongdoer. Davis introduced this version of retributive justice in the early 1980s, at a time when retributive justice was making a recovery within the philosophy of law community, perhaps due to the multiple failures of reform theory in the previous years.
In the early period of all systems of law the redress of wrongs takes precedence over the enforcement of contract rights, and a rough sense of justice demands the infliction of proportionate loss and pain on the aggressor as he has inflicted on his victim. Incapacitation in the context of sentencing philosophy refers to the effect of a sentence in terms of positively preventing (rather than merely deterring) future offending. Imprisonment incapacitates the prisoner by removing them from the society against which they are deemed to have offended.
Cutting off the hand of a thief is also an example; this acts to prevent further thefts in a drastic manner, in addition to having a deterrent effect on others. Like deterrence, incapacitation can be specific to an individual and/or specific to a particular crime, or can be general in either respect. Deterrence is one of the primary objects of the Criminal Law. Its primary goal is to discourage members of society from committing criminal acts out of fear of punishment.
The most powerful deterrent would be a criminal justice system that guaranteed with certainty that all persons who broke the law would be apprehended, convicted, and punished, and would receive no personal benefit from their misconduct. United States policy of deterrence during the Cold War underwent significant variations. The early stages of the Cold War were generally characterized by ideology of Containment, an aggressive stance on behalf of the United States especially regarding developing nations under their sphere of influence.
Rehabilitation means to restore to useful life, as through therapy and education, or to restore to good condition, operation, or capacity. The assumption of rehabilitation is that people are not natively criminal and that it is possible to restore a criminal to a useful life, to a life in which they contribute to themselves and to society. Rather than punishing the harm out of a criminal, rehabilitation would seek, by means of education or therapy, to bring a criminal into a more normal state of mind, or into an attitude which would be helpful to society, rather than be harmful to society.
Although the importance of inflicting punishment on those persons who breach the law, so as to maintain social order, is retained, the importance of rehabilitation is also given priority. Humanitarians have, over the years, supported rehabilitation as an alternative, even to capital punishment. Restoration is to return what has been unjustly taken; to place the owner of a thing in the state in which he formerly was. By restitution is understood not only the return of the thing itself, but all its accessories. It is to return the thing and its fruits. Retributive justice began to replace this system following the Norman invasion of Britain.
William the Conqueror's son, Henry I, issued laws detailing offenses against the "king's peace." In the 20th century, restorative justice started becoming more popular. In my personal opinion, I just think we need to let them all ROT in prison, depending on the crime of course. If I had to choose from all of the choices, I think that rehab is one of the better options because it tries to make the person a constructive part of society and gives them another chance, but I think that there should be a limit to how many times someone can try to be rehabilitated, like two.
I think two is a good number because the first time is on them, and pretty much everyone deserves a second chance. If they mess up again after their first chance, try again, because everyone makes mistakes and everyone has those days; if they mess up yet again, they apparently won't learn their lesson.
In the northern part of Nigeria two vessels were used as vehicles for conquest by the Fulani: religion and language.
Islam was imposed by the force of arms and the use of the sword by Sheikh Usman Dan Fodio, whilst Hausa, the language of the conquered people of the old Habe kingdom, was adopted as the official language of the new Fulani caliphate.
As brilliant and calculating as ever, the Fulani ruling class insisted that Fulfulde, their own native language, was spoken only by the Fulani themselves.
To them theirs was the language of emperors, kings and conquerors and they refused to allow their subjects and vassals to speak or learn it because they considered them to be nothing more than serfs and slaves.
For the conquered it was Hausa that was to be spoken and not Fulfulde because the latter was considered to be too good for them.
There are over 100 distinct and independent ethnic groups in northern Nigeria, each with their own cultural and historical heritage and distinct language, yet they are all compelled to speak Hausa.
Some of them have even forgotten their own native language or never learnt to speak it in the first place. Others do not know who they are or where they are coming from.
Some do not even know that they ever had a language or heritage of their own. Everything for them, from beginning to end, has become Hausa. This speaks volumes.
Ironically the white Boers of apartheid South Africa, who were originally from Holland, adopted the same strategy of linguistic and cultural conquest when they arrived in southern Africa and established their hegemony and racist enclave in the 17th century.
Their native language was Afrikaans (which was a local derivative of the Dutch language) and, like the Fulanis of northern Nigeria, for many generations they ensured that only they were allowed to speak it.
The black South African natives were not taught Afrikaans and they were not allowed to speak it because it was considered to be the language of the elite ruling class and their racial masters.
Allowing them to speak it would bear the implication that they were on the same level as their religious, racial and cultural masters and that was unacceptable.
They were only allowed to communicate in their own native African languages and English. This was very effective and it essentially kept the conquered in their place whilst it exalted the conqueror. That is the power and secret of language and linguistic conquest.
It is for this reason that the people of France, for example, refuse to speak English with you when you are in their country even though their English may be fluent.
They recognize the fact that once they adopt another man's language as being the one that is commonly used in their own land it is an acceptance of subjugation.
They acknowledge the fact that such a concession or, to use a more appropriate word, "submission", is essentially an acceptance and wholehearted espousal of a sinister and subtle form of cultural and linguistic imperialism.
Worst still it is symptomatic of the fact that your own culture and language no longer bears any relevance or has any value.
The English, who are undoubtedly the masters of the game when it comes to the art of cultural and linguistic imperialism, did it successfully to the Scottish, the Welsh and the Irish, all of whom had and spoke their own distinct languages until they were conquered and subjugated and turned into vassal states.
Today few of the Scottish, Welsh and Irish people are able to speak their native languages anymore. The language that they all speak now is English, the language of their oppressors.
Those that imposed and established Hausa as the lingua franca in the north and those that seek to establish it as the lingua franca of Nigeria know what they are doing and why they are doing it and so do their British and, more recently, American friends and allies.
There is a long-term game plan unfolding and a not-so-hidden agenda. Yet sadly it is only those that are discerning, insightful, incisive and historically-literate that can possibly grasp or see it.
It is not for the dull, the unenlightened, the uneducated or the slow. Such souls cannot possibly grasp or understand such complex issues and one can hardly blame them for that because they are simply ignorant.
It is points like this that those in our country that know no better and that believe that Hausa is just a "unique language" which should be spoken and adopted by all in our nation fail to comprehend.
Such people fail to appreciate the fact that if you take a man's language and faith away and super-impose another on him, for whatever reason, that man loses his identity, his heritage, his culture and his history and he becomes absolutely nothing.
Once that is achieved he is successfully stripped bare of who and what he once was and all memory of the past is erased. That is tragic.
From the 18th century when the Caliphate was established in northern Nigeria, the forceful imposition of Islam and the Hausa language were used by the Fulani as tools of conquest and what the French describe as the "raison d'être" (which means "rationale") to dominate and rule over the people of that region.
They used both to reduce the so-called "minorities" of the north to slavery and servitude. They also used both to humiliate them and bring them to their knees.
It was brutal and ugly and it continued right up until the time that there was no more resistance and it was accepted as the norm.
Now they want to do the same thing to the rest of Nigeria. Many fail to appreciate or recognise this because they are shallow.
Yet a failure to fully grasp or appreciate such things will eventually lead to nothing less than slavery.
May God open our eyes, may He continue to guide us and may He give us courage and understanding. Shalom.
…Throughout 2015, the networks covered the minimum wage controversy many times. …Those stories favored wage increases and often failed to mention possible negative consequences. Anchors cheered for the CEO in Seattle who raised the salary of all his employees to $70,000 a year. Others stories prodded liberal politicians to chime in about wages, or interviewed multiple protesters demanding wage increases.
The proof that the three networks favored wage increases was obvious by who they turned to as sources in their stories:
CBS, ABC, and NBC morning and evening shows included four times as many pro-wage hike opinions as wage hike critics (41 to 10).
…On April 15, NBC’s Nightly News showed hundreds of “Fight for $15” protesters in footage from around the country. Reporter Stephanie Gosk’s story included seven voices for wage hikes (including five groups of protesters), without [any] balance from the other side.
The networks reacted with praise when Gravity Payments CEO Dan Price announced he’d set a $70,000 minimum salary for all of his employees. All together, the networks spent 18 minutes, 44 seconds praising Price’s decision, including the following comments:
The networks aired little coverage of possible problems (unintended consequences), and did not follow up three months later when the New York Times reported the problems that were surfacing with Price’s policy of giving every employee the same salary. Problems included employees quitting, partly due to the fact that newer employees saw their pay double while long-time employees saw little to no bump in salary.
CBS’s This Morning promoted minimum wage increases at McDonald’s as well. Instead of asking about potential downsides to raising the wage, Charlie Rose prodded McDonald’s CEO Steve Easterbrook twice about why the company had only raised the minimum wage in the corporate-owned stores but not required franchise owners to increase the minimum wage in their stores. …
Much later, on Sep. 1, 2015, CBS reported that Walmart cut employee hours after raising its minimum wage to $9 per hour. Walmart recently announced a 6-12% decline in earnings for 2016, with the company’s CFO attributing 75 percent of that decline to the company’s wage hikes.
A Dec. 28 op-ed for The Wall Street Journal used the Walmart figures to illustrate the Congressional Budget Office (CBO) conclusions that raising the federal minimum wage would actually cost low-wage workers their jobs. The February 2014 report indicated that raising the wage would reduce the amount each employee contributed to their company’s profit, and literally cost employees their jobs.
Much of the network bias over wages came from CBS’s This Morning, where hosts either hyped calls for higher wages or refused to challenge a guest’s promotion of wage hikes. In two different broadcasts, co-anchor Charlie Rose also prompted politicians to weigh in on the minimum wage.
[While some people] may have benefitted from the minimum wage, research suggests wage hikes could hurt many people. In Oct 2014, Vice President of the Competitive Enterprise Institute Iain Murray wrote that “The overwhelming majority of…studies into the effects of the minimum wage find that it erodes employment.”
One of the reasons, as Murray explained, is that an employer “sees the new wage cutting into his bottom line and he chooses to do other things rather than pay a marginally effective worker more than he thinks he is worth.” …
All three Democratic presidential candidates (O’Malley, Clinton, and Sanders) have called for federal minimums of at least $12. Sanders is a self-described “socialist” who introduced legislation last year to raise the federal minimum wage to $15 per hour.
CBS host Norah O’Donnell boosted Sanders when she interviewed Mayor of New York City, Democrat Bill De Blasio on July 23, 2015. During the interview, Rose raised the idea of a $15 minimum, and De Blasio explained that he was “working toward it every day.” O’Donnell followed up by asking, “So why not endorse Bernie Sanders?”
In 2014, the CBO did not evaluate the impact of a $15 minimum wage. The government agency calculated that raising the wage to just $10.10-an-hour would decrease company’s profit per employee by 71 percent, and cost the economy 500,000 low-wage jobs. A higher wage would almost certainly mean more jobs lost.
Rather than trying to promote Sanders, O’Donnell should have asked De Blasio tough questions about wages, such as “what level of wage increases causes jobs to be cut?”
(The above excerpt is adapted from a Media Research Center post by Sam Dorman, January 7, 2016)
Methodology: MRC Business searched Nexis for transcripts from ABC, CBS and NBC morning and evening news programs (Good Morning America, This Morning, Today, World News, Evening News and Nightly News) that contained the phrase “minimum wage.” Each story was examined to find all the people used as experts (non-network journalists) to see if they were speaking in favor of raising the minimum wage or critical of raising it.
Groups of protesters were counted only once if their chanting was clearly audible; if one individual said something on his or her own, that person was counted as an additional voice. The count did not include various protesters speaking in the background. MRC Business found 41 people or groups of protesters promoting raising wages while just 10 were against raises.
Then you shall take the anointing oil and anoint the tabernacle and all that is in it, and shall consecrate it and all its furnishings; and it shall be holy. As for you, the anointing which you received from Him abides in you, and you have no need for anyone to teach you; but as His anointing teaches you about all things, and is true and is not a lie, and just as it has taught you, you abide in Him. Then Samuel took the flask of oil, poured it on his head, kissed him and said, “Has not the Lord anointed you a ruler over His inheritance?” You did not anoint My head with oil, but she anointed My feet with perfume.
#8.500 We are commanded to anoint the "HOLY" temple with "HOLY Anointing Oil" -->> 1 Corinthians 3:16-17 ✔16 Know ye not that 👉YE ARE THE TEMPLE OF GOD,👈 and that the Spirit of God dwelleth in you? ✔17 If any man defile the temple of God, him shall God destroy; for 👉THE TEMPLE OF GOD IS HOLY,👈 which 👉TEMPLE YE ARE.👈 ✔ Exodus 40:9 And thou shalt take the 👉ANOINTING OIL,👈 and 👉ANOINT THE TABERNACLE, AND ALL THAT IS THEREIN,👈 and shalt hallow it, and all the vessels thereof: and 👉IT SHALL BE HOLY.👈 ✔ Revelation 1:6 And 👉HATH MADE US KINGS AND PRIESTS👈 unto God and his Father; to him be glory and dominion for ever and ever. Unlike other so-called "Holy Anointing Oils" out there, HAO Blessed Holy Anointing Oil is made to the exact specifications of God in Exodus 30:22-25 and contains ALL OILS specified in those verses... the FINEST, therapeutic-grade oils. ✔ 25 And thou shalt make it an oil of HOLY OINTMENT, an ointment compound after the art of the apothecary: it shall be an HOLY ANOINTING OIL. ✔ Mark 6:13 And they 👉CAST OUT MANY DEVILS👈 and 👉ANOINTED WITH OIL👈 many that 👉WERE SICK AND HEALED THEM👈 __________ HAO Blessed Holy Anointing Oil makes a GREAT gift!
#10.000 Anointing oils, use olive oil as a carrier. Olive oil is rich in vitamins E and K: vitamin E is a powerful antioxidant which helps fight free radical damage. These spices are blended with olive oil in this anointing oil. This Frankincense and Myrrh anointing oil is made in Israel, using natural Galilee and Jerusalem Virgin Olive Oil and scented with flowers and herb essences mentioned in the Bible. Let him call for the elders of the Church, and let them pray over him, anointing him with oil in the name of the Lord.
#5.000 Both proven antiseptic and anti-inflammatory properties Frankincense and Myrrh were components of the holy incense ritually burned in Jerusalem’s sacred temples Packed in ceramic Treasure Box in a gift box Frankincense is known for its use in consecration, meditation, protection, and purifying. Myrrh is known for protection, purification, healing, and magical potency
#5.000 This is an original anointing oil made with virgin olive oil from olive trees from Galilee, scented with biblical essences and Holy Land flower extract. When using this anointing oil in religious ceremonies, prayer or everyday life, it is well to remember that there is a link lasting over thousands of years which binds the believer to the Holy Land. Since ancient times, myrrh has been valued for its fragrance. Light of Jerusalem anointing oil is perfumed with myrrh, frankincense and cinnamon and is as close as possible to the ancient holy anointing oil it resembles. “And when they were come into the house, they saw the young child with Mary his mother, and fell down, and worshipped him: and when they had opened their treasures, they presented unto him gifts; gold, and frankincense and myrrh.”
#6.000 This Anointing Oil is made of virgin Galilee and Jerusalem olive oil, scented with Spikenard Oil and essences of flower extracts from the land of the Bible. Nard oil is used as a perfume, an incense, a sedative, and as herbal medicine said to fight insomnia, birth difficulties, and other minor ailments. The oil was known in ancient times and was part of the Ayurvedic herbal tradition of India. It was obtained as a luxury in ancient Egypt, the Near East, and Rome, where it was the main ingredient of the perfume nardinum. Nard is mentioned a number of times in the Old Testament.
The Mobility Crisis Threatening the Baby Boom Generation
Imagine living in a place where there is no public transit system. No bus, no taxi, no train, no subway. No public passenger transport at all. Unimaginable? Not at all. The town where I live in Texas has none of the above. There is a small airport, and a freight train goes through town, and long ago in the '40s there used to be a streetcar to Dallas, but not anymore. You could get a taxi to come get you from Dallas, but that would be expensive. So, if you are elderly and cannot drive, or do not own a car, how do you get around? In this town of 30,000 people, there is no public transportation. You will have to depend on friends and family to drive you in their cars. And you are not alone. Public transportation in suburban and rural areas, even in cities, is inadequate at best for many elderly and disabled people.
A report by Transportation for America titled, “Aging in Place, Stuck Without Options”, states that, by 2015, more than 15.5 million Americans 65 and older will live in communities where public transportation service is poor or non-existent. That number is expected to continue to grow rapidly as the baby boom generation “ages in place” with few transportation options for those who do not drive. The report ranks metro areas by the percentage of seniors with poor access to public transportation now and in the future, and presents other information on aging and transportation.
In the report, an analysis by the Center for Neighborhood Technology looks at metropolitan areas within each of five size categories. It shows that in just four years, 90 percent of seniors in metro Atlanta will live in neighborhoods with poor access to options other than driving. Other cities such as San Bernardino, Houston, Detroit and Dallas will have similar outcomes. Smaller cities such as Kansas City, Oklahoma City, Fort Worth, Nashville and Raleigh-Durham will also be affected, according to this analysis.
The term “aging in place” means that many people over 55 years of age would prefer to stay in their present residence in their old age, rather than move to an assisted living center, retirement center or other community. Between 2001 and 2009, the number of older Americans who do not drive increased by 1.1 million according to an AARP Public Policy report. If you choose to age in place in a community where there are limited public transportation options, then you are stuck, literally, stuck in place.
What’s the impact of this statistic in a place without local transportation? This means that people will skip doctor’s appointments, go without groceries, give up social and recreational visits, and become more and more socially isolated in their residence. It means that caregivers will also face a considerable burden in their tasks also, unless they drive.
What’s the solution?
1. Agency on Aging
According to the U.S. Government Accountability Office, 62 federal programs fund transportation services primarily for older adults, people with disabilities and low-income individuals. Ask at your local Area Agency on Aging for information about a program which operates in your area and for which you are eligible. There might be help available.
2. City Officials, State Transportation Dept. and your Congressmen
Bring attention to the need for accessible public transportation to your city officials, state transportation department and your Congressmen. Write and call about the problem and see what governmental efforts are being directed to the need. “The squeaky wheel gets the grease.”
3. Hospitals, Local Churches and Non-profit agencies
See if your local hospital or clinic would coordinate with a bus or van company to help get patients to appointments by creating a regular route and schedule for fare-paying patients. I know that some of the major hospitals in Dallas offer patient transportation. Perhaps, local churches or non-profit agencies which have a van or bus would get involved in bringing clients in for services by offering rides on certain days. There is usually limited transportation to shopping and recreation opportunities at most public recreation or senior centers.
4. Taxi companies and Bus lines
Let local transportation companies, like taxi companies or bus lines, know that there is a need and a market for their services in your neighborhood. Get together with other seniors and others who need transportation and form a coalition to get entrepreneurial businessmen to realize that there is a market wherever there is a population which needs transportation.
5. Friends, Neighbors and Volunteers
Organize drivers in your own circle of influence who can be relied upon to be safe and willing drivers on a regular basis. This would mean contacting friends from church, from work, from the neighborhood who are willing to volunteer or receive payment for driving.
There is no easy answer for every person in every community, but access to transportation is an important consideration for caregivers and care-receivers in making life livable wherever one lives.
See the full report here.
Any thoughts? Please let us know. | https://www.caregiver-aid.com/going-places/ |
ISBN:
9781592334957 (pbk.)
Publisher, Date:
Beverly, MA : Fair Winds Press, c2012.
Description:
176 p. : col. ill. ; 21 cm.
Other Title:
Let us play and learn together
Notes:
Includes index.
Contents:
Play ideas for learning and loving
Let's talk : developing new vocabulary and communication skills
Strong and healthy : building motor skills, strength, and agility
All about me! : developing a positive self-image
Let's pretend : developing imitation and imagination skills
Alphabet and number games : promoting alphabet skills and mathematical thinking
Scientific thinking : understanding the world around us
Making friends : games and activities for developing social skills, empathy, and kindness
Laugh with me : games and activities for promoting a sense of humor
Icky, sticky fun : fostering creativity and sensory awareness
Play activities and ideas for making life with your child fun and hassle-free
Play begins at home
Places to go and people to meet
Tried and true : classic songs and rhymes for babies, toddlers, and preschoolers.
LCCN:
2011036000
Subjects:Infants -- Development.
Parent and infant.
Play.
Other Authors:Masi, Wendy S. | https://pahkisimon.ca/sm/search/item/830172 |
UV filter decomposition. A study of reactions of 4-(2-aminophenyl)-4-oxocrotonic acid with amino acids and antioxidants present in the human lens.
Deamination of UV filters, such as kynurenine (KN), in the human lens results in protein modification. Thermal reactions of the product of kynurenine deamination, 4-(2-aminophenyl)-4-oxocrotonic acid (CKA), with amino acids (histidine, lysine, methionine, tryptophan, tyrosine, cysteine) and antioxidants (ascorbate, NADH, reduced glutathione) were studied. The rate constants of the reactions under physiological conditions were measured. The rate constants of CKA addition to cysteine, $k_{\mathrm{Cys}} = 36 \pm 4\ \mathrm{M^{-1}\,s^{-1}}$, and to glutathione, $k_{\mathrm{GSH}} = 2.1 \pm 0.2\ \mathrm{M^{-1}\,s^{-1}}$, are 4-5 orders of magnitude higher than the rate constants of CKA reactions with the other amino acids and antioxidants. The Arrhenius parameters for $k_{\mathrm{Cys}}$ and $k_{\mathrm{GSH}}$ were determined: $A_{\mathrm{GSH}} = (1.8 \pm 0.7) \times 10^{5}\ \mathrm{M^{-1}\,s^{-1}}$, $E_{\mathrm{GSH}} = 29.2 \pm 5.6\ \mathrm{kJ\,mol^{-1}}$; $A_{\mathrm{Cys}} = (2.7 \pm 0.9) \times 10^{8}\ \mathrm{M^{-1}\,s^{-1}}$, $E_{\mathrm{Cys}} = 40.4 \pm 5.7\ \mathrm{kJ\,mol^{-1}}$. The large difference in frequency factors for $k_{\mathrm{Cys}}$ and $k_{\mathrm{GSH}}$ is attributed to steric hindrance, peculiar to the bulky GSH molecule.
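As a plausibility check on the transcribed numbers, the Arrhenius parameters should reproduce the measured rate constants via $k = A\,e^{-E/RT}$. A minimal sketch, assuming $T = 310\ \mathrm{K}$ for "physiological conditions" (an assumption; the study may have used a slightly different temperature):

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 310.0   # assumed physiological temperature, K

def arrhenius_k(A, Ea_kJ_per_mol):
    """Rate constant k = A * exp(-Ea / RT), with Ea given in kJ/mol."""
    return A * math.exp(-Ea_kJ_per_mol * 1e3 / (R * T))

print(arrhenius_k(2.7e8, 40.4))  # ~42 M^-1 s^-1, same order as the quoted 36 +/- 4
print(arrhenius_k(1.8e5, 29.2))  # ~2.2 M^-1 s^-1, close to the quoted 2.1 +/- 0.2
```

Both computed values land on or near the reported error bars, a useful sanity check on the parameters as transcribed.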
| |
A group of Democratic senators on Friday sent a letter to Facebook urging the company to take steps to curb coronavirus misinformation that is not in English.
The letter from Sens. Bob Menendez (D-N.J.), Mazie Hirono (D-Hawaii) and Elizabeth Warren (D-Mass.) notes that more than 60 million Americans speak another language and cites a study finding that Facebook fails to issue warning labels on content in those languages at a much higher rate.
“Unfortunately, it continues to be far too easy for anyone to share false, misleading and potentially dangerous misinformation and disinformation about the virus [on Facebook-owned platforms],” the lawmakers wrote to Facebook CEO Mark Zuckerberg.
“As the world grapples with COVID-19 by remaining at home, millions of Americans and billions of people around the world are turning to social media platforms like Facebook as a way to connect with their loved ones," they added.
The study cited by the lawmakers from nonprofit activism group Avaaz found that the social media giant failed to issue warning labels on 70 percent of Spanish-language content and 68 percent of Italian, compared to 29 percent of English.
"This is a significant gap exposing non-English speakers to the perils of misinformation and disinformation,” the lawmakers wrote.
"Making sure that people, especially our most vulnerable communities, in the United States and around the world receive the most accurate information about how to prevent and protect themselves from COVID-19 is not only a moral imperative, it’s the only way we can beat this virus together," they added.
Conspiracy theories and unfounded claims about the coronavirus, its origins and ways to combat it have surged on social media in step with the disease itself, causing what the WHO has branded an “infodemic.”
Facebook has taken several steps to combat the spread of that disinformation, elevating information from trusted sources and limiting the spread of potentially harmful posts.
The social media giant also began notifying users who have interacted with misinformation about the coronavirus and connecting them with debunkings of common misinformation.
It has also donated $1 million to fact-checkers around the world and added eight new fact-checking partners in the last two months.
Facebook spokesman Andy Stone confirmed the company received the letter. | https://thehill.com/policy/technology/498017-democratic-senators-urge-facebook-to-curb-coronavirus-misinformation-in |
List the prime factors of 31926.
2, 3, 17, 313
List the prime factors of 145359.
3, 31, 521
What are the prime factors of 1685?
5, 337
What are the prime factors of 3016?
2, 13, 29
What are the prime factors of 34514?
2, 17257
List the prime factors of 2103.
3, 701
What are the prime factors of 756?
2, 3, 7
List the prime factors of 112882.
2, 7, 11, 733
What are the prime factors of 2808?
2, 3, 13
What are the prime factors of 56559?
3, 17, 1109
What are the prime factors of 79883?
17, 37, 127
What are the prime factors of 1327?
1327
What are the prime factors of 1086?
2, 3, 181
List the prime factors of 8016.
2, 3, 167
What are the prime factors of 38427?
3, 12809
List the prime factors of 175.
5, 7
List the prime factors of 6202.
2, 7, 443
List the prime factors of 93464.
2, 7, 1669
What are the prime factors of 3260?
2, 5, 163
List the prime factors of 6616.
2, 827
List the prime factors of 54152.
2, 7, 967
List the prime factors of 810.
2, 3, 5
List the prime factors of 124.
2, 31
List the prime factors of 713.
23, 31
List the prime factors of 7302.
2, 3, 1217
List the prime factors of 461.
461
What are the prime factors of 698?
2, 349
What are the prime factors of 436?
2, 109
What are the prime factors of 30987?
3, 11, 313
List the prime factors of 3931.
3931
What are the prime factors of 539?
7, 11
List the prime factors of 655.
5, 131
List the prime factors of 1447.
1447
What are the prime factors of 8457?
3, 2819
List the prime factors of 47645.
5, 13, 733
List the prime factors of 541.
541
List the prime factors of 4595.
5, 919
List the prime factors of 1011.
3, 337
What are the prime factors of 5275?
5, 211
List the prime factors of 97388.
2, 97, 251
What are the prime factors of 20739?
3, 31, 223
List the prime factors of 150.
2, 3, 5
What are the prime factors of 12774?
2, 3, 2129
List the prime factors of 1719.
3, 191
List the prime factors of 2026.
2, 1013
List the prime factors of 1397.
11, 127
What are the prime factors of 1994?
2, 997
What are the prime factors of 1597?
1597
What are the prime factors of 29488?
2, 19, 97
List the prime factors of 923.
13, 71
What are the prime factors of 1877?
1877
List the prime factors of 74.
2, 37
What are the prime factors of 64181?
13, 4937
List the prime factors of 46598.
2, 23, 1013
What are the prime factors of 553?
7, 79
What are the prime factors of 14200?
2, 5, 71
What are the prime factors of 58696?
2, 11, 23, 29
List the prime factors of 70256.
2, 4391
List the prime factors of 2152.
2, 269
List the prime factors of 312.
2, 3, 13
List the prime factors of 1925.
5, 7, 11
What are the prime factors of 178934?
2, 7, 12781
List the prime factors of 26055.
3, 5, 193
What are the prime factors of 3921?
3, 1307
List the prime factors of 23156.
2, 7, 827
What are the prime factors of 137538?
2, 3, 283
List the prime factors of 12147.
3, 4049
What are the prime factors of 98046?
2, 3, 13, 419
What are the prime factors of 448?
2, 7
List the prime factors of 810.
2, 3, 5
List the prime factors of 4106.
2, 2053
List the prime factors of 6893.
61, 113
List the prime factors of 97154.
2, 31, 1567
What are the prime factors of 1291?
1291
What are the prime factors of 104?
2, 13
What are the prime factors of 3140?
2, 5, 157
List the prime factors of 25096.
2, 3137
What are the prime factors of 954?
2, 3, 53
List the prime factors of 2226.
2, 3, 7, 53
What are the prime factors of 328?
2, 41
List the prime factors of 12706.
2, 6353
What are the prime factors of 1216?
2, 19
List the prime factors of 1277.
1277
What are the prime factors of 9227?
9227
List the prime factors of 732.
2, 3, 61
What are the prime factors of 664?
2, 83
List the prime factors of 19184.
2, 11, 109
What are the prime factors of 43885?
5, 67, 131
What are the prime factors of 3689?
7, 17, 31
List the prime factors of 4137.
3, 7, 197
What are the prime factors of 22101?
3, 53, 139
What are the prime factors of 11832?
2, 3, 17, 29
List the prime factors of 5580.
2, 3, 5, 31
List the prime factors of 607.
607
What are the prime factors of 2245?
5, 449
What are the prime factors of 17212?
2, 13, 331
List the prime factors of 1665.
3, 5, 37
What are the prime factors of 32893?
7, 37, 127
What are the prime factors of 1553?
1553
List the prime factors of 465.
3, 5, 31
List the prime factors of 5530.
2, 5, 7, 79
List the prime factors of 18523.
18523
What are the prime factors of 82062?
2, 3, 47, 97
List the prime factors of 6882.
2, 3, 31, 37
List the prime factors of 11897.
11897
List the prime factors of 15149.
15149
What are the prime factors of 93?
3, 31
List the prime factors of 1759.
1759
List the prime factors of 1903.
11, 173
List the prime factors of 27024.
2, 3, 563
What are the prime factors of 27156?
2, 3, 31, 73
List the prime factors of 243.
3
What are the prime factors of 276?
2, 3, 23
List the prime factors of 245.
5, 7
What are the prime factors of 2500?
2, 5
List the prime factors of 94017.
3, 7, 11, 37
What are the prime factors of 16855?
5, 3371
List the prime factors of 720.
2, 3, 5
What are the prime factors of 58?
2, 29
What are the prime factors of 3081?
3, 13, 79
What are the prime factors of 8008?
2, 7, 11, 13
What are the prime factors of 1092?
2, 3, 7, 13
What are the prime factors of 41624?
2, 11, 43
What are the prime factors of 9180?
2, 3, 5, 17
List the prime factors of 1236.
2, 3, 103
What are the prime factors of 16940?
2, 5, 7, 11
List the prime factors of 95290.
2, 5, 13, 733
What are the prime factors of 65907?
3, 2441
List the prime factors of 4459.
7, 13
List the prime factors of 7638.
2, 3, 19, 67
What are the prime factors of 13953?
3, 4651
What are the prime factors of 15512?
2, 7, 277
List the prime factors of 616.
2, 7, 11
List the prime factors of 1944.
2, 3
List the prime factors of 14939.
14939
What are the prime factors of 644?
2, 7, 23
What are the prime factors of 7465?
5, 1493
What are the prime factors of 2931?
3, 977
List the prime factors of 41869.
149, 281
List the prime factors of 39613.
7, 5659
List the prime factors of 117006.
2, 3, 19501
List the prime factors of 110895.
3, 5, 7393
List the prime factors of 1194.
2, 3, 199
List the prime factors of 1660.
2, 5, 83
List the prime factors of 99637.
17, 5861
What are the prime factors of 9161?
9161
List the prime factors of 22.
2, 11
What are the prime factors of 2542?
2, 31, 41
List the prime factors of 3377.
11, 307
What are the prime factors of 3170?
2, 5, 317
List the prime factors of 81647.
81647
What are the prime factors of 72099?
3, 8011
List the prime factors of 128137.
97, 1321
What are the prime factors of 3353?
7, 479
What are the prime factors of 1229?
1229
List the prime factors of 17860.
2, 5, 19, 47
What are the prime factors of 246?
2, 3, 41
What are the prime factors of 100?
2, 5
What are the prime factors of 134494?
2, 67247
What are the prime factors of 2117?
29, 73
List the prime factors of 49075.
5, 13, 151
List the prime factors of 709.
709
What are the prime factors of 17301?
3, 73, 79
List the prime factors of 17585.
5, 3517
What are the prime factors of 38634?
2, 3, 47, 137
List the prime factors of 56244.
2, 3, 43, 109
List the prime factors of 1930.
2, 5, 193
List the prime factors of 7027.
7027
List the prime factors of 561.
3, 11, 17
List the prime factors of 270.
2, 3, 5
What are the prime factors of 3968?
2, 31
What are the prime factors of 32216?
2, 4027
What are the prime factors of 424?
2, 53
What are the prime factors of 7010?
2, 5, 701
List the prime factors of 594.
2, 3, 11
List the prime factors of 11665.
5, 2333
What are the prime factors of 2037?
3, 7, 97
What are the prime factors of 265?
5, 53
List the prime factors of 1170.
2, 3, 5, 13
What are the prime factors of 23229?
3, 29, 89
What are the prime factors of 165?
3, 5, 11
What are the prime factors of 6645?
3, 5, 443
What are the prime factors of 6437?
41, 157
List the prime factors of 1322.
2, 661
List the prime factors of 35099.
35099
List the prime factors of 893.
19, 47
What are the prime factors of 1909?
23, 83
What are the prime factors of 69271?
53, 1307
What are the prime factors of 455?
5, 7, 13
List the prime factors of 10175.
5, 11, 37
What are the prime factors of 307?
307
What are the prime factors of 20224?
2, 79
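A note on convention, illustrated in code: every answer above lists each prime factor once, in increasing order, ignoring multiplicity (e.g., 243 = 3^5 is answered simply "3"). A minimal trial-division sketch that reproduces this convention (my own illustration, not part of the drill set):

```python
def distinct_prime_factors(n):
    """Sorted distinct prime factors of an integer n >= 2, ignoring multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:   # strip the full power of d
                n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

assert distinct_prime_factors(31926) == [2, 3, 17, 313]
assert distinct_prime_factors(243) == [3]
```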
| |
Egrets are found throughout Florida.
A Snowy Egret at the water's edge as the sun sets.
A Cattle Egret in partial courtship plumage -- the buff-colored feathers on the head.
Cattle egrets are so named because they're often seen perched on the backs of cattle. | https://nsis.org/gallery/brd-egret.html |
Ozzy Osbourne is tearing up Facebook. As of 2019-11-03, Ozzy Osbourne has 11,870,426 likes on their Facebook page, 10,896,329 followers and has been posting on the page since October 19, 2007. We analyzed the couple-dozen latest posts from Ozzy Osbourne and found some things of interest. First, check out the most shared post (ID 10157678556557318) with a whopping 2,079 shares, 10,961 reactions (including 1,789 likes, 70 ‘loves’ and 26 ‘wow’s) and 288 comments:
This post has ZERO shares (yes – not a single one as of the time of writing), zero comments and 1,659 total reactions, with 1,326 likes, 316 loves, 14 wow reactions, 3 haha reactions, zero angry reactions and zero sad reactions.
THE BOTTOM LINE: We did a ton of research, digging through all of Ozzy Osbourne’s latest social media postings, their stats, and what people are saying about them. This is our conclusion AS OF 2019-11-03: Ozzy Osbourne is…
Going excellent 🙂
Sure Ozzy Osbourne has their detractors, but almost all their comments are very positive lately, so we’re confident in this conclusion. But what do you think? Do you love Ozzy Osbourne… or hate them? Is their popularity deserved? Let us know in the comments. | http://amarketreportsjournal.com/2019/11/04/is-ozzy-osbourne-causing-anger-on-facebook-2019-11-03/ |
Often the most neglected yet important aspect of growing healthy hair is the vitality of the scalp. I found that maintaining the scalp and keeping it conditioned, invigorated, and inflammation-free is key. It’s easy to overlook the scalp; after all, it is hidden beneath a mane of hair, so dryness or irritation can be quite literally veiled. But the scalp is also exposed to whatever you place on it, so the principal consideration is generally to avoid overwashing. Washing two to three times per week was sufficient for me, which also helps maintain the scalp’s microbiome and leaves its natural sebum production intact.
In Chinese medicine, hair growth is related to healthy liver and kidneys. The herb traditionally prescribed for hair growth is called He Shou Wu. During my program, I added it to my smoothies. I also started every morning with a cup of warm water, half a lemon, and liquid artichoke extract, which I found at the health food store and which is said to detoxify the liver.
May 24, 2015 at 1:17 am Health & Fitness Nation Now Coconut oil works the best for my type of hair. I have thick hair so if you have thick hair too I would suggest to use coconut oil. I saw result in about 2 weeks.
Two months after being septic, my hair started to fall out; I should say I was “shedding.” It did not come out in clumps but fell out when I brushed my hair, washed it, or just ran my fingers through it, fairly evenly all over except in back, where I started to get a bald spot. From all my research, I found out that my hair “went to sleep” due to the shock the sepsis gave my body. It has stopped shedding and is slowly growing back, curly! I don’t mind, just as long as it grows!!
As of right now, there is no proven scientific remedy to “turn on” the anagen (“active growth”) phase, which would allow hair to grow faster. However, keeping your hair healthy while in the anagen phase can extend the phase’s length, offering each strand more time to grow before it’s time to fall out.
I am on birth control, but I never realized how many side effects there are… I hope these tips help; I want to grow my hair out a lot longer.
All I did was warm castor oil, coconut oil, almond oil, and jojoba oil in the microwave for 10 seconds and put the mixture in my hair.
Growing long, beautiful hair doesn’t happen overnight. After all, the average human grows hair at a rate of half an inch per month. Follow these tips, be diligent, and have patience, knowing that you are on your way to longer, stronger, healthier hair.
Biotin – a B vitamin found naturally in such foods as salmon, Swiss chard, wheat germ, egg yolks, chicken, and almonds – is known to prevent hair from falling out, in addition to a long list of other health benefits. Biotin can also be taken in supplement form by busy individuals who are unable to commit to a steady eating schedule. | https://iceev.com/make-my-hair-grow-how-to-make-your-hair-grow/
Versatile, charismatic, confident, passionate, hilarious – these are the words that come to my mind when I think of the tremendous acting prowess that was Sridevi. I remember hearing “Megha Re Megha” ring from our black double-speaker stereo that sat on a small stool in my New Jersey kitchen, connecting me and bringing me back to my motherland in an instant. I think that if India could be personified, she would be Sridevi – her iconic pan-Indian career spanned fifty years and five languages, highlighting the richness of our culture and all the diversity that exists within the subcontinent.
Not too long before her death, I started following Sridevi on Instagram, noting how she had changed since she was “Hawa Hawai” in “Mr India”. Still elegant and graceful, Sridevi seemed like a happening yet doting Mumbai mother. Her feed was filled with photographs of her family, stills of her wearing beautiful high-fashion couture and hanging out with Bollywood’s best. However, it was pretty obvious that her appearance had to be solid and perfect for her to stay relevant with today’s age group, a dangerous trend that I feel is causing much harm to Bollywood’s leading ladies and our national identity at large.
I think that Sridevi’s death can open up an important discussion of the beauty standards that are becoming normalized in the Indian world, the stress they may cause, and how we can change the conversation about what it really means to be beautiful within any skin color, caste, and religion, and most importantly – age.
Just eight months ago, Sridevi’s film “Mom” released to decent success at the box office. Then 54, the actress portrayed a mother avenging her daughter’s rape, acting in scenes that required her to hold a handgun, break into homes, and stand up to villains. Notably, her acting in these roles was quite different from what most actresses her age are asked to do in a film.
As a quick comparison, Jaya Bachchan was 52 when she played Nandini Raichand in “Kabhi Khushi Kabhi Gham”… her role required her to act as a quintessential, homely Indian mother conducting aarti processions during Diwali, sacrificing her life for her family’s, her voice for her husband’s and her mornings for getting him ready for work. If you note the difference in appearance between Sridevi at 54 and Jaya Bachchan at 52, I believe it would be an insightful indicator as to the increasing demands women have to look a certain way if they want to have access to more diverse roles in Bollywood.
It would be difficult to imagine that anyone would think to cast Jaya Bachchan in the same role that Sridevi played in “Mom”, mostly because the collective Indian consciousness considers her to be an “aunty” or “mummy” type. In order for Mrs Bachchan to have gotten a glimpse of consideration for a similar part, she would likely have had to appear 10 to 15 years younger than her actual age. Why can’t our “aunty”-like and “mummy”-like older actresses be at their natural weight, natural skin colour and full Indianness when playing these non-traditional roles?
For Sridevi, I imagine that the pressure to stay relevant with today’s fan base required an intense relationship with her physical health, whether it was through cosmetic surgery or a strenuous workout regimen. Which leads me to ask, where have our values gone as a nation? Why do we want our non-Western and fully Indian actresses to fit into a westernized box?
I firmly believe that mothers are the soul of every nation, but even more so, I believe this is true in the Indian context. Mothers weave together our different ancestries by acting as living family trees, feeding us ancient medicinal remedies, ageing with grace and splendour and much more. But I feel what’s most comforting about a strong, idyllic Indian mother is that she owns her beauty in its traditional context, not wanting to conform to the ways of the world. When we consistently send our mothers and our women a message about how their value is no longer relevant if they don’t look a certain way, can’t speak English well, or dress too traditionally – I think we are on the path to becoming extremely lost as a nation.
One quick Google search can yield hundreds of results about Hollywood actresses who have undergone plastic surgery procedures to crystallise their faces with eternal youth, an insanity that causes women to sacrifice their self-identity for the collective consciousness of “beauty.” Given the endless amount of pressure that modern women already face to be ambitious career seekers and maintain family relationships, normalising a national narrative that then tells women their looks aren’t American or European enough is an assault on our identity.
Classic Indian features such as wide noses, hairiness, dark skin and even dark hair are considered to be outdated in much of Bollywood. Unfortunately, our industry and country at large are going through a phase of disowning the natural features that define us as a people. Colourism is a social disease that pumps over 23.3 billion rupees into an ever-growing market that aims at killing our inherent Indian skin pigment that white individuals spend hours on foreign and exotic beaches to achieve.
Unfortunately, our very own Sridevi is no exception to this phenomenon. Although I feel it is unfair to speculate on what caused her death, I think it is worthy to note the dramatic change in her appearance over the years. It is evident that her skin appears to be lighter as she has aged, reminding many of us of how dark-skinned actresses such as Rekha have traditionally been treated in Bollywood. Avoiding rejection is something that women have to do on a regular basis to live under the threat of Patriarchy, and with Bollywood being a male-dominated industry, our men require women to achieve superhuman and non-native standards of beauty to fit in.
It is without question that Bollywood actors who fit the mould are quickly catapulted to God-like status in India without much second thought. The sheer influence that celebrities have on the national psyche is pretty remarkable, and a woman like Sridevi was no exception.
Before Sridevi, films depended on male stars to carry them into high-profits and shatter box office records. Sri broke the mould in this regard, paving the way for Madhuri Dixit and other now-famous female stars to headline blockbuster films and achieve national fame. Her remarkable career is a testament to the potential influence that just one individual can make in an industry and a country at large.
The social influence that caused Sridevi’s changing appearance has also rejected large numbers of our Indian society as well, more specifically, the less-privileged. Dalits and other lower caste individuals tend to be darker skinned and as a result of this and many other socio-cultural reasons have been persecuted heavily in our country throughout history. By making Fair & Lovely as popular in our bathroom cabinets as Parle-G crackers are in our kitchen cabinets, we are reinforcing a rhetoric that decrees darkness as an inferior characteristic.
I feel that India’s rapid economic growth is going to be a catalyst for even more division between our already heavily-divided country. As always, large cities like Mumbai will continually industrialise while leaving villagers and “gao”-folk in the dust, creating more exclusionary and less intersectional spaces. Terms like “bain-ji” have started to emerge more strongly in the elite-rhetoric, casting off anyone who does not fit the standard for what the new India should look like – light, English-speaking, and dressed in Western clothing.
Sridevi’s death also marks the demise of the modern Indian woman meant before Westernisation took over. The actress bridged a gap between traditional and modern; she could pull off a stellar classical dance ensemble in “Nagina” and also incorporate more Western moves into her repertoire like in “Hawa Hawai.” Her ability to integrate two conflicting ideologies into her persona made her versatility that much more marketable and impactful, and much easier to connect with. Sridevi’s ability to embrace trends that came with changing times was never overshadowed by her Indianness. We see this clearly in “English Vinglish”, where she plays a housewife and entrepreneur moving to the United States. In fact, in this film, she is the perfect example of what it means to become modern instead of losing oneself in Westernisation, and reminding us that the two terms do not need to be interchangeable.
What’s really sad is that many of our new actresses won’t be able to do what early Sridevi excelled at: creating and mastering authentic content to reach the heartland and outlying states in India. Sridevi provided a universality that was rooted in a connection to our collective heritage; she glued us together in a way that an Alia Bhatt or an Anushka Sharma would never be able to do. The supersaturation of social media in Indian hands is also ailing our country by bringing Western expectations to our feeds each and every morning, making the emergence of a new Sridevi almost impossible.
The true measure of an excellent Indian actress is one who is able to bridge the gaps between who we are and who we need to become in order to progress as a nation. Progress can be defined in many ways, and while a growing economy may be able to lift many of our fellow Indians out of poverty, losing culture and our national identity is not something that we will be able to reconcile with checks, ledgers or even demonetisation. Like Sridevi, we must learn to become both the Nagin and the Bijli ki Rani; on the one hand strong, assertive and protective of our culture and the other willing to adjust our sails with the changing winds of time. Let’s hope we can make her proud. | https://www.youthkiawaaz.com/2018/03/remembering-sridevi/ |
Event led by Janice Soderholm
Join us as we come together for an uplifting and entertaining movie night.
June 13th Naked in Ashes
Among India’s one billion people exists a passionate and (to Western minds) eccentric community of 13 million Yogis. They live in a world never seen on film.
This groundbreaking documentary offers an unprecedented look at the Eastern Yogi. These mystics leave everything material to embark on a spiritual quest. With honest simplicity, NAKED IN ASHES provides a magical glimpse into the world of these sages who find deep meaning in a life of chastity, austerity, and utter devotion. From covering themselves for warmth in the ashes of the dead to pulling automobiles by their genitalia, the film’s candid portraits of ascetics whose only quest is to experience the Divine captivate both the eye and mind.
June 13, 2014
8-9:30pm
Please bring a food bank donation to support our community. | http://www.ottawayoga.com/workshops/movie-night/ |
Organisms require a source of major and trace elements. Heterotrophs use an organic carbon source, and autotrophs use CO2. Bacteria lacking ability to synthesize certain small molecules require these for growth. Phototrophs harvest the energy of sunlight, and chemotrophs obtain energy by degrading chemicals.
■ List the major elements other than carbon required for growth of bacteria.
■ What is the carbon source of a photoautotroph? Of a chemoautotroph?
■ Why would human-made materials (such as plastics) be degraded only slowly or not at all?
You Are What You Eat
Nutrition is a subject that people spend entire careers learning about and that requires volumes of books to explain. My objective is to show you how to eat a healthy, nutritious diet that helps your body burn fat instead of storing it. You do not need overwhelming science to get this. | https://www.alpfmedical.info/causative-agent/microcheck-44.html
CALL FOR PAPERS: Social Media and Online Behavior: Language and Culture Considerations and Challenges for the Intelligence Community
The Foreign Language Program Office of the Office of the Director of National Intelligence (ODNI) and the Culture and Regional Knowledge Expert Group (CRKEG) are pleased to announce a symposium on Social Media and Online Behavior to be held Monday June 8th through Friday June 12th at the Center for Advanced Study of Language. The 2015 CRKEG Symposium Planning Committee invites submission of proposals on the topics relevant to the theme “Social Media and Online Behavior: Language and Culture Considerations and Challenges for the Intelligence Community.” The growth of social media and online networking has provided a new and constantly changing landscape for the collection and analysis of intelligence. This symposium will focus on the linguistic, cultural, and regional knowledge necessary to effectively analyze participants and their activities in online communities and networks. The symposium is open to members of the U.S. Intelligence Community. Funding for any associated travel expenses will be the responsibility of the attendees’ organizations.
CALL FOR PAPERS
Social Media and Online Behavior: Language and Culture Considerations and Challenges for the Intelligence Community - CRKEG Symposium
- Date: Monday 8 June through Friday 12 June 2015
- Location: Center for Advanced Study of Language College Park, MD
Proposal Submission Guidelines
Effective proposals for this symposium will address the building and sustaining of intelligence capabilities for readiness and effectiveness in this changing landscape, while considering the role of language, region, and culture (socio-cultural concepts and processes as well as knowledge of particular cultural groups) knowledge expertise, skills, and tools (LRC) in support of the mission. The CRKEG welcomes proposals that highlight either best practices or challenges in bringing LRC knowledge and skills to bear on this area and also associated technology, policy, training and resource issues that impact the community’s ability to leverage LRC capabilities effectively.
Topics: The following topics are provided to stimulate ideas for potential presenters, and are not meant to be exclusive:
- Trends in online behavior and social interaction within and across culture groups as it affects security concerns
- The varying capacities for analysis by language enabled analysts vs. language professionals
- The use and implications of tactical (rather than comprehensive) language capability and its effects on analysis
- The role of language and culture experts in the fast-changing linguacultural environment of social media and its effects on training, assessment, and intelligence analysis
- The differences or similarities in the capabilities (knowledge, skills and tools) needed for different types of analysis/use (e.g. strategic v. anticipatory)
- The impact of emerging social norms and cultural practices online, and implications on intelligence gathering and analysis of online and “bricks and mortar” communities
- The capabilities of and limitations of technology in addressing and supporting cultural, regional, and linguistic needs in this area
- The challenges to and status of efforts to develop and implement IC strategies and guidance related to language, culture and regional knowledge in a complex and changing environment and in light of diminishing resources and diverse agency needs
- The implications and impact of emerging social media practices on requirements, policy, and strategy, in relation to workforce capabilities in language and CRK
Session Length: We encourage a variety of interactive approaches including panels, interviews, keynote speakers, and individual presentations. Please note:
- Long Presentation: 45 min presentation +15 min for Q&A
- Short Presentation: 20 min presentation + 10 min for Q&A
- Panel Session (3-4 speakers): approx. 10 – 15 min presentations with Q&A time = 60 min total
Required Information:
- TITLE: The topic should be relevant to the conference theme.
- SESSION LENGTH/TYPE: One of the three noted above.
- CONTENT DESCRIPTION: The description must clearly align with the content of the proposal and be in final, publishable format, free of any errors.
- METHODS: This section is limited to a maximum of 150 words and should indicate how information will be shared.
- OUTLINE: Required only for panel submissions. This section should include an outline of the proposed panel, including a schedule for each topic addressed or activity performed.
- PRESENTERS: Name, telephone, email, and organizational affiliation of participants proposed, with a primary POC for the session clearly identified.
- CLASSIFICATION: Classification level of the presentation.
Submission instructions:
Proposals must be submitted in English and in final publishable format. All proposals must be submitted by COB on April 7, 2015. Unclassified submissions should be emailed to Eric Ciliberti ([email protected]), and classified submissions should be emailed to Eric Ciliberti ([email protected]) by the deadline. Confirmation of receipt will be emailed following submission. Presenters will be notified by email of the Review Committee’s decision by May 4, 2015. The submission should be print-ready in final publishable format. Use all punctuation, underlining, italics, etc. exactly as you want it to be printed in the program guide. All proposals must be written in third person format avoiding the use of "I" and "we."
Please direct questions concerning this call for proposals to Wes Lippman at [email protected] or Evelyn Bailey at [email protected]. | http://cultureready.org/blog/call-papers-social-media-and-online-behavior-language-and-culture-considerations-and |
Editor’s note: This article is part of a series on tax reform. Read another perspective here.
Unless California is trying to break the Guinness World Record for tax evasion, it’s hard to see what will be accomplished by the plan put forward by Senate Leader Kevin de León to get around the new federal tax law’s limitation on the deduction for state and local taxes.
“This is legal,” de León insisted, and he could be right, right now. But there is plenty of evidence to suggest that he won’t be right for very long.
The plan, now introduced in the state Senate as Senate Bill 227, would create something called the California Excellence Fund within the state’s General Fund, and if taxpayers chose to make donations to that special fund, they would receive a credit against their state income tax liability equal to the amount of their donation.
The state would end up with the same amount of revenue, but the taxpayer would be better off because donations to the California Excellence Fund would be considered charitable contributions, deductible on federal tax returns. This would get around the new $10,000 limit on the deduction for state taxes.
De León’s contention that this is legal is based on an Internal Revenue Service Chief Counsel Advice Memo from February, 2011. CCA 201105010 states that the payment of cash to a state agency that creates a tax benefit is “not regarded as a return benefit that negates charitable intent.” So it’s an allowable charitable deduction.
However, the CCA memo also warns, “there may be unusual circumstances in which it would be appropriate to recharacterize a payment of cash or property that was, in form, a charitable contribution as, in substance, a satisfaction of tax liability.” In other words, the IRS reserves the right to disallow the deduction in “unusual circumstances.” Possibly one such circumstance would be the entire state of California trying to evade federal taxes at the same time.
Even if the IRS went along with it, Congress could easily change the law and limit or end the deductibility of donations to special funds with a state tax-credit connection.
Here’s where the intrigue begins.
Seventeen states currently offer state tax credits for donations to nonprofit organizations that provide vouchers for private schools. This gets around any prohibition on using public funds for private schools, especially religious private schools.
One of two things will likely happen as a result of de León’s proposal, should it become law: California will get away with it, because Congress doesn’t want to undermine the tax-credit scholarship funds in other states, or Congress will ban the tax-credit deduction entirely and California will have ruined it for everybody.
It’s always more interesting than it first appears, isn’t it?
The Institute on Taxation and Economic Policy estimated that the state tax credits for the voucher programs total $1 billion per year. In some cases, taxpayers can collect more in tax benefits than they donated.
Last summer, the Trump administration was reported to be considering expanding the tax-credit scholarship idea into a nationwide program. “It’s certainly part of our discussion,” Education Secretary Betsy DeVos told the Associated Press in August.
Of course, the public education establishment opposes taxpayer-funded vouchers for private schools, but there wasn’t much they could do about it. The courts upheld the tax-credit arrangement as legal.
It’s an understatement to say the public education lobby is very powerful in Sacramento. Are they cheering on de León’s proposal, waiting in the wings with their legal teams to argue that if California’s tax-credit-charitable-deduction plan is disallowed, all those tax-credit scholarship funds for private schools have to go, too?
Maybe Congress can thread the needle in a way that preserves the state tax-credit charitable deduction for school vouchers while banning California’s scheme to push all state tax payments through the same loophole. But before that happens, the IRS will probably send out a new CCA memo and start disallowing the deduction for “donations” of state tax payments in California.
The whole mess is likely headed for court, giving California politicians many months or years to posture and rant about “resistance.” Meanwhile, beleaguered taxpayers continue to suffer under the highest state taxes in the country, and all state lawmakers are doing about it is giving them sketchy tax advice.
Susan Shelley is an editorial writer and columnist for the Southern California News Group. Reach her at [email protected] and follow her on Twitter: @Susan_Shelley. | https://www.sgvtribune.com/2018/01/13/de-leons-tax-dodge-may-be-legal-but-it-might-not-be-for-long/ |
Taste of the Market: A Sampling of the Future for the Public Market
I don't usually like to write up an event after it's happened. Really, what's the point since you can't attend after the fact? But last weekend's Taste of the Market at the San Diego Public Market in Barrio Logan is the first of a number of community events coming up so if you live in San Diego and are rooting for the future of a public market here, you need to see what you may have missed so you won't make that mistake again.
On Sunday afternoon, more than 230 people ate and drank their way through the two warehouse halls that on Wednesdays and Sundays are the farmers market. But on this late afternoon, the warehouses were transformed into whimsical spaces filled with imaginative eateries created by restaurants and chefs who had been paired with local farmers and purveyors. Andrew Spurgin and Melissa Mayer, who co-chaired and coordinated Taste of the Market, each had their own little hot spots. Spurgin's Broussard's Po' Boy cafe was straight out of New Orleans and featured, what else, a fried oyster po' boy.
Mayer, on the other hand, went south of the border with a vibrant blue cantina, serving a smoked chicken torta.
The list of restaurants and chefs just goes on and on. Plus, there were vendors you'd usually see at the farmers market--Suzie's Farm, Nicolau Farm (chèvre), Cardamom Bakery, Jennywenny Cakes, Viva Pops, and SuperNatural Sandwiches among them. Below is a snapshot of a perfect culinary festival:
|Dave Rudie, Tommy Gomes, and Ken Gardon of Catalina Offshore Products|
|Ken and his octopus salad|
|Chef Chad White of La Plancha Baja Med|
|Chad White's apple brûlée, fermented Julian Hard Cider vincotto, Gilbert Quintos Farm kale, lardo, Smit Orchards apples, sesame nori gremolata|
|Chef Craig Jimenez of Roseville Cozinha and colleagues. He made a luscious goat cheese ravioli (sourced from Nicolau Farms)|
|Snake Oil Cocktail's Michael Esposito, Andrew Spurgin, and Public Market co-founder Dale Steele|
|Jenny Williams of Jenny Wenny Cakes|
|Joanne Squires-Sherif of Cardamom Bakery & Cafe|
|Katie Grebow of Cafe Chloe|
|Gina Frieze of Venissimo topping off a slice of Bread & Cie baguette loaded with Gina's house-made ricotta|
|Salumi, cheese, and bread: a collaboration that is the goal for Pete Balistreri, Venissimo, and Bread & Cie at the Public Market|
|Pete Balistreri of Tender Greens and P Balistreri Salumi|
|Alchemy's Ricardo Heredia and his pig from Da-Le Ranch|
|The Blind Burro's Sara Polczynski making these beautiful veggie tacos below|
|Tony Nguyen and SuperNatural Sandwiches' variation on their Harpy sandwich|
|George's at the Cove's Trey Foshee and (below) his slow-roasted Maciel carrot salad|
|The Red Door and The Wellington's Trish Watlington and Chef Miguel Valdez with their seared local yellowtail over pumpkin uni bisque|
|Hurray for Viva Pops!|
The next Taste of the Market will take place on March 24th and, says White, "will be a celebration of our ethnic diversity, with chefs showing off Japanese, Filipino, and Chinese specialties, German sausage, tastes from Tijuana, and more, teamed with local farmers for ingredients, of course."
Between now and then they'll be hosting a movie night with local chefs' takes on snack food. As White says, "Think popcorn, hot dogs, and Junior Mints kicked way up." The tentative date is March 8.
All this, of course, is hand-in-hand in anticipation of the market going full time in spring or summer, with permanent stalls and artisan cheese making, salumi curing, a tortilleria, coffee roasting, and baking on premises. Indeed, the proceeds from ticket sales at Taste of the Market are going toward the development of the Market Kitchen, a commissary kitchen for vendors and micro-businesses that will also feature community education classes on cooking and nutrition.
So, really, pencil in the next dates on your calendar. This was just the first of many special public events at our Public Market. You'll want to be there. | http://www.sandiegofoodstuff.com/2013/02/taste-of-market-sampling-of-future-for.html |
Fully autonomous operation of helicopters and multi-rotor UAVs, including autonomous takeoff and landing, requires the implementation of a variety of sensory, communication, and processing capabilities. One way UAVs can take off and land autonomously is to start and finish missions at a known location or landing surface, utilizing fixed markers for visual or sensory orientation.
In particular, autonomous takeoff and landing require a precise estimate of the UAV pose (i.e., its three-dimensional position and orientation in space) relative to a landing marker, which typically cannot be obtained from satellite-based navigation systems or other on-board sensors at the precision and frame rate required by flight control systems.
Visual sensors can be used successfully during the landing process, since they are able to provide the pose with an accuracy typically greater than GPS, sufficient to complete the autonomous landing task. However, visual sensors produce a considerable amount of information that must be processed, and their data have two main drawbacks: first, the computation time required to analyze and extract information from each frame limits the rate at which the sensor can provide information; second, the computation time typically depends on the complexity of the image (frame) and on the number of operations that have to be performed. Therefore, providing high-frequency pose estimation becomes mandatory for precise localization and control, especially during takeoff and landing, and a need exists for improved systems and methods to achieve such performance.
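To make the marker-relative pose estimation concrete, here is a minimal sketch using OpenCV's ArUco fiducial markers (assuming the pre-4.7 opencv-contrib API; the camera intrinsics, marker dictionary, and 0.5 m marker size below are placeholder assumptions, not values from this disclosure):

```python
import cv2
import numpy as np

# Placeholder intrinsics -- in practice these come from camera calibration.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)          # assume negligible lens distortion
MARKER_SIZE_M = 0.5                # physical side length of the landing marker

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_50)

def landing_marker_pose(frame):
    """Return (rvec, tvec) of the landing marker, or None if it is not visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    return rvecs[0], tvecs[0]      # pose of the first detected marker
```

The rotation and translation vectors give the camera's pose relative to the marker; in practice a flight controller would fuse this accurate but low-rate measurement with high-rate IMU data, which is exactly the gap the text describes.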
| |
The objective of this assignment is to assess your group’s ability to evaluate and compare the operation and performance of two companies that compete with each other (peer group analysis), using fundamental analysis techniques (financial ratios).
Please note: each group can only choose 2 companies, and once those two companies are chosen, the other group cannot choose the same companies (in other words, the later group has to choose another two companies that haven’t been chosen by others).
The rule is first come, first served: let your lecturer know as soon as possible which companies your group wants to discuss, and your lecturer will check whether your chosen companies are still available.
Each company will be evaluated and compared on the five aspects to determine which company is fundamentally stronger.
You’re an investment manager at a large corporation based in Australia. A big institutional investor from overseas is interested in investing in the Australian market, but only in blue-chip companies. You’ve been asked to choose two large companies operating in the same industry in Australia (the industry you think has the most promising future for investment), then to evaluate and compare them, and finally to make a recommendation through your Report to the investor as to which one is fundamentally superior.
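As a hint of the kind of peer comparison the Report should contain, a few core ratios can be computed straight from annual-report figures. A minimal sketch with invented numbers (all figures below are hypothetical placeholders; your two chosen companies' financial statements supply the real inputs):

```python
# All figures are hypothetical, in millions of AUD.
companies = {
    "Company A": dict(net_income=920, equity=6400, current_assets=3100,
                      current_liabilities=2400, total_debt=5200),
    "Company B": dict(net_income=610, equity=5100, current_assets=2800,
                      current_liabilities=1900, total_debt=3300),
}

for name, f in companies.items():
    roe = f["net_income"] / f["equity"]                       # profitability
    current = f["current_assets"] / f["current_liabilities"]  # liquidity
    d_to_e = f["total_debt"] / f["equity"]                    # leverage
    print(f"{name}: ROE={roe:.1%}  current ratio={current:.2f}  D/E={d_to_e:.2f}")
```

The same pattern extends to efficiency and market-value ratios; as the marking criteria below stress, the interpretation of the numbers matters more than the arithmetic.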
1. The quality of your academic writing skills will also greatly influence your mark; therefore, you must follow all the rules.
2. A body with no in-text citations will not be considered academic writing.
3. A missing reference list will likewise not be considered academic writing.
6. You’re required to submit the assignment on Blackboard with the maximum similarity (plagiarism check) not exceeding 30%.
10. Always keep an electronic copy of your assignment until you have received the final grade for your subject or unit.
The quality of your analysis is more important than the quantity of the ratios. Meaning, your assignment mark will be greatly influenced by your interpretation and understanding of the numbers rather than the calculation itself. | https://www.australiabesttutors.com/recent_question/39646/holmes-institutefaculty-of-higher-educationhc2091-business |
Q:
A set of all numbers that can be written as $1\pm2\pm3\pm...\pm n$ if we can replace $\pm$ by $+$ or $-$.
We have a set $M_n$ of all numbers that can be written as $1\pm2\pm3\pm\cdots\pm n$.
How many numbers will the set contain if each $\pm$ can be replaced by $+$ or $-$?
For example, if $n$ is $3$ then we have $(1+2+3)=6$, $(1+2-3)=0$, $(1-2+3)=2$ and $(1-2-3)=-4$, so with $n=3$, $M_3$ contains $-4, 0, 2, 6$.
In other words, if $M_n$ is the set of all numbers of the form $1\pm 2\cdots\pm n$, where each sign can be independently chosen to be either $+$ or $-$, then what is the size of $M_n$? By the above, $|M_3| = 4$.
Thanks for your answers.
A:
For any integer $p \le q$, let
$I(p,q) = \{\; k \in \mathbb{Z} : p \le k \le q\;\}$ be the set of integers between $p$ and $q$.
$S(p,q) = \{\; \sum_{k\in A} k : A \subset I(p,q)\;\}$ be the set of subset sums over $I(p,q)$.
For any integer $n \ge 2$, let $N_n = \sum\limits_{k\in I(2,n)} k = \sum\limits_{k=2}^n k = \frac{n(n+1)}{2} - 1$.
For any $t = 1 \pm 2 \pm 3 \pm \cdots \pm n \in M_n$, let $A \subset I(2,n)$ be the
collection of integers $k$ whose sign in the expansion of $t$ is positive. We have
$$t = 1 + \sum_{k \in A} k - \sum_{k \in I(2,n) \setminus A} k
= 1 - N_n + 2\sum_{k \in A} k$$
This establishes a one-to-one correspondence between $M_n$ and $S(2,n)$. As a result,
$$|M_n| = |S(2,n)|$$
It is easy to see $S(2,n) \subset I(0,N_n)$ and $0,2,N_n \in S(2,n)$ but $1,N_n - 1 \not\in S(2,n)$.
Now assume $n \ge 3$. For any $p \in S(2,n)$ such that $2 \le p < N_n - 2$, we can find a non-empty proper $A \subset I(2,n)$ such that $p = \sum\limits_{k \in A} k$. There are two possibilities:
$A$ doesn't have the form $I(q,n)$
In this case, $A$ contains an element $r$ such that $r+1 \in I(2,n)\setminus A$. Let $A' = ( A \setminus \{ r \} ) \cup \{ r + 1 \}$, we have:
$A' \subset I(2,n)$ and $\sum\limits_{k \in A'} k = p + 1$. This implies $p + 1 \in S(2,n)$.
$A$ does have the form $I(q,n)$
In this case, $q > 3$ because $p < N_n - 2$.
Let $A'' = ( A \setminus \{ q \} ) \cup \{ 2, q-1 \}$. Once again, we have
$A'' \subset I(2,n)$ and $\sum\limits_{k \in A''} k = p + 1$. This implies $p + 1 \in S(2,n)$ again.
Combine these and notice $2 \in S(2,n)$, we find
$S(2,n) = I(0,N_n) \setminus \{ 1, N_n - 1 \}$.
As a result, for $n \ge 3$, we have:
$$|M_n| = |S(2,n)| = |I(0,N_n)| - 2 = N_n - 1 = \frac{n(n+1)}{2} - 2$$
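A quick brute-force check of the closed form (a minimal sketch; the tested range of $n$ is arbitrary):

```python
from itertools import product

def M(n):
    """All values of 1 +- 2 +- ... +- n over every choice of signs."""
    return {1 + sum(s * k for s, k in zip(signs, range(2, n + 1)))
            for signs in product((1, -1), repeat=n - 1)}

assert sorted(M(3)) == [-4, 0, 2, 6]
for n in range(3, 13):
    assert len(M(n)) == n * (n + 1) // 2 - 2   # matches |M_n| = n(n+1)/2 - 2
```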