Iowa has one of the lowest unemployment rates in the nation. The tight job market is creating challenges in filling open positions. Employers are seeking skills that many job seekers don’t have.
55% of all jobs in central Iowa require training or education beyond a high school diploma but less than an associate degree.
32% of central Iowa’s workforce has these skills.
Below are specific challenges central Iowa’s workforce faces—issues that Central Iowa Works is focused on addressing.
Some students struggle to find meaningful career pathways and drop out before earning a high school diploma. 35,000 central Iowa adults do not have a high school diploma, and one in four of them live at or below the federal poverty level. (U.S. Census).
Job seekers may need additional knowledge and training to gain the skills employers are looking for, or they may need support connecting with the right jobs that match their skills and learning how to present themselves during the hiring process.
Hiring practices can keep strong candidates from applying for jobs, or employers may overlook certain job seekers—especially minorities, those with disabilities, and those with a criminal record.
Specific industries in central Iowa especially face challenges in attracting talent that can serve our community’s needs now and in the future.
Central Iowa has a larger share of people employed in the retail, service, and hospitality industries than the state and nation, but employers struggle to fill entry-level, service-heavy jobs. In particular, employers say it is challenging to find employees who have soft skills and professionalism, while employees struggle with constantly changing schedules and with limited child care and transportation options outside of normal work hours.
40 percent of Iowa’s population growth since 2010 has come from immigration (The Gazette, 2018). The top challenges New Iowans face are a lack of English skills, cultural differences, navigating health care, and finding transportation and child care (USCRI and Catholic Charities). Many refugees and immigrants have education and skills that are not recognized in the United States, or they may struggle to find opportunities. Yet they possess many strengths, including the ability to speak multiple languages and to share rich experiences.
Each year, 5,000 citizens return to Iowa after serving time in state prisons. One year after release, 60 percent of people convicted of a crime are not employed (National Institute of Justice).
Many ex-offenders talk about being offered a well-paying, full-time job after going through the hiring process and then having that offer taken away once a background check is complete. Or, hiring managers ask upfront whether the applicant has been convicted of a crime. As a result, returning citizens are often stuck in a series of part-time or minimum-wage positions or fall back into criminal habits to earn enough money to survive.
The cost of not having a job and not being able to re-establish themselves in our community is great. Most people released from prison are re-arrested within three years, and 70 percent of children with an incarcerated parent will follow in their parent’s footsteps.
11.8 percent of central Iowans have a disability. People with disabilities may be kept out of the hiring process because of requirements that could easily be accommodated, even though many individuals with disabilities bring unique strengths to positions.
14.8 percent of Polk County’s African American population is unemployed, compared with 3.9 percent of the total population. The median household income for Polk County’s population as a whole is nearly double that of African American households. One workforce challenge is that central Iowa has a higher rate of incarceration among African Americans than the rest of the nation, which creates a huge barrier to employment.
When jobs are left unfilled, employers lose productivity and the ability to grow and serve customers. Our economy struggles to grow at the rate it has in the past several years, especially in a job market that is reaching full employment.
Many potential workers live in poverty and are piecing together jobs to get by. Use of food pantries has increased over the past year as central Iowans have struggled to cover all their basic needs, including housing, child care, transportation, and food.
One in three central Iowans do not earn enough to pay for basic needs and save.
Learn how we are uniting our community to tackle these challenges in multiple ways.
Thank you for your interest in Central Iowa Works. You can support us by clicking the button below and donating through United Way of Central Iowa.
Central Iowa Works is an initiative of United Way of Central Iowa, leading the fight for the health, education, and financial stability of all central Iowans.
© 2018 UNITED WAY OF CENTRAL IOWA. ALL RIGHTS RESERVED. | https://www.unitedwaydm.org/ciw-workforce-challenges |
biblical meaning of octagon
I believe Sir Isaac (Yizchak?) Newton chose to divide the visible spectrum into 7 colors because he believed it was a mystical number. Maybe its power only belongs to God?
Abdon (in the East) was a Judge of Israel who served 8 years (Judges 12:13 - 14). Boys were to be circumcised on the 8th day.
Although one octave sounds complete and whole, there are 88 keys on the piano and the chess board is 8 times 8 squares; most of the biggest and most important skyscrapers in Asia are 88 stories; some say eight is infinity; the month eight represents "Leo," one of the most respected figures; and much more. If you were born on August 8 in the year 1988, your destiny is beyond your quality of imagination, believe it.
Since the meaning of four is derived from God's creation of everything, 8 (4 + 4) pictures the new creation after the flood. The C major scale's seven notes go from C to B: C-D-E-F-G-A-B. Those in Christ are becoming a new creation, with godly character being created by the power of God's Spirit (2 Corinthians 5:17, Ephesians 2:10; 4:23 - 24). His resurrection occurred, exactly as he stated, three days and three nights after he was buried, which was at the end of the weekly Sabbath day that fell on Nisan 17 (seventeen symbolizes victory).
Human Examples: octopus (8 tentacles), octagon (geometric shape with 8 sides), octave (musical scale comprised of 8 notes with equal intervals), October (originally the 8th month on the Roman calendar). During these weeks we work on perfecting our seven emotions (love, fear, compassion, ambition, humility, bonding and receptiveness)—one emotion per week.
His final meeting was on the Mount of Olives, where he gave his followers instructions before ascending to heaven (Acts 1).
Pretty close.
In order to make use of the Power Octagon, we must expand the basic magick circle used in Witchcraft to contain not just the four directions and the zenith and nadir in the center, but to also incorporate the four points in between the cardinal directions, and determine a midpoint between the zenith and nadir, which is called the meso-point. But what is the Biblical meaning of the number 33? How Can Humans Claim to Know of "Other Worlds"? You can also increase the number of nodes in a magick circle by expanding the number of points used in the outer circle, increasing the number from four to eight, from twelve to sixteen. b) Between the holidays of Passover and Shavuot we count seven weeks. An octagon is two interlocking squares drawn within a magick circle, thus it doubles the power of a magickal squaring of the circle. Besides the miracle of the candles' oil lasting 8 days, King Antiochus decreed that no Brit Milah could occur under Hellenist rule. So there is the importance of the number 8 here for Chanukah.
Eight, on the other hand, is symbolic of an entity that is one step above the natural order, higher than nature and its limitations. This article is worthless. January and February hadn’t been added to the calendar yet! a) G‑d created the world and its natural order in seven days.
It's so comprehensive and flexible, and you managed to concisely convey it in a few short paragraphs. Invoking spiral joins cross and middle pentagrams into a pylon, Declare Formula Letter and Keyword (optional), Declare brief summoning of the fifth Archangel (Ratziel). Eight is the number of Jesus, whose name in the Greek adds up to 888. It is therefore only by God's grace and love that man will someday be given a chance for a new beginning, as promised in the Word of God. Instead of six points in the magick circle, there are now eleven. The octagon as shown here began as the symbol of the M.I.T. This article will reveal the origin behind this most mysterious number and why it holds so much special meaning to the occult. Hidden in the patterned …
The Meaning of Number 8. I thought it was divine; it wasn't. Still, we should examine the ritual structure of the power octagon, step by step, so we can minutely examine and figure out how … This blog is used to discuss various issues and topics pertinent to ritual magick and ritual magicians as proposed by Frater Barrabbas Tiresius - author, witch and ritual magick practitioner.
According to the apostle Paul, Christ also was seen by 500 believers at one time (1Corinthians 15:4 - 7).
Declare brief summoning of the Archangel. The number 8 in the Bible represents a new beginning, meaning a new order or creation, and man's true 'born again' event when he is resurrected from the dead into eternal life.
They were Ishmael, Isaac, Zimran, Jokshan, Medan, Midian, Ishbak and Shuah. They drew on reservoirs of faith and courage that are not part of normative human nature. From the perspective of the angels, the biggest gains we can make are in the area of our spiritual lives. The number 8 is generally read as indicating material abundance and career success, but in the context of Angel Numbers it usually means much more than simple material gain. His first appearance alive was to Mary Magdalene (Mark 16:9 - 11).
It is the number of chaos and instability in other circles. He was crucified on Nisan 14 (Wednesday, April 5 in 30 A.D.). The tenth month by our Gregorian calendar, October shares a root with octopus and octagon—the Latin octo and Greek okto, meaning “eight.”
The repeated C in the C perfect-8th interval scale completes the intervals of the scale and is called the octave note (the 8th note). The quality of each interval is as follows: C (perfect), D (major), E (major), F (perfect), G (perfect), A (major), B (major), C (perfect).
Where is reincarnation found in G‑d's word? That's why it took 8 days to make new, pure oil; please clarify. You might ask the simple question of why this ritual structure is so useful, and the basic answer is that it generates the prismatic double vortices of the Elemental. My son's birthday is 8/8/94.
An octave is a musical interval consisting of eight notes, not seven as mentioned in the article.
A one-octave scale consists of eight notes. Just wanted to correct your comment: an octave is composed of 7 notes; it is only that the 8th is a repetition of the 1st, only in a higher scale, for example: C, D, E, F, G, A, B, C. As you see, the eighth note in an octave is not a NEW note, rather it is a repetition, so technically and practically speaking an octave (as ironically as it may sound) has 7 real notes. c) The Holy Temple's menorah, which served to illuminate the natural world with the holy glow of spirituality, had seven branches. In kabbalistic teachings, the number seven symbolizes perfection – perfection that is achievable via natural means – while eight symbolizes that which is beyond nature and its (inherently limited) perfection. Number in Scripture: Its Supernatural Design and Spiritual Significance. Some information on the meaning of the number 8 is derived from The Holy Bible in Its Original Order, Second Edition. I'm not Chinese, btw. They therefore merited a miracle higher than nature – a miracle that lasted eight days – and to commemorate this, we light on Chanukah an eight-branched menorah (click here for the full story). However, one should not discount Newton's belief in mystical numbers, as he was familiar with a great deal of esoteric Jewish learning. Like its neighboring months September, November, and December, the …
It has some historical background, as well as an interpretation. Each of the Elementals corresponds to one of the 16 Court Cards of the Tarot. He then showed himself to two disciples traveling to Emmaus (Luke 24). Eight here for Chanukah: the word octave comes from the word eight. A completed person has control over all seven emotions.
The New Testament was penned by only eight men (Matthew, Mark, Luke, John, James, Peter, Jude, Paul). The Octagon Symbol: I am sometimes asked the meaning of the octagon symbol. In its most basic form this system of mathematics and geometric shapes is known as the Sephirot – or more commonly the tree of life.
| http://www.androidnotizie.it/blog/page.php?tag=f187c9-biblical-meaning-of-octagon |
When it comes to getting a clear understanding of what is happening in the world around us, whether it is the latest Odisha news, global news stories, economics, Odisha politics or other important current affairs, people today experience two major problems. The first and most important issue is information overload. The availability of different news sources has led to an overload of information. There are a large number of news sources portraying events from varied perspectives while highlighting different problems and supporting certain political stances at the same time. It is only because of this that people become cynical or close themselves off from getting any kind of information altogether.
The Second Problem
The second issue is reinforcement because of niche media and social curation. Every individual has his or her very own favorite site that sees the world as the individual does. On the other hand, there are even people who go through breaking news Odisha recommended to them by friends and other individuals. These are situations when people might find themselves reading only those stories that confirm their worldview. This further means individuals come into contact with only those arguments and facts that support their own political opinion. These are the two main problems, and between them it can get very difficult for individuals to come up with a reliable, clear, accurate and concise picture of the big news stories of the day or the big problems that are being faced not only in Odisha but across the world.
Is News Completely Free of Bias?
The above-mentioned problems require due consideration, especially by individuals who want to form unbiased and accurate views of what is happening in the world around them and in Odisha news. As a common man, it is important for you to analyze or recognize which of the two problems you are suffering from and what steps you need to take in order to get rid of the problem. It is also important for you to keep in mind that no single source of news can be 100% unbiased. There will always be some kind of bias creeping into the scene even if it is not done deliberately. This might simply be due to limited space. It is not always possible for an author to include all the salient facts in a news article. At the same time, it is also not possible for every editor to publish all the news stories available. Therefore, the choice of stories and facts leads to the introduction of some kind of bias. It is also possible that news stories will in some way or the other contain the political opinions of their writers.
Conclusion
Considering the fact that news can never be completely free of bias, it makes sense to avoid news sources that are explicitly biased. You must stay away from the articles and stories of sources that are known for supporting certain platforms. If you want to be served well, then you must try to find television programs, newspapers and news websites that at least try to be unbiased.
| https://www.123articleonline.com/articles/1127701/where-to-get-hold-of-unbiased-breaking-news-odisha |
Within this context, the importance of institutional donors such as the Shubert Foundation has heightened considerably. The Shubert Foundation has increased its annual grantmaking from $22.5 million toward two hundred theater and dance organizations in 2014, to $26.8 million and more than five hundred grantees in 2017. Unlike typical funders of the performing arts, which tend to award multimillion dollar gifts toward a narrow range of high-profile projects, the Shubert Foundation provides unrestricted funding to a wide breadth of organizations. The Shubert Foundation now awards more than half the funding provided by the National Endowment for the Arts, which stood at $47 million in 2016. Along with the Doris Duke Charitable Foundation and New England Foundation for the Arts, Shubert and other institutional donors have emerged as a substantial source of reliable funding.
Read more at Inside Philanthropy and The Washington Post. | https://www.sherryconsulting.com/insights/2017/7/10/as-public-funding-for-arts-shrinks-institutional-donors-bolster-giving |
Since 1968, Graz and Styria, Austria, have hosted the multidisciplinary Steirischer Herbst festival of modern art. It was initially focused on the postwar avant-garde, but it keeps evolving with every new director. Under the direction of Ekaterina Degot (since 2018), the festival has been focusing on new creations in a variety of media—performative as well as installative, cinematographic as well as discursive—shown mostly as site-specific works in unconventional settings and in public space. Through compelling narratives that have an impact both locally and globally, the festival analyzes the context of Central and Eastern Europe as well as the eroding line between the West and non-West. It is relevant to local efforts and speaks to audiences outside the confines of the art industry. The present version of Steirischer Herbst challenges both the performing arts festival framework and the biennial framework. Its primary goal is to critically assess the festival's avant-garde past while developing forms, contexts, and languages appropriate for the decentralized world of the twenty-first century.
Steirischer Herbst is looking for a curator in the field of visual arts and performance to strengthen its existing team from 2023 onward.
Your areas of responsibility:
Programming
–researching and monitoring art scenes internationally, nationally, and locally, developing networks
–researching histories and contexts of Graz, Styria, Austria, and Central and Eastern Europe in relation to the whole world
–proposing artists’ projects and program elements to the chief curator, developing them further if they are confirmed
–supporting the chief curator in all curatorial matters
–contributing, within a team of curators, to the festival’s yearly scenario, structure, and themes
Realization
–location scouting for site-specific projects, researching history and basic feasibility
–budgeting, organizing, managing, and realizing/installing confirmed projects together with artists and the festival’s production team
–proactive approach to funding and contributing content to funding applications (researching and writing)
Contextualization
–developing the public/discursive program, participating in the educational program and audience development
–writing about art projects
–developing documentation strategies together with the communications department
Local outreach
–developing connections to local institutions and the art scene in Graz and Styria and collaborating with them, showing a strong local presence
–selecting projects for the festival’s parallel program as part of a jury, monitoring and responding to artists’ proposals
Your profile
–substantial experience in curating and organizing art projects in various media, installative as well as performative
–a substantial network of international artists
–an art network within Graz, Styria, and Austria (alternatively, a willingness to develop one)
–an in-depth knowledge of contemporary art and its discourses
–an anthropological interest in the development of society as translated into artistic forms
–an ability to think outside the box
–a high sensibility to art and the forms it is presented in
–fluency in German and English (the two working languages of the festival)
–good organizational skills, clear and quick thinking, reliability in observing deadlines, and flexibility in uncertain and constantly changing situations
Steirischer Herbst values a diverse and socially inclusive workplace and encourages all applicants to apply, regardless of their race, ethnicity, gender, age, religion, political views, physical or mental limitations, or sexual orientation.
General information
Place of employment: steirischer herbst festival gmbh, Sackstraße 17, 8010 Graz, Austria
The curator needs to be ready to relocate to Graz, if not already based there.
Start of employment: January 1, 2023
Duration: unlimited
Working hours: 40 hours per week
Application
Please submit your application to Rita Puffer, [email protected], by August 15, 2022.
Your application (all documents included in one PDF file) should consist of:
–a cover letter (maximum 1 A4 page) including a few names of artists with whom you would like to work immediately (existing contacts) and in the future (wish list), as well as themes and approaches that interest you;
–a CV with a summary of your professional experience as a curator. | https://dailyart.news/visual-arts/news/steirischer-herbst-seeks-curator-of-visual-arts-performance/ |
The General Data Protection Regulation (GDPR) comes into effect on the 25th of May 2018 across the European Union (EU). This law was designed to strengthen the protection of virtually all types of online data across all platforms, giving EU residents greater control over what businesses and organisations can and cannot do with their information. In Part I, we discussed the details of the GDPR and how this could change the general online landscape. In Part II, we examined how stronger data protection laws will affect social media. In Part III, we will analyse how the GDPR will impact the Australian health, medical and scientific community.
What are its implications on the Australian health and scientific community?
The health, medical and scientific community is a global, intensely interconnected and collaborative space. There is no doubt that the GDPR laws will reverberate across the world, impacting the way scientific research is conducted and how organisations and businesses manage their stakeholders and their identities online.
Research data and collaboration
Research holds a unique position within the legal framework of the GDPR. Institutes and organisations that use personal data and medical records as part of their research may be exempt from certain restrictions, on the proviso that appropriate measures to protect data and to minimise the amount of data that is recorded and processed are put into place. Overall, the burden of data processing and protection will increase under the GDPR.
For the most part, these measures will also extend to the transfer of personal data to countries outside of the EU, meaning organisations receiving this data will also have similar protections in place. While these changes will impact European research institutes, organisations and businesses the most, considering the global and deeply collaborative nature of science, the GDPR will surely affect how all research is conducted, whether it is basic science research, clinical trials or the translation and commercialisation of research.
On the other hand, the GDPR does attempt to foster efficiency with changes to certain areas that can often be hampered by, sometimes unnecessary, bureaucracy. For example, EU citizens may not be able to request the erasure of personal data that has been used in research, while the purpose of research may provide organisations with a legitimate basis to process personal data without an individual’s consent. Similarly, research organisations may be able to use personal data outside the purposes for which they were initially collected. This exemplifies the GDPR’s attempt to balance conservative protection with boundary-pushing innovation.
However, the definition of ‘research’ in the GDPR is diffuse and facilitates a breadth in its scope. While this ensures that research projects are not left behind, it also may leave room for the exploitation of loopholes. Could data mining and analytics endeavours undertaken by organisations, such as the recent controversial actions of private firm Cambridge Analytica, be defined as ‘research?’ This remains an area of concern and in need of further refinement from the lawmakers in the EU.
Managing online communities
The GDPR will also affect how research organisations and businesses manage their global reach with stakeholders online, particularly in regard to social media and newsletters. As discussed in Part II of this blog series, the GDPR will have implications on how user-generated content (UGC) can be repurposed by organisations and businesses. This necessitates greater caution when it comes to re-posting content created by staff or students on official social media channels, particularly photos or images, whether of themselves in the research environment or of exciting results from an experiment. While it is highly unlikely that scientists will post fully annotated results and figures on social media platforms, the GDPR will align with current copyright law in publishing, whereby any reproduction of graphs or images can only be done with the permission of a publication's authors.
Furthermore, many research institutes keep interested stakeholders informed with newsletters sent via email. With a shift towards an opt-in model, as mentioned in Part I of this blog series, the GDPR may require organisations to revisit email databases to affirm subscriptions. While not a seismic change, it’s the type of painstaking, messy and laborious work that is often neglected or ignored. Also, email address mining of EU residents for newsletter databases will no longer be an acceptable practice, impacting how effectively organisations can build their online audience, with downstream effects on expanding organisation profiles, highlighting research impact, organising conferences and meetings, and even philanthropic missions.
The GDPR and the impact on the innovation, health and scientific community
With big data, bioinformatics and the exponential increase in computer processing capabilities emerging as major players in many fields of scientific research, data privacy and protection have become hot topics of discussion. Not only will the GDPR change how research organisations and related businesses, both inside and outside the EU, work on an operational level, it will also modulate digital communication strategies and protocols. The best way to prepare for these changes is to be informed and to begin complying with this new law before the changes come into effect on the 25th of May 2018. | https://thesocialscience.com.au/blog/will-gdpr-break-internet-part-iii/ |
Japan, South Korea, industry oppose EU plan to cut shipping emissions
BRUSSELS, Nov 27 (Reuters) - Japan, South Korea and a fleet of international shipping groups have warned the European Union against its plan to add greenhouse gas emissions from the maritime sector to Europe's carbon market.
As the 27-country EU seeks to steer its economy towards "net zero" emissions by 2050, the executive European Commission wants to expand its carbon market to shipping.
Currently, the policy requires power plants, factories and airlines running European flights to buy pollution permits to cover their emissions.
The proposal, formally due by next summer, has already run into opposition.
"The application of EU-ETS to international shipping will have adverse repercussion on both environmental integrity and sustainability of global maritime transport and trade," the South Korean government said in its response to an EU consultation on the policy, which closed on Thursday.
"Extension of EU ETS to international shipping is not the suggested way forward, whether the scope is limited to intra-EU shipping only or not," Japan's government said in public documents submitted to the European Commission.
The countries warned that adding shipping to Europe's carbon market could stoke trade tensions, and cause extra emissions by prompting ships to take longer routes to avoid stops in Europe.
The International Maritime Organisation is developing global measures to deliver its pledge to halve shipping greenhouse gas emissions by 2050. It says the EU plan undermines these efforts. Critics say the IMO measures are not ambitious enough and additional action is needed.
"I do not see binding measures to reduce greenhouse gas emissions at IMO level any time soon," said Jutta Paulus, a Green lawmaker in European Parliament. Parliament approved in September a proposal by Paulus to add shipping to the EU carbon market in 2022.
Industry associations BIMCO and the World Shipping Council also said it is too early to add shipping to a carbon market, citing a lack of commercially viable technologies to cut emissions. (Reporting by Kate Abnett; editing by David Evans) | |
The Orbit behavior moves an object in a circle or ellipse around a point. The object's initial position is used as the point to orbit around.
The speed to orbit at, in degrees per second. Positive is clockwise and negative is anticlockwise.
The rate of change to the orbit speed, in degrees per second per second. Positive will accelerate in a clockwise direction and negative will accelerate in an anticlockwise direction.
The distance of the orbit from its center point, in pixels. For a circular orbit, ensure the primary and secondary radii are the same. For elliptical orbits, the primary radius is the one in the direction of the offset angle.
The perpendicular distance of the orbit from its center point, in pixels. For a circular orbit, ensure the primary and secondary radii are the same. For elliptical orbits, the secondary radius is the one perpendicular to the offset angle.
For elliptical orbits, the rotation of the ellipse in degrees. For circular orbits, this does not affect the orbit path (since rotating a circle has no effect), but it changes the initial angle the orbit starts from.
If enabled, sets the object's angle to match the direction of travel in the orbit. If disabled the behavior only changes the object's position without affecting the angle.
Enable to run a preview of the behavior directly in the Layout View.
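To make the relationship between these properties concrete, here is a minimal standalone sketch (in TypeScript) of the parametric ellipse math they describe. This is not Construct's actual implementation or scripting API; the OrbitParams/OrbitState names and the stepOrbit helper are illustrative assumptions only.

```ts
// Illustrative sketch only; not Construct's internal code.
// Hypothetical names; angles are in degrees to match the properties above.
interface OrbitState {
  angle: number; // current position around the center point, in degrees
  speed: number; // degrees per second (positive reads as clockwise on a y-down screen)
}

interface OrbitParams {
  acceleration: number;    // degrees per second per second
  primaryRadius: number;   // pixels, measured along the offset angle
  secondaryRadius: number; // pixels, perpendicular to the offset angle
  offsetAngle: number;     // degrees; rotates the ellipse and sets the starting direction
}

const toRad = (deg: number): number => (deg * Math.PI) / 180;

// Advance the orbit by dt seconds and return the new state plus the new position.
function stepOrbit(
  center: { x: number; y: number },
  params: OrbitParams,
  state: OrbitState,
  dt: number
): { x: number; y: number; state: OrbitState } {
  const speed = state.speed + params.acceleration * dt; // acceleration changes the speed
  const angle = state.angle + speed * dt;               // speed changes the orbit angle

  // Point on an axis-aligned ellipse defined by the two radii...
  const t = toRad(angle);
  const ex = params.primaryRadius * Math.cos(t);
  const ey = params.secondaryRadius * Math.sin(t);

  // ...then rotated by the offset angle around the center point.
  const o = toRad(params.offsetAngle);
  return {
    x: center.x + ex * Math.cos(o) - ey * Math.sin(o),
    y: center.y + ex * Math.sin(o) + ey * Math.cos(o),
    state: { angle, speed },
  };
}
```

Note that for a circular orbit (equal radii), rotating the ellipse by the offset angle has no visible effect on the path itself, which matches the property description above; the angle-matching property described earlier would additionally point the object along its direction of travel on this path.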
Test if the behavior is currently enabled.
Set another object as the location to orbit around, following the object if it moves. The Unpin action will stop following the object.
Set the corresponding behavior properties. See Orbit properties above.
Set the current orbit position by its angle from the center point in degrees.
Set the center point of the orbit in layout co-ordinates.
Return the corresponding behavior properties. See Orbit properties above.
Return the distance from the object to the center point of the orbit, in pixels.
Return the current position of the orbit as its rotation relative to the center point in degrees.
Return the current center point of the orbit in layout co-ordinates. | https://www.construct.net/en/make-games/manuals/construct-3/behavior-reference/orbit |
A dhaoine uaisle, a cháirde. Tá an-áthas orm agus ar mo bhean chéile Sabina bheith anseo libh inniú.
Ladies and Gentlemen, Sabina and I are delighted to have the opportunity of joining you here today and thank you for that warm and generous welcome.
In the course of our visit we have the opportunity to meet many parts of the diverse Irish community in Liverpool. But it is especially appropriate to be here at St. Michael’s Irish Centre which has been at the heart of supporting the Irish Community in Liverpool for so many years.
The connection between Ireland and Liverpool is a deep and historical one. For centuries past, Liverpool has represented the first glimpse of a new life for generations of Irish migrants leaving their home country in search of a better future. Of course, many migrants sailed on from Liverpool to make their new home in the United States of America, but many more stayed here, put down roots and began to form a growing Irish community in this beautiful city, establishing their own cultural centres and associations.
Despite the difficulties which many of our migrants historically faced in adapting to their new lives here in Liverpool, their sense of community, their determination to remain united and their dedication to supporting those of their friends and neighbours who were in greatest need, never faltered. It is important to remember the profound debt we owe to those generations who came before us and who forged a better life for themselves and their families in Britain; people who carefully preserved and nurtured Irish culture and heritage and developed a strong and vital community during times of great hardship.
We owe an enormous debt of gratitude to people like Tommy Walsh. Tommy Walsh was a leading figure in the Irish community in Liverpool until his death in 2010. He campaigned for an Irish centre in Liverpool and became its first manager when it opened in Mount Pleasant in 1965. He went on to become the first national chairman of the Federation of Irish Societies in 1973. He became the first chairman of Irish Community Care Merseyside in 1989.
It was the spirit and determination of Tommy and his colleagues that led to the establishment of the first Irish Centre in Mount Pleasant in 1965, and then to the development of this Centre and the establishment of Irish Community Care Merseyside. It is important that we do not forget the critical role of centres like St Michael’s Cultural Centre and all they do to ensure that our Irish culture and traditions remain relevant in the lives of Irish communities around the world.
In Ireland we are very proud of our cultural heritage and of our world wide reputation for success in the cultural and artistic arena. We are also proud of how that culture has survived and thrived in many countries across the globe, kept alive by our emigrants and their descendants who understand that culture must be based on what we share as a people, and is a process that should continually be reworked if it is to flourish and prosper in a changing and evolving world. The issue of cultural identity is one that must be faced by all emigrants. Migration will always involve a process of social change, one that will require some level of acculturation.
Finding a balance between the inherited legacy of one’s own culture and the cultures of both the point of arrival and the cultural experience of migration itself – not reducible to either point of origin or destination – is challenging. It is important, however, that the culture, the traditions and the customs that form such an important part of an individual’s legacy are neither rejected and denied nor constructed in such a way as to allow a community to be marginalised to the point of exclusion.
The culture of one’s origins and the new culture at one’s destination can co-exist in a shared space that is neither reducible to one or the other, nor a bland homogenised place where distinct cultural identities become lost. It is important that first, second and third generations of Irish people living in Great Britain and in other countries across the globe, are facilitated in experiencing and understanding the culture that formed or influenced their parents or grandparents or great grandparents so that they are enabled to recognise and understand the complex tapestry that is their identity; that is their heritage.
That is why Centres like St Michael’s are such a crucial part of a truly functioning and multi cultural society. The work of this and other centres contributes in a vital way to local communities and to society in general. Vibrant and diverse Irish organisations, proud of their heritage but open to all, continue to play a hugely important role in towns, cities and villages all over the world; and here in Great Britain, Liverpool, the most Irish of British cities, has led the way.
This is, indeed, recognised by the Irish Government. The fact that it, through its Emigrant Support Programme, has awarded over £1.68 million over the last five years alone to Irish organisations in Liverpool and its surrounding area is an acknowledgement of the importance that the Irish Government and Irish people attach to the vital work of these organisations.
This Centre and all of you here today are part of this work. Your commitment to keeping our culture – our music, song, dance, language and literature – alive, continues to make a real and tangible difference to the quality of life experienced by thousands of Irish emigrants in this country and deserves the gratitude of all Irish people.
Ba mhaith liom críochnú le buíochas a ghlacadh libh as cuireadh a thabhairt dom anseo inniu. Tugadh ardú meanman dom agus an-léargas dom nuair a chonaic mé gach a bhfuil ar siúl agaibh lena chinntiú go maireann cultúir agus traidisiúin na hÉireann mar ghnéithe atá ábhartha i saol an phobail Éireannaigh anseo i Learpholl.
Trí ranganna Gaeilge a chur ar fáil, céilithe a eagrú, deiseanna a chur ar fáil leis an damhsa Gaelach a fhoghlaim nó le teacht ar eolas faoin gceol Gaelach, nó tríd na himeachtaí éagsúla luachmhara a chuireann sibh ar fáil anseo, tá sibh ag cinntiú gur féidir lenár muintir Éireannach, cibé acu é gur rugadh in Éirinn iad nó gur de shliocht Éireannach iad, ceangal tábhachtach a choimeád le hÉirinn agus lena bhféiniúlacht chultúrtha Éireannach.
[I would like to conclude by thanking you for inviting me here today. It has been enlightening and uplifting to witness all you do to ensure that our Irish cultures and traditions remain relevant in the lives of the Irish community here in Liverpool. By the provision of Irish language classes, the organisation of traditional Céilí, the opportunities to learn Irish dancing or to become familiar with Irish music, and indeed all the other valuable activities you make available here, you continue to ensure that Irish people, whether by birth or descent, can retain an important connection to Ireland and to their Irish cultural identity.]
Finally I would like to commend the work of all the volunteers who give so freely of their time, energy and imagination to ensure the continued success of community and cultural organisations throughout Britain. I thank you, on my own behalf and on behalf of all the Irish people, for the valuable work you do.
Is iontach an obair atá ar siúl agaibh anseo. Go n-éirí go geal libh ‘s go raibh míle maith agaibh go léir. [The work you are doing here is wonderful. Every success to you, and a thousand thanks to you all.] | https://president.ie/en/media-library/speeches/remarks-by-president-higgins-at-a-reception-at-st.-michaels-irish-centre-li |
Under direction, the SUD Therapist provides program management, counseling and appropriate therapeutic interventions to individuals with substance abuse and/or mental health issues assigned to the agency's SUD Treatment Program.
MINIMUM QUALIFICATIONS:
1. Education, Training, and/or Experience:
2. Certifications, Licenses, Registrations:
ESSENTIAL FUNCTIONS, DUTIES AND RESPONSIBILITIES:
Other duties may be assigned. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. Reasonable accommodations may be made for qualified individuals to perform the essential duties.
1. Performs intakes and assessments
a. Screens clients for appropriateness and levels of care. Conducts diagnostic assessments (clients)
b. Identifies and refers clients to specific services available to meet needs
c. Refers inappropriate or ineligible clients to other available services in the community
d. Helps clients to access resources from other agencies and organizations making linkage and advocating for services when barriers are encountered.
e. Participates in staff and/or agency committees and team meetings as needed to assure effective communications regarding new clients
2. Provides individual and/or group intervention services to assigned clients
a. Develops and implements treatment goals/plans to address needs, utilizing appropriate treatment scales/assessment tools
b. Monitors client goal development and follow through with individualized service planning
c. Develops and implements appropriate therapeutic interventions
d. Evaluates effectiveness of treatment plan and services
3. Utilizes effective and appropriate verbal de-escalation techniques and identification of behavioral difficulties.
4. Consults regularly with supervisor regarding cases, case-load size, appointment schedule and required productivity levels
5. Completes progress notes and corresponding billing sheets within established timeframe.
6. Assists with the administration of plans of the residential alcohol and drug abuse services
7. Ensures that facilities and/or programs meet and maintain compliance with funding, accreditation, certification standards, audit requirements and other issues related to quality, productivity, documentation etc. | https://www.aftercollege.com/company/community-care-network/116409/115716636/ |
Quarterback Value Forecasting and Fixing the NFL Draft's Market Failure
Author: Wojcik, Trevor
Advisor: Nichols, Mark W.
Date: 2010
Type: Thesis
Department: Economics
Degree Level: Master's Degree
Abstract
The National Football League (NFL) is a business that is worth nearly $7 billion annually in revenue. That makes it the largest money-making sport in the United States. The revenue earned by each franchise is dependent upon the repeated success of the team. A commonly held belief is that for a franchise to be successful you must have an elite Quarterback. This thesis uses NFL data for the 2000-2008 seasons to determine the role that Quarterback performance plays in team success. Having determined that Quarterbacks are important to NFL team success, the question becomes how a franchise can effectively obtain the best player. The NFL player draft is the most commonly used method for teams to find their Quarterback of the future. The problem is that the success rate for drafting Quarterbacks is very low. In this thesis I develop a more statistical approach to determining whether a drafted Quarterback will be successful. The model shows that certain college statistics, such as passing completion percentage, are strong indicators of professional success at the Quarterback position. Use of the data may aid teams in effectively drafting Quarterbacks, thereby improving team winning percentage and profitability. | https://scholarworks.unr.edu/handle/11714/4452 |
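Purely as an illustration of the kind of screening approach the abstract describes, and not the thesis's actual model, a toy scoring rule over a prospect's college statistics might look like the sketch below; the stat fields, weights, and threshold are hypothetical placeholders, not estimates from the thesis data.

```ts
// Hypothetical example only: the weights and threshold below are made-up
// placeholders for illustration, not fitted estimates from the thesis.
interface CollegeStats {
  completionPct: number;   // career completion percentage, e.g. 0.64
  yardsPerAttempt: number; // career passing yards per attempt
  tdToIntRatio: number;    // touchdowns thrown per interception
  gamesStarted: number;    // college games started
}

// A simple weighted linear score; completion percentage is weighted most
// heavily, echoing the abstract's point about its predictive value.
function draftSuccessScore(s: CollegeStats): number {
  return (
    5.0 * s.completionPct +
    0.20 * s.yardsPerAttempt +
    0.30 * s.tdToIntRatio +
    0.02 * s.gamesStarted
  );
}

const prospect: CollegeStats = {
  completionPct: 0.66,
  yardsPerAttempt: 8.1,
  tdToIntRatio: 3.2,
  gamesStarted: 38,
};

// A team might only consider prospects whose score clears a calibrated cutoff.
const HYPOTHETICAL_CUTOFF = 5.5;
console.log(
  draftSuccessScore(prospect) >= HYPOTHETICAL_CUTOFF
    ? "flag for further evaluation"
    : "does not clear the screen"
);
```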
Diagnosing Asperger’s in Children
Asperger syndrome, or simply Asperger’s, describes the way that an affected person understands other people, talks with other people, and acts with other people, such that he or she may not fit in well with others and may be unable to act like everyone else in different social situations. As of 2013, Asperger’s has been thought of as a developmental disorder and not as a mental illness. Most adults with Asperger’s can learn how to make friends, do useful work, and live successful lives, but it is the children with Asperger’s who need help the most. In addition, studies have shown that both sexes can have Asperger’s, although it is more common in males. Asperger’s in Bloomington should be dealt with by people who care.
Those with Asperger's in Bloomington need help. The condition is an autism spectrum disorder (ASD) characterized by significant difficulties in social interaction and nonverbal communication, along with restricted and repetitive patterns of behavior and interests. It is different from other autism spectrum disorders in that it relatively preserves linguistic and cognitive development. In some cases of Asperger's, physical clumsiness and peculiar or odd use of language are frequently reported, although these symptoms are not required for diagnosis. Its exact cause is currently unknown, but research suggests a likely genetic component.
Asperger Syndrome in Bloomington IL
Currently, no single treatment exists for Asperger’s, and the effectiveness of particular interventions can only be supported by limited data. Asperger’s in Bloomington can only be alleviated by expert psychiatrists. In addition, many clinics specializing in the disease conduct effective intervention that is aimed at improving the symptoms and functions of the patient. With behavioral therapy that focuses on specific deficits to address poor communication skills, obsessive or repetitive routines, and physical clumsiness, Asperger’s in Bloomington can be managed. Most children improve as they mature to adulthood, but social and communication difficulties may persist.
Therefore, experts have advocated a shift in the perspective of the disorder, stating that Asperger’s is a difference instead of a disability that must be treated or cured. In helping those with Asperger’s in Bloomington, the Able Center addresses the core symptoms through scientifically proven improvement techniques, such as applied behavior analysis, cognitive behavioral therapy, stress management, sensory processing development, social communication, and positive behavior support. With these methods, patients feel better by connecting with the people and things around them in a higher capacity.
Helping Patients with Asperger’s in Bloomington IL
Your child can live a life that you’ve always wanted with professionals who can help you deal with Asperger’s. The Able Center can also help you with any of the following:
- Autism in Bloomington IL
- Brain Disorders in Bloomington IL
- Concussion Treatment in Bloomington IL
- Dyslexia in Bloomington IL
Contact us today for quality services that work.
Professionals who Treat Asperger’s
The Able Center is composed of professionals who deal with Asperger’s using the latest processes and tested methods. Through the adoption of the most effective techniques, our programs are able to help those with Asperger’s syndrome in Bloomington and those with other neurodevelopmental issues, such as general neurodevelopmental delay, autism spectrum disorder, anxiety or OCD, anxiety, depression, and low self-esteem, among others. Our experts are licensed and trained in relevant areas of psychology and neuropsychology, giving our patients peace of mind. Come to the Able Center for help with any child psychological disorder.
Get in touch with The Able Center today. | https://www.theablecenter.com/aspergers-in-bloomington |
There are currently limited resources available to paediatric cancer centres for patient and carer chemotherapy education. Consumer medicines information (CMI) is often inappropriate as paediatric chemotherapy protocols are complex and many chemotherapy medications in paediatric oncology are used outside of their approved indications.
Aim:
To identify patient/carer information needs and their expectations and preferences regarding written paediatric chemotherapy information.
Methods:
A questionnaire was developed to collect data on families’ preferred form of information delivery, type and amount of written information, and the format and depth of information. Families were asked to comment on 2 examples of publicly available paediatric chemotherapy information leaflets and a CMI. The questionnaire was available to families at a tertiary paediatric cancer centre from January to March 2016.
Results:
Twenty two completed questionnaires were returned. Fifty percent of families preferred printed information whilst 45% preferred both printed and electronic versions. Verbal counselling by a pharmacist in addition to written information was considered important by 91% of families. Information families considered extremely important were the types (95%), time of onset (77%) and likelihood (73%) of side effects, as well as what to do when they occur (82%). Parents preferred a simple layout of information with bullet points, tables and clear headings. They valued specific and practical information involving how each medication works, explanation of symptoms, the reversibility of side effects and the role of supportive care medicines. Although families generally welcomed all available relevant information, they disliked the CMI for including unnecessary information and the lack of explanation about side effects.
Conclusion:
The publicly available paediatric chemotherapy leaflets and CMI do not adequately address the information needs of paediatric oncology families. This information can be used to develop chemotherapy information that meets families' preferences, improving the relevance and quality of patient education. | http://cosa-2017.p.asnevents.com.au/days/2017-11-14/abstract/47507 |
Bread is a staple food prepared from a dough of flour and water which is then baked.
There are many combinations and proportions of types of flour and other ingredients, and also of different traditional recipes and modes of preparation of bread. As a result, there are wide varieties of types, shapes, sizes, and textures of breads in various regions.
Bread may be leavened [made to rise] by many different processes, ranging from the use of naturally occurring microbes and yeasts to – and I quote – “high-pressure artificial aeration methods” during preparation or baking in industrial bakeries.
Some products are left unleavened, either for preference, or for traditional or religious reasons.
Many non-cereal ingredients may be included, ranging from fruits and nuts to "various fats". Commercial bread in particular, commonly contains additives, some of them non-nutritional, to “improve texture, colour, shelf life, or ease of manufacturing”.
And it is the way the grain is harvested, together with the additives and the manufacturing process, that has led us, sadly, to move bread to the overload section, where it can be found under Eating bread.
If you make your own bread and have a genuine source of organically grown flour that has not been tampered with by the addition of extra chemicals like folates, then bread is fine. We cannot, however, provide you with the nutrient value of this bread, as there are no details on nutritional value on any site. | https://allaboutheaven.org/suppression/bread/105 |
The master's programme in fine art prepares students for activity as independent, critical practitioners in the field of contemporary art. The programme provides students with the artistic, methodological and theoretical qualifications necessary to work professionally in the complex world of contemporary art. Students develop a broad range of qualifications, including creative thinking, artistic development work, project management and technical knowledge related to their own artistic practice.
Self-directed artistic practice is central in all the semesters. The master's project constitutes the main component of the programme. Each student defines his or her own artistic project. The master's project is developed over two years through individual research, experimentation, criticism and discussion with the main supervisor, other academic employees, guest teachers and fellow students. The master's project can be formulated and developed in relation to a given medium or a combination of media and materials.
The Art Academy - Department of Contemporary Art has workshops, technical expertise and academic supervision competence in photography, graphics, installation, ceramics and clay, sound, painting, performance, sculpture, drawing, textiles, moving images and digital art. A master's project can also be based on other criteria, linked to location-specific, relational and/or interdisciplinary approaches.
Through their master's project, students become familiar with artistic development work and its methods, which the Universities and University Colleges Act classifies as equivalent to academic research. Artistic development work is based on the artist's own experiences and reflection, and contributes to knowledge development on an artistic basis, in the form of artistic results, as well as text and other media.
The Art Academy - Department of Contemporary Art offers a stimulating and challenging learning environment for motivated students. A central objective of the master's programme is to promote and establish academic discourse among the students, and thereby stimulate continuous critical reflection on their own and others' artistic practices.
Semester
Every autumn.
What you Learn
Required Learning Outcomes
Learning outcomes: On completion of the programme, graduates should have the following learning outcomes defined in terms of knowledge, skills and general competence:
Knowledge and skills:
- Demonstrate the knowledge, skills and competencies required for contemporary artistic practice at a professional level
- Develop and execute a major, independent art project and exhibit it in a professional context
- Identify and assess artistic and ethical challenges and relevant theoretical issues
- Analyse, formulate and communicate intentions, values and meaning in artistic work to an audience with varying levels of knowledge about art
- Have knowledge about artistic development work as a direction within an academic, artistic context
- Establish, maintain, administer and present a professional artistic practice
General competence:
- Negotiate and interact effectively with others to initiate artistic projects
- Evaluate their own artistic work, and take responsibility for their own academic and artistic development
Study Period Abroad
Exchanges and external project periods: The programme allows for student exchanges and/or external project periods for students in the second semester (MAKU2). This must be pre-approved by the student's main supervisor. | https://www.uib.no/en/studies/MAKUNST |
Spinal Stenosis Overview
Spinal stenosis is a narrowing of the spinal canal, the cavity within the vertebral column through which the spinal cord and nerves pass. Nerves leaving the spinal cord are called nerve roots; they pass through the vertebral column via small canals. Arthritic changes that cause overgrowth of vertebral bones may compress the spinal cord or the nerve roots, impairing sensation and muscle strength in the affected portion of the body. Most common among people in their 50s and 60s, spinal stenosis affects the lumbar (lower back) portion of the spine more than the cervical (neck) region. Symptoms include pain, numbness, and weakness in the neck, arms, lower back, and legs.
What Causes Spinal Stenosis?
- Disorders that involve arthritic degeneration and abnormal overgrowth of bone tissue, such as osteoarthritis or Paget’s disease, may cause spinal stenosis.
- Natural aging of the spine can lead to spinal stenosis.
- Spinal stenosis can be hereditary.
Symptoms of Spinal Stenosis
- Back pain that may radiate to the buttocks and legs. Pain worsens with activity.
- Numbness in the buttocks and legs
- Weakness in the legs when walking
- Back pain with loss of or changes in bowel or bladder function
- Balancing problems
- Neurogenic claudication (leg pain or cramping caused by compression of the nerves emanating from the spinal cord)
- Numbness or loss of sensation in the back, neck, arms, and shoulders
- Cramping
- Foot disorders
- Less pain when leaning forward or sitting
Spinal Stenosis Prevention
- There is no way to prevent spinal stenosis.
Diagnosis of Spinal Stenosis
- Patient history and physical examination. Reflexes in the legs are tested to assess nerve involvement.
- X-rays, sometimes with injected dyes (myelography)
- CT (computed tomography) scans or MRI (magnetic resonance imaging)
How to Treat Spinal Stenosis
- Losing weight and toning the abdominal muscles with exercise may reduce pressure on the spine. Check with your doctor before beginning any weight-loss program or new exercise regimen.
- A lumbosacral support (a corset available at some pharmacies and medical-supply stores) may discourage motion that causes pain and help ease walking and exercise. It should not be worn all day, however.
- Anti-inflammatory drugs may relieve pain.
- If pain prevents normal activities despite self-care and medication, surgery to relieve pressure on the nerves (decompression surgery) may be warranted. The surgeon opens the spinal column where narrowing has occurred and removes the constricting bone or fibrous tissue. The opening through which nerve roots pass may be widened; if an excessive amount of bone is removed, the affected vertebrae may be fused together to increase spinal stability. Physical therapy may aid rehabilitation.
- Acupuncture can help to relieve some of the pain for mild cases of spinal stenosis.
- Surgical methods such as spinal fusion and laminectomy may be recommended.
When to Call a Doctor
- Call a doctor if you have persistent pain, numbness, or weakness in the back, legs, or neck, or if back pain accompanies changes in bowel or bladder function.
Source: http://www.healthcommunities.com/back-pain/what-is-spinal-stenosis.shtml
For those who buy bags of peat moss, you may occasionally find long rope-like strands mixed in with the peat. There is a good chance that these long strands are rhizomes (i.e., underground stems) from pod grass (Scheuchzeria palustris). This plant is common in northern bogs where most of the peat is harvested. A detailed study of the peat allows you to identify other plant species from old stems or other parts located within the peat.
There are 26 known populations of this peatland plant, with some populations containing hundreds of individuals. Possibly as many as a dozen more populations might be encountered through focused surveys in peatlands within the Adirondacks and Tug Hill.
Due to its specific habitat requirements, this plant has a limited range. However, thanks to various protection efforts, populations of this peatland plant have remained stable for the past 100 or so years.
In the distant past (i.e., prior to European colonization) this plant was likely more common. As peatlands were mined or otherwise altered, some populations were undoubtedly lost. Today peatlands are rarely altered, except by natural succession, and the forward-looking trend should see our current populations continuing long into the future.
As a peatland plant, this species is protected by various wetland regulations. The only possible threat is natural succession where shrubs may shade out this plant and alter the hydrological regime.
As long as the peatlands where this plant is found are protected, there are no specific management requirements. If a peatland becomes too shrub dominated, specific measures to address this natural succession may be desired if the goal is to maintain an open peatland environment.
At this point, no research needs have been identified.
This plant is found in sphagnous bogs and nutrient-poor to medium fens, mostly within the Adirondacks and Tug Hill regions. It often prefers the wet swales and depressions within these peatlands (New York Natural Heritage Program 2005). Sphagnum bogs, marshes, and lake margins (Flora of North America 2000). Sphagnum bogs (Rhoads and Block 2000). Cold sphagnum bogs (Gleason & Cronquist 1991). Almost entirely restricted to bogs, where it tends to thrive in wetter sphagnum areas (Voss 1972). Bogs, quagmires and peaty shores (Fernald 1970).
Most populations of this plant are from peatlands within the Adirondacks, Tug Hill, and Oswego County. A few scattered populations range south to the Capital District and west along the Great Lakes Plain to Rochester.
This is a circumboreal species that ranges south to northern New Jersey, northern Pennsylvania, Wisconsin, Minnesota, northern Idaho, and along the coast to northern California. There are disjunct populations in Virginia, West Virginia, Illinois, Iowa, and possibly New Mexico.
Pod grass is a grass-like perennial herb with stems that are 2-4 dm high and arise singly from creeping rhizomes. Alternate, strap-like leaves, 1-4 dm long, sheath the stem and become smaller upward. The lower stems are usually covered with old membranous sheathing bases. 3-12 stalked flowers are borne on the upper stem in the axils of reduced leaf sheaths. Each flower has 6 undifferentiated, greenish-white, separate petals and sepals that are ca. 3 mm long, 6 stamens, and 3 ovaries that are united at the base. The (usually) 3 capsules are 5-8 mm long and have 2 seeds.
The zig-zagging stems range from 10-40 cm tall. The leaves are grass-like with conspicuously dilated sheaths; the basal leaves are clustered with overlapping sheaths. There are 1-3 stem leaves that are well separated. The leaf blades are erect, 5-30 cm long, 1-3 mm wide, and have a distinctive pore at the apex. The few-flowered (2-14) racemes are 3-10 cm long. The lowest bract is foliaceous and has a well-developed blade. The second bract only has a small blade (or none at all), and the others are reduced to small, bladeless sheaths. The fruits form 3 diverging and inflated 1-2 seeded follicles that are 6-7(10) mm long, brown or straw-colored, and with a curving beak 0.5-1 mm long.
In order to help locate the plant and develop a better visual search image, this plant is easiest to locate when in fruit. It may be identified at any life stage, including dead rhizomes. It is probably best to collect an entire stalk, including stem, leaves, and mature fruit, if you need someone to verify the identification. Photos may also be used for verification.
This plant is quite distinctive when in fruit or flower. It may be overlooked when found with various rushes and grasses, but it is easy to identify once noticed.
This plant flowers in early spring and fruits are produced shortly thereafter. Surveys may be conducted at any point during the growing season, but they may be most effective when this plant is fruiting. Sometimes this plant can be overlooked, especially when various rushes are present.
The time of year you would expect to find Pod Grass vegetative, flowering, and fruiting in New York.
Pod Grass
Scheuchzeria palustris L.
Some manuals treat the North American plants as a separate subspecies or variety. Flora of North America reports "variability in [follicle and stigma characters], in specimens from both hemispheres, vitiates their worth for varietal distinction" (FNA 2000).
Flora of North America Editorial Committee. 2000. Flora of North America north of Mexico. Vol. 22. Magnoliophyta: Alismatidae, Arecidae, Commelinidae (in part), and Zingiberidae. Oxford Univ. Press, New York. xxiii + 352 pp.
Fernald, M.L. 1950. Gray's manual of botany. 8th edition. D. Van Nostrand, New York. 1632 pp.
Gleason, Henry A. and A. Cronquist. 1991. Manual of Vascular Plants of Northeastern United States and Adjacent Canada. The New York Botanical Garden, Bronx, New York. 910 pp.
New York Natural Heritage Program. 2005. Biotics Database. Albany, NY.
New York Natural Heritage Program. 2023. New York Natural Heritage Program Databases. Albany, NY.
Reschke, Carol. 1990. Ecological communities of New York State. New York Natural Heritage Program, New York State Department of Environmental Conservation. Latham, NY. 96 pp. plus xi.
Voss, E.G. 1972. Michigan Flora, Part I. Gymnosperms and Monocots. Cranbrook Institute of Science Bulletin 55 and the University of Michigan Herbarium. Ann Arbor. 488 pp.
Information for this guide was last updated on: January 22, 2009
Please cite this page as:
New York Natural Heritage Program. 2023. Online Conservation Guide for Scheuchzeria palustris. Available from: https://guides.nynhp.org/pod-grass/. Accessed January 30, 2023.
The Saatchi Gallery welcomed a stylish crowd to the launch of artist Sacha Jafri's retrospective, Universal Consciousness. Meredith Ostrom turned heads as she arrived in a jazzy popcorn-themed dress, while Camilla Rutherford and Tracy Ann Oberman talked all things arty as they admired the abstract paintings. After a few glasses of Taittinger, guests began bidding on Jafri's artworks. The auction ended up raising almost £1m for the Royal Foundation's Heads Together charity.
This is a warm and sweet oil painting created by Mary Cassatt, an American painter and printmaker. Cassatt often created images of the social and private lives of women, with particular emphasis on the intimate bonds between mothers and children. The Boating Party is a prime example.
When you appreciate it, I am sure you will like it for the happiness it conveys. Have you ever dreamed of spending a day out with your family? Have you ever dreamed of setting off on a trip with your sweetheart and little baby? Have you ever had a beautiful day out with your husband and your lovely little baby girl? On the boundless blue sea, with the mild sun shining and the sea wind breezing, a man in a dark blue robe pulls the oars. He wears a dark blue cap matching his robe, and a blue girdle. In front of him, his wife sits in the small white boat, hugging their lovely baby. She wears a long lavender dress and a white hat trimmed with yellow flowers and green leaves. The baby, in a pink dress and white hat, stares at her father, and so does her mother. In the distance lies a beautiful seaside village, full of green trees and houses.
All in all, it is just like a family party in a boat. True to its name, The Boating Party shows us a pleasant and romantic family outing on the water.
PORTLAND, OR – Glaucoma – often called “the sneak thief of sight” because it can strike without symptoms – is one of the leading causes of blindness in the United States. Unfortunately, 91 percent of Americans incorrectly believe glaucoma is preventable, according to the American Optometric Association’s third annual American Eye-Q® survey.
Although glaucoma is not preventable, The Oregon Optometric Physicians Association says if diagnosed and treated early, the disease can be controlled with eye drops, medicines, laser treatment or surgery. January is National Glaucoma Awareness Month and a good time to become educated on the disease.
Approximately 2.2 million Americans age 40 and older have glaucoma, according to National Glaucoma Research and as many as 120,000 are blind because of the disease. The number of Americans with glaucoma is estimated to increase to 3.3 million by the year 2020, as baby boomers age.
Glaucoma occurs when internal pressure in the eye increases enough to cause damage to the optic nerve, leading to loss of nerve tissue and vision. The most common type, primary open-angle glaucoma, develops gradually and painlessly. A much rarer type, acute angle-closure glaucoma, can occur rapidly and its symptoms may include blurred vision, loss of peripheral vision, seeing colored rings around lights, and pain or redness in the eyes.
Dilating the eyes in examination allows a doctor to see the retina, optic nerve and vessels in the back of the eye more clearly. Yet, even though African-Americans and Hispanics are genetically more susceptible to glaucoma, 37 percent of those surveyed did not have their eyes dilated during their last eye exam.
The Oregon Optometric Physicians Association recommends comprehensive eye exams every two years for adults under age 60 and every year thereafter. Your doctor may recommend more frequent exams depending on your medical or family history.
To find an optometrist in your area, or for additional information on glaucoma and other issues concerning eye health, please visit www.oregonoptometry.org or www.aoa.org.
The American Optometric Association represents approximately 36,000 doctors of optometry, optometry students and paraoptometric assistants and technicians. Optometrists serve patients in nearly 6,500 communities across the country, and in 3,500 of those communities they are the only eye doctors. Doctors of optometry provide more than two-thirds of all primary eye care in the United States. For more information, visit www.aoa.org.
5-HTP (5-hydroxytryptophan) is a naturally occurring substance derived from the seed pods of Griffonia simplicifolia, a West African medicinal plant. In humans, 5-HTP is the immediate nutrient precursor to the neurotransmitter serotonin (5-HT). This means that 5-HTP converts directly into serotonin in the brain (see Figure 1). Serotonin has many profoundly important functions, including a role in sleep, appetite, memory, learning, temperature regulation, mood, sexual behavior, cardiovascular function, muscle contraction, and endocrine regulation.
WHAT'S THE PROBLEM WITH SEROTONIN DEFICIENCY?
Serotonin production declines with age, and at any age its abundance can be compromised further by stress. Low levels of serotonin are most commonly manifested by depressed mood, anxiety, and insomnia. They can also lead to various other complaints and disorders, diminishing one's quality of life. But now something can be done: Download the 2015 5HTP Special Report or continue reading.
Figure 1. Serotonin metabolism. The brain neurotransmitter serotonin is replenished naturally by the nutrient 5-HTP, leading to more efficient functioning of neural pathways.
5-HTP CAN RESTORE SEROTONIN LEVELS AND HELP IMPROVE:
√ General mood1
√ Depression2
√ Anxiety3
√ Insomnia4
√ Weight loss5
√ PMS6
√ Chronic headaches7
√ Migraines8
√ Fibromyalgia9
WHAT'S THE DIFFERENCE BETWEEN 5-HTP AND PROZAC®?
Prozac is a prescription drug, whereas 5-HTP is a natural nutrient supplement. Prozac is in a class of drugs called SSRIs (selective serotonin reuptake inhibitors), other examples of which are Zoloft® and Paxil®. These drugs were originally developed to treat depression. Now they are widely prescribed for other disorders, including anxiety, sleep disturbance, PMS, obesity, chronic headaches, and other chronic pain disorders. Studies of 5-HTP have shown it to be valuable for all the same disorders. In direct comparison with an SSRI, 5-HTP has been shown to be equivalently beneficial for depression, but with significantly fewer side effects.
Both 5-HTP and SSRIs increase the availability of serotonin in the brain, but they work in different ways. SSRIs prevent serotonin from being taken back up into the neurons, leaving more of it available in the synapses between neurons. In other words, SSRIs allow the brain to reuse the serotonin that is already there. By contrast, 5-HTP replenishes serotonin levels by biological synthesis of additional serotonin molecules, providing new stores of this necessary neurotransmitter in the brain.
5-HTP MAY BE BETTER THAN TRYPTOPHAN
Tryptophan supplements have a long history of use for treating depression and anxiety disorders and for enhancing sleep. Unfortunately and unjustifiably, the FDA has since 1988 prohibited the manufacture and sale of tryptophan in the U.S., based on a single contaminated batch by a Japanese company during the late 1980s. The FDA has maintained this ban despite overwhelming evidence that it is not only unnecessary, but it is inducing people to take dangerous and expensive drugs to achieve the benefits they could achieve safely and inexpensively with tryptophan.
While the ban on tryptophan may be needless and deplorable, it has had one unforeseen benefit: it has allowed 5-HTP, which is one step closer to serotonin in the metabolic pathway, to take the stage. It turns out, by some studies, that 5-HTP may be even better than tryptophan for treating suspected serotonin deficiency disorders of the brain.
HOW ARE 5-HTP FORMULATIONS DIFFERENT?
5-HTP alone is available for those who do not wish to use it in a formulation. In general, it has few to no side effects. Because of the many benefits that 5-HTP can offer, however, several advanced formulations provide a range of options for people with differing personal needs.
SUPPLEMENTING WITH 5-HTP FOR ENHANCED MOOD
A mood-enhancing formulation containing primarily 5-HTP, a form of vitamin B6, and St. John's wort is designed to counteract age-related serotonin depletion and to provide general mood enhancement. Because it supports healthy serotonin levels, it may help those who are subject to mild to moderate depression. In addition, it may help with any of the other disorders discussed above.
5-HTP is well established for its mood-enhancing properties, among other benefits. St. John's wort, as well, has been studied with positive results for the treatment of mild depression. Its mechanisms of action are not clearly understood, but they are probably different than 5-HTP's. It has been used for a wide variety of conditions since at least the time of ancient Greece, and it was commonly used throughout the folk medicine of the Middle Ages. It is currently in widespread use in Germany as a standard treatment for depression.
Over the centuries, St. John's wort earned its reputation as a powerful mood-altering substance. Now science is confirming this reputation. When combined judiciously with 5-HTP, St. John's wort acts synergistically, i.e., the combined effect of the two ingredients is greater than the sum of their individual effects. Other uses are for improved wound healing, anti-inflammatory effects, antimicrobial activity, sinusitis relief, seasonal affective disorder (SAD), and, especially, relief from depression.
8 practical tips, with examples.
Good UI design is the thoughtful application of whitespace at all scales of an interface, from component to page, micro to macro. When whitespace is used well, the result is an interface that is harmonious, legible, and, above all, effective and easy to use.
1 / Follow the Law of Proximity.
The amount of whitespace between elements in the UI indicate how the elements relate to one another. The Law of Proximity suggests that:
- Related elements should be spaced closer together. Conversely, unrelated elements should be spaced further apart.
- Elements of the same “type” should be spaced evenly apart.
Follow these basic rules to help users readily organize and perceive logical groupings in your UI.
2 / Start from a baseline of generous whitespace.
Let your design breathe. A reliable way to improve the usability of an interface is to ensure that there is a generous amount of whitespace between all its elements.
There are exceptions, of course (see the last tip below), but for most UIs, having a generous amount of whitespace is usually better than having too little.
3 / Use whitespace to focus attention on particular design elements.
Having less information and fewer elements on the page can help bring clarity and focus, and draw attention to the information and elements that are on the page.
Whitespace can also be an effective way to add emphasis to text. It can be used in combination with — and even as an alternative to — bumping up the text size, or changing the color, case, or weight of the text.
This sentence, surrounded by whitespace, is a case in point.
Making an element bigger or brighter isn’t the only way to draw attention to it. Consider that when everything is bigger and brighter and important, nothing actually is.
4 / Use the same method for measuring space in both design and implementation.
The space between adjacent text elements can be measured in one of two ways.
Between adjacent “bounding boxes”
This method is how most UI rendering engines (eg. the Document Object Model on a webpage) measure space. However, this method is not particularly precise because there is excess space that is “unaccounted” for at the top and bottom of each bounding box.
Between adjacent cap heights
This method is more precise, but could complicate implementation.
Both methods are reasonable, but have different trade-offs. What is important here is that the same method for measuring space is used in both design and implementation. This is to ensure that the design can be accurately translated into code.
5 / Use a spacing system.
A spacing system specifies the set of possible spacing values to be used in a design. Using a spacing system can help bring about a sense of consistency and harmony to a UI.
A spacing system is to whitespace what a color palette is to color. Just like a color palette, a spacing system forces you to make UI design decisions from a constrained set of options. With a spacing system in place, you need only consider the handful of spacing values from the system during the UI design process. This makes design iteration faster and more systematic.
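To make the idea concrete, here is a minimal sketch of a spacing scale and a helper that components use instead of raw pixel values. The token names and pixel values are illustrative assumptions, not a recommended standard.

```python
# A hypothetical spacing scale, in pixels. Each step is clearly larger than the
# previous one, so any two tokens stay visually distinguishable.
SPACING = {"xs": 4, "sm": 8, "md": 16, "lg": 32, "xl": 64}

def gap(token: str) -> str:
    """Return a CSS-ready length for a named spacing token."""
    return f"{SPACING[token]}px"

print(gap("md"))  # -> 16px
```

Because every gap in the UI is expressed through one of these few tokens, spacing decisions stay consistent across screens and are easy to revisit later.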
6 / Avoid using spacing values that are visually too similar.
When spacing values are mathematically different but visually too similar, the way that users perceive logical groupings in the UI could become ambiguous. Contrast matters. If your intent is for two spacing values to be different, then make it readily obvious that they are in fact different.
Consider having a wider “spread” of values in your spacing system, with a visually obvious difference between adjacent spacing values.
7 / Reduce the line-height (ie. leading) as text size increases.
Increasing the text size while keeping the same proportional line-height will result in there being too much whitespace between each line of text. Relative to the text size, the proportional line-height of headings should generally be less than the line-height of the body copy.
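As a rough illustration of this rule, the sketch below picks a line-height from the text size, using tighter ratios for larger type. The breakpoints and ratios are illustrative assumptions, not a typographic standard.

```python
def line_height_px(font_size_px: float) -> int:
    """Choose a line-height for a given text size: tighter leading as text grows."""
    if font_size_px >= 32:      # large headings
        ratio = 1.2
    elif font_size_px >= 20:    # subheadings
        ratio = 1.35
    else:                       # body copy
        ratio = 1.5
    return round(font_size_px * ratio)

print(line_height_px(16), line_height_px(40))  # -> 24 48
```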
8 / In an information-dense UI, rely on other techniques besides whitespace to convey how elements in the UI relate to one another.
For example:
- Adding a subtle fill or border around a group of related elements.
- Using a line to separate adjacent elements that are closely-packed vertically. Or, using an interpunct character (“·”) to separate adjacent elements that are arranged horizontally.
- Changing the size, case, weight, or color of the text as a way to associate or differentiate UI elements.
Making an interface more information dense could help to make it more efficient to use. Remember that an interface that is information dense does not necessarily need to feel cluttered or overwhelming.
An effective way to learn about whitespace — and, indeed, UI design in general — is to create a “master copy”: pick one or more screens from any app or website with an interface that you admire, and recreate it in its entirety, from scratch. You will gain an insight into the many small design decisions that were made, discover interesting patterns, and see how the above tips about whitespace actually play out in well-crafted UIs.
There are reasons for why an interface “looks right”. Through experience and practice, you can hone your visual sense and intuition about how to apply whitespace in your designs. Your users will thank you for it.
Duties and Responsibilities
The Salesforce Administrator will work as part of the team dedicated to ensuring that we are maximizing efficiency and capitalizing on the full features and benefits of the system based on the various user group needs. We are seeking someone with excellent technical and communication skills who can interface directly with internal stakeholders to understand their needs in order to administer and enhance the system accordingly. This person will serve as the “go to” for users, promote adoption, keep current on new releases and AppExchange solutions, provide training, and more. Additionally, the Administrator will perform basic updates, such as add/delete users, and adding basic custom fields.
RESPONSIBILITIES:
Create and Manage Changes to the System –
• Proactively seek out and identify needed system changes.
• Proactively gather feedback from users.
• Manage system changes without interruption to the user.
• Communicate system changes to the users in advance so they understand the change and how to use it prior to implementation.
• Gather requirements from end users.
• Modify the system to increase benefits and usability.
• Manage the change control process and “Change Management” Committee if appropriate.
• Manage all processes that impact / relate to Salesforce.com.
• Manage new releases of SFDC and efficiently roll out new features.
• Create and maintain fields, views, reports, dashboards, campaigns and other salesforce.com objects and functions.
• Create custom objects when necessary.
• Handle on-going customization/ alteration of Salesforce.com.
• Maintain, enhance and create workflows, functions and configurations within the Salesforce.com environment.
• Create new reporting capabilities and respond to ad hoc reporting requests as needed.
• Provide support functions as needed.
• Provide sales and financial data to company executives.
Maintain System, Security and Integrity –
• Map salesforce.com hierarchy and territories in response to personnel changes.
• Reassign Accounts, Contacts, and Opportunities in response to personnel changes.
• Grant/ remove and maintain user licenses.
• Maintain security including sharing rules and security levels.
• Design, Create and maintain user roles, profiles and hierarchies.
• Monitor application storage usage and archive data as needed.
User Assistance, Training, Adoption and Satisfaction –
• Create and administer training to existing or new users/groups.
• Provide one to one training to end users on an on-going basis.
• Expand use of Salesforce.com – attend planning meetings, assist with determining if /how salesforce.com can be used in new ways as opposed to purchasing a new internal system.
• Assist sales management to create processes in salesforce.com to help monitor activities, trends, sales and leads.
• Communicate regularly with the user base regarding new features, enhancements and changes to the system.
• Monitor usage and mentor users/groups needing assistance.
• Continually seek ways to further enhance the end-user experience.
• Be the company SME on Salesforce.com.
Process Creation, Documentation and Maintenance –
• Document company processes and workflows.
• Develop process documentation and field maps.
• Create new processes and associated reporting.
Data Quality, Migration and Maintenance –
• Assist with migration from older systems/processes into Salesforce.com.
• Monitor neglected Leads, Opportunities, Accounts, and Contacts as appropriate.
• Import data as appropriate.
• Monitor and manage exception logs for back end system integration with SFDC.
• Manage duplicate records.
• Monitor and improve data quality.
• Ensure data integrity by merging duplicate Leads, Contacts, and Accounts; performing mass uploads and updates of data as required; Removing unnecessary fields and data; ensuring screens, fields and workflow have accurate names and reflect current workflow.
Report and Dashboard Creation and Maintenance –
• Create and maintain dashboards.
• Create and maintain reports including folder maintenance.
• Develop complex, macro driven reports to summarize system information for Senior Management.
• Build and manage report folders for reps to improve sales efficiency.
ESSENTIAL REQUIREMENTS:
A system administrator needs a solid understanding of the organization's business. They must have solid salesforce.com skills and, ideally, understand the general concepts of Customer Relationship Management (CRM). Most importantly, they should be proactive, organized and logical.
Michael R. Lewis is a retired corporate executive and entrepreneur. During his 40+ year career, Lewis created and sold ten different companies ranging from oil exploration to healthcare software. He has also been a Registered Investment Adviser with the SEC, a Principal of one of the larger management consulting firms in the country, and a Senior Vice President of the largest not-for-profit health insurer in the United States. Mike's articles on personal investments, business management, and the economy are available on several online publications. He's a father and grandfather, who also writes non-fiction and biographical pieces about growing up in the plains of West Texas - including The Storm.
Astute readers will realise that the above guidance is mainly taking different angles to help prepare for and guide decision making by the investor. The ability to confidently make decisions is vital for investment profits and long-term success. This pdf about the decision making models of Charlie Munger (business partner to Warren Buffett at Berkshire Hathaway - both are certified investment immortals) is almost certain to prove helpful.
What are your financial goals for 10, 15, or 20 or more years down the line, and how do you plan on getting there? What is your level of risk tolerance, and what sort of investment approach will you take (value investing, dividend investing, or some combination of multiple strategies)? As you consciously outline your financial goals and the type of investor you want to be, you can experience success as a disciplined investor in the long run and stay on track with your plans.
Taxes, like broker fees, will cut into your profits, as will any penalties for failing to pay the correct dues. But, with so many differences between tax systems, knowing where you stand and what your obligations are isn’t always straightforward. The best free tips, therefore, will help you maximise your profits whilst remaining within the parameters of tax laws.
Every investor should try to establish what their goals and objectives are prior to investing. There isn’t necessarily a wrong objective, but it’s more important to understand your goals because that will help drive your decisions. For instance, if you plan to regularly trade in and out of stocks, you might be better off opening an IRA account so you don’t have to pay taxes on your trades. If you plan to be a long-term investor, taxes won’t be as important of a factor and you could hold your account in a taxable or tax-free account.
Since the underlying businesses operate in differing markets, sectors and countries, their quoted prices move independently as supply and demand in them rises and falls and new information is released to the public about the current business situation. It is the changing of prices that offer investors the opportunity to make a capital gain (or loss) via ownership.
Control greed – Greed often influences traders in the following way: you enter a trade at $80 with a target of $95, but then it hits $95 and you think ‘I’ll just hold on a bit longer and increase profits further’. This only ends with you eventually losing big. The solution: stick rigidly to your strategy. Think long term and don’t deviate from your plan; there’s simply no need to gamble.
Understand that for both beginning investors and seasoned stock market pros, it's impossible to always buy and sell the best stocks at exactly the right time. But also understand that you don't have to be right every time to make money. You just need to learn some basic rules for how to identify the best stocks to watch, the ideal time to buy them, and when to sell stocks to lock in your profits or quickly cut any losses.
Experienced investors such as Buffett eschew stock diversification in the confidence that they have performed all of the necessary research to identify and quantify their risk. They are also comfortable that they can identify any potential perils that will endanger their position, and will be able to liquidate their investments before taking a catastrophic loss. Andrew Carnegie is reputed to have said, “The safest investment strategy is to put all of your eggs in one basket and watch the basket.” That said, do not make the mistake of thinking you are either Buffett or Carnegie – especially in your first years of investing.
You can buy stock directly using a brokerage account or app. Other options exist for those who are employed—either a 401k plan or a 403b plan if you work for a non-profit. Then there's the IRA—be it a Traditional IRA, Roth IRA, Simple IRA, or SEP-IRA account. You can also set up a direct stock purchase plan or dividend reinvestment plan (DRIP). Each type of account has different tax implications.
The biggest obstacle to stock market profits is an inability to control one’s emotions and make logical decisions. In the short-term, the prices of companies reflect the combined emotions of the entire investment community. When a majority of investors are worried about a company, its stock price is likely to decline; when a majority feel positive about the company’s future, its stock price tends to rise.
Understand blockchain – Whilst you don’t need a thorough understanding of the technical makeup of cryptocurrencies, understanding how blockchain works will only prove useful. Once you understand how they secure transactions (blocks) publicly and securely, you’ll be in a better position to gauge the market’s response to big news events. Such as a huge company incorporating blockchain technology into their everyday business operations.
There are also other reasons for putting out free stock picks. In many cases, the actual companies themselves are paying various people or services to tell the world about their business. It's common to have a small, publicly traded penny stock pay a lot of money to get the right kind of exposure to help lift their share price. The aim is to issue more stock at a higher price and raise money more easily.
Should you sell these five stocks, you would once again incur the costs of the trades, which would be another $50. To make the round trip (buying and selling) on these five stocks would cost you $100, or 10% of your initial deposit amount of $1,000. If your investments do not earn enough to cover this, you have lost money by just entering and exiting positions.
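To make the arithmetic above explicit, here is a small sketch that reproduces the numbers from this example; the $10 commission per trade is the assumption implied by the $50 cost for five purchases.

```python
# Round-trip cost of the example above: five stocks bought and later sold,
# with a $10 commission per trade, against a $1,000 starting deposit.
commission_per_trade = 10.00
positions = 5
deposit = 1_000.00

round_trip_cost = commission_per_trade * positions * 2  # buys plus sells
cost_share = round_trip_cost / deposit

print(f"Round-trip cost: ${round_trip_cost:.2f} ({cost_share:.0%} of the deposit)")
# -> Round-trip cost: $100.00 (10% of the deposit)
```

Your investments would need to earn at least that much just to break even on the fees.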
Finding the best stocks to buy and watch starts with knowing what a big market winner looks like before it takes off. As noted above, IBD's study of the top-performing stocks in each market cycle since the 1880s has identified the seven telltale traits of market winners. Your goal is to find stocks that are displaying those same traits right now. Traits like explosive earnings and sales growth, a strong return on equity, a fast-growing and industry-leading product or service and strong demand among mutual fund managers.
6. Find a good investment service to subscribe to. Many of the suggestions above can now be covered by joining just one stock market service. These services now aim to pick stocks, offer trading and portfolio management software and educational services too. If things go well, then by investing in the stock market picks, the service can be paid for with profits.
It is also important to know what you want to accomplish with your investments before you actually invest. For example, you might want to purchase a home, fund a child’s college education, or build an adequate retirement nest egg. If you set financial goals at the outset—and match your investments to achieve those goals—you are more likely to reach them.
One constant principle of investing is that markets fluctuate. Stock prices will rise and fall for a number of reasons: the economy, investor sentiment, political uncertainty at home or abroad, energy or weather problems, or even corporate scandals. This means market performance isn’t always predictable. That is why diversification, or spreading the investments in your portfolio among different asset classes and across different sectors within each class, is such an important strategy. Diversification is a time-tested way to manage risk.
Whilst some day traders are tuned in every day from 09:30 to 16:30 EST (for the U.S stock market), many trade for just a 2-3 hour window instead. As a beginner especially this will prevent you making careless mistakes as your brain drops down a couple of gears when your concentration wanes. The hours you’ll want to focus your attention on are as follows:
Define and write down the conditions under which you'll enter a position. "Buy during uptrend" isn't specific enough. Something like this is much more specific and also testable: "Buy when price breaks above the upper trendline of a triangle pattern, where the triangle was preceded by an uptrend (at least one higher swing high and higher swing low before the triangle formed) on the two-minute chart in the first two hours of the trading day."
The two types of brokers are full-service and discount brokers. Full-service brokers tailor recommendations and charge higher fees, service charges, and commissions. Once an account is set up, a discount broker can allow you to do it yourself at minimal cost through their website and offers support online, by phone, or in a branch when needed. The cost of buying continues to decrease with the introduction of apps. Apart from cost, a distinguishing factor is the research provided.
But building a diversified portfolio of individual stocks takes a lot of time, patience and research. The alternative is a mutual fund, the aforementioned ETF or an index fund. These hold a basket of investments, so you’re automatically diversified. An S&P 500 ETF, for example, would aim to mirror the performance of the S&P 500 by investing in the 500 companies in that index.
Many orders placed by investors and traders begin to execute as soon as the markets open in the morning, which contributes to price volatility. A seasoned player may be able to recognize patterns and pick appropriately to make profits. But for newbies, it may be better just to read the market without making any moves for the first 15 to 20 minutes. The middle hours are usually less volatile, and then movement begins to pick up again toward the closing bell. Though the rush hours offer opportunities, it’s safer for beginners to avoid them at first.
When you buy a stock, you should have a good reason for doing so and an expectation of what the price will do if the reason is valid. At the same time, you should establish the point at which you will liquidate your holdings, especially if your reason is proven invalid or if the stock doesn’t react as expected when your expectation has been met. In other words, have an exit strategy before you buy the security and execute that strategy unemotionally.
ARLINGTON, Va. – A wave of harsh winter weather and bitter cold temperatures left a swath of the Pacific Northwest and parts of the East blanketed in ice, snow and power outages as more foul weather took aim at the nation’s southern tier.
The storm that blasted the west left more than 200,000 homes and businesses without power Sunday in Oregon alone. Parts of the East were covered with a sheet of ice, and more than 270,000 homes and businesses in Virginia were dark.
The near-record cold temperatures could be blamed in part on the Polar Vortex, a large area of low pressure and cold air surrounding both of the Earth’s poles that has sagged down into the U.S. The result has been brutal conditions for hundreds of millions of Americans this week.
Oregon Gov. Kate Brown declared a state of emergency late Saturday. Warming centers and other services were being provided.
“Crews are out in full force,” Brown said in a statement. “I’m committed to making state resources available to ensure crews have the resources they need on the ground.”
The National Weather Service said Oregon, Washington and Idaho should prepare for another surge of winter moisture Sunday night, potentially leading to more heavy snowfall through Monday. The “unsettled winter conditions” would likely continue throughout the week, the weather service said.
Winter storms and extreme cold affected much of the western U.S., particularly endangering homeless communities. Volunteers worked to ensure homeless residents in Casper, Wyoming, were indoors as the National Weather Service warned of wind chill reaching 35 degrees below zero.
The South was not exempt. Winter storm warnings were in effect through Monday as a string of Southern cities braced for a blast of snow, ice and bitterly cold conditions.
Arlington was covered in ice Sunday morning.
“Travel only when necessary,” the county Environmental Services Department said Sunday on Twitter. “Crews continue to check/salt known problem areas but temperatures rising above freezing are the best guarantee for safety.”
Snow began spreading across the Plains early Sunday in places such as Amarillo, Texas, Oklahoma City and Wichita, Kansas. Accuweather forecast snow and ice accumulations during the day on Monday for a list of cities not accustomed to such wintry weather, including Monroe and Shreveport, Louisiana, Little Rock, Arkansas and Memphis, Tennessee. All could see at least a few inches of fresh snow.
In Texas, Gov. Greg Abbott issued a state of emergency across the entire state ahead of the storm and requested a federal emergency declaration from the White House.
“Every part of the state will face freezing conditions over the coming days, and I urge all Texans to remain vigilant against the extremely harsh weather that is coming,” Abbott said. “Stay off the roads, take conscious steps to conserve energy, and avoid dangerous practices like bringing generators indoors or heating homes with ovens or stovetops.”
The Texas city of Lubbock was bracing for 3-5 inches of snow; areas outside the city could see up to 8 inches. Temperatures were forecast to drop into negative numbers overnight, and Monday could see wind chill temperatures of minus-21, the National Weather Service forecast.
“Travel could be very difficult,” the weather service warned. “Areas of blowing snow could significantly reduce visibility. Road conditions will become hazardous.”
In El Paso, city water authorities urged residents to “protect your pipes” ahead of the cold front to avoid costly repairs and damage.
“When water freezes, it expands its volume by nearly 10%, and the pressure can result in broken water lines,” the water company said in a statement.
Experts recommended insulating outdoor pipes and even allowing cold water to drip from faucets. Running a drip of water through the pipes helps prevent pipes from freezing because the temperature of the water is above freezing.
In Louisiana, the state Department of Transportation issued closings for bridges, overpasses and interstate ramps in the northeastern part of the state. The National Weather Service forecast snow, sleet and wintry mix Sunday and Monday throughout the region.
Temperatures were expected to reach record lows and not rise above freezing for multiple days.
What: In commemoration of World Book Day 2019, the British Council Bangladesh, in partnership with Padatik Nattya Sangsad, is staging Macbeth at the British Council Library. William Shakespeare’s play has been translated by litterateur Syed Shamsul Haq and directed by Sudip Chakrobrothy. The destruction wrought when ambition goes unchecked by moral constraints finds its most powerful expression in the play’s two main characters. Macbeth is a courageous Scottish general who is not naturally inclined to commit evil deeds, yet he deeply desires power and advancement. He kills Duncan against his better judgment and afterward stews in guilt and paranoia. Toward the end of the play he descends into a kind of frantic, boastful madness.
The U.S. Environmental Protection Agency on Wednesday announced it is continuing to focus on further development of integrated pest management practices through three new awards in a recurring grant program.
Louisiana State University, the University of Vermont, and Pennsylvania State University IPM projects were selected for the latest round of the program. Two of the proposals include research on minimizing pesticide exposure for bees.
IPM refers to the practice of combining several environmentally sensitive control methods to foster pesticide risk reduction in agriculture. These practices involve monitoring and identifying pests and taking preventive action before pesticides are used.
James Jones, assistant administrator for the Office of Chemical Safety and Pollution Prevention, said promoting the IPM grants will positively affect pesticide use.
"Initiatives such as these will encourage others to adopt promising technologies and practices across the nation to reduce pesticide risks while maximizing crop production and protecting public health," Jones noted in an EPA statement.
The nearly half a million dollars in agricultural IPM grants will be awarded to:
• The Louisiana State University project to minimize impacts to bees from insecticides used in mosquito control.
Mosquito control is critical for public health; however, insecticides can be hazardous to bees, EPA said. Practices and guidelines resulting from the project will be distributed to mosquito control districts and beekeepers throughout the U.S.
• The University of Vermont project to reduce pesticide use and improve pest control while increasing crop yields on 75 acres of hops in the Northeast.
The awardees will develop and distribute outreach materials to help farmers adopt new pest control practices. The project's goal is to reduce herbicide and fungicide applications by 50% while decreasing downy mildew.
• The Pennsylvania State University project to protect bees and crops by reducing reliance on neonicotinoid pesticide seed treatments and exploring the benefits of growing crops without them.
IPM in no-till grain fields will be used to control slugs and other pests that damage corn and soybeans, EPA said. Researchers will share their findings with mid-Atlantic growers and agricultural professionals.
Protection of bee populations is among EPA's top priorities, the agency said. According to a May, 2013, joint report from the USDA and the EPA, the U.S. is suffering from a pollinator decline due to loss of habitat, parasites and disease, genetics, poor nutrition and pesticide exposure.
In response, EPA introduced a new pesticide label in August which includes a bee advisory box and icon. The box reminds users to be cautious when using the pesticide where bees are present.
Though not in effect in the U.S., a ban on certain neonicotinoid pesticides was recently implemented in Europe. The EPA has reviewed the European Safety Authority's conclusions regarding neonicotinoid studies, noting on Dec. 20 that it found "both (acetamiprid and imidacloprid) pesticides are safe for humans when used according to the EPA-approved label."
For more information on the EPA's Regional Agricultural IPM Grants, visit the EPA's website.
Explainable AI (XAI) has seen steady growth in recent years. Innovative methods include calculating Shapley values, quantifying backward-pass gradients, occluding input sections, editing inputs counterfactually, and employing simpler surrogate models to explain model predictions. Despite sharing the same goal, each technique comes with its own formulation and rationale. Take the class of feature-importance methods, for instance: LIME calculates word significance by training a regression model and presenting the user with the learned weights.
When quantifying word contributions, researchers frequently consider the loss of the model or how sensitive the model is to each input component. According to recent studies, these differences are not just slight; they drive people to choose one strategy over another. The need for analyzing and evaluating explanations has grown due to updated regulations and social decision-making guidelines. The fidelity and plausibility of explanations have been examined in recent studies. Others developed new diagnostic criteria, datasets, and benchmarks for contrasting various interpretability approaches.
The authors of the “ferret” paper note that earlier tools were designed and developed in technological isolation, without a unified framework that would enable testing of other explainers, new assessment measures, or new datasets. Ultimately, this prevents accurate benchmarking and leaves essential questions unanswered: given all the explanation methods suitable for one's use case, which should one select? Which approach is more dependable? Can one believe it? The researchers provide “ferret”, a free Python package for comparing interpretability strategies. With “ferret” they offer a principled assessment framework that combines cutting-edge interpretability metrics, datasets, and methodologies with an intuitive, extendable, and transformers-ready interface.
Hugging Face model names and free text or interpretability corpora are used as the input to ferret's Evaluation API, making it the first interpretability tool to do so. “ferret” is based on four fundamental ideas.
1. Built-in Post-hoc Interpretability: There are three interpretability corpora and four cutting-edge feature significance approaches. Annotated datasets offer helpful test cases for novel interpretability methods and metrics, while ready-to-use methods enable users to explain any text with any model.
2. Evaluation of Unified Faithfulness and Plausibility: They suggest a single API to assess justifications. They presently support six current measures that adhere to the fidelity and plausibility standards.
3. Transformers-ready: ferret integrates directly with models from the Hugging Face Hub. Users may load models using standard Hub naming conventions and explain them with the built-in methods.
The code and documentation for “ferret” are available under the MIT license.
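As a rough illustration of how the Evaluation API described above can be driven from a Hugging Face model name, the sketch below follows the quick-start pattern in the project's public documentation. The model name and the Benchmark, explain, evaluate_explanations and show_evaluation_table names are assumptions drawn from that documentation and should be verified against the current ferret release.

```python
# Hypothetical quick-start sketch; verify class and method names against the ferret docs.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark  # assumed entry point of the ferret package

name = "distilbert-base-uncased-finetuned-sst-2-english"  # any Hub model name
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

bench = Benchmark(model, tokenizer)

# Explain one free-text input with the built-in feature-importance methods,
# then score those explanations with the faithfulness/plausibility metrics.
explanations = bench.explain("You look stunning!", target=1)
evaluations = bench.evaluate_explanations(explanations, target=1)
bench.show_evaluation_table(evaluations)
```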
What does a more equal and inclusive society in Singapore look like to you? How do we get there?
Together with experienced youth leaders, join the conversation and brainstorm tangible ways that youths can contribute and work together as a society to achieve a more equal and inclusive Singapore.
While Government policies help in addressing problems in the area of social inequality and mobility, legislation alone is insufficient in plugging all the gaps. Some issues may even require a marked shift in how society defines success and ascribe values to certain jobs. What kind of society do youths want to see in the future? What more needs to be done by various segments of society to get there?
On 17 May 2022, about 30 youths and a panel comprising youth leaders vested in the issue discussed youth concerns and questions about social mobility and inequality in Singapore society.
Panellists:
This conversation was part of a series of engagements that attempt to delve deeper into the issue of social mobility and inequality. Youth sentiments gathered from this engagement series would be aggregated and delivered as recommendations that might inform future policies on this topic.
If you have missed the engagement, check out the highlight reel on the @youthopiasg TikTok account.
Increasing opportunities for social mixing/awareness
Varying definitions of success
Technological Development
Government Policies
Technology’s impact on education and careers
Creation of public policies and grants to help lower-income community
Social mixing could create more awareness
Opportunities to pursue aspirations are necessary
Creating a change in society
Keen to learn more about social inequality and mobility? Check out the article.
the behavior themselves. Social learning theory focuses on the learning that occurs within a social context. It considers that people learn from one another, including such concepts as observational learning, imitation and modeling. Learning can occur without a change in behavior. Behaviorists say that learning has to be represented by a permanent change in behavior; in contrast, social learning theorists say that because people can learn through observation alone, their learning may not necessarily be shown in their performance (Omrod, 1999).

Julian Rotter moved away from theories based on psychoanalysis and behaviorism, and developed a learning theory. In Social Learning and Clinical Psychology (1954), Rotter suggests that the effect of behavior has an impact on the motivation of people to engage in that specific behavior. People wish to avoid negative consequences, while desiring positive results or effects. If one expects a positive outcome from a behavior, or thinks there is a high probability of a positive outcome, then they will be more likely to engage in that behavior. The behavior is reinforced, with positive outcomes, leading a person to repeat the behavior.

Here is an example of social learning: a student researches her homework online, but despite her good intentions, the overwhelming amount of Instant Messages sent by her friends leaves her unable to finish her assignment in time and gets her grounded for the weekend.

Social disorganization theory: focuses on the relationship between neighborhood...
Help Executives Make The Right Project Decisions
An effective preplanning session can provide valuable guidance
Year after year, authors and industry consultants proffer advice on how to make a business successful. Some compare business to chess while others equate it to war. Regardless, they invariably stress the importance of sound decision-making. But what underpins that? What gives you the best chances of success? Planning. There’s no substitute for it. In “The Art of War,” Sun Tzu dedicates the entire first chapter to assessing the situation, planning and decision-making. If you prefer a scientist’s perspective, Louis Pasteur declared, “Fortune (Chance) favors the prepared mind.”
A plan isn’t a guarantee — but the best plans include flexibility and contingencies to ensure that even changes in direction remain guided by your leadership.
So, how do we best apply that line of thinking to projects? Techniques for successful planning begin early, with preplanning sessions conducted before the detailed project planning phase.
An effective preplanning session requires the right core project team and due diligence on the part of that team to properly prepare for those meetings. The objective is to present project goals — framed around a concept that already has been developed and supported by financial data — to senior stakeholders (Figure 1).
Planning Meeting
Figure 1. This provides an opportunity to present senior stakeholders with the project goals, framed around a concept that has been developed and supported by financial data.
Conducting a thorough fact-finding mission prior to the preplanning session equips project teams to lead an orderly and effective decision-making process.
Focus First On Four Factors
Defining preplanning goals. After identifying a concept with substantial value, establish a launch team to do preplanning around that concept. Then, at a preplanning session, that team should present information to key stakeholders about how best to initiate the project. The first objective is to fundamentally define the goals of the project and identify relevant internal experts to involve in the planning phase. Adding definition to project goals requires an understanding of the expected end-result. While that will be discussed, negotiated and further defined in later stages of the planning phase, starting with a clear high-level vision puts the entire team on the same page.
Sometimes, there are multiple touchpoints for decision-makers. In preplanning, there is authority from the organization’s leadership to pursue the development of concepts and goals and to propose projects. After completion of some of the development, decision-makers can become more involved, beginning with a concept that already has been validated.
Preparing for decision-making. Senior leadership and management buy-in is critical because these stakeholders typically have authorization to approve project plans and make a financial commitment. An important aspect of the preplanning phase is to realize that senior executives and management, while providing the final sign-off of any project, may not have the availability to attend sessions in later planning stages. Presenting technical data that explain clearly defined goals and an agenda prior to a preplanning session affords these leaders the opportunity to evaluate a proposal, thus allowing them time to best prepare for the preplanning session, recommend substantial changes in direction or supply important resources.
Gathering technical input. Prior to the preplanning session, the team should interview technical stakeholders in the project; these may include engineers, technicians and operating staff with expertise in electrical, mechanical and construction areas of the project. A technical evaluation or study for any proposed project requires their input.
The operators of equipment at the plant are key stakeholders. After all, they are the people who actually use or maintain the equipment and have hands-on experience. To give these operators a voice, it’s important to engage them in understanding how the project goals will affect them. A designated supervisor capable of offering input about facility upgrades or replacement of existing equipment usually can represent them.
Technical experts and operations leaders have limited time available. So, make the most of the time you have with them. If you can’t get complete interviews and answers, you almost always can get consensus on who are the best people to provide those answers. If your project is important enough, management can prioritize making these employees available once the project plan is approved — but only if you know to include them in your plan.
Optimizing information. The project team aims to leverage knowledge from experts and coordinate information from all departments to present the best plan. Having access to and a positive working relationship with other departments within the company will influence the quality and clarity of the proposed project goals.
The project team can conduct scheduled interviews with individual department heads, gather information and then use those data to prepare an agenda for the planning session. For example, the department that tracks financials and scheduled maintenance will know about seasonal shutdowns that factor into design upgrades; they may dictate the best and least-invasive times to inspect equipment and make seamless upgrades.
Leveraging the knowledge of plant experts will help shape the preplanning session agenda and narrow the scope of the project. The project goals should focus on improvements the stakeholders want to realize but often lack the time or resources to address, such as improving plant efficiency.
Conducting thorough research prior to the planning session is an investment that saves time, opens communication among stakeholders and minimizes time commitments.
Align Goals
After conducting interviews, verifying data and drafting an agenda for a preplanning session, the project team should focus on achieving defined goals.
For example, a goal may be to reduce energy usage. The project team should be prepared to show statistical data on how to achieve that goal and the magnitude of resources required.
Metrics that bring project goals into focus include department progress milestones. There are internal drivers within all departments associated with achieving tangible goals. Aligning project goals with the overall goals of the organization and the metrics used to measure achievement of those goals will enhance buy-in from each group. These progress reports factor into the company’s annual management meetings and quarterly reports. Such data also allow the company to reflect, review progress and determine where best to allocate resources. Proper allocation of resources contributes to the bottom line and involves the support and investment of all departments. For your project to succeed, you must be prepared to show how the project supports those metrics and the overall goals of the team.
Scrutinize Scope
The key drivers and essential elements of scope aren't always clear when looking at the physical size of equipment. However, after collecting data to review usage, it's possible to diagnose efficient and inefficient systems. This is where equipment size does matter.
For example, a large fired heater may appear to be the best solution — but greater savings may accrue by opting for a smaller more-efficient boiler. The initial assumption might be that a large fired heater provides the most direct heat transfer. Yet, a smaller, compact boiler may be a more-cost-effective alternative. The boiler not only cuts capital costs but also needs less space.
Presenting the best solution requires research. In this example, a boiler may appear to be the most economical choice but the team should focus on the overall energy efficiency of the selected model and its interconnectivity with the whole facility.
Data Collection
Figure 2. Once substantiated and documented, data can be presented during a planning session to facilitate a cost/benefit analysis.
Because of the dependency of interconnected systems, it’s important to evaluate the unique contribution of each component. For instance, in an air compressor system that’s not properly maintained, leaks may account for 30% of the air consumption. Here, simply sealing leaks would noticeably improve the overall system.
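To make the leak example concrete, here is a minimal back-of-the-envelope sketch. Every input below (compressor power, run hours, electricity price) is a hypothetical placeholder rather than a figure from this article; only the 30% leak share comes from the example above.

```python
# Rough annual cost of compressed-air leaks -- all inputs are illustrative
# placeholders; substitute metered values from your own facility.
compressor_kw = 75.0        # assumed motor power while loaded (kW)
hours_per_year = 6_000      # assumed operating hours per year
electricity_cost = 0.08     # assumed electricity price ($/kWh)
leak_fraction = 0.30        # share of air consumption lost to leaks (from the example above)

annual_energy_cost = compressor_kw * hours_per_year * electricity_cost
leak_cost = leak_fraction * annual_energy_cost
print(f"Estimated annual cost of leaks: ${leak_cost:,.0f}")  # ~$10,800 with these numbers
```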
The interconnection of systems also provides an opportunity to look at more-advanced techniques, such as heat exchange networks and pinch analysis. Identifying simple changes in how systems relate can present solid energy savings with minimal capital outlay. Sometimes, just reevaluating the order in which equipment is placed can uncover opportunities that didn't exist when the units originally were selected and installed.
Data collected on the efficiency of heating/cooling elements and how they interrelate (Figure 2) may identify important opportunities. Substantiate and document these data before presenting ideas based on them during the planning session. Without information on individual systems, it’s difficult to look at system interconnections and determine feasible alternatives or upgrades. Also, provide other information relevant to scope, such as capital costs and budget, to facilitate a cost/benefit analysis.
Furnish Financials
Another important aspect of the preplanning session is to provide financial data on upgrades and equipment costs. Management expects a high-level concept to include an estimate of the commitment, time and money they’re being asked to invest in the project. Having financial data readily available during a preplanning session helps the design team gain the support and trust of stakeholders.
During the preplanning session, focus on one piece of equipment that can be modified or delivered in a defined amount of time. Summarize how other systems can gain from this action. Show the larger value of the proposed change and leave the door open to revisit during future sessions the potential time and money benefits to other systems from the move. Documentation that supports the research will increase buy-in from stakeholders to conduct additional studies on other systems in the facility. A firm commitment by stakeholders to conduct or fund research prior to the planning phase is an investment that pays off in the future.
Guide Good Decisions
A successful preplanning session requires commitment by the project team to research the important details of proposed upgrades and modifications. Gathering data from multiple departments and providing an agenda that shows the design team has the knowledge and understanding to attain the stated goals increases stakeholder confidence.
For the project team, fact-finding research is a critical mission. These data will shape the agenda and provide the documentation stakeholders expect to review during a preplanning session. This research will allow the design team to conduct an organized and professional planning session serving the decision-making needs of senior stakeholders.
Ultimately, executives want to support worthwhile projects that have substantial potential. Your efforts help ensure that those projects rise to the top of the list and have the best chances of success. So plan early, plan often and keep your eyes open to the many risks and the opportunities ahead.
ERIC HOPKINS, PE, is a Cincinnati-based senior chemical process engineer and senior associate at SSOE Group. Email him at [email protected]. | |
Active citizenship is one of the most important steps towards healthy societies. This means the public getting involved at all levels. When citizens choose not to participate, the community suffers.
Adlai E Stevenson, an American statesman, could not have put it better while pointing out the importance of active citizenry: “As citizens of this democracy, you are the rulers and the ruled, the law-givers and the law-abiding, the beginning and the end.”
Public participation directly engages citizens in decision-making and gives full attention to the public's contributions to those decisions. Note that it is an active process, not an event.
Involvement of people in governance is important as they are key stakeholders in the process.
Public participation therefore seeks to inform citizens by providing information that helps them understand the issues; to consult with them to obtain their feedback on alternatives or decisions; to involve them to ensure their concerns are considered; to cooperate with them to develop decision criteria; and, most importantly, to empower them by placing final decision-making authority in their hands.
The Constitution places public participation at the heart of governance. The citizen, at one point or another, should have a say in most of the decisions to be made both at national and county level. This is basically to ensure transparency and accountability within government and even in private entities. That is why, in many instances, the 2010 constitution is referred to as a people’s constitution.
Right from the preamble, the people are given prominence. Article one of the Constitution gives all the sovereign power to the people of Kenya. This implies that as citizens, we have a greater role than anyone else in ensuring the constitution is implemented to the letter.
Other areas captured include the guarantee for equality, freedom of expression, right to access information by citizens and the budget making process. On citizens as watchdogs of their leaders, Article 118 calls on Parliament to conduct its business in an open manner. It further calls on Parliament to be open to the public; and facilitate public participation and involvement in the legislative and other business of Parliament.
This should happen at county assemblies as well because participation of people at the lowest level will be key in ensuring the success of devolution.
When conducting meaningful public participation, government should gather input from a wide spectrum of stakeholders, resulting in a wide range of views and concerns, and provide fair treatment, meaningful involvement and social inclusion for all people regardless of race, color, national origin or income with respect to the development, implementation, and decisions made through the public participation process.
While citizens have to honour their role as key stakeholders in governance, government should facilitate this. We have had cases where people are given very short notice to attend public forums -- with a view to discouraging them from attending. Also, there have been complaints that the public is invited to give their views merely as a formality. This is the sad reality, considering the counties are tasked to coordinate and ensure the participation of communities in governance.
Meaningful public participation creates stronger communities working together to solve problems, making the world a better place. | https://www.standardmedia.co.ke/business/commentary/article/2001328944/it-is-a-right-not-a-privilege-for-your-voice-to-be-heard |
Graham Archer, Director for Qualifications, Curriculum and Extra-Curricular, Department for Education, speaks about the challenge of delivering more than 1.3 million laptops and tablets to UK pupils in lockdown.
It has been a trademark feature of working in this pandemic that the unexpected often becomes the norm.
Suddenly, circumstances change, policy changes, lives change and you find yourself with the spotlight firmly trained on the work you’re doing. This certainly was the case for my team's work on remote education back in January, as schools and colleges closed again to most pupils overnight.
From early in the pandemic, the Department for Education swung into action, working closely with schools and colleges and their senior leaders to ensure high quality remote education was in place for pupils, when schools closed for all but vulnerable and key worker children. Our goal was to ensure that no child or student missed out on learning during lockdown.
It all began…
In April 2020, a small group of DfE colleagues came together to form the Get Help with Technology programme, following the first closure of schools due to COVID-19.
The goal was to support disadvantaged school children by giving them access to remote education and online social care services. The programme aimed to provide laptops, tablets, and access to the internet and equip schools to set up and get trained in digital education platforms (such as Google Classroom).
Growing together
At the time, no one could have anticipated the sheer scale of the programme or, as a team, how we would learn and grow together. We had to rapidly build an understanding of what was needed and develop a proposal for supporting disadvantaged and vulnerable children while also recognising the spending pressures that the department was facing.
What followed were (several periods of) intense negotiations with HM Treasury to secure the funding needed to buy and deliver the laptops and tablets that schools and vulnerable children desperately needed.
Remote education
An early priority for the Department was clearly setting out the expectations from schools and colleges in terms of what needed to be delivered to pupils for remote teaching to be effective.
Teachers were expected to set meaningful and ambitious work each day in several subjects under a well-planned curriculum. That included recorded or live direct teaching, as well as factoring in time to carry out independent study.
Ofsted, a non-ministerial department, which inspects and regulates education and skills services for learners of all ages, played an essential role in holding schools and colleges to account for the quality of remote education.
The expectations that were set were highly challenging and included:
Mandating schools to:
Set online lessons of equivalent length to the core teaching that primary and secondary pupils typically receive in the classroom.
◼︎ Key Stage one (5- 7 years old): three hours of lessons per pupil a day.
◼︎ Key Stage two (7-11 years old): four hours a day.
◼︎ Key Stage three and four (11-16 years old): five hours a day.
To check daily that pupils were engaging with the work set and provide frequent, clear explanations of new content. Understanding how pupils were progressing through the curriculum and providing weekly feedback (as a minimum) on their work.
For colleges, remote education needed to continue to deliver as many of students’ planned hours as possible. Hours included both direct teaching as well as allocating time for students to complete tasks or assignments independently.
Laptops for Learning: Enabling 1.3 million pupils to access provision
The second, equally important part of the Department’s mission was ensuring that pupils could continue to receive a quality education at home by helping schools and colleges to overcome barriers of digital access.
That includes equipping disadvantaged pupils with laptops and tablets where needed.
The Government committed more than £400m to support remote education and access to online social care services and set itself a target to order and deliver 1.3million laptops and tablets to the pupils who don’t already have access to a computer.
To date, 1,313,500 devices have been delivered – so we are delighted to have hit our target, with a small bonus number too.
Alongside that, working closely with the major mobile network operators, the department facilitated internet access, including free data to families where it is most needed.
2021 and beyond
I’m both grateful and proud of my teams and the whole Department for what’s been achieved. There’s no doubt that fast-tracking laptops and tablets have enabled more than a million students to continue their education and study at home during lockdown.
What I’ve learnt from this great enterprise is that the world is constantly changing, and with it, our services need to evolve, but what we achieved allowed students to keep learning and stay connected.
The change from face-to-face education in a real classroom to education delivered remotely has been tremendous, requiring new resources and skills. As we move back to face-to-face teaching, this will require a similar shift and ambition, building on what we’ve learned during the pandemic, as well as our existing skills. But don’t take my word for it. Here’s what some had to say.
Lauren Thorpe, Director of Strategy at Ark Schools
“By the beginning of January this year, we had distributed more than 12,000 devices through both the DfE’s Get Help With Technology scheme and other supportive channels to ensure that students across the network had access to a suitable device for remote study.
All Ark schools can now operate a fully remote learning offer if needed, with a rich timetable of learning activities delivered at home or in their community classrooms. This allows us to continue to support students (and staff) who might be isolating at home, so our children and young people’s education is not disrupted.”
Andrew Beavis, Deputy Head, Copthall School, Barnet
“We were allocated devices through the Get Help with Technology service to support those students who were academically highly vulnerable, or those who just didn't have any access to a device or the internet. These devices quickly enabled us to support a broader range of students and meet all their needs while they were working remotely.”
Katy Bradford, Chief Operating Officer, Outwood Grange Academies Trust
“We have been able to support the home learning of more than 3,000 children across our Trust by providing them with a high quality Chromebook from the government's Get Help with Technology scheme. | https://civilservice.blog.gov.uk/2021/07/13/inside-policy-laptops-for-learning-in-lockdown/ |
All students at Gonzaga are required to bring a web-enabled device to school every day.
BYOD is an opportunity for students to help shape their educational experiences. Acceptable devices include Android tablets, Apple Macbooks and iPads, Chromebooks, Linux laptops and tablets, and Windows laptops and tablets. Any device running Windows RT or Amazon’s Kindle Fire OS and secondary devices such as smartphones, eReaders, portable video game systems, or media players are not included in Gonzaga’s BYOD program and will not be acceptable as a primary device.
The following requirements represent a minimum standard. Most new laptops and tablets sold today meet these requirements, with few exceptions. Each student must bring a device that:
Also, every student should have broadband Internet access outside of school, either through WiFi or a physical connection, to complete homework assignments. This access can be at home, in a relative’s house, or in a public location such as a library or coffee shop. Gonzaga provides computer lab access from 7:30 AM until 4:00 PM on school days, and WiFi is available constantly. Families with financial difficulties should investigate if they’re eligible for the Internet Essentials program ($10/month).
In addition, Gonzaga strongly recommends that: | https://www.gonzaganc.org/technology---Device-Specifications |
From October 24 to 30 the city of Havana will host the XX Iberoamerican Culture Festival that already has the participation of over 250 artists from 20 countries. This edition is dedicated to the Amazonia.
This year the focus of the event is ecological and cultural sustainability and solidarity. Given the magnitude of this issue, there will be diverse programming from all the arts: music, dance, theater, literature and the plastic arts. The countries that will participate include Colombia, Peru, Spain, Argentina, Ecuador, Bosnia and Herzegovina and Mexico, among others.
Eduardo Avila, director of the Latin American House and chairman of the organizing committee, said the festival seeks to rescue and promote cultural roots and traditions, ranging from cooking to the way people decorate their living environment. Indeed, the business round this year will focus on home decorating, with a mission to show and sell nationally produced items in this area.
Some 50 artisans from nine countries have confirmed their participation in the Iberoarte Fair, which seeks to promote the purchase of crafts and will extend beyond the Festival until November 2, when a jury will award prizes to the 10 best works across the different disciplines.
Artists such as Argentine playwright Rafael Diego Salva, the Croatian guitar duo Elena and Sile, Mexican concert performer Avelino Vega, the Ecuadorian-Spanish singer Elizabeth Perez and the Colombian dance group Dance Kaleidoscope will perform on different days. The organizers of the event also highlighted an exhibition by the outstanding landscape painter and National Painting Prize winner Tomás Sánchez.
There are noticeable differences in perception of audiometric and ultrasonic signals by human beings. The resolution of human hearing, for example, is measured by a quantity referred to as the Just Noticeable Difference (JND). This parameter is determined experimentally as follows. A subject listens to a tone generated at a certain sound pressure level. The frequency of the tone is then shifted slightly and the JND for that frequency and sound pressure level is the amount of frequency shift which can be perceived by the subject.
Using the JND technique, it has been determined that human hearing operates on a logarithmic scale, so that the resolution at low frequencies is finer, in an absolute sense, than at higher frequencies. Generally, the JND is about 1.4% of the test frequency averaged over different sound pressure levels. For example, a 1 KHz test tone yields a JND of about 14 Hz. The same logarithmic behavior is evident in ultrasonic hearing, but with a larger conversion factor on the order of 12%.
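As a rough illustration of the logarithmic behavior described above, the sketch below computes the JND for a given test frequency. The 1.4% and 12% factors are the figures quoted in the text; the 30 kHz ultrasonic carrier is purely a hypothetical value chosen for the example.

```python
def jnd_hz(frequency_hz, fraction=0.014):
    """Approximate Just Noticeable Difference (Hz) at a given test frequency.

    fraction defaults to ~1.4% for audiometric hearing; ~12% applies to
    ultrasonic hearing. Actual values vary with listener and sound pressure level.
    """
    return fraction * frequency_hz

print(jnd_hz(1_000))          # ~14 Hz for a 1 kHz audiometric tone
print(jnd_hz(30_000, 0.12))   # ~3,600 Hz for a hypothetical 30 kHz ultrasonic carrier
```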
The present invention relates to a method and apparatus for translating audiometric signals into the ultrasonic range. The translated signals can be delivered to an ultrasonic transducer which when properly placed on an individual allows the individual to perceive the ultrasonic signals as audible sound.
Nonlinear Finite Element Analysis for Structural Engineering
Presented By: Venkata P. Nadakuditi, P.E.
Abstract:
Engineers employ finite element analysis (FEA) to predict the behavior of complex systems where traditional engineering methods are not well-suited, large numbers of tests are impractical, and/or sensor access is limited. This presentation will introduce the application of FEA to nonlinear behavior, including that related to material nonlinearity and geometric nonlinearity. Two project examples will demonstrate the practical application of nonlinear FEA for structural engineering. The first example will focus on the prediction of end region cracking in prestressed concrete girders. The second example will focus on strength and fatigue evaluation of a portable steel tank. The nonlinear FEA results from both examples will be compared to either field or in-house test results obtained by WJE.
Bio:
Venkata Nadakuditi is an engineer at Wiss, Janney, Elstner Associates, Inc. He specializes in strength, fatigue and fracture mechanics in metals with primary interests in steel structures and pressurized containing systems. Prior to joining WJE in 2016, Mr. Nadakuditi spent his career in the oil and gas industry. His past work included extreme event strength assessments, extreme event strain-based assessments, defect acceptance criteria, fatigue life predictions, and experimental testing. He has experience using API 579, BS 7910, ABAQUS finite element software, special purpose fracture mechanics software programs, classical methods, and design documents (e.g., API, ASME, AISC, BS, DNV) to calculate defect acceptance criteria, material requirements, remaining fatigue lives, strength, and ductility.
He received his MS from the Pennsylvania State University. He is a registered professional engineer in Texas.
Structural Engineers Association of Texas is a 501(c)(6) tax-exempt organization.
Gifts are not tax-deductible per IRS rules.
Contact us:
Phone: +1 (512) 553-9634
Email: [email protected]
Address: | http://seaotaustin.org/event-2604669 |
Human-computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways.
As a field of research, human-computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human-Computer Interaction, although the authors first used the term in 1980, and the first known use was in 1975. The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human-computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.
Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed human-machine interaction (HMI), man-machine interaction (MMI) or computer-human interaction (CHI). Desktop applications, internet browsers, handheld computers, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today. Voice user interfaces (VUI) are used for speech recognition and synthesizing systems, and the emerging multi-modal and Graphical user interfaces (GUI) allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms. The growth in human-computer interaction field has led to an increase in the quality of interaction, and resulted in many new areas of research beyond. Instead of designing regular interfaces, the different research branches focus on the concepts of multimodality over unimodality, intelligent adaptive interfaces over command/action based ones, and active interfaces over passive interfaces.
The Association for Computing Machinery (ACM) defines human-computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them". An important facet of HCI is user satisfaction (or End-User Computing Satisfaction). It goes on to say:
"Because human-computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant."
Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success.
Poorly designed human-machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a nuclear meltdown accident, where investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster. Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, pilots had already ingrained the "standard" layout. Thus, the conceptually good idea had unintended results.
The human-computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and computer is defined as the loop of interaction. The loop of interaction has several aspects to it, including:
Human-computer interaction studies the ways in which humans make--or do not make--use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks to improve the human-computer interaction by improving the usability of computer interfaces. How usability is to be precisely understood, how it relates to other social and cultural values, and when it is, and when it may not be a desirable property of computer interfaces is increasingly debated.
Much of the research in the field of human-computer interaction takes an interest in:
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.
The following experimental design principles are considered, when evaluating a current user interface, or designing a new user interface:
The iterative design process is repeated until a sensible, user-friendly interface is created.
Various methodologies outlining techniques for human-computer interaction design have developed since the conception of the field during the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science in areas such as memory and attention when designing user interfaces. Modern models, in general, center on constant feedback and conversation between users, designers, and engineers and push for technical systems to be built around the kinds of experiences users want to have, rather than wrapping the user experience around a finished system.
Displays are human-made artifacts designed to support the perception of relevant system variables and facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g., navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information a system generates and displays; therefore, the information must be displayed according to principles to support perception, situation awareness, and understanding.
Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering.
These principles of human perception and information processing can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved by utilizing these principles.
Certain principles may not apply to different displays or situations. Some principles may also appear to be conflicting, and there is no simple solution to say that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.
1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed cannot be discernible, the operator cannot effectively use them.
2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable based on a single sensory variable (e.g., color, size, loudness). These sensory variables can contain many possible levels.
3. Top-down processing. Signals are likely perceived and interpreted by what is expected based on a user's experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to assure that it is understood correctly.
4. Redundancy gain. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g., color and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as color and position are redundant.
5. Similarity causes confusion: Use distinguishable elements. Signals that appear to be similar will likely be confused. The ratio of similar features to different features causes signals to be similar. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed, and dissimilar features should be highlighted.
6. Principle of pictorial realism. A display should look like the variable that it represents (e.g., the high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like they would in the represented environment.
7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how it actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.
8. Minimizing information access cost or interaction cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by allowing frequently accessed sources to be located at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.
9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g., proximity, linkage by common colors, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.
10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.
11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or another display can aid the user by easing the use of their memory. However, memory use may sometimes benefit the user by eliminating the need to reference some knowledge globally (e.g., an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.
12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the user's mental resources. This will allow the user to focus on current conditions and to consider possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.
13. Principle of consistency. Old habits from other displays will easily transfer to support the processing of new displays if they are designed consistently. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.
Topics in human-computer interaction include the following:
Social computing is an interactive and collaborative behavior considered between technology and people. In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis, as there are a lot of social computing technologies that include blogs, emails, social networking, quick messaging, and various others. Much of this research draws from psychology, social psychology, and sociology. For example, one study found out that people expected a computer with a man's name to cost more than a machine with a woman's name. Other research finds that individuals perceive their interactions with computers more negatively than humans, despite behaving the same way towards these machines.
In human and computer interactions, a semantic gap usually exists between human and computer's understandings towards mutual behaviors. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by solving the semantic ambiguities between the two parties.
In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The potential of telling human emotions in an automated and digital fashion lies in improvements to the effectiveness of human-computer interaction. The influence of emotions in human-computer interaction has been studied in fields such as financial decision-making using ECG and organizational knowledge sharing using eye-tracking and face readers as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential to detect human emotions and those information systems can incorporate the data obtained from affect-detection channels to improve decision models.
A brain-computer interface (BCI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Traditionally, computer use was modeled as a human-computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human-computer interaction shifted focus beyond the interface to respond to observations as articulated by D. Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."
How humans interact with computers continues to evolve rapidly. Human-computer interaction is affected by developments in computing. These forces include:
As of 2010, the future of HCI was expected to include the following characteristics:
One of the main conferences for new research in human-computer interaction is the annually held Association for Computing Machinery's (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name CHI (pronounced kai, or Khai). CHI is organized by ACM Special Interest Group on Computer-Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendants, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal. | https://popflock.com/learn?s=Human-computer_interaction |
Introduction
============
Patient activation is a key element which refers to patients' willingness and capacity to manage their own health, undertake their care and gain basic knowledge about their conditions[@R1],[@R2].
Patient activation is divided into these four developmental levels; Belief that taking an active role is important, knowledge and confidence for taking action, taking action and maintaining routines even while under stress[@R2],[@R3],[@R4]. In the first level, patients tend to be passive and feel overwhelmed by managing their own health. They may not understand their role in the care process[@R2],[@R3]. In the second level, patients may lack the knowledge and confidence to manage their health[@R2],[@R3]. In the third level, patients appear to be taking action but may still lack the confidence and skill to support their behaviours[@R2],[@R3]. In the fourth level, patients adopt many of the behaviours needed to support their health but may not be able to maintain them in the face of life stressors[@R2],[@R3].
Dixon et al[@R5] evaluated the self-management approach of patients who were at different levels of activation. They stated that patients who had low levels of activation tended to see successful self-management as compliance, whereas those at higher activation levels saw it as being in control. In addition, patients who had lower activation levels were prone to see lack of knowledge and lack of confidence as barriers[@R5]. Patient activation is related to engagement in preventive behaviours, treatment and healthy behaviours[@R6],[@R7]. Empirical studies indicate that people who are more activated are significantly more likely to engage in healthy behaviours like eating a healthy diet[@R3],[@R4],[@R6] or taking regular exercise[@R2],[@R8],[@R9],[@R10],[@R11],[@R12],[@R13] compared with people who score lower on the activation scale. Conversely, less activated patients are significantly less likely to have prepared questions for a visit to the doctor, to know about treatment guidelines for their condition or to be persistent in asking if they don't understand what their doctor has told them[@R8],[@R14]. Wong, Peterson and Black found that there was a positive correlation between patient activation scores and patient-provider interactions. Positive interactions between the patient and the provider influenced the patient's ability to engage in and be confident in maintaining health[@R15]. Becker and Roblin stated that supportive interactions between patients and physicians contribute to patients taking a more active role in their health[@R16].
Research shows that patient activation can robustly predict some health behaviours. It is directly associated with clinical outcomes. Highly activated patients are more likely to adopt healthy behaviour, to have better clinical outcomes and lower rates of hospitalisation, and to report higher levels of satisfaction with services[@R2],[@R6],[@R7],[@R9],[@R13]. The Patient Activation Measure (PAM) has been developed and translated in many countries to measure patients' activation in terms of beliefs, knowledge, skills and self-confidence[@R17],[@R18],[@R19],[@R20],[@R21],[@R22]. The purpose of this study was to test the validity and reliability of the Patient Activation Measure translated into Turkish in a Turkish population.
Methods
=======
Research design
---------------
This was a methodological study. Because the aim was to test the reliability and validity of PAM, the research steps and statistical analysis methods used in methodological studies (content validity index, exploratory and confirmatory factor analysis, etc.) were used in this research.
Sample
------
In reliability analysis, the standard advice is to have at least 10 participants per item on the scale[@R31]. Since the scale tested in this study was composed of 13 items, the sample of this study included 130 patients with chronic diseases. Among the patients referred to the internal medicine polyclinics of the university hospital, 130 patients were selected according to the following criteria: a diagnosis of diabetes, hypertension, rheumatoid arthritis or another non-malignant chronic condition; aged 18 years and above; able to speak and read Turkish; and willingness to participate. Data were collected using a sociodemographic questionnaire and the Patient Activation Measure. The study was conducted between December 2014 and February 2015 in a university hospital in Izmir, Turkey.
Instruments
-----------
### Socio-demographic questionnaire
A socio-demographic questionnaire was developed by the authors to capture personal information on age, gender, education, marital status, having children, employment status, income, perceived health and chronic diseases.
### Patient activation measure
The Patient Activation Measure (PAM) was developed by Hibbard et al. in 2004 with patients with chronic diseases to determine patient activation, and its short version was tested in another patient group with chronic diseases in 2005. PAM is a valid and highly reliable Guttman-type scale with a one-factor structure and 13 items. Higher scores on the scale show higher patient activation: active participation in disease management/successful self-management.
The answering categories per item are 4-point Likert scales, ranging from totally disagree to totally agree, plus 'not applicable'. Activation scores vary from 0 to 100. Level 1, the lowest activation with scores of <47, refers to believing in the importance of taking an active role. Level 2, with scores of 47--55, refers to having knowledge and confidence to take action. Level 3, with scores of 55--72, refers to taking action. Level 4, the highest activation with scores of >72.5, refers to maintaining routines even under stress[@R5],[@R6].
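A minimal sketch of how a 0-100 activation score maps to the four levels described above; the exact handling of boundary values (e.g., whether 47.0 falls in level 1 or level 2) is an assumption of this sketch rather than something specified here.

```python
def pam_activation_level(score):
    """Map a 0-100 PAM activation score to its activation level (1-4)."""
    if score > 72.5:
        return 4  # maintaining routines even under stress
    if score > 55.0:
        return 3  # taking action
    if score >= 47.0:
        return 2  # having knowledge and confidence to take action
    return 1      # believing that taking an active role is important

print(pam_activation_level(83.3))  # 4
print(pam_activation_level(28.8))  # 1
```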
Data collection
---------------
The study purpose, procedural details, the participants' rights, and the potential benefits and risks of the study were explained to patients, and informed written consent was obtained from them. Data were collected by the first researcher using a socio-demographic questionnaire and the Patient Activation Measure.
Each data collection session took 15 to 20 minutes.
-------------
Data were analyzed using the SPSS, Winsteps and LISREL programs. SPSS was used for reliability tests and exploratory factor analysis, LISREL was used for confirmatory factor analysis and Winsteps was used for Rasch analysis.
Translation and adaptation
--------------------------
Linguistic validity of PAM was achieved by the translation-back-translation method. The scale was translated from English to Turkish by two experts whose native language was Turkish. After the most appropriate expressions were selected and a single Turkish version was created, it was back-translated into English by two other experts who had a good command of both languages and had no relation to the first two experts. Blinding was taken as a basis at this stage. The back-translated version of the scale was compared with the original one and the Turkish version was evaluated by a linguist. Afterwards, twelve experts evaluated the content validity of all items and the content validity ratio was calculated. In this way the final version of the scale was created[@R23]. Ultimately the scale was piloted on ten patients.
Validity
--------
Construct validity of the scale was evaluated with exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The Kaiser-Meyer-Olkin (KMO) coefficient was used to determine whether the sample size was sufficient, and Bartlett's test was used to determine whether the data were appropriate for factor analyses. In EFA, principal component analysis was performed to determine whether the scale had a one-factor structure, and the explained variance and factor loadings of each item were calculated. In CFA, model fit indices and factor loadings of the items were examined. Item difficulty and the one-factor structure of the scale were evaluated with fit statistics in Rasch analysis.
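The study reports the KMO and Bartlett statistics from SPSS; as a rough illustration of what those tests compute, here is a minimal NumPy sketch (the function names and the simulated data are ours, not part of the study).

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and degrees of freedom."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return chi2, dof

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv_corr = np.linalg.inv(corr)
    # Partial correlations derived from the inverse correlation matrix
    scale = np.sqrt(np.outer(np.diag(inv_corr), np.diag(inv_corr)))
    partial = -inv_corr / scale
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# Illustrative use with simulated Likert-type responses (130 patients x 13 items)
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(130, 13)).astype(float)
print(bartlett_sphericity(responses))
print(kmo_overall(responses))
```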
Reliability
-----------
Reliability was tested using internal consistency analyses and the coefficient of invariance. Internal consistency was analyzed with Cronbach's α reliability coefficient, item-total score analysis and item analysis based on lower-upper group means. Reliability over time was evaluated with the test-retest method. The reliability coefficient was also calculated with Rasch analysis.
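Similarly, here is a minimal sketch of the internal-consistency computations (Cronbach's α and item-total correlations) on a respondents × items matrix. Whether the item-total correlation is corrected (the item removed from the total) is an assumption of this sketch, since the study does not specify which variant was run in SPSS.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def item_total_correlations(items, corrected=True):
    """Correlation of each item with the total score (optionally item-corrected)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    corrs = []
    for j in range(items.shape[1]):
        reference = total - items[:, j] if corrected else total
        corrs.append(np.corrcoef(items[:, j], reference)[0, 1])
    return np.array(corrs)
```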
Ethical considerations
----------------------
Permission was requested via e-mail from the authors who developed PAM. This study was approved by the Research Ethics Committee of the Dokuz Eylul University Faculty of Medicine in Izmir, Turkey. Approval was also obtained from the Dokuz Eylul University Hospital Administration and the departments of Endocrinology, Rheumatology and Cardiology. Written informed consent was obtained from the patients volunteering to participate in the study after the purpose of the study was explained to them.
Results
=======
Socio-demographic features
--------------------------
Among 130 patients, 71.5% were female and 28.5% were male. The mean age of the patients was 56.7±13.8 years. 16.9% of all patients were university graduates, 19.2% were high school graduates, 13.8% were secondary school graduates, 44.6% were primary school graduates and 5.4% were just literate. 13.8% of the patients had a job, but 86.2% of the patients did not have a job (unemployed or retired). 84.6% of the patients were married and 15.4% were single. Of all the patients, 60.8% had income equal to expenses, 28.5% had income lower than expenses and 10.8% had income higher than expenses. 87.7% of the patients had a child. Perceived health status was very good in 2.3% of the patients, good in 22.3%, fair in 59.2% and poor in 16.2% of the patients ([Table 1](#T1){ref-type="table"}).
######
Demographic data of the PAM for Turkish sample (n=130)
| Sociodemographic characteristics | n | % |
|----------------------------------|------|------|
| Age, mean ± SD (years) | 56.71 | ±13.82 |
| **Gender** | | |
| Female | 93 | 71.5 |
| Male | 37 | 28.5 |
| **Educational status** | | |
| Literate | 7 | 5.5 |
| Primary school | 58 | 44.6 |
| Secondary school | 18 | 13.8 |
| High school | 25 | 19.2 |
| University | 22 | 16.9 |
| **Marital status** | | |
| Married | 110 | 84.6 |
| Single | 20 | 15.4 |
| **Having a child** | | |
| Yes | 114 | 87.7 |
| No | 16 | 12.3 |
| **Having a job** | | |
| Yes | 18 | 13.8 |
| No | 112 | 86.2 |
| **Income status** | | |
| Income lower than expenses | 37 | 28.4 |
| Income equal to expenses | 79 | 60.8 |
| Income higher than expenses | 14 | 10.8 |
| **Self-rated health** | | |
| Poor | 21 | 16.2 |
| Fair | 77 | 59.2 |
| Good | 29 | 22.3 |
| Very good | 3 | 2.3 |
PAM scores
----------
PAM scores of the participants ranged from 28.8 to 83.3. Of the patients, 28.7% were in activation level 1, 44.9% were in activation level 2, 20.2% were in activation level 3 and 6.2% were in activation level 4.
Translation and cultural adaptation
-----------------------------------
After linguistic validity of PAM was achieved, expert opinion was requested for the content validity index. The content validity index was found to be higher than 0.56 for each item and 0.98 for the scale.
PAM was piloted on 10 patients. Since the expression "medical treatment" in item 7 was not easily understood, the item was changed to "I am not sure whether medical treatment (nutrition, exercise, drug treatment) can be maintained at home". Upon receiving positive feedback following this change, the scale was used for data collection in the study sample.
Psychometric features of PAM Validity
-------------------------------------
Construct validity of PAM was examined with exploratory (EFA) and confirmatory (CFA) factor analyses. The KMO coefficient was 0.75. Bartlett's test showed a significant result (x2: 646.870; p: 0.000).
In EFA, principal component analysis (PCA) was performed to determine whether PAM had a one-factor structure. The eigenvalue of the resultant factor was 4.3, which is higher than the threshold value of 1. The total variance explained by this factor was 33.1%. Factor loadings of the items ranged from 0.42 to 0.71. All factor loadings of PAM were over 0.30.
In CFA, a one-factor model was tested since the original version of PAM had a one-factor structure. Model fit indices of the scale were as follows: x2: 98.7, df: 62, x2/df: 1.59, RMSEA: 0.071, GFI: 0.88, CFI: 0.96, NFI: 0.90. According to CFA, factor loadings of the items ranged between 0.39 and 0.71. All factor loadings were found to be over 0.30.
Reliability
-----------
Cronbach's alpha internal consistency coefficient was 0.81. Item-total correlation coefficients ranged from 0.38 to 0.66. PAM was found to have item-total correlation coefficients of over 0.30 ([Table 2](#T2){ref-type="table"}).
######
Item--Item Total Score Correlation of PAM (n=130)
| Item | Item-total score correlation | p |
|------|------------------------------|------|
| 1 | .58 | .000 |
| 2 | .59 | .000 |
| 3 | .39 | .000 |
| 4 | .60 | .000 |
| 5 | .54 | .000 |
| 6 | .46 | .000 |
| 7 | .65 | .000 |
| 8 | .63 | .000 |
| 9 | .66 | .000 |
| 10 | .38 | .000 |
| 11 | .47 | .000 |
| 12 | .39 | .000 |
| 13 | .53 | .000 |
Independent groups t-test showed a significant difference between upper and lower group means for each item in the scale (p \< 0.05).
To determine test-retest reliability, the scale was administered to the same patient group twice at a two-week interval and the correlations were examined. The correlation coefficient was 0.98 for PAM and ranged from 0.59 to 0.93 for the items ([Table 3](#T3){ref-type="table"}).
######
Test-retest reliability of PAM (n=30)

| Item            | Test-retest correlation | p     |
|-----------------|-------------------------|-------|
| 1               | 0.75                    | 0.000 |
| 2               | 0.72                    | 0.000 |
| 3               | 0.88                    | 0.000 |
| 4               | 0.91                    | 0.000 |
| 5               | 0.62                    | 0.000 |
| 6               | 0.91                    | 0.000 |
| 7               | 0.91                    | 0.000 |
| 8               | 0.96                    | 0.000 |
| 9               | 0.93                    | 0.000 |
| 10              | 0.93                    | 0.000 |
| 11              | 0.91                    | 0.000 |
| 12              | 0.59                    | 0.000 |
| 13              | 0.89                    | 0.000 |
| **Total score** | 0.98                    | 0.000 |
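The test-retest coefficients in Table 3 are plain Pearson correlations between the two administrations, computed item by item and for the total score. A short sketch, with `time1` and `time2` standing for hypothetical (30 × 13) arrays from the two administrations:

```python
import numpy as np
from scipy import stats

def test_retest(time1, time2):
    """Pearson correlations between two administrations of the same scale."""
    item_results = [stats.pearsonr(time1[:, j], time2[:, j]) for j in range(time1.shape[1])]
    total_result = stats.pearsonr(time1.sum(axis=1), time2.sum(axis=1))
    return item_results, total_result   # each result is an (r, p-value) pair
```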
Rasch analysis for validity
---------------------------
### Examination of Item difficulty
In Rasch analysis, the unit of measurement is the logit; in this study the measures were rescaled to a 0–100 metric. Item difficulty values ranged from 38.9 to 61.4. Based on the item difficulty analysis, items 4 and 6 were the easiest and items 9 and 13 were the most difficult ([Table 4](#T4){ref-type="table"}).
######
Item Difficulty Structure of PAM

| Item   | Item difficulty |
|--------|-----------------|
| 1      | 40.9            |
| 2      | 41.3            |
| 3      | 49.6            |
| 4      | 40.6            |
| 5      | 49.0            |
| 6      | 38.9            |
| 7      | 46.4            |
| 8      | 53.2            |
| 9      | 61.4            |
| 10     | 54.0            |
| 11     | 47.2            |
| 12     | 50.8            |
| 13     | 61.1            |
### Evaluation of Item fit statistics
Unweighted mean squares (OUTFIT) and weighted mean squares (INFIT) were evaluated to test whether the scale has a one-factor structure. Mean-square values between 0.6 and 1.4 are considered to indicate a good model fit for a sample of this size. INFIT values for the PAM items ranged from 0.68 to 1.53 and OUTFIT values from 0.65 to 1.54, showing that all 13 items had an acceptable model fit. Only item 13 had slightly high INFIT and OUTFIT values, which can be considered acceptable ([Table 5](#T5){ref-type="table"}).
######
Item Fit Statistics of PAM (n=130)

| Item   | Infit | Outfit |
|--------|-------|--------|
| 1      | 0.80  | 0.83   |
| 2      | 0.68  | 0.71   |
| 3      | 0.75  | 0.77   |
| 4      | 1.05  | 0.99   |
| 5      | 0.78  | 0.79   |
| 6      | 1.44  | 1.46   |
| 7      | 0.91  | 0.89   |
| 8      | 0.97  | 0.96   |
| 9      | 0.94  | 0.94   |
| 10     | 1.30  | 1.40   |
| 11     | 0.68  | 0.65   |
| 12     | 1.09  | 1.09   |
| 13     | 1.53  | 1.54   |
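Infit and outfit mean squares are defined from the residuals between the observed responses and the expectations of the fitted Rasch model: outfit is the unweighted mean of squared standardized residuals, and infit is the information-weighted version. The sketch below shows these generic definitions; the `expected` and `variance` arrays would come from whatever Rasch model was fitted (PAM items are polytomous, so a rating-scale or partial-credit model applies, and the dichotomous helper at the end is purely illustrative).

```python
import numpy as np

def rasch_fit_statistics(observed, expected, variance):
    """
    Infit/outfit mean squares per item.
    observed, expected, variance: (n_persons, n_items) arrays, where `expected`
    and `variance` are the model-implied mean and variance of each response.
    """
    squared_residuals = (observed - expected) ** 2
    outfit = (squared_residuals / variance).mean(axis=0)            # unweighted mean square
    infit = squared_residuals.sum(axis=0) / variance.sum(axis=0)    # information-weighted mean square
    return infit, outfit

def dichotomous_expectations(theta, b):
    """Expected scores and variances for a dichotomous Rasch model (illustration only)."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return p, p * (1 - p)
```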
Rasch analysis for reliability
------------------------------
Reliability of the scale was evaluated with person reliability; the separation reliability coefficient and the separation index were calculated. The upper (model) reliability coefficient was 0.87 and the lower (real) reliability coefficient was 0.83 for the Turkish version of PAM, so the person reliability coefficient lay between 0.83 and 0.87. Since a reliability coefficient of 0.8 or higher is desirable, the measure was able to separate people and items into four levels.
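Rasch person reliability is the proportion of variance in the person measures that is not attributable to measurement error, and the separation index follows from the same quantities. A minimal sketch, assuming hypothetical arrays of person measures and their standard errors from a fitted Rasch model (the "model" and "real" variants reported above differ only in how the error variance is estimated):

```python
import numpy as np

def person_separation_reliability(person_measures, person_se):
    """Rasch person reliability and separation from measures and their standard errors."""
    observed_var = person_measures.var(ddof=1)
    error_var = np.mean(person_se ** 2)
    true_var = max(observed_var - error_var, 0.0)
    reliability = true_var / observed_var
    separation = np.sqrt(true_var / error_var)
    strata = (4 * separation + 1) / 3      # number of statistically distinct person levels
    return reliability, separation, strata
```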
Discussion
==========
Validity
--------
Factor structure of PAM was evaluated with PCA, and the total variance explained by the resultant factor was 33.1% (eigenvalue 4.3) in the present study. When the variance explained by a one-factor structure is 30% or higher, it can be considered sufficient[@R14],[@R24]. As in the original scale, the Turkish version had a single factor. The total variance explained by the primary factor was 40.9% and the eigenvalue was 5.3 in the German version[@R17]; the total variance was 34.5% and the eigenvalue was 4.5 in another German study[@R18]. The total variance was 43.2% in the Danish version[@R19] and 57.5% in the Korean version of the scale[@R20].
Factor loadings of the scale ranged from 0.42 to 0.71 in the present study. It has been reported in the literature that factor loadings of 0.30--0.40 can be used as lower cut-off points when establishing a factor structure[@R24],[@R25]. EFA revealed that all 13 items of PAM loaded on one factor. CFA revealed the following model fit indices in the current study: χ²/df: 1.59, RMSEA: 0.071, GFI: 0.88, CFI: 0.96, NFI: 0.90 and NNFI: 0.95. A χ²/df ratio below 3 indicates a good model fit; RMSEA between 0.05 and 0.08 is acceptable; GFI above 0.85 indicates adequate fit; and CFI, NFI and NNFI above 0.90 are the required values. The results therefore indicate that the model has a good fit[@R26],[@R27],[@R28],[@R29],[@R30]. CFA showed that factor loadings of PAM ranged from 0.39 to 0.71 (above 0.30). The fit indices obtained through CFA provide support for the construct validity of PAM.
Reliability
-----------
The Cronbach's alpha internal consistency coefficient for the scale was 0.81, which shows that the scale is highly reliable[@R24],[@R25],[@R31],[@R32]. Cronbach's alpha was 0.91 for the original scale developed by Hibbard et al.[@R3], 0.88 for the German version[@R17], 0.89 for the Danish version[@R19], 0.88 for the Korean version[@R20], 0.77 for the Hebrew version[@R21] and 0.88 for the Dutch version[@R22]. The Cronbach's alpha obtained in the present study is thus close to those reported in the literature.
According to the item analysis, item-total correlation coefficients for PAM ranged from 0.38 to 0.66. In general, items with an item-total correlation coefficient of at least 0.30 distinguish individuals well[@R24],[@R25],[@R32]. Since all items had a correlation coefficient higher than 0.30, they measure similar characteristics. The item-total correlation coefficients were reported to be 0.46--0.63 for the German version[@R17], 0.48--0.65 for the Danish version[@R19], 0.32--0.71 for the Korean version[@R20] and 0.46--0.66 for the Dutch version[@R22]. The item-total correlation coefficients found in the present study are similar to those reported in the literature.
Test-retest analysis was performed to evaluate the reliability of PAM across time. The correlation coefficient between the two administrations was 0.98 for the scale and ranged from 0.59 to 0.93 for the items. For the Dutch version, the corresponding coefficient was reported to be 0.47, ranging from 0.25 to 0.49 for each item[@R22]. It can be stated that PAM is reliable across time.
Item analysis based on differences between upper and lower group means was performed to determine how well PAM distinguishes patients with respect to activation. There was a significant difference between upper and lower group means for each item in the scale (p \< 0.05). All items differentiated the 35 participants with the highest scores from the 35 participants with the lowest scores. It is clear that the scale can discriminate activation between individuals obtaining the highest and the lowest scores.
Evaluation of validity with Rasch analysis
------------------------------------------
### Examination of Item Difficulty with Rasch Analysis
In the present study, analysis of item difficulty in PAM showed that the easiest items were 4 and 6 and the most difficult items were 9 and 13 for the Turkish population. The item difficulty order arising in the present study was generally consistent with the one found in the original scale. As in the original scale, items 1, 2 and 4 were among the easiest and items 9, 10 and 13 among the most difficult for the Turkish population. However, there are some differences in item difficulty order between the Turkish version of PAM and the original scale. One reason may be that 74% of the sample of this study were at activation levels 1 and 2, that is, individuals without sufficient confidence and knowledge about their health problems. Cultural differences between the Turkish and American populations may also play a role: protective relatives may easily place the person in the patient role and may not give the patient enough responsibility for health care activities.
Evaluation of item fit statistics with Rasch analysis
-----------------------------------------------------
Item fit statistics, analyzed to determine whether the Turkish version of PAM had a one-factor structure, showed that INFIT values of the items ranged from 0.68 to 1.53 and OUTFIT values ranged from 0.65 to 1.54. Since these values were largely between 0.6 and 1.4, all 13 items had an acceptable model fit; the values for item 13 were slightly high but acceptable[@R3],[@R4]. INFIT and OUTFIT values of the short version of PAM were 0.92--1.05 and 0.85--1.11 respectively[@R3]. In the Danish version, INFIT values were 0.67--1.34 and OUTFIT values were 0.69--1.16[@R17]. In the Hebrew version, INFIT and OUTFIT values were 0.70--1.35 and 0.73--1.45[@R19]. In the Korean version, INFIT values were 0.68--1.42 and OUTFIT values were 0.68--1.54[@R20]. In the German version, INFIT and OUTFIT values were 0.68--1.03 and 0.65--1.22[@R21]. It is clear that the INFIT and OUTFIT values for the Turkish version were close to those of the other versions, and the finding that these values were in the expected range supports its one-factor structure.
Evaluation of reliability with Rasch analysis
---------------------------------------------
The upper (model) reliability coefficient was 0.87 and the lower (real) reliability coefficient was 0.83 for the Turkish version of PAM, and the person reliability coefficient ranged between these values. Since the desirable person reliability coefficient is 0.8 or higher, the Turkish version had a good discriminatory index, could divide the sample into four activation levels and had sufficient reliability[@R3],[@R4]. The person reliability coefficient was reported to be 0.87--0.91 for the original scale[@R1] and 0.81--0.85 for the short version by Hibbard et al.[@R3] It was found to be 0.83--0.85 for the Danish version and 0.87--0.89 for the Korean version[@R19],[@R20]. It is clear that the Turkish version of PAM is similar to the other versions reported in the literature in terms of person reliability.
Conclusion
==========
The results showed that the Turkish translation of PAM is a valid and reliable tool to assess patient activation in patients with chronic diseases. Only the item difficulty order did not match the original version of PAM exactly.
Implications for practice
=========================
Health professionals encourage patients to get involved in their care, but they do not usually know the abilities and skills of individual patients; therefore, they design and implement the same interventions for everyone. Knowing a patient's activation score and level allows interventions to be planned according to the patient's activation level. This approach can increase patient activation gradually, so that patients build the confidence and skills necessary for effective self-management and health outcomes improve. It provides guidance for self-management education programs, allows health professionals to design patient-specific interventions and helps to improve health outcomes.
Conflict of interest
====================
All authors have no conflict of interest.
[^1]: **Emails:** Cansu Kosar: cansukosar@hotmail.com; Dilek Buyukkaya Besen: <[email protected]>
The Culture Behind The Harlem Renaissance Cultural Studies Essay
The Harlem Renaissance was an African American cultural movement that spanned the 1920s and 1930s. It was known as the "New Negro Movement" and was centered in the Harlem neighborhood of New York City. Harlem grew into a Negro middle-class district after being abandoned by the native white middle class. Many more African Americans arrived during World War I because of the Great Migration: due to the war, there was a need for unskilled industrial workers because immigration had virtually ceased. The Harlem Renaissance helped launch works of music and literature that aided in redefining how America viewed the African American population.
In 1917, the premiere of Three Plays for a Negro Theatre featured African American actors conveying complex human emotions and yearnings. James Weldon Johnson called these premieres "the most important single event in the entire history of the Negro in the American Theatre". In 1919, Claude McKay published his famous poem "If We Must Die". The poem never alluded to race, but it brought racism, national race riots, and lynching to the forefront of his work. These early works described the reality of contemporary Negro life in America.
Jazz music became an essential element of the Harlem Renaissance during the 1920's. This genre of music blossomed with low-class African Americans around the distinct neighborhood of Harlem. Many of the middle and upper-class African Americans were unsure or hostile towards jazz music because it was believed that African Americans should "assimilate into the white business culture" in America such as in Chicago, New York, St. Louis, and Detroit. Those people that did enjoy jazz music would attend nightclubs in which jazz artists would perform regularly. The popular nightclubs during this period would be the Savoy Ballroom, the Apollo Theatre, and the Cotton Club. The Harlem Stride Style of playing the piano was created during the Harlem Renaissance and helped intertwine the low-class African Americans and the socially elite African Americans because wealthy blacks had more access to jazz music. With traditional jazz, the piano was seen as an instrument of the wealthy and assisted in connecting all African Americans. The popularity of jazz music spread throughout America and soon was at an all-time high. Jazz musicians at the time like Fats Waller, Duke Ellington, Jelly Roll Morton, and Willie "The Lion" Smith were very talented and competitive, and were considered to have laid the foundation for future musicians of their genre.
As jazz music spread throughout the country, the music style of Black Americans started to become more attractive to white Americans. White novelists, dramatists, and composers started to exploit the musical tendencies and themes of African Americans in their works. Composers began to use poems written by African Americans in their own compositions and would implement them into their own rhythms. African Americans were now able to connect with white America in the world of musical composition. Roland Hayes was the first African American to gain nationwide appreciation as an artist. He trained with Arthur Calhoun in Chattanooga and at Fisk University in Nashville. Later, he studied with Arthur Hubbard in Boston and with George Henschel and Amanda Ira Aldridge in London, England. He began singing in public as a student and toured with the Fisk Jubilee Singers in 1911.
The Savoy Ballroom was the most sophisticated venue for swing dancing and jazz, immortalized by the popular song "Stompin' At The Savoy". Even with the popularity of the Savoy Ballroom, the Apollo Theatre has had the most lasting impression of the Harlem Renaissance. The Apollo Theatre was a former burlesque house and has become an everlasting symbol of African American lifestyles during this period. The theatre is known as "one of the most famous clubs for popular music in the United States and a popular location for artists to display their talents and start their careers." Some frequent topics represented by artists during the Harlem Renaissance were influenced by the experience of slavery, emerging African American traditions, national racism, and the dilemmas inherent in performing and writing for elite white audiences.
The Harlem Renaissance was successful in the way that it portrayed the African American lifestyle to the rest of America. Not only through an explosion of culture and arts, but on a social level, the Harlem Renaissance helped influence how America and the rest of the world viewed the African American population. The migration of southern Blacks to the north changed the image of the African American from rural, undereducated peasant to one of urban, cosmopolitan sophistication. This new identity led to a greater social consciousness, and African Americans became players on the world stage, expanding intellectual and social contacts internationally. This period became a reference point and foundation for communities to build upon during the Civil Rights Era of the 1950s and 1960s. The urban setting of rapidly developing Harlem, and the rural African Americans moving north while adapting to urban life, provided a community for African Americans of all backgrounds. Through this, the Harlem Renaissance encouraged a new appreciation of folk roots and culture. Cultural materials and spirituals offered a source for the artistic and intellectual minds that freed African Americans from their past establishments. Through these experiences, a confidence sprang up throughout this group of people that united and progressed them as a people. | https://www.ukessays.com/essays/cultural-studies/the-culture-behind-the-harlem-renaissance-cultural-studies-essay.php
Between campaign promises of tuition-free college from Bernie Sanders, the “Turning the Tide” report published by Harvard’s Graduate School of Education and its subsequent endorsements by the admissions offices of many of the nation’s top elite schools, the topic of making college affordable and accessible to low-income students is becoming more of a public issue and priority. A remaining question, however, is how accessible elite universities actually are for students in poverty and which admissions policies hurt or help low-income applicants. This article will explore whether low-income students are at a disadvantage prior to and during the college application process and the potential for policy solutions.
Recent high school graduates now studying at elite universities will recall the long process of building their applications and resumes over time. A majority of this comes well before students go to fill out their Common Application and students without a head start often fall behind. While colleges look at extracurriculars and standardized tests back to students’ freshman year, not all applicants are aware of this scope.
Firstly, a student’s ability to perform well on standardized tests prior to applying to colleges has been linked with socioeconomic factors. A study conducted by Saul Geiser at the University of California, Berkeley, on SAT and ACT performance suggests factors outside of students’ control, including family income, parental education, and race/ethnicity, account for 33% of the variance in scores between test takers, putting students of low socioeconomic status at a huge disadvantage in this area. In short, where any given student’s score stands compared to their peers can be largely accounted for by socioeconomic barriers, factors that need to be remedied for during the admissions process.
In addition, compared to students coming from households with higher family income, low-income students are less likely to participate in extracurricular activities, creating an activity gap. Due to “the rising costs of sport teams and school clubs” along with “parents’ uncertain work schedules and precarious household budgets”, 75% of middle and upper-class seniors participated in at least one extracurricular, as opposed to 56% of low-income students, with strong evidence for a downward trend in participation in low-income brackets over the last few decades. Because top universities increasingly prefer the applications of candidates with impressive resumes demonstrating achievement and involvement outside the classroom, low-income students without a suite of extracurriculars on their applications are becoming less competitive.
Low-income students face two significant obstacles on the path towards admission: barriers to apply and disadvantages in the admissions review. There are a variety of reasons why low-income students may not even apply to elite schools, even if they could get in. One, as identified by Stanford economist Caroline Hoxby, is location. Students who live outside of large metropolitan centers are often ignored by college recruiters and do not get the proper information about selective schools. This leads to widespread “under-matching” for many low-income rural students, in which they end up at local or community colleges that are lower in academic rigor than universities of higher standing in which the students’ achievement levels would be matched.
Another reason for not applying is poor counseling. Hoxby says counselors “may not have gone to selective colleges themselves,” adding that, “…they’re really busy, and the students who require the most attention aren’t usually the good kids with good grades.” Without access to quality counseling in schools, where there is often a 400 student-to-counselor ratio, many high achieving students miss out on the opportunity to gain insight on the process from their counselor and take advantage of resources the counselor may have at their disposal.
Another crucial reason many low-income students don’t apply is a conception that selective schools are “out of their league”, both academically and financially. What Hoxby, however, notes is that selective schools are actually “cheaper for low-income high achievers than colleges that have fewer resources,” though most low-income families are unaware of this due to the advertised cost of attendance and historical perceptions of elite institutions. Perceived and real costs, in addition to the lack of knowledge of financial assistance, in standardized testing, application fees, tuition, cost of living, and transportation discourage low-income students from applying to a variety of schools, particularly “reach” schools that may advertise higher sticker prices.
While barriers to admissions often deter low-income students from applying to institutions of higher learning, the ones who do might still be at a disadvantage due to admissions processes within universities. For one, the “holistic review” approach many schools employ takes weight away from test scores and GPAs, which may hurt low-income students who lack diverse extracurriculars due to the aforementioned “activity gap”. To be sure, there is indeed an income-correlation with standardized testing as well, but it’s far cheaper in terms of study resources to prepare for the SAT than it is to fund an expensive and involved extracurricular career. Furthermore, admissions policies common at elite universities include a consideration of legacy status and family donations for acceptance; low-income students rarely enjoy these benefits.
Many different initiatives have been created with the aim of reducing the university admissions gap, such as the Coalition Application, designed by the Coalition for Access, Affordability and Success. This collaborative of several dozen top-tier universities sought to create an alternative application that reflects what they believe are the most important aspects of a candidate's portfolio, and it is now being accepted for the 2017-2018 application season at many major elite institutions. The Coalition Application aims to change how applications are structured and ultimately judged, to be more inclusive of a variety of factors that are currently less emphasized, like the relative availability of educational resources for students of varying socioeconomic status. However, from a governmental standpoint, research points to two policy changes that should be made in concert to increase the number of low-income students at selective schools: targeting and increasing the amount of information on federal and institutional financial support for low-income students, and increasing the number of Pell Grant recipients at universities through federal funding and admissions policies at universities.
Tailored and targeted information on financial assistance would help because one of the primary barriers to applying lies within students' and their families' misconceptions about affordability. Caroline Hoxby points to the fact that just providing an "informational tool-kit" with information about programs and respective adjusted costs of attendance made students 53 percent more likely to apply, 78 percent more likely to get admitted, and 50 percent more likely to enroll in a selective institution. The government spends $1 billion annually on different initiatives to increase the number of disadvantaged students enrolled in colleges, but those initiatives, according to a Brookings Institution and Princeton University study, have "no major effects on college enrollment or completion." Reevaluating these programs for effectiveness and channeling more resources towards disseminating targeted facts about both public and private universities will likely help to bridge the gap.
Finally, public and private schools can adopt policies that aim to admit low-income students such that Pell Grant recipients make up at least 20% of undergraduate student populations. 20% is about half of the proportion of all undergraduate students receiving Pell Grant aid, and many universities could admit the necessary number of students to reach 20% without compromising significant financial resources or test score averages. The impacts of reaching such a goal at the university level cannot be overstated. As a study published by Dr. Jeffery Denning in the National Bureau of Economic Research shows, Pell Grant aid not only significantly increases the likelihood of low-income students obtaining a degree, but the costs are also fully recouped through the increase in taxed income of recipients in the years following graduation. Consequently, despite the Trump Administration's proposal to cut funding for federal student financial aid, funding and eligibility for Pell Grants ought to be increased to bridge the gaps in educational access and increase societal welfare without incurring significant economic costs. However, policymakers should leave discretion over the proportions to admit to institutions themselves rather than passing mandates, as increasing the number of Pell Grant recipients may have impacts on the number of minority and international students that can be admitted, potentially decreasing cultural and ethnic diversity on campuses.
Last modified on Jan. 25th, 2018 at 3:51pm by Lisa Marie Patzer. | https://publicpolicy.wharton.upenn.edu/live/news/2302-impacts-of-lower-socioeconomic-status-on-college |
Expert International GmbH (hereinafter referred to as „we“, „to us“, „our“, etc.) is the owner of the website www.expert.org („website“). We created this privacy statement to show that we care about the privacy of our users and in order to inform them about the manners of collection and processing of personal data on this website. Please read this Statement carefully in order to learn how your personal data is collected, processed, protected or used in any other way.
DATA COLLECTION
DATA PROCESSING
We use the data obtained about you for one or several mentioned purposes:
- In order to personalize your user experience (collected data helps us to respond better to your individual needs);
- In order to improve our website (we are constantly trying to improve the offer of our website based on feedback from our visitors)
DATA PROTECTION OFFICER
PROTECTION OF YOUR DATA
In order to protect personal data you are sending through this website, we use physical, technical and organizational safety measures. We are continuously upgrading and testing our safety technology.
PERSONAL DATA BREACH NOTICE
In case of a personal data breach, we will notify you and the competent authority via email within 72 hours, describing the scale of the breach, the data involved, the possible impact on our services, and our planned measures for data protection and for limiting any harmful impact on individuals.
The breach notice will not be sent in case:
- if technical and organisational protection measures (such as encryption) were applied to the personal data affected by the personal data breach, which render the personal data unintelligible to any person who is not authorized to access it;
- if we have taken subsequent measures to ensure that the high risk to the rights and freedoms of individuals is no longer likely;
- if it would involve disproportionate effort (in such a case we will notify you via public communication or similar equally efficient measure).
Personal data breach means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or processed related to offering of our services.
YOUR CONSENT | https://www.expert.org/privacy-policy/ |
Methyl-directed DNA mismatch correction.
In 1964 Robin Holliday (1) proposed the correction of DNA base pair mismatches within recombination intermediates as the basis for gene conversion. The existence of the mismatch repair systems implied by this proposal is now well established. Activities that recognize and process base pairing errors within the DNA helix have been identified in bacteria, fungi, and mammalian cells. However, the functions and mechanisms of such systems are best understood in Escherichia coli, an organism that possesses at least three distinct mismatch correction pathways. These three systems are involved not only in the processing of recombination intermediates but also contribute in a major way to the genetic stability of the organism, a function anticipated for mismatch repair by Tiraby and Fox and by Wagner and Meselson. The significance of mismatch correction in the maintenance of low spontaneous mutability becomes apparent when one considers that seven E. coli mutator genes (dam, mutD, mutH, mutL, mutS, mutU, and mutY) have been implicated in mismatch repair. This minireview will summarize information on the most extensively studied E. coli system for mismatch correction, the methyl-directed pathway for processing of DNA biosynthetic errors and intermediates in genetic recombination. A discussion of other E. coli mismatch correction systems may be found in the recent literature and in several recent reviews. Mismatch repair pathways in other organisms and descriptions of the structural properties of mispaired bases may also be found in several of these reviews.
Fig trees (Ficus carica) produce furanocoumarins, a class of small organic molecules with various medicinal and agricultural applications. Villard et al. studied the enzyme catalysing the first synthetic step in the production of these molecules. They revealed how this enzyme emerged recently and independently within the Ficus lineage in a mechanism called convergent evolution.
Furanocoumarins are small organic molecules produced by plants and known to play defense roles against pathogens and herbivores. Interestingly, some of them are also potential treatments for cancer or vitiligo. These compounds are found in specific but surprisingly distantly related plant families, such as in the parsley family (e.g. bishop's weed) and mulberry family (e.g. common fig).
To produce these molecules, plants use some enzymes that act as biological catalysts, molecular factories that construct these chemicals from simple precursors. Villard and colleagues investigated the genes that code for these enzymes to understand how the synthesis of furanocoumarins emerged independently in distant plant lineages.
The authors examined the first chemical step in the synthesis of linear furanocoumarins in common fig (Ficus carica). They already had two clues to examine this system: first, in bishop's weed plant, the equivalent enzyme catalysing the same reaction belongs to the cytochrome P450 enzyme superfamily, enzymes known for their versatility and for contributing immensely to the chemical diversity in plants. Then, in common fig, furanocoumarins are more concentrated in the leaf’s stalks compared to the trunk and fruits. This means that the concentrations of the target enzyme will likely be higher in the leaf’s stalks. This directed the search toward enzymes which belong to the cytochrome P450 family and that would be present in the leaf’s stalks.
The starting point was looking into the common fig RNA library. RNAs are intermediate gene products that are the first molecules produced in the making of a protein. Through their search, they found nine candidate genes that match the desired criteria. Next, the function of the candidate cytochrome P450 genes was tested in the lab through enzymatic assays. The authors discovered that one of the candidates (CYP76F112) could perform the first step of furanocoumarin synthesis. Further in-depth characterisation of the enzyme’s activity showed that this conversion occurred with high affinity and efficiency. This indicates adaptation to the presence of the substrate in the plant at low concentrations, which requires highly efficient enzymes.
Subsequently, the authors studied the enzyme mechanisms using a method called site-directed-mutagenesis. It consists of targeted changes to the enzyme coding sequence, to study the importance of the modified region. When investigating the enzyme activities after these changes, the authors discovered that an amino acid, named M117, seemed to be critical for the activity of the enzyme. This amino acid is located next to the substrate binding site and assumed to shape the binding region, thus playing a key role in the substrate specificity and selectivity.
Next, the authors investigated how the newly discovered enzyme CYP76F112 emerged in fig tree lineage by building phylogenetic trees. These are evolutionary trees built based on the comparison of a set of homologous DNA or protein sequences in different organisms. Their pattern of branching provides clues on how specific genes evolved from a series of common ancestors. The authors built an evolutionary tree of homologs to CYP76F112 gene from species belonging to different plant families, such as mulberries and hemps. The tree showed that CYP76F112 is placed within a group of branches that includes exclusively Ficus sequences. This finding supports the hypothesis that CYP76F112 emerged recently and independently within the Ficus lineage. This mechanism of independent emergence of the same feature in distant branches is called convergent evolution.
In summary, the authors have identified a new cytochrome P450 enzyme that catalyses the first step of furanocoumarin synthesis. Using interdisciplinary approaches such as evolutionary trees and enzyme characterisation studies, they elucidated how this enzyme evolved and highlighted important parts for its activity.
The discovered enzyme is highly stable and efficient, thus could be used as a versatile biocatalyst for sustainable synthetic chemistry applications.
Furanocoumarins are considered promising drug candidates against diseases such as cancer and vitiligo. They also hold promises as sustainable biocontrol tools to reduce the use of synthetic pesticides. Uncovering the synthetic steps of these molecules will help increasing their availability and accessibility, for various medicinal and agrochemical applications.
Original Article:Villard, C. et al. A new P450 involved in the furanocoumarin pathway underlies a recent case of convergent evolution. New Phytologist 231, 1923-1939 (2021).
| https://thesciencebreaker.org/breaks/plant-biology/figuring-out-the-evolved-chemistry-of-fig-trees
It's a story we've heard before and are hearing more and more: a person quits a stressful job (usually in advertising) in the city, moves to the countryside to reconnect with nature, and lives happily ever after.
"Well, it all sounds very idyllic," Kathy Slack tells me. She would know, having moved to the Cotswolds hoping for a respite from the rat race. But with an extra four-hour commute on top of an already hectic schedule, burnout and depression set in. She quit, not knowing what to do next.
It was in the garden that Slack found solace. Wandering, weeding, sowing the first seeds of what was to become a prolific vegetable garden from which she rebuilt a new career, a new life.
"Without sounding too flippant about it, part of the reason I love vegetables so much is because they were my saviors," she wrote in her first book, From the Veg Patch, which chronicles an entire year in the garden, celebrating her 10 favorite things to grow and the most exciting ways to eat them. This has since evolved into a podcast of the same name, in which she aims to provide "15 minutes of rural tranquility" to her listeners.
As her favorite time in the gardening calendar approaches ("Late August, early September is heaven…not too much work and the harvests are spectacular"), we take five minutes with the busy cook to talk about the effect switching to an agrarian lifestyle has had on her mental health, dealing with the ebbs and flows of a busy veg-patch schedule, and her advice for novices and old hands alike.
In your own words, how did you get to where you are now?
I’m not sure I would recommend burnout and depression as a way to discover your true purpose in life, but it was definitely the nudge I needed to turn things around, even if it was a bit dramatic. I’ve always dreamed of country life and the idea of growing up, although my only experience of it is a few potted fuchsias on my basement window sill in London and gazing at River Cottage. So it was a natural refuge when I was sick.
Once I recovered, I had no big plans to change careers and become a food writer. I just did things I loved, thrilled to enjoy anything again, and grateful to have the security and freedom to do so. I worked in a vegetable garden, then in a cooking school, then I did some private kitchen work, I started writing a little more, I started photographing vegetables, and I kept learning and loving it as I went. No direction except towards joy.
I think it's sometimes tempting to create a simple narrative – I got sick, I got better, I had a bright, healthy life again – but it's really not like that. Life is not Instagram. It wasn't a linear progression, just bits and pieces that accumulated over time, and there were a lot of twists, wrong turns, ups and downs along the way.
How does being in the garden and growing and cooking your own vegetables affect your mental health?
It’s magic, really. Both soothing, but also energizing and inspiring. Seeing nature continue despite everything is very reassuring and reduces my worries. It also gives you agency, seeing something you sowed as a small seed grow into a huge plant is very empowering. And at the same time, all the harvests fill me with ideas and inspiration so it’s the perfect combination for me.
Name 3 of your biggest gardening mistakes and what you learned from them.
1. Thinking bigger is better. I had a few different growing spaces on friends' smallholdings before I built a small raised bed at home. The first ones were large spaces, three or four times the size of the home plot, but I much prefer to grow small and close to the house. It's more manageable and so inspiring to see the vegetables from the kitchen window.
2. Running before you can walk. When I first started growing, I gave it my all and tried to grow crops that were, unbeknownst to me, very technical like melons and cauliflower. It would have been much better to start simple and have a few easy successes.
3. Growing potatoes. I know this will be controversial, but honestly, why did I bother?! They take up a lot of space, are cheap as chips (literally!) in the shops, and don't really taste any better than store-bought. I don't even eat a lot of potatoes. If you have plenty of space then fine, but I've crossed them off my growing list.
What is your favorite time of year to grow and/or cook vegetables and why?
Late August, early September is paradise. There isn’t too much work to do apart from weeding and watering, and the harvests are spectacular. You have the summer crops in full swing – zucchini, eggplant, beans, etc. — but fall crops like kale and other crucifers are just starting to arrive, too.
How organized do you need to be to keep control of the vegetable plot? Is it a full time job or do you just do it sometimes? Should you plan the rest of your life around planting and harvesting?
I wing it a lot. I have a planting plan, but I never stick to it because I get distracted, for example, by a bunch of x's I see in a shop that I can't resist and have to make room for.
I plan my vacation around the patch, though. Why would you go away in July, when the crops are plentiful and the plot probably needs a lot of watering? Besides, who could leave a tomato plant at this crucial stage?
Your first book, From the Veg Patch (which I love, by the way), was shortlisted for a GFW award this year alongside winner Ruby Tandoh and Ed Smith. How did that feel?
Thanks. I was pretty blown away, to be honest. It's nice to be in such talented company and a real honor to be shortlisted for my first book.
Tell me about the process of creating a new recipe. For example, does it start in the garden by looking at what is growing here and there, or is there some sort of overall plan to pair the ingredients together?
Yes, it mainly starts in the garden. Either I sow seeds and think about what I will do with the harvest when it comes to bearing fruit, or I'm cooking and staring out the window at the patch when I spot, say, the huge triffid-like tarragon plant and think, "oooh, that would go with the mushrooms I'm about to fry". The garden also guides you. "What grows together goes together," as the saying goes, so if things are ready at the same time – basil and tomatoes, for example – you can be pretty sure they will go well together in a dish. It's very organic, pun intended, and, to me, that's the most exciting part of the whole process.
Do you think growing your own vegetables made you go back to a simpler way of cooking or made you more adventurous in the kitchen?
Much simpler. When you've grown the vegetable yourself, you're so thrilled with it that you don't want to smother it in sauces, mousses, and fancy flourishes. You just want to put it on a plate and love it, the center of the meal. It makes cooking much simpler, which I love. I can't stand a plate feeling more full of chef's tricks than good ingredients.
The recipes in the book are vegetable-centric, but not necessarily vegetarian. I think a lot of people have a hard time accepting the idea of a vegetable being at the center of a dish (although that has changed in recent years). What would you say to these people?
It's true. And I think the key is to get away from the idea that a "good" meal has to have a central protein-based focus with supporting stuff on the side – which comes from the "meat and two veg" tradition. I think that's really limiting when working with vegetables, as it restricts you to nut roasts, quiches, and risotto bowls as focal points. But it doesn't have to be that way. As long as the flavors all work together, you don't need a focal point for the meal.
The podcast seems like a very natural format for the book, which is peppered with such beautiful and easily digestible stories and tidbits, but I was perhaps surprised to learn that a podcast is such a great platform. form for gardening. What do you hope to instill in your listeners? Is it meant to be a guide for people to follow at home, or just listen on the go for some (very soothing) inspiration?
I’m aiming for 15 minutes of rural tranquility. I want to recreate the sense of calm and contentment I feel being in the vegetable garden as a podcast for people who may not be able to get into nature that easily. A listener told me he played the podcast as a way to relax and fall asleep at night. Which I take as a compliment!
Let’s talk about tips for people who are just getting started. What are some easy starter plants and your top tips, and what are your tips for someone growing on a balcony versus someone with a garden?
Just start. If you are growing in containers and have never grown before, don’t start with tomatoes. They are sometimes a bit tricky. Go for easy wins like lettuces or radishes that grow fast, delicious, and easy. Peas are also great in pots or small spaces and don’t need a lot of watering (which is a major consideration if you’re on a balcony).
For the seasoned grower, what do you recommend that they may not have tried before?
Kohlrabi. I had never grown one before this year and it is a revelation. Really easy to grow and so much sweeter and crunchier than store-bought.
“From the Veg Patch” by Kathy Slack (Ebury Press, £25; photography by Kathy Slack). | https://rubyreloaded.com/kathy-slack-from-the-veg-patch-life-is-not-instagram/ |
NEXO has been designing ground-breaking sound reinforcement solutions at its Parisian headquarters since 1979. The company’s pioneering technology, innovative designs and sonic excellence have enhanced live events across the globe for decades, gaining the respect and trust of sound professionals everywhere.
Today, NEXO is a wholly-owned strategic business unit of Yamaha Corporation, a convergence of technological expertise that has resulted in such innovations as the NXAMP Series and 4×4 Powered TDControllers which provide full integration of speaker and amplifier control, and console management of PA systems over the latest and most popular digital networking protocols.
Through the application of convergence-inspired design, NEXO seeks not only to enhance the audience experience through the development of increasingly more transparent and consistently controllable sound systems, but also to serve the wider public through the improved directivity of its systems.
NEXO is wholly focussed on serving the environment, through the adoption of more sustainable and less polluting manufacturing processes, and through the development of systems that are ever more compact, easier to transport and less demanding to set up. | https://www.nexo-sa.com/about-us/philosophy/ |
The recent discovery of the Higgs boson is one of the most important scientific achievements made so far this century, and more than 100 students and community members gathered Wednesday to hear Northwestern professors and CERN researchers discuss the finding.
CERN, the European Organization for Nuclear Research, is one of the world’s largest research organizations, focusing on the investigation of particle physics.
Prof. Michael Schmitt, who worked with Fermilab before joining the Compact Muon Solenoid experiment at CERN, explained the origins of the Higgs boson to the audience in the McCormick Auditorium in Norris University Center.
“All matter is made of fundamental particles,” Schmitt said. “What makes the Higgs so special is that it’s in a category all by itself. It has connections with bosons, force carriers and fermions, which carry matter.”
Schmitt said the resulting Higgs force field is unique because it is everywhere and exists even in a perfect vacuum where other forces do not.
“The Higgs particle is a bump that propagates in a Higgs field,” Schmitt said. “It’s like if you hit the top of a drum, and it reverberates. We measure those theoretical reverberations.”
CERN experimentalist Mayda Velasco described the procedure CERN uses to isolate and identify Higgs bosons.
“We take hydrogen atoms, take off an electron, then accelerate the resulting protons to 3000 times heavier than usual and collide them at different locations in the LHC (Large Hadron Collider), where the results are measured,” Velasco said.
The LHC is a massive, 27-kilometer long circular track 300 feet below the CERN laboratory in Geneva, Switzerland. Each piece of the LHC weighs between 100 and 2,000 tons and has to be lowered by crane. The detectors used in experiments are five stories high and have to record collisions with at least 99 percent accuracy to be useful.
Velasco, who works on the empirical side of LHC research, clarified how Higgs reactions are measured.
“Imagine two big laser beams colliding,” she said. “For every 10 to the 12th interactions, there are only 10 million interesting interactions and 100 Higgs reactions, and even among those, we have to find the cleanest, which are very, very rare. From that number, we get only two photons, or one interaction.”
Physics Prof. Frank Petriello, a theoretical particle physicist, explained how the nature of the Higgs boson is still unknown.
“There are aspects of the Higgs that puzzle us,” Petriello said. “Its mass was supposed to be much heavier, 100 million times heavier, than what we’ve actually measured. One theory about the reasons for this is called super-symmetry, but that might take a while to prove.”
Theorist and physics Prof. Andre de Gouvea illustrated the theory of super-symmetry.
“Super-symmetry tries to help us understand the new particle we’ve found,” de Gouvea said. “It also predicts other particles, which we should start seeing soon … within five to 10 years. Even if we don’t find anything else in the meantime, this experiment will tell us something.”
Petriello agreed, stressing the significance of the Higgs discovery. | https://dailynorthwestern.com/2012/11/08/campus/particle-physicists-discuss-higgs-boson-discovery/ |
As the campus finalizes plans for the arrival of Students, Faculty, and Staff, parking is always a question that needs to be addressed.
With the anticipated return to a fully on-ground campus in the fall, very limited virtual learning, and the elimination of an auxiliary lot, parking will once again be limited. Parking passes/decals will only be issued to those students that are either commuters or have earned 50 or more academic credits.
Registering a Vehicle
The portal to apply for a parking pass/decal for the 2021-2022 academic year will remain open into the fall semester. Students are encouraged to register between July 15, 2021, and August 15, 2021, to allow for the permits to be mailed to a home address. The portal can be found by accessing your myMarist account and clicking on “Parking Registration” under the Quick Links on the left. All students, including Commuter students, are required to obtain and display a parking permit to park a car on campus.
Click to Register for Parking or Check Vehicle Registration
Enforcement
Enforcement of parking policies will commence on the first day of classes. Each member of the Marist community should park in the lot(s) where your permit is assigned. There are signs at each parking lot entrance for your reference. Fines vary from $30 to $100 per violation, and repeat offenders will be booted. This includes the failure to display a parking decal.
Accommodations
For those students that require accommodation, but do not meet the above requirements, please consider completing the Medical Parking Waiver Form which is reviewed by a panel external to Safety and Security.
View the 2021-2022 parking map here.
Students will need to review the map carefully to determine appropriate parking locations and avoid unnecessary parking tickets.
Parking Permits
Parking on campus is limited to those vehicles registered with the Office of Safety and Security and for which a Marist College parking permit has been issued.
View the 2021-2022 parking map here.
Permits issued are valid for the specified lot only. Resident students are not permitted to use staff lots even on weekends. All student vehicles must be registered for each academic year. Faculty, administrator, and staff permits will be issued upon hiring and reissued as required. All permits must be affixed to the driver's side rear seat window.
Parking decals will be mailed to the address of your choice.
Resident Parking
Students will need 50 or more earned credits to be assigned a parking permit.
Students are sent an email in the summer announcing when the online parking permit application system opens. There is no guarantee that a resident will receive a parking permit for the lot closest to their residence. Some resident parking lots are not close to the residence halls and may have to be accessed by walking up or down hills.
Parking in a lot other than the assigned lot is not permitted and tickets will be issued to violators. Students must park in their assigned lot(s) even after 5:30 pm.
Commuter Parking
Commuter students will be assigned each academic year to the College designated commuter parking areas.
Commuters may park in staff lots (Monday thru Friday) after 5:30 pm and all day weekends (excluding lots #6 and #8). Vehicles must be moved by 10:00 pm.
Faculty, Administrator, and Staff Parking
Faculty, administrators, and staff will be assigned, upon hiring or change of work location, to designated parking facilities as near as possible to the location of their work assignment. Parking in any unauthorized lot at any time is not permitted and vehicles will be cited.
Faculty, administrators, and staff who need permits for more than one vehicle should contact the Office of Safety and Security.
Information for Part-Time Faculty
New Part-Time Faculty (First time teaching at the College)
To register online, allow 2 business days after ALL your paperwork (Contract, I-9, Tax Forms, etc.) has been submitted to the HR Office. Your parking permit will be mailed to your home address.
If you have any questions, please email [email protected] or contact the Security Office during business hours (8:00 am to 4:30 pm) or call (845-471-1822).
Returning Part-Time (Prior Teaching position at Marist)
Once you have electronically signed your contract you will be able to register your vehicle online immediately.
Your permit will be mailed to your home address.
Handicapped Parking
Marist College provides designated handicap parking spaces in all lots across the campus. Only vehicles exhibiting official state-issued handicap plates or official local government-issued handicap permits will be permitted to park in designated handicap parking spaces. These permits can be obtained, with appropriate medical documentation, through local law enforcement agencies. The official plates or local permits will be recognized only when the driver of the vehicle or the passenger present is the individual to whom the plates/permits are issued. The College does not issue handicap parking permits. Please be aware that handicap parking rules are enforced on campus by the Town of Poughkeepsie Police Department as well as Marist College Security. Staff and student vehicles with handicapped permits must have their cars registered with the Office of Safety and Security.
Visitor Parking
Visitor Parking is reserved for the use of off-campus visitors only. Visitor parking can be located in the Mid Rise Lot #10.
Parking Policies and Rules
1. The maximum speed limit on all campus roadways and in all campus parking lots is 20 miles per hour.
2. No motor vehicle may be parked at any time in or on:
- Any campus roadway or shoulder of a road or on the grass.
- Fire lanes; within 20 feet of a fire hydrant; an emergency zone or any other area restricted by the college.
- Service vehicle areas; loading dock areas; sidewalks or other pedestrian walkways, including crosswalks.
- Any parking lot other than the one to which the vehicle is assigned and for which a parking pass has been issued.
- Any part of an assigned lot other than a space designated by striping.
- Any designated handicap spot unless a handicap permit is visible.
- Any space designated for visitors or otherwise reserved or restricted.
- Any parking lot access road or through lane, end zone, etc.
- Any location that obstructs roadway or parking lot traffic flow or blocks building access or blocks another vehicle.
3. Snow Removal and Temporary closure of Parking Lots and Roadways:
- The college will temporarily close parking lots and roadways to conduct snow removal operations, to make necessary repairs, or for special events. Registered vehicle owners are expected to comply fully with all related snow removal/closure restrictions and to remove parked vehicles.
4. Traffic Control:
- Vehicles shall be operated on campus at all times so as not to endanger life or property.
- Vehicle operators shall follow the directions on posted traffic signs throughout the campus at all times.
- Vehicle operators shall yield to pedestrians in designated crosswalks and at all other times on campus.
5. Other:
- Abandoned motor vehicles will be towed at the owner's expense.
- Vehicles booted must pay fines and remove the vehicle within 48 hours or vehicle will be towed.
- No trading permits and/or allowing others to use your permit w/out permission from the security office.
- Permits must be affixed to the driver’s side rear window.
- Motorcycles also require a permit.
Parking Ticket Appeals
Appeals must be submitted to, and are reviewed by the Student Judicial Board of Student Government.
Penalties for Violation of Vehicle Use and Parking Regulations
The following fines will be assessed for violators of College vehicle and parking policies. Booting will occur on the fifth ticket, and every subsequent ticket, issued within an academic year. An individual will be considered a "repeat" offender even if all previous fines have been paid.
- Failure to register vehicle: $30.00
- Failure to display parking permit: $10.00
- Parking in a restricted area: $50.00
- Obstructing traffic: $50.00
- Failure to park in marked space: $50.00
- Parking in NO parking area: $50.00
- Parking on road: $25.00
- Parking in Fire/Loading Zone: $50.00
- Blocking doors and exits: $50.00
- Driving on walkway or grass: $25.00 + DAMAGES
- Passing Stop sign: $25.00
- Excessive Speed: $25.00
- Hindering snow removal: $25.00
- Parking in Handicap Space/Ramp: $100.00
- Parking in Cross Walk: $25.00
- Display of unauthorized/altered decal/pass: $75.00 + LOSS OF CAMPUS PARKING
- Blocking vehicle: $25.00
- Boot removal: $75.00
- Forged Permit: $50.00 + LOSS OF CAMPUS PARKING
Registering a Vehicle
Click to Register for Parking or Check Vehicle Registration
(you will be prompted for your Marist Account and password)
If you have questions or experience any problems, please contact the Office of Safety & Security at 845-471-1822. | https://linux.marist.edu/web/guest/security/parking |
The halo effect refers to the widespread human tendency in impression formation to assume that once a person possesses some positive or negative characteristic, other as yet unknown qualities will also be positive or negative, in other words, consistent with the existing impression. It seems as if known personal characteristics radiate a positive or negative halo (hence the name halo effect), influencing a person’s expectations about other as yet unknown qualities. Halo effects reflect the apparent belief that positive and negative characteristics occur in consistent patterns. For example, if you have a positive impression of your colleague Sue because she is always clean and well groomed, and somebody asks you whether Sue would be the right person to organize the office party, you are more likely to answer yes, not because you have any real information about Sue’s organizational abilities but because you already have an existing positive impression of her.
Halo effects were first described in the 1920s by Edward L. Thorndike, and numerous experimental studies have since documented their existence. Halo effects can operate in strange ways, especially when the known qualities of a person are totally unrelated to the characteristics to be inferred. For example, external physical appearance often serves as the basis for inferring internal, unrelated personal qualities. This was first shown in a study that found that physically attractive women were judged to have more desirable internal qualities (personality, competence, happiness, etc.) than homely, unattractive looking women. In a similar way, several studies found that attractive looking people are often judged less severely when they commit a transgression, and attractive looking children are punished less severely than unattractive children when committing the same misdemeanor. The fact that people are even prepared to make judgments about another person’s personality, let alone culpability, based on that person’s physical attractiveness is quite surprising. People can perform this task only by confidently extrapolating from physical attractiveness to other, unknown, and hidden qualities.
Halo effects occur because human social perception is a highly constructive process. As humans form impressions of people, they do not simply rely on objective information, but they actively construct a meaningful, coherent image that fits in with what they already know. This tendency to form meaningful, well-formed, and consistent impressions is also confirmed by other studies conceived within the Gestalt theoretical tradition (represented in social psychology by the work of Solomon Asch, for example).
Halo effects represent an extremely widespread phenomenon in impression formation judgments. Even something as innocuous as a person’s name may give rise to halo effects. In one telling experiment, schoolteachers were asked to rate compositions allegedly written by third- and fourth-grade children. The children were identified only by their given names, which were either conventional names (e.g., David, Michael) or unusual names (e.g., Elmer, Hubert). The researchers found that exactly the same essay was rated almost one mark worse when the writer had an unusual name than when the writer had a common, familiar name. In this case, names exerted a halo effect on the way a completely unrelated issue, essay quality, was assessed.
In some intriguing cases, halo effects also operate in a reverse direction: Assumed personal qualities may influence people’s perceptions of a person’s observable, objective external qualities. In one fascinating experiment, students were asked to listen to a guest lecture. Some were told that the lecturer was a high-status academic from a prestigious university. Others were told that the lecturer was a low-status academic from a second-rate university. After the lecture, all students completed a series of judgments about the guest lecturer. Among other questions, they were also asked to estimate the physical height of the lecturer. Amazingly, those who believed the lecturer to be of high academic status overestimated his physical height by almost 6 centimeters compared to those who believed him to be a low-status person. In this case, academic status exerted a halo effect on perceptions of height, despite the fact that height is in fact a directly observable, physical quality.
When a known negative characteristic gives rise to unjustified negative inferences about the unrelated qualities of a person, the halo effect is sometimes called the devil effect or the horn effect. For example, if your office colleague is often unshaven or unkempt, people are more likely to assume that the person is lazy or incompetent, even though these two qualities may be unrelated.
The existence of halo effects may give rise to long-term biases and distortions in the way a person is assessed. If people expect a person to have generally positive or negative qualities based on very limited information, it is usually possible to find subsequent evidence to confirm such expectations, given the rich and multifaceted nature of human behavior. Such biases may lead to a self-fulfilling prophecy, when people selectively look for and find information to confirm an unjustified original expectation, often triggered by an initial halo effect. The practical consequences of halo effects can be very important in personal and working life, as people will draw unjustifiable inferences from limited samples of behavior. Being untidy, messy, unattractive looking, or late may lead to more negative judgments about other hidden qualities. The principle appears to be the following: Emphasize positive details, and avoid giving people any negative information about yourself, especially when they do not know you very well and so are likely to draw unfavorable inferences based on limited and easily accessible information.
| http://psychology.iresearchnet.com/social-psychology/social-cognition/halo-effect/ |
New Findings Contradict Major Arguments from Medicaid Expansion Supporters
MacIver News Service | January 6, 2014 | [Madison, Wisc.] A common argument for expanding Medicaid nationwide was squashed late last week when newly released data found that people on Medicaid actually go to the emergency room more often than those without insurance.
The results were released by Amy Finkelstein, Ph.D., and Katherine Baicker, Ph.D., the principal investigators of The Oregon Health Insurance Experiment. Finkelstein is the Ford Professor of Economics at the Massachusetts Institute of Technology and Baicker is a Professor of Health Economics at the Harvard School for Public Health.
The Oregon study is one of the first experiments that can accurately measure the outcomes of health insurance and Medicaid with the use of a control group. According to the website, the study, “uses a randomized controlled design – the gold standard for medical evidence – to evaluate the effects of insurance.”
This was all possible because Oregon expanded Medicaid by the use of a lottery system in 2008, which created the perfect opportunity to study the outcomes.
The information released last Thursday said individuals on Medicaid were 20 percent more likely to use the emergency department. Actual visits were measured over an 18-month period and Medicaid enrollees visited the emergency room 40 percent more than the uninsured.
The study found that individuals on Medicaid were more likely to go to the emergency room for cases that did not require a hospital stay, what they deem “outpatient emergency department visits.” Medicaid enrollees were also more likely to visit the emergency room with cases that were classified as, “non-emergent,” “primary care treatable,” and “emergent, preventable.”
This group of individuals on Medicaid is more likely to go to the emergency department because they know the bill will be paid for. Previously, when uninsured, these individuals would have stayed home until they truly felt it was necessary to get medical care. Instead of changing their behavior once they were covered by Medicaid to use a primary care doctor for non-emergency cases, many still just go to the emergency room because they know they will be admitted and the bill will be paid.
Results released previously revealed that Medicaid did not improve physical health outcomes either. Individuals reported they were healthier, but clinical tests showed no statistically significant effect on blood pressure, cholesterol, diabetes blood sugar control, or diagnosis of or medication for blood pressure or cholesterol.
The only health outcome that saw a significant change was levels of depression. Medicaid reduced observed rates of depression by 30 percent, according to the study. However, there was no statistical increase in the use of medication for depression.
This study refutes the claims of many who support the Affordable Care Act and the expansion of Medicaid. A common argument for expanding Medicaid is that individuals would no longer have to rely on uncompensated care at the emergency room. The Oregon experiment shows that they are more likely to rely on the emergency room and that there is no statistical change in health outcomes.
Supporters may have to ask themselves if expanding a $415 billion a year program that doesn’t improve health is really the best decision. | https://www.maciverinstitute.com/2014/01/medicaid-raises-emergency-room-visits-by-40-over-the-uninsured/ |
During the 1980s, Professor Hans Moravec articulated a problem that he believed existed within the field of robotics and computing. The way he saw it, computers could be made to achieve high-level reasoning without much trouble or computation. Low-level sensorimotor skills, on the other hand, were the real challenge - skills that would require enormous computational resources to achieve.
Or as he put it in his book Mind Children (1988): "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
This problem, known as Moravec's paradox, aptly describes the gap we see in robotics today. Whereas supercomputers are increasingly becoming the norm - e.g. IBM's Watson, Blue Gene, and Titan - today's robots are still rather clumsy and delicate compared to humans. And considering that robots are increasingly being relied upon to deal with dirty, dangerous and dreary situations that human beings cannot (or can't be bothered to) deal with, this needs to change.
Towards this end, the Defense Advanced Research Projects Agency (DARPA) has been holding an event where robots undergo a series of trials that test their ability to navigate disaster areas and perform tasks with specialized tools. Known as the DARPA Robotics Challenge (DRC), this competition pits various robot systems and software teams from around the world against each other in a bid to develop robots capable of assisting humans in responding to natural and man-made disasters.
This competition was first announced back in 2012, and over 30 teams assembled to compete in the DRC Trials between December 20th and 21st, 2013 in Florida. At this event, robots were required to make their way through obstacle courses and pass skill-testing scenarios. These included driving a utility vehicle at the site, traveling dismounted across rubble, removing debris from an entryway, opening a door and entering a building, climbing an industrial ladder and traversing an industrial walkway, using a tool to break through a concrete panel, locating and closing a valve near a leaking pipe, and connecting a fire hose to a standpipe and turning on a valve.
The Finals, which will involve 11 teams from all over the world - including Team MIT, NASA's Robosimian and Surrogate, Lockheed Martin's Team Trooper, and Team KAIST from South Korea's Daejeon Metro City - will be taking place between June 5th and 6th at the Fairplex in Pomona, California. Additional teams sponsored by the EU, Japan and South Korea are expected to enter the DRC Finals competition before it officially kicks off.
For the Finals competition, teams will be expected to compete in scenarios similar to what happened at the Trials. However, a number of new elements will be added to challenge the various teams and their designs. For instance, the robots are no longer allowed to be connected to power cords, fall arrestors, or wired communication tethers; humans will not be allowed to physically intervene if their robot falls or gets stuck; speed will be weighed more heavily in scoring and tests will need to be completed in less time (one hour instead of four); and communications will be further degraded and intermittent.
In short, the Finals will test the absolute limits of the robots' ability to function on their own and without human assistance, and will make room for unexpected interference and connectivity issues. And the team that demonstrates the best human-supervised robot technology for disaster response will be taking home the $2 million prize.
Back in June, Dr. Gill Pratt, the DRC program manager, had this to say about the impending Finals:
“Six months ago at the DRC Trials, we began physically testing human-supervised robots against disaster-relevant tasks. Their impressive performance gave us the confidence to raise the bar. A year from now at the DRC Finals we will push the technology even further... For the first time, teams will be empowered to exploit cloud and crowd-augmented robotics, two highly promising research areas that allow onsite operators to leverage remote data, computing, and human resources. These research areas are in their infancy, but after the DRC Finals we hope to see significant innovation.”
Last year, Google’s Schaft humanoid robot took home the top prize after scoring 27 points out of a possible score of 32. IHMC Robotics, based in Florida, grabbed second place, while Carnegie Mellon University’s Team Tartan Rescue placed third. Built by a Japanese start-up – one of Google’s many recent acquisitions – the Schaft is an updated version of the Humanoid Robot Project robot (HRP-2), with hardware and software modifications that include more powerful actuators, a walking/stabilization system, and a capacitor instead of a battery.
Over the course of the trials, the bipedal robot was able to bring stable walking and significant torque power to the fore as it opened doors, wielded hoses, and cut away part of a wall. However, team Schaft lost points when a gust of wind blew a door out of the robot’s hand and the robot was unable to exit a vehicle after navigating a driving course successfully.
At the other end of the spectrum was the Johnson Space Center’s Valkyrie, a biped, anthropomorphic robot that honestly looks like something out of anime or Tony Stark’s lab. This latter aspect is due largely to the fact that it has a glowing chest light, though the builders claim that it’s just a bulge to make room in the torso for linear actuators to move the waist. Officially designated “R5” by NASA, Val was designed to be a high-powered rescue robot, capable of traversing uneven terrain, climbing ladders, using tools, and even driving.
According to the designers, the Valkyrie was designed to be human in form because: "a human form makes sense because we’re humans, and these robots will be doing the jobs that we don’t want to be doing because they’re too dangerous. To that end, Valkyrie has seven degree of freedom arms with actuated wrists and hands, each with three fingers and a thumb. It has a head that can tilt and swivel, a waist that can rotate, and six degree of freedom legs complete with feet equipped with six-axis force-torque sensors."
Unfortunately, the robot failed at Trials last year, scoring 0 points and placing amongst the last three competitors. Luckily, NASA is still in the running thanks to their Robosimian and Surrogate robots. The former, as the name would suggest, was designed with the physical abilities of a simian in mind; the ability to get around on all fours, and use all four limbs as legs and grippers. Surrogate, in contrast, has a twisted spine that connects a set of arms to a wheeled base, and is designed for fast transit and getting into difficult spaces.
Another major contender at the Trials was the Atlas Robot, the humanoid machine created by the robotics company Boston Dynamics. This company, which was acquired by Google last year, is also responsible for the development of the Cheetah robot, the Legged Squad Support System (LS3), Big Dog, RiSE, and Petman. Unfortunately, and despite its anthropomorphic appeal, Atlas did not make it to the Finals either.
Still, amongst the remaining 11 teams and robotic systems, there are plenty of interesting designs adept at handling hazardous situations and performing tasks that could mean the difference between life and death for many.
DARPA says that the point of the competition is to provide a baseline from which to develop robotics for disaster response. Events such as the 2011 Fukushima nuclear disaster, which not only damaged the reactors but made it impossible for crews to respond in time, demonstrate that robots have a potential role to play. DARPA believes that robots that could have navigated the ruins and worked in the radioactive environments would have been of great help.
The problem is that current robots simply aren’t up to the task. Specialized robots can’t be built to deal with the unpredictable, full telepresence control is neither practical nor desirable, and most robots tend to be a bit on the delicate side. What’s needed is a robot that can work on its own, use the tools and vehicles at hand, deal with the unpredictable, and is durable and agile enough to operate in the ruins of a building or a rubble-strewn environment.
If there’s one thing the challenge has demonstrated so far it is that anthropomorphic designs may not be the most well-suited to the task of handling disaster response. Somehow, the human form seems cumbersome and unreliable when powered by actuators and batteries rather than flesh and blood. An ironic outcome, considering that one of the aims of the challenge is to develop robots capable of performing human tasks, but under conditions considered unsafe for humans.
But until such time as robots can be designed to mimic human biology, and not just their form and function, those most suited to the task of replacing us may have to look nothing like us. And as always, the top prize seems to go to those who can think outside the box!
In the meantime, enjoy these videos of some of the most noteworthy contenders, plus some footage of the robots that made it to the Finals:
NASA Valkyrie Robot:
Atlas Robot performing robo-Karate:
Footage of the DRC Finalists:
Sources:
- http://www.theroboticschallenge.org/
- www.darpa.mil/Our_Work/TTO/Programs/DARPA_Robotics_Challenge.aspx
- www.jpl.nasa.gov/news/news.php?feature=4401
- http://www.bostondynamics.com/
- drc.mit.edu/challenge.php#
- http://www.gizmag.com/darpa-robotics-challenge-the-winners-are/30007/
- http://www.cnet.com/news/japans-schaft-has-all-the-right-stuff-at-darpa-robot-trials/
- http://www.wired.com/2013/12/darpa-challenge/
- http://io9.com/meet-valkyrie-nasas-superhero-robot-with-a-glowing-1481990829
- http://www.wired.com/2013/08/robosimian-jpl/
- http://www.cnet.com/news/be-afraid-darpa-unveils-terminator-like-atlas-robot/
Image Credits: | https://www.herox.com/blog/148-the-darpa-robotics-challenge |
My name is Carly and I have a love for all animals great and small. My whole life revolves around animals and I know how worrying it is to leave your babies while you go away.
I currently own 12 rabbits, a parrot and 2 dogs and also have 2 other family dogs living with me. I have always had lots of animals and up until recently I have always had cats too. So I have a wide knowledge of the most common companion animals. I used to own an African Pygmy Hedgehog and sugar gliders too. I have experience with guinea pigs and chinchillas also.
I am very knowledgeable about rabbits in particular, and I know how hard it is to find someone trustworthy who knows enough about rabbits as they are complicated. As rabbits are prey animals they tend to hide illness very well and a lot of problems will go unnoticed if you are not able to pick up on the very subtle signs. I can identify these signs and ensure your pet gets immediate veterinary treatment. I am aware of the proper diet and care that they require and the gentle handling needed in order to avoid unnecessary stress.
I currently work at a cattery, alongside caring for my own animals and pet sitting for several regular cats/rabbits through Pawshake. I have always worked with animals and have worked in several kennels/catteries, several rescues and veterinary practices too. I previously volunteered with Wildlife Aid and regularly pick up injured wildlife to take them to rescues.
VISITS TO YOUR HOME: I am able to do up to 2 home visits a day for cats or small animals in order to feed them, clean any cages/litter trays, give them a brush and some love and anything else that is required. I am able to administer medications, as I have done so with my own rabbits and with a lot of cats and dogs at my work. I am responsible and I take the welfare of animals very seriously, so you can be 100% sure that I will always ensure that your pets are cared for as best I possibly can. I will give you regular updates on how they are doing and will notify you of any problems right away. I look forward to hearing from you and I am happy to answer any questions you may have.
My cat Pookie (age 14) was very relaxed when we got back home. So different from when I last used a cattery. I'm not suggesting the cattery did anything wrong, but cats are very territorial, especially when older. He obviously had a bit of TLC too.
Carly took on a big job for us, looked after our difficult cats for many weeks!
I completed my first year of my foundation degree in Animal Management and Conservation; I am currently taking a break from my studies to work full time.
I would like to undertake pet sitting and animal first aid courses in the future.
Carly
| https://www.pawshake.co.uk/petsitters/greater-london-england/crazy-rabbit-lady-30294
Summary
This year’s update of the Women in Work Index shows that the OECD has continued its gradual progress towards greater female economic empowerment. The Nordic countries, particularly Iceland, Sweden and Norway, continue to occupy the top positions on the Index.
We also explore the gender pay gap in more detail and the time it will take for each country to close the gap at current rates of progress. The gains from closing the gap are substantial: achieving pay parity in the OECD could increase total female earnings by US$2 trillion.
View the key findings below for highlights from our research and explore the results further using our interactive data tool. We provide more detailed analysis and commentary in the full report which you can download below.
Hear key highlights from the research in this 60 second video update
Key findings
The UK experienced a small improvement in its performance, rising from 14th to 13th position in 2015.
Poland stands out for achieving the largest annual improvement, rising from 12th to 9th due to a fall in female unemployment and an increase in the full-time employment rate.
Over the longer term there have been more significant movements in country rankings. Israel and Poland stand out for improving by more than 10 positions since 2000, while the US and Portugal have lost ground.
There are significant economic benefits in the long-term from increasing the female employment rate to match that of Sweden. The GDP gains across the OECD could be around US$6 trillion.
Fully closing the gender pay gap could increase total female earnings by US$2 trillion across the OECD. However, at current rates of progress, the average OECD country would take almost a century to close the pay gap.
[Interactive data tool: for each country and year (2000, 2007, 2011–2015), the tool shows the gender wage gap (shortfall of female relative to male wages), female boardroom representation (2009 figures), the composition of the female and male populations (full-time employment, part-time employment, unemployment, outside the labour force), the gains from closing the gender wage gap (% and USD bn increase in female wages), and the GDP impact of increasing female employment rates to Swedish levels (% and USD bn increase in GDP). Estimates of the economic gains from closing the pay gap and increasing female employment rates are only available for the most recent year of analysis.]
Swimmers must have achieved a qualifying time to be entered into this meet. There are several processes and continuing updates for this specific meet that can confuse parents, so we ask you to review the information for this meet carefully.
UPDATED 12-14-18
Sat/Sun MORNINGS: 7:30am Swimmers/Coaches/Officials; 7:50am Parents/Spectators
Sat/Sun AFTERNOONS: 4:30pm Swimmers/Coaches/Officials; 4:50pm Parents/Spectators
Monday MORNING: 8:30am Swimmers/Coaches/Officials; 8:50am Parents/Spectators
Monday AFTERNOON: 4:30pm Swimmers/Coaches/Officials; 4:50pm Parents/Spectators
To best help parents and qualified swimmers decide upon attending the meet at this time, please read the following:
#1 Attached is a list of swimmers qualified to attend this meet. If your swimmer is not on the list they do not currently have times fast enough to enter this meet.
#2 Attached is a list of qualified swimmers, with all the events each swimmer has qualified to enter. As of 11-05-18, Coach Kiely has finalized which events swimmers will compete in. Relays will be assigned on deck at the start of the meet. Each day/session has a limit on the number of events that each swimmer can be entered in.
*Addendum (ongoing process): Some swimmers have only 1 or 2 qualified events to swim, but a bonus event may have been assigned/added to the swimmer's entry.
Also, we learned from the meet host on 11-5-18 that the projected number of meet attendees has been lowered enough to allow qualified swimmers to add one or two qualified entries. So a swimmer originally with one event a day may be given additional events to swim.
UPDATED INFO: The scratch request was sent the evening of 11-29-18.
Please text or email Coach Kiely ASAP if you must scratch this meet.
Qualified swimmers entered in events held on Monday 12-17-18 may elect to scratch. Attending Monday is a personal choice regarding conflict with school attendance. Parents & swimmers should discuss this and understand that Coach Kiely supports each qualified swimmer's personal choice to attend or decline swimming on Monday.
Team-specific warm-up times will be posted when/if received from the meet host.
Officials who can volunteer, please let Coach Kiely know the day/session.
Volunteer SCY-RI Team timers will be sent a sign up sheet by our helpful team parent volunteers:)
Please review the meet format. *Important: some of our top 9-12 swimmers have qualified for an event as a 9-12 and qualified in the same event as 11 & over. *For this meet, at this time of season, and as Coach Kiely's decision, we WILL NOT enter our 11-12 swimmers in 11 & over events. They will swim timed-final 9-12 events only.
MEET FORMAT: The meet will be conducted in a prelims/finals format for 11 & Over Events (including relays), and a timed finals format for events for swimmers aged 9-12, in accordance with USA Swimming Technical Rules. The Meet Referee reserves the right to make any adjustments to the provisions of the meet announcement necessary to ensure the fair and efficient operation of the competition. In the evening there will be 4 heats for individual events: a 14 & Under non-scoring final, a bonus, a consolation, and a championship heat, in that order, in all events except the 1650 freestyle. For all relay events, the top 20 performing relays from prelims will swim again in Finals (scoring will still occur to 30th place).
WAIVERS: Boston University requires all participating athletes to have a signed waiver from their parents/guardians (if under the age of 18) or from themselves if aged 18 and older. The waiver will be available online for teams to download and distribute.
Forms/Documents:
- UPDATED ATTENDING LIST (red line=new scratch)
- Updated Most current entry times 12-8-18
- PSYCH SHEET for Meet
- Time Line for AM Prelims session & evening Finals
- UPDATED 12-13-18 *New SCRATCH event List
- Prelims-Finals Tip sheet for general athlete info
Photos: | https://www.teamunify.com/EventShow.jsp?id=962792&team=nelssc |
This year ESC chaired a panel on Connectivity at the 5th International Physical Internet Conference 2018 that took place 18-22 June. The newest insights and best practices about data sharing, service platforms, and computational support were discussed during this panel.
It allowed us to stress again the importance of increased connectivity through extensive digitalisation as a key to more effective and efficient supply chains. Examples, among others, came from projects partly funded by the European Commission, like CORE and AEOLIX, and the platform presented by Logit One. Logit One’s SaaS (Software-as-a-Service) platform is uniquely positioned through its combination of rich consolidated data, synchromodal planning, agile networks of forwarders, and a step towards an automatic process execution.
Finally, a project led by the Austrian Institute of Technology dealing with Intermodal Planning gave proof of this increased effectiveness and efficiency.
The conference topics included interconnected logistics, PI fundamentals, business models, governance and implementation, cross-chain control, synchromodal transportation, IT systems, and stakeholders and their roles. New business models, enabling technologies, and experiments already underway were presented, making this meeting a unique opportunity to learn, network, and discuss the latest results and challenges in interconnected logistics.
The conference became an open forum for researchers, industry representatives, government officials, and citizens to explore, discuss, and introduce leading-edge concepts, methodologies, recent projects, technological advancements, and start-up initiatives for current and future Physical Internet implementation.
The Physical Internet Initiative aims at transforming the way physical objects are moved, stored, realized, supplied, and used, pursuing global logistics efficiency and sustainability. Originating from Professor Benoit Montreuil in 2006, this ground-breaking vision, revolutionising current paradigms, has stirred a great interest from scientific, industrial, as well as governmental communities. You can watch here the video made by the European Technology Platform ALICE to explain the concept of the Physical internet. | https://europeanshippers.eu/5th-physical-internet-conference-2018/ |
The ZIP Code maps and database are updated 4 times per year.
Database updated: August 1, 2019; Maps updated: August 2, 2019.
HOUSING AFFORDABILITY INDEX
|Wyncote, PA 19095 Housing Affordability Index|115|
|State of Pennsylvania Housing Affordability Index|131|
The Housing Affordability Index base is 100 and represents a balance point where a resident with a median household income can normally qualify to purchase a median-priced home. Values above 100 indicate increased affordability, while values below 100 indicate decreased affordability.
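As a rough illustration of how an index of this kind behaves, the sketch below computes a ratio-style score in Python; the data provider's actual formula is not given on this page, so the function and the qualifying-income figure are assumptions for demonstration only.

```python
# Hypothetical affordability-style index: 100 means a household earning the median
# income exactly qualifies to purchase the median-priced home; values above 100
# mean greater affordability. This is NOT the provider's actual formula.

def affordability_index(median_income, qualifying_income):
    """qualifying_income: income needed to qualify for the median-priced home."""
    return 100.0 * median_income / qualifying_income

# Example with an invented qualifying income of $70,000:
print(round(affordability_index(median_income=81_501, qualifying_income=70_000)))  # ~116
```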
WEALTH INDEX
|Wyncote, PA 19095 Wealth Index|132|
|State of Pennsylvania Wealth Index|99|
The Wealth Index is based on a number of indicators of affluence, including average household income and average net worth, but it also includes the value of material possessions and resources. It represents the wealth of the area relative to the national level. Values above or below 100 represent above-average or below-average wealth compared to the national level.
These new demographic attributes are available for cities, counties, ZIP Codes and even neighborhoods (Census Blocks) when you search by a specific Pennsylvania address.
POPULATION
|Total Population|6,949|
|Population in Households|6,285|
|Population in Families|4,228|
|Population in Group Quarters|664|
|Population Density|3,319|
|Diversity Index1|64|
INCOME
|Median Household Income|$81,501|
|Average Household Income|$103,818|
|Per Capita Income|$43,035|
|Wealth Index3|132|
HOUSING
|Total Housing Units|3,154 (100%)|
|Owner Occupied HU|1,199 (38.0%)|
|Renter Occupied HU|1,669 (52.9%)|
|Vacant Housing Units|286 (9.1%)|
|Median Home Value|$320,735|
|Housing Affordability Index2|115|
HOUSEHOLDS
|Total Households|2,868|
|Average Household Size|2.19|
|Family Households|1,445|
|Average Family Size|3|
GROWTH RATE / YEAR
| |2010-2019|2019-2024|
|Population|0.81%|0.57%|
|Households|0.9%|0.63%|
|Families|0.61%|0.47%|
|Median Household Income|2.65%| |
|Per Capita Income|2.5%| |
The table below compares 19095 to the other 1,684 ZIP Codes in Pennsylvania by rank and percentile using July 1, 2019 data. The location ranked #1 has the highest value. A location that ranks higher than 75% of its peers would be in the 75th percentile of the peer group.
|Variable Description|Rank|Percentile|
|Total Population|# 531|69th|
|Population Density|# 165|90th|
|Diversity Index|# 65|96th|
|Median Household Income|# 187|89th|
|Per Capita Income|# 172|90th|
Additional comparisons and rankings can be made with a VERY EASY TO USE Pennsylvania Census Data Comparison Tool. | https://pennsylvania.hometownlocator.com/zip-codes/data,zipcode,19095.cfm |
structural integrity and the architectural character of the Mission.
In June 2013, the Foundation announced completion of the first phase of the Mission’s restoration, which was the seismic retrofit and restoration of the historic Mission’s Basilica. It was the third major restoration of the Basilica since it was built in 1797.
Work began in August 2012, when scaffolding was erected, followed by the installation of a weather protection structure over the Basilica. After removing the roof tiles, roof trusses were strengthened by the installation of additional wood beams and metal collectors. Cement bond beams and steel I-beams were inserted to reinforce and tie the structure together. Meanwhile, the 220-year-old walls were stabilized by drilling over 300 center-cored vertical and horizontal holes into which steel rods were inserted and grouted into place, thus strengthening the existing walls without affecting the appearance of the Basilica.
New electrical and fire suppression systems were installed, together with new interior lighting and custom-made chandeliers. The radiant heating system was upgraded and a new Americans with Disabilities Act restroom building constructed. Finally, before the scaffolding was removed, repairs were made to the exterior walls, buttresses, towers, and dome. Special restoration techniques and materials had to be developed that were compatible with existing historic materials.
The Basilica is now three times stronger than before. As a result, the earthquake warning signs have been removed, as it is no longer an unreinforced masonry structure. Because of all these efforts, this historic treasure will be preserved for the enjoyment of future generations.
Originally estimated to cost $7.2 million, the project saved over $1 million by overlapping future Basilica restoration work with the seismic retrofit to take advantage of existing scaffolding and contractor infrastructure already in place. The seismic retrofit, plus all of the restoration work, was accomplished in record time and with no lost-time accidents, mainly due to the professional leadership and the unprecedented cooperation of the preservation team and the outstanding efforts of the general contractor, Blach Construction.
This most recent restoration work on the Basilica represented the completion of the first phase of a multiphase restoration program for the Carmel Mission complex. For a more complete description of this restoration, see the 2013 Basilica Restoration Report.
Following completion of the successful restoration of the Basilica, the Foundation commissioned and funded a comprehensive study to develop a Preservation Master Plan for the remaining historic structures and artifacts in the Carmel Mission complex. This Plan addressed the Mission’s five museums (Downie, Mora, Convento, Munrás, and South Addition); the Basilica forecourt; the Quadrangle Courtyard; the Orchard House complex; and other remaining historic structures. This project is much larger and more complex than the recent $5.5 million Basilica restoration.
Courtesy of Architectural Resources Group and Franks Brenkwitz & Associates
Applying the word restoration to this extensive effort would be somewhat of a misnomer. Though the project builds on the past, in many cases it goes beyond mere “restoration.” Not only will we repair and restore the remaining historic structures, but most importantly, we will seismically strengthen them, enhance life-safety requirements like exiting and fire suppression, and upgrade infrastructure (electrical, lighting, climate control, plumbing). Plans also include improving accessibility to the site and courtyards.
Quadrangle Courtyard Renovation Completed
The first project in Phase II was the $2.0 million renovation of the Quadrangle Courtyard, completed in 2016. The old concrete surface, cracked and with many trip hazards, was removed. New subterranean utility infrastructure such as water and fire lines, drains, sewer, electrical, and communications was installed to support future restoration of the Mission’s historic structures surrounding the Courtyard. The Courtyard was then resurfaced with a stronger, safer, and similar-looking hardscape designed to last for the next 75–100 years.
Museums Next
Currently, the next project will involve the Mission’s museums. The project is being analyzed to determine the scope and optimize preservation work sequencing to minimize cost and Mission disruptions. This project will begin once sufficient funds have been raised. Plans are to complete it in time for the Mission’s 250th anniversary in 2021. As this project involves multiple structures and courtyards, it will be the largest project undertaken yet. In the final analysis, the tradition and responsibility of maintaining this 245-year-old treasure and National Historic Landmark continues with all of us in order to make the Mission safer, more enjoyable, and preserved for future generations.
One of the major topics being addressed in the Mission preservation effort is the visitor experience and visitor traffic flow. To improve visitor accessibility, the slope of the upper parking lot and Basilica forecourt entrance will be reduced to make access easier. A new arrival and entry patio is planned to better handle buses and visitors. The Downie Museum will become an orientation center with a patio containing a 3-D tactile site model of the entire Mission complex. A gently sloped boardwalk is planned for the covered veranda on the south side of the Mora Chapel and Convento museums to provide easier access. The Basilica forecourt will be resurfaced and the entry arch replaced once work in the forecourt has been completed.
Visitor circulation, display cases, lighting, climate control, and information signage will be improved. Museum displays will be upgraded and art and artifacts cleaned and reorganized, centering on the overarching theme of the importance of the Carmel Mission. Most importantly, the historical look and feel of the museums will be preserved.
The Preservation Team
The same preservation team that did such an outstanding job on the Basilica restoration and Quadrangle Courtyard renovation has now been re-assembled for the next project. This team consists of the Mission’s Pastor, Diocesan Construction Coordinator and Architects, the Carmel Mission Foundation, and the General Contractor, Blach Construction. This Team, working collaboratively, funds and drives the Mission preservation projects forward.
Preservation Team members meet with Blach Construction to discuss planning for the remaining Carmel Mission preservation work. (Left to right: Kevin McIntosh, Blach Construction Project Manager; Mike Harney, Blach Construction Project Superintendent; Vic Grabrian, Carmel Mission Foundation President & CEO; Ken Treadwell, Blach Construction Vice President and General Superintendent; Brett Brenkwitz, Franks Brenkwitz & Associates Principal Mission Architect; Brian Kelly, Mission Construction Coordinator; and Pete Johnston, Blach Construction Sr. Project Estimator)
The Process
The Preservation Team has made a concerted effort to be as open and inclusive as possible. Work on the Phase II Mission restoration master plan began with a series of Team meetings in 2014. Subsequently, the Docents were asked to give the Team guided tours of the Mission complex, identifying from a visitor’s perspective things that worked and things that didn’t, and suggesting improvements. This was followed by a series of Foundation-sponsored workshops, attended by Team members, Mission staff, Docents, and parishioners, to further refine ideas. Meanwhile, the Team retained preservation architects, museum consultants, structural engineers, civil engineers, and other consultants to begin investigative and discovery work and to develop recommendations for site safety and accessibility changes, structural seismic strengthening, electrical and fire safety infrastructure upgrades, and museum improvements.
Over the years, there were a series of building campaigns at the Mission. Construction on the present Basilica began in 1793 and was completed in 1797. Also at this time, adobe buildings were constructed to enclose the Mission courtyard on all sides. Following secularization in 1834, all buildings fell into disrepair by the mid-nineteenth century. The first restoration was undertaken in the 1880s by Father Casanova, who raised the necessary funds to put a roof on the old church.
In the second quarter of the twentieth century, a second restoration effort commenced under the leadership of Harry Downie. He devoted five decades of his life to the restoration of the Carmel Mission complex. According to multiple sources, the Basilica is one of the most authentically restored of all mission churches. His work included putting a new roof on the Basilica in 1937, returning the roofline to its original look. The overall restoration of the Carmel Mission complex, now underway, constitutes the third major restoration of the Mission.
We need your help now. To learn more about the different ways you can help, please visit our How to Help web page or click on the Donate button below. Thank you. | http://carmelmissionfoundation.org/restoration.htm
Description:
One of the main goals of the US Department of Health & Human Services Healthy People 2010 was to eliminate health disparities associated with gender, race/ethnicity, education/income, disability, geographic location, or sexual orientation1. This goal has been carried over to Healthy People 2030 to “eliminate health disparities, achieve health equity, and attain health literacy to improve health and well-being of all.”2 Eliminating health disparities is a complex process that requires attention to “powerful, complex interactions between health and biology, genetics, and individual behavior, and between health and health services, socioeconomic status, the physical environment, discrimination, racism, literacy levels and legislative policies”3. One of the APTA’s guiding principles for achieving the vision of our profession addresses “access/equity”. Through this principle, the APTA acknowledges the existence of health inequities and disparities and commits to the development of innovative models for addressing them, including partnering with communities.4 Community-based participatory approaches have been shown to be effective in providing a systematic approach to addressing the complex, multiple factors that perpetuate health disparities.5
This session will describe APTA member physical therapists’ experiences developing, implementing and advocating for community-based outreach programs and initiatives to improve health, wellness, and prevention awareness and address racial health disparities. The session will draw from community-academic partnership projects, focus groups of Hispanic & African American communities and medical providers in medically underserved Chicago neighborhoods, results from focus groups of APTA members (conducted at CSM 2016 & 2017), and review of literature, to guide participants through the various phases of forming culturally responsive community-based partnerships and navigate sociocultural dynamics that emerge in the process.
CEU: 0.15
| https://apta.expoplanner.com/index.cfm?do=expomap.sess&event_id=28&session_id=14661
FUNDAMENTALS OF THE RESISTANCE OF MATERIALS AND BASIC CONCEPTS OF STRENGTH
After studying chapter 2, the bachelor should:
know
- basic definitions, hypotheses and assumptions;
- types of deformation of the body;
- the basis of the stress-strain state of the body;
- the basic formulas for determining stresses and strength reserves;
- methods and principles of calculation for strength, rigidity and stability of structural elements;
be able to
- apply the knowledge gained in practice;
- perform strength calculations for various types of body loading;
- choose different schemes for evaluating the strength of real parts and mechanisms;
own
- conceptual apparatus in the field of strength;
- methods of calculation for strength;
- the skills of applying the knowledge obtained to practical calculations.
Basics
Hypotheses and assumptions
The task of the resistance of materials (also called the strength of materials, SM) is the development of fairly simple but effective methods for calculating the strength, stiffness and stability of structural elements.
We give the definitions of the basic concepts of resistance of materials.
A beam is a body whose two dimensions are small compared to the third (length). The line connecting the centers of gravity of the sections of the beam is called its axis. Depending on the shape of the axis, straight and curved bars are distinguished. Bars may be of constant or variable cross section, solid or non-continuous, and have an open or closed cross-section profile.
Deformation is a change in the shape and dimensions of a solid or of its individual parts.
Displacement is a change in the position of the body or its individual parts in space.
If, after removal of the load, the body returns to its original shape and size, this phenomenon is called elasticity. Deformations of the body that disappear after the removal of the load are called elastic. If, after removing the loads, the body does not fully regain its original shape and dimensions, i.e., it acquires residual deformations, this phenomenon is called plasticity.
Strength is the ability of a structure or its elements to withstand external loads without breaking.
Stiffness is the ability of a structure or its elements to resist elastic deformations.
Stability is the ability of a structure and its elements to maintain a certain equilibrium shape.
The SM is based on a number of hypotheses and assumptions that make it possible to simplify the solution of the tasks posed.
1. It is assumed that the material of the deformed body fills the entire volume before and after loading, i.e., the body has no voids or cracks. This assumption makes it possible to apply the methods of mathematical analysis to solving problems of the resistance of materials.
2. The material of the deformable body is homogeneous: it does not contain inclusions that change its physical and mechanical properties in any arbitrarily small microvolume.
3. It is assumed that the material is isotropic, i.e., its physical and mechanical properties are the same in all directions during loading. Materials that do not have this property are called anisotropic.
4. The material has ideal elasticity, i.e., after unloading, deformations completely disappear. The property of ideal elasticity is described by Hooke's physical law: within certain loading limits, the displacements of points of an elastic body are directly proportional to the forces causing these displacements.
For linearly deformable systems, i.e., within the framework of Hooke's law, the principle of superposition (independence of the action of forces) holds: the result of the action of a group of forces does not depend on the sequence in which the structure is loaded and is equal to the sum of the results of the action of each of the forces separately.
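As an illustration, for a straight bar loaded along its axis these two statements can be written compactly; the symbols below are generic and are introduced here only for the example (F is the axial force, L the bar length, A the cross-sectional area, E the modulus of elasticity), since the chapter has not yet defined them:

\[
\delta = \frac{F L}{E A}, \qquad
\delta_{\text{total}} = \sum_{i} \delta(F_i) = \frac{L}{E A} \sum_{i} F_i .
\]

The first relation expresses Hooke's law (displacement proportional to force), and the second restates the superposition principle: the total displacement produced by several forces equals the sum of the displacements each force would produce alone.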
5. The Saint-Venant principle: in sections sufficiently far from the place of application of the load, the stress-strain state does not depend on the way the load is applied. Based on this principle, a distributed load can be replaced by concentrated forces in calculations.
6. The principle of invariance of the initial dimensions: the change in linear dimensions under loading is much smaller than the initial dimensions, i.e., the displacements of points of the body due to its elastic deformations are small in comparison with the dimensions of the body.
| https://testmyprep.com/subject/equipment/bases-of-resistance-of-materials-and-perceptions
Patients with longstanding rheumatoid arthritis, called refractory, who had been treated with tumor necrosis factor inhibitors (TNFis), saw improvements after a year of taking rituximab (Rituxan). That’s according to an analysis from the Corrona RA registry, which was published in Clinical Rheumatology.
“Our results demonstrated that treatment with rituximab can improve health-related quality of life,” says Leslie Harrold, an associate professor at University of Massachusetts Medical School and a senior medical director for pharmacoepidemiology and outcomes research at Corrona.
A year after taking rituximab, the 667 patients in the registry, who had failed to see improvement from at least one TNF inhibitor, saw 49 percent improvement in overall patient global assessment, 47.1 percent experienced less pain, and 49.8 percent were less fatigued. (Of the 667 patients, 57.4 percent had used two or more TNFi before rituximab, and nearly 80 percent were female.)
“This is particularly notable, since the cohort consisted of the harder-to-treat patients, meaning those with long-standing disease with prior use of other biologic agents,” Dr. Harrold says.
The Corrona RA registry, per its site, is the “largest RA real world prospective cohort study in the world,” with more than 40,000 patients enrolled and more than 130,000 patient years tracked.
Data from patient-reported outcomes (PROs) is “incredibly important,” Dr. Harrold says. Since the goal of RA therapy is to reduce inflammation and pain and to improve function, physicians and researchers need to ask patients how they’re doing for them to receive optimal care.
“Collection of PROs is essential in order to ensure we are addressing the needs of the patient,” she says. “I think there is growing interest in patient-reported measures, and more research is emerging in this area.”
The data can also inform patients’ decision making processes. “Patients want to know what to expect for outcomes,” Dr. Harrold says.
The analysis “is among the first to describe patient real-world experience with rituximab treatment one year after initiation,” she and colleagues write in the analysis.
Having “just scratched the surface looking at the impact of disease on patient-reported measures,” Dr. Harrold says future research will focus on understanding how illness and its treatment affect patients’ day-to-day lives. “We should better understand the impact on home life, participation in social activities, and employment,” she says.
A study limitation is that researchers only collected patient-reported outcomes at year one, “so longer-term effects are uncertain,” reports Medpage.
“In this cohort of patients with established disease, it was gratifying to see the improvements that can occur,” Dr. Harrold says. | https://creakyjoints.org/treatment/patients-report-successful-outcomes-after-a-year-of-rituximab-analysis-shows/ |
THE FRYXELL LAB
Understanding the impact of harvesting pressures on population dynamics.
Our lab's research on harvesting is working to understand how different harvest strategies affect exploited populations and to develop more sustainable harvesting management techniques to aid in the population recovery of fish and wildlife.
Goal
To explore the interaction between harvest selectivity, harvest intensity and environmental factors and to evaluate the effects of these different drivers on the population dynamics of a continuously exploited, size-structured population.
We explore the impact of various harvesting strategies such as harvest pressure, selectivity as well as interactions with other anthropogenic or environmental factors on model populations through controlled laboratory experiments, field studies and by evaluating long-term historical data sets.
Interactions between harvest selectivity and temperature in an exploited population
Observing the responses to harvest selectivity within differing climate conditions in a continuously exploited, size-structured population.
Long-term effects of harvest selectivity on life history traits
Using bench top experiments to look at long term effects of different harvesting strategies on individual life history traits (number of offspring, reproductive frequency, body size).
Sustainable harvesting strategies.
Developing theoretical models to test the impact of current harvesting techniques on the harvested population and to evaluate the resulting economic consequences. | https://www.fryxell-lab.com/harvesting |
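As a rough sketch of what such a theoretical model can look like (this is not the lab's actual model; the logistic growth form, the constant-effort harvest rule, and all parameter values below are illustrative assumptions), a harvested population can be simulated in a few lines of Python:

```python
# Minimal sketch of a discrete-time logistic population with constant-effort harvesting.
# All parameters (r, K, effort) are made-up illustrative values, not the lab's.

def simulate(r=0.5, K=1000.0, effort=0.2, n0=500.0, years=50):
    """Return the population trajectory under logistic growth and harvesting."""
    n = n0
    trajectory = [n]
    for _ in range(years):
        growth = r * n * (1.0 - n / K)   # logistic recruitment
        catch = effort * n               # constant-effort harvest
        n = max(n + growth - catch, 0.0)
        trajectory.append(n)
    return trajectory

if __name__ == "__main__":
    for effort in (0.1, 0.25, 0.5):
        print(f"effort={effort:.2f} -> final population ~ {simulate(effort=effort)[-1]:.0f}")
```

Varying the harvest effort in a toy model like this shows the qualitative pattern such analyses look for: moderate effort settles at a sustainable equilibrium, while effort at or above the population's intrinsic growth rate drives it toward collapse.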
Selection of song perches by Cerulean Warblers.
Keywords: Cerulean Warbler, Dendroica cerulea, song-perch trees, territory, Indiana
**********
The Cerulean Warbler is a species of conservation concern; Breeding Bird Survey (BBS) data have demonstrated an annual population decline of 3.7% between 1966-1996 (Hamel 2000a). Only five other North American breeding bird species showed greater declines. Until recently, very little study had focused on this species (Hamel 2000b). Because of its dependence on large tracts of mature deciduous forest for successful breeding, much of its decline is likely due to extensive loss and fragmentation of forest tracts for agricultural use (Oliarnyk & Robertson 1996; Hamel 2000a).
Robbins et al. (1992) demonstrated that the Cerulean Warbler is a canopy-dwelling species. Among closely related species, this small bird spends most of its time higher in the canopy than other wood warblers (Hamel 2000a). Some studies have suggested that Cerulean Warblers use larger trees as song perches (Lynch 1981; Robbins et al. 1992; Hamel 2000a). However, given the substantial variation in habitat across its breeding range (differences in tree species composition, size of available trees, and forest tract size) and in behavioral variables (territory size and site fidelity), inquiry into song-perch tree characteristics on the regional level is essential (Hamel 2000a; Jones & Robertson 2001; Roth 2004). The purpose of this study was to determine if song-perch trees are larger and taller than surrounding trees within Cerulean Warbler territories in southern Indiana. Also, selection of specific tree species as song-perches was investigated.
STUDY AREA
This study took place from 1 May to mid-August of 2004 and 2005 in the Pleasant Run unit of the Hoosier National Forest, Yellowwood State Forest, and Morgan-Monroe State Forest in Brown, Morgan, Lawrence, and Jackson counties, Indiana (Fig. 1). Historically, the Cerulean Warbler was one of the most abundant breeding warblers in the Ohio and Mississippi river valleys (Hamel 2000a). As a part of that area, forest blocks used in this study are among the largest and most unfragmented in southern Indiana.
[FIGURE 1 OMITTED]
METHODS
Bird surveys.--In each of 10 study sites, presence of male Cerulean Warblers was determined by walking seven transects within a 1.96 km² plot, with seven sampling points per transect, each point 200 m apart (Fig. 2). Transect point locations were recorded in Universal Transverse Mercator (UTM) coordinates using Global Positioning System (GPS) receivers. To reduce edge influences, surveys were conducted > 50 m from roads. Bird surveys began 1 May and were completed by 30 May. Surveys were conducted between 0530-1030 h, excluding rainy days (presence of precipitation), when cessation or reduction of vocalizing may occur. At each survey point, 3 minutes of listening for Cerulean Warbler vocalizations commenced, followed by a 15-second playback of a conspecific male song in each of the cardinal directions to elicit a vocal response. This was followed by an additional three minutes of listening before moving to the next survey point (Falls 1981). The compass bearing and distance of detected males were estimated from the nearest transect point.
[FIGURE 2 OMITTED]
Territory mapping.--Most male Cerulean Warblers were relocated after initial detection during surveys because they maintain territories during the breeding season (Hamel 2000a). Once surveys provided the initial location of a male, territories were mapped by flagging a minimum of 5 trees in which males vocalized and perched. Singing from territory boundaries is the primary means of defending a territory and attracting potential mates (Hamel 2000a). For the purpose of this study, trees in which males vocalize are called song-perch trees. A territory is defined as the area within the perimeter of song-perch trees. UTMs of song-perch trees were recorded for ease in returning to the territory and calculating the territory center.
Vegetation measurement and analysis.--From early July to mid-August 2004 and 2005, vegetation was sampled within each of 43 territories, using the methods of James & Shugart (1970) outlined below. In territories, a 0.04 ha circular plot was marked at the approximate center of the territory. Diameter at breast height (dbh) and height were recorded for each tree in the plot with dbh ≥ 10 cm, and species for each tree with dbh ≥ 3 cm. A Nikon Laser 440™ compact rangefinder was used to determine tree heights. Dead trees were measured in the same manner as live trees. Dbh, height, and species were also recorded for all song-perch trees.
Dbh, height, and species were compared between trees from territory sample plots and song-perch trees. Results were calculated using all individual trees, as well as means calculated by territory; t-tests were used for comparison of individual trees, and paired t-tests were used for comparison of means. Chi-square analysis was used to compare tree species diversity between trees from sample plots and song-perch trees. Level of significance was set at P = 0.05.
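As an illustration of the analyses described above, the following minimal Python sketch runs the same three kinds of tests on placeholder numbers; the arrays are invented for demonstration and are not the study data.

```python
# Illustrative only: the three tests described in the Methods, run on made-up data.
import numpy as np
from scipy import stats

# Placeholder dbh values (cm): song-perch trees vs. all trees in territory plots
perch_dbh = np.array([41.2, 47.5, 39.8, 52.0, 44.1])
plot_dbh = np.array([22.4, 31.0, 18.7, 27.9, 35.2, 24.6])
t_ind, p_ind = stats.ttest_ind(perch_dbh, plot_dbh)        # individual trees

# Paired t-test on per-territory means (one pair of means per territory)
perch_means = np.array([44.3, 42.9, 45.1])
plot_means = np.array([27.7, 28.4, 26.9])
t_pair, p_pair = stats.ttest_rel(perch_means, plot_means)

# Chi-square test comparing species counts between perch trees and plot trees
observed = np.array([[116, 105, 14],     # perch trees by species (placeholder)
                     [38, 28, 225]])     # plot trees by species (placeholder)
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

alpha = 0.05
print(p_ind < alpha, p_pair < alpha, p_chi < alpha)
```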
RESULTS
Statistical analyses were computed using pooled data from 2004 and 2005. Mean values are reported as mean ± 1 SD. Mean number of song-perch trees was 13.8 trees per territory (range 5-27). Cerulean Warblers used significantly larger (x̄ = 43.0 ± 14.1 cm, n = 594, P < 0.001, t = 19.26) and taller (x̄ = 26.9 ± 4.11 m, n = 591, P < 0.001, t = 18.10) trees for song perches than the average trees available within territories (x̄ = 27.4 ± 15.6 cm, n = 751; x̄ = 21.5 ± 6.04 m, n = 604, dbh and height, respectively) (Figs. 3, 4). Comparison of means by territory reflected the same pattern; perch trees were larger (x̄ = 44.3 ± 6.3 cm, n = 43) and taller (x̄ = 27.0 ± 2.2 m, n = 43) than surrounding trees (x̄ = 27.7 ± 3.7 cm, n = 43; x̄ = 21.6 ± 2.0 m, n = 43, dbh and height, respectively) within territories (Figs. 5, 6).
[FIGURES 3-6 OMITTED]
Of the 39 tree species (including snags) present in territories, 12 were used more often than expected as song-perch trees (Table 1). Six species were used with less frequency than expected as song-perch trees (Table 1). Pawpaw (Asimina triloba), blue beech (Carpinus caroliniana), ironwood (Ostrya virginiana), and grape (Vitis spp.) were never observed being used as song perches by Cerulean Warblers. Notably, 116 of 595 song-perch trees in our study were white oaks, and 105 were bitternut hickories. These two species made up 37% of all song-perch trees in the study.
DISCUSSION
Within our study, only a handful (~2%) of song-perch trees within territories were < 15 m, and these were associated with canopy gaps or clearings. The vast majority of song-perch trees were relatively mature trees that were larger than those around them. Cerulean Warblers have long been known as canopy-dwelling birds (Robbins et al. 1992; Hamel 2000a). Robbins et al. (1992) noted that Cerulean Warblers were more often found in trees with larger dbh, and spent most of their time above the middle of the tree. In this study, song-perch trees had significantly larger dbh and height. Dbh and height are related, but there is much variation possible across individual trees, which gives value to comparison of both variables (Morey 1936).
Cerulean Warblers may also be selecting territories based on the presence of ideal song-perch trees. This will not always be apparent, as they will sing while foraging, preening, and in flight. However, when singing is the focus, they often choose and make return visits to exposed perches from which they sing (Hamel 2000a; KJ pers. obs.). Taller and larger trees may offer more high-quality song perches conducive to vocal projection. In a study of Golden-winged Warblers (Vermivora chrysoptera), song-perch trees were significantly larger than expected; and it was suggested that this song perch selection enhanced the birds' ability to display vocally and visually in mate attraction (Rossell 2001).
Much of the research on Cerulean Warbler habitat has focused largely on canopy structure, with less attention to tree species composition (Hamel 2000a; Jones & Robertson 2001; Weakland & Wood 2005). Oliarnyk (1996) and Hamel (2000b) reported no preference in Cerulean Warblers for tree species in nesting or foraging in Ontario and Tennessee populations, but song-perch tree preferences were not investigated. Among 13 species of foliage-gleaning birds in floodplain forests in southern Illinois, the Cerulean Warbler was the second most selective bird, closely following the Yellow-throated Warbler (Dendroica dominica), in tree species usage (Gabbe et al. 2002).
Gabbe et al. (2002) found that Cerulean Warblers in Illinois showed the strongest preference for shellbark hickory (Carya laciniosa) and bitternut hickory (Carya cordiformis). Tree species that were most strongly avoided were red maple (Acer rubrum) and blue beech. Shellbark hickory had such a low density overall in our study area that its usage by Cerulean Warblers cannot be compared to the study in southern Illinois. We also found that males selected bitternut hickories as song perches together with white oaks. Red maples were avoided, and blue beeches were never used for song perches. It appears that Cerulean Warblers are adaptable in their use of tree species across their breeding range, but where breeding areas have tree species in common, Cerulean Warblers appear to be fairly consistent in species selection.
Thirty percent of trees in territories were sugar maples, yet only 2.4% of song perch trees were of this species. On the other hand, white oaks and bitternut hickories represented only 5.1% and 3.7% of trees within territories, respectively, but 19.5% and 17.6% of song perch trees were of those species. The appearance of tree species selectivity in our study area may be due to a relationship between species and size parameters. Sugar maples may not have been used in proportion to their abundance because many of them were not canopy trees, and being a canopy-dwelling species, Cerulean Warblers would not encounter them. However, even in territories where sugar maples made up the majority of canopy trees, some males avoided them completely during our observations. The only sugar maples recorded as song perch trees were concentrated in the territories of what appeared to be a few exceptional birds, most of which were unpaired. Further study addressing tree species importance, or at least importance of certain crown types associated with groups of species, would be worth pursuing.
This study demonstrated that in southern Indiana, Cerulean Warblers are utilizing the largest and tallest trees in their territories as song perches. These trees may offer individuals some advantage in territory defense and mate attraction, acoustically and/or visually. Cerulean Warblers in southern Indiana also used tree species disproportionately to their availability, just as they have been demonstrated to do in Illinois (Gabbe et al. 2002). Exploration of more specific aspects of song perch selection (e.g. documentation of perch heights relative to tree heights, approximate girth of perch branches or twigs, foliage cover on perch) throughout the breeding range would be helpful in deepening our understanding of the specific habitat needs of Cerulean Warblers (Robbins et al. 1992).
ACKNOWLEDGMENTS
Funding for this study was provided by the Indiana Academy of Science, Ball State University Office of Academic Research and Sponsored Programs, U.S. Fish and Wildlife Service, U.S. Forest Service, and Sigma Xi. This study would not have been possible without the field assistance of Kirk Roth, April Howard, Sarah Register, Matt Kellam, and Corey Shaffer. We thank John Castrale, one anonymous reviewer, and the Editor of the Proceedings for their comments and suggestions which greatly improved the quality of the manuscript.
LITERATURE CITED
Falls, J.B. 1981. Mapping territories with playback: An accurate census method for songbirds. Studies in Avian Biology 6:86-91.
Gabbe, A.P., S.K. Robinson & J.D. Brawn. 2002. Tree-species preferences of foraging insectivorous birds: Implications for floodplain forest restoration. Conservation Biology 16:462-470.
Hamel, P.B. 2000a. Cerulean Warbler (Dendroica cerulea). In The Birds of North America, No. 511 (A. Poole & F. Gill, eds.). The Birds of North America, Inc., Philadelphia, Pennsylvania.
Hamel, P.B. 2000b. Cerulean Warbler status assessment. U.S. Fish and Wildlife Service, Minneapolis, Minnesota.
James, F.C. & H.H. Shugart. 1970. A quantitative method of habitat description. Audubon Field Notes 24:727-736.
Jones, J. & R.J. Robertson. 2001. Territory and nest-site selection of Cerulean Warblers in eastern Ontario. The Auk 118:727-735.
Lynch, J.M. 1981. Status of the Cerulean Warbler in the Roanoke River basin of North Carolina. Chat 45:29-35.
Morey, H.F. 1936. Age-size relationships of Hearts Content, a virgin forest in northwestern Pennsylvania. Ecology 17:251-257.
Oliarnyk, C.J. 1996. Habitat selection and reproductive success in a population of Cerulean Warblers in southeastern Ontario. M.S. thesis, Queen's University, Kingston, Ontario, Canada.
Oliarnyk, C.J. & R.J. Robertson. 1996. Breeding behavior and reproductive success of Cerulean Warblers in southeastern Ontario. Wilson Bulletin 108:673-684.
Robbins, C.S., J.W. Fitzpatrick & P.B. Hamel. 1992. A warbler in trouble: Dendroica cerulea. Pp. 549-562 In Ecology And Conservation Of Neotropical Migrant Landbirds (J. M. Hagan, III. and D. W. Johnston, eds.). Smithsonian Institution Press, Washington, D.C.
Rossell, C.R., Jr. 2001. Song perch characteristics of Golden-winged Warblers in a mountain wetland. Wilson Bulletin 113:246-248.
Roth, K.L. 2004. Cerulean Warbler breeding biology. M.S. thesis, Ball State University, Muncie, Indiana. 46 pp.
Weakland, C.A. & P.B. Wood. 2005. Cerulean Warbler (Dendroica cerulea) microhabitat and landscape-level habitat characteristics in southern West Virginia. The Auk 122:497-508.
Manuscript received 27 December 2005, revised 7 March 2006. | https://www.thefreelibrary.com/Selection+of+song+perches+by+Cerulean+Warblers-a0162455041 |
Acadian cuisine facts for kids
Acadian cuisine (French: Cuisine acadienne) is the traditional dishes of the Acadian people. It is primarily seen in the present-day cultural region of Acadia. Acadian cuisine has been influenced by many things throughout its history, namely the Deportation of the Acadians, proximity to the ocean, the Canadian winter, soil fertility, the Cuisine of Quebec, Native Americans, American cuisine and English cuisine. The cuisine of immigrants and trade with specific regions of the world have also played small roles.
Acadian cuisine is not very well known in Canada or internationally. It has much in common with Québécois cuisine because of shared history and heritage as well as geographical proximity. The two often feature the same dishes, but the cuisine of Acadia puts more emphasis on seafood. Acadian cuisine has notably served as the base for Cajun cuisine because the Cajuns are descendants of Acadians who were deported to Louisiana. It's also believed that Acadians are responsible for normalizing potato consumption in France—a vegetable the French once considered poisonous.
History
French colonists who settled Acadia in the 17th century adapted their 16th-century French cuisine to incorporate the crops, seafood and animals that flourished in the region. Their descendants became the Acadian people and their ingenuity created Acadian cuisine.
After the English conquered Acadia during the 18th century, they decided to deport the Acadians and take their settlements, which were often built on the most fertile land in the colony. Most Acadians did not manage to escape the deportation. But, of those who did, most fled to the east and north of New Brunswick. As such, Acadian cuisine in the 18th century was refocused around what could be grown and used in the less fertile lands of the East Coast of New Brunswick and the Upper St. John River Valley.
Ingredients
Acadian cuisine often features fish and seafood, especially cod and Atlantic herring, but also mackerel, berlicoco, lobster, crab, salmon, mussels, trout, clams, flounder, smelt and scallops. Most fish is consumed fresh, but some are boucané (smoked), marinated or salted.
The most commonly used meat is pork, followed by chicken and beef. As in the rest of North America, turkey is commonly consumed during the holidays. Game such as deer, hares, ruffed grouse and moose is consumed regularly in some regions. When available, game often replaces livestock meat and can be given as a gift. In some regions, for example Caraquet and the Îles-de-la-Madeleine, more unusual game such as seal, bear and seagull is or was caught.
The vegetables of Acadian cuisine are the potato, onion, carrot, turnip, legume, beet, squash and corn. These vegetables were popular because they were easily preserved for the winter in root cellars and jars.
Popular fruits include: blueberries, apples, strawberries, raspberries, blackberries, plums, pears and cranberries.
Some ingredients like rice, molasses, dried raisins and brown sugar make a frequent appearance in Acadian recipes because of historical commerce between Acadia and regions like the Antilles and Brazil.
Dishes
Some examples of traditional Acadian dishes are:
- Beurre de homard—lobster butter
- Bouilli Acadien—a boiled dinner consisting of potatoes, salted beef or pork, carrots, green beans, cabbage and turnips.
- Bouillie à la viande salée
- Bouillon aux coques
- Chiard/Mioche—purée of potatoes, carrots and/or turnips.
- Chow-Chow— a North American pickled relish.
- Cipâte—sea-pie
- Coques frites—fried clams
- Coquille Saint-Jacques—a sea scallop dish.
- Cretons—a type of boiled, ground pork pâté.
- Croquettes de poisson—fishcake.
- Doigt-à-l'Ail—garlic finger
- Fricot—a type of stew, consisting of potatoes, onions, and whatever meat was available, topped with dumplings.
- Morue bouillie avec patates et beurre fondu—boiled cod with potatoes and melted butter.
- Pain au homard—a lobster and mayonnaise sandwich.
- Pâté au poisson—fish paste.
- Pâté chinois—mashed potatoes, ground beef and creamed corn.
- Pets de sœurs—"nuns’ farts", pastry filled with butter and brown sugar, rolled, sliced and baked.
- Ploye—a pancake-like flatbread made from a mix of buckwheat and wheat flours.
- Pouding chômeur—poor man’s pudding.
- Poutine râpée—boiled potato dumpling with a pork filling.
- Poutine à trou—baked apple dumplings.
- Poutine au bleuet—French fries with cheese, gravy, and blueberries.
- Ragoût—a thick kind of soup.
- Rappie pie/Râpure—grated potatoes and chicken or salted pork.
- Soupe aux pois—Canadian pea soup.
- Tarte au sucre acadienne—sugar pie.
- Tchaude—fish chowder.
- Tourtière: meat pie. | https://kids.kiddle.co/Acadian_cuisine |
Fog over the Forest: How do the land and atmosphere affect one another?
We all know that humans are changing the planet’s surface – deforestation, farming and urbanisation have radically transformed the environment that surrounds us. We also know that humans are changing Earth’s climate, mostly through greenhouse gas emissions, and that the changes we make to the landscape contribute directly towards this, whether it’s through methane released from the ruminants we graze or smoke from our burning of the forests. But what’s less clear is the effect that changes to the climate and atmosphere themselves have on the world’s physical landscape beyond our direct control, and whether this can help to offset or will worsen global warming and further climate change. In other words, will Earth’s surface provide a ‘negative’ or ‘positive’ feedback to its climate?
Plants can be affected by our greenhouse gas emissions in two ways. As we all know, they take in carbon dioxide during photosynthesis, so we might expect that increased concentrations of the gas in our atmosphere would increase the rate of plant growth, a so-called ‘negative feedback’ because this would draw down more carbon dioxide, locking it up in trees and other vegetation and decreasing the atmospheric concentration again. Studies have shown that this does indeed occur, at least on a small scale: pumping carbon dioxide into tree plantations to double its concentration produced a twenty per cent increase in plant biomass above ground and a forty per cent increase below ground in a recent Free Air Concentration Enrichment (FACE) experiment.
But the extent to which this will follow through to larger-scale forests and grasslands is unclear. Furthermore, in parts of the world where the climate changes significantly, the excess stress put on plants may hinder their growth – a second, less desirable, effect of our emissions. For example, the Amazon is expected to get warmer and drier, conditions that are less conducive to tree growth and survival. Therefore, a ‘positive feedback’ could develop in some regions whereby climate change induced by greenhouse gases makes it harder for plants to grow, decreasing the uptake of carbon dioxide. This would switch places like the Amazon from carbon ‘sinks’ to carbon ‘sources’ within decades, even if humans don’t cut down more trees and convert the land to animal pasture.
And it’s not just vegetation we need to account for. The soil itself ‘respires’ through a variety of chemical and biological processes that cause it to give out greenhouse gases of its own. The amount released changes with temperature and moisture content, and is expected to increase as the world as a whole warms. So whilst Earth’s land surface is at the moment a net sink of carbon dioxide, taking into account both vegetation and soil, it could become a net source by 2050.
This could be counterbalanced, at least initially, by an increased uptake of carbon by the oceans – after all, they are estimated to have already absorbed up to half of the carbon dioxide we’ve emitted since the industrial revolution. More carbon dioxide in the atmosphere means more photosynthesis by ocean plant-life – the microscopic ‘phytoplankton’ in surface waters – and that more of the gas gets dissolved directly in surface waters. But both these effects will begin to fall away as the ocean becomes more saturated with it.
All in all, it looks as though feedbacks from the land and oceans that help to mitigate climate change at the moment will become less and less powerful as greenhouse gas emissions climb. If the ground beneath our feet changes from sink to source, it’s vital that we take into account its interaction with the climate and atmosphere in our projections of future change. But much more needs to be done if we are to understand this fully: as the population expands and land-use is changed directly as well as indirectly by human hands, the feedback effects and their repercussions for our climate, health and survival will become increasingly difficult to predict. One thing that is for sure is that we can’t rely on land-atmosphere feedbacks to provide a buffer against our emissions indefinitely.
Mechanisms of Resistance
Bacteria can resist the effects of antimicrobials through a variety of mechanisms. In some cases this resistance is innate, but in many others it is acquired. Figure 21.14 depicts the most common mechanisms of acquired antimicrobial resistance.
Drug-Inactivating Enzymes
Some organisms produce enzymes that chemically modify a specific drug in such a way as to render it ineffective. Recall that bacteria that synthesize the enzyme penicillinase are resistant to the bactericidal effects of penicillin. As another example, the enzyme chloramphenicol acetyltransferase chemically alters the antibiotic chloramphenicol.
Alteration in the Target Molecule
An antimicrobial drug generally recognizes and binds to a specific target molecule in a bacterium, interfering with its function. Minor structural changes in the target, which result from mutation, can prevent the drug from binding. For example, alterations in the penicillin-binding proteins prevent β-lactam drugs from binding to them. Similarly, a change in the ribosomal RNA, the target for the macrolides, prevents those drugs from interfering with ribosome function.
Decreased Uptake of the Drug
The porin proteins in the outer membrane of Gram-negative bacteria selectively permit small hydrophilic molecules to enter a cell. Alterations in these proteins can therefore alter permeability and prevent certain drugs from entering the cell. By excluding entry of a drug, an organism avoids its effects. ■ porins, p. 59
Increased Elimination of the Drug
The systems that bacteria use to transport detrimental compounds out of a cell are called efflux pumps. Alterations that result in the increased expression of these pumps can increase the overall capacity of an organism to eliminate a drug, thus enabling the organism to resist higher concentrations of that drug. In addition, structural changes might influence the array of drugs that can be actively pumped out. Resistance that develops by this mechanism is particularly worrisome because it potentially enables an organism to become resistant to several drugs simultaneously. ■ efflux pumps, p. 56
You Are What You Eat
Nutrition is a subject that people spend entire careers studying and that requires volumes of books to explain. My objective is to show you how to eat a healthy, nutritious diet that helps your body burn fat instead of storing it. You do not need overwhelming science to understand this.
Autonomic dysfunction
Please note: The following text cannot and should not replace advice from the patient's healthcare professional(s). Any person who experiences symptoms or feels that something may be wrong should seek individual, professional help for evaluation and/or treatment. This information is for guidance only and is not intended to provide individual medical advice.
The information included in this sheet relates to hypermobile Ehlers-Danlos syndrome (hEDS) and the hypermobility spectrum disorders (HSD) only.
The autonomic nervous system (ANS) is responsible for controlling blood pressure, fluid and salt balance in blood and body tissues, visceral (e.g. heart, lung, kidney, bowel) function and body temperature. Individuals with hypermobile Ehlers-Danlos syndrome (hEDS) can suffer with symptoms that appear to be related to abnormal function of the ANS; the term autonomic dysfunction is used. In particular they may suffer from problems related to heart rhythm and blood pressure. Similar problems are found in fibromyalgia and chronic fatigue syndrome. How chronic widespread pain, fatigue and autonomic dysfunction are linked is a question open to research. Potential mechanisms include the interactions of hormones and other chemicals in pain centres and autonomic centres of the brain and spinal cord, and the impact of physical de-conditioning (e.g. of muscles, heart) that occurs as a consequence of widespread pain and reduced activity.
Symptoms
Many of the common symptoms reported relate to changes in posture. They occur when changing from a lying or sitting to a standing position, and are relieved by sitting or lying down. These include:
Fast heart rate (palpitations)
Dizziness
Light-headedness
Blurring of vision
Loss of concentration
Fear of or actual ‘blacking out’
Swelling in the legs after standing for only relatively short periods of time (e.g. 30 mins)
Individuals often also notice associated tiredness, tremor, sweating, anxiety and clumsiness at the same time. The symptoms are often more sudden or more severe if:
Dehydrated
Anaemic (low red cell blood count)
In a hot environment
After exercise
After alcohol or caffeine
During other illness
After long periods of rest
Other symptoms may NOT be related to sudden changes in posture. These include:
Tiring easily
Reduced concentration
Inability to exercise
Intolerance of hot or cold environments
Anxiety
Excessive sweating
Muscle and joint pain
Bowel dysfunction – akin to irritable bowel syndrome
Many different types of medications can also cause these kinds of symptoms, particularly related to their effects on blood pressure control when changing posture. The more common ones include:
There are three typical conditions described: orthostatic hypotension (OH), orthostatic intolerance (OI), and postural tachycardia syndrome (PoTS). These can be diagnosed in a clinic, without the need for complex tests, if the following are identified (the thresholds are restated as a simple sketch after this list):
Orthostatic hypotension – a rapid drop in blood pressure by more than 20 systolic /10 diastolic mmHg from that when sitting that occurs within 3 minutes of standing.
Orthostatic intolerance – the same degree of blood pressure drop as above but over a more protracted period of time, e.g. 5–10 minutes, and symptoms relieved on lying down.
Postural orthostatic tachycardia – a greater than 30 beat-per-minute rise in the pulse on standing or a count greater than 120 beats per minute after 10 minutes with no other known cause.
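For illustration only, the short Python sketch below restates the three definitions above as simple numeric checks. The function and variable names are invented, the thresholds are taken from the text, and it is not a diagnostic tool.

```python
# Illustrative restatement of the clinic criteria above; not a diagnostic tool.
def classify_orthostatic_response(bp_drop_sys, bp_drop_dia, minutes_to_drop,
                                  hr_rise, hr_after_10_min):
    """All values are changes measured on standing from a sitting/lying position."""
    findings = []
    drop = bp_drop_sys > 20 or bp_drop_dia > 10
    if drop and minutes_to_drop <= 3:
        findings.append("orthostatic hypotension (OH)")   # rapid drop within 3 minutes
    elif drop and 5 <= minutes_to_drop <= 10:
        findings.append("orthostatic intolerance (OI)")   # same drop, more protracted
    if hr_rise > 30 or hr_after_10_min > 120:
        findings.append("postural tachycardia (PoTS)")    # pulse criteria
    return findings or ["no criteria met"]

print(classify_orthostatic_response(25, 12, 2, 10, 95))   # -> OH
print(classify_orthostatic_response(5, 4, 1, 38, 125))    # -> PoTS
```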
Other tests that can be done simply in a clinic with the right equipment include an ECG (electrocardiogram) heart trace, measure of heart rate changes with deep breathing and the Valsalva manoeuvre (where one takes a deep breath in, holds the breath, and forces the pressure up inside the lungs and abdomen (as if straining on the toilet!)), and measuring the effect on pulse and blood pressure of sustained forceful handgrip. Even mental arithmetic testing can activate an over-sensitive autonomic nervous system (though this may of course be ‘nerves’ if your maths is weak!)
Laboratory investigation of cardiovascular autonomic dysfunction
Complex symptoms that are not easily identified by clinical tests or seem not to respond to simple treatments require further investigation. The common tests used include:
Head-up tilt: This tends to be used to study causes of blackouts (syncope). After resting flat for 30 minutes on a specially designed bed, the bed is tilted upright to about 60–80° from horizontal. The normal body response would be an increase in heart rate by about 10–15 beats per minute, a rise in diastolic pressure (the lower figure of the blood pressure, i.e. 80 if the bp was recorded as 120/80) by approximately 10 mmHg, and virtually no change in systolic pressure (the upper figure of the blood pressure recording). If the test reproduces the patient’s symptoms it is considered positive, even if there is no actual blackout or changes akin to OH or PoTS as described above.
Heart rate variability analysis: These tests are based on the fact that heart rate is modulated by impulses from two types of autonomic nerves and chemicals. These are the ‘sympathetic’ and ‘parasympathetic’ branches of the ANS. The tests can be done during a head-up tilt test. Different responses to the heart rate and nature of the heart’s electrical signals can inform the autonomics expert of either sympathetic or parasympathetic dysfunction, which can help in determining next steps in treatment.
Other screening tests: Food ingestion can sometimes trigger low blood pressure (postprandial hypotension). Again using the head-up tilt table, the cardiovascular responses to a balanced liquid meal can be measured, responses are measured while lying down before the meal and on tilt test 45 minutes later. Responses to hot and cold can also be tested.
Finally, it may be necessary to take blood tests that measure catecholamine levels (sympathetic and parasympathetic chemicals).
Treatment of OH and PoTS
The symptoms can often be successfully managed with the simple remedies of increasing water and salt intake, and support stockings. Exercise to improve muscle re-conditioning and heart condition is also important.
Medications:
Different classes of drugs do different things to help the symptoms of OH and PoTS. These are best prescribed by an expert and after more detailed testing as to the cause of the autonomic dysfunction.
They may have the effect of:
Increasing the blood flow / total amount of fluids in the circulation (e.g. fludrocortisone and clonidine)
People with OH / PoTS often express concern over bowel symptoms that are labelled “irritable bowel”. These include bloating, pain and hard stools fluctuating with diarrhoea. In the majority of cases, a cause for these symptoms is not found following investigations such as upper and lower bowel endoscopy (camera tests) and dynamic bowel tests using things like barium X-ray and CT scanning. The term functional gastro-intestinal disorder (FGID) is used to describe this situation when no abnormality can be found.
It has been suggested that some individuals may have autonomic dysfunction of the bowel as a consequence of imbalance or over-sensitivity to the same chemicals that are associated with pain and autonomic dysfunction in the brain and the heart. Changing the effect of these chemicals in the bowel may be one way in which classes of drugs like anti-depressants help reduce the symptoms of irritable bowel. The more common treatments for irritable bowel syndrome may be of benefit. These include:
Fibre: There are two main types of fibre – soluble fibre (dissolves in water) and insoluble fibre. More soluble fibre is the current advice. It can be found in powder form in pharmacies and health food stores and sourced from oat, ispaghula (psyllium), nut and seed including linseed oil (good for bloating). Limit insoluble fibre intake, e.g. reduce corn and limit fresh fruit to three portions (about 80g each) per day. Fibre helps bulk up faeces (stool), encourages retention of water in faeces and thus better transit through the bowel whilst also reducing the risk of constipation.
Have regular meals and take time to eat at a leisurely pace.
Drink at least eight cups of fluid per day, especially water or other non-caffeinated drinks.
Restrict tea, coffee, fizzy drinks and alcohol to a minimum.
Diarrhoea can be triggered by the artificial sweetener sorbitol often found in sugar-free foods, sweets and drinks.
A dietician may be able to advise on an exclusion diet if there appears to be an intolerance to certain food products such as dairy, refined sugar / flour or certain vegetables such as onions.
Probiotics – these are nutritional supplements that contain good gut bacteria that may not be present in healthy quantities. When levels are low, it allows ‘bad’ bacteria the opportunity to flourish, often leading to bloating. Probiotic bacteria are found in dairy products, i.e. milk drinks, yoghurt, cheese and ice creams, often advertised in food stores as containing ‘live’ bacteria or ‘cultures’.
Antispasmodic medicines – the most common ones to be prescribed are mebeverine, hyoscine and peppermint oil.
Anti-diarrhoeal drugs – loperamide is the most commonly used.
Clinical psychology – anxiety and stress can often be the trigger for IBS.
Patients with chronic fatigue syndrome (CFS) not only present a constellation of symptoms, but physicians also use a variety of heterogeneous criteria to diagnose them. Patients’ disease burden is made heavier by the fact that researchers still don’t understand the pathophysiology that underlies this stigmatized and debilitating condition. New research suggests that, although wholesale differences in autonomic parameters may not exist between patients and controls, differences in objective autonomic parameters do exist across different groups of patients with CFS, suggesting that different CFS phenotypes may exist.
Laura Maclachlan, PhD, formerly a graduate student at Newcastle University in the U.K., and colleagues published their findings on CFS online Oct. 20 in PLOS One.1 This is the first study to use a validated measure of CFS symptoms, the DePaul Symptom Questionnaire (DSQ), to place CFS patients along a disease spectrum. The investigators recruited 49 individuals with CFS and 10 matched controls, all of whom completed a comprehensive series of tests. In addition to the DSQ, the investigators measured heart rate and blood pressure as indicators of autonomic function, because variability over a 10-minute rest can indicate an imbalance between sympathetic and parasympathetic autonomic function.
The investigators found no significant differences in objective autonomic testing between patients with CFS and controls. Their findings contradict previous studies that showed significant objective differences in autonomic function between CFS and control subjects. The authors suggest their results reflect the fact that sedentary controls were included in their study and individuals with comorbid depression were excluded from the study. Autonomic dysfunction is associated with depression, and thus, removing individuals with depression from the study meant researchers were examining only a specific subset of individuals with autonomic dysfunction.
Although the researchers did not find objective, autonomic dysfunction in the patients with CFS, the patients with CFS reported significantly greater autonomic and cognitive impairment relative to matched sedentary controls. When researchers placed the patients with CFS in subgroups based on their DSQ score, they found different groups had different levels of autonomic dysfunction and cognitive impairment. This finding suggested to them that different CFS criteria may not only diagnose a spectrum of disease severities, but also diagnose different CFS phenotypes or different diseases altogether.
The investigators propose that the differences between the DSQ subgroups may reflect an additive effect of diagnostic criteria. In other words, the absence of autonomic symptoms may suggest a different, less severe disease phenotype with fewer features of autonomic dysfunction. In contrast, more symptoms result in greater symptom burden and disease impairment, leading to a subgroup of patients with greater functional impairment overall. | https://www.the-rheumatologist.org/article/chronic-fatigue-syndrome-different-phenotypes/ |
A hectic workload. A sick family member. An important exam that determines your future. A natural disaster. An interpersonal conflict in your office.
All of these scenarios have one thing in common: they are inherently stressful.
Regardless of our backgrounds, at some point we all encounter many types of stressors – or threatening situations that trigger our brains to release stress hormones. Many refer to this as the fight or flight response. This reaction has protected humans from danger since we first walked the earth, because stressful situations have always been an inevitable part of life.
Although we’ve all experienced stress to some degree, many people often mistake it for anxiety – or vice versa. Increased heart rate, irritability, difficulty concentrating, and fatigue are all symptoms of both stress and anxiety, so how do you distinguish one from the other?
Continue reading to learn the difference between these conditions and how to know it’s time to seek help from one of our mental health providers.
Stress vs. Anxiety: What’s the difference?
When faced with pressure at work or school, a sudden life change, financial challenges, or another trying situation, it’s normal to feel stressed. Stress occurs whenever you experience a situation that makes your body react with physical, emotional, or mental strain. This response is intended to keep you “safe” until the external stressor is no longer affecting you.
Anxiety, on the other hand, deals with hypothetical stressors that could affect you in the future. You may feel scared, worried, or nervous when thinking about possible situations that are inherently stressful, like failing a test or having an accident.
However, when you’re experiencing anxiety, there is not an external stressor to warrant this response – at least not yet. Instead, your brain is trying to keep you safe ahead of time by preparing for every outcome.
How to Find Relief
Stress and anxiety are a natural part of life. Without intervention, however, both can lead to a range of chronic issues that affect your mind and body. Some healthy ways to cope with stress and anxiety include:
When to Seek Professional Help
If you practice healthy habits like those listed above but still feel mentally imbalanced, it may be time to schedule an appointment with a professional.
Short-term stress and anxiety are normal. However, experiencing either of these conditions long-term can be a sign of a mental illness, like generalized anxiety disorder, posttraumatic stress disorder, or another condition. Serious health effects like heart disease, gastrointestinal issues, headaches, lack of sleep, and depression can occur if these issues are left undiagnosed and untreated.
If you’re struggling to cope with excess stress or anxiety, our mental health providers at Rural Psychiatry Associates are here to help. We offer psychological screenings and a variety of treatment options to help you live a calmer, more fulfilling life.
Contact us today to schedule an in-office or telehealth appointment. | https://ruralpsychiatryassociates.com/feeling-stressed-out-heres-how-to-tell-if-its-actually-anxiety/ |
Abstract: Looking at authentic student errors, classifying them, using a correction code, producing remedial exercises and improving your own practice.
Psycholinguistics is the study of language acquisition. It brings together the theories of psychology and linguistics in order to carry out this study. This is truly an interdisciplinary field. Linguists study the structure of language, ie sounds and meanings, and the grammar to which they relate, and they come together with psychologists who investigate how people acquire the structures and functions, and use them in speech and understanding. The work can be used to inform TEFL but also to help people with speech and language difficulties where some important component of communication may not have been acquired.
This knowledge of the difficulties in learning a foreign language, and consideration of the possible causes of error should lead you, as a teacher of ESOL, to develop a helpful attitude towards your students. Your students need to be confident enough, and 'uninterrupted' enough to be fluent, while at the same time, they need to know that they will be corrected and not allowed to continue making the same mistakes. Fluency and accuracy are the aims, effective communication, the ultimate goal, being achieved by a blend of the two.
The relationship between their first language and English will affect students' learning in both negative and positive ways. Your attitude to languages will also affect your students negatively and positively. Think of all the elements we have talked about that go together to produce a successful language learner. Learning English is a difficult process!
The present research looks at the main migration patterns and trends of internal and outward migration from Ukraine trying to assess the push and pull factors for regular and irregular migration which affect children. It focuses on the impact of migration on children’s human rights, on the risks, and on the vulnerabilities that children are confronted with at different stages of migration. Also, we sought to identify and analyse specific systemic failures and gaps, the needs and the rights violations of children and families, and whether these elements are specific to a particular group or to a migration pattern. Our conclusions are based on the analysis of available data (grey literature, legal and policy frameworks, and other documents) and of the information collected during four focus-groups and 51 semi-structured interviews with children affected by migration, as well as in several cases with their families, with teachers, and with statutory and non-statutory stakeholders (government officials and staff from supporting NGOs).
The first chapter captures different migration patterns and trends in, to, and from Ukraine. There are two types of migration within Ukrainian territory: the voluntary movement of people and the forced internal displacement caused by the armed conflict in the Eastern regions of the country. The main reasons for the continuous emigration of Ukrainians appear to be related to the need for more work and education opportunities. One of the most serious problems induced by parental migration is its distressing psychological impact on the children left behind. As a result, these children often have to face increased psychological stress; they can end up in difficult life situations, and have to deal with behavioural problems. The immigration trends towards Ukraine are declining because of the economic, social and political situation, which makes it less attractive for foreigners. Moreover, persistent obstacles to accessing the asylum procedure, the lack of legal assistance, and the risk of detention are some of the factors that dissipate the will to seek asylum in this country.
In the second part, we analyse the legal and policy frameworks related to children’s rights in the migration field. The multiplicity of international conventions, domestic laws, and other regulations do not effectively guarantee the right of children to protection, let alone the rights of those affected by migration. Particularly difficult situations such as the sexual exploitation of children in travel and tourism, trafficking of children for the purpose of sexual or labour exploitation, and the forced displacement of children produce persistent problems which are not met with adequate responses from Ukrainian authorities. The gaps in regulations, in practice and procedures, and the scarce understanding of these phenomena by both authorities and society are among the main obstacles in solving these problems.
The third part reflects the children’s narratives on their migration experiences. We used children’s rights as a grid for structuring the interviews with children. The results of our empirical study show how important the ideas of ‘belonging’, ‘parenting’, ‘understanding’, ‘home’, ‘friendship’ and others are for the children and youngsters. Their perception of their ‘human rights’ undergoes many transformations across various environments: ‘home’, ‘school’, ‘origin country’ or ‘host country’. Our focus was on how mobility between these environments can shape their sense of human rights, and to what extent the protection or the violation of these rights influences their wellbeing in everyday life. We found that the majority of them have some knowledge about their rights and are capable of expressing in their own words whether these rights are protected or not, respected or violated in different circumstances.
Our conclusions are based on the analysis of legal and policy frameworks on migration and children’s rights and on the findings of our empirical research with concerned children. Some of the structural and institutional failures and gaps that we uncovered have a direct impact on children’s rights; others are more linked with the adults but also have indirect repercussions on children’s rights. | http://tdh-europe.org/library/situational-analysis-the-impact-of-migration-on-childrens-rights-in-ukraine/7279 |
How to Combine Quilting Designs
With so many different quilting motifs available, it’s hard to decide which ones to use and how. With custom quilting, we now have to decide on multiple quilting designs to use on our quilt top. When using multiple designs, how do you decide which ones will work together and how to combine the different elements? This week we dive into techniques you can use to discover how those elements will interact with each other and different ways to combine them to create multiple outcomes.
When deciding how to combine different motifs, I start with about 8-10 foundational designs. I will get very comfortable with these designs and use them in many different ways until I am very familiar with how they interact together and will look together. This allows me to have a strong starting point when creating a quilting plan. Before I even put those designs down on paper, and later on the quilt, I’ll know how they will look. After I have this foundation pack of elements I am confident with, then I will add more designs in to my plans.
After you have decided on your favorite foundational quilting motifs, sketch them all out on one piece of paper. Don’t worry about any specific pattern or layout; just draw them one after the other. Seeing them all in one space will help you to start to understand how they are going to interact with each other. Are they very similar and will blend well? Do they create a lot of strong contrast? Is the scale or density similar or very different? Are the styles cohesive and complementary? By thinking about the different elements in these ways, you’ll be able to see how they will work together (or not) in the quilting.
There are different ways to combine quilting designs. One way is to quilt them in such a way that they become one element. By simply switching back and forth between complementary designs and keeping them all the same scale, they will work together and read as one element. This works very well in background areas or as fills around other designs or pieced elements. If you take those same designs and alter the scale or density of one of the elements, they can now work to really highlight one specific area or design. The larger or less dense quilting motif will stand out and take center stage.
Creating multiple quilting designs can also be done with one motif. You can change the scale of that one element to create two almost distinct motifs. Having a small tightly packed area next to an area with that same motif on a larger scale will create new shapes and movement throughout the quilt.
Another important principle to remember when combining designs is directional tendencies. If I have two similar designs, but I want them stand out from each other, you can often alternate the direction that you quilt them. There are some elements that are more prone to being vertical or horizontal, and by alternating how you quilt them, they really become strong elements that make each one distinct. The contrast in the direction enhances both of the motifs creating interest.
Finally, when designing quilting plans, how you separate the areas can make a big difference. Sometimes we simply choose to differentiate the designs by scale, density, or direction changes. Other times, you can separate those designs by stitching out new shapes to fill. When I do this, I often stitch an echoed line close to the first to really define those areas. When filling in the different shapes, you will want to consider the same principles that we’ve discussed to create the final look you want.
Learning to combine quilting designs and which ones will work best with each other is a process. Each time I complete a quilt top, I learn a little more about how the final project came together and how I could improve it the next time. Take the time to practice and do lots of doodling. I never start without a quilting plan so I have seen which elements I plan to use and how they will work together on paper before I start stitching them with thread. And as always, most of all, have fun. Quilting is an art form and you should definitely show in whatever you create. | https://www.onwilliamsstreet.com/how-to-combine-quilting-designs/ |
What Type of Content Will Drive Growth in Live Streaming?
2:29
Bulldog Digital Media's John Petrocelli predicts that live music experiences will be the next key driver in growth for live streaming.
Three Ways to Replace Flash for Low-Latency Live Streaming
5:11
Limelight's Charlie Kraus discusses three emerging strategies for delivering low-latency live streaming in the post-Flash era.
The Case for IP Streaming Over Satellite for Large-Scale Live Events
3:17
GigCasters' Casey Charvet and BlueFrame's Chris Knowlton discuss legacy and emerging workflows for delivering large-scale live events via satellite.
How to Maximize Reach With Niche Audiences on Social Video Platforms
3:14
LiveU's Claudia Barbiero, Take One's Troy Witt, and Oomba TV's David Compton discuss strategies for maximizing reach with niche content on Twitch, Facebook Live, and other platforms in this clip from Live Streaming Summit.
Network Compliance Caveats for Social Media Streaming
1:22
Wowza's Tim Dougherty underscores the importance of knowing and following network compliance guidelines when streaming to multiple social media sites such as Facebook, YouTube Live, Twitch, and Periscope.
How Much Latency Will Consumers Tolerate in NFL Live Streams?
2:34
Amazon's Keith Wymbs and Jim De Lorenzo discuss how they've met the challenges of improving latency in Amazon's Thursday night NFL broadcasts in this keynote from Streaming Media West 2017.
Why Does Bad Bot Traffic Happen to Good Websites?
2:34
Distil Networks' Charlie Minesinger explains what draws bad bot traffic to commercial websites and offers some statistics and analysis on current bot behavior.
Choose a Social Media Network That Aligns With Your Streaming Content
1:48
Wowza's Tim Dougherty explains the importance of developing and adhering to a sound content strategy when syndicating streaming content to multiple social media platforms.
Will TV Broadcasters Follow Radio in Live Streaming More Content?
1:50
NAB's Skip Pizzi discusses the licensing, commercial, and strategic impediments to broadcast networks adding full-time streaming of all content as most radio stations have done, and how that trajectory might change.
How to Boost Cellular Bonding in High-Traffic Areas
1:59
Tim Siglin and Chris Knowlton discuss the challenges of streaming via cellular bonding in congested cell-traffic areas, and strategies for boosting performance, in this panel discussion from Streaming Media West 2017.
How to Use Rebuffering Metrics to Optimize Delivery
1:27
At Content Delivery Summit 2017, Jon Alexander explains why rebuffering metrics are so critical to gauging end-user QoE and pinpointing delivery issues, and how analysis of those metrics can help content publishers optimize delivery. | http://streamingmedia.brightcovegallery.com/category/videos/short-cuts:-streaming-media-west-2017?page=1 |
We are recruiting for an Advanced Software Engineer to join our client's Software, Engineering and Connectivity Proposition Team. The team is responsible for ensuring the next generation of connected products and technologies are properly explored, tested and refined in readiness to transition to the team responsible for delivering their IoT solutions. This includes ensuring the investigation and utilisation of the right technologies, techniques, services and security. Core to this is exploring the proposition and technology by developing proof-of-concept integrations and systems.
Accountabilities:
You will be working in a global, multi-disciplinary team including mobile and cloud Developers, app and product UX/UI designers and electronics and mechanical engineers. You will often be collaborating with experts from different areas of RDD (Research, Design & Development), designing how connectivity is woven deeply into our products. You must be independent and methodical with excellent problem-solving skills. It is essential to be knowledgeable about the latest trends in connectivity and mobile technology and comfortable with rapidly picking up new technologies. A thirst to overcome problems and limitations in order to reach our clients’ vision is essential. You should be comfortable with rapid prototyping of electronic hardware, investigating and evaluating new and emerging technologies, protocols and digital platforms. You should be able to produce clear and concise reports and presentations communicating your research and demonstrating your solutions.
Essential Skills:
- Excellent programming skills with C / C++ experience, preferably on embedded platforms.
- Flexible and dynamic approach to development, with the ability to adopt new concepts, languages and techniques quickly and then convey the benefits to others.
- Experience with scripting languages such as Python or Ruby.
- Understanding of embedded systems design and integration.
- Embedded experience working with different microcontrollers and platforms (e.g. ARM, Arduino, Raspberry Pi, Linux and RTOS environments).
- Good software and rapid prototyping experience.
- Knowledge of IoT protocols (Wi-Fi, ZigBee, Thread, Bluetooth Classic, Bluetooth Low Energy, MQTT etc).
Desirable Skills:
- Electronic circuit design experience.
- Track record of working with 3rd parties to explore technology.
- Proven track record of developing robust requirements specifications.
- Experience of FPGA design (VHDL or Verilog).
- Experience of Software Defined Radio.
- Experience of mathematical/modelling software such as MATLAB, R or Octave.
- Experience with board ‘bring-up’.
- Experience of developing proof-of-concept embedded prototypes to an accelerated timescale.
- Experience of integrating solutions with cloud and web services for data interchange and synchronisation.
- Experience of connectivity technologies, including Wi-Fi, BLE and cellular.
- Experience of developing systems that use messaging services to communicate.
- A strong understanding of/ability to define and prototype the hardware solutions that will benefit most from connectivity.
- Programming for resource constrained devices. | https://www.jonlee.co.uk/job/advanced-software-engineer-26588/ |
MAD Architects is looking for an interior designer to join its team in Beijing, China.
MAD Architects is a global studio of talented architects, designers, and creative thinkers. We are dedicated to impacting the experience and understanding of the built space that surrounds us, and from there we develop visionary, fluid and technologically advanced designs that embody a contemporary interpretation of the eastern affinity for nature.
Our team of 130 works across offices in Beijing, Los Angeles, and Rome. Our projects range from large-scale urban developments and masterplan to imaginative civic, cultural, residential and hospitality buildings of varying scale. We endeavour to improve the balance between people, their built environment, and their natural surroundings.
For our growing Beijing office, we are looking for talents of all levels of experience to join our teams on ambitious projects. As an integral part of our international practice, you will be contributing actively to the development of MAD.
The opportunity for you:
- you will be involved in the design and technical aspects of projects and their graphic representation through the entire interior design process from concept to constructed reality
- within dedicated teams, you will develop design and materials packages with clarity, design sensitivity, and a sound understanding of technical feasibility, evolving them into comprehensively detailed information fit for construction
- as an integral part of our international practice, you will be contributing actively to the development of MAD
Key skills to success:
- you are a globally-minded individual with an excellent academic background (a bachelor's degree in interior design or a related field of study from an internationally recognised school) and an outgoing, proactive and positive attitude
- you have a track record of at least three years of in-depth post-qualification involvement in leading international architecture or interior design firms, as a key contributor to the interior design of complex large-scale mixed-use, residential, hospitality, cultural or civic projects through all stages of design to construction
- you combine strong aesthetic design sensitivity with sound technical and materials knowledge and are a clear communicator. You are able to compile thorough research packages for concepts, materials and products, and are able to articulate your ideas and results precisely, comprehensibly, and in a graphically engaging way
- you have working knowledge of interior design layouts and detailing, are well versed in current materials and FF&E trends, and demonstrate clear understanding of common practice and industry standards and codes
- you thrive in a dynamic and fast-paced environment, enjoy contributing to your team, are flexible and able to set priorities in order to deliver accurately and on time
- you are highly skilled in AutoCAD, Rhino and Adobe CS, ideally with a working knowledge of Maya. English fluency is a key requirement, with business level Mandarin and other business level language skills a plus
How to apply
MAD gives you the opportunity to challenge and develop yourself creatively and technically within our friendly, sociable, supportive and stimulating studio environment.
We offer long-term prospects to advance your career within our worldwide design network.
Salaries and benefits are competitive and commensurate with your experience.
If working with us in the capacities described above excites you, please submit your resume and portfolio (in PDF format, size less than 6MB) using the ‘apply’ link below.
We are looking forward to hearing from you! | https://www.dezeenjobs.com/job/mad-architects-interior-designer-249183/ |
Africa is a continent which is filled with many young men and women, some of which have the aspirations and potential to pursue a career in the medical field. But there are some clear barriers preventing many young Africans from doing so, stemming back to the setup of medical education in Africa.
Africa is rife with major diseases and illnesses and is in desperate need of a vast number of medical professionals and physicians to tackle the problem head-on. This, however, is simply not achievable at present: flaws in and barriers to the educational system prevent it from producing enough home-grown medical professionals.
But there is perhaps a potential solution.
With the advancement of technology in the modern age has come the ability to deliver an online education to impoverished continents such as Africa, a process which could supplement the continent's existing medical education setup.
The advent of e-learning, m-learning and distance education, in which an interactive course can be provided to young Africans via the Internet from universities and institutions around the world, may be the answer to Africa's medical woes.
The existing medical education system:
The medical education system varies from nation to nation, dependent inevitably on each country's economic capabilities. South Africa, the wealthiest country in sub-Saharan Africa, has approximately eight medical schools, which have around 8,500 students per annum and 1,300 graduates per year. The South African medical schools are all government-funded institutions, and each school receives a subsidy from the government on top of its student tuition fees. This is a system based on a British model and is reasonably successful, as is perhaps to be expected of the wealthiest nation in sub-Saharan Africa.
Kenya's medical education system is less prosperous. Kenya has just two medical schools, with the majority of the country's doctors and medical professionals being produced by the University of Nairobi. Both schools are government funded, but Kenya struggles to provide sufficient financial means to produce medical professionals, so much so that it has introduced self-sponsored medical students.
Nigeria, the most populous Black African nation, started the process of medical education in 1948 with the establishment of University College Hospital, which was a branch of the University of London. From then on, four generations of medical institutions have developed, but with the curriculum remaining largely the same. When medical curricula around the world adapted, Nigeria's stood still, and later attempts to improve the syllabus and teacher-training methods failed. As a result, Nigeria's medical education system is in dire straits and in desperate need of updating and modernising.
Thus, the medical education system in African nations is varied, but there are clear limits to how many medical professionals they can produce and to what quality. It is clear that there are changes that need to be put in place.
E-learning - the solution?
The whole concept of e-learning and m-learning is to bring distance education and online courses, via the internet, to places that would not otherwise have access to such opportunities. Dr Yaw Adu-Sarkodie, a professor in clinical microbiology, has heralded the use of e-learning to supplement the medical education system in Africa, suggesting: "what I see of the e-learning platform is that it is a limitless thing".
Many African medical specialists have suggested that the implementation of edtech and online courses will enable class sizes to increase dramatically in a short space of time. Crucially, medical professionals have suggested that e-learning initiatives will change the styles and approaches that African students take in the medical sphere and beyond. This could be potentially significant in mobilising a medical workforce that as of now is outdated and cannot produce results.
There has already been some evidence of distance education and edtech assisting the medical education system in Africa. Through the Medical Education Partnership, the US has sought to provide help to Sub-Saharan Africa, utilising edtech and e-learning initiatives. Such online courses include the use of video lectures, in line with the interactive e-learning initiative, particularly geared around exam preparation and practical skills that are absolutely essential to becoming a medical professional.
Selected medical institutions were given access to online medical courses and e-learning in order to supplement and support the medical education curriculum in these various medical schools. The experts suggest, however, that edtech can only work provided that there is the right fit with institutions that have the required technological capacity. E-learning, then, could be the solution to Africa's chronic medical education problem.
MCAT and USMLE:
In line with the US initiative to assist African nations in the training and development of their medical professionals, US tests such as the MCAT and USMLE have been introduced to assist with the medical exam preparation process. The MCAT is a medical/science-related aptitude test, designed to equip medical students to apply information quickly and precisely, and it generally examines the practical ability of the student to become a professional.
The USMLE is a test which requires more prior knowledge and preparation and is very much content driven. Both of these initiatives are employed in and work in medical institutions across the US and have been extended to such African nations as Ghana, Kenya, Uganda and South Africa.
Mhealth:
Mhealth is the process in which smartphones are used to help educate and inform students in the medical field. The rise in mobile phone use across Africa has meant that many young men and women have instant access to a learning platform, literally at their very fingertips.
The use of text messages and SMS is a very useful tool, which enables those in remote areas to access information that will be beneficial for the learning process and ultimately help in enabling many more people to gain sufficient medical qualifications. This method of teaching, alongside e-learning in the traditional ways as mentioned before, could help to revolutionise the way in which students learn and are taught. If applied correctly and overcoming some infrastructural barriers, e-learning could help to salvage the medical education system across Africa.
Conclusion:
It is clear the medical education setup in Africa is in desperate need of reform, with widespread disease and poverty but an inability, both structural and financial, to mobilise a sufficient health/medical workforce. Could e-learning be the solution to updating and modernising a currently failing and outdated medical education system in sub-Saharan Africa and beyond? Visit apps-for-learning.com to find out more about the role of edtech in Africa, feel free to share this article on Facebook and Twitter, and comment any thoughts you might have below!
By Jens Ischebeck
www.apps-for-learning.com. | https://africanexecutive.com/index.php/article/read/9534 |
Elevate your beliefs and create believable goals. You learn to build your staircase to map out how you will achieve your desires and your bigger vision. It is not enough to simply speak an affirmation or make a wish. This step shows how important it is to blend and align your thoughts, feelings, and beliefs to activate your intentions. When your intentions are activated, they become harmonically resonant with what you want. You learn how to overcome inhibitors and habitual coping mechanisms and the importance of transcending scarcity consciousness.
Releasing the outcome means letting go of fear, judgment, and feelings of inadequacy. Live in the present. Get on with your life and know that your desires will show up at the perfect time and in the perfect way.
Years ago, I was commuting to work and thinking about how unappreciated I felt by the individuals in the organization. Knowing that by giving attention to this situation I was setting myself up to attract other people in my life who did not appreciate me, I deliberately changed my mind and chose to focus on all the times that others demonstrated appreciation for the extra things I did for them. Several hours later, when I arrived at my destination, I was greeted by the CEO, who made a point of telling me how much she appreciated the extra effort I had provided on a specific project. A few minutes later, one of my employees stopped me in the hall and expressed gratitude for the mentoring I had offered him. Taking the time to focus on what I desired rather than what was irritating to me created a feel-good moment, which inspired me to go the extra mile for my employer.
The more you radiate positive thoughts and feelings, the more you attract those energies into your life. When you are sending out negative thoughts, it is very difficult to attract the relationships you really desire. The easiest way to let old negative emotions drift away from you is to fill your thoughts and feelings with positive emotions that push them out.
What do you love? Think about that. Just enjoy allowing yourself to feel the pleasure of thinking about what you love.
Select three activities that you enjoy; whatever makes you feel good.
What are you good at? It doesn’t matter what you choose; whatever you feel competent doing. Think about doing those things. Notice how your mood is getting brighter? Choose one activity or memory and focus on how you felt at the moment it was happening. Allow yourself to enjoy the glow. Breathe deep and relax into the feeling.
It is your mind. You can intentionally use it to lift your mood and elevate your self-esteem. You can do this anytime, anywhere. When you set the positive intention to feel good about yourself, you are doing one of the most valuable things you can do for yourself…and for everyone around you as well. Good feelings attract good feelings. Don’t be surprised if people smile at you on the street after you have been exploring positive memories and feelings. It’s natural. What an amazing discovery this is, that you can choose to direct your mind toward positive experiences that create positive emotions, and that this attracts positive relationships and opportunities into your life. For example, on your way to work, you might focus on a memory of a day when you achieved a notable success, and how good you felt about it. When you get to your workplace, your whole being will radiate positive energy that will attract a positive response from your boss and coworkers, resulting in a more productive day.
Excerpt from The Relationship Code, Engage & Empower People with Purpose and Passion
By Margaret McCraw
Many of us are greatly affected by our daily interactions with those we regularly encounter, whether these people are our bosses, co-workers, families or friends. These relationships can be major contributors to our overall success at work and at home, our health, and well-being.
Our reaction to challenging situations and conflict in our relationships causes stress, which is a psychological and physiological response to events that upset our personal balance in some way. We have all experienced relationships that have caused tension and anxiety within us. Although the stressors in our daily lives play a major role in our overall health, happiness, and productivity, many of us believe that we have no control over these.
It is important to respond to challenges and conflict in a manner that prevents, or at least minimizes, stress and its impact on our health. It is crucial, therefore, that we explore the dynamics of our interpersonal relationships and understand how we attract negative or positive experiences. Our current state of well-being is mirrored through our thoughts, beliefs, and emotions, creating experiences that shape our daily lives. We must learn how to create positive vibrations and shape our own destinies.
We must first understand how our thoughts and emotions affect our mental and physical well being. The World Health Organization states, “Mental health is not just the absence of mental disorder. It is defined as a state of well-being in which every individual realizes his or her own potential, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to her or his community.”
To recognize our own potential, we must believe in ourselves and understand our potential to contribute to the higher good for all. We must take responsibility for our lives without judgment, guilt, or blame because these factors can lower our belief in our self-worth and turn us into some of the alarming statistics quoted above. Let us let go of those painful memories from our past once and for all. By releasing judgment and negative thoughts, we create positive energy and attract more of what we desire. Let us take responsibility for our feelings and realize that no one can make us feel good or bad. Let us be true to ourselves and understand that we can only love others as much as we love ourselves.
We must learn to cope with the challenges of life in a positive manner. Think about your colleagues, family, and friends. Do you ever encounter tension in any of these relationships? How we deal with them is the difference between a healthy mental state and an unhealthy one. To begin eliminating interpersonal stress, we must understand what causes tension in these relationships and begin effectively communicating with others. Learning to respond rather than merely react is a common challenge. To reach our highest potential, we must communicate with others openly, honestly, thoughtfully, respectfully, and genuinely. Authentic communication will lead us into greater interpersonal relationships that will benefit all. We will be heard and respected as we interact with others. | https://bhcld.com/category/relationships/ |
Wei Zhang, Hua-Ying Wang, Tengjiao Zhang, Xiaoxue Fang, Meiying Liu, Mingzhou Sun, Hongxing Xiao
Affiliations: Northeast Normal University (all authors); Meiying Liu: Key Laboratory of Molecular Epigenetics of Ministry of Education, College of Life Sciences, Northeast Normal University
Abstract
How populations diverge into different lineages is a central issue in evolutionary biology. Despite increasing evidence that such divergences do not require geographic isolation, numerous phenotypic differentiations show a distributional correspondence. In addition, gene flow has been widely detected during and throughout such diverging processes. We used the widely distributed Aquilegia viridiflora complex as a model system to examine genomic differentiation and corresponding phenotypic variations along geographic gradients. Our phenotypic analyses of 90 individuals from 20 populations ranging from northwest to northeast China identified two phenotypic groups along the geographic cline. All examined traits are distinct between them, although a few intermediate individuals occur where the groups come into contact. We further sequenced the genomes of representative individuals of each population. However, we recovered four distinct genetic lineages based on both nuclear genomes and plastomes, which did not match the phenotypic differentiation. In particular, we recovered numerous genetic hybrids in the contact regions of the four lineages. Gene flow is widespread and continuous between the four lineages, but much higher between contacting lineages than between geographically isolated lineages. In addition, many genes with fast lineage-specific mutations were identified as being involved in local adaptation. Our results suggest that geographic isolation and local selection exerted by the environment may together create the geographic distributions of phenotypic variations, as well as the underlying genomic divergences, in numerous lineages. | https://21docs.com/doi/full/10.22541/au.166020135.56220302/v1
In writing a research paper, a section is devoted to the review of related literature. Why does such a section need to be there? Why should a researcher take time to write a review of related literature? Can this section be skipped in conducting an empirical investigation?
A discussion of the reasons behind the need to write a review of literature will help clarify this vital component of the research paper.
Overall, the purpose of the review of the literature is to provide an overview of the literature about the researcher's chosen topic of inquiry. The overview helps to determine the current state of research along the same line of interest and what still needs to be done. The researcher will want to fill in the "gaps in knowledge."
Why is there a gap in knowledge in the first place?
Although it has gone a long way in the history of modern civilization and has become more and more widespread, science as a tool is not yet ready to provide all the solutions to the many problems besetting man.
A cure for cancer, for example, may work for some people but not for everyone. There are always deviants, or exceptions to the rule. This leaves open questions that bear on the issue. Questions may arise like "Why is the cancer cure that works for the majority of people not working for a specific group of people?" or "Are there other factors, inherent in the specific group of people, that influence the effects of the cure?"
The answers to these questions are not easy to come by. A good researcher, therefore, will take the time to make an exhaustive review of the literature that will help clarify the issue.
The review also helps the researcher avoid doing things which have been done before. He can focus his research on those issues which, after rigorous analysis of the literature at hand, are still left unanswered. This will mean savings in time, money and effort. | https://www.dostcoimmobilien.de/what-is-the-purpose-of-the-review-of-related-literature/
Q:
Time Complexity of find operation
Possible Duplicate:
C++ string::find complexity
What is the time complexity of the find operation that comes built-in with the string library in STL?
A:
The Standard, §21.4.7.2, doesn't give any guarantees as to the complexity.
You can reasonably assume std::basic_string::find takes linear time in the length of the string being searched in, though, as even the naïve algorithm (check each substring for equality) has that complexity, and it's unlikely that the std::string constructor will build a fancy index structure to enable anything faster than that.
The complexity in terms of the pattern being searched for may reasonably vary between linear and constant, depending on the implementation.
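For reference, here is a minimal sketch of the naïve approach mentioned above (an illustration only, not the actual library implementation, whose details the Standard leaves unspecified):

    #include <cstddef>
    #include <string>

    // Naive substring search: try every starting offset in the haystack and
    // compare the pattern character by character. Worst case this performs
    // roughly (n - m + 1) * m comparisons, i.e. linear in the haystack length
    // for a fixed-size pattern.
    std::size_t naive_find(const std::string& haystack, const std::string& needle)
    {
        if (needle.empty()) return 0;  // empty pattern matches at position 0
        if (needle.size() > haystack.size()) return std::string::npos;

        for (std::size_t i = 0; i + needle.size() <= haystack.size(); ++i) {
            std::size_t j = 0;
            while (j < needle.size() && haystack[i + j] == needle[j]) ++j;
            if (j == needle.size()) return i;  // full match found at offset i
        }
        return std::string::npos;  // no occurrence
    }

An implementation is free to do something smarter, but since even this baseline is linear in the searched string, that is a reasonable working assumption for std::basic_string::find.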
Clinical Data in Diabetes
Clinical data from landmark trials and follow-up studies conducted over the past few decades have demonstrated that lowering blood glucose concentrations toward the normal range can effectively slow or prevent the development of long-term complications such as retinopathy, neuropathy, and nephropathy. In fact, the role of hyperglycemia in diabetic complications was not even established until 1993, when the results of the Diabetes Control and Complications Trial (DCCT) were made public. This major trial, together with two others, established the benefits of long-term blood glucose control in diabetes and is discussed in this module:
• The Kumamoto Study, an eight-year Japanese trial demonstrating that intensive insulin therapy prevents diabetic complications in patients with type 2 diabetes.
• The Diabetes Control and Complications Trial (DCCT), a landmark 10-year trial completed in 1993 in patients with type 1 diabetes.
• The United Kingdom Prospective Diabetes Study (UKPDS), the largest and longest trial on the effects of lowering blood glucose in patients with type 2 diabetes.
More recent long-term, large-scale, multi-center studies—some still ongoing, others in follow-up analysis phase—are also discussed here.
Complications of Diabetes
This module describes macrovascular and microvascular long-term complications of diabetes, as well as other medical conditions that commonly impact the cardiovascular health of patients with diabetes, including hypertension and dyslipidemia.
Diabetes and Hypoglycemia (COMING SOON)
The threat and incidence of hypoglycemia are a major concern for both type 1 and type 2 diabetes. Unfortunately, attempts to achieve a normal glucose level, as recommended by current treatment guidelines, increase the incidence of treatment-induced hypoglycemia. This module reviews the incidence of hypoglycemia within diabetes; its causes, symptoms, management, and treatment; and its effect on quality of life.
Diabetes Management Programs
This module begins with a brief discussion of the key elements of a well-designed comprehensive disease management program. It then describes various diabetes management programs that have been implemented, mostly through partnerships involving employers, pharmacy benefit management companies (PBMs) or disease management companies, managed care organizations (MCOs), and insurance companies. There are many ongoing diabetes management programs.
Diabetes Treatment Principles
This module presents an overview of the key resources providing such guidelines and principles—in particular, those of the ADA and AACE—and the issues they cover.
Glucose Metabolism and Diabetes
This module will begin with a brief description of normal glucose metabolism and how its activities are regulated, and then discuss the key acute metabolic effects of abnormal glucose metabolism, including hyperglycemia, glucosuria, and ketoacidosis.
Insulin and Insulin-Combination Therapy
This module discusses the goals of insulin therapy and combination therapy for patients with type 1 and type 2 diabetes, and the different types of insulin and insulin regimens employed for achieving those goals.
Lifestyle Management of Diabetes
This module will discuss the importance of coordinated MNT and physical activity plans in managing patients with type 1 and type 2 diabetes.
Monitoring of Diabetes
This module describes diabetes monitoring tests and their appropriate uses for managing diabetes.
New Treatment Options and Future Directions of Diabetes Therapy
This module provides an overview of new treatment options and future directions of diabetes therapy, including new methods of insulin administration, pancreas and islet transplantation, and genetic-based therapeutic approaches, including use of stem cells and cells from the spleen, monoclonal antibodies, and individualized treatments.
Noninsulin Injectable Therapy
Beyond insulin and oral antidiabetic agents, there is a third therapeutic group, the focus of this module: two classes of drugs that, like insulin, are injectable agents. They are not able to be given orally, as they would be destroyed in the gastrointestinal tract. One class, the amylin analogs, such as pramlintide acetate, helps lower blood glucose levels, especially the unhealthy rise in glucose that occurs right after eating. This class is of potential benefit to both type 1 and type 2 diabetics. The other class of injectable therapies, incretin analogs, replicate the antihyperglycemic actions of incretin hormones; they are currently indicated for only type 2 diabetic patients, although their use for treating T1DM is being studied.
Oral Antidiabetic Agents
This module discusses the role and types of oral diabetic agents used in the treatment of type 2 diabetes.
Screening and Diagnosis of Diabetes
Hyperglycemia and glucosuria—characteristic signs of diabetes—are the basis for tests used to diagnose diabetes. These tests, described in this module, include fasting blood glucose tests and glucose tolerance tests after eating. The same tests can identify patients with prediabetes—whose elevated glucose levels are not as marked as those of diabetes patients and are a risk factor for developing type 2 diabetes.
Determining the exact type of diabetes (eg, DM1 or DM2) requires additional tests, such as those that determine whether the patient is able to produce any insulin. It is also important to consider the patient’s clinical presentation.
Types of Diabetes
This module describes the key characteristics of both type 1 and type 2 diabetes. It also briefly identifies less-common types of diabetes and abnormal glucose metabolism.
| https://www.cmrinstitute.org/training-catalog/disease-state/diabetes/
The way you mow the lawn might teach you something important about how to practice.
Everyone has a method. Do you start at one side and make repeating alternate passes, or circle the perimeter until you get to the center? Either way, it’s a straight line: a job with a beginning and end and a clear indicator of progress.
Practicing an instrument is, of course, nothing like that. But too many people act like it is. As if learning to play were as direct and linear as mowing the lawn.
Or perhaps you mow like I do: starting off in one spot and proceeding along an orderly pattern, but suddenly veering off seemingly on a whim to cut a new path. I end up dividing the lawn into segments, smaller areas that are attacked one at a time.
Now, this is just how my brain works in general. I know that I concentrate best in short bursts, punctuated by breaks and then refocusing. For me, returning with a fresh perspective is a much better way to work: more motivating, more thorough, and more effective.
This has nothing to do with my reasons for breaking the pattern while mowing the lawn, of course. Maybe it’s related, though: when a task starts to become too repetitive, maybe something needs to shift to stimulate the brain. And perhaps approaching the same basic material from different angles will lead to a more thorough coverage of the territory?
The fact is, learning about music is a lifelong task. It won’t get easier.
It’s no easier for me now after almost 40 years – except that I’ve learned ways to streamline the process. The work still needs to get done. And like your lawn, even when it’s finished it’s never finished: it’ll grow back next week. (More like tomorrow, here in Tennessee in the summertime).
If you approach practicing as a track that follows a straight line, it definitely makes it easier to organize and track your progress. This is an important element for some people, especially if you have the kind of mind that prefers structure. But you still need to keep in mind that when you reach the end, you’re not finished. You’ve simply completed one cycle. You’ll need to go back over it again. And every time you do, hopefully you pick up more detail, more familiarity, more accuracy. And if you find you aren’t, vary that pattern. Take a different approach, look at the problem from a different angle. This kind of creative thinking is the basis for problem-solving.
Whether your mind tends towards a single linear order or circular, interconnected set of patterns, both express something important about both the structure and performance of music.
Linear order organizes your knowledge and process. Circular, pattern thinking reveals interconnectedness and multiple pathways through the same territory. Both are essential to developing the real skills you need to play music the way you want to.
So pay attention next time you mow the lawn, you might learn something important. | https://www.nashvilleguitarguru.com/perpetual-beginner/lawn/ |
The Empire of Murmanityed was established in the year 5449 PN, well after the fall of Naduum. The Bronze Elf Murman established the Empire by conquering and enslaving a massive barbaric tribe of desert-dwelling humans. A priest of Hylarr, Murman established a massive pyramidal temple to his patroness, and began sacrificing slaves to her. Rewarded by his dark goddess, Murman led his Bronze Elves - and their enslaved barbarians - to conquer the rest of the central desert. For four hundred years the sword of Murman cut across the continent. Thousands of barbaric peoples were ground under his heel; enslaved and absorbed into the growing Empire. By Murman's death in the year 4999 PN, the Empire had engulfed the great desert and numbered over twenty million slaves - ruled by less than a hundred thousand Bronze Elves.
Murman was a strict ruler. Every city of Murmanityed was built on the same plan. Every military unit founded exactly like his Great Legion. Every city was centered around four temples - Gathal, Phane, Hylarr, and Shalokar.
News of Murman's success brought other children of Naduum to the Empire. Two powerful Bronze Elves took over the Empire after Murman's death, passing their titles down into history - the Shadow King, a Paladin of Gathal, and the Witch Queen, a Priestess of Phane. The Shadow King rules over the military conquests of the Empire, while the Witch Queen governs the home lands.
So the Empire endured and expanded over the next five thousand years - expanding slowly and steadily, establishing new cities in the interior. By the year 1 TC, the Empire ruled every living soul on the continent, except for the Dragon Kingdoms, the Realms of the Free Kings, and the distant and weak Kingdoms of the North.
The invasion of the Kingdoms of the North - in search of the magical Crystals Nagul believed had been lost there - brought about the end of the Empire. Though the Northern Kingdoms were conquered, they rebelled - won their freedom - and heroes of those lands invaded the Empire, assassinated the Witch Queen, and destroyed Murdamiya. The Empire shattered into a hundred fragments, each trying to reclaim their lost power.
Culture of the Empire
Murmanityed is ruled by the dual monarchy of the Witch Queen and Shadow King, both powerful figures in their own right. The Witch Queen oversees the church and religious affairs, the courts system, all aspects of magic, and the educational system. In short, domestic affairs. The Shadow King holds sway over the military and conducts all affairs of foreign policy, including the waging of Murmanityed’s constant wars. The Shadow King is also charged with enforcing the laws and protecting the church and institutions of the Empire.
The most pervasive aspect of Murmanityed’s society is their religion. Based around the worship of four prime deities, the temples of the church are in every town and outpost. They are large, basalt pyramids surrounded by large moats and four looming statues. The capstone of the pyramids are always made of a single huge crystal. The witch-priestesses and warlock-monks administrate the daily affairs of the temple, including ritual sacrifices of everything from animals to monsters to citizens – though it is said that all those sacrificed are traitors or criminals or other rubbish. The earthly remains are disposed of in the wide moats, which are inhabited by gigantic crocodiles.
Unbeknownst to the populace, the crystal capstone captures the life energy of the sacrificial victims. The dark clergy of Murmanityed draw upon this power for their spellcasting, and a tithe is pulsed through the ether to the pyramid of the Witch-Queen, the most powerful spellcaster in the whole Empire. All magic is controlled by the clergy, be in wizardry or clerical in nature, and all of it is powered by the lives taken on the basalt altar.
The four deities worshipped by the people of Murmanityed are Goz, Phet, Lyrra, and Lokar. Goz is the Lord of Warriors and Battles, patron of the Shadow King and warriors. Goz is actually Gathal, the evil god of chaos. Phet is the Lord of Nature and Wild Things. Phet is actually Phane, the evil god of death and decay. Lyrra is the Goddess of Fertility and Magic, patron of the church. She is actually Hylarr, evil goddess of the moon and darkness. Lokar is the most popular of the Murmanityedi pantheon, the Lord of Luck and Fortune. He is actually Shalokar, a minor god who oversees treachery, lies and deceit. Shalokar himself subverted the ancient religion of the Bronze Elves, and replaced it with his own creation. It delights him to no end that he is worshipped on the same level with the three great powers of evil, and to have hoodwinked an entire Empire into his false faith is one of his greatest achievements. As for the three evil gods, they give no aid to their worshippers, often taking pleasure in visiting the worst fates upon those who are the most devoted to them. The irony of Shalokar’s triumph is not lost on them, and so they allow it to continue.
Murmanityedi society is dominated by the Bronze Elves, a race of cruel, stern and wickedly brilliant Elves from the great deserts. Bronze Elves are long-lived, skilled with magic, and utterly ruthless when it comes to the Empire. It is their belief that they were created to rule the lesser races, and rule they shall. It is their lot in the universe to enlighten the more primitive races by bringing them under their protection, and if a few million of them must perish to maintain Bronze Elf dominance, then it is a small price to pay. Only other Elves are considered near worthy of respect, and even then they are considered unfortunate ‘lesser’ breeds of Elf, fit only as consorts or breeding stock to be ‘uplifted’ into the Bronze Elven race. Bronze Elves control the church and the aristocracy, and through those two arms the army.
It is against the laws of Murmanityed for a non-Elf to meet an Elf’s eyes without permission.
The Bronze Elven witch-priestesses accompany every warrior of note into the field, to observe them for signs of disloyalty. The Army of Murmanityed is the only outlet in a rigid caste system, allowing members of any race or class to rise in rank by merit. However, high command is always held by an Elf, and every important officer has a spiritual advisor to watch over them.
The Murmanityedi Army is massive and powerful. Due to the vast size of the Empire, Murmanityedi Legions are standardized, each being ten thousand troops strong. One thousand light cavalry, two thousand archers or slingers, three thousand light infantry and four thousand heavy. Forces of mercenaries from the outlying provinces are often employed as shock troops, and are allowed to follow their own command structures so long as they accept the spiritual advisors with their officers. Murmanityed is vast, and many regions support specialized mercenary units that serve for decades away from home.
The Legions are always in action on some front. Murmanityed has enemies on every side and foes within.
There is some small resistance to the heavy-handed Bronze Elf government. Bandits eke out a living in the countryside, and thieves guilds lurk in the dark corners of every large city. Periodic efforts of the ruling class to wipe these elements out come every century or so, but the last seed of resistance is difficult to kill. | https://the5kingdoms.com/wiki/index.php?title=Murmanityed |
Copper catalysts for the complete oxidation of hydrocarbons, supported on natural zeolites of different structure and origin, were prepared by an ion-exchange procedure. The catalytic experiments demonstrate that the temperature at which hydrocarbon conversion begins is in the range of 170-300°C, depending on the composition of the catalyst. Complete conversion can be observed for both zeolites, depending (probably) on the Si/Al ratio of the zeolite matrix. Different states of copper have been identified by UV-VIS and XPS spectroscopies and by temperature-programmed reduction (TPR) with hydrogen. Whereas no changes in XRD or 27Al MAS NMR were observed under the conditions of the catalytic runs, supporting the conclusion that the bulk material is stable, XPS reveals significant alteration of the surface composition under different treatments due to the appearance of complicated copper nano-species, which are responsible for the catalytic activity. | https://tpu.pure.elsevier.com/en/publications/formation-of-catalytically-active-copper-nanoparticles-in-natural
If you are considering a career change to web design, you may be wondering what the job outlook is like for this field. The good news is that web design is a growing industry with plenty of opportunities. However, it's crucial to consider all aspects of the job before deciding on a new career.
This article will cover all of the essential points to help you decide whether or not web design is the right career for you. Learning about the job outlook, average salary, and necessary skills will give you a better idea of what to expect from this career path.
What is Web Design?
Web design is the process of creating websites. This includes the layout, content, and function of the site. A Web Designer works with clients to create a website that meets their needs and goals.
What are the Responsibilities of a Web Designer?
The responsibilities of a Web Designer vary depending on the project. In some cases, the Web Designer may be responsible for the entire website, while in other cases, they may only be responsible for specific pages or elements.
The typical responsibilities include:
- Meeting with clients to discuss their needs and goals
- Creating wireframes and prototypes
- Designing layouts
- Adding content such as text, images, and videos
- Testing websites for functionality and compatibility
- Updating websites as needed
What Skills Are Needed for Web Design?
To be a successful Web Designer, you will need various hard and soft skills. Hard skills are specific, teachable abilities that you need to be able to do the job. These can be learned through education or training.
Some common hard skills needed for web design include:
- HTML and CSS for structuring and styling pages
- JavaScript for adding interactivity
- Responsive and mobile-first design
- Familiarity with design tools such as Figma or Adobe Creative Cloud
- Basic SEO and web accessibility principles
On the other hand, soft skills are personal qualities that help you succeed in any job. They cannot be taught and are more difficult to quantify. However, they are just as important as hard skills for web design.
Some common soft skills needed for web design include:
- Creativity
- Communication
- Organization
- Time management
What Training Does a Web Designer Need?
There is no one-size-fits-all answer to this question. The training a Web Designer needs will vary depending on their level of experience and the specific job they are applying for.
Some Web Designers choose to obtain a traditional four-year degree, while others opt for a more affordable certificate course. Many bootcamp and online courses can teach these necessary skills needed to jump-start a new career.
What is the Job Outlook for Web Design?
The job outlook for web design is positive. The industry is projected to grow by about 13%, which is much faster than the average for all occupations. This growth is primarily due to the increasing popularity of responsive design and mobile devices. As technology and the demand for responsive online platforms grow, Web Designers who stay updated with the latest trends will be well placed to meet client needs.
What are the Pros and Cons of a Career in Web Design?
Like any job, there are both pros and cons to working in web design. It's important to consider all aspects of the job before making a decision.
Pros of a career in web design include:
- You can be your own boss: freelancing gives you the freedom to work on your terms.
- There is a lot of room for creativity: you can express your creativity through your work.
- You can work from anywhere: as long as you have a computer and an internet connection, you can work from anywhere in the world.
Cons of a career in web design include:
- It can be stressful: meeting deadlines and dealing with clients can be stressful.
- It's a competitive field: there are many qualified designers competing for jobs.
- It can be a solitary job: working from home can be lonely and isolating.
Is Web Design Right For Me?
Now that you know more about the ins and outs of web design, you can decide if it's the right career for you. If you have a passion for design and enjoy working with technology, then web design may be a good fit. However, if you're looking for a position with little room for creativity or growth, this type of career may not be the best choice.
Is Web Design a Good Career Path?
Now that you know more about the field of web design, you can decide if it is the right career path for you. There are many things to consider, such as job responsibilities, skills needed, and salary potential.
If you are interested in a creative field that offers plenty of growth opportunities, then web design may be a good career for you. Just be sure to do your research and explore all aspects of the job before making a final decision. Signing up for web design classes is an excellent way to learn more about the field before committing to a certificate course or traditional degree. This can help you save money overall and determine whether this is a career you want to pursue.
For those who want a more immersive learning experience, a live online web design bootcamp is a great way to build new skills without commuting to a physical school campus. The curriculum for these courses is the same as in-person classes but allows students to learn from the comfort of their own homes. Bootcamp classes teach all of the skills needed to start a career in web design without the time or cost commitment required with a traditional degree.
Bootcamp courses also allow students to complete hands-on projects and gain a portfolio of experience to use during their job search. This is crucial in a field with such high demand and constantly evolving technology. Staying up to date in the industry is a must for Web Designers, and classes from Noble Desktop are an excellent way to keep building their skillset. If you are ready to start your new career in the design field, try searching for live online web design bootcamps in your area to learn more. | https://www.nobledesktop.com/classes-near-me/blog/is-web-design-a-good-career-path |
The advancement of forensic science has helped make crime scene investigations and forensic lab work more effective and reliable than ever before. Earning a criminal justice degree with a concentration in forensic science, criminology or crime scene investigations can help you develop vital analytical skills related to ballistics, fingerprint analysis, toxicology, DNA analysis and more.
Request information from the criminal justice schools shown here to learn about degrees and training programs in forensic science, crime scene investigations and criminology. Requesting information from multiple schools will help ensure you find the one that’s right for you. | https://www.crimesceneinvestigatoredu.org/school-listing/ |
Proper construction site preparation leads to creation of a safe environment for productive working. Taking time to prepare a site makes it compliance with the local codes and professional construction standards. Here is a step by step guide for preparing a construction site.
Step 1: Site Clearing
This entails clearing the place where a building will be constructed. The ground is also graded. Site clearing may entail demolition of buildings, tree removal, and removal of underground structures as well as obstacles that can affect the building process or hinder proper completion of the project.
Step 2: Site Surveying
If the building block is not identified clearly using survey pegs, it’s impossible to be certain that the building is being erected on the right block. Site surveying entails marking out the location of the structure. This step is not optional for most permitting and zoning processes. Surveying is basically about translating the construction plan of the contractor into its physical representation on the site.
Step 3: Soil Testing
This should be done before purchasing the site. Soil composition determines the ability of the site to withstand the structure. The soil should be tested to determine if it absorbs water. Essentially, soil testing must be done in most cases before the structural work commences.
Step 4: Plan Designing
Once the soil has been tested and the necessary septic tanks and drainage installed, design modification follows. This indicates where fixtures like the septic systems will be placed. Until the site has been designed, nothing else can be done. A permanent record of the site's underground features should also be made at this step. Any construction site is seen as a living thing: it keeps changing, and slight changes caused by rock formations, for instance, should be recorded for future reference.
Step 5: Site Investigation
Geotechnical investigation is done to characterize rock, groundwater, and soil conditions. This process entails the collection and evaluation of information about the site's condition. This is crucial for the purpose of foundation design and construction of the structure.
Following these steps ensures that the site is suitable for construction. Nevertheless, site preparation is a job that should be completed with the help of professionals. | http://www.tahitireferendum.com/construction-site-preparation-a-step-by-step-guide/ |
Permissive hypercapnia in acute respiratory failure.
To evaluate the potential efficacy of pressure limitation with permissive hypercapnia in the treatment of acute respiratory failure/adult respiratory distress syndrome on the basis of current theories of ventilator-induced lung injury, potential complications of systemic hypercarbia, and available human outcome studies. Articles were identified through MEDLINE, reference citations of published data, and consultation with authorities in their respective fields. Animal model experimentation and human clinical trials were selected on the basis of whether they addressed the questions of pressure limitation with or without hypercapnia, the pathophysiologic effects of hypercapnia, or the concept of ventilator-induced parenchymal lung injury. Frequently cited references were preferentially included. Data were analyzed with particular emphasis on obtaining the following variables from the clinical studies: peak inspiratory pressures, tidal volumes, minute ventilation, and PCO2. Quantitative aspects of respiratory physiology were used to analyze the theoretical effects of permissive hypercapnia on ventilatory requirements in normal and injured lungs. Extensive animal model data support the hypothesis that ventilator-driven alveolar overdistention can induce significant parenchymal lung injury. The heterogeneous nature of lung injury in adult respiratory distress syndrome, with its small physiologic lung volume, may render the lung susceptible to this type of injury through the use of conventional tidal volumes (10 to 15 mL/kg). Permissive hypercapnia is an approach whereby alveolar overdistention is minimized through either pressure or volume limitation, and the potential deleterious consequences of respiratory acidosis are accepted. Uncontrolled human trials of explicit or implicit permissive hypercapnia have demonstrated improved survival in comparison with models of predictive mortality. Avoidance of alveolar overdistention through pressure or volume limitation has significant support based on animal models and computer simulation. Deleterious effects of the associated hypercarbia in severe lung injury do not appear to be a significant limiting factor in preliminary human clinical trials. Although current uncontrolled studies suggest benefit, controlled trials are urgently needed to confirm these findings before adoption of the treatment can be endorsed.
Manchester Art Gallery
1 Feb 2019 - 6 May 2019
For a brief time at Manchester Art Gallery, a touring exhibit of Leonardo Da Vinci's sketches is showing a behind-the-scenes look at his work, with personal insight into his interests, studies and humor.
The room is crowded with fellow visitors, a dimmed space to protect the delicate pages. There is a light buzz about the accessibility of seeing the revered artist's work in person. The exhibition presents a range of his sketches in pen, ink, watercolour and chalk: delicate, articulate pieces exploring fetus development, the ideal human proportion and other studies.
Three pieces particularly captured my attention -
‘Two grotesque profiles’ 1485-90 (pen and ink wash) - a satirical look at human proportion. From studying the ideal mathematical formula of beauty, Da Vinci could distort it to create ‘ideal ugliness’; the composition parodies the typical 15th-century couple portraits being painted at the time. For me, this sketch presents a side of his humor and satirical abilities; without a large awareness of his private life, little highlights like this are all we can find.
‘ A woman in the landscape’ 1517-18 (Black chalk)
This is his most mysterious drawing, given its unknown origin; the most plausible interpretation, however, is that it is based on Dante's ‘The Divine Comedy’. The sketch would then place us in Dante's position as Matelda indicates her earthly paradise to us. What draws me in is how different this sketch is compared to the others on show; it is closer to the aesthetic of his famed paintings. There is movement and a dream-like quality. The chalk adds texture while highlighting just enough detail in her face.
‘ Studies of men in action’ 1508 (Black chalk, pen and ink)
This is one of a sequence of thumbnails attempting to capture every action of man, of which, according to Da Vinci, there are 18.
This piece from the exhibit is my favorite, different in composition and unique in subject. The small men look like cartoons from a 90s football annual. There is a lot of negative space at the top, while the subjects are cramped into the footer.
Overall the exhibit is beautifully curated and has a good selection of sketches, from sketchbook work including his infamous backward writing to individual studies of a range of subjects. I suggest going at an off-peak time to have more space and intimacy with the artwork instead of fellow visitors. Because it is a tight space it can get heated, and you can end up queuing to view pieces, which adds a sense of guilt at your concentration on a sketch and makes you move along. | http://www.kerryanncleaver.com/blog
Working at Tesla Motors is a dream for many, but what is it really like? This blog post will deeply dive into Tesla’s work schedule, examining how many days and hours employees are expected to work.
According to current and former employees, Tesla operates on an alternating schedule of 3-4-4-3 working days, with a minimum of 10 hours per day and often pushing up to 12 hours per day. However, it’s not uncommon for employees to work 5 or 6 days a week, including weekends and holidays.
It’s important to note that working at Tesla comes with a heavy workload and an expectation of overtime, which can be demanding on one’s personal life. Therefore, it’s crucial that the compensation and benefits offered by Tesla align with the job’s demands.
This article explores Tesla’s employee work schedule, whether you get paid overtime, how stressful the job is, and other FAQs relating to Tesla’s working environment and schedule.
How Many Days Do Tesla Employees Work?
At Tesla, your work schedule will likely involve alternating between working 3 and 4 days per week. However, there have been reports of mandatory overtime, which can increase this to working 5 or even 6 days a week.
This is particularly true in high-demand periods or during production ramp-up. Tesla’s business is fast-paced and fast-changing, and the employees expect to adapt and deliver accordingly.
A four-day work week can often be found in manufacturing, and many plants run 3 shifts if they are dedicated to operating 24/7. This can work well, as long as you’re not expected to work 5 days a week and you’re guaranteed at least 3 days off per week. With this schedule, you can take 2 vacation days and have up to 7 days off.
It’s also worth noting that some employees have reported being called in for work at unexpected times, such as 2 AM on a Saturday. This doesn’t happen often, but it’s something to remember.
While working at Tesla may seem exciting and prestigious, the job can be demanding, with long hours and the potential for overtime. On the other hand, if you’re in a department that offers 3 days off per week, you’ll have more time to rest and recharge.
How Many Hours Do Tesla Employees Work?
Many Tesla employees report working anywhere from 36 to 72 hours a week, with many comments suggesting that some people were promised certain hours and then the company changed its mind and forced 5- or 6-day workweeks. “Long hours” is a common theme in Tesla Glassdoor reviews.
According to many comments from Tesla workers, they typically work shifts of 10 to 12 hours at the Tesla Fremont factory. Employees from other factories report similar, long hours with potential (sometimes mandatory) overtime in both hours and days.
However, this is not the case at every Tesla factory. For example, Tesla factories don’t allow working more than 12 hours, but sometimes you may get paid more. Plus, assembly line workers reported that the work could be exciting and quick, and that time flies by fast at the Tesla Fremont factory in California.
It’s worth mentioning that the work schedule at Tesla can vary depending on the specific role and department you’re in. For example, some employees in management or engineering roles may have a more traditional 9-5 schedule, while those in production or manufacturing roles may have longer hours or rotating shifts.
Tesla factory workers don’t generally have 8-hour shifts, and you only get overtime after 10 hours. The number of hours can vary every other week, with some workers reporting working anywhere from 30 to 80 hours.
While 10 hours for four days is standard in the automotive industry, 12 hours is not, as employees lose efficiency because such long shifts wear people out. Tesla interns have similar hours, and while interns can do this for a few months, doing it for years can be difficult and is not for everyone.
How many hours do Tesla engineers work?
Tesla engineers work similar schedules to everyone else. While 40 hours a week is the official expectation, overtime is generally expected as well. Engineers in management roles may have a more traditional 9-5 schedule, while those in production or manufacturing roles may have longer hours or rotating shifts.
Tamer Shaheen shares his experience at Tesla, where he worked as a mechanical design engineer, and he said:
“Nine to seven is normal, but it’s not surprising to see people working there until 8 or 9 pm.”
He said that working at Tesla is a bit stressful but not terrible. He had to solve many interesting problems, which forced him to grow as an engineer and learn many cool things.
According to some former employees, engineers at Tesla are expected to work an average of 40-50 hours per week, which can include working weekends or holidays. However, remember that this may vary depending on the specific project or department.
Here’s a video where he explained what it’s like to work at Tesla as an engineer:
How Many Shifts Does Tesla Have?
Tesla has implemented multiple shifts in some of their facilities, but the number of shifts varies depending on the facility and its production needs. For example, in the Tesla factory in Fremont, California, the company has multiple shifts running 24/7.
For the most part, Tesla runs two shifts of 10 to 12 hours each, though some factories have implemented 3 or even 4 shifts. For example, 12-hour shifts could be scheduled from 6 am to 6 pm for the day shift and 6 pm to 6 am for the night shift.
The length of shifts can vary depending on the facility, department, or production needs. It’s best to check with the specific facility or department to confirm the number of shifts and the schedule.
Does Tesla Allow Remote Work?
Remote work has been largely eliminated at Tesla. In a widely reported email to Tesla employees, Elon Musk, the CEO of Tesla, asked all remote employees to come to the office and work at least 40 hours a week, or the company would assume they had resigned.
It is unclear whether any remote workers remain at Tesla. The company allowed some employees to work remotely during the COVID-19 pandemic, but it later struggled to bring those employees back into its offices.
Do Tesla Employees Work on Weekends?
Tesla employees, particularly those in production or manufacturing roles, may be required to work on weekends. This is particularly true during periods of high demand or new product launches.
Depending on the factory and the department, you may be expected to work on weekends, especially during high production needs. In other departments, you may get weekends off and work only about four days a week.
Do Tesla Employees Work on Holidays?
Most holidays at Tesla are paid, but you may have to work many of them anyway, except for Christmas, Easter, and Thanksgiving; even those can be working days for engineers. Employees are reportedly paid double on holidays, and Tesla typically offers about 7 to 9 paid holidays per year.
To receive holiday pay, you must typically work the day before and the day after the holiday, which is standard practice. Employees also get 10 to 15 days of PTO (paid time off).
Do You Get Paid Overtime at Tesla?
Tesla pays overtime for most hourly employees, but it starts after 10 hours of work, not eight as at most companies. In a survey of 229 Tesla employees, 76% said that overtime is paid on time, at about 1.5 times the hourly rate.
Under the Fair Labor Standards Act (FLSA), non-exempt employees must be paid time and a half for any hours worked over 40 in a workweek. Exempt employees, such as some salaried managers and executives, are not entitled to overtime pay. There are many exemptions, so it is worth checking which category your role falls into.
Depending on your location, there is a lot of overtime at Tesla and an expectation to work over 40 hours. According to one Tesla employee, you are sometimes required to work mandatory overtime, but this may not be the case at every Tesla factory.
Is Working at Tesla Stressful?
Given Tesla’s recent working conditions and policies, work can be moderately stressful at many factories. When you consider that overtime can be mandatory and weeks can stretch to five or even six days, the picture is one of poorer work/life balance at Tesla than at more flexible Big Tech companies.
The expectation at Tesla is that you work over 40 hours a week, which creates constant pressure to put in long days. Getting a job at Tesla is difficult enough; keeping up with the pace is another challenge. The stress level will also depend on your team and manager, and on how far they let their people burn out during crunch times.
There are a lot of great supporting teams at Tesla, and you’ll be helped and trained well from the start. You’ll have a lot of chill weeks of 40 hours, but there will also be hectic weeks of above 60 or 70 hours, and you should be ready for that as well.
FAQs
Are Tesla employees paid overtime for weekend shifts?
Tesla employees are generally entitled to overtime pay for any work over 40 hours a week, which also applies to weekend shifts. Weekend shifts are paid at Tesla if you are an hourly employee; the only employees who may not receive overtime pay for weekend work are salaried ones.
It’s always best to check with the company’s human resources department or consult a labor lawyer to understand the company’s overtime policies and ensure that your rights are respected.
Do Tesla Factories Work 7 Days a Week?
There are 10 Tesla factories in the United States, most of which are open seven days a week, with varying hours of operation. For example, the Giga Texas factory in Austin is open from 6 AM to 11:30 PM every day, while the Tesla Factory in Fremont is open from 10 AM to 7 PM daily. Giga Nevada in Sparks is open 24 hours a day, seven days a week.
How Long Is Tesla’s Lunch Break?
Tesla manufacturing employees are required to sign a meal period waiver, which states that they are entitled to an unpaid meal period of at least 30 minutes if working more than 5 hours (and less than 6), and a second meal period if working more than 10 hours (and less than 12).
Here’s an example of the Tesla Meal Period Waiver so that you can check how long the lunch break will be and when you can skip it. You have to sign this waiver when working at Tesla.
Whether this will be enough will depend on the person. You can also waive your lunch break in agreement with your supervisor. This means you can skip the lunch break as long as your shift is 6 hours or less, or skip the second meal if your shift is no longer than 12 hours.
A multispectral camera concept is presented. The concept is based on using a patterned filter in the focal plane, combined with scanning of the field of view. The filter layout has stripes of different bandpass filters extending orthogonally to the scan direction. The pattern of filter stripes is such that all bands are sampled multiple times, while minimizing the total duration of the sampling of a given scene point. As a consequence, the filter needs only a small part of the area of an image sensor. The remaining area can be used for conventional 2D imaging. A demonstrator camera has been built with six bands in the visible and near infrared, as well as a panchromatic 2D imaging capability. Image recording and reconstruction is demonstrated, but the quality of image reconstruction is expected to be a main challenge for systems based on this concept. An important advantage is that the camera can potentially be made very compact, and also low cost. It is shown that under assumptions that are not unreasonable, the proposed camera concept can be much smaller than a conventional imaging spectrometer. In principle, it can be smaller in volume by a factor on the order of several hundred while collecting the same amount of light per multispectral band. This makes the proposed camera concept very interesting for small airborne platforms and other applications requiring compact spectral imagers.
© 2014 Optical Society of America
1. Introduction
Multispectral and hyperspectral imaging techniques can exploit spectral information to generate information products not available with conventional imaging. Examples include vegetation index mapping, land cover mapping, environmental monitoring, and target detection. Spectral imagers tend to be relatively large because of the optics used to extract spectral information. An imaging spectrometer, for example, employs three sets of imaging optics, a slit and a grating or prism. However, there are important practical cases where a compact camera is needed, such as for lightweight unmanned aerial vehicles (UAVs) or handheld equipment, and new camera concepts are being developed for such needs [1–3]. Here we discuss a camera concept for applications where moderate spectral resolution is sufficient, or where spectral resolution must be traded for maximum compactness.
The most compact types of spectral imager employ a patterned spectral filter on the image sensor of a regular camera. Most commonly used is the three-band Bayer filter for color photography. In that case, images with good visual quality are obtained by “demosaic” processing of a single image frame. For applications based on quantitative analysis of spectral information, it is often desirable to have more than three bands. However, extending the filter array to larger band count increases the lateral separation between filters for different bands and leads to progressively higher misregistration between bands in a single image frame. A possible solution is to place a patterned filter in the entrance aperture, and use an array of microlenses to map the filter onto individual detector elements . This approach enables snapshot imaging with higher band counts at the expense of spatial resolution. Alternatively, a patterned filter in the focal plane can be combined with scanning so that each point in the scene is imaged in all bands. This concept is commonly used in remote sensing satellites employing linear array detectors with different spectral filters. Yet another concept employs a “linear variable filter” (LVF) in front of a 2D array image sensor , enabling recording of a large number of spectral bands when the field of view is scanned over the scene.
For multispectral imaging concepts based on patterned filters in the focal plane, the scan motion must be accurately known to ensure spatial coregistration of the different spectral bands. Otherwise there is risk of significant errors in the recorded spectral information, which can significantly degrade the data quality [6,7]. Furthermore, it is potentially problematic that the different spectral bands are recorded at different viewing angles and different times, since the accuracy of the recorded spectrum depends on the angular and temporal variations of the scene spectra, as we discuss below.
Line scan or LVF imagers have normally been built as separate instruments. In some applications, such as UAVs, a spectral imager is often used together with a conventional 2D imaging camera. An interesting exception is a recently described design in which a hyperspectral LVF image sensor is combined with a separate color image sensor behind a common objective lens to form a compact camera for both hyperspectral and conventional imaging.
Along similar lines, we present a camera concept for applications where scanning can be used, for example on a UAV where the platform motion provides the scanning. A multiband filter in the focal plane is patterned so that each band is sampled multiple times during the scan. Only a part of the image sensor area is needed for the multispectral functionality, so that the camera also can be used for conventional 2D imaging. The camera has potential to be used as a compact multifunctional sensor in applications where compactness is essential. We first discuss the camera concept in some detail and then present our implementation in a demonstrator system and the first set of results.
2. Camera Concept
The basic optical layout of the camera is very simple, as shown in Fig. 1(a). An objective lens focuses an image of a scene onto an image sensor with a patterned optical filter. The filter layout has stripes of different bandpass filters across the sensor. By scanning the field of view across the stripes, each scene point can be observed through all the different filters, and its spectral properties can be reconstructed.
Fig. 1.
The image reconstruction is the most difficult aspect of this class of spectral imagers. To record accurate spectral information, it is necessary to track the motion of a scene point accurately in sequential raw images as the point moves across the filter. Data from the raw images must be combined to form a reconstructed output image. Errors in the tracking of scene motion will lead to spatial coregistration errors between bands, potentially resulting in significant errors in the recorded spectra [6,7]. Of course, the images themselves can be used to aid the reconstruction of spectra by tracking scene movements, but the reconstruction remains a nontrivial aspect of this otherwise simple class of spectral imagers. To achieve good spectral coregistration, the output image will typically need to be reconstructed with a lower spatial resolution than the recorded raw images. It is also important to avoid spatial undersampling of the scene in the recording of raw images.
Even if scene movements are tracked correctly, the sequential recording of bands can lead to artifacts in the recorded spectra in two ways: first, if a point in the scene changes in time during the scanning then different bands will tend to represent different states of the scene, leading to errors in the reconstructed spectrum analogous to a spatial coregistration error. Second, if the scan is a linear motion of the camera relative to the scene then different bands view the scene in different angles. If the radiance from the scene depends on viewing angle then spectral artifacts will result. Such angular dependence can easily arise in practice, for example from specular reflections or parallax effects.
Figure 2 illustrates the potentially problematic effect of parallax for the example case of airborne imaging. The different spectral bands are recorded sequentially as the camera moves along the flight path. Typically, the airborne sensor package includes a navigation system, which can be combined with a geometrical model of the terrain to assign a scene position for each recorded pixel. By such georeferencing, it is possible to estimate the amount of light in each spectral band coming from each point in the scene under the flight path, and to construct a spectral image of the terrain. However, the spectrum estimation must make the assumption that the radiance received from the scene is independent of viewing angle during the recording. As seen in Fig. 2, the assumption will not always be valid. In this example case, the trailing “red” band records light from point P on the ground when the camera is at position 2. However, in position 1, the leading “blue” band sees the roof of the building B, which obscures point P. Thus, because of parallax effects, a valid spectrum cannot be obtained for point P. At best, given detailed knowledge of scene geometry, this point can be labeled as invalid in the reconstructed image. These concerns are the same for a camera based on LVF.
Fig. 2.
To minimize signal errors due to time- and angle-dependent scene radiance, the extent of the filter should be minimized in the scan direction. In addition, we introduce multiple repetitions of the filter pattern along the scan direction, as indicated in Fig. 1(b). In the example in Fig. 2, the blue band can be sampled for point P at a later point in the scan if the camera records the bands multiple times. The repeated sampling enables several different strategies for minimizing spectral error, depending on what assumptions can be made about the scene. For smooth angular variations, averaging multiple readings of each band interspersed with the other bands will tend to produce a spectrum representing the scene properties at the middle of the scan. For abrupt variations, such as the parallax case in Fig. 2, a voting scheme can be implemented, or consistency checks can be used to flag unreliable data.
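For illustration, the short sketch below shows one way the repeated samples of a band could be combined, using a median together with a simple consistency flag; the function name, tolerance, and rejection rule are assumptions made for this example rather than part of the demonstrator's actual processing.

```python
import numpy as np

def combine_band_samples(samples, rel_tol=0.2):
    """Merge repeated radiance readings of one band for one scene point.

    samples : sequence of radiance values from the repeated filter stripes
    rel_tol : assumed relative tolerance used to flag inconsistent readings
    Returns the median radiance and a flag that is False when the spread of
    the readings suggests interference (e.g., parallax or a moving object).
    """
    samples = np.asarray(samples, dtype=float)
    median = float(np.median(samples))
    # Readings far from the median hint at angle- or time-dependent radiance.
    consistent = bool(np.all(np.abs(samples - median) <= rel_tol * max(median, 1e-12)))
    return median, consistent

# Example: four repeated readings of the "blue" band, one blocked by a rooftop.
value, ok = combine_band_samples([102.0, 98.5, 101.2, 55.0])
print(value, ok)  # the median is robust; the flag marks the pixel as unreliable
```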
Even with repeated sampling, the extent of the filter may easily be made much shorter in the scan direction than across the scan. This is illustrated by the experimental realization below. For image sensors with normal formats, a large fraction of the sensor area (as well as of the image circle of the optics) might then be left unused. This area can conveniently be used for conventional imaging, as indicated in Fig. 1(b). The resulting camera is then capable of recording multispectral still images by scanning, but it can also be used to record conventional 2D video or still images, all in a very compact package. It can be noted that the image sensor must be capable of handling the larger signal dynamics resulting from having filtered and unfiltered regions, but this is feasible using state-of-the-art silicon image sensors.
Clearly, the 2D images can be used to support the reconstruction of spectral images in various ways, such as by estimation of optical flow. An interesting aspect is that the 2D imagery can be used for reconstructing the 3D structure of the scene. This is useful in itself, and a 3D scene model can also be very helpful for the spectral reconstruction, as has been pointed out previously.
3. Experimental Realization
For our demonstrator, we have selected six bands in the visible and near-infrared (VNIR) spectral range where silicon-based image sensors are readily available. This number of bands is a compromise between fabrication cost and predicted performance. To select the spectral bands, we have used a set of hyperspectral images of natural scenes to synthesize image data for various choices of bands. We have then tested the discriminability of various objects in the images for different band combinations. Somewhat unsurprisingly, we find that good performance is obtained for spectral bands similar to those used on earth observation satellites. It is potentially useful to relate the recorded images to the literature on satellite remote sensing; therefore, we choose the bands shown in Fig. 3. This set of bands also enables rendering of RGB color images, using bands 1, 2, and 4.
Fig. 3.
The patterned multiband interference filter is deposited on a glass substrate. The layout is indicated in Fig. 4. Measured transmission spectra for all six bands are shown in Fig. 5. The filters are laid out in 86 μm wide stripes across the image sensor, separated by 80 μm wide shadow masks to avoid cross talk between bands. For the 7.4 μm pixel pitch, the filter stripes correspond to about 10 unobscured detector pixels across each stripe. The six bands take up a total width of 1 mm, and are repeated four times for a total filter width of 4 mm in the scan direction. The remaining areas of the filter substrate are antireflection (AR) coated. This leaves more than half of the image sensor area for conventional panchromatic 2D imaging. A narrow unfiltered region is left on the outer side of the filter stripes, near the edge of the image sensor, intended for use in motion tracking across the filter region.
Fig. 4.
Fig. 5.
Here we use an AVT GE1650 camera based on a Truesense KAI-2020 monochrome CCD with a 7.4 μm pixel pitch. The filter is placed very close (approximately 20 μm) to the image sensor, essentially forming a proximity focus of the filter pattern, as indicated in Fig. 6. (It is possible to deposit patterned filters during the production of an image sensor, with potential for low-cost manufacturing of large production series. However, this requires significant effort in process development and is thus not a viable option for a demonstrator system.) The outer band limits are set by a filter in front of the objective lens, which blocks radiation outside the range 450–900 nm. Figure 7 shows the assembled camera without the lens. The filter is held in place by a mechanical clamp, with spacers between the filter and the CCD to create an air gap of about 20 μm. For the demonstrator system we use objective lenses which are optically corrected and AR-coated for the VNIR spectral range.
Fig. 6.
Fig. 7.
4. Angle Dependence of Filter Characteristics
In this camera concept, appropriate design flexibility and performance can only be achieved by employing interference filters, whose spectral properties depend on the angle of incidence. This becomes a concern here, since angular variations at the filter are inherent to the concept: the focused cone of light from a scene point spans a range of angles of incidence according to the numerical aperture of the lens, inevitably leading to some broadening of the spectral features of the filter. In addition, the angle of the principal ray of the cone varies according to the viewing direction for a conventional lens. However, if the lens is image-side telecentric then this latter angular variation can be avoided.
Spectral features of an interference filter tend to shift to shorter wavelengths with increasing angle of incidence. The relative wavelength shift at an angle of incidence θ can be approximated by

Δλ/λ ≈ −sin²θ / (2·n_eff²),   (1)

where n_eff is an effective refractive index characterizing the filter.
Figure 8 shows measured spectra for the green band at incidence angles of 0, 12, and 25 deg. There is a significant spectral shift for this relatively large change in angle of incidence. From similar measurements of all spectral edges in the six-band filter, we find that the wavelength shifts can be approximated by Eq. (1) with a single effective refractive index n_eff. The approximated amount of spectral shift is plotted in Fig. 9. Due to the square dependence on angle, the shift can be significant in some cases and insignificant in others, as we discuss in the following.
Fig. 8.
Fig. 9.
The influence of incidence angle on the recorded spectral signal can be estimated by assuming a step-shaped spectrum where two neighboring bands have spectral radiances L1 and L2, constant within each band. Nominally, these are the values recorded by the camera. Let λ1 and λ2 be the nominal boundaries of band 1. Now assume that the band is shifted by δλ toward band 2. The relative radiometric distortion of the signal in band 1 becomes

ΔL1/L1 ≈ [δλ/(λ2 − λ1)] · (L2 − L1)/L1.   (2)

Equation (2) represents the maximum possible error in recorded radiance over all bands for a step-shaped spectrum. This may be a reasonable estimate of errors in many practical cases, noting for example that reflectance spectra of solids in the VNIR range tend to be smooth. In other cases, such as for a line-shaped spectrum, Eq. (2) will underestimate the error due to spectral shift.
The effect of angle tuning across the field of view should ideally be less than the noise. For our camera, a single pixel in a raw image will have an RMS noise of the order of 1%, assuming a partial well fill of 10,000 electrons with Poisson noise as the dominating noise source. However, a pixel in the final output image will typically be an average of multiple raw pixels since the filter layout provides for sampling a scene point about 40 times and since the raw image pixels normally will be resampled to somewhat larger output pixels. Therefore, it can be argued that the output noise level may become significantly lower than 1% from averaging over multiple input pixels, depending on the details of the application.
The chosen bands here have a total width of the order of 10% of the wavelength. If a signal error of up to 1% is permitted then, according to Eq. (2), the spectral shift must be less than about 1% of the bandwidth, or 0.1% of the wavelength. The largest permissible angle of incidence is then about 4 deg according to Fig. 9. If a lower noise level is taken as reference then the variation in angle of incidence will need to be even less. This shows that it is strongly preferable to use an objective lens that is image-side telecentric, so that the focus cone spans the same range of incidence angles on the filter independently of the position in the field of view. The spectral broadening due to angular variation within the focus cone will be relatively unproblematic for telecentric lenses with moderate numerical aperture. If, for example, a broadening of about 10% of the bandwidth is allowed then the half-angle of the cone may be as large as 15 deg, corresponding to an F/2 aperture. These estimates of angle tuning effects are based on the properties of the filter used in our demonstrator. Narrower bands will lead to more stringent requirements.
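As a rough numerical check of these angle budgets, the snippet below inverts the approximate tilt relation in Eq. (1) to find the largest incidence angle compatible with a given fractional wavelength shift; the effective-index value used here is an assumed placeholder, not the measured value for the demonstrator filter.

```python
import math

def max_incidence_angle_deg(allowed_rel_shift, n_eff):
    """Largest angle of incidence (deg) giving at most the allowed relative wavelength shift.

    Inverts the small-angle form of Eq. (1): |dL/L| ~ sin^2(theta) / (2 * n_eff^2).
    """
    sin_theta = math.sqrt(2.0 * allowed_rel_shift) * n_eff
    return math.degrees(math.asin(min(sin_theta, 1.0)))

n_eff = 1.8  # assumed effective refractive index, for illustration only

# 1% signal error with ~10% fractional bandwidth -> shift below ~0.1% of the wavelength
print(round(max_incidence_angle_deg(0.001, n_eff), 1))  # roughly 4-5 degrees

# ~10% broadening of the band (~1% of the wavelength) tolerated within the focus cone
print(round(max_incidence_angle_deg(0.010, n_eff), 1))  # on the order of 15 degrees
```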
We must point out that the dependence on angle of incidence is a more critical concern in our multispectral camera than in some other filter-based spectral camera concepts where the angular variations can be calibrated out. For an LVF-based camera, for example, angular variation across the field of view can be compensated by mapping the resulting shift in filter wavelength across the image sensor and postprocessing the image data accordingly. This is not possible with our set of discrete bandpass filters. On the other hand, the concept presented here enables repeated sampling of bands, which also helps to preserve signal integrity.
5. Light Collection and Camera Size
Photon noise can lead to significant degradation of spectral image processing results , especially in low-light scenes. This is a challenge for commonly used hyperspectral imagers, such as imaging spectrometers and LVF-based cameras, where a very large fraction of the incoming light is rejected, in the slit or filter, respectively. Noise properties of hyperspectral and multispectral imagers can only be compared in cases where the image analysis is not significantly helped by the higher spectral resolution of a hyperspectral camera. This can be the case if photon noise is dominating, or if the bands chosen for the multispectral imager are well adapted to the task at hand. For such a case, we now compare the light collection capability of our camera to that of an imaging spectrometer.
Assume that input pupil area, pixel size, and scan speed are identical so that multispectral and hyperspectral imaging run at the same integration time. Assume that images from the spectrometer are formed by averaging groups of bands in the hyperspectral image to match the bands of the multispectral camera. For a given pixel, the imaging spectrometer collects light in one integration time, while the multispectral camera would see the same pixel over 10 integration times in each of four filter stripes (each 10 pixels wide) for a given band. This gives 40 times more light, although admittedly the comparison makes assumptions that are favorable to the multispectral camera. The input pupil area of the multispectral camera could then be reduced by a factor 40 and collect the same amount of light as the imaging spectrometer, in principle. The volume of a camera scales roughly with the input pupil diameter cubed. Recall also that an imaging spectrometer has three sets of imaging optics (in front of slit, disperser, and image sensor). Thus, under our assumptions the multispectral camera could be made smaller in volume by a factor on the order of several hundred.
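The scaling argument above can be restated as a short calculation under the same favorable assumptions (equal light per band, volume scaling with pupil diameter cubed, and three optical assemblies in the spectrometer versus one in the camera); the numbers below are illustrative only.

```python
# Light-collection advantage: four filter stripes, each seen over ~10 integration times.
light_advantage = 4 * 10

# The input pupil area can shrink by this factor, so the diameter shrinks by its square root.
diameter_factor = light_advantage ** 0.5

# Camera volume scales roughly with pupil diameter cubed.
volume_factor_per_assembly = diameter_factor ** 3      # ~250

# An imaging spectrometer carries three sets of imaging optics to the camera's one.
total_volume_factor = 3 * volume_factor_per_assembly   # ~750, i.e., "several hundred"

print(round(volume_factor_per_assembly), round(total_volume_factor))
```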
6. Preliminary Imaging Results
As a first test, we have used a rotary stage to scan the camera across a simple scene in the lab using an 85 mm focal length lens (Zeiss Planar 1.4/85 ZF-IR) set at F/4. The resulting field of view is small, and we neglect angle tuning and geometrical distortions. The camera was radiometrically calibrated using a reference lamp. The scan movement was nominally 1 pixel horizontally between successive images, but with some irregularity due to inadvertent software jitter. The resulting data then serve to illustrate some challenges that could be encountered in a practical application, for example, a turreted camera on a ground vehicle. We have implemented a relatively simple processing chain for initial image reconstruction. Sample results are shown in Fig. 10.
Fig. 10.
First, a panchromatic mosaic of the scene is created from the unfiltered part of a small subset of the images. This provides a common reference for all images in the sequence. Individual images are then related to the reference image by a homography estimated from a set of corresponding point pairs in the reference image and the unfiltered image section in each image, illustrated in Fig. 10(a). Since the camera undergoes a rotational scanning motion, the homography is simply estimated from RANSAC inliers using the direct linear transformation. The estimated homography for a given frame can then be used to position the spectrally filtered pixels in the pixel coordinate system defined by the panchromatic reference image. For this initial image reconstruction we use a single column of pixels from each stripe in the filter and assign them to the nearest neighbor in the reference image. This results in 24 images representing the 24 filter stripes. The final image is obtained by averaging groups of four images corresponding to the same spectral band. An RGB representation of the final image is shown in Fig. 10(b).
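A minimal version of this registration step can be written with standard computer-vision tools; the sketch below uses OpenCV feature matching followed by RANSAC-based homography estimation. The feature detector, match filtering, and helper names are illustrative assumptions and not the exact pipeline used to produce Fig. 10.

```python
import cv2
import numpy as np

def register_to_reference(ref_pan, frame_pan):
    """Estimate the homography mapping a frame's unfiltered strip onto the reference mosaic."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_pan, None)
    kp_frm, des_frm = orb.detectAndCompute(frame_pan, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:500]
    src = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Direct linear transformation on RANSAC inliers, as in the text.
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def map_filtered_pixels(H, pixel_xy):
    """Map filtered-pixel coordinates (N x 2, frame system) into reference-image coordinates."""
    pts = np.float32(pixel_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```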
The nearest neighbor pixel assignment leads to a few gaps in the 24 intermediate images due to scan irregularities, but these gaps are eliminated in the averaging step. This illustrates one benefit of the multiple sampling of each band. Otherwise this preliminary reconstruction is obviously suboptimal in many ways, such as by not using all raw pixels and by employing a simplistic nearest-neighbor resampling strategy.
Finally, the image in Fig. 10(c) shows the result of a maximum likelihood spectral classification using a set of multinormal distributions to represent the main materials in the image. Distribution parameters have been estimated from a small sample of each material in the same image. The classification mostly works well, but with notable misclassifications at the black–white transitions in the background. This strongly suggests imperfect coregistration of the spectral bands, which is unsurprising given the simplistic reconstruction used here.
7. Discussion and Conclusions
We have presented a concept for multispectral imaging based on patterned filters in the focal plane and scanning of the field of view. As with many other spectral imaging techniques, there is a risk of spectral artifacts if the radiance from the scene varies with angle or time within the scan. Here, we minimize the risk of spectral errors by making the filter short in the scan direction and by repeated interspersed sampling of the spectral bands. The repeated sampling enables strategies to preserve the integrity of the spectral signal, such as the averaging and gap-filling in our preliminary image reconstruction. Still, the image reconstruction remains a main challenge for this class of spectral imagers. On the other hand, there is potential to make the camera very compact. Spatial downsampling may be necessary to obtain an output image with good coregistration of bands. Angular dependence of the filter characteristics is a potential issue which can be managed by appropriate choice of objective lens.
By minimizing the extent of the filter in the scan direction, most of the image sensor area can be used for conventional 2D still or video imaging. The 2D imagery can be used to support the reconstruction of spectral images, as demonstrated by a simple example here. In cases where the scan is a linear motion, it will also be helpful to use the 2D imagery to reconstruct the 3D shape of the scene.
In summary, the multispectral imaging concept presented here offers a multifunctional camera in a compact package. The concept also has disadvantages, notably the limited spectral resolution and the nonsimultaneous sampling of bands. Still the concept appears attractive in applications where compactness and light weight is critical, since it has potential to bring down the size of spectral imaging sensors from kilos to grams.
References
1. N. Tack, A. Lambrechts, P. Soussan, and L. Haspeslagh, “A compact high-speed and low-cost hyperspectral imager,” Proc. SPIE 8266, 82660Q (2012). [CrossRef]
2. H. Saari, V.-V. Aallos, C. Holmlund, J. Mäkynen, B. Delauré, K. Nackaerts, and B. Michiels, “Novel hyperspectral imager for lightweight UAVs,” Proc. SPIE 7668, 766805 (2010). [CrossRef]
3. M. Pisani and M. Zucco, “Compact imaging spectrometer combining Fourier transform spectroscopy with a Fabry–Perot interferometer,” Opt. Express 17, 8319–8331 (2009). [CrossRef]
4. D. B. Cavanaugh, J. M. Lorenz, N. Unwin, M. Dombrowski, and P. Wilson, “VNIR hypersensor camera system,” Proc. SPIE 7457, 745700 (2009). [CrossRef]
5. A. M. Mika, “Linear-wedge spectrometer,” Proc. SPIE 1298, 127–131 (1990). [CrossRef]
6. P. Mouroulis, R. O. Green, and T. G. Chrien, “Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information,” Appl. Opt. 39, 2210–2220 (2000). [CrossRef]
7. T. Skauli, “An upper-bound metric for characterizing spectral and spatial coregistration errors in spectral imaging,” Opt. Express 20, 918–933 (2012). [CrossRef]
8. X. Sun, “Computerized component variable interference filter imaging spectrometer system method and apparatus,” U.S. patent 6,211,906 (3 April 2001).
9. J. Biesemans, B. Delaure, and B. Michiels, “Geometric referencing of multi-spectral data,” Patent application EP2513599 A1 (2012).
10. T. Skauli, “Imaging unit,” Patent application NO20130382 (2013).
11. I. Kåsen, A. Rødningsby, T. V. Haavardsholm, and T. Skauli, “Band selection for hyperspectral target-detection based on a multinormal mixture anomaly detection algorithm,” Proc. SPIE 6966, 696606 (2008). [CrossRef]
12. W. J. Smith, Modern Optical Engineering, 3rd ed. (McGraw-Hill, 2000), p. 208.
13. T. Skauli, R. Ingebrigtsen, and I. Kåsen, “Effect of light level and photon noise on hyperspectral target detection performance,” Proc. SPIE 6661, 66610D (2007). [CrossRef]
14. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University, 2003).
Mass spectrometry is a method of choice for quantifying low-abundance proteins and peptides in many biological studies. Here, we describe a range of computational aspects of protein and peptide quantitation, including methods for finding and integrating mass spectrometric peptide peaks, and detecting interference to obtain a robust measure of the amount of proteins present in samples.
Mass spectrometry (MS)-based quantitative proteomics has been applied to solve a wide variety of biological problems, and several MS-based workflows have been developed for protein and peptide quantitation (Fig. 1). In mass spectrometric quantitation methods it is usually assumed that the measured signal has a linear dependence on the amount of material in the sample for the entire range of amounts being studied. A prerequisite for accurate quantitation is that unwanted experimental variations in sample extraction, preparation, and analysis be minimized, and it is therefore critical that each step in the workflow is optimized for reproducibility.
One way of optimizing the reproducibility is to label the samples with stable isotopes, mix them together and perform the subsequent sample-handling steps on the mixed sample. The earlier in the workflow that the stable isotope label is introduced and the samples mixed, the smaller is the effect of variations in sample handling. Metabolic labeling (1, 2) provides the earliest possible introduction of stable isotope labels into the sample (Fig. 1a). Here, labels are introduced as isotopically distinct metabolic precursors, and the samples can be mixed before all subsequent steps in the work-flow. It is important to monitor the level of incorporation of the label, but this can, for example, be done by using two heavy labels that are incorporated into the samples with equal efficiency (3). In cases when metabolic labeling is not feasible, the stable isotope labels also can be introduced later in the workflow (4–9) by heavy isotope labeling of proteins (Fig. 1b, c) or peptides (Fig. 1d–f). In general, stable isotope labels need to be designed carefully in order to prevent introducing systematic errors caused by dissimilar behavior of the compounds with different labels. For example, it has been observed that using hydrogen/deuterium substitution in the heavy label can affect the retention time of the labeled peptides, while 12C/13C substitution does not have any observable effect on the retention time (10).
Label-free methods (11–13) for quantitation are often used when the introduction of stable isotopes is impractical (e.g., in many animal studies) or the cost is prohibitive (e.g., in biomarker studies where a relatively large number of samples need to be analyzed). Three label-free quantitation workflows are shown in Fig. 1g–i. In these workflows the different samples are analyzed separately and it is therefore critical that each step of the workflow is carefully optimized for reproducibility. In label-free quantitation workflows, usually the peptide ion peaks are integrated and used as a measure of quantity. This allows the quantity of protein and peptides to be compared in different samples (Fig. 1g) or the absolute quantity can be calculated using a standard curve (Fig. 1h). The peptide fragment ions can also be used for quantitation by integrating one or more of their peaks (Fig. 1i) as, for example, in Multiple Reaction Monitoring (MRM) (14). Using fragment ions for quantitation provides increased specificity because in addition to requiring the mass of the precursor ion be close to its predicted mass, the masses of the fragment ions are also required to be correct. Because peptides fragment in a sequence-specific manner, additional specificity can be gained by requiring that the relative intensities of the fragment ions do not deviate from the expected intensities. Alternative methods for quantitation using fragment mass spectra do not integrate peaks but are based on the results of searching protein sequence collections (see Note 1).
Currently, there are several software packages available for analysis of data from these different workflows where the quantitation is done by integrating peaks of ions that correspond to peptides or their fragments (see Note 2 for a few examples). Here, we describe how the mass spectra are processed to allow for finding the peptide peaks, detecting interference, and integrating the peaks to obtain a measure of the amount of material present in the samples.
Peptide peaks of interest for quantitation may range between smooth peaks with a large signal-to-noise ratio and noisy peaks that are barely above the background. The width of these peaks is, however, characteristic of the resolution of the mass spectrometer, the data acquisition parameters used, as well as the mass-to-charge ratio (m/z) of the peptide. Therefore, peaks can readily be detected by scanning the mass spectra for local maxima of the expected width (see Note 3). In addition, peptides are not observed as a single peak in mass spectrometry, but as a cluster of peaks, because of the presence of small amounts of stable heavy isotopes in nature (e.g., 1.11% 13C) and the fact that each peptide contains many carbon atoms. The relative intensities of the peaks in these isotope clusters are characteristic of the atomic composition of the peptides and they are strongly dependent on the peptide mass (Fig. 2a–c, see Note 4).
A majority of quantitation experiments are performed by coupling liquid chromatography with mass spectrometry, which introduces a retention time dimension. During these experiments, usually the same peptide is observed during several adjacent time points (Fig. 2d–g) with highly abundant peptides typically being observed over larger time windows than low-abundance peptides. But even with separation in both m/z and retention time, it is not uncommon to have unwanted interference between peaks from different peptides (Fig. 2e, g).
The following characteristics of peptide peaks can be used as filters to differentiate them from interfering and non-peptide peaks: (1) the width of individual peaks in m/z and retention time, (2) the intensity distribution of the isotope clusters, and (3) the measured peptide m/z. These characteristics are shown in Fig. 3 for two peptides. The width of individual peaks as a function of m/z is highly characteristic of the instrument parameters with very little variation and therefore a narrow peak width filter can be used. The width of individual peaks as a function of retention time (Fig. 3a–c, j–l) shows larger variation. This variation is mainly dependent on the peak intensity and the elution time, although strong peptide sequence dependent variation can also be observed, and therefore a wider filter must be applied. High-accuracy measurement of peptide mass is a sensitive and selective filter that is highly reproducible even at the tails of the peak where the intensity is low (Fig. 3g–i, p–r). The shape of the isotope distribution is also a sensitive and selective filter that can be used to detect interference from other peaks (Fig. 3d–f, m–o). A convenient measure of the similarity of isotope distributions is the dot product (see Note 5) between them (Fig. 3f, o). The dot product can be applied to compare sets containing any number of peaks, for example, to detect interferences when a set of fragment ions is monitored in a MRM experiment. In the example shown in Fig. 3, dot product analysis of the chromatograms shown in the panels on the right shows that only the first isotope cluster corresponds to the peptide of sequence YVLTQPPSVSVAPGQTAR, while the second and third peaks are interfering peaks from peptides whose first three isotope peaks have a similar m/z, but their relative intensity is different.
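As a simple illustration of this filter, the sketch below scores a measured isotope (or fragment) pattern against the theoretical pattern with the normalized dot product of Note 5; the 0.98 acceptance threshold is an assumed value that would in practice be tuned from data.

```python
import numpy as np

def normalized_dot(measured, theoretical):
    """Normalized dot product between measured and theoretical intensity patterns."""
    I = np.asarray(measured, dtype=float)
    T = np.asarray(theoretical, dtype=float)
    return float(np.dot(I, T) / (np.linalg.norm(I) * np.linalg.norm(T)))

def interference_free(measured, theoretical, threshold=0.98):
    """Flag a peak cluster as clean when its pattern matches the expected one."""
    return normalized_dot(measured, theoretical) >= threshold

# Example: a clean isotope cluster versus one whose second peak is inflated
# by an overlapping peptide of similar m/z.
theory = [1.00, 0.85, 0.45, 0.18]
print(interference_free([0.98, 0.86, 0.44, 0.19], theory))  # True
print(interference_free([0.98, 1.60, 0.44, 0.19], theory))  # False
```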
The quantity of peptides is measured by calculating the height or the area of the corresponding peaks in the ion chromatograms. Careful background subtraction is essential for accurate determination of both the height and the area of peaks (see Note 6). The advantage of using the height of the peak as the measure of quantity is the simplicity and robustness of its calculation (e.g., the average or median height for a few points around the centroid can be used). The peak height is a good measure of quantity if the width of the peak does not vary between samples and the signal is strong with little noise. In contrast, the peak area is a better measurement of quantity when there is substantial noise because many more data points are used, but it is much more sensitive to interference from other peaks because of the larger area in the m/z and retention time space that is used. The difficulty in calculating the peak area is in deciding where the peak ends and the background starts in both m/z and retention time dimensions. This determination can be very challenging for peaks with long tails. It is also important to use the same peak limits for a specific peptide in all samples. One way of circumventing the problem of finding the peak limits is to select a function and fit its parameters (e.g., centroid, width, skewness, etc.) to the peak and integrate the function. However, often it is not straightforward to find a function that fits well to all peaks in the spectrum.
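The sketch below illustrates the two measures of quantity discussed here: peak height taken as a median around the apex, and peak area integrated between fixed limits after subtracting a locally estimated background. The window sizes and helper names are assumptions chosen for the example.

```python
import numpy as np

def peak_height(intensity, apex_index, half_window=2):
    """Robust peak height: median intensity in a small window around the apex."""
    lo, hi = apex_index - half_window, apex_index + half_window + 1
    return float(np.median(np.asarray(intensity[lo:hi], dtype=float)))

def peak_area(time, intensity, start, stop, bg_points=3):
    """Peak area between fixed limits after removing a linear local background.

    The same start/stop limits should be used for a given peptide in every
    sample so that the integrated areas remain comparable.
    """
    t = np.asarray(time, dtype=float)
    y = np.asarray(intensity, dtype=float)
    left_bg = y[max(start - bg_points, 0):start].mean()
    right_bg = y[stop:stop + bg_points].mean()
    baseline = np.linspace(left_bg, right_bg, stop - start)
    signal = np.clip(y[start:stop] - baseline, 0.0, None)
    dt = np.diff(t[start:stop])
    return float(np.sum(0.5 * (signal[:-1] + signal[1:]) * dt))  # trapezoidal integration
```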
In many quantitation studies more than one experiment (i.e., replicates and/or multiple samples) is performed. This requires the matching of the peptides quantified in the different experiments. For successful matching of peptides, the retention time scales of all experiments have to be aligned, because there are always uncontrolled variations in the experimental conditions that affect the peptide retention times in a nonlinear manner. This alignment can be done by identifying peaks present in all experiments that can be used as landmarks. These peaks are matched across experiments using either their mass and retention time, or their identity as determined by tandem MS. A smooth function is fitted to the retention times of these landmarks and used for aligning the retention times of all quantified peptides. The residual difference in retention time for the landmarks can be used to estimate the uncertainty in the alignment.
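One way to implement this alignment is to fit a smooth function to the landmark retention times and apply it to all quantified peptides, using the landmark residuals as the alignment uncertainty; the sketch below does this with a smoothing spline, and the smoothing parameter is an assumed value.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_rt_alignment(rt_run, rt_reference, smoothing=1.0):
    """Fit a smooth mapping from one run's landmark retention times to the reference scale."""
    order = np.argsort(rt_run)
    x = np.asarray(rt_run, dtype=float)[order]
    y = np.asarray(rt_reference, dtype=float)[order]
    warp = UnivariateSpline(x, y, k=3, s=smoothing)
    uncertainty = float(np.std(y - warp(x)))  # residual spread of the landmarks
    return warp, uncertainty

# Usage: map all peptide retention times from a run onto the reference scale.
# warp, sigma = fit_rt_alignment(landmark_rt_run, landmark_rt_reference)
# aligned_rt = warp(peptide_rt_run)
```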
For some mass spectrometers, the m/z scale needs to be calibrated between experiments. This mass calibration can be done using the same landmarks as used for retention time alignment. When experiments are aligned in retention time and are mass calibrated, the quantified peptides can be matched within windows determined by the uncertainty in the retention time and the m/z.
The measured intensities of peptide peaks commonly vary from experiment to experiment in a global manner. It is therefore advisable to design experiments so that only a few of the quantified peptides have changes related to the hypothesis, and the majority of peptides change because of random variations in the experimental conditions. The randomly changing peptides can be used to normalize the overall intensity using either their median change in the intensity ratios or by fitting an intensity dependent smooth function to the measured intensity ratios.
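A minimal sketch of the simpler of these two options, dividing out the median log-ratio of peptides assumed not to change, is shown below.

```python
import numpy as np

def median_normalize(intensities_a, intensities_b):
    """Scale sample B so that the median peptide ratio between the samples is 1.

    Assumes that most matched peptides do not change between the samples, so
    the median log-ratio reflects only global, non-biological variation.
    """
    a = np.asarray(intensities_a, dtype=float)
    b = np.asarray(intensities_b, dtype=float)
    log_ratios = np.log2(a) - np.log2(b)
    correction = float(np.median(log_ratios))
    return b * (2.0 ** correction), correction
```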
Protein quantity can be estimated by measuring peptide quantities. There are, however, several factors that can make the estimates of protein quantity uncertain even when highly accurate peptide quantities have been obtained. Because only a few peptides are typically measured for a given protein, these peptides might not be sufficient to define all isoforms of the protein that are present in the sample – i.e., some of the peptide sequences might be shared with other proteins, making them only suitable for quantitating the group of proteins. A few peptides might also be modified, and the change in the amount of the modified and unmodified forms of the protein is often not the same. Despite these issues, a reasonable estimate of the protein quantity can often be obtained even when only a few of its peptides are quantified. When many peptides are observed for a given protein, it can even be possible to calculate the variation in quantity of several isoforms.
The significance of a measured change in quantity can be calculated if the distribution of random quantity changes (due to uncontrolled variation of experimental conditions) is known (Fig. 4a). This distribution can be obtained by analysis of technical and biological replicates. When the distribution of random quantity changes is known, the significance of a measured change in quantity can be calculated by integrating under the curve from the measured change in quantity to infinity and dividing this area by the area under the entire distribution of random changes. This value represents the probability that the measured quantity change was obtained from purely random variations, that is, the probability of rejecting the null hypothesis that there is no change in the experimental conditions. The distribution of random quantity changes is strongly dependent on the experimental conditions and the workflow that is chosen. For example, for label-free quantitation the distribution of random quantity changes depends on the number of replicates obtained (Fig. 4b–g). It is important to design quantitation experiments to minimize the width of the distribution of random quantity changes to allow for detection of small nonrandom changes.
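In code, this amounts to an empirical one-sided p-value computed against the null distribution of random changes measured from replicates; a minimal sketch follows, with the pseudocount a common convention rather than part of the original description.

```python
import numpy as np

def empirical_p_value(observed_change, null_changes):
    """Probability of a change at least as large as the observed one under random variation.

    null_changes : quantity changes (e.g., log-ratios) measured between technical
                   or biological replicates, i.e., the distribution of random changes.
    """
    null = np.asarray(null_changes, dtype=float)
    # Fraction of the null distribution at or beyond the observed change;
    # the +1 pseudocount keeps the estimate away from exactly zero.
    return float((np.sum(null >= observed_change) + 1) / (null.size + 1))
```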
This work was supported by funding provided by the National Institutes of Health Grants RR00862, RR022220, NS050276, and CA126485, the Carl Trygger foundation, and the Swedish research council.
1Alternative methods for quantitation search fragment mass spectra against a protein sequence collection and use the search results for quantitation. One method uses the number of different fragment mass spectra that identifies a peptide as a measure of its quantity (15). Another method calculates a measure that is based on the fraction of the protein sequence that the identified peptides cover (16). However, these alternative methods that are not based on peak integration are generally less accurate when only a few fragment spectra or peptides are observed for a given protein because of the limited statistics. On the other hand, they are less sensitive to interference and can often be more robust.
2There are many software packages available for quantitation. A few examples of freely available software are listed below:
| Name | Type | Location |
|---|---|---|
| ASAPratio (17) | ICAT, SILAC | http://tools.proteomecenter.org/wiki/index.php?title=Software:ASAPRatio |
| MaxQuant (18, 19) | SILAC | http://www.maxquant.org/ |
| MSQuant (20) | SILAC | http://msquant.sourceforge.net/ |
| Pview (21) | SILAC, label-free | http://compbio.cs.princeton.edu/pview/ |
| Quant (22) | iTRAQ | http://sourceforge.net/projects/protms/ |
| RAAMS (23) | 16O/18O | http://informatics.mayo.edu/svn/trunk/mprc/raams/index.html |
| Skyline (24) | MRM | http://proteome.gs.washington.edu/software.html |
3For a mass spectrum, let I(k) denote the measured intensity at point k, with 0 ≤ k ≤ N, where N is the total number of points in the spectrum. Peaks are detected by calculating, for each point l in the spectrum, the sum over the expected peak width w_l, S(l) = Σ_{k = l − w_l/2}^{l + w_l/2} I(k), and detecting local maxima in S(l). In cases where there is sufficient noise in the spectrum the signal-to-noise ratio is calculated by taking the ratio of the root mean square (RMS) deviation of the intensities over the peak, √[(1/n) Σ_k (I(k) − Î)²], where Î is the mean intensity over the peak and n is the number of points in it, and the RMS deviation of the intensities in a nearby region where there are no peaks (see Note 6).
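A direct implementation of this scheme could look like the following sketch; in practice the window width would be set from the instrument's known peak width at each m/z, and the slice boundaries are assumptions.

```python
import numpy as np

def detect_peaks(intensity, width):
    """Candidate peak centers as local maxima of the moving sum S(l) over the expected width."""
    I = np.asarray(intensity, dtype=float)
    S = np.convolve(I, np.ones(int(width)), mode="same")  # S(l) = sum of I over the window
    centers = [l for l in range(1, len(S) - 1) if S[l] >= S[l - 1] and S[l] > S[l + 1]]
    return centers, S

def signal_to_noise(intensity, peak_slice, noise_slice):
    """Ratio of the RMS deviation over the peak to that of a nearby peak-free region."""
    I = np.asarray(intensity, dtype=float)
    rms = lambda seg: float(np.sqrt(np.mean((seg - seg.mean()) ** 2)))
    return rms(I[peak_slice]) / rms(I[noise_slice])
```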
4Peptides are observed as clusters of peaks in mass spectrometry, because of the presence of small amounts of stable heavy isotopes in nature (e.g., 0.015% 2H, 1.11% 13C, 0.366% 15N, 0.038% 17O, 0.200% 18O, 0.75% 33S, 4.21% 34S, 0.02% 36S). The intensities of the isotope distribution are calculated accurately by including all possible isotopes. The largest effect comes from 13C, and a first-order estimate of the relative peak intensities is given by the binomial distribution T_m = [n!/(m!(n − m)!)] p^m (1 − p)^(n − m), where T_m is the intensity of peak m in the distribution, m is the number of 13C atoms, n the total number of carbon atoms in the peptide, and p is the probability for 13C (i.e., 1.11%). The isotope distribution of peptides is strongly dependent on the peptide mass because the number of atoms increases with mass, and therefore the probability increases for having one or more of the naturally occurring heavy isotopes.
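This first-order estimate is simply a binomial distribution in the number of 13C atoms, as the short example below illustrates for an assumed peptide with roughly 90 carbon atoms.

```python
from math import comb

def carbon_isotope_estimate(n_carbons, n_peaks=5, p_13c=0.0111):
    """First-order isotope intensities T_m from the binomial distribution over 13C atoms."""
    return [comb(n_carbons, m) * p_13c ** m * (1.0 - p_13c) ** (n_carbons - m)
            for m in range(n_peaks)]

# Example: a peptide of roughly 2 kDa containing about 90 carbon atoms.
print([round(t, 3) for t in carbon_isotope_estimate(90)])
# -> approximately [0.366, 0.370, 0.185, 0.061, 0.015]
```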
5The normalized dot product between the measured intensities, I = (I1, I2,…, In), and theoretical intensities, T = (T1, T2,…, Tn), of the isotope distribution is given by (I · T)/(|I||T|) = Σ_i I_i T_i / √[(Σ_i I_i²)(Σ_i T_i²)]. The range of the normalized dot product is from −1 to 1. If the measured and theoretical intensities are identical the resulting dot product is 1 and any differences between them will result in lower values of the dot product.
6Low-frequency background can be removed by fitting a smooth curve to the regions of the mass spectrum where there are no peaks. This smoothing can, for example, be achieved by applying a very wide and strong smoothing function to the entire spectrum, which will result in a smooth function slightly higher than the background. Subsequently, points in the original spectrum that are far above this smooth curve are removed (i.e., the peaks). The smoothing procedure is repeated, this time without including the peaks, to produce a smooth function that will closely follow the background of the spectrum (25).
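A minimal sketch of this iterative scheme is shown below; the smoothing width, clipping threshold, and iteration count are assumed values for illustration.

```python
import numpy as np

def estimate_background(intensity, width=101, n_iter=5, clip_sigma=2.0):
    """Estimate a smooth, low-frequency background under a mass spectrum.

    Repeatedly smooth the spectrum with a wide moving average, then replace
    points far above the current smooth curve (the peaks) with the curve
    itself before re-smoothing, so the final curve follows the background.
    """
    work = np.asarray(intensity, dtype=float).copy()
    kernel = np.ones(width) / width
    background = np.convolve(work, kernel, mode="same")
    for _ in range(n_iter):
        residual = work - background
        sigma = np.std(residual)
        work = np.where(residual > clip_sigma * sigma, background, work)
        background = np.convolve(work, kernel, mode="same")
    return background
```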
Date:
April 30, 2019 03:57 PM
Author: Darrell Miller ([email protected])
Subject: Reduce your risk of cirrhosis and liver cancer, researchersdiscover an effective natural solution
Cirrhosis and liver cancer are common diseases, and the risk of developing them can often be reduced. There are many ways to lower the chance of developing these diseases over the course of a lifetime, such as moderate physical activity. Resistance training in particular can help to alleviate symptoms and reduce the likelihood of developing these diseases because it can lower fat levels. Exercise can also lower cholesterol and help manage weight. In sum, healthy lifestyle choices and changes can reduce the risk of liver cancer and cirrhosis, ultimately supporting a longer, healthier life.
Key Takeaways:
- Nonalcoholic fatty liver disease is estimated to affect about thirty percent of Americans, which translates to about a hundred million people.
- Recent research from the University of Liverpool, UK, has shown that physical exercise, particularly resistance training, can reduce the risk of having liver cancer.
- The liver is an important organ that handles much of the body's detoxification, and the study showed that exercise was especially beneficial for obese people.
"At the end of the study period, both liver fat and cholesterol levels were significantly lower." | https://vitanetonline.com/forums/1/Thread/5808 |
A study of how much energy athletes expend as they undertake some of the world’s longest and most grueling sporting events has found that there is an absolute ceiling to how many calories we can burn, no matter how much we eat. The research, reported in Science Advances, showed that whatever the physical activity, and whether it lasted for days, weeks, or months, humans can only burn calories at about 2.5 times their resting metabolic rate. Beyond this threshold, the body starts to break down its own tissues to make up for calorific deficit.
“This defines the realm of what’s possible for humans,” commented research co-lead Herman Pontzer, PhD, an associate professor of evolutionary anthropology at Duke University. The investigators point to the digestive tract as the limiting factor in energy burn, and say the same physiological limits that impact on endurance athletes might also constrain aspects of reproduction, such as pregnancy duration, or how large babies might grow in utero. Pontzer and colleagues reported their findings in Science Advances, in a paper titled, “Extreme events reveal an alimentary limit on sustained maximal energy expenditure.”
There’s considerable scientific interest in studying the physiological limits on energy expenditure—which is often expressed as a multiple of basal metabolic rate (BMR) and known as “metabolic scope”—to help determine what the maximum sustained metabolic scope (SusMS) is for humans and other species, the authors explained. “The limits on maximum sustained energy expenditure are unclear but are of interest because they constrain reproduction, thermoregulation, and physical activity.”
To investigate this in the context of human endurance the researchers, headed by Pontzer and John Speakman, PhD, a professor at the University of Aberdeen and the Chinese Academy of Sciences, measured how many calories were burned by athletes in the 2015 Race Across the USA (RAUSA). Participants in the 3,000-mile race, which stretches from California to Washington, D.C., run six marathons a week for 14–20 weeks. The team then analyzed the data they derived from the RAUSA athletes in parallel with existing data available from other endurance or energy-intensive activities, including triathlons, the Tour de France, arctic trekking, and pregnancy, with a view to quantifying the relationship between duration of activity and maximum measured metabolic scope.
While results demonstrated a wide variation in metabolic rates, dependent on the length of the endurance event, they also showed that prolonged high metabolic rates were accompanied by reduced total energy expenditure. The investigators had measured total energy expenditure (TEE) and resting metabolic rate (RMR) in the RAUSA athletes at three time-points: before the start of the race; then during the first week (Week 1); and again during the final week (Week 20). They discovered that while the participants’ resting metabolic rate remained unchanged over the course of the race, their total energy expenditure decreased between Week 1 and Week 20, resulting in a decreased metabolic scope.
By the end of the race, the athletes were burning 600 fewer calories per day than expected based on their mileage. This suggested that the body can effectively shift down a metabolic gear to help sustain such high levels of endurance activity. “Whatever the mechanisms, the reduction in TEE and metabolic scope may have been crucial in enabling them to complete the run,” the investigators noted.
The resulting L-shaped graph of metabolic scope over time illustrated how athletes’ energy expenditure started out high, but then dropped and flattened out at 2.5 times their basal metabolic rate. “It’s a great example of constrained energy expenditure, where the body is limited in its ability to maintain extremely high levels of energy expenditure for an extended period of time,” said co-author Caitlin Thurber, PhD. “You can sprint for 100 meters, but you can jog for miles, right? That’s also true here,” Pontzer said.
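To make the arithmetic behind “metabolic scope” concrete, here is a minimal sketch of the ratio the researchers tracked. The TEE and RMR figures below are illustrative assumptions chosen only to show the direction of the effect; they are not values reported for the RAUSA athletes.

```python
# Illustrative sketch: metabolic scope is total energy expenditure (TEE)
# divided by resting metabolic rate (RMR), i.e., expenditure expressed
# as a multiple of the resting rate. All numbers below are assumed.

def metabolic_scope(tee_kcal_per_day: float, rmr_kcal_per_day: float) -> float:
    """Return energy expenditure as a multiple of the resting metabolic rate."""
    return tee_kcal_per_day / rmr_kcal_per_day

rmr = 1600.0          # kcal/day, assumed roughly constant over the race
tee_week_1 = 6200.0   # kcal/day, hypothetical early-race expenditure
tee_week_20 = 5600.0  # kcal/day, hypothetical final-week expenditure

print(f"Week 1 scope:  {metabolic_scope(tee_week_1, rmr):.2f} x RMR")
print(f"Week 20 scope: {metabolic_scope(tee_week_20, rmr):.2f} x RMR")
# The study's central claim is that, as event duration stretches from days
# into weeks and months, the sustainable scope flattens out near ~2.5 x RMR.
```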
The same shaped graph was obtained for all of the endurance events, whether athletes were sled-hauling in freezing Antarctic conditions, or cycling up mountains in the summer Tour de France. This consistency under very different environmental conditions challenges one previously mooted theory that endurance is linked with the ability to regulate body temperature.
Interestingly, the maximum sustainable energy expenditure demonstrated by endurance athletes was only slightly higher than metabolic rates sustained by pregnant and lactating women, which peak at about 2.2 times BMR, the authors noted. This indicates that the same physiological limits that put a top limit on ultimate endurance capabilities also underpin aspects of reproduction, such as fetal growth in utero.
The authors speculate, from their endurance event data and results from overfeeding studies, that the limiting factor on calorie burning is the digestive system’s ability to take up and process nutrients. “The agreement of weight change data from overfeeding studies with those from endurance studies suggests that a common alimentary limit to energy uptake constrains a broad range of activity,” they wrote. | https://www.genengnews.com/news/ceiling-to-human-endurance-may-be-due-to-limits-of-gut-system/ |
Illustration and description of the Moroccan Locust, its biology, distribution within the former USSR, ecology and economic significance as an ...
Tahlequah Daily Press
While the first trip was to Six Flags Over Texas to "study" roller coasters and human physiology, most of the spring trips have had a marine biology studies theme. "When I was younger, Sunday evening TV choices were either educational or family ...
Atlas Obscura
Smithsonian Conservation Biology Institute. This sprawling farm was once a Cold War-era hideaway for the nation's top diplomats. Suggest an Edit · Add A Photo. Been Here? 0. Want to Visit? 1. Add Smithsonian Conservation Biology Institute to a New List ...
Forbes
Truly, 2017 is The Big Year for wonderful popular science books about biology. It's taken me one agonizing week to narrow down my choices for the best biology books of 2017 into a stack that can be purchased and carried home and read. This list could ...
Calaveras Enterprise
Maybe that is why the recent trip to Santa Cruz proved to be so successful. Already filled to the brim with students who show particular interest in marine biology and related fields, Castro said the trip was designed to get students into the field and ...
The Guardian
When she was just 12 years old, an impressionable Cathy Lucas, now associate professor in marine biology at the University of Southampton, met Sir David Attenborough. He'd come to talk to students about his 1979 landmark wildlife series Life on Earth ...
Information from Wikipedia on the lapwings, their description and systematics, and a list of species.
Information of its habitat and geography, adaptation, nutrition, reproduction and interaction with other species.
Article describing these birds and their behavior.
Identification tips, range maps, life history, and taxonomy chart for this bird.
A guide including descriptions, distributions, breeding calls, identification key, and a glossary.
Photograph plus facts about the physical traits, reproduction, and habitat of this bird.
Photographs of flies in this family found in North America, provided by BugGuide.
Photograph and information on this bird and its distribution in Kenya and worldwide.
Photographs of many flies in this family.
Describes characteristics of three subspecies. Includes facts on distribution, behavior, and ecology.
Information and a photograph of the sheep ked. Click to enlarge.
Physical description and general information. Special anatomical, physiological or behavioral adaptations.
Phylogeny and identification of Aleocharine beetles (Coleoptera: Staphylinidae).
The two extant species of two-toed sloths are Linnaeus's (Choloepus didactylus) and Hoffmann's two-toed sloth (Choloepus hoffmanni). They are the ...
Art by wildlife artist Larry Chandler plus news and extensive links to other resources.
Research project by Hannah Sellnow on the great barracuda including its classification, habitat, life history, nutrition and interactions with other ...
Information on research into these nematodes, which are parasitic on insects, with the aim of using them for biological control.
Provides a description of this species, with photographs of male and female, and information on its identification, habits and habitat.
Photograph and information on this species. | http://www.biologydir.com/flora-and-fauna-animalia-cat-95-5.html |
A 35-year-old female patient is admitted to the hospital with pneumonia. She was recently diagnosed with end-stage renal disease and is on maintenance dialysis through a tunneled right subclavian dialysis catheter. Hospital course is complicated by respiratory failure and acute respiratory distress syndrome requiring mechanical ventilation. Due to progressive hypoxia, VV ECMO is instituted via bilateral femoral cannulas. Mechanical ventilation is reduced to resting ventilation with a low FiO2, tidal volume, and respiratory rate. Twelve hours later the patient has a drop in arterial oxygen saturation from her baseline of 94% to 82%. The oxygen saturation of blood drawn from the femoral venous line, which is pre-oxygenator, has increased during the same time from 65% to 80%.
What is the most appropriate next step in management?
Correct Answer: B
Increase in oxygen saturation of the pre-oxygenator venous blood with a decrease in arterial oxygen saturation raises the concern for clinically significant recirculation in this patient and hence requires radiographic evaluation of cannula position to verify that the two lumens are separate from each other (B). Recirculation is a phenomenon unique to VV ECMO wherein the oxygenated blood from the return cannula reenters the ECMO circuit through the drainage cannula before reaching the systemic circulation. The distance between the ports of the drainage and return cannulas influences the amount of recirculation. Recirculation is also affected by the type of cannulation for VV ECMO. Femoro-femoral and femoral-internal jugular configurations carry higher risks of significant recirculation when compared to a dual-lumen configuration.
Increase in pump speed and ECMO flow rate have been shown to be associated with a higher fraction of recirculation (A). Oxygenator failure is fairly unlikely to occur just 12 hours after initiation of support but can be easily ruled out with a post-oxygenator ABG (C). Adding another parallel circuit will not provide any additional benefit if the cannulas are malpositioned (D). Other factors that may influence recirculation include changes in intrathoracic and intra-cardiac pressures and changes in patient positioning.
Clinically significant recirculation can lead to hypoxia and subsequent end-organ damage. Management of new-onset recirculation involves radiographic or ultrasound evaluation to check the positions of the drainage and return cannulas. Increasing the distance between the two cannulas by withdrawing the drainage cannula can reduce recirculation. Other strategies to reduce recirculation include addition of a new drainage cannula, use of a bicaval dual-lumen cannula, or manipulation of the reinfusion cannula to direct the return jet toward the tricuspid valve.
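As a rough illustration of how recirculation can be quantified at the bedside, the sketch below applies one commonly cited approximation based on pre- and post-oxygenator saturations and the patient’s true mixed venous saturation. The post-oxygenator and mixed venous values are assumptions added for illustration (neither is given in the vignette), and in practice true mixed venous saturation is difficult to measure on VV ECMO.

```python
# Hedged sketch: a commonly used approximation of the recirculation fraction is
#     R = (SpreO2 - SvO2) / (SpostO2 - SvO2)
# where SpreO2 is the pre-oxygenator (drainage) saturation, SpostO2 the
# post-oxygenator saturation, and SvO2 the true mixed venous saturation.

def recirculation_fraction(pre_ox_sat: float,
                           post_ox_sat: float,
                           mixed_venous_sat: float) -> float:
    """Estimated fraction of returned blood re-captured by the drainage cannula."""
    return (pre_ox_sat - mixed_venous_sat) / (post_ox_sat - mixed_venous_sat)

# Pre-oxygenator saturation of 80% is taken from the vignette; the
# post-oxygenator saturation (100%) and SvO2 (60%) are assumed values.
r = recirculation_fraction(pre_ox_sat=0.80, post_ox_sat=1.00, mixed_venous_sat=0.60)
print(f"Estimated recirculation fraction: {r:.0%}")  # roughly 50%
```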
References:
A 22-year-old male is admitted to the ICU with acute respiratory distress syndrome secondary to pneumonia. The clinical course is complicated by progressive hypoxemia, which does not improve with prone ventilation. VV ECMO is instituted with a 31 Fr right internal jugular double-lumen cannula, and the pump flow is at 4.5 L/min. The patient has a HR of 90/min, BP of 110/70 mm Hg with a norepinephrine infusion at 0.05 µg/kg/min, and a SpO2 of 90%. One hour later, the ECMO specialist mentions “chugging” in the drainage circuit with low inlet pressures. The ECMO flow has reduced to 3 L/min. There is a drop in SpO2 to 84%, and the norepinephrine requirement has increased to 0.1 µg/kg/min. An arterial blood sample sent to the critical care laboratory reveals:
The most appropriate next step in management is to:
Correct Answer: D
“Chugging” or “chattering” of the ECMO circuit refers to back and forth swinging of the drainage and return tubes. This occurs because of fluctuations in venous drainage pressures. Hypovolemia and high pump speeds are two common scenarios where chugging can occur. In both cases, increased negative pressure at the venous inflow port of the drainage cannula leads to a temporary venous collapse. This causes low flows through the ECMO circuit even at high pump speeds, and hence increasing ECMO pump flows will not help (B). The normal negative pressure in the drainage cannula is between -50 and -80 mm Hg. Pressures lower than -100 mm Hg are abnormal and are seen during chugging episodes. The hypovolemia can be treated by administering a fluid bolus (D).
In the presence of chugging, the patient should be evaluated for signs of low intravascular volume. Tachycardia and hypotension may be present requiring vasopressor initiation or up titration. There might be desaturation due to decreased ECMO flows. Management involves administration of fluid bolus or blood transfusion if hematocrit is low. Inotropes are not usually required if baseline cardiac function is normal (A). Because the hematocrit is normal, the patient does not need a blood transfusion (C). Point of care ultrasound can be utilized to guide hemodynamic management. The ECMO pump speed can be reduced temporarily to decrease the flows to avoid chugging and subsequent venous suck down.
Low ECMO flows despite high pump speeds can also be encountered when there is some obstruction in the circuit. Obstruction could be due to kinking of the tubes or due to the presence of blood clots in the oxygenator. Isolated postoxygenator tubing chugging can be due to high flows and unrelated to hypovolemia. It is also important to rule out malposition of the cannulas.
A 42-year-old female is admitted to the ICU after a motor vehicle accident. She develops ARDS secondary to lung contusions and is initiated on VV ECMO. The clinical course is complicated by worsening acute kidney injury. The latest laboratory workup reveals acidosis with a pH of 7.18 and hyperkalemia of 6.5 mEq/L. Sodium bicarbonate, calcium gluconate, and insulin-dextrose are administered. Although adding on a continuous renal replacement therapy circuit to the ECMO circuit, the patient develops a short run of ventricular tachycardia, which quickly degenerates into asystole.
What is the immediate next step in managing this patient?
Correct Answer: A
VV ECMO provides pulmonary support with little cardiac support. The patient on VV ECMO is completely dependent on his native cardiac function to maintain cardiac output and hemodynamics. Any decrease in cardiac or hemodynamic function in such patients should be supported in the same way as a patient who is not on ECMO. Therefore, in the event of a cardiac arrest, it is prudent to follow the advanced cardiac life support algorithm and initiate CPR (A). In this patient it would mean initiating high-quality chest compressions (B), as well as administering intravenous epinephrine. Because asystole is not a shockable rhythm, defibrillation is unlikely to help in this case (D).
During a cardiac arrest, there is no cardiac output, and this impairs the flows through VV ECMO. But with high-quality chest compressions, it is possible to run the pump at low flows, which may be adequate to maintain oxygenation. The FiO2 on the ventilator can be turned up to 1.0 as a safety precaution to protect against hypoxia in the event of inadequate pump flows. Institution of VA ECMO is recommended in case of refractory cardiac arrest when there is a strong suspicion for a reversible cause of cardiac arrest. The survival rates and neurological outcomes after ECPR are influenced by the time to initiation of VA ECMO after cardiac arrest. It seems reasonable to consider ECPR after 10 minutes of high-quality conventional CPR in a patient with a potentially reversible cause of cardiac arrest. In this patient on VV ECMO, conversion to VA ECMO by arterial cannulation should be considered if initial CPR fails to achieve return of spontaneous circulation (C). | https://your-doctor.net/quiz/quiz.php?page=2&records_number=5&id=184 |
Ofili races to new African 200m indoor record
....Meets Olympic qualifying standard.
Nigeria’s Favour Ofili sped to a new Nigerian and African 200m indoor record of 22.75 seconds Saturday night to win at the SEC Indoor Championship at the Randal Tyson Indoor Centre in Fayetteville, Arkansas, USA.
The 18-year-old broke the 22.80 seconds record set by Ivory Coast’s Muriel Ahoure in 2009 and made history as the first Nigerian sprinter to break 23 seconds in the half lap indoors.
The World Championship 400m semi-finalist served notice of her huge talent in the semi-finals when she clocked a then personal best of 23.15 seconds to qualify for Saturday’s final.
That performance pushed her to number five on the Nigerian all-time list and number six in Africa, before she sped into the record books in the final to become number one on both the Nigerian and African all-time lists.
Prior to her incredible run, Regina George’s 23.00 seconds set in 2013 in Fayetteville had looked likely to stand for at least another year: Blessing Okagbare came one hundredth of a second short of equalling it earlier this month, also in Fayetteville, with her 23.01 seconds performance, before Ofili chose the same venue to obliterate it from the record books.
Ofili has thus surpassed the 22.80 seconds qualification standard set for the 200m at the Tokyo Olympics taking place this summer. It was generally a good weekend of track and field for Nigerian athletes in the National Collegiate Athletic Association (NCAA), with Ruth Usoro also surpassing the 14.32m standard for the triple jump after hopping, stepping and jumping to a new 14.38m Nigerian record, surpassing by 10cm the 14.28m she set earlier this month.
Usoro has not only become the first Nigerian woman to meet the qualification standard for the Olympics in the event but also the first Nigerian woman to meet the standard for both the triple and long jumps after her 6.82m feat a couple of days ago saw her hit the mark for Tokyo.
Shot putter Isaac Odugbesan was also in great form as he hauled a massive 20.50m personal best to win the event at the 2021 SEC Indoor Championships. He is now the third Nigerian to throw 20m or more in the shot put indoors, after Stephen Mozia and Chukwuebuka Enekwechi.
Often, human opinions, judgments, and estimates amount to error-prone measurements. Simple math dictates that when people are attuned to reality at all, then averaging their judgments will yield an estimate that is more accurate than its individual components are on average. Galton (1907) illustrated this simple but profound principle by collecting estimates of the weight of an ox at a country fair and showing that the average estimate was more accurate than the individual averages on average. Over the years, a number of replications of this finding were published, showing that the content domain does not matter much. Judging the temperature in a room has been a favorite, or estimating the year of some historical event (cf. Larrick, Mannes, & Soll, 2012). Robyn Dawes (1977) once tongue-in-cheekishly suggested that a person’s height could be measured by averaging judgments obtained from visual inspection. The result is the same. At worst, averaging does not yield an improvement, but often it does. So why not do it?
Recently, we have learned that individual people can take advantage of the averaging principle by producing more than one estimate of whatever it is they are estimating. Vul & Pashler (2008) showed that averaging a person’s first and second estimates yields an advantage. Herzog & Hertwig (2009) replicated this result and showed that the accuracy increment can be made larger by urging individuals to actively think differently when generating their second estimate. In an earlier post, we took a look at their data and some of our own to show how this works.
Today we offer a small replication of the averaging effect in the simplest of circumstances: estimating the number of M&Ms in a transparent container. We put 1,379 of those colorful beans in a jar. We know this number because we counted the beans. 16 students in our psychology class made an estimate of that number from simple visual inspection and guessing. We then gave them the Oliver Cromwell exhortation to “consider, in the bowels of Christ, that they might be mistaken,” and to guess again. They did, and we averaged the two estimates for each person.
Here are the results. The first estimates had a mean of 880 and a standard deviation of 641. That is, on average an estimate was 641 points away from the mean. For the second estimate, the respective numbers were M = 793 and SD = 478. For the averages of the first and the second estimate the numbers were M = 834 and SD = 545. Notice that the difference between the true value and the mean of the first estimates is 1379 – 880 = 499. In contrast, the difference between the true value and an individual’s first estimate is 746 on average. There you have the traditional wisdom of the crowd effect. The average of the judgments is more accurate than individual judgments are on average. In standard units, the size of this effect is .85, which is large. We calculated this effect size by subtracting the value 499 from each individual error, averaging these differences and dividing the result by the standard deviation.
How does the within-the-head wisdom of the crowd compare? This effect is smaller, to wit, .34 standard units. We calculated this effect size by averaging the differences between the error obtained with the first estimate and the error obtained with the average of the first and second estimate, and dividing this average by its corresponding standard deviation. We also ran a t test for statistical significance. This did not force a rejection of the null hypothesis, but we still sleep at night. The sample size was small, and nonetheless, there was an effect of the expected size and sign.
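For readers who want to see the arithmetic end to end, here is a minimal sketch of both comparisons. Because the post reports only summary statistics, the estimates below are simulated; the simulation also ignores the fact that a person’s first and second guesses are correlated in real data, so the printed numbers will not match the classroom results exactly.

```python
# Toy reconstruction of the two "wisdom of the crowd" comparisons.
# All individual estimates are simulated; only the summary statistics
# quoted above (true value, means, standard deviations) are real.
import numpy as np

rng = np.random.default_rng(0)
true_value = 1379
n_students = 16

first = rng.normal(880, 641, size=n_students).clip(min=1)   # simulated first guesses
second = rng.normal(793, 478, size=n_students).clip(min=1)  # simulated second guesses
within_person_avg = (first + second) / 2

# Classic crowd effect: error of the group mean vs. the average individual error.
crowd_error = abs(true_value - first.mean())
mean_individual_error = np.abs(true_value - first).mean()

# Within-person effect: first guess vs. average of first and second guess,
# expressed in standard units as described in the text.
improvement = np.abs(true_value - first) - np.abs(true_value - within_person_avg)
effect_size = improvement.mean() / improvement.std(ddof=1)

print(crowd_error, mean_individual_error, effect_size)
```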
If this works, why not do it? What’s keeping you from giving yourself a second opinion, average it with the first, and harvest the benefits? The answer is that the method, though simple, seems weird to most people. The idea that one’s first estimate of X contains some error is not too hard to swallow, but the idea that some portion of this error is random, is. When people make an estimate of X (e.g., the number of lovers Aunt Polly had in her day) they make it with the conviction that the estimate they produce is the best they can do. This must be so by definition – were it not so, people would have come up with a different estimate in the first place. Vul, Herzog and collaborators have shown that people are not quite aware of the fluctuations in their own judgments (Vul), or that if they are, they still fail to comprehend that some of the variance is random (Herzog). One can elicit multiple judgments from individuals, but one may not expect people to do it on their own.
The willingness and ability to generate different opinions, judgments, estimates bear the stamp of creativity. To be creative, judgments must break out of a mold, a mindset. If initial judgments are the box, then boldly discrepant second judgments are outside of it. Intuitively, many people might feel that going against one’s own initial judgment is irrational and irresponsibly risk seeking. In fact, the opposite is true. As aggregation increases accuracy, so-called “correspondence rationality” is enhanced and the risk of being wrong is reduced.
As the lessons of Galton, Dawes, Herzog and others are beginning to sink in, we may be seeing interesting new applications. Consider the wisdom of the crowd in the context of goals: How many M&Ms would you like to eat (how many square feet should your house be; how many lovers would you like to love; how many mountains do you want to climb?). Are you sure your first estimate is the end of wisdom? Guess again and split the difference. There may not be an accuracy benefit waiting to be calculated, but perhaps there is an adaptiveness benefit. Again: how many children do you want to have? Now seriously, how many children do you want to have?
Dawes, R. M. (1977). Suppose we measured height with rating scales instead of rulers. Applied Psychological Measurement, 1, 267-273.
Galton, F. (1907). Vox populi. Nature, 75, 450-451.
Herzog, S. M., & Hertwig, R. (2009). The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping. Psychological Science, 20, 231–237.
Larrick, R. P., Mannes, A. E., & Soll, J. B. (2012). The social psychology of the wisdom of crowds. In J. I. Krueger (ed.), Social judgment and decision making (pp. 227-242). New York: Psychology Press.
Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19, 645–647. | https://www.psychologytoday.com/blog/one-among-many/201303/guess-guess-again-average |
The mythical fishing port of Saint-Tropez speaks for itself. Even though the small authentic port has given way to luxury shops and very prestigious yachts, the Provençal village has retained its magical charm with a market in the Place des Lices, bistros, boules players, a fish market and extremely well-protected surrounding pine trees.
There is a wide range of beaches for all tastes: from the authentic beaches at La Moutte and Les Salins to the beaches of Pampelonne with numerous restaurants including small coves accessible by boat towards Maison Blanche and Cap Camarat.
Lastly, Saint-Tropez would not be so legendary without its characteristic party spirit and guaranteed wild nights for the jet set, always present from aperitif time to dawn.
Highway to hell
It’s the bad idea that just won’t die: The Active Cyber Defense Certainty (ACDC) Act. Earlier this month Representatives Tom Graves (R-Ga.) and Kyrsten Sinema (D-Ariz.) issued an updated version of the proposed bill that would allow companies to take offensive action if a “persistent” and unauthorized cyber intrusion is identified. The bill’s stated aim is to counter some of the restrictions placed on companies by the antiquated and contentious Computer Fraud and Abuse Act (CFAA) of 1984, and to empower companies that have been victims of cyber crime to aid law enforcement in fighting cyber fraud and “related cyber-enabled crimes [that] pose a severe threat to the national security and economic vitality of the United States.” It’s a noble cause, to be certain, but the reality of the bill leaves ample room for interpretation and does not “[clarify] the type of tools and techniques that defenders can use that exceed the boundaries of their own computer network,” as promised. In fact, the language in the bill is so vague it could, theoretically, be used to prove that a victim company acted recklessly when actively “cyber defending” its own computer networks.
Living easy, living free
Before we go much further, though, let’s address the issue of active cyber defense, which is what the bill supposes to aid. In the security community, many have dubbed the ACDC Act the “hacking back” bill, but as Ed Moyle of Security Curve wrote so eloquently in his blog post, “Hack-Back is NOT Active Defense,” hacking back is…well, not the same as active defense. To summarize, active defense uses techniques and tools like beaconing, honeypots, and client hooks to catch criminals or would-be criminals. Hacking back would require the victim organization (who becomes the aggressor, since it’s now doing the attacking??) to gain unauthorized access to the organization that breached its network(s). The ACDC Act specifically lays out that acceptable active cyber defense measures include:
“Accessing without authorization the computer of the attacker of the defender’s own network to gather information in order to:
- Establish attribution of criminal activity to share with law enforcement and other United States Government agencies responsible for cybersecurity
- Disrupt continued unauthorized activity against the defender’s own networks
- Monitor the behavior of an attacker to assist in developing future intrusion prevention or cyber defense techniques.”
Importantly, the bill then adds that any destruction, modification, or removal of information—even if the information found by the defender is its own—is prohibited. The defender-turned-aggressor may not cause any harm, install backdoors or remote monitoring capabilities, or disrupt the attacker-now-attacked organization’s systems or data. Tricky, isn’t it?
Season ticket on a one-way ride
For purposes of this post, I won’t delve into the perils of hacking back, the challenges and inaccuracy of adversary attribution, the fact that cyber crime knows no geographical boundaries (the bill only applies to US companies accessing attacker networks in the US), how easy it is for criminals to make traffic appear to stem from innocent third-party networks/IP addresses, etc. This skew has been covered in many, many other articles and blog posts.
Instead, the rest of this post will be dedicated to why this bill is not only unnecessary—because active defense isn’t illegal, and for the reasons stated in the previous paragraph—but also how passing it could cause adverse effects.
As a community, information security is attracted to what’s new, what’s exciting. Of course we are! This is a generalization that could apply to any population (except, maybe, accountants??). The problem with this in security, though, is that when security practitioners are hyper focused on zero-day exploits, active malware variants, widespread attacks (e.g., Mirai, WannaCry, the Experian breach), etc. basic blocking and tackling—security fundamentals—are discarded in favor of research into emerging trends and live incidents. Security teams, on the whole, struggle with attending to many of the things that would keep their organizations more secure: better password policies/practices, implementing 2- or multifactor authentication, up-to-date asset inventories, encryption of sensitive data, patch management, vulnerability testing, etc. Though the security basics are well known and proven to mitigate incidents, they don’t incite excitement. The ability to go after the bad guys? Now that’s exciting!
Asking nothing, leave me be
Taking a step back, many security basics are typically pre-incident activities. Once an incident is discovered, patching a vulnerable system or changing employees’ passwords won’t stop what has already happened (you can’t change the past, sadly). However, covering the basics during an incident might minimize additional damage and reduce the possibility of future incidents. For argument’s sake, let’s remove “prevent” and “detect” processes from the equation and concentrate on response, which is when the ACDC Act would come into play.
When incident response is needed, the organization should execute the incident response plan (which was obviously developed and tested pre-incident…). At this point, the security team’s focus should be on containing the incident and assisting with forensics, recovery, and restoration. Now, assigned duties will vary depending on an organization’s resources, but up to 82% of companies lament a lack of cybersecurity staff. What, then, will happen if the already-stretched security team gets the green light to hack back? Incident response, in and of itself, is a security “basic” and should receive the highest level of attention when required, like during an incident. If security teams have the choice of handling basics or applying offensive measures, what will happen to the basics? Probably what already happens to the basics—they’ll take a back seat to other, more interesting things, like trying to find the adversary and monitor his/her activities.
Taking everything in my stride
The ACDC Act is a slippery slope for many reasons, but fundamental security is one area earning short shrift in this discussion. As it often does. Unless an organization has sufficient resources to both respond to an incident and find the adversary, response should always be the priority. Could organizations create policies that dictate, “IR first, ACDC after”? Sure, but the most compelling activity will always prevail, if history has taught us anything. While it’s encouraging to see elected officials taking cybersecurity seriously, the ACDC Act is unlikely to do more good than harm, at least until the government learns the nuances of what they’re trying to solve.
Attend InfoSec World 2018 in Orlando, Florida, March 19-21, 2018 to try your hand at an incident response tabletop exercise. Work with colleagues in a mock setting that will prepare you for a real-life incident in "Beware the Ransomware." | https://www.scmagazine.com/editorial/news/compliance/the-acdc-act-would-take-defenders-eyes-off-real-cyber-defense |
Bacteria hijack an immune signaling system to live safely in our guts
How do our bodies establish equilibrium between our immune systems and the …
Our immune system operates under the basic premise that "self" is different from "non-self." Its primary function lies in distinguishing between these entities, leaving the former alone while attacking the latter. Yet we now know that our guts are home to populations of bacterial cells so vast that they outnumber our own cells, and that these microbiota are essential to our own survival.
As a recent study in Nature Immunology notes, "An equilibrium is established between the microbiota and the immune system that is fundamental to intestinal homeostasis." How does the immune system achieve this equilibrium, neither overreacting and attacking the symbiotic bacteria nor being lax and allowing pathogens to get through? It turns out that our gut bacteria manipulate the immune system to keep things from getting out of hand.
Like many stories of immune regulation, this one is a tale of many interleukins (ILs). Interleukins are a subset of cytokines, signaling molecules used by the immune system to control processes such as inflammation and the growth and differentiation of different classes of immune cells. IL-22 is known to be important in defense, both ridding the intestines of bacterial pathogens and protecting the colon from inflammation.
IL-22 is produced by the subset of T cells defined by their expression of IL-17, known as TH17 cells, as well as by innate lymphoid cells. Sawa et al. report that in the intestine, most of the IL-22 is produced by a specific subset of innate lymphoid cells that live there, and not TH17 cells.
Microbiota can repress this expression of IL-22 by inducing the expression of IL-25 in the epithelial cells lining the walls of the intestine. The researchers deduced this because IL-22 expression goes down in mice after weaning, when microbial colonization of the intestine dramatically increases. When adult mice were treated with antibiotics, IL-22 production went up again. IL-22 production also increased during inflammation.
Microbiota also induce the generation of TH17 cells and, even though these normally make IL-22, this induction further depresses its production. The TH17 cells ended up competing with the innate lymphoid cells for the same pool of regulatory cytokines; as a result, each got less of these cytokines and became less active.
These innate lymphoid cells thus play a critical role in maintaining intestinal homeostasis. They make IL-22, which induces the production of antibacterial peptides by the lining and protects the intestine from pathological inflammation. Symbiotic microbiota make a safe home by tamping down the production of IL-22 by inducing IL-25. The TH17 cells can contribute to this tamping down by competing for regulators. The authors conclude by stating that “this complex regulatory network demonstrates the subtle interaction between the microbiota and the various forces of the vertebrate immune system in maintaining intestinal homeostasis.”
The big picture is that we like having certain bacteria in our gut. In fact, vertebrates have co-evolved with the bacteria in their guts for hundreds of millions of years. The immune system and the gut microbial community have developed a peaceful coexistence that benefits both, if all goes well. We get the bacteria to digest many kinds of carbohydrates for which we don't have our own digestive enzymes. They also make certain nutrients for us (like vitamin K). And the ones we like occupy the environmental niche, keeping bad ones from moving in. In return, we give the desirable ones a stable environment, let them keep some of the food for themselves, and even pass them around the world. But how do we tolerate them in the gut, where they are beneficial to us, and yet respond vigorously against them in the wrong place? That's what the article is about. There is a finely tuned balance (certainly not yet all understood) that holds the immune response in check. In fact, there is communication and cooperation between the immune system and the bacteria. Very interesting to try to model.
I imagine the immune system in the gut is a bit more complicated; we need to make sure we don't fall into the trap of assuming that the entire mechanism is confined to the process defined here. In any case, the body has multiple layers of defense; beneficial flora have likely developed some way around all of these (or the body has learnt to tolerate), while "bad" bacteria would be unlikely to do so. Any that would might get purged when we end up worshipping the porcelain ring, or - worst case - kill the host. Both would probably represent a terminal end for that particular strain, thus providing highly selective pressure against its ongoing development.
If you'd like to see how your gut acts without the beneficial bacteria, just get sick in a way that requires huge doses of antibiotics. My experience has been that things go much... smoother with the good bacteria present.
"Just wait until the hacker bacterium arrives and it figures out how to sign its own code to run whatever it wants. Then we'll be in a world of shit. Quite possibly, literally"
There has been something like that already, but on a smaller scale, e.g. the HIV virus.
That's not quite it; HIV attacks the actual T-cells for the immune system. We're talking about a virus that's dangerous that manipulates the immune system via this pathway. But I suppose if you can attack the immune system directly, that's less work.
Fascinating. If the number of bacteria outnumbers our own cells it probably means that fecal matter must contain a colossal amount of bacteria, dead and alive, and that the breaking down of food continues for days after we excrete it albeit by additional colonies of bacteria. In the wild, most mammals’ feces seem to disappear after a while. I’m not aware of any insect that feasts on them so if the decomposition is mostly done by microorganisms is it fair to assume (from an evolutionary point of view) that a good number of them were created in the gut?
I could not find answer here to question if (and how) our immune system attack bad bacteria in guts but leave good/symbiotic bacteria alone?
All it shows is evidence that good/symbiotic bacteria developed ways to suppress immune system. But that still leave few questions:1) even suppressed, remaining immune system still attack good/symbiotic bacteria in same way as it attacks bad bacteria? 2) because it is suppressed, immune system in guts is less effective to fight even bad bacteria than in other parts of body?
In other words, I expected "solution" to this problem to be in good bacteria adopting something to make our immune system treat it as our "own", not in suppressing our immune system.
I thought of dung beetles but it seems to me that, on the surface, the amount of excrement produced by animals far exceeds the population of dung beetles. From an evolutionary point of view, one would think that dung beetles would swarm the fields after they have been fertilized with manure. Since farming and agriculture is at least a hundred thousand years old, one would think that beetles would have become an essential farming accessory. Maybe it is and I’m just showing my ignorance.
Fascinating. If the number of bacteria outnumbers our own cells it probably means that fecal matter must contain a colossal amount of bacteria, dead and alive, and that the breaking down of food continues for days after we excrete it albeit by additional colonies of bacteria. In the wild, most mammals’ feces seem to disappear after a while. I’m not aware of any insect that feasts on them so if the decomposition is mostly done by microorganisms is it fair to assume (from an evolutionary point of view) that a good number of them were created in the gut?
Any organic material put in moist ground will decompose. Dirt contains a huge quantity and variety of fungi and bacteria.
Just wait until the hacker bacterium arrives and it figures out how to sign its own code to run whatever it wants. Then we'll be in a world of shit. Quite possibly, literally.
It would be against its interests to screw (and possibly kill off) the host. If it could figure out to always triumph over any hostile invading foreign species without affecting us it'd be a win-win situation.
Fascinating. If the number of bacteria outnumbers our own cells it probably means that fecal matter must contain a colossal amount of bacteria, dead and alive, and that the breaking down of food continues for days after we excrete it albeit by additional colonies of bacteria. In the wild, most mammals’ feces seem to disappear after a while. I’m not aware of any insect that feasts on them so if the decomposition is mostly done by microorganisms is it fair to assume (from an evolutionary point of view) that a good number of them were created in the gut?
Any organic material put in moist ground will decompose. Dirt contains a huge quantity and variety of fungi and bacteria.
The bacteria in the soil are missing in processed food and synthetic supplements. Would you make a case to incorporate a few grams of soil in our diet? Isn’t this how animals re-populate their intestinal fauna?
Fascinating. If the number of bacteria outnumbers our own cells it probably means that fecal matter must contain a colossal amount of bacteria, dead and alive, and that the breaking down of food continues for days after we excrete it albeit by additional colonies of bacteria. In the wild, most mammals’ feces seem to disappear after a while. I’m not aware of any insect that feasts on them so if the decomposition is mostly done by microorganisms is it fair to assume (from an evolutionary point of view) that a good number of them were created in the gut?
Any organic material put in moist ground will decompose. Dirt contains a huge quantity and variety of fungi and bacteria.
The bacteria in the soil are missing in processed food and synthetic supplements. Would you make a case to incorporate a few grams of soil in our diet? Isn’t this how animals re-populate their intestinal fauna?
sorry for quoting the whole thing but... yogurt is one way we replenish. also even pasteurized food isn't antiseptic and does contain some organisms. In addition we eat a great amount of dirt.. just by swallowing every minute. There are many vectors for re-population of gut bacteria, but some are going to be both more efficient and better overall for the host.