Fraser Hollins has been praised as “one of Canada’s great bassists.” Don Thompson – Canadian Jazz Multi-Instrumentalist.
His playing demonstrates “highly imaginative and creative spontaneous music making.” Craig Hurst – Jazz Review.com.
Fraser studied at the prestigious Manhattan School of Music, where he earned his master's degree in jazz performance in 2004. During his professional career, he has performed around the world and appeared on over 40 recordings with bands featuring national and international artists. This list of musicians includes Jeff Johnston, Jean-Christophe Béney, Christine Jensen, Rémi Bolduc, Josh Rager, Dave Turner, Don Thompson, Kelly Jefferson, Guido Basso, Kevin Hays, Ingrid Jensen, Brad Turner, Geoffrey Keezer, Kurt Rosenwinkel, Ben Monder, Dave Liebman, Donny McCaslin, Jon Cowherd, Brian Blade, Jerry Bergonzi, George Garzone and Seamus Blake. Fraser has led his current quartet since 2006; its members have included Joel Miller, Steve Amirault, Karl Januska, Martin Auguste, Greg Ritchie and John Fraboni. The first three of these musicians join him on his debut release, entitled "Aerial", which features a collection of music largely influenced by his travels to South America and his love of nature. All four members of the band contribute to the creative, modern, fresh and subtle qualities heard on the album.
This measure of the money supply differs from M2 in that it includes treasury deposits at the Fed (and excludes small time deposits, traveler's checks, and retail money funds).
M2 growth rose in May, growing 4.14 percent, compared to April's growth rate of 3.86 percent. M2 grew 3.83 percent during May of last year. The M2 growth rate has fallen considerably since late 2016, but has varied little in recent months.
Money supply growth can often be a helpful measure of economic activity. During periods of economic boom, money supply tends to grow quickly as banks make more loans. Recessions, on the other hand, tend to be preceded by periods of slowdowns in rates of money supply growth.
Moreover, periods preceding recessions often show a growing gap between M2 growth and TMS growth. We saw this in 2006-07 and in 2000-01. The gap between M2 and TMS narrowed considerably from 2011 through 2015, but has grown in recent years.
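The growth figures quoted here are year-over-year percentage changes in the level of the money supply series. As a minimal sketch (using made-up level data, not actual M2 figures), the calculation looks like this:

```python
# Year-over-year growth of a monthly money-supply series.
# The numbers below are hypothetical, for illustration only.

def yoy_growth(series, i):
    """Percent growth at month i relative to the same month a year earlier."""
    return (series[i] / series[i - 12] - 1.0) * 100.0

# Hypothetical monthly M2 levels in billions of dollars
m2 = [13500 + 45 * k for k in range(13)]

print(round(yoy_growth(m2, 12), 2))  # 4.0
```

The same function applied to a TMS series would let you track the M2-TMS gap discussed above.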
Website owners should be using Google Analytics to monitor what is going on: what works, what does not, where traffic is coming from and what the conversion rates are. If you are not doing this, it might be a good idea to start doing so. We have found some helpful information which can guide you.
Social media activity should be a big part of your overall online marketing activity. One of the big debates that’s raged on for a long time is how to effectively measure social media results.
In this article I’ll explain how you can use Google Analytics to monitor how well your social media efforts are working.
Where to find the information
If you log in to your Google Analytics account, then click on the traffic sources menu on the left-hand side, you’ll see a list of sub-menus appear underneath, as below.
Enterobius vermicularis, commonly known as the pinworm, is a worm from a class of parasites known as nematodes, or simply roundworms. Pinworms are a common source of infection, especially in children, who have a high chance of contracting a pinworm infection. Worldwide, more than 30% of children are infected with pinworms, but adults can also be infected if they do not take the necessary precautions. Pinworms are usually half an inch in size, and they can be easily seen with the naked eye. The most common signs and symptoms of pinworms include peri-anal itching, nausea, and occasional abdominal pain. Pinworms may also be visible in the stool, and the frequent itching and irritation in the anal area can lead to disturbed sleep. This infection can be very troublesome and irritating in the long run, which is why you should treat it as soon as possible.

Pinworm infection is caused by accidental ingestion or inhalation of the eggs by animals or humans, where the eggs are usually transferred via contaminated foods, drinks, or utensils. Pinworm eggs are so small that they cannot be seen with the naked eye. Adult pinworms then move through the body, and later they lay more eggs around the anal region of the infected person, who then becomes infective to other people within hours. This cycle continues until the infection is cured and the entire household of the infected person is disinfected. You should talk with your doctor before you start using any of the home remedies mentioned below.
Home remedies for pinworms
Onions: It is known that onions are rich in sulfur, which can help to eliminate pinworms from your system within a day or two. You should peel one to two medium-sized onions, wash them thoroughly, and then cut them into thin slices. You should put the onion slices in a bowl, and then add a pint of water. Let the onion slices soak in the water overnight, for at least twelve hours. You should then strain the water using cheesecloth, and drink this home remedy three times per day for a period of two days.
Castor oil: The scientific name of the castor oil plant is Ricinus communis. Castor oil is a popular antimicrobial and anti-inflammatory agent which can destroy and eliminate harmful worms and parasites from the body. Pinworms usually attach to the walls of the intestines, which is why they are not excreted from the body. Castor oil can dislodge them from the intestines because it has laxative properties, which in turn will push the pinworms out of your body via excretion. You should consume a teaspoon of castor oil once per day.
Wormwood tea: Wormwood extract has an anti-parasitic effect which can help to kill parasites such as pinworms, and combat the infection [4,5]. You should add three to four drops of wormwood extract to one cup of hot water, and consume the tea. Also, you can add honey to the mix for flavoring. You should drink it 3 times a day for around a week.
Yogurt: This natural cure is rich in probiotics which can help to restore the natural balance of bacteria in the digestive system . Probiotics can also destroy and eliminate harmful bacteria and germs from the body. You should consume about one cup of plain yogurt and do it at least once per day.
Bitter gourd: Bitter gourd has a bitter-tasting compound called cucurbitacin which can help to eliminate pinworms from your system. You should blend two medium-sized bitter gourds with one cup of water. Also, you can add honey or any fruit juice to this blend to mask the bitter flavor. You should drink this home remedy intermittently throughout the day.
Natural cures for pinworms
Pumpkin seeds: Pumpkin seeds have compounds known as cucurbitacins which are known for their anthelmintic activity . This component can paralyze pinworms which makes it easier to expel them from your body. You will need to take one cup of raw pumpkin seeds, and blend it with ½-1 cup of water to make a creamy paste. You should then consume it on an empty stomach. You should take this natural treatment once per day in the morning.
Carrots: Carrots are rich in fiber which can improve digestion and bowel movements . This can push the pinworms out of your body via your intestines. You will need one medium-sized carrot. You should take the carrot, and then wash it thoroughly, peel it, shred it, and take the carrot pieces with your meal, or as a snack. You must eat about a cup of shredded carrots one to two times per day.
Pineapple juice: Bromelain is an enzyme in pineapples which has antimicrobial activities [9,10], and it can help you get rid of pinworms. You will need a quarter of a pineapple. You should peel and cut the pineapple, then blend the pineapple pieces with one glass of water. You should drink this juice at least once per day.
Lemon juice: Lemon has an acidic nature which can lower the pH of the body, and it can make the survival of the pinworms difficult. You should squeeze half a lemon into one glass of water. If you want, you can add honey. You should consume it at least once per day.
Grapefruit seed extract: Grapefruit seeds have polyphenols that exhibit strong antimicrobial activity against pinworms . You should take 200 mg grapefruit seed extract supplements twice per week. You need to talk with your physician before you start consuming these supplements.
Clove essential oil: This oil is derived from the aromatic flower buds known as cloves. It has a component known as eugenol which exhibits powerful antimicrobial properties. Clove oil also has antiseptic properties which can help in the natural treatment of pinworm infection. You will need one to two drops of clove essential oil. You should apply the clove essential oil directly to the affected area. If you have sensitive skin, you should dilute the clove essential oil with one teaspoon of coconut oil to avoid irritation. You should use this natural treatment once per day, at night.
Tea tree oil: Tea tree oil has antifungal and antibacterial properties which can combat the parasitic infection. You should mix one to two drops of tea tree oil with one to two teaspoons of coconut oil, and apply this home remedy to the affected area once per day, every night. However, be cautious, because people with sensitive skin can get irritation from it.
Garlic: Garlic contains antioxidants that exhibit antifungal and antibacterial properties. You can chew raw garlic cloves on a daily basis, or you can add garlic cloves to the food that you consume. Another option is to mince garlic cloves, mix them with petroleum jelly, and apply the paste to the affected area. It is best to apply this paste once every night.
Coconut oil: This is a natural antioxidant which has antifungal and antibacterial properties. This can be helpful in eradicating the pinworm infection and its symptoms. You should consume one teaspoon of coconut oil every morning or apply coconut oil to the infected area every night as a second option. You should use this natural treatment on a daily basis.
Apple cider vinegar: It contains around 6% acetic acid, which can lower the pH of the body while also restoring the hydrochloric acid balance. This creates an uninhabitable environment for the pinworms, making it difficult for them to survive in the body. You should add two teaspoons of apple cider vinegar to one glass of water, mix it well, and drink it at least twice per day. You may also add honey for flavoring.
Hygiene and environmental disinfection: Pinworm infection is highly contagious, so treating only the infected person probably won’t be enough. That’s why it is very important to regularly wash your hands with soap and warm water after using the bathroom or after contact with any utensils that may be contaminated. You should also disinfect your entire house with hot water to help completely get rid of any residual pinworms and prevent recurrence. You should soak fabrics and clothes that may be contaminated in hot water for at least thirty minutes before you wash them. You can disinfect the bathrooms by cleaning them with hot water and soap. You should do this on a daily basis until your house is completely clear of pinworms.
References:
Caldwell JP. Pinworms (enterobius vermicularis). Canadian Family Physician Medecin de Famille Canadien. 1982;28:306-309.
Arca MJ, Gates RL, Groner JI, et al. Clinical manifestations of appendiceal pinworms in children: an institutional experience and a review of the literature. Pediatric Surgery International. 2004;20(5):372-375.
Nisha M, Kalyanasundaram M, Paily KP, et al. In vitro screening of medicinal plant extracts for macrofilaricidal activity. Parasitology Research. 2007;100(3):575-579.
Yildiz K, Basalan M, Duru O, Gökpınar S. Antiparasitic efficiency of Artemisia absinthium on Toxocara cati in naturally infected cats. Turkish Journal of Parasitology. 2011;35(1):10-4.
Azizi K, Shahidi-Hakak F, Asgari Q, et al. In vitro efficacy of ethanolic extract of Artemisia absinthium (Asteraceae) against Leishmania major L. using cell sensitivity and flow cytometry assays. Journal of Parasitic Diseases : Official Organ of the Indian Society for Parasitology. 2016;40(3):735-740.
Singh RK, Chang HW, Yan D, et al. Influence of diet on the gut microbiome and implications for human health. Journal of Translational Medicine. 2017;15(1):73-73.
Grzybek M, Kukula-Koch W, Strachecka A, et al. Evaluation of anthelmintic activity and composition of pumpkin (Cucurbita pepo L.) seed extracts – in vitro and in vivo studies. International Journal of Molecular Sciences. 2016;17(9):1456.
Yang J, Wang HP, Zhou L, Xu CF. Effect of dietary fiber on constipation: a meta analysis. World Journal of Gastroenterology. 2012;18(48):7378-7383.
dos Anjos MM, da Silva AA, de Pascoli IC, et al. Antibacterial activity of papain and bromelain on Alicyclobacillus spp. International Journal of Food Microbiology. 2016;216:121-126.
Praveen NC, Rajesh A, Madan M, et al. In vitro evaluation of antibacterial efficacy of pineapple extract (Bromelain) on periodontal pathogens. Journal of International Oral Health : JIOH. 2014;6(5):96-98.
Spiegler V, Liebau E, Hensel A. Medicinal plant extracts and plant-derived polyphenols with anthelmintic activity against intestinal nematodes. Natural Product Reports. 2017;34(6):627-643.
Pessoa LM, Morais SM, Bevilaqua CM, Luciano JH. Anthelmintic activity of essential oil of Ocimum gratissimum Linn. and eugenol against Haemonchus contortus. Veterinary Parasitology. 2002;109(1):59-63.
Cox SD, Mann CM, Markham JL, et al. The mode of antimicrobial action of the essential oil of Melaleuca alternifolia (tea tree oil). Journal of Applied Microbiology. 2000;88(1):170-175.
An algebra for a monad <T,η,µ> is an object A of the category C equipped with an 'action': a morphism θ : T A -> A such that the following diagrams commute:
Axioms of an algebra A: θ ∘ η_A = id_A (unit) and θ ∘ µ_A = θ ∘ Tθ (multiplication).
Relationship to an Algebraic Theory
If the monad T represents an algebraic theory, then an 'algebra for a monad' represents a model for the algebraic theory, that is, an implementation of it.
| Monad | an algebraic theory | Syntax | Holds all the operations that may be needed. | Lets us form algebras |
| Algebra for a monad | a model for the algebraic theory | Semantics | An implementation of those operations. | Lets us evaluate (collapse) algebras |
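To make this concrete, here is a small sketch in Python (chosen purely for illustration) using the list monad on integers: T A is 'lists of A', and the algebra θ evaluates (collapses) a list by summing it. The two commuting diagrams become two checkable equations:

```python
# An 'algebra for a monad', illustrated with the list monad on int.
# T A = lists of A; eta(a) = [a]; mu concatenates a list of lists.

def eta(a):
    return [a]                               # unit: A -> T A

def mu(xss):
    return [x for xs in xss for x in xs]     # multiplication: T T A -> T A

def theta(xs):
    return sum(xs)                           # the action theta: T A -> A

# The two algebra axioms, as equations on sample data:
a = 7
xss = [[1, 2], [3], []]
assert theta(eta(a)) == a                                    # unit axiom
assert theta(mu(xss)) == theta([theta(xs) for xs in xss])    # multiplication axiom
```

Any monoid operation (product, max, string concatenation) would serve as θ equally well: each choice of algebra is one model of the list theory.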
Informal Introduction
We can think of the endofunctor of a monad in terms of an expression.
We are first introduced to 'algebra' as a part of mathematics where we can use letters (variables) to stand in for numbers that are unknown, or whose literal value we don't wish to use. We can then use this to express axioms like: x + y = y + x.
Here we relate algebras to monads. The monad will allow us to build up expressions and the algebra will allow us to evaluate it back to a single value.
In order to construct a monad, in this case, we start with an endofunctor T together with a map A -> T A. This takes a variable and gives us an expression in that variable.
Examples:
- x -> x² + 3
- x -> x + 2
We can compose these by substituting one in the other, like this
- x -> (x+2)² + 3
So where x appears in the first we have substituted (x+2) which is the second.
We now have the two natural transformations:
- η : A -> T A (the unit)
- µ : T T A -> T A (the multiplication, by substitution)
Which gives us a monad.
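The substitution example above can be sketched directly. Here expressions in one variable are modelled as Python callables, which elides their syntactic structure but makes substitution concrete as function composition:

```python
# Expressions x -> f(x), with substitution as composition.

identity = lambda x: x            # the bare variable: the unit expression

def subst(outer, inner):
    """Substitute expression `inner` for the variable in `outer`."""
    return lambda x: outer(inner(x))

f = lambda x: x**2 + 3            # x -> x^2 + 3
g = lambda x: x + 2              # x -> x + 2

h = subst(f, g)                   # x -> (x + 2)^2 + 3
print(h(1))                       # (1 + 2)^2 + 3 = 12
```

Substituting the identity expression changes nothing (the unit law), and nested substitutions can be flattened in either order (the law that µ must satisfy).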
On the remainder of this page we investigate questions such as:
- Are there other monads which model algebras?
- What is the link to adjunctions?
- Is there a similar structure for co-algebras?
Monads and Algebras
There is a connection between monads, adjunctions and algebras.
An algebra can have two levels:
- An algebraic theory - defines function signatures and axioms.
- A model of that theory - defines the implementation of the functions.
Algebras and Coalgebras
Algebras have a structure based on products; coalgebras, dually, have a structure based on sums.
Monad as Algebraic Theory
A monad <T,η,µ> can represent an algebraic theory as might be given by function signatures and axioms.
Category of Algebras
We can generalise this concept to a category of algebras Alg(T), whose objects are the morphisms θ : T A -> A and whose arrows are maps between the underlying objects that commute with the actions.
Monadicity
Given any category C, is it equivalent to Alg(T) for some monad T?
There is a whole category of adjunctions giving rise to T, with two extremes:
- The Eilenberg-Moore category (C^T) is terminal in this category. Its objects are the T-algebras.
- The Kleisli category (C_T) is initial in this category. It has the same objects as C but different arrows.
Adjunctions and Algebras
Here we use 'an algebra' to mean a specific mathematical 'equational theory'. (The theory of fields is not equational, because not every element has a multiplicative inverse.)
Models of such an 'equational theory' are called 'T-algebras'. This gives rise to a free ⊣ forgetful adjunction between Sets and the category of models of the theory:
- forgetful functor U: T-Alg -> Sets
- free functor F: Sets -> T-Alg
The 'initial algebra' for an endofunctor P : S -> S is a 'least fixed point' for P. Such algebras can be used in computer science to model 'recursive datatypes'.
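As an illustration of the last point, a fold over a list is exactly the unique map out of the initial algebra for the functor P X = 1 + (A × X): it replaces the constructors Nil and Cons with the operations of some other algebra. A minimal sketch:

```python
# Folding a list = the unique algebra morphism out of the initial algebra
# for P X = 1 + (A x X). `nil` and `cons` are the target algebra's operations.

def fold(nil, cons, xs):
    acc = nil
    for x in reversed(xs):     # rebuild Cons(x, rest) from the right
        acc = cons(x, acc)
    return acc

xs = [1, 2, 3, 4]
print(fold(0, lambda x, r: x + r, xs))    # sum algebra: 10
print(fold(0, lambda _, r: 1 + r, xs))    # length algebra: 4
```

Different choices of (nil, cons) give different interpretations of the same recursive datatype, which is the sense in which initial algebras model recursion.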
The average price of renting in London has steadily increased over the years. But, as students debate whether to sign a lease for fall 2021, higher vacancy rates could alter the market.
Due to the uncertainty of the coronavirus pandemic, Western University’s Off-Campus Housing office advised students to not make any decisions for next year’s living situation just yet — unless they are absolutely sure they will need to be in London.
“We have been suggesting they hold off until they know what the university is doing,” said Glenn Matthews, Western’s Off-Campus Housing services’ housing mediation officer.
“That's the thing, we're all guessing,” Matthews said. “What is the pandemic going to do? How long is it going to extend? What's going to happen with the university next fall?”
The pandemic has brought uncertainty for landlords as well as tenants, with lower in-person attendance at Western creating rising vacancy rates in areas near campus. However, Matthews said London continues to see the average rental price grow in proportion to the rising cost of living over the years.
“Typically, the rental supply market in London has been very healthy for students over the last 25 years,” Matthews said.
According to Matthews, there are plenty of options for those looking to rent in London now and for next year.
“I think there will be some students that will be happy to be back in London renting off-campus or in residence and there will be some students that will be happy to [remain] home."
Off-Campus Housing continues to list average rent prices from May 2019 — a year prior to the pandemic. Nonetheless, Matthews assures rental prices did not change drastically during the pandemic.
“[It’s] hard to say what the pandemic would have done to the prices, but we weren't in the office for part of last summer," he said. "I don’t think the numbers have changed that much.”
If higher vacancy rates persist as students opt to stay home, London's rental market trends could see a dramatic shift. This change would depend on how long the pandemic — and consequently, remote learning — will last.
“If there is an oversupply of units, generally speaking, the [average] price would normally either stagnate or go down,” Matthews said.
Unsurprisingly, vacancy rates in London’s student neighbourhoods have increased this past year, in some areas more than others.
The Canada Mortgage and Housing Corporation website notes Downtown North, which comprises the neighbourhoods around the Oxford Street and Richmond Street intersection, has been particularly affected by lower in-person attendance at Western. This zone has seen a jump in vacancy rates from 1.8 to 5.9 per cent in the past year.
Old North, which includes neighbourhoods near the university’s main gates, like Broughdale Avenue, seems less affected, with vacancy rates increasing from 1.7 to 2.7 per cent in the past year. Matthews explained that this lower vacancy rate is probably due to the neighbourhood’s closer proximity to Western’s campus.
“Over the years, more and more students have tried to be closer to the school, and that's part of the society we have, [with] people wanting to be close to where they have to get to,” Matthews said.
“You pay a higher premium for being close to campus and you pay more money for a newer build.”
For those worried about signing a lease in London next year, Matthews assures students there's no need to panic into a quick decision: the market is full of choices for students.
“All I know is, if the markets have been anything like they've been in the past, students aren't going to have problems finding housing.”
The reading opens with God reminding Abram of their covenant agreement. “Walk in my presence and be blameless.” It is a call for Abram to maintain his faith and trust in God’s promises – yet to be fulfilled. We have here another covenant narrative. The same covenant promises are confirmed which we saw in Wednesday’s reading but this time it not only lays down religious and moral obligations but also imposes circumcision as the badge of recognition for God’s people.
In intervening verses not included in our reading, God has also formally announced that Abram’s name has been changed to Abraham (v.5), because it sounds like ab hamon, ‘father of a multitude’. For the ancients a name did not merely indicate, rather it made a thing what it was, and a change of name meant a change of destiny. Abraham is soon to be the father he long wanted to be.
Abram and Abraham, it seems, are in fact just two dialectal forms of the same name whose meaning is ‘he is great by reason of his father, he is of noble descent’. In this context, however, Abraham is interpreted on the strength of its similarity with ab hamon, ‘father of a multitude’.
God now speaks to Abraham:
Both he and his descendants are to observe this covenant down the ages. And, from now on, as a sign of the covenant, every male is to be circumcised.
The Jerusalem Bible comments:
Circumcision was originally a rite initiatory to marriage and to the life of the clan. Here it becomes a sign which, like the rainbow after the Flood, is to remind God of his Covenant and humans of the obligations deriving from their belonging to the Chosen People. Nevertheless, the legislative texts allude to this injunction only on two occasions (Exod 12:44; Lev 12:3). It is only at the Exile and after that it receives its full prominence, cf. 1 Macch 1:63; 2 Macch 6:10. Paul explains it as the “seal of the righteousness of faith” (Rom 4:11). On the ‘circumcision of the heart’, see Jer 4:4.
The origin of the custom of circumcision is likely to have been hygienic but, as with several of Israel’s customs, it is given a divine sanction and a ritualistic significance which guarantees its observance. It then becomes a distinctive badge of the race. It was to be administered eight days after birth and was to be performed on every member of the household, and even on outsiders and non-Jews bought (as slaves) and brought into the household. To be uncircumcised was to be seen as in violation of the covenant. Where the foreskin has not been cut away, that man will be cut away from the community. “He has broken my covenant.”
Ironically, it has also become the badge of the followers of Islam, who borrowed it from their future nemesis, and who, in some places, have extended it to women as well.
Returning to our reading, God next formally announces to Abraham that his wife Sarai is henceforth to be known as Sarah. The two words are actually forms of the same name, meaning ‘princess’. Sarah is to be – a long way in the future, to be sure – the mother of kings.
And God says he is going to bless her and give Abraham a son by her. The son will also be blessed and will have many descendants and “rulers of peoples shall issue from him”.
Abraham’s response is to prostrate himself in deep reverence before God but at the same time he laughs. “Can a child be born to a man who is 100 years old? Or can Sarah give birth at 90?!” Abraham’s laughter is a sign not so much of unbelief as of surprise at the extraordinary announcement.
Abraham’s laugh will be echoed later by Sarah, who laughed when she found her periods had stopped, a sign of her pregnancy – something she finds incredible at her age. She will laugh again after the birth with a feeling of vindication. Each is a play on the name Isaac, an abbreviation of the name Yshq-El which means “May God smile, be kind” or “God has smiled, been kind”.
Abraham also begs God that Ishmael, soon to be displaced as the true son, continue to be favoured. Mentioning Ishmael, up to now the heir-apparent to the Promise, is an implicit request for reassurance.
God again announces that Sarah will have a son and that he is to be called Isaac. The covenant made with Abraham will now continue with Isaac as an everlasting pact with his descendants forever.
Abraham’s appeal for Ishmael is also heard. He will be blessed and be fertile and have many descendants. He will be the father of twelve chieftains and they will become a great nation but they will be outside the covenant. The covenant, originally made with Abraham, will now pass to Isaac, who will be born within the coming year.
And our reading concludes with God leaving Abraham to his future.
That very day Abraham, his son Ishmael and all the males in his household are then circumcised according to the Lord’s command. (This last is not included in today’s reading.)
The issue of circumcision will become very contentious in the apostolic Church where Jewish Christians wished to have the custom continued. However, it will be seen as an excessive burden on Gentile converts and is soon dropped as a requirement. Later, Paul will emphasise that true circumcision is in the heart and not in an external operation.
The past twenty years have seen tremendous engagement around racial, cultural, and gender diversity. Millennials (ages 18-29) are generally knowledgeable about such identity differences and far better equipped to have a respectful, nuanced discussion of these issues than their parents and grandparents.
Can the same be said about religious diversity? At a time when religion fuels numerous conflicts overseas and when religious tensions in the United States appear to be on the rise, what must be done to improve religious literacy and foster inclusive attitudes among America's next generation?
At the Aspen Ideas Festival last week, we discussed this challenge and opportunity in two panels featuring Madeleine Albright, E.J. Dionne, David Gergen, Ingrid Mattson, Eboo Patel, and Jim Wallis. Secretary Albright stressed that the world is watching the United States: interreligious strife at home damages our image abroad. Gergen expressed concern about the widening religious-secular divide, cautioning that a growing overlap between religion and political affiliation contributes to antipathy and dysfunction in Washington. All agreed that in today's increasingly diverse religious landscape--marked by a sharp rise in the religiously unaffiliated (now 20% of Americans and 30% of Millennials)--it is necessary to take a more intentional approach to positively engage matters of religion. We at the Justice and Society Program call this "principled pluralism."
What is principled pluralism? Harvard scholar of religion Diana Eck defines pluralism by contrasting it with diversity. She writes, "Diversity is just plurality, plain and simple--splendid, colorful, perhaps threatening. Pluralism is the engagement that creates a common society from all that plurality." Principled pluralism encourages that engagement, but respects the desire of some groups to respectfully limit it, in harmony with deeply held views on matters of faith.
Over the past nine months, Gergen and Albright have co-chaired the Inclusive America Project (IAP), an initiative of the Aspen Institute's Justice and Society Program that has assembled a distinguished panel of leaders from youth development organizations, institutions of higher education, media outlets, religiously affiliated organizations, and government agencies.
The IAP panel's recently published Principled Pluralism: Report of the Inclusive America Project offers strategies "to encourage respect in the public sphere for the religious identity of individuals and groups; to foster positive relationships and informed dialogue between people of different spiritual orientations; and to forge partnerships among religious and other organizations in service to the common good." As the co-chairs noted at the Ideas Festival, the report contains specific action steps to which the distinguished panelists are committed. We hope readers will take these recommendations into their own communities.
Service to the common good was a major theme at the Ideas Festival, identified by Gergen as providing a "natural connection" between the Inclusive America Project and the Franklin Project. Since service is a core value across major faith traditions, bringing young people together to work on common projects builds positive social capital far more quickly than theological discussions. Jim Wallis of Sojourners talked about how religious communities can provide transformative energy in social reform, noting the role of the evangelical movement in turning the tide in Senate consideration of comprehensive immigration reform. While in 2007 the Senate failed to bring immigration reform legislation to a vote, last Thursday it passed an immigration reform bill by a margin of 68 to 32, in part due to grass-roots pressure on key members of Congress by religiously-affiliated groups in their home states. That coalition will push forward as the debate moves to the House of Representatives.
Religious differences can be a potent source of social tension, as evidenced by bloody conflicts from Belfast to the Balkans to Baghdad. However, as with race and gender, religious diversity is a source of strength and richness when properly engaged. Albright, Gergen, and their fellow Ideas Festival panelists issued a call to action, urging Americans to build bridges across religious divides. What can you do to help build a stronger, more inclusive America?
Michigan law says third graders with poor reading scores should be held back. How is that affecting Lansing-area schools?
LANSING — Thousands of Michigan third graders who struggle to read were recommended for retention under a recently enacted state law, but few Lansing-area students will be repeating third grade because of poor reading skills.
The Michigan Read by Third Grade law, passed in 2016, requires schools to identify and help students who are struggling to learn to read. For third graders, the Michigan Student Test of Educational Progress provides a key reference point. Low scores in the reading portion of the test make students eligible for retention under the law.
About 5.8% of third graders statewide, or 5,660 students, were identified for retention this year, up from 4.8%, or 3,661 students, last year.
The number of students retained in third grade was far lower because a number of exemptions exist that allow students to move on to fourth grade. No third grade students at four of Greater Lansing’s five largest school districts will be held back under the Read by Third Grade law. Districts cited a number of reasons.
“All of our cases, there were specific circumstances so that it made sense to go ahead and promote all of them,” said Glenn Mitcham, assistant superintendent for East Lansing Public Schools. “In some cases, the student had already been retained, which automatically qualifies students for a good cause exemption. Others had test anxiety which hindered them from doing well. Through other assessments and portfolios of work, we found out they were not more than a grade behind.”
Starting May 18, the state Center for Educational Performance and Information began sending out letters to families alerting them that their third graders had been identified for retention under the law.
Fewer than 1% of East Lansing Public Schools’ students received the letter, Mitcham said. The state recommended for retention about 13.7% of Lansing School District’s 771 third graders, about 2.5% of Grand Ledge Public Schools’ third graders and four Okemos Public Schools third graders.
Each of the schools worked with students who ultimately were allowed to advance.
This was the second year the retention policy was in place. More students took the M-STEP this year, with participation rising from 71.2% last year to 98%, and the number of students eligible for retention rose with it.
Under the law, parents of students facing retention can apply on behalf of their children for good cause exemptions that allow them to advance to fourth grade. Among the reasons they can cite: the student has a learning disability, is an English language learner, has already repeated a grade, or has been enrolled in their current school for less than two years and evidence shows they did not receive an appropriate individual reading improvement plan.
Parents or legal guardians also can request the child be allowed to advance if the school district’s superintendent, chief administrator or another designee agrees that promotion to the fourth grade would be in the best interest of the student.
The retention possibility of the law continues to draw criticism among parents and educational professionals.
“Retention decisions should be on a student-by-student basis, in consultation among parents, teachers and administrators,” State Superintendent Michael Rice said in a May 27 press release. “In general, however, the idea that a given score on a state assessment should generate retention makes no sense. Student performance in multiple ways should be considered before a decision to retain a student.”
At Okemos Public Schools, parents or guardians, principals, teachers and other support staff review all student data, not just M-STEP scores, to determine whether a student eligible for retention should repeat third grade, said Stacy Bailey, assistant superintendent for curriculum and instruction.
All of the four students identified as candidates for retention were able to meet one of the exemptions and advance, she said.
Most schools follow a similar process when determining whether their students should be promoted. And aspects of the law, such as communication with parents and guardians through individualized reading improvement plans, are already “universal best practices,” Bailey said.
Some school officials fear the law does more harm than good.
Mitcham said the law is “wonderful, except the part about retention.” Because of the law, school districts assess third graders and those students who are reading below grade level are connected with assistance.
Some children struggling to read receive additional support at school and through “Read at Home” plans. Working on reading skills outside of school can be critical to improving reading ability, Mitcham said.
At Lansing School District, the law has hurt more than helped, said Sarah Odneal, director of diversity, equity and inclusion. The individualized reading plans educators are required to develop for struggling students can be seen more as a compliance document, she said, rather than an opportunity to work with parents toward the success of their children.
When students are held back, Odneal said, the likelihood that they drop out later in their educational careers increases. As a result, the school district sends letters to parents with retention warnings.
“It causes anxiety for our families,” Odneal said. “They’ll get a letter from the state saying their child is eligible for retention and then we’ll send a letter with warning of retention.”
In addition to the requirements under the law, schools like those in Lansing School District are working on different efforts to help their students avoid retention. She said professional development and instructional coaching have been key in Lansing School District.
Prior to COVID-19, Lansing School District teachers gave instruction in grade-level content while other staff helped students in specific areas on the side. Now, through professional development and training, teachers conduct the intervention support themselves, Odneal said, using classroom methods like small-group instruction and layering in support through co-teachers.
Okemos Public Schools conducts benchmark assessments three times a year to track academic success, Bailey said. The results are reviewed to determine whether there are students who need additional educational assistance.
Additionally, Okemos Public Schools assigns an instructional coach to each elementary school to work with teachers, support staff and parents and guardians to monitor students’ progress with the intervention.
“This is an ongoing process throughout the school year to provide targeted support for identified students,” Bailey said.
Other efforts include a partnership with the nonprofit Kids Read Now to send free weekly grade-level books to all kindergarten and first grade students, as well as second and third grade students who had received interventions over the past year.
That work to help students improve their reading skills continues beyond the third grade if the students are cleared for a fourth-grade promotion.
At Grand Ledge Public Schools, and in other districts, students who move on to fourth grade through the approval of a good cause exemption are matched with support they need and the “strongest classroom instruction possible,” said Assistant Superintendent for Academic Services Bill Barnes.
“Our instructional and support practices have changed over time as the result of new research around the science of reading and reading instruction, and we consistently use multiple data points to help ensure that our students are growing as early readers. The law has not changed these practices," Barnes said, in an email.
Contact Mark Johnson at (517) 377-1026 or [email protected]. Follow him on Twitter at @ByMarkJohnson. | https://www.lansingstatejournal.com/story/news/2022/07/04/no-lansing-area-students-retained-low-reading-scores-under-new-law/7791323001/ |
Purpose and Passion • Comprehensive Benefits • Life-Work Integration • Community • Career Growth
At Boston Scientific, you will find a collaborative culture driven by a passion for innovation that keeps us connected on the most essential level. With determination, imagination and a deep caring for human life, we’re solving some of the most important healthcare industry challenges. Together, we’re one global team committed to making a difference in people’s lives around the world. This is a place where you can find a career with meaningful purpose—improving lives through your life’s work.
About the role:
The Principal Contract Analyst supports the Global Regulatory Legal group through review, revision and negotiation of clinical and preclinical study agreements, clinical vendor agreements and other related agreements and business documents across a global organization. Works with local and international business partners at varying leadership levels to facilitate the contracting processes. Seeks ways to improve processes and help the department function more efficiently and effectively.
Your responsibilities include:
- Review, revise and negotiate global clinical study agreements and related agreements and documents, e.g., hospital use agreements, device purchase agreements, data use and transfer agreements, informed consent forms, post-market clinical follow-up agreements, electronic medical record access agreements.
- Review, revise and negotiate clinical vendor service agreements, preclinical research agreements and investigator-sponsored research agreements.
- Work collaboratively with counsel and legal professional staff across the global Legal & Compliance Department to ensure agreements are negotiated in compliance with BSC policies and procedures and applicable laws.
- Work collaboratively with business unit employees and functional partners, e.g., Global Clinical Operations, Corporate Quality, Corporate Regulatory.
- Identify opportunities and assist Global Regulatory Legal counsel with the development and maintenance of processes and procedures to maintain compliance with BSC policies and applicable laws, regulations and guidelines.
- Support Global Regulatory Legal counsel in developing and implementing BSC global clinical projects.
What we're looking for:
- Strong experience in reading, understanding and drafting contract language without the assistance of content management tools and/or software; strong experience in redlining and negotiating agreements.
- Basic understanding of contract law, types of agreements, contract terms and conditions, legal requirements and negotiation process.
- Basic understanding of HIPAA and GDPR, preferred.
- Practical and business-minded approach to contract negotiation; ability to balance an appropriate level of risk for achievement of BSC’s strategic objectives with protection of BSC’s legal interests
- Ability to interact professionally and collaboratively with all levels of management, multiple geographies, internal subject matter experts, legal staff, vendors and clients
- Ability to prioritize and complete daily workload and projects with minimal supervision and in accordance with deadlines and shifting priorities
- Ability to synthesize information from a variety of sources and present it in a meaningful and concise way
- Excellent writing, proofreading and editing skills
- Strong verbal and written communication skills and organizational skills
- 5+ years of experience in contracts negotiation/management/administration
- Prior experience engaging in internal operations projects (e.g., template development, policies), preferred.
- Prior use of iManage, preferred.
About us
As a global medical technology leader for more than 35 years, our mission at Boston Scientific (NYSE: BSX) is to transform lives through innovative medical solutions that improve the health of patients. If you’re looking to truly make a difference to people both around the world and around the corner, there’s no better place to make it happen. | https://jobs.bostonscientific.com/job/Marlborough-Principal-Contract-Analyst-Job-MA-01752/750173300/ |
Heat transfer is conventionally divided into three categories: convection, conduction, and radiation.
Conduction is the process of heat transfer between materials that are in direct contact. For example, when you place a saucepan on an electric hob, heat passes from the hob to the saucepan via conduction. Conduction then transfers the heat from the bottom of the saucepan to the inner surface, and from there to the rest of the pan and its contents.
Conduction transfers heat energy from the hotter material to the cooler one. In the coffee roaster, heat transfers to the beans by conduction when the beans touch any hot surfaces inside the drum, including the drum walls and paddles. Heat also transfers from bean to bean via conduction. Conduction is a relatively slow process that occurs only if the materials are in direct contact.
Convection is the process of heat transfer via the movement of a fluid. In the case of coffee roasting, the ‘fluid’ is air; in the saucepan example, it is water. The water touching the bottom of the pan picks up heat energy by conduction from the surface of the saucepan. Hotter water is less dense, so it rises, carrying the heat energy with it. Meanwhile, cooler water flows in to replace it. This movement creates currents within the water that efficiently move the heat around.
Conduction (1) and convection (2) in a saucepan. By conduction, heat travels from the heating element to the hot plate, from the hot plate to the saucepan, and from the saucepan into the water. The hot water becomes less dense and rises, creating currents in the water. The movement of the hot water is called advection. Convection is the combination of conduction (the heat transfer into the water) and advection (the movement of the water).
These natural currents are not the only form of convection, however. If you stir the water, you make it circulate more quickly. The water still carries heat energy with it, so this is also a form of convection. | https://www.baristahustle.com/lesson/rs-4-01-convection-conduction-and-radiation/ |
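The conduction half of this picture can be sketched quantitatively with Fourier’s law of heat conduction, q = k·A·ΔT/d. The sketch below is not part of the lesson; the saucepan numbers (steel conductivity, base area and thickness, temperatures) are assumed purely for illustration.

```python
# Illustrative sketch (not from the lesson): Fourier's law of conduction
# estimates steady-state heat flow through a material in direct contact
# with a hot and a cold side. All numeric values below are assumptions.

def conductive_heat_flow(k, area, t_hot, t_cold, thickness):
    """Steady-state conductive heat flow in watts.

    k: thermal conductivity (W/m*K)
    area: contact area (m^2)
    t_hot, t_cold: surface temperatures (degrees C or K; only the difference matters)
    thickness: material thickness (m)
    """
    return k * area * (t_hot - t_cold) / thickness

# Example: heat flow through an assumed 4 mm steel saucepan base
# (steel k ~ 50 W/m*K, 0.03 m^2 base, hob at 200 C, water at 100 C).
q = conductive_heat_flow(k=50.0, area=0.03, t_hot=200.0, t_cold=100.0, thickness=0.004)
print(f"{q:.0f} W")  # large flow: conduction is fast across thin, conductive metal
```

The same function shows why conduction between beans is slow: bean-to-bean contact areas are tiny and organic material has a far lower conductivity than metal, so q drops by orders of magnitude.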
“Not a lot I can comment on, other than, since we did get so many [questions], I wanted to be responsive the best way I could.
“The way I think I’d like to do it is, I’d like to echo something you’ve heard from our most senior executive leadership.
“While JK Rowling is the creator of Harry Potter, and we are bringing that to life with the power of Portkey, in many places, she’s a private citizen also. And that means she’s entitled to express her personal opinion on social media. I may not agree with her, and I might not agree with her stance on a range of topics, but I can agree that she has the right to hold her opinions.”
The statement follows months of controversy around Rowling’s views on trans women and gender identity, which have drawn condemnation from the likes of Harry Potter actors Emma Watson and Daniel Radcliffe among others.
Warner Bros. has been drawn into the debate because of its somewhat uncertain connection with Rowling. WB Interactive has a licensing deal in place for Harry Potter, and makes franchise games under the Portkey Games label. While the company has made clear that Rowling is “not directly involved” with Hogwarts Legacy, it has declined to comment on whether the author will receive royalties from the game.

It’s led to an ongoing discussion about boycotts for the game, with some arguing that the author’s views shouldn’t be tacitly supported by buying work associated with her, while others point out the potential negative consequences on the developers of the game (most of whom will have little say on the products they make) if sales are low.
It’s within the context of this debate that Haddad has been asked for his opinion – and his comments are likely to spark further debate around his company’s vested interests in Rowling’s work, and the intersection between public opinion and what many consider to be hate speech.
Schreier added that, later in the Q&A, Haddad answered a question specifically about diversity and inclusion in WB Interactive products, explaining that the company was working with LGBTQ organisations, and he had personally talked to the director of trans media representation at GLAAD. Haddad apparently did not mention Rowling in this context.
Joe Skrebels is IGN’s Executive Editor of News. Follow him on Twitter. Have a tip for us? Want to discuss a possible story? Please send an email to [email protected]. | https://nextlevelgaming.info/wb-interactive-president-responds-to-ongoing-debate-over-supporting-jk-rowling/ |
Expression analysis of cyclins in pituitary adenomas and the normal pituitary gland.
Turner HE., Nagy Z., Sullivan N., Esiri MM., Wass JA.
OBJECTIVES: The molecular events involved in pituitary tumour development are still poorly understood. The cyclins play an important role in the control of the cell cycle during cell proliferation, and over-expression of the cyclins has been shown in many different tumour types. The aim of this study was to investigate whether, in comparison to the normal gland, ectopic expression of cyclins occurs in pituitary tumours, and whether differences in cyclin expression are seen with different pituitary tumour types or in association with different tumour behaviour. In contrast to work on cyclin D, there are no published data on cyclins B, A and E in human pituitary tumours.

METHODS: Sixty-seven surgically removed pituitary tumours and 10 specimens of normal human anterior gland were studied using immunohistochemistry to detect the nuclear expression of cyclins A, B, D and E. The microvascular density (as a measure of angiogenesis), Ki-67 labelling index (to assess cell proliferation) and bcl-2 expression had previously been investigated in this cohort.

RESULTS: All tumours studied contained cells that immunostained positively for cyclins A, B, D and E. However, the proportion of positive cells in each tumour type was different. In contrast, there were no cyclin D positive cells in the normal anterior pituitary gland studied, and labelling indices (LI) for cyclins A, B and E were significantly lower in the normal gland than in pituitary adenomas. The cyclin LIs for A, B, D and E were significantly higher in macroadenomas when compared to microadenomas. Non-functioning pituitary tumours (NFA) generally showed the highest cyclin LI. In particular, both recurrent and nonrecurrent NFA showed significantly higher cyclin D LI than other tumours. The ratio of cells expressing cyclin B compared to those expressing cyclin A was significantly higher in functionless tumours that regrew when compared to NFAs that did not (P<0.05). Cyclin D LI and the overall Ki-67 LI as a measure of cell proliferation were related (R2 = 11.4, P = 0.0033), and bcl-2 positive tumours had significantly higher cyclin D LI compared with bcl-2 negative tumours. There was a weak relationship between angiogenesis and the relative proportion of cells expressing D when compared to those in S phase (D/A ratio) (r2 = 10.5, P = 0.02).

CONCLUSIONS: We have demonstrated that ectopic expression of cyclin D and over-expression of cyclins A, B and E, regulating different stages of the cell cycle, is common in pituitary adenomas. In addition, cyclin expression was related to size and to pituitary tumour regrowth. The differences between functionless tumours that regrow and those that do not may be due to reduced bcl-2 expression, increased cell proliferation, more cells at the G2/M stage (B/A ratio) and reduced cell differentiation with more aggressive subsequent tumour behaviour. Cyclin D expression and cell proliferation were related, indicating that the cells entering the cycle become 'committed' to cell cycle progression. There was no relative over-expression of individual cyclins, and therefore no evidence of relative increase in cell cycle phase, indicating that the increased cyclin expression is more likely to be due to constant mitogenic stimulation rather than cell cycle regulatory failure. Although nuclear cyclin expression is a good marker of tumour growth and aggressive behaviour, the growth signal that leads to cyclin expression remains to be identified. | https://www.neuroscience.ox.ac.uk/publications/257689
Ph.D.
Discipline: Molecular Biosciences
Rights: Copyright held by the author.
Abstract
In addition to its well-established role in nucleating microtubules at microtubule-organizing centers, γ-tubulin has essential, microtubule-independent functions that are incompletely understood [reviewed in (Oakley et al., 2015)]. Experiments in our lab with the cold-sensitive γ-tubulin mutant mipAD159 in the filamentous fungus Aspergillus nidulans revealed that γ-tubulin has a role in inactivating the anaphase promoting complex/cyclosome (APC/C) resulting in continuous destruction of cyclin B and failure of nuclei to progress through the cell cycle (Nayak et al., 2010). Deletion of the APC/C co-activator CdhA (the A. nidulans Cdh1 homolog) restores cyclin B accumulation and these and other data demonstrate that γ-tubulin plays an important role in inactivating APC/C-CdhA at the G1/S boundary. However, cdhAΔ, mipAD159 strains are as cold sensitive as the mipAD159 parent, indicating that the cold sensitivity is not due to continuous APC/C-CdhA activity (Edgerton-Morgan and Oakley, 2012). Although the underlying molecular mechanism(s) by which γ-tubulin regulates CdhA are not yet known, our data do not support a direct interaction between γ-tubulin and CdhA. Instead, we hypothesize that γ-tubulin acts through regulators of CdhA. Proteins involved in Cdh1 inhibition and inactivation have been identified in other organisms but not in A. nidulans prior to my work. Thus, the first part of my main project consisted of identifying and characterizing regulators of CdhA. As filamentous fungi are hugely important medically, agriculturally and commercially [reviewed in (Meyer et al., 2016)], it is vital that we understand the cell biology of filamentous fungi to be able to combat fungal pathogens and to maximize their growth for production of desirable products. The second part of my main project was aimed at determining whether these CdhA regulators are candidates through which γ-tubulin acts to regulate CdhA at the G1/S transition. 
In many organisms, initial Cdh1 inactivation at G1/S occurs via phosphorylation by cyclin/CDK complexes which then triggers Cdh1 ubiquitination by the Skp1-Cullin1-F-box (SCF) complex. However, cyclins have not been well studied in members of aspergilli, including A. nidulans. In chapter 3 of this work, I report my identification of all cyclin domain-containing proteins in A. nidulans and establish that this cyclin repertoire is well-conserved in closely and distantly related filamentous ascomycetes. This is significant as the complement of cyclins found in model yeasts (Saccharomyces cerevisiae, Schizosaccharomyces pombe and Candida albicans) differs considerably from one another and from the great majority of filamentous ascomycetes. Thus, A. nidulans is a much better model than these yeasts for studying cyclin function and cell cycle regulation in filamentous fungi. My phylogenetic analyses reveal there are three A. nidulans cyclins that likely carry out cell cycle-related functions (NimECyclin B, PucA, and ClbA). In Chapter 4 of this work, I focus on cyclins PucA and ClbA as they had not been characterized previously. My results reveal that ClbA is not essential, but its destruction is required for mitotic exit. ClbA also appears to function at the G2/M transition. My experiments further demonstrate that both NimECyclin B and ClbA play critical roles in chromosomal disjunction. My results also reveal that PucA is the essential cyclin required for CdhA inactivation at the G1/S transition and that there are no other redundant mechanisms for CdhA inactivation in A. nidulans. Finally, my data indicate that PucA function is required for some of the growth limiting effects of two mipA mutants, including mipAD159, although the mechanism of this interaction is not yet understood. I have also determined that the SCF complex plays a role in regulating CdhA in A. nidulans. 
In Chapter 5 of this work, I focus on two essential components of the SCF complex, Cullin A (CulA) and SkpA. I have determined their terminal phenotypes via the heterokaryon rescue technique, and I have studied their in vivo localization patterns using fluorescent protein fusions I have generated. Interestingly, CulA-GFP strains in a wild-type background are slightly cold sensitive, and CulA-GFP causes strong synthetic growth reduction with mipAD159. The strong, synthetic genetic interaction between mipAD159 and culA-GFP indicates that γ-tubulin and CulA are involved in a common function that is required for growth. Additionally, I have found that the SCF complex in A. nidulans has a crucial role in suppressing septation near the hyphal tip, which is essential for rapid tip extension in filamentous fungi. My study of cell cycle-related cyclins and SCF components in A. nidulans provides new insights into the regulation of the cell cycle and growth of filamentous fungi. My phylogenetic cyclin analyses also indicate that A. nidulans is a well-suited model compared to popular model yeasts for studying cyclin function and cell cycle regulation in aspergilli and other filamentous fungi. Finally, my data also indicate that PucA and CulA are strong candidates through which γ-tubulin acts to control APC/C-CdhA activity and are worthy of follow-up studies.
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
| https://kuscholarworks.ku.edu/handle/1808/28009
Pharmaceutically-active antibiotics revolutionized the treatment of infectious diseases, leading to decreased mortality and increased life expectancy. However, recent years have seen an alarming rise in the number and frequency of antibiotic-resistant "Superbugs." The Centers for Disease Control and Prevention (CDC) estimates that over two million antibiotic-resistant infections occur in the United States annually, resulting in approximately 23,000 deaths.
Despite the danger to public health, a minimal number of new antibiotic drugs are currently in development or in clinical trials by major pharmaceutical companies. To prevent reverting back to the pre-antibiotic era—when diseases caused by parasites or infections were virtually untreatable and frequently resulted in death—new and innovative approaches are needed to combat the increasing resistance of pathogenic bacteria to antibiotics.
Bacterial Resistance to Antibiotics – From Molecules to Man examines the current state and future direction of research into developing clinically-useful next-generation novel antibiotics. An internationally-recognized team of experts cover topics including glycopeptide antibiotic resistance, anti-tuberculosis agents, anti-virulence therapies, tetracyclines, the molecular and structural determinants of resistance, and more.
Bacterial Resistance to Antibiotics – From Molecules to Man is a valuable source of up-to-date information for medical practitioners, researchers, academics, and professionals in public health, pharmaceuticals, microbiology, and related fields.
| https://www.bookdownload.org/bacterial-resistance-to-antibiotics-from-molecules-to-man
A Review of The Emergence of Everything by Harold J. Morowitz, Oxford University Press, 2002.
WHO AMONG US HAS NOT USED THE WORD emergence in philosophical debate without a clear definition of what we mean? I, for one, am guilty as charged. It is clear that the universe began with very little structure but somehow atoms, chemicals, life and mind emerged from this primordial simplicity. But what is emergence exactly? It has something to do with growing structural complexity, novelty and the sum being greater than the parts, but how can we define emergence in an epistemologically sound way that can cover both elemental and biological evolution?
When I saw The Emergence Of Everything advertised in Scientific American, I naturally thought this book would help me to define emergence. Reading the cover endorsements supported this hope. Morowitz has two introductory chapters in which he summarizes the history of scientific, philosophical and theological thought up to the present. He uses this background to frame recent work on complexity and computer modeling as an introduction to the concept of emergence. The rest of the book consists of 28 examples of emergences from cosmogenesis through the emergence of stars, atoms, chemistry, life, civilization, mind, philosophy, and, finally, spirituality. This is a lot of ground to cover in 200 pages.
On the first page Morowitz says that something new and exciting is taking place in analytical thought, philosophy, religion and this, of course, is the paradigm of emergence. Later in the first chapter Morowitz explains that emergence is both a property of computer models and the systems being modeled and that they can both demonstrate novelty, that is, the emergence of something new into the universe. In computer models, as in nature, the possible combinations of the elements of one level are often transcomputable (could not be computed no matter how powerful your computer) so you must use pruning rules to only explore plausible combinations in your search for emergent structures in the next level. The premise is that when the mapping between the model and the system to be modeled is successful then you may have discovered the pruning rules used by nature and thus gain a deep insight into the laws of the universe.
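The search-with-pruning idea described above can be made concrete with a toy sketch. This is purely illustrative and not one of Morowitz’s models; the element set and the “plausibility” rule below are arbitrary stand-ins.

```python
from itertools import combinations

# Toy illustration of why pruning rules are needed: the possible
# combinations of even a modest set of lower-level elements explode
# exponentially, making exhaustive search "transcomputable" for
# realistic systems. A pruning rule keeps only plausible candidates.

elements = list(range(20))       # 20 lower-level elements
total = 2 ** len(elements)       # every possible subset: 2^20 = 1,048,576

def plausible(combo):
    """Arbitrary pruning rule standing in for physical plausibility."""
    return sum(combo) % 7 == 0

# Explore only three-element combinations that pass the pruning rule.
pruned = [c for c in combinations(elements, 3) if plausible(c)]
print(total, len(pruned))        # the pruned search space is a tiny fraction
```

With 20 elements the unrestricted space already exceeds a million subsets; doubling the element count squares it, which is the sense in which nature’s full combinatorics cannot be enumerated and pruning rules carry the real explanatory weight.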
At the end of the first chapter, Morowitz summarizes emergence by contrasting it with reduction. Where reduction tries to move from the whole to its parts, emergence tries to generate the properties of the whole from an understanding of the parts. This is a nice summary of the problem but it does not bring anything new to the table.
In the first chapter, the author stated that a clear epistemology is necessary to understand emergence, but in the second chapter he argued that we do not have a clear epistemology in which to frame emergence. At this point I began to wonder what the book was about since the author had admitted up front that we do not have the tools to understand the subject.
But it became clear in the last chapter, when Morowitz confessed that he had two agendas: to study emergence by examining a number of examples, and to look for the nature and operation of God in the emergent universe. His book is therefore really more of a work of theology than it is science. The theology is cloaked in scientific jargon and mathematical equations, but it is theology nonetheless.
Discussing the universe after the Big Bang, Morowitz emphasizes that the emergence of atoms is informatic and that something akin to mind has already entered the universe. Either he is placing the existence of information before the emergence of mind, which is a grave epistemological error, or he is placing the mind of God in the laws of physics, which is a creationist position. In the common evolutionary view, mind emerged from life with nervous systems. From mind sprang meaning, which is required before information can exist. In the evolutionary view, the Universe has no meaning save the meaning cast upon it by mind.
To say that information and mind entered the universe at the time of atomic emergence is either cloudy epistemology or a veiled attempt to imbed divinity in physical laws. I think it is both. This initial misstep in epistemology leads to a much larger problem when, in a later chapter, Morowitz shows an equation for the equivalent energy required for a binary decision and says that the equations for entropy have a noetic quality. Once mind has been introduced into physical laws it does not seem too far afield to attempt to calculate the energy required for a decision. After equating the thermodynamic formulation of entropy to the Information Theory formulation of entropy, Morowitz calculates the energy equivalent of a binary decision. In his Information Theory, Claude Shannon made it clear that this information has no value, that is, it does not reference meaning or consciousness, it is just an engineering tool for describing the capacity of a communication channel. Morowitz disregarded this caveat, making a large epistemological error in assigning a calculated energy value to a decision.
The problem is that only minds/brains can make decisions. To assign an energy equivalent to a decision is ludicrous: we do not know how brains make decisions, but the chances are high that the brain requires an amount of energy astronomically larger than the 10⁻²¹ joules that result from Morowitz's calculation for a binary decision. This is an attempt to show a noetic aspect in the laws of physics and to show that God is lurking in our equations. Morowitz is not doing anyone any favors by his efforts. Most creationists know better than to try to prove God's existence using the laws of physics, philosophers will take issue with his sloppy epistemology, and physicists will bemoan his perversion of science and mathematics to theological ends.
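For what it is worth, the order of magnitude in question is simply the thermodynamic minimum for erasing one bit (Landauer's bound, kT ln 2), which can be checked in a few lines; the reviewer's point stands, since this figure says nothing about what an actual brain expends on a decision. A minimal sketch (the function name is illustrative, not from the book):

```python
import math

# Boltzmann constant, J/K (CODATA exact value)
K_B = 1.380649e-23

def landauer_limit(temperature_kelvin: float) -> float:
    """Thermodynamic minimum energy (J) to erase one bit: kT ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At roughly room temperature the bound is on the order of 1e-21 J,
# the magnitude the review quotes for the "binary decision" calculation.
print(f"{landauer_limit(300):.2e} J")  # about 2.87e-21 J
```

The bound scales linearly with temperature, so quoting it as a fixed energy of "a decision" already smuggles in an assumed temperature.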
Morowitz characterizes emergence as the sum being more than the parts, a description he references several times. I had encountered and pondered this expression in the past, and it seems to mean that the behavior of the higher-level system could not be predicted by looking at the elements of the lower level. Thinking about the phrase again while reading Morowitz's book, it seems to me that it is more about human expectations than about emergence itself: it speaks to the limits of our knowledge, not to physical emergence. This raises a basic question. Emergence seems to be both a mental construct and a physical phenomenon, and Morowitz does not make a clear epistemological distinction between the two, which clouds the whole book. He says that emergence is the opposite of reduction, but reduction is an analytical methodology. Is emergence, then, simply analysis in the opposite direction? The epistemology here is so cloudy that you are never sure whether any particular use of "emergence" refers to a methodology, a noun naming a particular type of emergence, or a verb describing the coming into being of novelty.
Morowitz wants to predict emergences but admits that we do not have the required pruning rules. In fact, he asks his readers to help in the hunt for pruning rules. This dashed all my hopes built by the cover endorsements. He asks the right questions but has no answers. The fact is that all emergences are postdictive but not predictive for the reasons he states. There are so many possible combinations of matter and energies that all we can do is look at the things that actually do emerge and then try to explain them by reduction. In physics there have been many novel inventions that were based on understanding the properties of matter. The atomic bomb and lasers spring to mind. Are these emergences or just inventions? Under Morowitz’s epistemology there is no way to know how to classify these truly novel creations.
The book’s overview of the various levels of emergence is a good introduction to the concepts if you have never considered them. Morowitz is at his best describing his field of molecular biology. But I did not find that he brought anything new to the table. I had hoped for a defining piece on emergence, but instead I got a book on theology.
About the author
Joe Cuchiara works in R&D for a large telecommunications company located in southern California. He studied neurology, worked in the field of electroencephalography, and participated in brain research in a group at UCSD in La Jolla, California. He is a lifelong student of the history of science and the cognitive sciences. Mr. Cuchiara is a sculptor and painter and characterizes his philosophy as Evolutionary Secular Humanism.
What inspired you to become a chef?
Being a chef was something I fell into. I had aspirations of being a stockbroker and then tried my hand at computer programming, but could not see myself doing that for the rest of my working life. It was my mum who suggested that I should be a chef, as I loved to cook. I was also inspired by the late-90s cooking shows like Ready Steady Cook. But it was Jamie Oliver who really showed how cool it was to be a chef.
After working in restaurants and hotels for 15 years, you opened your own place – how would you describe the transition and what are the biggest challenges?
Going from running a department to now having to do everything is the biggest challenge I face. Both my wife and I had to learn accounting, marketing and HR very quickly. We made mistakes but it’s how you learn from them which is important.
What would you be doing if you weren't a chef?
It doesn’t bear thinking about. I turned down being a semi-pro footballer, but didn’t think that was my future.
At the end of a long day, what do you like to cook?
I am more than happy to share some red wine with my wife, and savour some really good cheese and homemade pickles and chutneys.
What food could you not live without and why?
Bread. Just thinking about it makes my mouth water. Freshly made bread, toasted with a healthy amount of butter melting into the fantastic golden brown crust. So good.
What is the vital ingredient for a successful kitchen?
The humble egg – it’s so versatile. Quite simply a magical ingredient I couldn’t be without.
How do you start developing a new recipe?
It all depends. The more playful ideas will jump out at me. An idea will get stuck in my head and I work it over until I can visualise it. If I’m a bit stuck, I’ll look at seasonal charts and let the ingredients tell me what they want to be married with. A taste can trigger the development of a dish. You taste something new and then start thinking what else will complement and lift it?
If you could cook for anyone (past or present) who would it be and what would you cook for them?
My grandmother. She never got to see me become the chef I am today. She played a huge part in me discovering food and cooking and I would love to show her what I can do today. I would most probably treat her to my bespoke seven-course tasting menu, starting with her favourite of braised Dorset snails, sweet garlic, black treacle cured bacon, watercress and parsley crunch.
Who is the most interesting person you have cooked for?
Every customer has their own story. But if I had to choose it would be The Queen or her mother.
What is the strangest request you have had from a diner?
A customer asked me to cook him a three-course menu as he had not been able to enjoy a good meal out in years due to his allergies.
Which chef do you most admire at the moment and why?
Peter Gilmore. I think his food screams elegance.
Do you have a favourite restaurant?
It depends on the occasion. If I wanted to spoil my wife for a birthday, I would take her to Hartnett Holder & Co at Lime Wood Hotel.
Penana is a publishing platform designed for aspiring writers who want to showcase their creative skills to the rest of the world. The application lets them share their ideas with real readers by writing compelling, appealing stories in any genre they want. This web- and app-based reading and writing platform offers features such as collaborative creation and storytelling, with the ultimate aim of providing a professional space to build an online story world. It serves five main purposes: promoting web fiction and storytelling, hosting small-group private draft discussions, helping writers make friends and overcome writer's block, showcasing creativity, and finding the right audience to build lasting communities. Its core features include reading and exploring stories, writing and editing stories on the fly, following writers and getting story updates, engaging in writing societies, suggesting revisions, showcasing writing skills, and communicating with other users.
An epidemiological study of the relative importance of damp housing in relation to adult health.
Evans J., Hyndman S., Stewart-Brown S., Smith D., Petersen S.
STUDY OBJECTIVE: To examine the association between damp housing and adult health, taking into account a wide range of other factors that may influence health and could confound this relation. PARTICIPANTS AND SETTING: A general population sample of adults, aged 18-64, from Oxfordshire, Buckinghamshire, Berkshire and Northamptonshire. DESIGN: Secondary analysis of responses to a postal questionnaire survey carried out in 1997 with a 64% response rate (8889 of 13,800). Housing dampness was assessed by self report. Health was measured by responses to a series of questions, including the presence of asthma and longstanding illness generally, use of health services and perceived health status (the SF-36). The effect of damp was examined using the chi-squared test and one-way analysis of variance. Significant associations with the various health outcomes were further explored, taking into account 35 other housing, demographic, psychosocial and lifestyle variables, using stepwise logistic and linear regression. MAIN RESULTS: Bivariate analyses indicated that damp was associated with the majority of health outcomes. Regression modelling, however, found that being unable to keep the home warm enough in winter was a more important explanatory variable. Worry about pressure at work and, to a lesser extent, about money showed an independent association with perceived health status equal to or greater than that of the housing environment, including cold housing, and that of health-related lifestyles. CONCLUSIONS: This study shows that being unable to keep the home warm enough in winter is more strongly associated with health outcomes than is damp housing. However, as cold and damp housing are closely related, it is likely that their combined effects are shown in these results. The importance of worry as an independent predictor of health status needs testing in other studies. Its prevalence and relative importance suggest that it may be a significant determinant of public health.
The number of resources available to catalyze and support community health improvement efforts is growing substantially. Users report that platforms have positively influenced their health transformation efforts. Yet, there are opportunities for growth and maturation among online platforms in the areas of collaboration, technological advancement, and knowledge sharing. A future focus on these areas may drive enhanced use of these tools and ultimately create greater impact on community health improvement.
These findings are part of the Georgia Health Policy Center’s (GHPC’s) recently released brief, Enhancing Platforms that Support Local Health Improvement. The brief is a synthesis of findings from two projects1 undertaken by GHPC to help inform Robert Wood Johnson Foundation’s strategic assessment of County Health Rankings & Roadmaps.
These results may be useful to those looking to support community health efforts, particularly for platform developers and funders.
The Scope of Online Platforms
- The majority of assessed online platforms offer publicly available, web-based resources.
- Less than one-third of platforms offer a structured framework to guide the use of its available tools and resources.
- Even fewer platforms provide comprehensive technical assistance.
Current Challenges for Platform Developers
- Platform developers are aware of the need to update platforms as user preferences change and technology advances. This is particularly true for the user experience: how users navigate these platforms and find the content they need.
- Universally, developers recognize the difficulty in evaluating platform impact.
- As many existing resources are underutilized, developers struggle to get the word out about which resources are currently available.
Implications for Future Consideration
- Online platforms do impact community work.
- Data integration is important.
- Opportunity exists to connect across platforms.
- Usability and new features will drive adoption.
Click here to read the full brief Enhancing Platforms that Support Local Health Improvement.
1 These projects included a scan of the national landscape of platforms that provide support, guidance, and tools to drive action to improve community health, with County Health Rankings & Roadmaps serving as the anchor platform for comparison. The scan explored challenges and areas of opportunity for future enhancement and leverage. Additionally, GHPC interviewed users of County Health Rankings & Roadmaps to assess the platform's influence on communities and organizational users.
When Consensus Doesn't Mean Consensus
A few days ago a letter [pdf] written by scientists at Brigham Young University -- a traditionally conservative school -- plopped onto the desks of Utah's governor and state lawmakers. The letter is being called a "stinging rebuke" and criticizes how, in a recent session, legislators gave fringe, skeptical climate change views the same weight as the broader scientific consensus that our climate is changing and we are to blame.
During their October meeting, the state's Public Utilities and Technology Interim Committee listened to two climate change scientists -- as HCN noted here. The first, professor Jim Steenburgh, chair of the University of Utah Atmospheric Science Department, presented the more scientifically backed view that climate change is man-made. The second scientist -- professor Roy Spencer, climatologist from the University of Alabama, Huntsville -- presented skeptical views, arguing that climate change is a natural phenomenon, caused by natural cycles, not humans. (Listen to the meeting)
Spencer was specially invited to the meeting by the panel's co-chair, Republican Mike Noel, and his views were reportedly well received. Steenburgh, on the other hand, was attacked after his presentation -- many lawmakers shunned his views, insisting that global warming is a natural phenomenon. One representative even described the movement to address global warming as "the new religion to replace Communism."
The BYU scientists' point of contention is that legislators gave the same, if not more, weight to Spencer's fringe views as they did to Steenburgh's widely accepted views, noting that "well over 90%" of scientists would agree with Steenburgh's position. They wrote:
We believe that if a legislative committee—composed entirely of non-specialists in the relevant fields -- entertains testimony from someone representing the scientific minority, the responsible course of action would be to give considerable weight to an overwhelming scientific consensus, and treat fringe positions with respectful skepticism.
The scientists seem genuinely concerned about how Utah’s legislators are going to handle climate change in the state, only wanting legislature to be backed by sound science, writing:
As part of an arid continental interior, Utah may sustain serious damage due to a warming climate, and Utah’s climate scientists are a valuable resource to help public officials decide how to respond to the threat. Collectively, Utah scientists have spent many decades trying to unravel the relevant issues in the context of this region. It is irresponsible to alienate them by setting aside their testimony in favor of easily debunked fringe science.
The group of BYU scientists added that they “represent a number of political persuasions (Republicans, Democrats, and Independents,) and disagree with one another about how society ought to respond to the threats posed by a warming climate,” and also that their views “do not necessarily reflect those of (their) sponsoring institution, Brigham Young University.”
To the scientists at BYU who wrote the letter -- well done. Way to step up and drop such a valid point on Utah lawmakers. Of course legislators should hear every point of view -- this is America, people -- but that doesn't mean they should give equal weight to all views during the lawmaking process. If 19 out of 20 PhDs say one thing, and one PhD says another, maybe think twice about deciding the fate of your state's environment on that one lone wolf's opinion.
Now the real question is whether our boys up on Salt Lake’s Capitol hill will consider the arguments made in the letter. So far, I’m not counting on it. Provo Republican Chris Herrod told The Salt Lake Tribune, “The more (BYU scientists) say there is consensus, the more they lose credibility.”
Hmmmm, last time I checked, 90% of scientists in agreement on a single topic is considered a consensus. But who knows; in the Utah House of Representatives, numbers don't always add up, especially when they are attached to environmental figures.
We explored environmental variables related to pine-oak forest community structure at one locality in Jalisco, Mexico. We used an NMS ordination in conjunction with the Sørensen distance to identify the major small-scale community gradients along 25 contiguous quadrats (20 m x 20 m, 400 m² each) of pine-oak forest in Nueva Colonia, Mezquitic, Jalisco, Mexico. The main matrix (25 stands x 7 tree species) included basal area data, and the environmental matrix consisted of 19 quantitative environmental variables. Community structure, through sociological ordination, showed a direct correlation with the vertical altitudinal gradient and apparent soil density, as well as with slope inclination across the horizontal gradient; it also showed an inverse correlation with cation exchange capacity, Ca + Mg, Mg, and altitude across the horizontal gradient. Direct gradient analyses showed an increase of cation exchange capacity, Ca + Mg, Mg and K with decreasing altitude along the vertical gradient (from north to south). Total nitrogen increased with decreasing altitude across the horizontal gradient (from west to east). In addition, we identified three main community groups using UPGMA cluster analysis; however, the groups were weakly related to the ordination results and to the physical space. At the 1-hectare scale, species composition and basal area of pine-oak forest in Nueva Colonia can be explained by niche partitioning of altitude and soil gradients. The relevance of nitrogen for this community could be the result of habitat specialization or disturbance history.
Many studies of large-scale plant community patterns have supported the individualistic hypothesis of plant community organization (Gleason, 1926) in both temperate areas (Whittaker, 1956, 1960; Curtis and McIntosh, 1951; Peet, 1980) and subtropical to tropical areas (Vázquez-García, 1995, 1998; Cuevas, 2002), while some other authors document vegetational discontinuities (Beals, 1969; Kitayama, 1992). However, studies of small-scale community patterns to assess whether or not the individualistic hypothesis is supported have largely been neglected.
Understanding patterns of species distribution and abundance across landscapes has been a central goal in ecology. Ecological studies have focused on determining whether species specialize to partition environments or whether distributions are the result of stochastic processes (Brokaw and Busing, 2000; Hubbell, 2001). A number of mechanisms have been proposed to explain how diversity and abundance are regulated in tropical and temperate systems (Wills et al., 1997; Wright, 2002). Niche differentiation among species, arising from resource partitioning across resource gradients, has been hypothesized to be a major mechanism maintaining tree diversity in both temperate and tropical forests. Many studies (Denslow, 1980; Kabakof and Chazdon, 1996; Chazdon et al., 1999; Capers et al., 2005; iarte and Chazdon, 2005; Gouvenain et al., 2007) have focused only on light gradients as the driving force or mechanism in resource partitioning among tree species; however, little or no attention has been given to other resources such as soil variables.
Climatic variables might determine plant community distribution at a landscape level. Latitude, longitude, precipitation, temperature, and moisture variables have an effect on plant composition and distribution (Rzedowski, 1978; Gentry, 1982). However, at small-scale gradients (< 200 m in altitude), topography is considered a more relevant variable affecting tree community spatial variation than altitude or any other climatic variable (Basnet, 1992; Vázquez-García, 1995). Topography is also related to patterns in moisture regimes and soil chemical properties (Oliveira-Filho et al., 1999, 1994; Ratter, 1980; Bourgeron, 1983; Johnston, 1992). For instance, soil leaching patterns along an elevational gradient are affected by topography (Smith, 1990).
Habitat specialization for soil resources in a tropical dry forest is likely to be more important at early stages in tree life histories than at later stages (Vargas-Rodríguez et al., 2005). This fact is very relevant for understanding tree establishment and regeneration and could help guide land managers in implementing restoration policies. However, the environmental variables explaining the main small-scale gradients in community composition of pine-oak forests in northern Jalisco have not yet been explored. Tree community variation needs to be evaluated at a much larger scale; however, community variation at early stages in tree life histories (e.g. saplings) may be studied at smaller scales. Here we use 25 quadrats (0.04 ha each) within a 1-ha plot to focus on tree community patterns at early stages in tree life histories (woody species > 2.5 cm in diam.) within a small area, with the advantage of low habitat and climatic heterogeneity.
In the present study, we aim to generate hypotheses, rather than to test them, about what environmental gradients are evident at a one-hectare plot and which environmental variables possibly explain small-scale community gradients at early stages of tree life histories of a pine-oak forest in Nueva Colonia, Mezquitic, Jalisco, Mexico.
METHODS
Study area
The study area was located 6 km northeast of Nueva Colonia, southeast of Mezquitic, Mezquitic municipality, Jalisco, Mexico. We located a one-hectare plot within the tributary watershed Los Coyotes, along a 1800-2400 m altitudinal gradient, at the southern portion of the Sierra Madre Occidental (Fig. 1). The climate at Nueva Colonia was temperate with cold and moist winters (Vázquez-García et al., 2004). Mean annual rainfall was 600 mm, with the wet season from July to August. Relief ranged from 5-20% slope, of southern aspect. Volcanic acidic extrusive rocks and igneous Toba rocks were prevalent in the area. Silt soils and ferric Luvisol soils were common in the area (INEGI 1970a, 1970b).
We selected the least disturbed pine-oak forest (forest stands with larger dbh) in Nueva Colonia through field reconnaissance. The Huichol community has designated this forest as sacred and no logging is allowed in the area. The total area sampled was 1 hectare (100 m x 100 m). We divided the hectare into 25 quadrats of 400 m² each. The altitudinal variation within the hectare ranged from 2353 to 2367 m (a 14 m difference in elevation, or 14% slope). To prevent confusion, we assigned a number to each quadrat, following a west-east and north-south order (first row A1-A5, second row B1-B5, and so on) (Fig. 1). At each quadrat, we recorded plant species > 2.5 cm DBH and collected a voucher of each species. Sampling was conducted during the summer of 2002 (late June-early July). Species identifications were determined by the authors. Nomenclature generally follows Vázquez-García et al. (2004). Herbarium specimens were deposited at the University of Guadalajara's herbarium (IBUG) (Holmgren et al., 1990).
We collected a total of ten composite soil samples. Each of them represented five quadrats, either from a row within the elevational gradient or from a column along the elevational gradient. All of the samples were selected from the one-hectare grid of twenty-five 20 x 20 m squares. The composite soil samples were obtained to increase soil representation in the area. Soil density, texture (sand, clay, silt), soil moisture, organic matter, cation exchange capacity (CEC), Mg, Ca, Ca + Mg, Na, K, pH, N, and P were analyzed (Table 1). Nutrients were measured following the Mehlich III extraction method, P was extracted with the Olsen method, pH was measured with a potentiometer, soil moisture with the gravimetric method, organic matter with the Walkley-Black procedure, and texture with the Bouyoucos method (AOAC International, 1990; APHA-AWWA-WPCF, 1992; Agricultural Experiment Stations, 1998). We recorded site conditions within the plot and determined position and altitude using a global positioning device (GPS 12, Garmin Corporation), and aspect and slope using a compass and a clinometer, respectively. Rockiness was also noted visually as a percentage of quadrat cover and averaged for each row or column of quadrats.
We measured the extent of natural and anthropogenic disturbances at each site. We assessed the degree of disturbance by recording the number of standing dead trees, fallen trees, woody stems, and tree stumps. Cattle grazing and fire evidence were also recorded (OlveraVargas et al., 1996). Average disturbance values were used to assign one of three disturbance classes (high, intermediate or low) for each column or row of quadrats.
We obtained species basal area for each of the 25 quadrats and summarized them in one matrix, containing seven tree species and 25 quadrats (Table 2). The environmental variables were summarized in a matrix with 19 quantitative variables and 25 quadrats.
Environmental gradients
Using direct gradient analysis, we identified major environmental gradients and how they were related to altitude along the vertical gradient (from north to south) and to the horizontal gradient (from west to east), averaging data on variables for each row or column of quadrats.
Sociological ordination
Matrices were analyzed using global nonmetric multidimensional scaling (NMS) ordination, available in PC-ORD v.4, a software package that performs multivariate analyses of ecological data (McCune and Mefford, 1999). NMS was performed following the method described by Kruskal (1964) and Mather (1976). NMS is an effective ordination method for ecological community data because it does not assume linear relationships among variables (Minchin, 1987). The method searches for the best position of n entities on k axes with the objective of minimizing the stress of the k-dimensional configuration (Minchin, 1987; McCune and Grace, 2002).
A general procedure for NMS was used, as suggested by McCune & Grace (2002): a) data were adjusted; Quercus castanea occurred in less than 5% of sample units and was removed as a rare species; sample unit E2 was identified as a moderate outlier and thus removed; given the range of basal area values for each species, there was no need for standardization; raw data were preferred over relativized data for an appropriate representation of shifts in species structure (e.g. increases in basal area); b) a preliminary configuration with 100 runs was employed to assess how likely the observed pattern in the ordination would be if only a random process were operating on the community configuration; this step suggested a three-dimensional model; c) the dimensionality of the data set was determined from a scree plot, which shows stress as a function of the dimensionality of the gradient model (Fig. 2); d) three dimensions were specified; e) a plot of stress vs. iteration assisted in checking for stability; f) the final configuration used the Sørensen distance measure, 3 axes, the number five as the seed for the random number generator, no step-down in dimensionality, one real run, and no Monte Carlo test (no randomized runs).
The Sørensen distance measure was used, a robust measure of ecological distance (Beals, 1984; Faith et al., 1987). The relationship between tree species and environmental variables was evaluated using Pearson correlations between the calculated ordination axes and the environmental variables. Only variables from the secondary matrix with an r² (with scores on either axis) larger than the cutoff value were plotted. The default cutoff is 0.2, but it was set to 0.3 to overlay only the few most relevant vectors.
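The authors ran NMS in PC-ORD; an equivalent analysis can be sketched in Python with scipy and scikit-learn. The toy matrix below stands in for the real 25-stand x 7-species basal-area matrix, and Bray-Curtis dissimilarity is the quantitative counterpart of the Sørensen distance used in the study (all names and values here are illustrative, not the paper's data):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical stand-by-species basal-area matrix (5 stands x 3 species);
# the study's actual main matrix was 25 stands x 7 species.
basal_area = np.array([
    [12.0, 0.0, 3.5],
    [10.5, 1.0, 4.0],
    [2.0, 8.0, 0.5],
    [1.5, 9.5, 0.0],
    [6.0, 5.0, 2.0],
])

# Bray-Curtis dissimilarity: the quantitative form of the Sorensen distance.
dist = squareform(pdist(basal_area, metric="braycurtis"))

# Nonmetric MDS (NMS): fits the rank order of the dissimilarities;
# three axes and a fixed random seed, as in the study's final configuration.
nms = MDS(n_components=3, metric=False, dissimilarity="precomputed",
          n_init=20, max_iter=300, random_state=5)
scores = nms.fit_transform(dist)

print(scores.shape)  # (5, 3): one coordinate triple per stand
```

Environmental vectors can then be overlaid by correlating each environmental variable with the stand scores on each axis, mirroring the r² cutoff step described above.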
Classification
We classified the studied forest plots using the grouping algorithm of Unweighted Pair Group Average (UPGMA). We used UPGMA in connection with the Sørensen distance S:
S = 1 - 2w/(a + b)
where a and b are the numbers of species in each of two samples and w is the number of species the samples share. UPGMA is a hierarchical polythetic agglomerative technique that groups samples into classes and classes into a hierarchy, using information simultaneously from all samples. Each sample is assigned to a cluster in a hierarchy of increasingly inclusive clusters until all samples are part of one cluster. As opposed to related classification techniques, in UPGMA the dissimilarity between clusters is equal to the average dissimilarity rather than the maximum dissimilarity (Orlóci, 1978). UPGMA maximizes the correlation between input and output compositional dissimilarities implied by the obtained dendrogram (Gauch, 1982).
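The combination of Sørensen distance and UPGMA described above can be sketched in a few lines of Python: for presence/absence data, scipy's "dice" metric computes exactly S = 1 - 2w/(a + b), and average linkage is UPGMA. The community matrix below is a made-up illustration, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Toy presence/absence matrix (4 samples x 5 species). scipy's "dice"
# metric equals the Sorensen distance S = 1 - 2w/(a + b).
community = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=bool)

sorensen = pdist(community, metric="dice")

# UPGMA is average linkage: the inter-cluster distance is the mean of
# all pairwise distances between members of the two clusters.
tree = linkage(sorensen, method="average")

# Cut the dendrogram into two groups (the study cut theirs into three).
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)
```

Here the first two samples, which share most species, fall into one group and the last two into the other, exactly the behavior the paper relies on when reading groups off the dendrogram.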
RESULTS
A total of 326 individual trees were recorded in the studied area (1 ha, 25 plots). We found seven tree species: Pinus devoniana, P. lumholtzii, P. oocarpa, Quercus obtusata, Q. eduardii, Q. castanea and Arbutus xalapensis. The tree diversity index value was H' = 1.483. Oak species were the dominant elements in the forest. Q. eduardii had the highest basal area (Table 2). Q. obtusata, Q. eduardii, and P. devoniana had the highest importance values (32%, 24%, and 22%, respectively). Total basal area in the plot was 24 m² ha⁻¹ (Table 2). Anthropogenic disturbance in the study area is limited by restricting visitors to the sacred forested area. We were told that the last selective wood extraction occurred about 30 years ago. Four percent of trees had fire scars. Evidence of past forest fires is attributable to lightning (local inhabitants, pers. comm.). There was no evidence of cattle grazing or insect predation, at least on tree saplings or adults.
Environmental ordination
Regression analysis showed that CEC, Ca + Mg, Mg and K increased with decreasing elevation, which represents an increase of nutrients from north to south (Fig. 3). Percent total nitrogen increased from west to east within elevation (Fig. 4).
Sociological ordination
The scree plot (Fig. 2) and the Monte Carlo test recommended a three-dimensional solution. The proportion of randomized runs with stress less than or equal to the observed stress was 0.0495. The final stress for a three-dimensional solution of the NMS ordination was 9.718. Axes accounted for a reduction in stress of 0.0001 and an instability of 0.06427 with 100 iterations. Most of the stress was reduced after 20 iterations. Axes 1 and 2 were not explained by any of the measured environmental variables. However, axis 3 was directly related to altitude along the vertical gradient, apparent density, and the slope of the horizontal gradient (Fig. 5a), and inversely related to CEC, Mg, Ca + Mg, and altitude along the horizontal gradient (Fig. 5a). In relation to species patterns, axis 1 was directly related to the basal area of Pinus oocarpa and inversely related to the basal area of P. devoniana (Fig. 5b); axis 2 was directly related to the basal area of Quercus obtusata (Fig. 5c); axis 3 was inversely related to the basal area of Q. eduardii (Fig. 5d). The environmental variables explaining major community gradients in the sociological ordination were largely consistent with those identified through direct gradient analyses.
Classification
UPGMA recognized three groups retaining 25% of the remaining information. A heterogeneous group is composed of quadrats A1, B1, B2, C3 and C4, which are spatially correlated. A second heterogeneous group contains B3, C1, D1 and E2, which are spatially separated. The third group, containing 16 quadrats, was the largest; it was less heterogeneous and spatially separated (Fig. 6). The classification results did not reproduce the physical location of the quadrats and were weakly related to the ordination results.
DISCUSSION
Community structure of the pine-oak forest in Nueva Colonia may be explained by habitat specialization along altitude and soil gradients. Consistently, establishment of pine species has been related to soil and topographic conditions (Park, 2001). In addition, species interactions allow differences in species dominance in the community; for instance, Pinus devoniana and P. oocarpa dominated opposite ends of axis 1, Quercus obtusata dominated axis 2, and Q. eduardii dominated axis 3. Differences in dominance along these axes were explained by CEC and by soil nutrients (Ca + Mg, Mg). Community patterns at the Sierra Huichola support the individualistic hypothesis of plant community organization, with no evidence of vegetational discontinuities.
The fact that five of the seven species have an aggregated distribution pattern indicates habitat specialization or spatial autocorrelation (Harms et al., 2001). A correlation analysis of ecological equivalence among quadrats could help explain species habitat preferences. The remaining species have a random distribution, supporting the neutral theory, in which species are habitat independent.
There is a soil–species relationship in the pine-oak forest at Nueva Colonia. Soil nutrients and soil moisture appear correlated with temperate tree species distributions (Cowell, 1993). Other factors such as topography and moisture conditions might be important in structuring forest composition; however, nutrients and disturbance are primary forces controlling vegetation patterns in temperate forests (Peet and Christensen, 1980; Cowell, 1993; Park, 2001; Cavender-Bares et al., 2004). Variation in exchangeable magnesium, calcium, and potassium allows differences in growth and mortality rates of oak and pine species, resulting in different patterns of species distributions (Breemen et al., 1997; Arii and Lechowicz, 2002; Bigelow and Canham, 2002; Dijkstra and Smits, 2002; Horsley et al., 2002). Soil nutrients can change at small scales across topographic gradients, producing gradients in soil fertility and affecting species dominance (Bailey et al., 2004). At a one-hectare scale, the maintenance and basal area of oak and pine species in Nueva Colonia can be explained by niche partitioning along soil gradients. This competitive interaction for soil resources is a key mechanism structuring the community (Muscolo et al., 2007). Consistently, 17 oak species in Florida show clear patterns of specialization along soil moisture and nutrient gradients and a pattern of habitat differentiation (Cavender-Bares et al., 2004).
Habitat heterogeneity can be observed at a small scale, as shown in the classification. Spatially contiguous sites with low climatic and topographic heterogeneity show small-scale differences that affect soil nutrient distribution (Muscolo et al., 2007). The increase in bases at lower elevations might be related to a leaching process; constant soil leaching has a pronounced effect on Ca and Mg availability (Schier and McQueattie, 2000). The soil leaching occurring in Nueva Colonia resembles the cascade effect (low nutrients at higher elevations, increasing at lower ones) observed in the mountains of the Sierra de Manantlán (Vázquez and Givnish, 1998; Ortiz-Arrona, 1999). The relevance of nitrogen for this community might result from the stage of the disturbance regime or from habitat specialization. N availability is strongly correlated with vegetation patterns, linking belowground processes and aboveground biodiversity (Hutchinson et al., 1999). Therefore, species are specialized and distributed in microsites differing in the availability of nitrogen and other nutrients.
ACKNOWLEDGMENTS
The Centro de Ingeniería Ambiental and the Unidad de Apoyo a Comunidades Indígenas, both from the University of Guadalajara; SIMORELOS-CONACYT-1996, through the Ordenamiento Ecológico Territorial de Jalisco; and the Instituto Nacional Indigenista (SEDESOL) provided financial support for this research. Miguel de J. Cházaro B., Ramón Cuevas G., Mollie Harker, Jacqueline Reynoso D., Roberto González T., and María Eugenia Barba R. helped with species identification. Rafael López de la Torre, wirrarika, kindly helped to locate forest sites and gave us logistic support. Luzmila Herrera Pérez, Justo Murguía Castillo, María Eugenia Barba Robert, and Magdalena Alcázar García collected voucher plant specimens and assisted during field work. We thank Susana and Diana Parker for proofreading the English. Special thanks to the anonymous reviewers of Polibotánica and to Ramón Cuevas, from the Manantlán Institute, Universidad de Guadalajara, for their valuable suggestions.
The present invention relates to the field of geophysical exploration and more specifically to the detection of sub-surface fractures with surface seismic data. The detection of fractures is of paramount importance in so-called "tight" petroleum reservoirs in which the primary determinant of well producibility is the presence of a connected network of fractures to convey fluid into the borehole. As is well known in the prior art, seismic shear waves are among the most sensitive tools available for detecting fractures in hydrocarbon reservoirs.
Previous shear-wave seismic acquisition and processing techniques have often relied on measurements of shear-wave splitting ("birefringence") to detect sub-surface fractures. Shear waves are seismic vibrations polarized perpendicularly to their propagation direction (exactly so in an isotropic material, and only approximately so in an anisotropic one). Aligned fractures induce horizontally-transverse anisotropy in the subsurface, such that a vertically-incident shear wave splits into a fast mode polarized along the fracture direction and a slow mode polarized perpendicularly to the fractures. By measuring the shear-wave splitting (which is proportional to the velocity anisotropy), the location and density (the number of fractures per unit volume) of subsurface fracturing may be determined, because higher fracture densities cause greater shear-wave splitting. The use of four-component seismic acquisition (two orthogonal sources and two orthogonal receivers active during acquisition by both sources) is described in U.S. Pat. No. 4,803,666 to Alford, which is hereby incorporated by reference herein. Alford's technique entails the acquisition of a four-component data matrix to determine the symmetry planes of the medium. Alford describes the use of rotation algorithms to transform the seismic data into a symmetry-plane coordinate system, which provides useful information about the orientation of the symmetry planes of the medium while also improving data quality. Knowledge of the orientation of the medium's symmetry planes, provided, e.g., by Alford rotation, is useful because the direction of open fracturing and that of the symmetry planes usually coincide.
Measuring the symmetry plane orientation is therefore usually tantamount to measuring the fracture orientation, a parameter frequently difficult to determine from geological data and useful in reservoir management decisions.
A second consequence of horizontally transverse anisotropy caused by aligned vertical fracturing is a variation in seismic reflection amplitude at boundaries (such as a reservoir) as a function of profile azimuth caused by changes in the intensity of fracturing at the reflecting interface. This physical phenomenon enables fracture intensity to be ascertained with a relatively high vertical resolution by comparing the reflection amplitudes of the fast and slow shear-wave seismic sections. Both of these methods for fracture detection using shear-wave seismic data are described in detail in U.S. Pat. No. 4,817,061 to Alford et al., which is hereby incorporated by reference herein. Alford et al. describes the use of at least one source polarization along each source-receiver azimuth and receivers having matched polarizations.
However, these techniques may produce unsatisfactory results if the fracturing in the reservoir is relatively weak or if more than a single direction of open fracturing is present. Consequently, a method of seismic exploration which is able to characterize fracture intensity in the subsurface in even subtly fractured reservoirs would be advantageous.
Andreas Ruger, in his Ph.D. thesis, "Reflection Coefficients and Azimuthal AVO Analysis in Anisotropic Media", extended the theoretical treatment of split shear waves in the symmetry planes of a horizontally transversely isotropic ("HTI") medium or an orthorhombically anisotropic medium to the case of non-vertical incidence, deriving equations for the plane-wave reflection coefficients as a function of the waves' incident phase angle. This thesis is hereby incorporated by reference herein. AVO stands for amplitude variation with offset. At non-vertical incidence, i.e., by using offset seismic sources and receivers, it is possible to measure two additional amplitude attributes (reflection amplitude refers to the strength of the reflected signal observed at the receivers) in a seismic waveform, its reflection amplitude intercept and slope. The intercept is the projected zero-angle amplitude of the signal and provides information about the change in acoustic impedance and fracture density across the reflecting interface for shear waves. The reflection slope is the slope of a line fitted through the observed amplitudes as a function of the incidence angle of the waves and gives important information about the change in shear wave velocity and fracture intensity at the reflecting interface. This analytical insight into shear-wave reflection at non-normal incidence offers the possibility of obtaining more information about the elastic parameters (which are related to the fracture density) of the subsurface than previously possible if the numerous practical problems associated with the acquisition, processing, and interpretation of non-vertically-incident shear waves can be overcome.
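The intercept-and-slope analysis described above can be illustrated with a small least-squares fit. The sin²θ parameterization below is one common small-angle AVO approximation, and the amplitude values are synthetic (chosen so the true intercept and slope are known), not data from the patent:

```python
import numpy as np

# Synthetic reflection amplitudes following the small-angle AVO form
# R(theta) ~ A + B * sin^2(theta), where A is the intercept (projected
# zero-angle amplitude) and B is the slope. Values are illustrative only.
A_true, B_true = 0.10, -0.25
theta = np.radians([0.0, 10.0, 20.0, 30.0, 40.0])   # incidence angles
amplitudes = A_true + B_true * np.sin(theta) ** 2    # "observed" amplitudes

# Least-squares fit of a line in sin^2(theta) recovers intercept and slope.
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
(A_fit, B_fit), *_ = np.linalg.lstsq(G, amplitudes, rcond=None)
print(A_fit, B_fit)
```

With noise-free synthetic data the fit recovers A and B exactly; with field data, the scatter of amplitudes about the fitted line indicates the reliability of the estimated attributes.
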
Jean-Michel Basquiat 2021 Wall Calendar (Calendar)
Email or call for price
Out of Print
Description
This 12-month calendar features 13 of Jean-Michel Basquiat’s iconic, ground-breaking works of art.
Jean-Michel Basquiat is renowned for his cutting-edge style, which combines street art with a distinctive commentary on pop culture, history, and politics. Basquiat embodies a young, international, urban culture and is still making headlines 30 years after his death. This 2021 calendar celebrates his work, easily recognized by edgy and raw lines paired with a strong sense of color and balance.
Features include:
- Generous grids with space to add appointments and reminders
- Opens to 12 inches x 24 inches
- Planner page for September–December 2020
- Widely celebrated and nationally recognized holidays and observances
- Moon phases, based on Eastern Standard Time
About the Author
Artist Jean-Michel Basquiat was born in Brooklyn, New York, to Haitian and Puerto Rican parents. His first artistic expressions were created and enjoyed as urban street art. His art is exhibited in over a dozen museums around the world, including the MoMA, the Met, and the Whitney.
# H.I.V (album)
H.I.V (also stylized as H.I.V (Humanity is Vanishing)) is the debut album of the Cameroonian rapper and producer Jovi, released August 31, 2012. Entirely self-produced under his producer alias "Le Monstre", Jovi composed, recorded, and mixed the album in Yaoundé, Cameroon. The album blends instrumentation and rhythms from traditional Cameroonian genres with Western hip hop beats and style, along with influences from pop, rock, electronic, and industrial music, and includes samples from the African musicians Tabu Ley Rochereau and Eko Roosevelt. H.I.V received critical acclaim for its punchlines, rhyming, and wordplay in Pidgin English (widely spoken in Cameroon, though not considered a formal language), mixed with English and French. The album was highly anticipated in Cameroon following the release of Jovi's debut video "Don 4 Kwat" on October 14, 2011, which is credited with re-energizing a Cameroonian hip hop scene that had largely been dormant and dominated by the Bikutsi and Makossa genres. Originally released for sale on iTunes, the album was removed after one year and became available for free download on Bandcamp on September 12, 2013, described on that site as a "special edition of Jovi's debut album."
## Critical reception
The album was well received by Cameroonians and also received some recognition in other African countries. According to African culture and entertainment magazine, Je Wanda magazine, "H.I.V is an array of variety, from the Pidgin English, the instrumentation and the audacious themes that only gives the album the much needed global and local appeal that was previously missing in Cameroon hip hop." The Cameroonian literary magazine Bakwa stated that "H.I.V is a colorful addition, not only to a budding local hip-hop scene but to the contemporary Cameroonian music scene as a whole; it is the long awaited arrival of a self-assured emcee very conscious of his abilities, the vacuum in the genre, his audience’s expectations, and the right dose of hustle to assert his place amongst the likes of Les Nubians and X-Maleya as a flag-bearer on stages across the globe." Pan-African magazine Okayafrica declared that H.I.V. "delivers on that notion in that it’s loaded with some of the cleanest and most original rap production we’ve heard out of Cameroon lately. Throughout the album infectious melodies are crafted out of 808s, local instruments, and the “signature bell sounds” of Le Monstre..."
Composites manufactured for applications in the automotive industry were non-destructively tested for damage using the following techniques: (1) low-frequency tapping; (2) high-frequency ultrasound (C-scan); (3) visual imaging; and (4) low- and high-temperature pulse video thermography. Various levels of impact energy were applied to the following types of composite: (I) RIM (reaction injection moulded); (II) woven glass; and (III) GMT (glass mat thermoplastic). Some interesting results were obtained which could be explained through analytical and numerical modelling. These results were analyzed through the development of the following algorithms: (a) a novel approach to damage detection using wavelength variation and sequence-grouping software; (b) correlation of the various NDT techniques through one mathematical equation and software; (c) the introduction of a uniformity-factor concept and software to account for variations in sample quality in relation to experimental results; and (d) the development of a smart classification system together with standard neural network algorithms for prediction and classification. The objectives of this research were all achieved.
Analysis of changes in WHO pituitary tumor classification in 2017
The incidence of tumors of the pituitary gland and sellar region accounts for about 15% of all intracranial tumors, and pituitary adenomas are the most common benign intracranial tumors.
For a long time, classification criteria for pituitary adenomas have been based on different features, including tumor cell morphology, hormone secretion, and ultrastructure. The WHO classification of endocrine tumors published in 2017 emphasizes more precise classification based on the differentiation lineage of tumor cells. This article focuses on the new changes in the classification and their significance.
1. The background of the new classification
The latest, 4th edition of the WHO classification of endocrine tumors was issued in 2017. In 2004, the WHO had published the 3rd edition, which classified pituitary adenomas according to the types of hormones secreted by the tumor cells, immunohistochemical staining, and the ultrastructural characteristics of the cells. Most pituitary adenomas are defined as benign tumors, but some invade the dura, periosteum, and even bone and important surrounding structures, including the sella, cavernous sinus, clivus, other intracranial structures, and paranasal sinuses. It is therefore necessary to establish a classification based on the clinical behavior of individual tumors, focused on predicting tumors with an aggressive tendency and identifying biological characteristics such as invasiveness and early recurrence.
The 4th edition of the WHO classification establishes a sounder and more reliable classification of pituitary tumors. In addition to identifying the types of hormones secreted by tumor cells, the new classification emphasizes that pituitary transcription factors play a role that cannot be ignored in determining the differentiation of tumor cell lineages, in regulating the secretion of the different pituitary hormones, and in the development and progression of pituitary adenomas.
2. The application of immunohistochemical detection and the redefinition of “null cell adenoma”
Cell development is an organized and complex process coordinated by specific transcription factors. Pituitary transcription factors play a similar role in determining the cell differentiation and hormone production of pituitary adenomas and can therefore be used as diagnostic markers. The 4th edition proposes immunohistochemical detection of pituitary transcription factors for the adenohypophyseal cell lineages, which provides an accurate basis for reclassifying pituitary adenomas according to cell lineage. Pituitary adenomas are tumors originating from the endocrine cells of the anterior pituitary and can be divided into three cell lineages: the acidophil (PIT-1) lineage, the gonadotroph lineage, and the corticotroph (ACTH) lineage.
According to the transcription factors expressed and their combinations, each lineage ultimately manifests as growth hormone cell adenoma, prolactin cell adenoma, thyroid-stimulating hormone cell adenoma, corticotropin cell adenoma, gonadotropin cell adenoma, null cell adenoma, bihormonal cell adenoma, or plurihormonal cell adenoma.
During the development and differentiation of pituitary adenoma cells, pituitary transcription factors determine the differentiation direction of adenohypophyseal stem cells: ① pituitary-specific POU-class homeodomain transcription factor 1 (PIT-1) participates in and induces differentiation of the acidophil lineage into growth hormone cells, prolactin cells, mammosomatotroph cells, and thyroid-stimulating hormone cells; ② steroidogenic factor 1 (SF-1) mainly regulates the differentiation of gonadotropin cells; ③ the T-box family member TBX19 (T-PIT) is mainly involved in regulating the differentiation and maturation of ACTH cells. Immunohistochemical staining of pituitary transcription factors can therefore accurately identify and classify the subtypes of pituitary adenomas, without the need for routine ultrastructural analysis by electron microscopy.
In addition, the concept behind the new classification is reflected in the naming of pituitary adenomas: according to the lineage source of the adenohypophyseal cells, tumors are named by combining the type of hormone secreted with the specific pituitary transcription factors expressed. For example, for growth hormone cell adenoma (formerly known as “growth hormone-secreting adenoma”), the diagnosis requires positive expression of the pituitary-specific transcription factor PIT-1 in addition to positive immunohistochemical staining for growth hormone.
In clinical pathological diagnosis, routine immunohistochemical staining for GH, PRL, ACTH, the other pituitary hormones, and the α-subunit can accurately classify most adenomas. Under the 4th edition criteria it is therefore not necessary to stain all pituitary adenomas for pituitary transcription factors; only certain subtypes require immunohistochemical staining for pituitary transcription factors and related cofactors in order to be reclassified. For example, the 3rd edition (2004) defined “null cell adenoma” as a pituitary adenoma with negative immunohistochemical staining for all pituitary hormones, distinguishing pituitary adenomas from one another using conventional hormone immunohistochemistry alone. In fact, however, negative hormone staining alone cannot establish the diagnosis of null cell adenoma.
According to the 3rd edition criteria, null cell adenomas accounted for about 10% of all pituitary adenomas. In fact, some gonadotropin cell adenomas that are negative on pituitary hormone immunohistochemistry were included in this category. When pathological sections previously diagnosed as “null cell adenoma” were re-stained for the pituitary transcription factor SF-1, gonadotropin cell adenomas with negative hormone staining but positive SF-1 could be distinguished from true null cell adenomas, which are negative for both hormones and transcription factors. Re-testing of pituitary transcription factors by immunohistochemistry shows that null cell adenomas account for less than 1% of all pituitary adenomas.
Therefore, the 4th edition precisely defines null cell adenoma as a pituitary adenoma with negative immunohistochemical staining for both pituitary hormones and pituitary transcription factors. In addition, plurihormonal adenomas are defined as tumors derived from one or more adenohypophyseal cell lineages within the same tumor; they are divided into PIT-1-positive plurihormonal adenomas (previously called silent subtype 3 pituitary adenomas) and rare adenomas with unusual immunohistochemical combinations. Hormone immunohistochemistry alone thus has limitations for accurate classification; the correct use of transcription factor immunohistochemistry plays an important supplementary role in the accurate diagnosis of pituitary adenomas that are hormone-negative or only focally positive.
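The lineage-based logic described above can be sketched as a simple decision rule. This is a toy illustration only — the marker names and rule order are simplifications of the published scheme, not the WHO diagnostic algorithm, and real classification also weighs additional markers (e.g. the α-subunit, cytokeratins) and morphology:

```python
def classify_adenoma(hormones, factors):
    """Toy sketch of lineage-based classification (not a diagnostic tool).

    `hormones` and `factors` are sets of positive immunostains,
    e.g. {"GH"} and {"PIT-1"}.
    """
    hormones, factors = set(hormones), set(factors)
    if "PIT-1" in factors:
        if "GH" in hormones:
            return "growth hormone cell adenoma"
        if "PRL" in hormones:
            return "prolactin cell adenoma"
        if "TSH" in hormones:
            return "thyroid-stimulating hormone cell adenoma"
        return "PIT-1-positive plurihormonal adenoma"
    if "T-PIT" in factors or "ACTH" in hormones:
        return "corticotropin cell adenoma"
    # SF-1 positivity identifies gonadotroph tumors even when hormone-negative.
    if "SF-1" in factors or hormones & {"FSH", "LH"}:
        return "gonadotropin cell adenoma"
    if not hormones and not factors:
        # Negative for BOTH hormones and transcription factors.
        return "null cell adenoma"
    return "unclassified"

print(classify_adenoma(set(), {"SF-1"}))  # hormone-negative, SF-1-positive
print(classify_adenoma(set(), set()))     # negative for hormones and factors
```

The two example calls show why transcription factor staining matters: a hormone-negative tumor is classified as a gonadotroph tumor when SF-1 is positive, and only falls back to null cell adenoma when transcription factors are also negative.
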
3. The abandonment of the term “atypical adenoma” and the proposal of “high-risk pituitary adenoma”
In the 3rd edition (2004), in order to reflect the malignant potential of pituitary adenomas and assess patient prognosis, pituitary adenomas were divided into typical adenomas, atypical adenomas, and pituitary carcinomas. Typical adenomas include most pituitary neuroendocrine tumors; atypical adenomas were those with an elevated mitotic index, a Ki-67 labeling index > 3%, and positive p53 immunoreactive staining. However, clinical experience has shown that some typical adenomas invade the sphenoid and/or cavernous sinus, grow around the carotid artery or optic canal on preoperative MRI, recur rapidly from residual tumor tissue after surgery, respond poorly to chemotherapy and other adjuvant treatments, and carry a poor prognosis. Conversely, preoperative MRI and related imaging of some atypical adenomas show a limited mass with clear boundaries from surrounding tissues and no obvious abnormality in hormone levels, with a good postoperative prognosis.
Different reports have shown no significant correlation between aggressive tumor growth and a pathological diagnosis of atypical adenoma. KIM et al. found that the postoperative recurrence rate of pituitary adenomas was not significantly correlated with mitotic counts. HASANOV et al. found that p53 immunoreactive staining was not significantly related to postoperative recurrence, whereas a Ki-67 cutoff value greater than 2.5%, together with MRI evidence of invasion of the sphenoid sinus, cavernous sinus, and other surrounding tissues, was highly correlated with adenoma recurrence. The diagnosis of atypical adenoma therefore cannot effectively help assess the prognosis of patients with pituitary adenoma.
In the 4th edition (2017), the diagnosis of “atypical adenoma” is no longer recommended, but parameters such as mitotic count, Ki-67 labeling index, and p53 staining are retained to assess whether a pituitary adenoma shows aggressive growth behavior and to predict patient prognosis. Whether tumor invasiveness should be included in the clinicopathological classification of neuroendocrine tumors is another important point of discussion: some scholars advocate including invasiveness in the classification of pituitary neuroendocrine tumors, but it is generally believed that invasiveness should not be part of the pathological classification and grading.
This is mainly because clinicians assess invasive capacity using imaging criteria and evidence of gross tumor invasion during surgery, which is subjective and variable, while pathologists examining sections usually lack the imaging data or the surgeon's assessment of invasion; pathological results alone therefore cannot evaluate invasive behavior. Accordingly, the WHO does not include invasiveness in the pathological classification of pituitary adenomas but uses it as an important reference index for predicting prognosis. Invasion manifests as aggressive growth of an adenoma, deviating from the benign histological characteristics of a typical adenoma.
Definitions in the relevant literature differ, for example: “huge and fast-growing tumors that invade and envelop surrounding tissues,” “tumors that recur early even when completely removed,” and “tumors resistant to conventional treatments and drugs.” Subtypes of pituitary adenomas with aggressive growth, high proliferative potential, and a tendency to recur are defined as “high-risk pituitary adenomas” in the 4th edition (2017): sparsely granulated growth hormone cell adenoma, PIT-1-positive plurihormonal adenoma, silent corticotropin cell adenoma, Crooke cell adenoma, and prolactin cell adenoma in men.
These five types of adenoma usually grow invasively, enveloping the surrounding tissue structures, and have a high tendency to relapse that is difficult to control. In clinical work, therefore, individualized diagnosis and treatment plans should be formulated and standardized for patients whose pathological diagnosis is high-risk pituitary adenoma. The indications for adjuvant therapy after total or subtotal resection, the frequency and duration of follow-up, and the best timing and methods of early intervention at signs of recurrence are issues that still need to be resolved and standardized.
4. Non-neuroendocrine tumors of the pituitary: pituitary blastoma
The 4th edition (2017) also elaborates in detail on non-neuroendocrine tumors of the pituitary. Compared with the previous version, many categories are improved, including craniopharyngiomas; neuronal and paraneuronal tumors originating in the sellar region; posterior pituitary tumors; mesenchymal tumors; germ cell tumors; lymphohematopoietic tumors; and secondary tumors. The improved classification is basically consistent with the 2016 WHO classification of central nervous system tumors. In addition, pituitary blastoma, a rare malignant non-neuroendocrine tumor of the pituitary, is introduced for the first time: it occurs in infants under 24 months (median age of onset, 8 months), is more common in females than in males, usually presents with Cushing syndrome, and carries a poor prognosis.
5. Summary
The pathological evolution of pituitary adenoma develops through multiple steps and multiple factors, among which genetic background, specific somatic mutations, and endocrine factors are important inducers. From the early classification based on chromophobe, acidophil, and basophil cells to the new 2017 classification based on immunohistochemical study of transcription factors, the ability to diagnose pituitary adenomas accurately has greatly improved.
The new classification standard emphasizes immunohistochemistry as the main auxiliary diagnostic tool for classifying pituitary neuroendocrine tumors; immunohistochemical staining of markers such as pituitary hormones, pituitary transcription factors, and cytokeratins is the basis of the standard. The new classification provides guidance for accurately diagnosing the pathological subtypes of pituitary adenomas, identifying adenomas with high invasiveness and high risk of recurrence, and informing clinical treatment, prognosis prediction, and follow-up management. Explaining the mechanisms and classification of pituitary tumors at the molecular level is not only the ultimate goal of pituitary tumor classification but also a new beginning for translational research on pituitary tumors.
One of the country’s most popular coffee chains has given a Bury College Foundation Studies student the opportunity to gain skills to secure a job within the retail industry.
Former Siddal Moor pupil James Taylor has undertaken a six week placement at Costa Coffee based in Tesco, Bury. From serving customers, to cleaning tables and making coffees, the 20 year old is delighted to be gaining a broad range of employability skills in customer service.
James, who lives in Heywood, said, “I am really enjoying my placement, I like serving customers and my confidence has improved.”
Hardworking James is combining his Bury College studies and Costa Coffee placement with volunteer work at Springhill Hospice charity shop in Heywood. James is hoping all the experience he is gaining will stand him in good stead for securing a full-time job within retail in the future.
James added, “I love Bury College and my tutors are so helpful.”
Discover your options at Bury College and the wide range of high-quality Foundation Studies programmes.
Hiro Shishigami is a high-school student and the main antagonist of Inuyashiki. Hiro lived with his mother and is wanted as the most dangerous serial killer currently alive. Not much else is known about him.
After being infused with alien cyborg technology, he begins to use his newfound modifications to help himself feel emotion by committing murder. He starts by causing traffic accidents and general disturbances before quickly progressing to serial murder, often of random families. He is eventually identified and pursued by the police after being turned in by Ichirou Inuyashiki and his former best friend, Naoyuki Andou, and is currently on the run.
Shishigami has the same powers as Ichirou Inuyashiki: flight, advanced combat abilities, advanced healing abilities, and the ability to communicate seamlessly with any electronic device, along with technology integrated into his body. However, unlike Inuyashiki, Shishigami uses his powers to commit crimes, usually murder and theft. This predisposition to violence might be a result of being bullied or of living in less-than-ideal conditions with only his mother, but the cause has not been revealed yet. Shishigami is fond of imitating guns to kill his victims.
Choosing the right therapy for a non-small cell lung cancer (NSCLC) patient may be difficult, as biomarkers can change during therapy, rendering that treatment ineffective. Now researchers from the Moffitt Cancer Center report they are developing a noninvasive method to analyze a patient’s tumor mutations and biomarkers to determine the best course of treatment.
Their findings, “Noninvasive decision support for NSCLC treatment using PET/CT radiomics,” are published in Nature Communications. The researchers demonstrate how a deep learning model using positron emission tomography/computerized tomography (PET/CT) radiomics may identify which non-small cell lung cancer patients may be sensitive to tyrosine kinase inhibitor treatment and which would benefit from immune checkpoint inhibitor therapy.
NSCLC is a disease in which malignant cells form in the tissues of the lung. There are different types of treatment for patients with non-small cell lung cancer including surgery, radiation therapy, chemotherapy, targeted therapy, and more.
“Two major treatment strategies employed in NSCLC are tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs). The choice of strategy is based on heterogeneous biomarkers that can dynamically change during therapy. Thus, there is a compelling need to identify comprehensive biomarkers that can be used longitudinally to help guide therapy choice. Herein, we report an 18F-FDG-PET/CT-based deep learning model, which demonstrates high accuracy in EGFR mutation status prediction across patient cohorts from different institutions,” the researchers wrote.
The EGFR—which stands for epidermal growth factor receptor—contributes to the growth of some lung cancers, and drugs that block the activity of EGFR slow cancer growth and prolong survival. EGFR is a small protein found on the surface of all cells; growth factors are proteins that circulate in the blood. The binding between EGFR and growth factors stimulates biological processes within the cell that promote cell growth in a strictly controlled manner. However, in many cancer cells, EGFR is either abundantly overexpressed or the EGFR biological processes that normally stimulate cell growth are constantly active. This leads to the uncontrolled and excessive growth of the cancer cell.
“…EGFR is a common mutation found in non-small cell lung cancer patients. EGFR mutation status can be a predictor for treatment, as patients with an active EGFR mutation have better response to tyrosine kinase inhibitor treatment,” explained Matthew Schabath, PhD, associate member of the cancer epidemiology department.
The researchers developed an 18F-FDG PET/CT-based deep learning model using retrospective data from non-small cell lung cancer patients at two institutions: Shanghai Pulmonary Hospital and Fourth Hospital of Hebei Medical University in China. 18F-FDG PET/CT is used in determining the staging of patients with non-small cell lung cancer.
“Prior studies have utilized radiomics as a noninvasive approach to predict EGFR mutation,” said Wei Mu, PhD, study first author and postdoctoral fellow in the cancer physiology department. “However, compared to other studies, our analysis yielded among the highest accuracy to predict EGFR and had many advantages, including training, validating, and testing the deep learning score with multiple cohorts from four institutions, which increased its generalizability.”
“We found that the EGFR deep learning score was positively associated with longer progression-free survival in patients treated with tyrosine kinase inhibitors, and negatively associated with durable clinical benefit and longer progression-free survival in patients being treated with immune checkpoint inhibitor immunotherapy,” said Robert Gillies, PhD, chair of the cancer physiology department. “We would like to perform further studies but believe this model could serve as a clinical decision support tool for different treatments.”
This model will help overcome the challenge of heterogeneous biomarkers that can change during therapy and will help identify the best course of treatment for those with NSCLC. It may also one day help improve on the current five-year survival rate of 5% for patients diagnosed with metastatic disease.
Which country has no night?
Norway. In Svalbard, Norway, which is the northernmost inhabited region of Europe, the sun shines continuously from April 10 to August 23.
Visit the region and you can live through days on end with no night.
Why does Norway have no night?
The Polar Night is a phenomenon that happens inside the Polar Circle. In Norway, that means that this phenomenon only occurs in the northern regions. While the days are short in all of Norway during the winter months, the sun rises above the horizon (even if only for a few hours) so it doesn’t qualify as a Polar Night.
How long does it stay dark in Norway?
The Polar Night can last days to months depending on your location. On the North Cape, the sun remains under the horizon for more than two months, while in Tromsø the phenomenon lasts for six weeks or so. In Lofoten, the dark period is short, just under four weeks.
Does Norway have 24 hours of darkness?
76 days of midnight sun between May and July greet travelers in Northern Norway. The further north you go, the more nights of midnight sun you get. During the summer months, you can experience up to 24 hours of sunlight above the Arctic Circle, which means more time to enjoy the sights and make new discoveries.
What is the shortest day in Norway?
December 21, 2021. The December Solstice (Winter Solstice) is on Tuesday, December 21, 2021 at 4:59 pm in Tromsø. In most locations north of the Equator, the shortest day of the year is around this date.
Is it always cold in Norway?
Winters are relatively moderate and rainy with little snow or frost. Inland areas (like Oslo) have a continental climate with colder winters (think minus 13 degrees Fahrenheit, or 25 below zero Celsius) but warmer summers. Weather in Norway is best between May and September when it’s usually mild and clear.
How cold is Norway in winter?
-6.8 degrees Celsius. In winter, the average temperature in Norway is -6.8 degrees Celsius, but conditions may vary quite a lot. Around Oslo, snowfall is common and average temperatures are just below zero. The lower inland areas of Finnmark, Troms, Trøndelag, and Eastern Norway can have very cold winters with lots of snow.
What country has 6 months of darkness?
Antarctica. Antarctica has just two seasons: summer and winter. It has six months of daylight in its summer and six months of darkness in its winter.
Why does Alaska never get dark?
In Alaska, the sun travels in a slanting 360 degree circle in the sky, so even if it’s below the horizon, it’s barely below it for a long period. This means that even though the sun isn’t visible, we still receive very bright twilight that can last for hours or until the sun rises again.
Is Sweden dark for 6 months?
Sweden is a country with big differences in daylight. From late May to mid-July, the midnight sun lights up the night in northern Sweden, lengthening your sightseeing days. In the far north, the sun does not set at all in June, and there is darkness around the clock in January.
What country is always dark?
Located more than 200 miles north of the Arctic Circle, Tromsø, Norway, is home to extreme light variation between seasons. During the Polar Night, which lasts from November to January, the sun doesn’t rise at all.
Which country has a 40-minute night?
Norway. The 40-minute night in Norway takes place around June 21. At this time, the entire part of the earth from 66 degrees north latitude to 90 degrees north latitude remains under sunlight, and this is the reason why the sun sets for only 40 minutes. Hammerfest is a very beautiful place.
Which country has the longest day in the world?
Iceland. Iceland’s longest day of the year (the summer solstice) is around the 21st of June. On that day in Reykjavík, the sun sets just after midnight and rises again right before 3 AM, with the sky never going completely dark.
Which country has 3 months of darkness?
The Norwegian arctic. Spring in the Norwegian arctic seems to spend a long time plotting its annual comeback. When it returns, though, it does not disappoint.
Is Alaska dark?
Not all of Alaska goes dark in winter! But the further north you go, the darker it gets.
How many hours is a day in Norway?
Relatively high in the north, the days are long in summer and short in winter. At up to approximately 19 hours, the longest days occur in June. On the other hand, the longest and darkest nights are in winter (in the southern hemisphere it is the other way around). In December, a night in Oslo lasts almost 18 hours.
Why is Norway called the land of the Midnight Sun?
Northern Norway lies beyond the Arctic Circle, so the Sun neither rises nor sets for a few months of the year. This is why it is known as the land of the midnight Sun. The Sun here is visible at a very low height just above the horizon.
Why does Norway have 6 months of day and 6 months of night?
If the Earth’s axis were exactly perpendicular to its orbital plane, then we would all have 12-hour days and 12-hour nights no matter where we were. Instead, the axis is tilted by approximately 23.5 degrees. This means that there’s an area at the top and the bottom of the Earth that gets 6 months of day followed by 6 months of night.
Forensic DNA Typing charts the progress and development of DNA applied to criminal forensics, providing vivid demonstrations of the amazing potential of the method, not only to convict the guilty but also to exonerate the innocent. John Butler has created a text that caters to all audiences, covering the basics of DNA structure and function and describing in detail how the techniques are used. In addition, the extensive use of D.N.A. (Data, Notes, and Application) Boxes in the text enables the reader to dip in and out as he or she pleases.
Probably the most important development of recent years is the universal use of polymerase chain reaction (PCR) to replicate DNA molecules in vitro. This has led to the rapid development of new platforms and biochemistry that have revolutionized the methods used to carry out DNA analysis. These new technologies are clearly explained in great detail in this book, with lavish illustrations. The culmination of recent advances has led to the instigation of massive National DNA offender databases using short tandem repeat (STR) loci. For example, since its inception in 1995, the England and Wales National DNA database (NDNADB) now has more than 2.75 million reference DNA profiles from suspects and offenders alike, against which all crime stains are routinely compared. Many more countries throughout the world have since followed suit. The social benefits of such databases are considerable - individuals who commit major crimes such as murder usually already have a criminal record. UK policy enables the collection of DNA profiles from all offenders regardless of the seriousness of the crime. Consequently, those who re-offend can be quickly identified and apprehended. In the US thirteen different STRs are combined together into one or two-tube reactions known as multiplexes to provide data for the Combined DNA Index System (CODIS). When a complete DNA profile is obtained the probability of a chance match with a randomly chosen individual is usually less than one in one trillion using these 13 CODIS loci.
Other areas are explored in detail including mitochondrial, Y chromosomal DNA and use of forensic science in wildlife crime such as poaching. Recently, as a result of terrorist attacks, new areas of forensic DNA profiling have arisen in response. Foremost amongst these is the field of microbial forensics, which is used to identify pathogens such as anthrax.
Although it is very difficult to anticipate all future developments, STRs are probably the system of choice for the foreseeable future although other systems, especially single nucleotide polymorphisms (SNPs), have been suggested. SNPs may find a special niche to analyze very highly degraded material and may play a valuable role in future mass disasters. However, there is no doubt that the utility of both STRs and SNPs will benefit from new biochemistry and new platforms such as microchips. Automation, miniaturization and expert systems will all play an increasingly important role over the coming years. The main aims of the new technology can be summarized: to enable faster processing; to reduce costs; to improve sensitivity; to produce portable instruments; to de-skill and to automate the interpretation process; to improve success rates; to improve quality of the result and to standardize processes. The next few years will probably see a new revolution as this new technology comes of age and becomes widely available.
John Butler reviews these new innovations in great detail - he is to be congratulated for preparing such a readable book that will appeal to everyone from the layperson, the lawyer and the scientist alike.
Peter Gill, Ph.D.
Birmingham, UK December 2004
2. (a) Use the identity cos²θ + sin²θ = 1 to prove that tan²θ = sec²θ – 1.
(b) Solve, for 0 ≤ θ < 360°, the equation
2 tan²θ + 4 sec θ + sec²θ = 2
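For part (a), the result follows from dividing the given identity through by cos²θ; a sketch of that derivation:

```latex
\cos^2\theta + \sin^2\theta = 1
\;\Rightarrow\;
1 + \frac{\sin^2\theta}{\cos^2\theta} = \frac{1}{\cos^2\theta}
\;\Rightarrow\;
1 + \tan^2\theta = \sec^2\theta
\;\Rightarrow\;
\tan^2\theta = \sec^2\theta - 1.
```

Substituting sec²θ – 1 for tan²θ in part (b) then reduces the equation to a quadratic in sec θ.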
3. Rabbits were introduced onto an island. The number of rabbits, P, t years after they were
introduced is modelled by the equation
P = 80e^(t/3), t ∈ ℝ, t ≥ 0
(a) Write down the number of rabbits that were introduced to the island.
(b) Find the number of years it would take for the number of rabbits to first exceed 1000.
(c) Find dP/dt
(d) Find P when dP/dt = 50.
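As a check on parts (c) and (d), differentiating the model as given shows that dP/dt is proportional to P, so part (d) needs no explicit value of t:

```latex
P = 80e^{t/3}
\;\Rightarrow\;
\frac{dP}{dt} = \frac{80}{3}e^{t/3} = \frac{P}{3},
\qquad\text{so}\quad
\frac{dP}{dt} = 50 \;\Rightarrow\; P = 3 \times 50 = 150.
```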
4. (i) Differentiate with respect to x
(a) x² cos 3x
(b) ln(x² + 1)/(x² + 1)
(ii) A curve C has the equation
y = √(4x + 1), x > –¼, y > 0
The point P on the curve has x-coordinate 2. Find an equation of the tangent to C at P in the form ax + by + c = 0, where a, b and c are integers.
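For 4(ii), one route to the tangent is to write y = (4x + 1)^(1/2) and apply the chain rule:

```latex
\frac{dy}{dx} = \frac{1}{2}(4x+1)^{-1/2}\cdot 4 = \frac{2}{\sqrt{4x+1}},
\qquad
x = 2:\; y = 3,\quad \frac{dy}{dx} = \frac{2}{3},
```

so the tangent at P is y – 3 = (2/3)(x – 2), which rearranges to the required form 2x – 3y + 5 = 0.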
Section links:
CSLLEA FAQs
CLETA FAQs
SLEPS FAQs
LEMAS FAQs
CSLLEA FAQs
What is CSLLEA?
The Census of State and Local Law Enforcement Agencies (CSLLEA) is part of the Law Enforcement Core Statistics (LECS) program which coordinates a group of law enforcement agency surveys conducted by the Bureau of Justice Statistics (BJS). BJS is working with RTI International, a nonprofit research organization, and the Police Executive Research Forum (PERF) to administer the CSLLEA. BJS has administered the CSLLEA regularly since 1986. The next CSLLEA will begin administration in fall 2018. The primary purpose of the CSLLEA is to serve as a complete enumeration of law enforcement agencies in the United States.
Why is CSLLEA important?
Law enforcement agencies (LEAs) face unique challenges in areas such as staffing, number and types of functions performed, and budget allocation. The CSLLEA is the only national data collection that asks every law enforcement agency in the nation about these issues. These data are used by various stakeholders in order to better understand the current state of law enforcement. Additionally, the data can be used by law enforcement agencies in order to compare themselves to other similar agencies in the U.S.
What information does the CSLLEA collect?
The CSLLEA collects information on government authority, budget, functions, and personnel. The information is used to provide national, state, and local estimates for all law enforcement agencies in the U.S. These data are also used to draw samples for other BJS data collections.
Who should complete the CSLLEA?
Any law enforcement agency operating with public funds that employs the equivalent of one full-time sworn officer (at least 2 part-time officers) should complete the CSLLEA.
An agency would not complete the CSLLEA if any of the following apply: it is no longer in existence; it contracted or outsourced all law enforcement services to another agency; it employed only part-time officers whose total combined hours averaged less than 35 hours per week; all of its officers were unpaid volunteers; all of its officers were paid via fee-for-service rather than salary; it was a private agency; or it was operated by the federal government.
CLETA FAQs
Why is CLETA important?
The CLETA is the only systematic data collection that produces national estimates on the characteristics of academies that train all state and local law enforcement officers. Law enforcement agencies, policy makers, and researchers will use the CLETA data to better understand and respond to the training needs of law enforcement personnel.
What information does CLETA collect?
In addition to general information describing each academy, the CLETA collects detailed information on personnel, resources, core curriculum, trainees, policies and practices of state and local law enforcement training academies.
What is the definition of basic law enforcement training for the purposes of CLETA?
Basic law enforcement training is defined as the mandatory training for newly appointed or elected law enforcement officers as required by federal or state statute, rule, or regulation, depending upon the jurisdiction of the agency hiring the new officer.
SLEPS FAQs
What is SLEPS?
The Survey of Law Enforcement Personnel in Schools (SLEPS) is a two-phase research effort to examine the prevalence and role of law enforcement officers in schools. The U.S. Department of Justice’s Bureau of Justice Statistics (BJS) is working with RTI International (RTI), a not-for-profit research organization, to conduct this important data collection. During the first phase, law enforcement agencies (LEAs) complete an agency-level survey and provide a roster of their officers working in schools. BJS and RTI will use these rosters to draw a sample of School Resource Officers (SROs) for the second phase, which is an officer-level survey.
Why is SLEPS important?
At this time, there is no national-level information on the roles, functions, and regular activities of police officers assigned to schools nor on the infrastructure of law enforcement agencies that supports these officers. SLEPS will provide this information which is critical to inform research and policy on effective school resource officer (SRO) programs.
What information does SLEPS collect?
The LEA survey collects general information including (1) LEA agency characteristics; (2) SRO program characteristics; (3) SRO policies and assigned responsibilities; (4) SRO recruitment, training, and supervision; (5) SRO staffing; and (6) sworn SRO training topics and activities performed. The final piece of the agency-level survey is the rostering form, which asks for a list of sworn SROs.
The SRO survey collects officer-level information including (1) SRO experience and characteristics; (2) SRO training; (3) SRO activities; and (4) characteristics of the schools that SROs are assigned to.
What is the definition of an SRO for the purposes of SLEPS?
A sworn law enforcement officer who is assigned to work in any public K-12 school.
LEMAS FAQs
Why is LEMAS important?
LEMAS is the only survey of law enforcement agencies that gathers nationally representative information about agencies on key factors like personnel, policies, and agency activities. LEMAS data are widely used by researchers, policy makers and law enforcement agencies to understand law enforcement at local, county, state and national levels.
What information does the LEMAS collect?
The LEMAS core collects important information on personnel, expenditures and pay, operations, equipment, computers and information systems, and policies and procedures. This information is used to create national estimates for all law enforcement agencies in the U.S. The LEMAS supplements will collect in-depth information on a specific topical area. The first supplement focuses on body-worn camera usage.
Why might I receive multiple LEMAS surveys?
The LEMAS is moving to a new core + topical supplement model. Topical supplements will cover emerging issues in law enforcement and will change over time. You may have participated in the first LEMAS topical supplement which covered body-worn camera usage. You may have also been invited to participate in the 2016 LEMAS core. Data collection concluded on the 2020 LEMAS in the fall of 2021 and a LEMAS supplement is coming soon.
Do I need to complete the LEMAS if I recently completed a different LEMAS survey?
Yes! The LEMAS core and supplements are critically important to understanding the characteristics, policies, and procedures of law enforcement agencies across the country. Each agency selected to participate is crucial, and we need responses from all selected agencies for each survey sent to ensure that the results are representative of law enforcement agencies across the U.S.
As parents, we are always looking for ways we can connect with our kids. One of the most powerful is through a morning and end-of-day routine that allows our kids to share how they feel, get grounded for the day, share gratitude, and wind down for the evening.
Fostering a strong connection with your kid allows you to tune in to how they’re feeling and to offer support when they need it. It lets them know they matter and are loved, and builds their self-esteem and confidence. It takes as little as 5 minutes each day to connect with your kid, and build that parent-kid relationship that says, “you get me.”
Check-In/Check-Out can become a daily routine that builds important habits like reflection, self-awareness, empathy, and a secure attachment or bond. The more we give, the more we get, and this is a gift we can give each other.
Dr. Jack P. Shonkoff, Director of the Center on the Developing Child at Harvard calls the parent-child connection “one of the most essential experiences in shaping the architecture of the developing brain.” This relationship is so powerful that it can be a protective buffer against stress and uncertainty for our kid’s developing brains.
Additionally, Check-In/Check-out creates a sense of belonging. We’re built to make social connections. Research shows that people who feel like they belong enjoy greater self-esteem, have fewer illnesses, and live longer lives.
Most notably, Check-In/Check-Out:
- fosters connection,
- encourages self-awareness,
- nurtures empathy,
- promotes active listening, and
- shows a kid they matter
Check-In with these 4 Steps:
- Share what you’re feeling. Go around the circle and have each group member choose 3 adjectives that describe how they are feeling. As a listener, be curious about what is behind these 3 adjectives. Naming emotions can be hard, but there are handy tools that make it easier to go deeper than “happy” and “sad.” Check out the “My Feelings” poster from Generation Mindful or the “Mood Meter” from the Yale Center for Emotional Intelligence.
- Talk about those emotions. Go back around the circle, letting each person talk for 1–2 minutes about the adjectives they named and why they chose those emotions. Everyone should listen carefully and think about what might be causing the other person’s feelings.
- Acknowledge the feelings of others. Acknowledge where the group is emotionally. Do a round of “I noticed…,” “I wonder…,” or “I feel…” sentence starters to wrap up. For example, “I noticed we’re all feeling stressed today,” or “I wonder if there are ways we could calm our stress. How about….”
- Say “I’m in!” At the end of the Check-In, everyone says “I’m in!” to signify that they are present and activated for the day to come!
Check-Out with these 3 steps:
- Show gratitude. Each group member shares something they are grateful for, whether it actually happened during the day or it’s just something that’s top-of-mind.
- Share takeaways. Everyone shares one takeaway from the day.
- Say “I’m out!” At the end of the Check-Out, everyone says “I’m out!” to provide a calming marker that the day of work, learning, and growth is done.
The phrases, “I’m in” and “I’m out” signify togetherness and belonging. They’re the rallying call that confirms the connection with each member of the family who has just been seen and heard. You’re a team who cares about each other going through life together.
- Circle up: Sitting in a circle promotes unity.
- Ask for a volunteer: The first to share should be ready, willing, and able.
- Go deep: You’re connecting on a deep emotional level. Encourage vulnerability and honesty.
- Model it: Your willingness to share openly and honestly will encourage others in the circle to do the same.
- Keep it short: Strive for two minutes per person, but don’t use a timer. You don’t want anyone to feel like they’re being cut off when they’re telling you how they’re feeling.
- keep track of how the family is doing, | https://preparedparents.org/tip/how-to-connect-with-my-kid/ |
March 31, 2016 — In a paper published in Nature Reviews last week, Axel Hoos, M.D., Ph.D., laid out the current immunotherapy development paradigm, as well as his strategic vision to optimize the implementation of next-generation immunotherapies.
Immunotherapy is currently revolutionizing cancer treatment and, according to Axel Hoos, M.D., Ph.D., has the potential to improve patient outcomes significantly in the future. Hoos leads one of the Cancer Research Institute’s (CRI) programs — the Cancer Immunotherapy Consortium (CIC) — that played an important role in enabling early immunotherapy trials to succeed.
Imaging is often used to monitor patient response to this therapy, but there is difficulty in measuring treatment response because it presents very differently than traditional chemotherapy response. The tumors sometimes increase in size for a couple of months before the treatment begins to shrink tumors over time. The tumor tissue response is also often delayed and may be mistaken as ineffective in the first couple months of monitoring.
Under the leadership of Hoos, who also serves as GlaxoSmithKline’s head of immuno-oncology, the CIC initiatives highlighted how two important factors were complicating immunotherapy trials. First, immunotherapy can produce novel responses that are not accounted for in the traditional response criteria developed for chemotherapy. Second, immunotherapy’s differential effects on clinical endpoints, such as survival, need to be addressed. In addition to the development of new immune-related response criteria (irRC), Hoos also believes that the CIC’s insights helped to transform the development program for the first checkpoint blockade immunotherapy drug, the anti-CTLA-4 antibody ipilimumab (Yervoy), which unleashes the immune system and enables a stronger anti-cancer response.
With the help of the new immunotherapy-related criteria, early trials employing ipilimumab as well as other checkpoint blockade drugs, such as the anti-PD-1 antibody pembrolizumab (Keytruda) and nivolumab (Opdivo), paved the way for immunotherapy to improve the standard of care in cancer treatment. Furthermore, these stimulated cooperative efforts in the immunotherapy community that directly influenced the regulatory guidance documents of both the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA).
Moving forward, Hoos hopes that the CIC, which has since been reconstituted as an industry advisory “think tank” focused on late-stage drug development, can build on these accomplishments. According to him, several factors will drive the efforts to usher in the next generation of immunotherapies — including technologies such as cancer vaccines, re-engineered oncolytic viruses and cell-based therapies — as well as to maximize the benefits of current immune-based drugs:
- First, immunotherapy has already started to change the standard of care in oncology, both in significant improvements in survival as well as side effects. This will substantially change the treatment landscape and will require careful integration of immunotherapies into current treatment strategies as new information becomes available;
- Second, there is still room to improve the criteria by which clinicians can determine successful patient responses to immunotherapy;
- Third, not all patients respond to the same treatment the same way. To ensure each patient receives the most beneficial treatment for their particular tumors, better immune biomarkers must be developed;
- Fourth, while individual immunotherapies have already produced clinical improvements, early trials with combination therapies suggest that benefits can be improved. Determining rational strategies for combination therapies will be a high priority moving forward; and
- Fifth, novel immunotherapy technologies already present much promise, and by incorporating recent scientific breakthroughs into the design of treatments, their effectiveness will continue to improve patient outcomes in the future.
Immunotherapy was declared the Breakthrough of the Year by Science in 2013. To ensure that immunotherapy realizes its full potential, much more must be learned about the immune system, cancer, and their interactions within the complex and ever-changing tumor environments of individual patients. | https://www.itnonline.com/content/history-and-future-cancer-immunotherapy |
Robinson said in the article, “Coronavirus Anxiety: Coping with Stress, Fear, and Uncertainty,” that it is important for people to get information about the coronavirus from “trustworthy sources, to limit their media consumption and to limit the amount of time they check for updates”.
It can also be helpful for individuals to practice deep breathing and mindfulness. Mindfulness is where the person describes what they see, taste, hear, what something feels like when they touch it, etc. Mindfulness can be helpful because it can distract the person from the negative thoughts they are experiencing.
Another thing that may make people anxious or depressed during the coronavirus pandemic is social distancing. While it is important that people practice social distancing to avoid catching and spreading the disease, it may cause individuals to feel lonely, bored, depressed and anxious. Fortunately, there are ways that people can still stay connected, including video calling and social media. I like video calling because I can see my classmates, people in the college church group I am a part of and my other friends. When I video call with them, it is almost like I am there with them in person.
I also like using social media, especially during the pandemic. On social media, I’ve used the hashtag #coronaviruscantcancel to post the fun things I am still able to do. This hashtag helps me because I notice that good things can still happen during the pandemic. I also hope to spread positivity and help others by using the hashtag. Some of my posts that include this hashtag are of my artwork, nature, a selfie of me with my dog, the video call of The Roar’s meeting, etc.
It is also good for individuals to get exercise. One way to get exercise is by taking walks. The weather has been nice lately, and I like observing nature when I am outside. Another way I like to get exercise is by looking up videos of the video game “Just Dance” on YouTube and dancing to them.
Other ways people can take care of their mental health while at home are by drawing, reading, watching their favorite show, engaging in a hobby, baking, etc.
It can also be helpful for people to have counseling and therapy appointments. While they may not be able to meet with a counselor or therapist in person, they may be able to talk with them over the phone or video call them. I appreciate that counselors and therapists are setting up appointments with patients especially because the mental health challenges individuals face may heighten during this time.
By practicing mindfulness and deep breathing, getting information about the coronavirus from reliable sources, staying connected with friends, participating in hobbies and having an appointment with a counselor or therapist, people may be able to feel better during the pandemic.
Corrosion is the deterioration of useful properties in a material due to reactions with its environment. Weakening of steel due to oxidation of the iron atoms is a well-known example of electrochemical corrosion. This type of damage usually affects metallic materials, and typically produces oxide(s) and/or salt(s) of the original metal. Corrosion also includes the dissolution of ceramic materials and can refer to discolouration and weakening of polymers by the sun's ultraviolet light.
Electrochemical corrosion causes between $8 billion and $128 billion in economic damage per year in the United States alone, degrading structures, machines, and containers. Most structural alloys corrode merely from exposure to moisture in the air, but the process can be strongly affected by exposure to acids, bases, salts and organic chemicals. It can be concentrated locally to form a pit or crack, or it can extend across a wide area to produce general deterioration; efforts to reduce corrosion sometimes merely redirect the damage into less visible, less predictable forms.
Most ceramic materials are almost entirely immune to corrosion. The strong ionic and/or covalent bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda-lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Due to its brittleness, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature.
The degradation of polymeric materials is due to a wide array of complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word.

Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making polymers generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against. A more common and related problem is swelling, where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes.

The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (i.e., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily as litter.
The remainder of this article is about electrochemical corrosion.
See Electrochemistry for a more in-depth treatment of this topic.
One way to understand the structure of metals on the basis of particles is to imagine an array of positively-charged ions sitting in a negatively-charged "gas" of free electrons. Coulombic attraction holds these oppositely-charged particles together, but there are other sorts of negative charge which are also attracted to the metal ions, such as the negative ions (anions) in an electrolyte. For a given ion at the surface of a metal, there is a certain amount of energy to be gained or lost by dissolving into the electrolyte or becoming a part of the metal, which reflects an atom-scale tug-of-war between the electron gas and dissolved anions. The quantity of energy then strongly depends on a host of variables, including the types of ions in solution and their concentrations, and the number of electrons present at the metal's surface. In turn, corrosion processes cause electrochemical changes, meaning that they strongly affect all of these variables. The overall interaction between electrons and ions tends to produce a state of local thermodynamic equilibrium that can often be described using basic chemistry and a knowledge of initial conditions.
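The local equilibrium described above is commonly quantified with the Nernst equation, which relates a half-cell's potential to the activity of the dissolved metal ions. The snippet below is an illustrative sketch (not from the original article); the standard potential used for iron is an approximate textbook value.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_potential(e_standard, n_electrons, ion_activity, temp_k=298.15):
    """Half-cell potential (V) for M^n+ + n e- <-> M at a given ion activity.

    E = E0 + (R*T / (n*F)) * ln(activity): lowering the metal-ion activity
    makes the potential more negative, i.e. dissolution more favorable.
    """
    return e_standard + (R * temp_k) / (n_electrons * F) * math.log(ion_activity)

# Fe2+/Fe has a standard potential of about -0.44 V; at low Fe2+ activity
# the potential drops further, so bare iron keeps dissolving.
print(nernst_potential(-0.44, 2, 1.0))   # standard conditions: -0.44 V
print(nernst_potential(-0.44, 2, 1e-6))  # more negative than -0.44 V
```

This also hints at why corrosion is self-influencing: as dissolution changes local ion concentrations, the driving potential itself shifts.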
In a given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than the next, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electron gas, so that the tug-of-war at each surface is translated into a competition for free electrons between the two materials. The noble metal will tend to take electrons from the active one, while the electrolyte hosts a flow of ions in the same direction. The resulting mass flow or electrical current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a Galvanic series, and can be a very useful design guideline when choosing materials.
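As a toy illustration of the idea, the snippet below ranks a few metals using approximate textbook standard reduction potentials. This is only a sketch: a real galvanic series is measured empirically in a specific electrolyte (such as aerated seawater) and can order alloys differently than these idealized values suggest.

```python
# Standard reduction potentials in volts vs. the standard hydrogen
# electrode (approximate textbook values, not a true galvanic series).
STANDARD_POTENTIALS = {
    "gold": 1.50,
    "copper": 0.34,
    "iron": -0.44,
    "zinc": -0.76,
    "magnesium": -2.37,
}

def galvanic_series(potentials):
    """Rank metals from most noble (highest potential) to most active."""
    return sorted(potentials, key=potentials.get, reverse=True)

def sacrificial_metal(metal_a, metal_b, potentials=STANDARD_POTENTIALS):
    """Of two electrically coupled metals, the more active (lower-potential)
    one corrodes preferentially, protecting the more noble one."""
    return min((metal_a, metal_b), key=potentials.get)

print(galvanic_series(STANDARD_POTENTIALS))
# Zinc is more active than iron, so in a zinc-iron couple the zinc
# corrodes first -- the principle behind galvanized steel.
print(sacrificial_metal("iron", "zinc"))
```

The same ranking logic explains the design guideline mentioned later: plating steel with a more active metal such as zinc sacrifices the coating rather than the substrate.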
Some metals are more intrinsically resistant to corrosion than others, either due to the basic nature of the electrochemical processes involved or due to the details of how reaction products form. Otherwise, many techniques can be used during manufacturing or application to protect materials from damage.
The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any "corrosion" products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form in nature, and is a large part of their intrinsic value. More common "base" metals can only be protected by more temporary means.
Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions.
Given the right conditions, a thin film of corrosion products can form on a metal's surface spontaneously, acting as a barrier to further oxidation. When this layer stops growing at less than a micrometre thick under the conditions that a material will be used in, the phenomenon is known as passivation. While this effect is in some sense a property of the material, it serves as an indirect kinetic barrier: the reaction is often quite rapid unless and until an impermeable layer forms. Passivation in air and water at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon.
The conditions required for passivation are specific to the material. The effect of pH is recorded using Pourbaix diagrams, but many other factors are influential. Some conditions that inhibit passivation include: high pH for aluminium, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, sometimes unusual conditions can bring on passivation in materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms.
Applied coatings: Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the (often cheaper, tougher, and/or easier-to-process) structural material. Aside from cosmetic and manufacturing issues, there are tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, and if the plating is more noble than the substrate (i.e., chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with a more active metal such as zinc or cadmium. See main article galvanization for more information.
Reactive coatings: If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These form an electrically insulating and/or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods obviously make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are famous for their mineral deposits), chromates, phosphates, and a wide range of specially-designed chemicals that resemble detergents (i.e. long-chain organic molecules with ionic end groups).
Aluminium alloys often undergo a surface treatment known as anodization in a chemical bath near the end of their manufacture. Electrochemical conditions in the bath are carefully adjusted so that uniform pores several nanometers wide appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to close, forming a harder-than-usual (and therefore more protective) surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area.
Passivation is extremely useful in alleviating corrosion damage, but care must be taken in its application. Even a high-quality alloy will corrode if its ability to form a passivating film is compromised. Because the resulting modes of corrosion are more exotic and their immediate results are less visible than rust and other bulk corrosion, they often escape notice and cause problems among those who are not familiar with them.
Certain conditions, such as low availability of oxygen (or high concentrations of species such as chloride which compete as anions), can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen. In extreme cases, the sharp tips of extremely long and narrow pits can cause stress concentration to the point that otherwise tough alloys can shatter, or a thin film pierced by an invisibly small hole can hide a thumb-sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment, which often includes ensuring that the material is exposed to oxygen uniformly (e.g., eliminating crevices).
Many useful passivating oxides are also effective abrasives, particularly TiO2 and Al2O3. Fretting corrosion occurs when particles of corrosion product continuously abrade away the passivating film as two metal surfaces are rubbed together. While this process does often damage the frets of musical instruments, they were named separately.
Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a minor alloying component (chromium, typically only 18%). Due to the elevated temperatures of welding or during improper heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to weld decay (corrosion of the grain boundaries near welds) in highly corrosive environments. Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of knifeline attack. As its name implies, this is limited to a small zone, often only a few micrometres across, which causes it to proceed more rapidly. This zone is very near the weld, making it even less noticeable.
Bob Dylan: Did he deserve the 2016 Nobel Prize in Literature?
The iconic artist is awarded the prize for "reinventing himself constantly" over a music career spanning 55 years.
Friday 14 October 2016 11:49, UK
Bob Dylan has won the Nobel Prize in Literature for creating "new poetic expressions within the great American song tradition".
Announcing the award from the Swedish Academy, Permanent Secretary Sara Danius said Dylan was a great poet in the English-speaking tradition.
"Of course he does, he just got it!" https://t.co/dzo9bkmRBP — The Nobel Prize (@NobelPrize), 13 October 2016
"He is a wonderful sampler, a very original sampler," she said. "He embodies the tradition and for 55 years now he has been at it, reinventing himself constantly and creating a new identity."
Professor Seamus Perry, chair of the English Faculty at Oxford University, described Dylan as "one of the greats," and the Tennyson of our times.
He said: "Dylan winning the Nobel was always the thing that you thought should happen in a reasonable world but still seemed quite unimaginable in this one.
"He is, more than any other, the poet of our times, as Tennyson was of his, representative and yet wholly individual, humane, angry, funny, and tender by turn; really, wholly himself, one of the greats."
The legendary singer-songwriter has sold more than 110 million records and played thousands of gigs during his career.
The 75-year-old laureate, who began his career as the leading light of the early 1960s folk boom before embracing electric rock 'n' roll, grew up in Minnesota.
Memorable Dylan Lyrics
:: Blowin' in the Wind
"How many roads must a man walk down before you can call him a man? How many seas must a white dove sail before she sleeps in the sand?"
:: Hurricane
"How can the life of such a man, be in the palm of some fool's hand? To see him obviously framed, couldn't help but make me feel ashamed."
:: Like a Rolling Stone
"You used to laugh about, everybody that was hanging out, now you don't talk so loud, now you don't seem so proud, about having to be scrounging for your next meal."
:: Shelter From the Storm
"I came in from the wilderness, a creature void of form, come in she said I'll give ya shelter from the storm."
:: Ain't Talkin'
"Ain't talkin', just walkin' up the road, around the bend. Heart burnin', still yearnin' in the last outback at the world's end."
Early in his career, Dylan, born Robert Zimmerman, was lauded for his acoustic protest songs such as Blowin' In The Wind.
His first album was the eponymous Bob Dylan released in 1962.
The Nobel literature award was the last of this year's prizes to be announced, with all six awards to be given out on 10 December.
Previous literature winners include author Doris Lessing in 2007 and playwright Harold Pinter in 2005.
Since 1901, the prize has been awarded every year to an author who has, in the words of Swedish chemist Alfred Nobel, produced "in the field of literature the most outstanding work in an ideal direction".
The Nobel prize is not the first time Dylan has won such a highly coveted award - he was also given the 2008 Pulitzer Prize for his contribution to American music and culture. | https://news.sky.com/story/bob-dylan-did-he-deserve-the-2016-nobel-prize-in-literature-10615471 |
(November 2013)…Peter Keith was pretty certain early-on that he would go into the culinary arts after high school. “When I was younger, I watched a series on TV called ‘Canada’s Next Great Chef.’ It was a series of cook-offs between young chef students from each province, culminating in a final competition held at NAIT. That determined the national champion. Seeing the determination, focus, and creativity of these young chefs inspired me to try my hand at competing,” he says.
Peter participated in the High School Culinary Challenge (HSCC) during its first two years, in 2008 and 2009, and after his first competition he was hooked!
He attributes much of his career success to the High School Culinary Challenge and the resulting scholarship. “I have had so many unique opportunities that stem directly from my involvement with the High School Culinary Challenge,” he explains. “Three incredible job placements, many great workshops with CCF Edmonton, and the chance to come back as a volunteer and then as a judge for more recent competitions.”
The biggest highlight of his career was participating in the 2012 Culinary Olympics in Germany as a member of Team Alberta, an opportunity, he says, that came to him because of the professional connections he made at the HSCC. | http://highschoolculinarychallenge.ca/graduate-profiles/peter-keith/ |
UNICEF works in some of the world’s toughest places, to reach the world’s most disadvantaged children. To save their lives. To defend their rights. To help them fulfill their potential.
Across 190 countries and territories, we work for every child, everywhere, every day, to build a better world for everyone.
And we never give up.
For every child, a fair chance
UNICEF began operations in Cambodia in 1952 and opened its first country office in Phnom Penh in 1973. More information on what we do in Cambodia is available at https://www.unicef.org/cambodia/.
How can you make a difference?
Purpose
The overall objective of the consultancy is to assess the current learning levels of grade 8 students in Cambodia. By comparing with the 2017 assessment results, the consultancy also aims to analyze to what extent learning performance has changed over the past five years. In doing so, the study will examine any losses or gains arising from school closures and/or lack of access to, or usage of, distance learning modalities. This will be done by providing technical assistance to the Education Quality Assurance Department (EQAD) for the grade 8 national assessment. It will also, to the extent possible, compare and analyze the grade 8 results with the grade 6 national learning assessment and the learning loss report published in 2022.
Work Assignment
The outputs of the consultancy will be used to increase knowledge of student learning outcomes and student learning loss during and after the COVID-19 pandemic. The outputs will be further used to stimulate dialogue among the CDPF partners, the education sector stakeholders, and the Ministry of Education, Youth, and Sports to identify priority areas for interventions going forward. Under the overall supervision of the UNICEF Chief of Education, with inputs from the CDPF partners, and under direct technical guidance from a UNICEF Education Specialist in close collaboration with the Education Quality Assurance Department, the consultant will undertake the following tasks:
- Identify and analyze existing data sets that can support additional analysis on student learning loss/gain, such as the previous grade 8 national assessment conducted in 2017 and the grade 6 national assessment, including the learning loss report published in 2022, based on sound methodological considerations.
- Support EQAD team to quality assure data collection tools, sampling methodology, and assessment questionnaires, aiming to address potential biases due to dropout for students.
- Support the EQAD team to carry out analysis to investigate the current learning level and learning loss/gain, identifying potential mitigating and intensifying factors resulting from school closures, based on appropriate and accepted statistical methodologies, as part of the grade 8 national assessment.
- Write a concise report on the results of the grade 8 national assessment, including current student learning levels as well as learning loss/gain caused by school closures and other factors associated with learning and the COVID-19 pandemic. The consultant will draw on the findings from both the 2022 and 2017 grade 8 national assessments and other existing data sets as appropriate, and as identified by the consultant, to enable an analysis of learning loss/gain.
- Provide continuous capacity building and assistance to EQAD technical officers
- Support EQAD to develop a presentation on findings for dissemination to wider education sector stakeholders, CDPF partners, and MoEYS leadership.
To qualify as an advocate for every child you will have…
Qualifications and Experience
- An advanced University degree (master's degree or above) in a relevant field such as education planning, management, economics, statistics, or similar
- At least ten years of experience working in the education sector and a solid understanding of the education development context in Cambodia or Southeast Asia
- Strong research and data analysis skills and demonstrated experience in learning assessment projects, learning loss estimations, methods, and analysis.
- Good knowledge of quantitative and qualitative analysis software packages (e.g. Stata, SPSS, R, Nvivo).
- Excellent written and oral communication skills in the English language.
- Demonstrated ability to provide on-the-job capacity-building support to government officials.
Languages
- Proficient in English language.
- Knowledge of another official UN language (Arabic, Chinese, French, Russian or Spanish) or a local language is an asset.
More info: https://jobs.unicef.org/mob/cw/en-us/job/555106
The Kestrel range is a family of small, electronic rotating-vane anemometers (wind meters). Each uses high-precision Zytel bearings and a lightweight impeller to provide accurate air flow measurements even at low speeds.
The liquid crystal display has large 9 mm digits and is backlit for a clear readout in low-light conditions.
Power is from an easily replaceable standard lithium coin cell battery.
Kestrel wind meters are made from high-impact injection-moulded plastic and corrosion-resistant materials, with the electronics fully sealed. They will float if accidentally dropped into water. A hard cover protects the unit when not in use, and a lanyard is provided for added security.
The Kestrel 3000 measures: | https://instrometweathersystemsltd.bigcartel.com/product/kestrel-3000 |
In April 2017, ASI published two Fact Sheets drawn from a publication developed through the Indigenous Peoples Advisory Forum entitled ‘Mining, the Aluminium Industry, and Indigenous Peoples’ (Download a copy of the report).
The first factsheet concerns the criteria for the identification of indigenous peoples in various regions throughout the world, and the second addresses the content of indigenous peoples’ right to give or withhold their free prior and informed consent (FPIC).
The need for guidance in relation to these two topics, in order to complement the case studies and proposed ASI indicators, was identified at the 2015 ASI Indigenous Peoples’ Experts Meeting in Chiang Mai.
Convening the ASI Indigenous Peoples Advisory Forum (IPAF) – April 2016
On 16-18 April 2016, indigenous organisations met in Kuantan, Malaysia, to discuss the convening of ASI’s Indigenous Peoples Advisory Forum (IPAF), which will be a standing forum in ASI’s governance. The objectives of the meeting included to share experiences across communities affected by exploration, bauxite mining and processing, and to review and further develop Terms of Reference for the IPAF and its role in ASI governance. Two representatives of the IPAF will serve on the ASI Standards Committee.
Around 20 participants were drawn from Australia, Cambodia, Guinea, India, Malaysia, Thailand, Suriname and the UK. The meeting was hosted by Jaringan Orang Asal SeMalaysia (JOAS) in partnership with the Asia Indigenous Peoples Pact (AIPP), the Forest Peoples Programme (FPP) and ASI. A field trip to the Kuantan port and to Kampung Mengkapur, a local Orang Asli indigenous community affected by mining and rubber plantation developments, took place on the third day and concluded with a cultural night in the village where participants shared traditional dances and music.
Meeting participants engaged in two days of detailed and productive discussions on ASI’s governance, assurance model and complaints mechanism and the future role of the IPAF. The meeting resulted in draft Terms of Reference for the establishment and functioning of the IPAF that will now be circulated for further input from indigenous organisations. In addition, a range of valuable input was brought forward by participants that will be integrated into the development of the ASI assurance model and complaints procedures.
ASI extends sincere thanks to the meeting hosts and organisers, to the Mengkapur Village for their kind hospitality, to the hard-working translators, and to all participants for their time, expertise and thoughtful input. The next IPAF meeting is anticipated in early 2017.
Download a copy of the April 2016 meeting report (pdf format).
Indigenous Peoples Expert Workshop – May 2015
In May 2015, indigenous peoples’ organisations gathered over four days in Chiang Mai, Thailand, to review the ASI Performance Standard (version 1) and discuss appropriate indicators to measure implementation of the standard in practice.
Input from participating organisations was drawn from India, Cambodia, Australia and Suriname, with additional advice drawn from indigenous peoples’ rights experts from the Philippines, Nepal, and Bangladesh. The meeting was facilitated by the Asia Indigenous Peoples Pact Foundation (AIPP) and the Forest Peoples Programme (FPP), in partnership with IUCN as the coordinating body for the preparatory stage of ASI.
The meeting resulted in a detailed set of recommendations related to appropriate indicators and associated guidance required for effective implementation and assurance of compliance, plus broader recommendations for continued engagement of indigenous peoples’ organisations and support groups in ASI governance, and for the planned ASI Complaints Mechanism (now complete).
ASI will work to integrate these recommendations into the development of its programs and develop formal structures for continued engagement with indigenous peoples’ organisations and rights experts on these issues.
Report: Mining, the Aluminium Industry and Indigenous Peoples – November 2015
As one of the outcomes of the Indigenous Peoples Expert Meeting in May 2015, ASI agreed to provide financial support for a publication that could further illustrate and share the experiences of indigenous peoples with the aluminium industry.
The resulting report provides a global overview of the challenges facing indigenous peoples and presents five case studies from Australia, Cambodia, Guinea, India and Suriname. The case studies reveal that indigenous communities are affected by both primary production activities, such as mining and associated infrastructure, and secondary processes such as smelting and energy production used to sustain operations. The report also provides guidance on Free Prior Informed Consent and the identification of indigenous peoples.
The report, published by the Asia Indigenous Peoples Pact (AIPP), Forest Peoples Programme (FPP) and International Union for Conservation of Nature (IUCN), was released during the UN Business and Human Rights Forum which took place in Geneva this month. ASI also attended the 3-day UN Forum to exchange experiences and lessons for good practice for business’ responsibility to respect human rights.
The ASI Performance Standard includes requirements for companies to adhere to key principles concerning the rights of indigenous peoples, including obtaining Free Prior and Informed Consent (FPIC) “where new projects or major changes to existing projects may have significant impacts on the Indigenous peoples associated culturally with or living on the relevant lands”.
ASI is currently working with FPP to convene a standing Indigenous Peoples Advisory Forum as part of its formal governance structure. This group would be comprised of representatives from Indigenous Peoples organisations and indigenous peoples’ rights experts that have connections to the aluminium value chain.
The Indigenous Peoples Advisory Forum will liaise with both the ASI Board and Standards Committee on matters relating to standards setting, the ASI Complaints Mechanism, and the broader involvement of indigenous peoples in ASI’s programs. | https://aluminium-stewardship.org/indigenous-peoples/ |
Liberia joins the global community to commemorate the World Antibiotic Awareness Week
Monrovia, 19th November 2017: Antibiotics are important medicines used to prevent or treat bacterial infections. Antibiotic resistance, a change in bacteria's response to antibiotics, has been recognized as one of the most serious threats to global health, food security and development. Common infections such as pneumonia, tuberculosis and typhoid fever are becoming harder to treat as antibiotics become less effective. Misuse and overuse of antibiotics, along with a lack of awareness, are among the factors contributing to antibiotic resistance. World Antibiotic Awareness Week (WAAW) is commemorated every year; this year it ran from 13th to 19th November. The theme for this year, “Seek advice from a qualified health professional before taking antibiotics”, is aimed at advocacy, promoting awareness of the global impact of antibiotic resistance, and encouraging best practices among the general public, health workers and key stakeholders. The Ministry of Health - Liberia and the World Health Organization (WHO) joined the global community to commemorate World Antibiotic Awareness Week for the first time in Liberia, to improve public awareness and understanding of antibiotic resistance.
The activities undertaken included:
- advocacy meetings involving religious groups, partners and key stakeholders;
- creating public awareness using simplified Liberian English on mass media including Liberia Broadcasting System, United Nation Missions in Liberia Radio and at Ministry of Information, Culture and Tourism’s (MICAT) weekly Press Conference;
- creating awareness at health facilities and schools on the dangers of antibiotic misuse in the 15 counties of Liberia;
- distribution of flyers with key facts;
- and raising awareness on the social media (Facebook).
Rev. John Sumo, Director of Health Promotion at the Ministry of Health, in his launching statement stressed that antibiotics are essential for the treatment of many infections in Liberia, including pneumonia. The misuse of medicines by the public contributes to antibiotic resistance. Rev. Sumo also indicated that the common practice of purchasing medications for common illnesses from drug stores and black baggers (drug peddlers) without advice from a trained health worker should be discouraged. He called on the public to seek advice from a qualified healthcare worker before taking antibiotics, as this will help prevent antimicrobial resistance (AMR).
Rev. Tijli Tyee, Chief Pharmacist of the Republic of Liberia, noted that when infections can no longer be treated with first-line antibiotics, patients must purchase more expensive medicines, leading to longer illnesses and a higher economic burden on families and societies. He informed the public that the government is working with partners to develop strategies, including a national antimicrobial action plan, to address current gaps.
Speaking to the media, Dr April Baller, Team Lead for the Infection Prevention & Control (IPC) and Case Management Team at WHO-Liberia, noted that tackling antibiotic resistance is a priority for WHO globally. WHO is supporting member states to translate the global AMR action plan into national AMR action plans.
Mr. Moses Bolongei, National IPC Officer at WHO, emphasized the importance of proper hand washing, cleaning instruments and healthcare environments and only prescribing antibiotics when indicated. He urged healthcare workers to educate patients on how to take antibiotics correctly. He further pointed out that the practice of sharing or using leftover antibiotics leads to antibiotic resistance.
Healthcare workers at various health facilities visited recommended that the awareness campaign be continued at all levels to educate and empower the public, and they pledged to provide more health education on medication safety, including antibiotic resistance. Stock-outs of essential medicines, including antibiotics, as well as the practice of self-medication by the public, were also highlighted by healthcare workers during the campaign as key challenges. Some patients indicated that healthcare workers provide them little education on medications; “many times, the doctors and nurses only give us pieces of paper to go buy medicines and antibiotics from the drug store and we do not know if the medicines there are good or not,” one patient explained at Duport Road Health Center in Monrovia.
The goal of a Sound Studies program, as with all programs offered by Sound Experience, can be summarized in a single word: awareness. We believe that people will protect what they learn to value.
Through the program, students recognize the interrelationships that exist among all life; identify the positive and negative impacts that they as individuals have on the Puget Sound ecosystem; recognize their ability to take action by raising others’ awareness and by making responsible choices; and understand the necessity of cooperation as a course to action.
Experience working together to raise sails and learn about all the factors that help make a traditional sailing vessel function.
This program is offered for grades 3 through 12. If you have a younger, older, or mixed age group, please call 360-379-0438 x2 for additional options to accommodate your group.
Please contact Amy Kovacs by phone or e-mail: 360-379-0438 x2 or [email protected].
Sound Experience offers four themed options for your Sound Studies programs. Within each option is a specific selection of Learning Stations that complement each other to create a cohesive learning experience. For a detailed description of each station within each theme, please see the Educator Packet.
Identify major environmental issues in the region and learn how we as a community can have an impact.
Learning Stations: Plankton, Life Aboard Ship, Marine Debris, Nautical Skills, Ocean Acidification.
Explore the many skills required to become a mariner on the Salish Sea. | https://www.soundexp.org/sail-with-us/schoolsyouth-groups/day-program-sound-studies/ |
An intrusion is a deliberate move into someone else’s territory — either literal or figurative. When your sister interrupts your conversation with that girl from math class, that’s an intrusion. If someone breaks into your home, that’s also an intrusion.
First used in the late 14th century, the noun intrusion derives from the Latin word intrudere, which combines the prefix in-, meaning “in,” and trudere, meaning “to thrust, push.” If someone reads your diary, that’s considered an intrusion of privacy. Ordering a Muslim woman to take off her veil would be considered an intrusion on religious beliefs. You may remember intrusion used in science class to describe molten rock that forms in an earlier rock formation.
Definitions of intrusion:
- entrance by force or without permission or welcome
- entry to another’s property without right or permission
- synonyms: encroachment, trespass, usurpation, violation
- types: inroad (an encroachment or intrusion)
- type of: actus reus, misconduct, wrongdoing, wrongful conduct (activity that transgresses moral or civil law)
- any entry into an area not previously occupied
- synonyms: encroachment, invasion
- the forcing of molten rock into fissures or between strata of an earlier rock formation
The average household size in Collier, PA is 2.73 people, and 84.1% of households own their homes. The average home value is $238,094; renters pay an average of $1,463 per month. 58% of households have dual sources of income, the median household income is $89,148, and the median individual income is $49,155. 3.7% of residents live at or below the poverty line, and 11% are considered disabled. 9.8% of residents are veterans of the United States armed forces.
Collier, PA is located in Allegheny County, has a population of 8,106, and lies within the greater Pittsburgh-New Castle-Weirton, PA-OH-WV metropolitan area. The median age is 49.6, with 9.9% of the population under 10 years of age, 10.4% between ten and nineteen years of age, 5.1% of residents in their 20s, 12.3% in their 30s, 13.2% in their 40s, 16.9% in their 50s, 15.9% in their 60s, 11.7% in their 70s, and 4.6% age 80 or older. 48.6% of inhabitants are men and 51.4% are women. 55.2% of citizens are married, 11.7% are divorced, and 21.5% have never married. 11.6% are widowed.
A modest outdoor fountain under 24 inches tall is an appropriate complement to a small garden, a patio table, or a spacious balcony. Be aware that these pieces can be heavy: check the weight before you buy and make sure your space can support it. Medium-size garden fountains, at 24-36 inches tall, suit any garden, veranda, or small yard without becoming the dominant decorative feature. Large garden fountains: if you have more room to work with, why not choose something bigger? At roughly 36-60 inches tall, these pieces make a significant enhancement to an exterior wall, courtyard, flower garden, or pool area. Extra-large outdoor fountains, at 60 inches and above, provide an attractive focal point for any space with plenty of room and are well suited to extended gardens or a large yard. We offer fountains to match your space and taste, from classical designs to contemporary aesthetics, from small tabletop pieces to large landscape features, along with a range of shapes and sizes of classic bird baths, wall fountains, and stands. From our wide selection of outdoor fountains you can build a little meditation spot to get away to, or a wonderful area to enjoy with family and friends. Fountain materials: if you are just beginning to think about upgrading your space, you have plenty of materials to choose from, and each choice produces a distinct effect. These gorgeous outdoor fountains may look as though they are made of solid concrete or metal, but cast stone is actually a blend of cement, sand, water, and cellulose fibers.
Beyond Changing Opinions: The Enduring Impact of Debate
Even if you make no progress in changing anyone else’s mind, you may end up changing your own. Debate is like a stone that sharpens a blade. By forcing you to defend your arguments in a rigorous way, it compels you to think more deeply about them.
Are you among the millions of Americans who, in the year or so since the 2016 presidential election, have found themselves shaking with frustration at the refusal of a friend, family member, or colleague to accept a premise that seems beyond dispute? Have you argued, over Thanksgiving dinner or on social media, until one of you stormed away huffing and red in the face? Do you despair at the seemingly hopeless task of reconciling your own beliefs with those of the other half of the country? Well, I have bad news, good news, and a bit of advice for you.
The bad news is that logical argumentation is not a particularly effective tool for changing the mind of a person you’re arguing with. I say this as a dedicated debater and teacher of debate. Unfortunately, what we know of human psychology suggests that directly challenging a person’s beliefs can even have the opposite effect, further entrenching those beliefs and making them more resistant to change. People circle the wagons, so to speak, when they feel attacked. They close their minds, and in the process of arguing they often come to hold their original beliefs even more strongly.
The good news is that it can be done. Changing someone’s mind isn’t easy, and you should be prepared to fail any time you take up the task, especially if you’re arguing with a stranger on social media. But to the extent that it’s possible, it requires patience and a clear goal.
We must understand a debate – the direct clash of ideas and arguments – as a single moment in a larger process. The goal is not to provoke an abrupt, dramatic admission of defeat, but rather to plant a seed that might, over time, spread a productive sort of doubt and prompt further reflection on the part of your interlocutor. In a public forum like social media, you may also sow this seed in others who see the debate, even if they do not actively engage in it.
The goal is not to provoke an admission of defeat, but rather to spread a productive sort of doubt in your interlocutor.
People are open to persuasion, and logical argumentation can be the impetus for it, but it must be paired with time for reflection. I know that debate can prompt this kind of reflection, because it happened to me.
Debate has been important to me for almost as long as I can remember. I was a nationally competitive debater in high school and college. As an undergraduate, I worked for the Chicago Debate League teaching argumentation skills to Chicago Public Schools students. When I graduated, I started a similar organization in Boston and spent years working with public high school students there. I currently help the Bay Area Urban Debate League promote debate to hundreds of middle and high school students.
As a teenager, I described myself as an Objectivist. I read Ayn Rand’s Atlas Shrugged after seeing it on a list of the best novels of the 20th century – I had no idea what it was about or that it was really more of a philosophical manifesto than a novel. But I was taken in.
It spoke to me, as a smart kid who often felt that my intelligence was a social liability among my peers. Rand coupled her unapologetic elitism with an absolutist approach to property rights and the virtue of selfishness. It was a sort of philosophical libertarianism, distinct from the Republican party of the time in that it also advocated a laissez-faire government stance on social issues such as gay marriage (Rand held that the government had no place sanctioning marriages in the first place, really).
At an age when I was eager to rebel against my left-leaning family and community, I subscribed eagerly to these notions and soon was making a nuisance of myself to anyone who would listen (and, I imagine, many who would have preferred not to) as I railed about the injustice of the antitrust suit against Microsoft and dreamt of becoming a corporate lawyer so that I could defend tobacco companies against the tyrannical government. I credit debate with breaking Rand’s hold on my young mind, by exposing me to — and forcing me to reflect on — some perspectives I might never otherwise have encountered.
The national debate topic for my senior year of high school had to do with education policy. My debate partner and I began the year advocating for the use of busing to desegregate public high schools.
While this policy may not have squared with my political philosophy at the time, strategically, it was a sound choice. After all, the harm of segregation was hard to dispute, so we didn’t worry about losing debates there. Where we expected to be vulnerable was on the question of whether we could actually solve the problem of segregation, given the failure of previous efforts.
This already was a prompt for some research and reflection. My partner and I, white students at a public high school in a solidly middle-class suburb of Baltimore, broadly agreed that segregation was a bad thing. But the truth was that we knew virtually nothing about why it existed or the history of attempts to combat it. We had to learn that history in order to formulate a defensible plan for fixing it.
We learned about “white flight” – white families migrating from racially mixed cities to homogenous suburbs in order to avoid desegregation orders. We learned that the Supreme Court had ruled, in the case of Milliken v Bradley, that desegregation orders could not cross school district lines. In other words, if there were majority-black and majority-white schools in the same district, it would be permissible to bus students between those schools for the purpose of integrating them. But if those schools were in different districts, then they could not be desegregated in this way. This was what enabled white families to “escape” desegregation orders by moving to segregated suburbs.
I thought of my own family history. Both of my parents grew up in Baltimore City and attended Catholic schools. I myself was born in the city, though we moved to the county when I was 3, so that my brother and I could attend better public schools. The high school I attended was actually not overwhelmingly white, but my classes – I was in a “gifted and talented” track – certainly were. Tracking students within schools, we learned, was another trick to subvert desegregation.
Anticipating that critiques of our plan would focus on the failures of past segregation efforts, we decided to advocate for overturning the Milliken v Bradley decision and allowing inter-district desegregation orders. This would effectively render toothless many of the lines of attack that we expected other debaters to have prepared against desegregation.
This proved to be a sound strategy, and we enjoyed a good amount of success with this case… until we got blindsided, in much the same way that we’d sought to blindside others. As well prepared as we were to defend the effectiveness of our plan in combating segregation, we were less prepared to defend the claim that segregation was a problem in need of fixing.
That’s not to say that we were caught completely off-guard – that would be amateurish. But we were prepared for the stock conservative objections, those related to local control, individual choice, and freedom of association. What we were not prepared for were arguments rooted in Afrocentrism, which is exactly what one clever team hit us with.
I’d learned a bit about folks like Marcus Garvey and the Black Panthers in history class, but to me they represented failed ideas long consigned to the dustbin of history. I knew nothing about Afrocentrism as a contemporary ideology and was completely unprepared to refute the claims that busing stigmatized black schools, put an unfair burden on black students, and disrupted black communities and their culture.
When I heard these arguments, I was first confused, then flabbergasted. But a good debater is never speechless. Like a hitter running out a pop fly, my partner and I did our best to defend our case, but it was hopeless. I look back on this as one of the more lopsided losses of my career.
Even so, my mind wasn’t changed. When it came to Afrocentrism, the only thing I was convinced of was that I needed to research and understand it before our next debate. It’s one thing to lose to an argument you never saw coming, but top debaters aim never to lose to the same argument twice. Either you find good answers, or you accept that there are no good answers and find a new, more defensible position.
Either you find good answers, or you accept that there are no good answers and find a new, more defensible position.
I began reading about Afrocentric philosophies of education. I learned that there were Afrocentric educators, Afrocentric curricula, and proponents of all-black Afrocentric schools.
My research also led me to Critical Race Theory, an approach to legal analysis that begins with the assumption that racism infuses the American legal system and that all aspects of the law must be approached with this in mind. I found legal scholars of color critical of virtually every landmark civil rights law and case.
They argued, in essence, that the law cannot genuinely advance the interests of an oppressed minority. The real beneficiaries of affirmative action were white women, not black people. The real beneficiaries of court-ordered busing were the white owners of bus companies, not the black students who spent hours every day on buses to arrive at schools where they were out of place and unwelcome. Even Brown v Board of Education, they argued, was ultimately about geopolitics, not civil rights; the United States was seeking to combat Soviet propaganda that pointed to segregated schools as an example of the evils of capitalism.
I’m not necessarily presenting any of the above as true (or false, for that matter). But it’s one thing to understand and disagree with an argument, and another thing entirely to never even have considered that argument, which is the camp I’d been in.
A single debate didn’t change my mind on the spot, but it was the impetus for me to explore a whole world of arguments, ideas, and perspectives I’d never considered: that race shapes the perceptions and the experiences of all Americans; that ignoring race will not make it go away but rather more deeply entrench inequality; that racism is a matter not only of individual belief but of institutions and structures that maintain inequality without the active direction of any individual.
A single debate didn’t change my mind on the spot, but it was the impetus for me to explore a whole world of arguments, ideas, and perspectives I’d never considered.
I found no good answers to these critiques in any of the libertarian philosophy with which I was so enamored. And so, over time, my previous beliefs eroded. I found their underlying tenets increasingly difficult to maintain. How could people who truly believed in property rights above all else not be in favor of compensating black Americans for the myriad ways in which their property rights were and continued to be violated? How could I take seriously the claim that America was a meritocracy and all success earned through hard work, when I could so clearly see that the playing field was not level?
It certainly didn’t hurt that adopting this “Critical Race Theory” persona still satisfied my desire for teenage intellectual rebellion. But there’s no doubt that a single debate set into motion a process of reflection that ultimately led me to change my views fundamentally, and a lot of what I learned in that time still informs how I think today.
Finally, it’s worth noting that even if you make no progress in changing anyone else’s mind, you may end up changing your own. Debate is like a stone that sharpens a blade. By forcing you to defend your arguments in a rigorous way, it compels you to think more deeply about them, to research, and perhaps even to come to the difficult realization that they are in fact indefensible and therefore must be changed. | http://argumentcenterededucation.com/2018/01/25/beyond-changing-opinions-the-enduring-impact-of-debate/ |
ALA Position: The American Library Association asks all Members of Congress to support level funding of $27 million in FY2019 for the proven and effective Innovative Approaches to Literacy program.
What does this bill mean for libraries and the public?
- Innovative Approaches to Literacy (IAL) grants fund literacy programs in schools nationwide. Fully half of the funding is targeted to libraries in underserved schools via the Improving Literacy through School Libraries program.
- Exposure to books is an essential part of early childhood literacy and greatly increases a child's likelihood of success in high school, college and in 21st century jobs.
- Research has proven that access to quality literacy resources has a direct and positive relationship to lifelong reading behavior and motivation, and encourages families to read together.
- IAL is the only source of federal funding for school library materials targeting literacy.
Level FY2019 funding for IAL in the Labor, Health and Human Services Appropriations bill will permit libraries to continue serving millions of school children in every state in the country.
- Libraries will continue to be the leading federal vehicle for encouraging young children to read and to provide them with access to critical literacy materials. Successful IAL programs across the country have provided books for children to take home—often the only books a child may have in their home. Other successful programs have helped purchase e-readers, expand school library access, and replace outdated library material.
- The most recent available survey of teachers documented that 82% of elementary school students, 87% of middle school students, and 80% of high school students most often find their independent reading books in school libraries.
SUPPORT THE LIBRARY SERVICES & TECHNOLOGY ACT (LSTA) IN THE LABOR, HEALTH & HUMAN SERVICES, EDUCATION & RELATED AGENCIES APPROPRIATIONS BILL
ALA Position: The American Library Association asks all Members of Congress to support funding the Library Services and Technology Act at $189.3 million for FY2019. Including $189.3 million in LSTA funding in the FY2019 "Labor, HHS" Appropriations bill will keep libraries contributing at the core of tens of thousands of communities in every state in the nation. Read ALA's Issue Brief on appropriations for more information.
What does this bill mean for libraries and the public?
- The Library Services and Technology Act (LSTA) is the only federal funding program for libraries. The majority of this funding goes to each state through the Institute of Museum and Library Services in the form of a population-based matching grant.
- Each state determines how best to use its own LSTA funding, and this document gives a state-by-state breakdown of how funds are used. States previously have issued grants to libraries, for example, to: update technology resources and services, create summer learning programs, assist job seekers to build resumes and apply for jobs, and develop services for community engagement.
- Unfortunately, because federal LSTA funding is matched with state funds, if LSTA funding were reduced or eliminated, libraries would lose both sources of revenue.
- Funding LSTA at $189.3 million for FY2019 would continue critical existing programs and launch a new national digital platform that will connect patrons to services and online collections enabling new forms of inquiry and exploration at any time of the day.
- With LSTA's help, libraries and highly-skilled librarians will continue to serve a vital role in communities by providing free access to all types of information, job skills training, and computing services. The demand for such services has grown in concert with the need to apply online for many jobs and government services, especially for Americans who don't have adequate or affordable broadband in their homes. Even as the economy shows signs of improvement, millions of patrons continue to turn to their local library for assistance and access to essential information of every kind.
- LSTA also supports: improved access to library services for Native Americans, Alaska Native Villages, and Native Hawaiians; National Leadership Grants to support activities of national significance that enhance the quality of library services nationwide and fund pilot programs for coordination between libraries; and, the Laura Bush 21 Century Librarians program, which develops and promotes the next generation of America's indispensable librarians.
The Institute of Museum and Library Services (IMLS) is the only federal agency whose focus is libraries. It provides about $210 million in funding per year to the nation’s libraries, including the LSTA funding mentioned above, but also via competitive grants. The White House budget for FY19 proposes the total elimination of IMLS and federal funds for libraries. The ALA Position is that IMLS must not be eliminated. For more information, read ALA's initial press release about this issue, or visit ala.org/advocacy/fund-libraries
Support Broadband & Libraries' Role in Providing It
Broadband Internet access is essential for survival in the 21st century. Teens rely on it to complete school assignments, take practice exams, explore career options, apply for jobs and college and more. The nation's 17,000+ public libraries and 88,000+ school libraries are where teens go to access the Internet. Read ALA's Issue Brief on broadband for more information.
- Libraries must be included and supported in all kinds of federal broadband initiatives, programs and funding.
- Libraries can be leveraged as part of broadband adoption and upskilling efforts.
- Congress must support legislation that bans blocking, throttling, degrading, or paid prioritization of Internet traffic and includes clear enforcement authority.
Eight Mindful Steps to Happiness (pp. 233-236)
In the last talk, we discussed the first level or stage of full concentration: the first jhana.
There are three other levels of jhana briefly described below.
The Jhanas
Recall that there are five factors necessary to achieve the first jhana:
- Initial application of thought
- Sustained application of thought
- Joy
- Happiness
- Concentration
In the second jhana, the first two factors needed to achieve the first level (initial and sustained application of thought) disappear. Thus there is no thinking involved; only the three factors of joy, happiness, and a stronger one-pointedness of mind remain.
In the third jhana, the factor of joy fades away. Happiness remains, and equanimity grows stronger, as does concentration (one-pointedness).
In the fourth jhana, happiness fades, leaving only mindfulness and equanimity.
Practice and progression with the jhanas builds concentration. Concentration and mindfulness work together to see the true nature of reality. It is very worthwhile to strengthen your concentration.
“At this point you have gained the real force of concentration. The mind has consolidated, become pure, bright, free of hindrances, and steady. It is malleable yet imperturbable, strengthened and sharpened for its most important task. When you focus that concentrated mind on an object, you see that object as it is. In other words, the perfectly concentrated mind can penetrate into the true nature of reality.” –Bhante G., P. 236
Reflection
- Read this talk every day and reflect on it.
Meditation
- Try practicing concentration meditation daily.
- Don’t set reaching the first jhana as a goal, and don’t try to reach the other jhanas until you have been able to attain the first jhana many times. Simply follow the steps as outlined in these two talks and see what happens.
“We are working with our law enforcement partners to assess the situation & determine the cause, with public safety the Bureau’s highest priority,” the FBI tweeted early Sunday.
FBI personnel have responded to the scene of an explosion in Allentown. We are working with our law enforcement partners to assess the situation & determine the cause, with public safety the Bureau's highest priority. (1/2)
“I heard a big explosion, I ran to the window to see what it was, I didn’t see anything. But after that, I seen the fire,” recalls eyewitness Antonia Santiago. “People couldn’t even get their cars out. It’s crazy. It’s got me nervous.”
Officials say the investigation area spans Seventh and Eighth streets and Linden and Chew streets.
Authorities are asking the public to avoid this area and for residents to shelter in place to remain safe while the investigation continues. | |
Unscrambling Socialism from Our Economic Order: Wisdom from Leonard Read
Any adept student of human action knows that it isn’t enough merely to “drop anchor” in today’s sea of socialism, that is, to stop where we are. The U.S. is already playing host to more parasitic socialism than the economy can sustain … [and] there comes a point in time in growth when any parasite will destroy its host.
That statement seems far truer today than nearly any time in my memory, where what Albert Jay Nock once called octopean government has seemingly grown even more appendages and used them more aggressively against citizens. And calling socialism a parasite on the economy is insightful because it helps us see how socialism resists planned efforts to displace it:
Consider socialism’s double-barreled definition: the state ownership and control of the means and/or the results of production. Give it entry into any area and socialism insinuates itself into the warp and woof of the activity; it becomes embedded in the mores, the traditions, the way of life. Immediately there develops a stubborn vested interest to assure its continuance; it infects the economic, social, and political bloodstream.
Read then turned to the thorny question of how to successfully rid ourselves of this parasite, now even more important after several decades that have left us far more socialized than when he wrote:
How are we to break out of … any of the … many socializations? … thousands are asking how to engineer the dismantling of this labyrinthine growth, but not a one can blueprint the procedure; none ever will. Drafting a plan for riddance of socialism is like trying to find a formula for unscrambling an egg.
Read explained why a socialistic centralized plan will not work to defeat socialistic parasitism:
Escape is predicated on … dismiss[ing] the thought of engineering or planning socialism’s uprooting, and … an expanding appreciation of how a seeming chaos of initiatives and skills will miraculously combine, when free to do so, into ordered patterns of creative phenomena. In short, a growing comprehension of the miracle of the free market—a comprehension all too rare today.
Using the example of how someone trying to design how mail would be delivered in a free market, Read goes further:
Unable to think how he would deliver mail … unable to design or engineer the project in his own mind—that is, being unaware of how the market really works—he will likely draw the socialistic conclusion…. Mail delivery is a job for government.
[But] No one person can or ever will attain to such comprehension … free market mail delivery would be accomplished, not by any masterminding but in precisely the same manner as has voice delivery and all other creative phenomena: millions of tiny “think-of-thats,” in a chaos which defies cataloguing, ending up in … fantastic order.
It is this seemingly chaotic (in the sense of not being subject to anyone’s unilateral control) order that has advanced people’s well-being, rather than unilateral controls and those who impose them, which undermines it:
Natural and spontaneous configurations of creative ideas account for our economic and social blessings.
However, no one knows how to correct all the embedded socialist errors that have crept into almost every nook and cranny of society. In fact, the task is harder than making a pencil, Read’s most famous illustration: where individual property rights are protected, voluntary arrangements need only be worked out among those whose rights are involved in making a pencil, while every socialistic restriction hinders all efforts to instead produce nonparasitic results:
The point we must bear in mind is that socialism itself is but the political outcropping of a plurality of false ideas, notions, passions, plausibilities and emotions … all the way from doing good to feathering one’s own nest—all at the expense of others! To conclude that anyone can engineer or blueprint the eradication of these errors is no more valid than to believe that someone knows how to make a jet plane or to deliver mail. Plan an erasure of these myriad misconceptions from the minds of millions of unknown persons? … To fasten the eye on a design or an organized scheme or a blueprint to bring about socialism’s demise is to fritter away one’s time and energy.
Read suggests that rather than trying to plan the demise of socialism, we should instead focus our attention on building a good society:
A good society cannot emerge from the drafting board. Rather the good society is a dividend which automatically flows from antecedent virtues and talents…. And the shrinking or the withering away of socialism will come about, if at all, as a natural by-product of numerous antecedent actions that are meritorious in and of themselves.
Unsound ideas lead to socialism, just as sound ideas make for a good society.
Unsound ideas will produce their bad fruit until sound ideas prove acceptable. In short, it’s largely a matter of displacement or replacement. Any idea, right or wrong, will revert to an insignificant role if the value judgements of men do not approve it.
So how would a focus on a good society displace socialistic errors? Read continues:
Unsound ideas will lose authority whenever their intellectual source peters out, that is, whenever the intellectual source of sound ideas attains a dominant position … [so] socialism will shrink into historical oblivion whenever little or nothing is done to preserve it.
With the unsound ideas which underlie socialism: take no steps to preserve them but, instead, fasten attention on the sound ideas which give support to the free market and the good society. When we pursue high purposes, natural forces do their clean-up work for us as a dividend for having set our sights aright.
[Consequently, under this approach] We do not need to know how to mastermind or blueprint the unscrambling of socialism; we need to know little else than how to win Nature as an ally … with men of good faith, she will cooperate.
Leonard Read believed that a single plan for eliminating socialism everywhere was a pipe dream, because no one knows enough to make such a plan effective. In contrast, better ideas and better results from freely chosen, voluntary arrangements, when and where they are implemented, have the potential to unfasten parasitic government from draining our lives, liberties, and properties in those areas, and inspire other similar efforts. It is still a daunting task, but only principles of self-ownership and freedom, in the minds and hands of principled adherents, make it possible at all. | https://sevenhaventrade.com/2022/09/29/unscrambling-socialism-from-our-economic-order-wisdom-from-leonard-read/ |
- Rounded neck, with sleeveless arm openings.
- Soft jersey knit, with comfortable stretch.
- Flowy and flattering, the swing silhouette has a flared A-line shape you can dress up or down. | https://www.proudnotloud.com/product-page/sleeveless-jersey-swing-dress-w-trump-to-flag-flip-sequins-design-free-pin |
The nominations for the 90th Academy Awards, which will be held on March 4, were announced on Tuesday. The Shape of Water, Guillermo del Toro’s fairy tale romance, led all films with a surprising thirteen nominations. Dunkirk, Christopher Nolan’s World War II epic, received eight nominations, while Martin McDonagh’s increasingly controversial Three Billboards Outside Ebbing, Missouri, received seven.
But what’s most notable about the nominees is how few locks there are. Sam Rockwell (for Three Billboards) is a clear favorite in a shallow supporting actor pool. Gary Oldman, who played Winston Churchill in Darkest Hour and has never won an Oscar, is a similar favorite in the lead actor category; the more deserving Daniel Day-Lewis (for Phantom Thread) is the only other nominee that has a chance.
The supporting actress nominees are deep: Nominees include Allison Janney (I, Tonya), Laurie Metcalf (Lady Bird), and Octavia Spencer (The Shape of Water). The lead actress category is similarly vexing. Frances McDormand, who won a Golden Globe for her performance in Three Billboards, could be considered a favorite, but Meryl Streep (The Post), Saoirse Ronan (Lady Bird), Margot Robbie (I, Tonya), and Sally Hawkins (The Shape of Water) are also nominated.
There are nine nominees for Best Picture. The leading films of the season—Call Me By Your Name, Lady Bird, The Post, The Shape of Water, Three Billboards—are all nominated. Get Out, which was largely shut out at the Golden Globes, garnered a nomination, as did Phantom Thread, which could act as a spoiler in the six categories it was nominated in. One of those categories is Best Director, where Paul Thomas Anderson will compete with del Toro, Nolan, Get Out’s Jordan Peele, and Lady Bird’s Greta Gerwig.
There were a few surprise omissions. The Florida Project and The Big Sick, two audience-favorite indies, received only one nomination each. The Disaster Artist, a favorite at the Golden Globes, also received only one nomination in the wake of multiple accusations of sexual harassment against its director and star, James Franco. | https://newrepublic.com/article/146718/oscars-race-wide-open |
Identity Theft Prevention Red Flags
What is the Red Flags Rule?
The Red Flags Rule was issued by the Federal Trade Commission (FTC) in 2009 to help organizations detect, prevent, and mitigate identity theft in their daily operations. It requires banks and other institutions that maintain covered accounts to implement a formal Identity Theft Prevention Program.
The University's Identity Theft Prevention Program (ITPP), section 4.0 of the UO Fiscal Procedures is available on the UO Fiscal Policy Manual site. The program was originally developed by the former Oregon University System and was refined in 2020 by the UO Red Flags team.
Red Flags are suspicious patterns or practices encountered in our daily operations that indicate the possibility that identity theft may occur.
Identity Theft is the unauthorized use of another person's personal identifying information.
UO Red Flags Team
- Registrar, Jim Bouse
- BAO Student Financial Services, Krista Borg
- Payroll, Ben Kane
- BAO Financial Services, Rob Freytag
- University Health Center, Volga Koval
- BAO Information Systems, Mark McCulloch
- Information Security Office and IS Accounts Team, Cleven Mmari
- Financial Aid, Morgan Ramsey Daniel
- UO Card Office, Tamarra White
The UO ITPP identifies several Identity Theft Prevention Units (ITPUs) across campus whose services and activities carry some risk of identity theft. ITPUs are responsible for:
- Maintaining written procedures for detecting, preventing, mitigating, and reporting instances of identity theft.
- Ensuring staff engaged in these activities participate annually in identity theft prevention training.
- Verifying the identity of persons before providing services or sharing personal identifying information.
- Ensuring service providers with access to identifying information are contractually obligated to safeguard it.
- Participating each year in a survey conducted by the program administrator to ensure the aforementioned responsibilities are carried out.
UO Materials:
- 2020/2021 Annual Report, to be completed by Identity Theft Prevention Units (ITPUs).
- Report a Security incident, if a Red Flag is detected.
- UO Identity Theft Prevention Program document (contact program administrator Mark McCulloch, [email protected]).
- UO Identity Theft Prevention Red Flags Training in My Track Learning Library
- Red Flags Team Charter
External Resources:
- Full text of the FTC's Red Flags Rule FTC 16 CFR Part 681, as amended by the Red Flag Program Clarification Act of 2010 (effective Jan. 1, 2011)
- SEC and CFTC's final Identity Theft Red Flags Rule rule (effective May 20, 2013)
- Other FTC resources: | https://ba.uoregon.edu/content/identity-theft-prevention-red-flags |
Dr. Kaplan employs advanced micro-invasive, minimally invasive and muscle sparing surgical techniques designed to provide his patients with the quickest, most pain-free return to their active lifestyle after joint replacement surgery.
Dr. Kaplan’s focus on soft-tissue preservation has allowed his patients to experience an expedited and more comfortable post-surgical recovery period. Patients spend significantly less time in the hospital and return to activity, exercise, work and play faster.
Dr. Kaplan understands that education is the cornerstone of providing the best care for his patients. By staying up to date with technological innovations and developments in the field of Orthopaedic Medicine Dr. Kaplan is able to employ cutting edge techniques aimed at improved patient outcomes and unparalleled patient satisfaction.
After receiving his undergraduate degree from Miami University in Oxford, Ohio Dr. Kaplan went on to earn his medical degree from The University of Cincinnati College of Medicine. He then completed his Residency and Internship in Orthopaedic Surgery at Henry Ford Hospital in Detroit, Michigan. Dr. Kaplan’s advanced surgical training in hip and knee replacement continued during his Lower Extremity Adult Reconstruction Fellowship at Beaumont Hospital in Royal Oak, Michigan. Throughout his practice Dr. Kaplan has continued to advance his knowledge and skills in adult hip and knee replacement procedures by attending numerous surgical training courses, national meetings, educational symposia and academy proceedings.
Dr. Kaplan is board certified by the American Board of Orthopaedic Surgeons (www.abos.org). He is a fellow member of the American Academy of Orthopaedic Surgeons (www.AAOS.org) as well as the American Association of Hip and Knee Surgeons (www.AAHKS.org). He is also a member of the Detroit Orthopaedic Association and the Michigan Orthopaedic Society.
Dr. Kaplan’s private orthopaedic practice is located at the prestigious Oakland Orthopaedic Surgeons in Royal Oak, Michigan. His operating privileges are at Beaumont Hospital – Royal Oak.
Dr. Kaplan also maintains an academic practice at the Beaumont Orthopaedic Institute within Beaumont-Royal Oak Hospital. He is an Assistant Professor of Orthopaedic Surgery at Oakland University School of Medicine. It is here, that Dr. Kaplan is intimately involved in the training of orthopaedic residents and fellows as well as educating medical students.
Dr. Kaplan is a consultant/educator for Stryker Orthopaedics. He travels nationally and internationally educating his fellow orthopaedic surgeons on the latest implant technology and surgical techniques. He is also a consultant for Stryker Surgical, where he is involved in the design and development of the latest surgical instruments, hospital equipment and orthopaedic technology.
Location: Salt Lake City, UT
The Enterprise Desktop Engineer will be responsible for the review and engineering of emerging and innovative technologies for CXone's end user computing. The primary focus is on planning, design and transition of the end device environments, including deployment, management, security and support of managed endpoints. Additional responsibilities include training and mentoring staff.
Financial Responsibilities:
- Ensures access to systems is accurate and in accordance with job descriptions.
- Ensures equipment is accounted for, purchased and disposed of according to company policy.
- Provide quotes for purchasing desktops, laptops, computer parts and software.
- Computer and software purchasing.
- Acts as an internal consultant across the enterprise, providing an engineering perspective and vision, in addition to deep analysis and problem-solving where needed.
- Identifies important potential technologies and approaches to address current and future needs within the enterprise, and evaluate their applicability and fit, as well as leading the definition of standards and best-practices for their use.
- Is able to provide hands-on application of technologies and processes to support proof-of-concept development, review of critical systems and training to other teams within the organization.
- May be required to function as a technical lead on projects of any size as necessary.
- Communicates detailed technical information in both written and verbal form across a wide range of audiences, including business stakeholders, users, developers, project management, and other architects.
- Collaborates with colleagues, customers, vendors, and other parties to understand and develop architectural solutions across the enterprise.
- Leads and/or facilitates the development of appropriate standards and best practices, in addition to the paths by which they may be achieved and monitored.
- Develops a sound understanding of existing systems and processes, their strengths and limitations, and the current and future needs of the environment in which they exist.
- Provides vision on how they may be improved and developed.
- Understands and explains the interactions between systems, applications, and services within the environment, and evaluates the impact of changes or additions to the environment.
- Participates in the evaluation and/or selection of solutions or products, including requirements definition, vendor and product evaluations.
- Acts as a local expert for areas of domain expertise.
- Acts as an escalation point for technical teams and provides hands on expertise.
- Applies strong analysis, research and problem-solving skills across a wide array of systems and situations, including those which may be unfamiliar, in order to address critical issues.
- Follow the company Code of Ethics and CXone policies and procedures at all times.
- Communicate in an effective and professional way with customers in and outside of CXone.
This organization reserves the right to revise or change job duties as the need arises. This job description does not constitute a written or implied contract of employment.
Required Education, Systems, Experience, and Specific Job Related Skills
Education:
- Bachelor’s Degree in Computer Science, Business Information Systems or related field. Work experience can be used in lieu of degree.
- MCSA and VCA6-DCV
System(s): Administration rights on all corporate servers, including:
- Billing
- Building Security
- Financial
- Test
- Development
- Web
- FTP
- Database
- Sales and marketing
- 6+ years providing end user workstation support and/or desktop engineering
- 6+ years deploying and supporting Windows workstations in an Active Directory environment
- 6+ years working with end user workstation management platforms such as Microsoft SCCM or similar products
- 4+ years desktop transformation and Messaging environment implementations
- 4+ years Troubleshooting and enabling configuration management with Group Policy Objects (GPO)
- 4+ years working with Messaging Technologies (e.g. Microsoft Exchange, Office 365)
- 6+ years working with collaboration and End User Productivity tools (e.g. Microsoft Office, SharePoint, Skype for Business, Jabber, Cisco Unified Communications)
- 2+ years Virtualization of desktops (VMware View, Citrix XenDesktop, Microsoft RDS), virtualization of applications (VMware ThinApp, Citrix XenApp, MS App-V), thin computing technologies (Terminal Services, Citrix, Wyse, VMware) and workspace management solutions (AppSense, RES)
- 2+ years using scripting technologies such as VBScript, PowerShell or AutoIT Script
- Experience in an environment requiring Change Control
- Demonstrate strong analytical and problem solving skills
- Knowledge of current industry protocols, standards/ best practices and technologies for end device management including mobility
- Excellent communication skills (both written and oral)
- Experience working within a team of IT professionals: taking and following direction and completing tasks and assignments in a timely manner with a positive attitude
- Ability to adapt and work with people across multiple sites
- Ability to constantly adapt to a fast paced, ever changing technology landscape filled with smart phones, virtual desktops, and a wide array of desktop computing hardware/software
- Configuration, management and design of HP/Compaq environments
- Microsoft Office 365
- Unify phone switch experience
- Knowledge of layer 2 and layer 3 switching
NICE is committed to provide an environment based on equal opportunity for all qualified applicants and employees. It is the policy of NICE to afford equal employment opportunities to qualified individuals, regardless of age, race, color, creed, religion, citizenship, ancestry, national origin, sex, gender, pregnancy, mental or physical disability, marital status, veteran status, service in the Armed Forces, sexual or affectional orientation, atypical hereditary cellular or blood traits, genetic information, status as a victim of domestic or sexual violence, and/or any other status protected by any applicable federal, state and/or local statute or regulation. | http://jobs.jobvite.com/incontact/job/oxP7gfwt |
The Process:
Metal finishing encompasses a variety of processes that can treat the base metal, such as passivation of stainless steels, or apply one or more coatings on the surface to perform a sacrificial function when exposed to the environment, to meet an engineering requirement, to perform a cosmetic function, or a combination of some or all of these. Additionally, pre-treatments for painting are also within the domain of metal finishing.
Most metals, when left in their original condition, will react with oxygen and other components of the environment and form oxides, such as common rust on steels and white corrosion on aluminum. Additionally, metals generally do not possess the desired properties for the end product, from either an engineering or cosmetic perspective.
Plating includes electro-plating, electroless plating, immersion coatings, conversion coatings, and mechanically applied finishes.
Anodizing is unique in that a controlled oxide is formed at the surface of the aluminum.
Painting and powder coating are non-metallic coatings.
THE TYPICAL ELECTROPLATING PROCESS INCLUDES:
Removing oils and other contaminants from the surface of the part.
Removing oxides from the surface of the metal.
Plating the desired metal such as nickel or zinc utilizing direct current, or an electroless method, to deposit metal on the surface, or an immersion method to deposit a coating such as chemical film or phosphate.
Baking for hydrogen embrittlement relief if required.
Applying a supplementary coating to the plating. | http://www.pmforlando.com/capabilities.html |
Slowing the spread of superbugs, speeding up development and access
On Tuesday 28 June 2022, the Access to Medicine Foundation will host an expert panel discussion in collaboration with Global Antibiotic Research and Development Partnership (GARDP). The event will link Dutch, European and global initiatives that address antimicrobial resistance (AMR) in order to reduce cross-border health security risks. High-level speakers will share their perspectives and research on the spread of AMR.
28 June 2022
The Hague
Multi-stakeholder panel discussion.
If you are interested in attending this event, please contact Camille Creisson on: [email protected].
The COVID-19 pandemic has highlighted the fragility and lack of resilience of the world’s healthcare systems. However, chronic inequalities in access to medicine long predate the current global crisis of COVID-19 vaccine inequity.
Antibiotics and antifungals remain unaffordable and unavailable in much of the world, particularly affecting the 83% of the world’s population who live in low- and middle-income countries (LMICs). Each year around 5.7 million people die because they cannot access or afford antibiotics – and the stark reality is that this figure comes from a typical year, rather than one affected by a pandemic.
At the same time, antimicrobial resistance (AMR) is being hailed by many experts as the next big global health threat. Ensuring equitable access to antibiotics is essential to curb AMR and promote cross-border health security.
The panel discussion will explore questions such as:
- Why is it important to view AMR in the context of global cross-border health security?
- Why is addressing the spread of AMR important for multiple stakeholders to consider?
- How does government prioritisation lead to greater action from industry?
- What opportunities exist for the private sector, academic research institutions, product development partnerships and procurers to improve access to antibiotics in LMICs?
- What innovative approaches to improving access to antibiotics have been successful, and how can such efforts be expanded upon?
Part 1 - Presentations: Global context of AMR
- Access issues in LMICs and innovative solutions from the Access to Medicine Foundation
Speaker: Fatema Rafiqi, Research Programme Manager, representing the Access to Medicine Foundation to talk about specific findings from the Foundation’s new research paper in the context of industry progress and future opportunities.
- Supporting sustainable solutions on new and existing antibiotics – global, regional and national considerations
Speaker: Jennifer Cohn, Global Access Leader, representing GARDP to talk about the importance of embedding a global access strategy into late-stage development efforts of new antibiotics, implementing this strategy, and access considerations on existing antibiotics.
- The need for public-private collaboration on new antibiotic development and access
Speaker: Takuko Sawada, Director and Executive Vice President, Integrated Disease Care Strategy Division, representing Shionogi to discuss the role of pharma in improving access, along with existing market limitations and respective solutions, including the need for public support to advance R&D and global access efforts.
- AMR in the context of Dutch health security: Scope of the global AMR problem
Speaker: Sian Williams, Senior Policy Advisor, representing the Wellcome Trust to speak about the Global Research on Antimicrobial Resistance (GRAM) study results and specifically the prevalence of AMR in LMICs and the potential future impact on populations.
Part 2 - Moderated Discussion
Moderated discussion with panellists providing a Dutch, European, and global perspective on the 4 presentations on curbing the spread of AMR:
- Constance Schultsz, Professor of Global Health, Department of Global Health, Amsterdam UMC, University of Amsterdam (Moderator).
- Cornelis Boersma, Executive Board Member, the Netherlands Antibiotic Development Platform, on strategic R&D investments.
- Christina de Vries, Senior Health Expert, Cordaid and Dutch Global Health Alliance, on the importance of improving access to antibiotics to advance the Dutch global health agenda and the Sustainable Development Goals.
- Peter Hermans, Professor in Infection and Immunology Trials at Utrecht University / UMC Utrecht, European Clinical Research Alliance on Infectious Diseases, ECRAID.
- Fatema Rafiqi, Research Programme Manager, Access to Medicine Foundation, on improving access to antibiotics in the context of health security and pandemic preparedness.
Access to Medicine Foundation events
Through moderated workshops and panel discussions, the Foundation provides space for people working with and within healthcare companies to come together and discuss access to essential medicines. Participants use the insights to redefine access strategies and internal metrics, while in turn informing the Foundation’s approach to mobilising the most important companies across five essential healthcare sectors.
The Foundation has organised previous interactive workshops on topics including access to generic medicines, shortages and stockouts of lifesaving products, access to medical oxygen, lessons learned from COVID-19 and how to ramp up access to medicine toward 2030. | https://accesstomedicinefoundation.org/events/slowing-the-spread-of-superbugs-speeding-up-development-and-access |
Five actions to make the most of digital transformation
Share
The Latin American Economic Outlook (LEO) 2020, published by the OECD, ECLAC, CAF, and the European Commission for International Partnerships, proposed an action plan to make digital transformation play a major role to turn the 2020 crisis into a new development opportunity and address the region’s development traps.
The report’s editors suggested five action areas to increase the benefits of digitization:
Key actions
Focus on micro enterprises. The region is characterized by a predominance of micro and small firms that have low productivity, are often disconnected from their markets and do not have the capacity to absorb the shock created by the pandemic. In this respect, digital tools can help drive productivity growth and increase their competitiveness, in particular for companies that are lagging behind. Policies should therefore aim to support the uptake of technological tools with holistic digital ecosystems, adequate infrastructures, and appropriate digital skills.
Focus on the poor. Digital divides need to be addressed in order to bring the benefits of the digital transformation to all. A human-centric approach to digital technologies can increase the quality of life within households and therefore improve the social welfare of Latin American societies and promote environmentally sustainable development. Disparities in access and use across territories, socio-economic, age or gender groups persist, and these may widen in the context of the pandemic. This can lead to the creation of gaps between winners and losers, and therefore poses additional threats to social cohesion and stability.
Focus on new jobs. Digital technologies will bring both opportunities and challenges to the labor market. A number of jobs in the region are at high risk of automation, while others will experience substantial changes in the way they are performed. Policies to boost productivity must play a strong role in matching market needs and in ensuring a smooth transition from obsolete to new jobs.
Place connected screens at home and school. To ensure the benefits of the digital transformation are enjoyed at home and at the office, appropriate skills need to be developed early in life and along people’s lifespan. Among those who have fewer skills, a higher proportion of women have no computer experience. Providing disadvantaged schools and students with more access to ICTs is not enough on its own, programs that develop the right skills for both students and teachers are also needed.
Connect governments. New digital technologies can transform public institutions and make them more credible, efficient, inclusive and innovative. This can help restore trust in governments by simplifying complex bureaucratic systems, providing more inclusive public services, including e-health or e-learning that reach more disadvantaged segments of society, becoming more open and transparent, and allowing the participation of citizens in decision-making processes. | |
Landscape Paintings: What the Eye Perceives
Buying an original work of art can be a minefield unless you can rely on your own authority. You don’t have to be a fine arts grad to choose a quality work. Your most trustworthy resource is much simpler and closer to hand than that. It is your own perception.
We hadn’t quite re-entered social circles since our trip to Greece but pulled ourselves together for an informal Q&A with artist Ed Yaghdjian at the Arts & Letters Club in Toronto. His solo exhibit In Search of Beauty was up in the Great Hall and while his passion is to paint resplendent nudes, he was chatting intently about landscapes when we slipped quietly in the door, wine glass in hand, to find our place at the long table.
We paused at several slides depicting the tossing sea on the shores of Monterey, autumn red leaves sprouting from craggy rocks, a hushed and golden painting of a wooded valley.
Ed Yaghdjian – Rouge Valley in the Fall, oil on MDF
One of our group commented that Ed’s landscapes were more impressionistic and less detailed than his figurative paintings. “Yes and no,” was Ed’s response, and he began to expand on that simple statement.
As Ed spoke, my companion nudged my arm and whispered “That’s what always bothered me about so and so’s paintings!” It’s an explanation so simple that you are tempted to slap your forehead and mutter “D’huh!”. It is the reason a landscape painting often ends up looking “not quite right” without anyone seeming to quite know why.
The eye gathers information about our world which the brain interprets according to stored experience. Nuances are interpreted in simple primal language. For example, a hunter loping across the savannah after prey or a young executive dashing down the urban thoroughfare to an appointment will perceive their surroundings in the same way. The pride of lions or the sight of the convening board members is judged by distance and the time to arrive there.
Size matters
We judge distance by proportion. What is there is ‘smaller’ than what is here. But is it really smaller? Of course not. The lions will be a big surprise when they are found to be larger than kittens to the imperceptive hunter. It is the painter’s job to adjust the ‘reality’ on his canvas so that the message of distance is delivered convincingly.
Why do some landscape drawings or paintings confuse or bother us in spite of the fact that converging lines and vanishing point of perspective may read appropriately? We ask ourselves, what was it that made ‘so and so’s’ landscapes a problem when the elements of trees, lake, canoe, and dock were all fairly well painted and in place, and trees in the foreground were larger than trees in the distance? What was missing?
We may not understand the reasons for our dislike or growing discomfort, but we will be troubled by incongruity. Actually, it was not what was missing but what was excessive. Size is not the only determinant of distance. Our brain does a great job of filtering and interpreting so that extraneous visual information is subdued or eliminated depending on the situation.
It’s in the details
Driving on a busy highway, we are most acutely aware of elements in our immediate environment, such as the cars or people around us that may impact our safe travel. Second to that we take in what is beyond, such as red lights ahead or people dashing out unexpectedly.
Beyond that, in less detail, are the surroundings as far as our eye can see. From those distant surroundings, vehicles, people, buildings, birds, animals, clouds are all filtered out of our consciousness to enable us to get to our destination without being overwhelmed by extraneous information. Simply put, objects on which we focus our eyes are sharply defined while surrounding objects are less clearly defined and objects further away from our area of focus are blurred.
What you see
The third aspect of landscape painting that was truly our ‘aha’ was Ed’s explanation of atmospheric or optical perspective. Unless we have a foggy or smoggy day, the air between you and me as we meet on the street is invisible and, as human beings, we tend to think that what we cannot see has no real substance: like the air in our atmosphere, for instance.
But, as we know, air does have density, and as the distance and the layers of atmosphere between ourselves and objects increase, the atmosphere acts as a filter between viewer and object. Of the yellow, red, and blue triad, the first to be filtered out are the yellows, followed by the reds, until we are left with only the blues.
That is why trees, mountains, or other objects appear purplish or blue in the far distance. Beyond that, when the blues are also filtered out, all things still visible are reduced to a nondescript pale grey. Similarly, on a clear, sunny day, the sky at the far distant horizon appears a pale grey because of the thicker atmospheric layer that separates us from that distant horizon. On the other hand, looking straight above our head, through a lesser atmospheric layer, we see the sky as a dark ultramarine blue.
Blue Sky Horizon, Image by Ryan Grobin
In a landscape painting, an artist takes us on a journey into a canvas and it is his/her level of skill that will encourage us to linger and roam around awhile before we move on. It is the artist’s job to edit all the extraneous detail from a landscape so that we really can see the tree in the forest. The composition will provide this map into and around the canvas.
While Ed’s figure paintings may well seem to be more exquisitely detailed, it is because the composition is in a fairly shallow plane (from foreground to background). We are up close and easily taking in the detail of skin, hair, fabric, furniture as it is presented. Light and shadow, colour temperature, treatment of edges will be our subtle guides to depth, form, and substance.
In a landscape, a solitary tree against the backdrop of a forest may show some detail such as branches, or clusters of leaves, whereas the forest in the background will be seen only as masses of varying colour or light and shade with the occasional tree trunk or particularly large branch being seen. Excessive detail would clutter the work and confuse our perception. For the painter to include it would be to provide more information than our eyes would normally perceive, and our mind would intuitively reject it.
Excessive detail would tell us the tree was in front of our nose, while the other signs relating to perspective would be in contradiction. “Which is it – here or there?” would be our mind’s argument. The farther we look into the distance, the more contrasts diminish, outlines appear softer, details disappear, and colours become greyed. A novice painter will be identified more often by evident lack of perception of these fundamentals than by lack of skill in his/her work. Technique alone will not save a painting.
Buying a painting is like entering into a relationship: often a lifelong process. Just as individuals of integrity and genuine values provide the foundation for solid long term relationships, well structured paintings are sincere and will hold your interest and provide you with real viewing pleasure, indefinitely.
Choose your paintings like your friends. What might be amusing or stimulating over a latte will lose its allure on deeper exploration if there is no substance or intuited reality to it. Choosing wisely is a blend of knowledge, intuition, and experience. Trust your own perception – your eye and heart will not fail you.
About Marilyn Harding
Closing the locker door on all her worldly goods except what is in her suitcase, Marilyn and her mate Athan are following their hearts and charting a new course as life unfolds, doing business online while living the life of their dreams.
Marilyn Harding is the director of
ArtemisAllianceInc.Com: Strategic Alliances in the Business of Arts and Letters,
mhArt.Ca: Creating Relationships in Art
LightBeam.Org: Conscious Choices for Living Well;
ExhilartedLife.Com: Travels with My Heart and Soul
and
SilverArrowPublishing.Com: Bringing Beautiful Books to Life!
a member of
The Arts & Letters Club, Toronto, and
Women's Art Association, Canada (WAAC).
and a contributing author to
BlogCritics.Org and RebelleSociety.Com
Marilyn Harding is a seasoned and award-winning executive marketing strategist and effective communicator with more than twenty years’ experience.
Currently, she writes on holistic business principles and ethics, art, travel and philosophy. A zealous student of life, Marilyn continues to hone her skills in internet marketing and strategic alliances.
Devoted to the life path of personal mastery, Marilyn lives in the spirit of holism and conducts her life with the goal to augment vibrant and open hearted access to living a superlative life in creativity, wellness, spirituality, productivity, and accountability as citizens of the world no matter what our role.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to automatic rangefinder devices of the type having means for projecting a beam of light to illuminate a spot on a distant object, and an image sensor for sensing the apparent location of the spot to determine the distance to the object; and more particularly, to an improved light source for generating said beam of light.
2. Description of the Problems
In one type of automatic rangefinder, a beam of light is projected upon an object in a scene to form an illuminated spot on the object. An image of the scene, including the illuminated spot, is formed on a linear image sensor. The location of the illuminated spot on the linear image sensor is determined to measure the distance to the object in the scene. An example of such a rangefinder device for use in a photographic camera is shown in U.S. Pat. No. 4,274,735 issued June 23, 1981 to Tamura et al.
The general arrangement of elements and the mode of operation of a rangefinder device of the type shown by Tamura et al. will be described with reference to FIG. 1. The apparatus comprises a beam-forming portion shown, by way of example, as a light emitting diode (LED) 10 and a lens 12. The beam is projected along a path 14 to illuminate a spot on an object O.sub.1 in a scene. The scene is sensed by an image sensing portion comprising a second lens 16 and a linear array of photosensors 18. The signals produced by the photosensor array are analyzed by control electronics 20 to determine the position of the illuminated spot in the scene and to produce a signal representing the distance to the object. As shown by example in FIG. 1, the apparent position of the illuminated spot in the scene is a function of distance along light path 14 to the object. For an object O.sub.1 located at a distance D.sub.1 from the rangefinding device, the image of the illuminated spot will fall on the image sensor at location S.sub.1. For an object O.sub.2 at a further distance D.sub.2, the image of the spot will fall on the image sensor at a location S.sub.2. By examining the output of the image sensor, the control electronics 20 determines (for example, by comparing the output of the sensor elements to determine that output which is a maximum) the location of the illuminated spot in the scene and thereby the distance to the object.
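The triangulation behind this mode of operation can be sketched numerically. Under a simple similar-triangles model, the object distance follows from the baseline between projector and sensor, the imaging lens's focal length, and the lateral offset of the spot image on the sensor. The function name and the baseline/offset figures below are illustrative assumptions, not values from the patent:

```python
def spot_distance(baseline_mm, focal_length_mm, spot_offset_mm):
    """Distance to the object via similar triangles: the spot's
    lateral offset on the sensor shrinks as the object recedes."""
    if spot_offset_mm <= 0:
        raise ValueError("spot offset must be positive")
    return baseline_mm * focal_length_mm / spot_offset_mm

# A nearer object produces a larger offset, hence a smaller distance:
near = spot_distance(40.0, 12.5, 0.50)   # -> 1000.0 mm
far = spot_distance(40.0, 12.5, 0.25)    # -> 2000.0 mm
assert near < far
```

This also makes the resolution limit plain: for a fixed smallest resolvable change in offset, a longer baseline spreads the distances further apart on the sensor.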
In general, the resolution of such a rangefinder is limited by the baseline distance between the LED 10 and the image sensor 18. The ability to increase the resolution of the rangefinder by increasing the baseline distance is limited in modern compact photographic cameras.
As shown in FIG. 1, the image sensor 18 is arranged with its long axis perpendicular to the optical axis of lens 16. Since lens 16 is a very fast lens (e.g. F/1) to maximize the amount of light gathered from the scene, its depth of field will be relatively shallow and only objects at a narrow range of distance will be in focus on the image sensor 18, thereby reducing the sensitivity of the rangefinding device at other distances. Another problem encountered with the rangefinding device of the type shown in FIG. 1, is that the projected image of LED 10 is in focus at only one distance, resulting in a larger than optimum spot at all other distances. A third problem encountered with the use of a rangefinder device of the type shown in FIG. 1 in a photographic camera is the parallax that results between the beam 14 and the optical axis of the viewing or taking lens of the camera.
One solution to the problem of spot size and focus of the image on the image sensor is shown in U.S. Pat. No. 4,248,532 issued Feb. 3, 1981 to Nosler. In the rangefinder device disclosed by Nosler, the sensor array is angularly oriented relative to the optical axis of the imaging lens 16 (as shown in phantom in FIG. 1 of the present specification) to insure that the spot will remain in sharp focus on the sensor throughout the useful range of object distances. The light beam disclosed by Nosler is generated by a laser, thereby producing an optimum size spot throughout the useful range of distances.
The solutions proposed by Nosler do not solve the problem of parallax between the rangefinder axis and the optical axis of the viewing or taking lens of a photographic camera.
SOLUTION TO THE PROBLEM--SUMMARY OF THE INVENTION
The above-noted problems are solved according to my invention by providing an automatic rangefinder device of the type described above with improved means for forming the beam of light. The improved beam-forming portion comprises an elongated light source and projection optics for forming a real image of the light source, thereby defining an elongated beam region in space, corresponding to the image of the light source, and extending generally away from the rangefinder. An object located anywhere in the beam region will be illuminated with a spot of light having an in-focus component. The image sensor detects the position of the in-focus component of the spot. The rangefinder device includes imaging optics for imaging the beam region onto a linear image sensor. The imaging optics and the image sensor are arranged such that a real image of the image sensor formed by the imaging optics substantially coincides with the real image of the elongated light source in the beam region of space.
In a preferred embodiment of the invention, the light source is an edge emitting LED, arranged with respect to a projection lens such that the long axis of the light emitting portion of the LED passes through the focal point of the projection lens, thereby forming a real image of the LED lying along a line parallel to the optical axis of the projection lens, and intersecting the principal plane of the projection lens at the point where the extension of the long axis of the LED intersects the principal plane. In the preferred embodiment, the imaging optics for the image sensor is a lens having its principal plane coplanar with the principal plane of the projection lens, the linear image sensor is likewise arranged with its long axis passing through the focal point of the imaging lens and intersecting the principal plane at the same point where the long axis of the light source intersects the principal plane.
By forming a beam region off center of the optical axis of the projection lens according to the present invention, the rangefinder may be employed in optical apparatus such as a photographic camera with the beam aligned with the taking or viewing optics of the camera, thereby eliminating parallax between the rangefinder and the camera optics. Furthermore, by forming the beam region off center of the optical axis of the projection lens, the rangefinder may be arranged to have an effective baseline distance greater than the distance between the light source and the image sensor, thereby increasing the resolution of the rangefinder without increasing its physical size.
According to another feature of the invention, the projection and imaging optics are arranged such that the beam region does not lie in the same plane as the optical axis of the projection and imaging optics, thereby improving the sharpness of the image of the light source on the image sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be described with reference to the accompanying drawings, wherein:
FIG. 1 is a schematic diagram of a rangefinder device of the type to which the present invention pertains;
FIGS. 2a and 2b are schematic diagrams of a preferred and an alternative embodiment respectively of the light beam-forming portion of a rangefinder device according to the present invention;
FIG. 3 is a schematic diagram showing a rangefinder device according to the present invention, including a focused elongated light source according to the present invention;
FIG. 4 is a schematic diagram illustrating the use of the rangefinder device shown in FIG. 3 in a photographic camera of the single lens reflex type, showing the axis of the rangefinder device aligned with the optical axis of the taking lens of the camera;
FIG. 5 is a schematic diagram of an alternative embodiment of a rangefinding device according to the present invention;
FIG. 6 is a schematic diagram of the rangefinding device shown in FIG. 5 employed in a photographic camera of the viewfinder type, with the axis of the rangefinder device aligned with the optical axis of the viewfinder of the camera;
FIG. 7 is a schematic diagram illustrating the overlapping patterns of projected light spots and the projected image of the image sensor falling on objects located at various distances from a rangefinding device configured as shown in FIG. 3;
FIG. 8 is a schematic perspective diagram of an arrangement for further improving the response of a rangefinder device according to the present invention; and
FIG. 9 is a schematic diagram illustrating the overlapping patterns of projected light spots and the projected image of the image sensor falling on objects located at various distances from a rangefinder device configured as shown in FIG. 8.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring first to FIG. 2a, the improved beam-forming portion of a rangefinder device according to the present invention will be described. The beam-forming portion includes an elongated light source, such as an edge emitting LED 100, and a means for forming a real image of the light source, such as a lens 102. The real image 104 of the light source 100 forms a beam region in space coincident with the real image 104 extending generally away from the rangefinder, any part of which contains an in-focus portion of the image of the light source. An object located anywhere in the beam will be illuminated with a spot of light that includes an intense in-focus portion, and a neighboring portion or portions that are out-of-focus and therefore less intense. By arranging the long axis of the LED to pass through the focal point 106 of lens 102, the beam region 104 will be parallel to and displaced from the optical axis 107 of projection lens 102. The beam region 104 will lie along a line 108 that intercepts the principal plane 109 of lens 102 at a point A coincident with the point that the long axis of the LED intercepts the principal plane.
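A quick thin-lens calculation (an illustrative sketch, not part of the specification; the source tilt and sample distances are assumed) confirms why this geometry works: every point of a source whose long axis passes through the focal point images to the same lateral offset, so the image lies on a line parallel to the optical axis.

```python
def image_point(z_obj, x_obj, f):
    """Thin-lens imaging: 1/z_obj + 1/z_img = 1/f (object distance
    z_obj > f), with lateral magnification m = -z_img / z_obj."""
    z_img = f * z_obj / (z_obj - f)
    x_img = -(z_img / z_obj) * x_obj
    return z_img, x_img

f, slope = 12.5, 0.1                  # focal length (mm) and assumed tilt
for z_obj in (20.0, 30.0, 50.0):      # sample points along the source
    x_obj = slope * (z_obj - f)       # long axis passes through the focus
    _, x_img = image_point(z_obj, x_obj, f)
    # every point images to the same lateral offset -f*slope:
    assert abs(x_img - (-f * slope)) < 1e-9
```

The constant offset -f·slope corresponds to the displaced, axis-parallel beam region 104 in FIG. 2a.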
To create the elongated beam region in space, it is not necessary that the long axis of the LED pass through the focal point of the projection lens. FIG. 2b (numbered similarly to FIG. 2a, with primes (')) shows an alternative arrangement of the LED 100' and projection lens 102' wherein the long axis of the LED 100' does not pass through the focal point 106' of the lens. The beam region 104' in this case, is not parallel to the optical axis 107' of the projection lens 102'. The arrangement must be tilted slightly as shown in FIG. 2b to make the beam region 104' parallel with the beam region 104 shown in FIG. 2a.
FIG. 3 illustrates a rangefinder device according to the present invention, employing the improved beam-forming apparatus shown in FIG. 2a. The beam-forming portion of the rangefinder includes an elongated light source such as an edge emitting LED 200, and a projection lens 202 for projecting a real image 204 of the light source to form a beam region coincident with the real image in space. The image of the light source lies along a line 206 that intersects the principal plane 208 of lens 202 at point A. The beam region 204 is imaged onto a linear image sensor 210 by an imaging lens 212. The linear image sensor 210 and imaging lens 212 are arranged such that a real image of the image sensor 210 projected by the imaging lens 212 substantially coincides with the real image 204 of light source 200. This is accomplished for example as shown in FIG. 3, by positioning the lens 212 to be coplanar with lens 202, and arranging linear image sensor 210 along a line passing through the focal point of lens 212 and intersecting the principal plane at point A.
With the above-described arrangement, an object positioned anywhere in beam region 204 will be illuminated with a spot that has an in-focus component, and the portion of the object illuminated by the in- focus spot will be sharply imaged onto the appropriate portion of the linear image sensor 210.
The location of the in-focus component of the spot is determined by the portion of the image sensor receiving the most intense light.
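That peak-finding step can be sketched as a simple argmax over background-subtracted sensor readings. The helper function, readings, and array length here are invented for illustration (the patent's sensor has 32 elements and subtracts background in hardware):

```python
def spot_element(signal, background):
    """Index of the sensor element with the greatest
    background-subtracted reading."""
    corrected = [s - b for s, b in zip(signal, background)]
    return max(range(len(corrected)), key=corrected.__getitem__)

signal = [3, 4, 5, 21, 9, 4, 3, 3]       # invented raw readings
background = [3, 3, 3, 4, 3, 3, 3, 3]    # invented ambient levels
assert spot_element(signal, background) == 3   # element 3 holds the spot
```

The winning element index then maps to an object distance through the triangulation geometry described above.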
Control electronics 214 receives the signal from the linear image sensor 210 and generates a focus signal in response thereto in a known manner.
In the preferred embodiment of the invention, lenses 202 and 212 are 12.5 mm focal length F/1 lenses. The LED 200 is an edge emitting LED with an elongated rectangular light emitting surface approximately 1.5×0.05 mm. The image sensor 210 is a self-scanned CCD sensor with provision for automatically subtracting background illumination. An example of such an image sensor is shown in copending U.S. patent application No. 409,256, entitled "Image Sensor and Rangefinder Device Having Background Subtraction with Paired Analog Shift Registers" by C. Anagnostopoulos filed on even date herewith. The image sensing array 210 is approximately 1.5×0.5 mm and contains 32 individual sensor elements.
Since the beam region 204 lies off the optical axis of both the projecting lens 202 and the imaging lens 212, a rangefinder according to the present invention used in optical apparatus such as a camera, can be constructed so that the beam 204 lies along the optical axis of the taking optics of the camera, thereby eliminating parallax between the rangefinder and the camera optics.
FIG. 4 is a schematic diagram showing a rangefinder such as that illustrated in FIG. 3 in a camera of the single lens reflex type. The camera 216 includes a taking lens 218 having an optical axis 220. The beam-forming portion of the rangefinder, including elongated light source 200 and projecting lens 202, is positioned on one side of taking lens 218 such that beam 204 lies along the optical axis 220 of the taking lens. The image forming portion of the rangefinder, including linear image sensor 210 and imaging lens 212, is positioned on the opposite side of the taking lens 218 such that the beam region 204 is focused onto linear image sensor 210. The focus signal generated by control electronics 214 is employed to control a focus servo depicted as a focus motor 222 connected to a pinion 224 to drive a rack 226 connected to lens 218.
Although the beam region 204 is shown approximately midway between the two lenses 202 and 212 in FIGS. 3 and 4, the beam region may be adjusted toward one lens or the other by simultaneously varying the angle of the elongated light source and the image sensor to move the point A back or forth between the two lenses. The beam region may even be displaced past one of the lenses to reside along some line outside the rangefinder device itself. With such an arrangement of parts, the beam may be displaced away from the sensor array to produce an effective baseline distance that is greater than the distance between the light source and the image sensor, thereby allowing the resolution of the rangefinder to be increased without increasing the physical size of the rangefinder. FIG. 5 shows an example of such an arrangement where similar parts are similarly numbered. This arrangement may be employed to advantage for example, in a viewfinder type camera to place the beam region along the optical axis of a viewfinder lens. FIG. 6 shows schematically how such an arrangement can be employed in a viewfinder camera. A camera 216 includes a viewfinder having optical elements 228 and 230 defining an optical axis 232. The rangefinder device is positioned in the camera so that the beam region 204 lies along optical axis 232. The baseline distance of the rangefinder shown in FIG. 6 is the distance between the image sensor 210 and the optical axis 232, which is seen to be larger than the distance between the LED 200 and the image sensor 210.
In the embodiments discussed thus far, the light source and the image sensor reside in a common plane. With this arrangement, light falling on an object will form a comet- or butterfly-shaped spot with an intense narrow in-focus portion and less intense larger out-of-focus portion or portions. FIG. 7 schematically depicts the pattern of light formed on objects at several distances D.sub.1, D.sub.2, D.sub.3, and D.sub.4, from the rangefinder. The pattern of light at distance D.sub.1 is a comet-shaped pattern comprising an in-focus portion outlined by shaded area 232, and an out-of-focus portion represented by solid line 234. The spots formed at other distances D.sub.2, D.sub.3, and D.sub.4 are similarly depicted by in-focus shaded portions and out-of-focus portions outlined by a solid line. As can be seen from FIG. 7, the spots formed at the near and far distances are single lobed, resembling comets. The spots formed at intermediate distances are double lobed, resembling butterflies. Also depicted in FIG. 7 is the projected image of the image sensing array 210 projected at the various distances D.sub.1-D.sub.4. The projected image of the image sensing array 210 at distance D.sub.1 similarly comprises an in-focus portion 232 and an out-of-focus portion 236 shown in phantom. As can be seen from FIG. 7, at the near and far distances, the only portions of the images which overlap are the in-focus portions 232 and 238 respectively, thereby resulting in sharp definition of the spot on the image sensor. As can be seen in FIG. 7, at the intermediate distances represented by D.sub.2 and D.sub.3, portions of the out-of-focus spot fall out of focus on the image sensor 210, thereby reducing the definition of the in-focus portion of the spot on sensor 210.
This effect can be reduced by causing the images of the light source and the image sensor to intersect each other at an angle, preferably a right angle, such that the out-of-focus portions of the illuminated spot fall off the image sensor to one side or the other. FIG. 8 is a perspective view of an optical arrangement of a rangefinding device according to the present invention which accomplishes this goal. As shown in FIG. 8, the projection lens 202, the imaging lens 212, and the beam region 204 lie at right angles to each other on orthogonal axes X, Y and Z in space. FIG. 9 depicts the pattern of illuminated spots and projected images of the image sensor 210 at various distances along the beam 204. As can be seen in FIG. 9, the only portions of the illuminated spot and the projection of the image of image sensor 210 which overlap are in the region where both images are in focus, thereby providing a sharp definition of the spot at all locations along beam region 204.
The invention has been described in detail with particular reference to preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the accompanying claims.
For example, although a continuous elongated light source has been disclosed, it will be obvious that a linear array of discrete light sources could be substituted therefor within the spirit of the invention.
An updated review published in the Cochrane Library concludes that the benefits of a variety of interventions intended to reduce sitting at work are very uncertain.
Millions of people worldwide sit at a desk all day, and over recent years this has led to increased levels of physical inactivity in the work place. Health experts have warned that long periods of sitting can increase the risk of heart disease and obesity. There are a number of different approaches to reduce the amount of time we spend sitting down while at work. One option that is increasing in popularity is the sit-stand desk. These are desks that are designed to allow you to work at your desk sitting down or standing up.
A team of Cochrane researchers updated a systematic review that looked at the effects of different strategies to encourage people to reduce the amount of time they spend sitting at work. They looked at twenty studies with a total of 2,174 participants from the US, the UK and Europe. They included evidence from both randomised and non-randomised studies.
Although sit-stand desks are popular, their potential health benefits are very uncertain. The researchers found very low quality evidence from three non-randomised studies and low quality evidence from three randomised studies, with 218 participants, that people who used them sat between 30 minutes and two hours less,...
Liberty school board offers employee retirement incentives
The Liberty school board is offering an incentive for the Liberty Association of School Employees to retire after this school year.
The incentive for teachers to retire ranges from $10,000 for a person with 20 years or more of experience to $12,500 for a person with 28 years or more of experience.
The incentive for nonteachers to retire ranges from $6,657 for a person with 10 years of experience to $7,925 for a person with 20 years or more of experience.
Barb DePasqua, seventh-grade social studies teacher, will retire from her position effective May 31. More staff may announce their retirement for the incentives at the next school board meeting, said district Treasurer Bradley Panak.
Also at the board meeting this week, the board approved a resolution to re-configure the building grade levels and special-education programs.
In fall 2018-19, students in preschool through sixth grade will be housed in the E.J. Blott Elementary and Guy Middle School buildings, which are attached, and students in seventh through 12th grade will be housed at the high school.
Seventh and eighth grades will be considered junior high, which will have its own principal, said board president Calvin Jones.
Having those students at the high school will allow them to take the same science, technology, engineering and mathematics classes as upperclassmen. Superintendent Joseph Nohra said this helps prepare junior-high students for the future.
The Trumbull County Educational Services Center will still provide special-education services for Liberty students, but more special-ed classes will be housed at the district, Jones said.
“Our buildings were underutilized because of a decrease in student population, so by bringing some of the special-education modules in the district, we can increase the efficiency and utilization of our building space,” Jones said. “Our idea is to educate as many of our residential students in our buildings as we can.”
Jessica Kohler, the special-education supervisor, will become a K-6 principal effective July 1. Mike Palmer, the former elementary school principal, will be the junior-high principal.
Board members may introduce a revised version of the open-enrollment resolution at the board's June 25 meeting.
Two friends find themselves away on someone else’s holiday at the Happiest Place On Earth: Orlando, Florida, but they find their dreams are faced with nightmares. Haunted by the ghost of their pasts, they are left on an unexpected emotional rollercoaster. Their friendship is tested as it’s never been before.
Review
First published on Everything Theatre
The play opens with two friends, Joe and Rich, arriving in Orlando, Florida, on holiday away from gloomy England. While Joe wants to drink his problems away, Rich insists on distracting Joe with everything else Disney World has to offer. Even though it is clear that Joe is deeply bothered by something, the cause of this is not revealed until almost halfway into the play.
The audience is guided through the story by Joe and Rich, with narration from Walt Disney, who is meant to represent Joe’s inner dialogue as he agonises over their friend Pete’s suicide and reflects upon his own life events. Interestingly, the same actor portrays both Walt Disney and Pete, a stark contrast symbolising Joe’s childhood dreams and the realities of adulthood. This could be a representation of picking yourself up from where you have fallen. However, the script by Dan Berridge has some issues with the use of humour by Walt Disney. At times, instead of drawing the audience in further and instigating a deeper emotional response, it feels like the narrator is mocking the seriousness of the topic.
Given that Joe’s problems are not presented upfront, it is a little unclear how the audience should feel toward the character. This is further exacerbated by the actor’s opening performance, which is a blend of confusion, depression, and a general unwillingness to participate in any conversations or activities. However, the rationale behind his behaviour becomes clearer as the play progresses, explaining some of his actions and mental state. This is facilitated through a flashback to a short interaction between Pete and Joe immediately before Pete took his own life.
Rich’s character is interesting. He comes across as a carefree person who wishes to do his best to support Joe during this difficult time, but it is revealed that he has problems of his own. We find out that about eight months before this trip, Rich became obsessed with the gym, lost touch with Joe and was overall unresponsive to even his best friends. This hints that Rich was also going through something in his personal life, but this aspect is quickly dropped and forgotten about. The banter between Rich and Joe feels unnatural and lacks chemistry, which could be a result of them growing apart prior to the holiday. However, this level of distance between the best friends is what I would expect from years of miscommunication, not months.
Mental health is the front-and-centre theme of this play, which explores the consequences of bottled-up emotions, shifting priorities among friends as they navigate their personal circumstances, and the importance of connecting with each other. The play brings the story to an abrupt happy ending, but does not give the characters or the audience enough time to process the events that lead there. The production tries to cover multiple topics (mental health, suicide, adult responsibilities, childhood dreams, etc.) within 60 minutes, spreading them too thinly, and could benefit from focusing on the emotional development of a single topic instead. | https://www.operationlivetheatre.com/post/the-happiest-place-on-earth |
Q:
Badge for Closing Question
I think we should have a badge (or a set of badges) for users who close a sufficient number of questions. Closing questions is a form of community service and therefore deserves rewards.
Note: Now don't try to close this question in the name of illustrating the irony :)!
A:
This could make sense since closing is now a community process, requiring 5 votes to accomplish. That means that rather than awarding a badge based on simply voting to close (could encourage voting to close for no good reason), or based on casting the final vote (unfair to others who voted to close, and creates unusual incentives when votes approach 4 or 5), I'd propose that the badge be based on:
Casting X votes to close on questions which were then subsequently closed.
As an arbitrary suggestion, I'd say X = 25.
This means that everyone who votes to close would get credit towards their X, but only if the question was actually later closed (signifying that your vote was a "correct" vote). This does make it a little more attractive for those who wait until the 5th vote, but I don't see that as a major problem, as all voters still get credit.
I'd propose that questions that are re-opened should not rescind the associated tally, because a question could be re-opened after editing and be different from the form that was closed. Indeed, the close process can play a part in encouraging questioners to shape up their questions, and so the votes to close should still be recognized as a valuable part of that process even if it gets re-opened later.
The only problem I can see is that this kind of badge might be difficult to calculate and implement for the dev team, as it relies on keeping a tally on information which is transient in the system. It may not be technically feasible without major work.
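The tally rule proposed above ("count a user's close votes only on questions that were subsequently closed, and don't rescind on re-open") is simple to sketch. Here is a minimal, hypothetical Python sketch; the function name, data shapes, and default threshold are my own assumptions, not any real Stack Exchange API:

```python
from collections import defaultdict

def qualifying_close_votes(vote_events, closed_questions, threshold=25):
    """Return the set of users who earn the hypothetical badge.

    vote_events: iterable of (user_id, question_id) close-vote records
    closed_questions: set of question ids that reached 5 votes and closed;
        re-opened questions stay in this set, since votes are not rescinded
    threshold: the X from the proposal (25 as the arbitrary suggestion)
    """
    tally = defaultdict(int)
    for user_id, question_id in vote_events:
        # Credit every voter on the question, not just the 5th vote
        if question_id in closed_questions:
            tally[user_id] += 1
    return {user for user, count in tally.items() if count >= threshold}
```

The transient-data concern in the paragraph above would amount to persisting `vote_events` long enough to run this tally.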
A:
To be honest, I'm not sure it is such a good idea if the badge is awarded for closing a large number of questions, as it might encourage some users to vote to close questions that do not need to be closed just to get the badge.
That said, I think it would be in keeping with the trend of some of the other badges to award one for the first question you vote to close that is subsequently closed, as that would encourage users who just got their "Vote to Close" privilege to try it out, which is how some of the other badges are designed.
A:
I favor a three-tier badge system with bronze, silver, and gold badges. It is not enough to close enough questions to get one badge; people should continually close questions that duplicate content or that require work from the asker to be up to standard.
All the rewards right now are stacked towards asking questions and answering them. We need additional gamification to encourage users to also keep this site clean. Closing questions is one of those mechanisms. Too few people do it.
| |
Submit abstract, deadline 2019-10-25.
Abstract Guidelines – IBD Nordic Conference 2019
The IBD Nordic Conference invites you to submit abstracts reporting new research or developing information on IBD. We welcome abstracts presenting unpublished studies, case reports, systematic reviews and scientific approaches to methodology and quality improvement. Studies presented in part at other meetings and recently published data can also be considered. The Scientific Program Committee will select abstracts on the basis of their medical significance, the quality of the data and methodology, and adherence to the specific format requirements and other criteria described herein. The selected abstracts will be offered presentation as posters; a few of these will also be selected for oral presentation.
Instructions for posters
Posterboards will have a display area 150 cm wide x 100 cm high, and your poster must fit within these dimensions. Posters will be fixed to boards using push pins supplied onsite. Posters should remain up for the entire session, and presenters are encouraged to be at their posters during the full poster session time (coffee breaks; see program).
IBD Nordic 2019 Poster Award
The three best posters will be selected by the Scientific Program Committee for the IBD Nordic 2019 Poster Award and awarded Research Grants totalling 2.500 Euro, distributed as follows:
1st and 2nd place: 1.000 Euro each
3rd place: 500 Euro.
Abstract Timetable
Abstract submission opens: 2019-02-01
Last day for submitting abstracts: 2019-10-25
Decisions on abstracts emailed: 2019-11-10
Abstract Length
The titles of abstracts should not exceed 15 words. The body text should not exceed 350 words. Embedded images (tables, figures, etc) will be counted as 30 words.
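These length limits can be checked mechanically before submission. Below is a minimal sketch, assuming a simple whitespace word count; the function name and the counting heuristic are my own assumptions, and the committee's official count may differ:

```python
def abstract_within_limits(title, body, embedded_images=0,
                           title_limit=15, body_limit=350, image_cost=30):
    """Check an abstract draft against the stated limits.

    Each embedded image (table, figure, etc.) counts as `image_cost`
    words toward the body limit, per the guidelines above.
    """
    title_ok = len(title.split()) <= title_limit
    body_words = len(body.split()) + embedded_images * image_cost
    return title_ok and body_words <= body_limit
```

For example, a 300-word body with one embedded table counts as 330 words and still fits under the 350-word limit.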
Required Elements
Each abstract must include the following:
Title. The title should appear in boldface at the top. It should be concise (no more than 130 characters, not including spaces) and unambiguously represent the subject of the abstract.
Authors’ names. These names should appear below the title. They should be separated by commas, with first name first, followed by middle initials and last name only (no titles or degrees). The presenting author should be the first name added in the authors list. Add affiliations, using superscripts to identify the author and the affiliation (for example: Name A¹, Name B², University X¹, University Y²).
Background: This section briefly presents the hypothesis of the abstract or encapsulates the subject under study.
Methods: This section details the experimental methods and processes employed in the study.
Results: This section describes the precise findings of the study. Promises of results “to be determined” are not acceptable.
Conclusions: This section describes logically sound conclusions and reliable inferences from the results. | http://ibdnordic.se/abstracts/ |
Training Videos:
Machine Learning A-Z™ Hands-On Python & R In Data Science
(Very fruitful training to understand ML)
Learn to create Machine Learning Algorithms in Python and R from two Data Science experts. Code templates included.
What you'll learn:
- Master Machine Learning on Python & R
- Have a great intuition of many Machine Learning models
- Make accurate predictions
- Make powerful analysis
- Make robust Machine Learning models
- Create strong added value to your business
- Use Machine Learning for personal purpose
- Handle specific topics like Reinforcement Learning, NLP and Deep Learning
- Handle advanced techniques like Dimensionality Reduction
- Know which Machine Learning model to choose for each type of problem
- Build an army of powerful Machine Learning models and know how to combine them to solve any problem
Link:
Part 1:
https://dropgalaxy.in/gbn5t1mrpq2a
Part 2:
https://dropgalaxy.in/3aamfwtaicxm
Part 3:
https://dropgalaxy.in/wfn3wicrzhx4
Part 4:
https://dropgalaxy.in/f0q3cnos7s1x
Part 5: | https://www.finetopix.com/showthread.php/53489-Machine-Learning-A-Z%EF%BF%BD%EF%BF%BD%EF%BF%BD-Hands-On-Python-R-In-Data-Science?s=5e95dcc2e18d759bc3b69aec8327c580 |
Think Declining Mental Sharpness “Just Comes With Age”? Think Again, Says a Prestigious NIH-Funded Conference
We’ve long thought cognitive decline was just “characteristic of aging,” but researchers convened by the American Geriatrics Society with funding from the National Institutes of Health (NIH) suggest there’s nothing “just characteristic” about the connection between age and cognition.
Declining mental sharpness “just comes with age,” right? Not so fast, say geriatrics researchers and clinicians gathered at a prestigious 2018 conference hosted by the American Geriatrics Society (AGS) with support from the National Institute on Aging (NIA). In a report published in the Journal of the American Geriatrics Society (JAGS), attendees of a conference for the NIA’s Grants for Early Medical/Surgical Specialists Transition into Aging Research (GEMSSTAR) program describe how increasing evidence shows age-related diseases—rather than age itself—may be the key cause of cognitive decline. And while old age remains a primary risk factor for cognitive impairment, researchers believe future research—and sustained funding—could illuminate more complex, nuanced connections between cognitive health, overall health, and how we approach age.
“We’ve long been taught that cognitive issues are ‘just part of aging,’” explains Christopher R. Carpenter, MD, MSc, who helped coordinate the conference. “But contemporary medical research shows how bodily changes that lead to diseases like dementia appear long before the symptoms we associate with ‘old age.’ This begs the question: Is it really age that causes cognitive decline, or is it ultimately the diseases we now associate with age—in large part because we see them with increasing frequency now that we live longer? That’s what we wanted to tackle coming together for this meeting.”
Hosted by the AGS and NIA in 2018 as the third conference in a three-part series for GEMSSTAR scholars, the NIA “U13” conference brought together NIA experts and more than 100 scholars, researchers, and leaders representing 13 medical specialties to explore experiences with cognitive impairment across health care. Conference findings, published in JAGS (DOI: 10.1111/jgs.16093), detail early thinking on the two-way relationship between cognitive health and the health of other organ systems, as well as opportunities for moving science and practice forward.
According to attendees, several themes emerged:
- Researchers and clinicians from across health care noted the critical relationship between two of their top concerns: Dementia and delirium (the medical term for abrupt, rapid-onset confusion or an altered mental state, which affects millions of older adults annually). Research now suggests delirium and dementia are mutually inclusive risk factors, with cases of one prompting risks for the other. Thus, prevention of delirium may offer the unprecedented opportunity to prevent or lessen future cognitive decline.
- Still, as one of the conference attendees noted, “[T]he brain is not an island.” Because the conference focused on the impact of cognitive impairment across specialties, a critical focal point for scholars was the complex, bi-directional relationship between cognition and the rest of the body. Cognitive impairments can serve as indicators or influencers in the course of other diseases and conditions. For example, cognitive impairment is perhaps “the strongest independent predictor” of hospital readmission and mortality for older people living with heart failure.
- As the field progresses, however, a major barrier remains: a dearth of research owing to the exclusion of potential study participants who are cognitively impaired. Though obtaining informed consent (the term used to describe a person’s willingness to participate in a study after confirming they understand all the possible risks and benefits) remains challenging, researchers pointed to data showing that willingness to participate remains high. That willingness, coupled with suggestions for tailoring consent safeguards to the types of studies and potential participants, holds promise for protecting against exploitation while continuing to move cutting-edge care principles forward.
As the GEMSSTAR conference attendees concluded, “The aging of the U.S. population and the growing burden of dementia make this an area of critical research focus…[U]nderstanding and addressing cognitive health and its relationship with the health of other organ systems will require multidisciplinary team science…[and new] study designs…”
The NIA’s GEMSSTAR program awards support to early-career physicians trained in medical and surgical sub-specialties to conduct transdisciplinary aging research. The AGS serves as a central coordinating body for applicants in particular specialties interested in applying for professional development support, and connects these awardees with their specialty societies. Additional funds support a Professional Development Plan to complement research projects.
Funding for this conference was provided in part by the National Institutes of Health (NIH, Award Number U13AG048721). The information and views expressed in conference materials and this release are solely the responsibility of the authors and do not necessarily represent the official views of the NIA and/or the NIH.
###
Report: Christopher R. Carpenter MD, MSc; Frances McFarland PhD, MA; Michael Avidan MBBCh, MD; Miles Berger MD; Sharon K. Inouye MD, MPH; Jason Karlawish MD; Frank R. Lin MD; Edward Marcantonio MD, SM; John C. Morris MD; David B. Reuben MD; Raj C. Shah MD; Heather E. Whitson MD, MHS; Sanjay Asthana MD; and Joe Verghese MBBS, MS.: “Impact of Cognitive Impairment Across Specialties: Summary of a Report From the U13 Conference Series,” Journal of the American Geriatrics Society, 22 August 2019, DOI: 10.1111/jgs.16093
Abstract
Although declines in cognitive capacity are assumed to be a characteristic of aging, increasing evidence shows that it is age‐related disease, rather than age itself, that causes cognitive impairment. Even so, older age is a primary risk factor for cognitive decline, and with individuals living longer as a result of medical advances, cognitive impairment and dementia are increasing in prevalence. On March 26 to 27, 2018, the American Geriatrics Society convened a conference in Bethesda, MD, to explore cognitive impairment across the subspecialties. Bringing together representatives from several subspecialties, this was the third of three conferences, supported by a U13 grant from the National Institute on Aging, to aid recipients of Grants for Early Medical/Surgical Specialists’ Transition to Aging Research (GEMSSTAR) in integrating geriatrics into their subspecialties. Scientific sessions focused on the impact of cognitive impairment, sensory contributors, comorbidities, links between delirium and dementia, and issues of informed consent in cognitively impaired populations. Discussions highlighted the complexity not only of cognitive health itself, but also of the bidirectional relationship between cognitive health and the health of other organ systems. Thus, conference participants noted the importance of multidisciplinary team science in future aging research. This article summarizes the full conference report, “The Impact of Cognitive Impairment Across Specialties,” and notes areas where GEMSSTAR scholars can contribute to progress as they embark on their careers in aging research. | https://scitechdaily.com/experts-now-say-that-declining-mental-sharpness-doesnt-have-to-come-with-age/ |
At Vanniks, we have been providing top-notch web and mobile app development services for over 10 years. Our team is made up of experienced professionals who are passionate about what they do and committed to delivering the best results for our clients.
We believe in the power of technology to transform businesses and improve people's lives. That's why we take a comprehensive and holistic approach to web and mobile app development, focusing on creating solutions that are not only visually appealing but also user-friendly and functional.
Our team has a wealth of knowledge and expertise in various industries, and we have a track record of successfully delivering projects of all sizes and complexities. We pride ourselves on our ability to understand our clients' needs and provide customized solutions that meet their specific goals and requirements.
Thank you for considering Vanniks as your web and mobile app development partner. We look forward to working with you and helping your business thrive. | https://vanniks.com/about |
Download Book Asian American History And Culture: An Encyclopedia in PDF format. You can read Asian American History And Culture: An Encyclopedia online here in PDF, EPUB, Mobi or Docx formats.
Asian American History And Culture An Encyclopedia
Author : Huping Ling
ISBN : 9781317476443
Genre : Business & Economics
File Size : 31. 22 MB
Format : PDF, ePub, Mobi
Download : 671
Read : 1179
With overview essays and more than 400 A-Z entries, this exhaustive encyclopedia documents the history of Asians in America from earliest contact to the present day. Organized topically by group, with an in-depth overview essay on each group, the encyclopedia examines the myriad ethnic groups and histories that make up the Asian American population in the United States. "Asian American History and Culture" covers the political, social, and cultural history of immigrants from East Asia, Southeast Asia, South Asia, the Pacific Islands, and their descendants, as well as the social and cultural issues faced by Asian American communities, families, and individuals in contemporary society. In addition to entries on various groups and cultures, the encyclopedia also includes articles on general topics such as parenting and child rearing, assimilation and acculturation, business, education, and literature. More than 100 images round out the set.
Asian Americans An Encyclopedia Of Social Cultural Economic And Political History 3 Volumes
Author : Xiaojian Zhao
ISBN : 9781598842401
Genre : Social Science
File Size : 90. 26 MB
Format : PDF, ePub, Mobi
Download : 832
Read : 644
This is the most comprehensive and up-to-date reference work on Asian Americans, comprising three volumes that address a broad range of topics on various Asian and Pacific Islander American groups from 1848 to the present day.
• Presents information on Asian Americans and individual Asian ethnic groups that provides comprehensive overviews of the respective groups
• Includes special topic entries that contain source information regarding major historical events
• Comprises work from a truly outstanding list of contributors that include scholars, journalists, writers, community activists, graduate students, and other specialists
• Expands the boundaries of Asian American studies through innovative entries that address transnationalism, gender and sexuality, and inter- and cross-disciplinarity
Asian American Society
Author : Mary Yu Danico
ISBN : 9781483365602
Genre : Reference
File Size : 58. 94 MB
Format : PDF, Kindle
Download : 889
Read : 1326
Asian Americans are a growing minority population in the United States. After a 46 percent population growth between 2000 and 2010 according to the 2010 Census, there are 17.3 million Asian Americans today. Yet Asian Americans as a category are a diverse set of peoples from over 30 distinctive Asian-origin subgroups that defy simplistic descriptions or generalizations. They face a wide range of issues and problems within the larger American social universe despite the persistence of common stereotypes that label them as a “model minority” for the generalized attributes offered uncritically in many media depictions. Asian American Society: An Encyclopedia provides a thorough introduction to the wide-ranging and fast-developing field of Asian American studies. Published with the Association for Asian American Studies (AAAS), two volumes of the four-volume encyclopedia feature more than 300 A-to-Z articles authored by AAAS members and experts in the field who examine the social, cultural, psychological, economic, and political dimensions of the Asian American experience. The next two volumes of this work contain approximately 200 annotated primary documents, organized chronologically, that detail the impact American society has had on reshaping Asian American identities and social structures over time.
Features:
• More than 300 articles authored by experts in the field, organized in A-to-Z format, help students understand Asian American influences on American life, as well as the impact of American society on reshaping Asian American identities and social structures over time.
• A core collection of primary documents and key demographic and social science data provide historical context and key information.
• A Reader’s Guide groups related entries by broad topic areas and themes; a Glossary defines key terms; and a Resource Guide provides lists of books, academic journals, websites and cross references.
The multimedia digital edition is enhanced with 75 video clips and features strong search-and-browse capabilities through the electronic Reader’s Guide, detailed index, and cross references. Available in both print and online formats, this collection of essays is a must-have resource for general and research libraries, Asian American/ethnic studies libraries, and social science libraries.
The Greenwood Encyclopedia Of Asian American Literature 3 Volumes
Author : Guiyou Huang
ISBN : 9781567207361
Genre : Literary Criticism
File Size : 83. 79 MB
Format : PDF, ePub
Download : 678
Read : 733
Asian American literature dates back to the close of the 19th century, and during the years following World War II it significantly expanded in volume and diversity. Monumental in scope, this encyclopedia surveys Asian American literature from its origins through 2007. Included are more than 270 alphabetically arranged entries on writers, major works, significant historical events, and important terms and concepts. Thus the encyclopedia gives special attention to the historical, social, cultural, and legal contexts surrounding Asian American literature and central to the Asian American experience. Each entry is written by an expert contributor and cites works for further reading, and the encyclopedia closes with a selected, general bibliography of essential print and electronic resources. While literature students will value this encyclopedia as a guide to writings by Asian Americans, the encyclopedia also supports the social studies curriculum by helping students use literature to learn about Asian American history and culture, as it pertains to writers from a host of Asian ethnic and cultural backgrounds, including Afghans, Chinese, Japanese, Koreans, Filipinos, Iranians, Indians, Vietnamese, Hawaiians, and other Asian Pacific Islanders. The encyclopedia supports the literature curriculum by helping students learn more about Asian American literature. In addition, it supports the social studies curriculum by helping students learn about the Asian American historical and cultural experience.
Encyclopedia Of Asian American Folklore And Folklife
Author : Jonathan H. X. Lee
ISBN : 9780313350665
Genre : Social Science
File Size : 46. 49 MB
Format : PDF, ePub
Download : 421
Read : 927
This comprehensive compilation of entries documents the origins, transmissions, and transformations of Asian American folklore and folklife.
* More than 600 entries
* Contributions from more than 170 expert contributors
* Introductory essays covering disciplinary theories and methods in the study of folklore and folklife
* An appendix of Asian American folktales
Encyclopedia Of Chinese History
Author : Michael Dillon
ISBN : 9781317817161
Genre : History
File Size : 38. 4 MB
Format : PDF
Download : 192
Read : 495
China has become accessible to the west in the last twenty years in a way that was not possible in the previous thirty. The number of westerners travelling to China to study, for business or for tourism has increased dramatically and there has been a corresponding increase in interest in Chinese culture, society and economy and increasing coverage of contemporary China in the media. Our understanding of China’s history has also been evolving. The study of history in the People’s Republic of China during the Mao Zedong period was strictly regulated and primary sources were rarely available to westerners or even to most Chinese historians. Now that the Chinese archives are open to researchers, there is a growing body of academic expertise on history in China that is open to western analysis and historical methods. This has in many ways changed the way that Chinese history, particularly the modern period, is viewed. The Encyclopedia of Chinese History covers the entire span of Chinese history from the period known primarily through archaeology to the present day. Treating Chinese history in the broadest sense, the Encyclopedia includes coverage of the frontier regions of Manchuria, Mongolia, Xinjiang and Tibet that have played such an important role in the history of China Proper and will also include material on Taiwan, and on the Chinese diaspora. In A-Z format with entries written by experts in the field of Chinese Studies, the Encyclopedia will be an invaluable resource for students of Chinese history, politics and culture.
Encyclopedia Of Japanese American History
Author : Brian Niiya
ISBN : 0816040931
Genre : History
File Size : 48. 6 MB
Format : PDF
Download : 412
Read : 1057
Chronicles the history of Japanese Americans with entries that reveal their culture, religion, accomplishments, and social interactions with other ethnic groups in America.
The Asian American Encyclopedia
Author : Franklin Ng
ISBN : 185435678X
Genre : Asian Americans
File Size : 36. 51 MB
Format : PDF, Docs
Download : 328
Read : 589
Explores the experience of Asian immigrants and the communities which they and their descendants have created in the United States, and offers information about the history, language and culture of Asian Americans' diverse countries of origin.
Encyclopedia Of Muslim American History
Author : Edward E. Curtis
ISBN : 9781438130408
Genre : Muslims
File Size : 67. 26 MB
Format : PDF
Download : 537
Read : 566
A two volume encyclopedia set that examines the legacy, impact, and contributions of Muslim Americans to U.S. history.
The Encyclopedia Of Contemporary Japanese Culture
Author : Sandra Buckley
ISBN : 041548152X
Genre : Foreign Language Study
File Size : 48. 63 MB
Format : PDF, Docs
Download : 557
Read : 381
With more than 700 alphabetically arranged entries, The Encyclopedia of Contemporary Japanese Culture offers extensive coverage of Japanese culture spanning from the end of the Japanese Imperialist period in 1945, right up to the present day. Entries range from shorter definitions, histories or biographies to longer overview essays giving an in-depth treatment of major issues. Culture is defined in its broadest sense to allow for coverage of the diversity of practice and production in a country as vibrant and rapidly changing as Japan. Including a new preface by the editor to bring the book fully up-to-date with cultural developments since 2001, this Encyclopedia will be an invaluable reference tool for students of Japanese and Asian Studies, as well as providing a fascinating insight into Japanese culture for the general reader. | http://journalistesdebout.com/pdf/asian-american-history-and-culture-an-encyclopedia-an-encyclopedia/ |
The design elements library Walls, shell and structure contains 29 symbols of structural elements: walls, rooms, windows, doors, pillars.
Use the vector stencils library Walls, shell and structure to draw the floor plans and other architectural drawings, blueprints, home and building interior design, space layout plans, construction and house framing diagrams using the ConceptDraw PRO diagramming and vector drawing software.
"A wall is a horizontal structure, usually solid, that defines and sometimes protects an area. Most commonly, a wall delineates a building and supports its superstructure, separates space in buildings into sections, or protects or delineates a space in the open air. There are three principal types of structural walls: building walls, exterior boundary walls, and retaining walls.
Building walls have one main purpose: to support roofs and ceilings. Such walls most often have three or more separate components. In today's construction, a building wall will usually have the structural elements (such as 2×4 studs in a house wall), insulation, and finish elements or surface (such as drywall or panelling). In addition, the wall may house various types of electrical wiring or plumbing. Electrical outlets are usually mounted in walls.
Building walls frequently become works of art externally and internally, such as when featuring mosaic work or when murals are painted on them; or as design foci when they exhibit textures or painted finishes for effect. | https://www.conceptdraw.com/examples/building-symbol-of-a-window |
Creating reciprocal teaching and learning at Parkway Northwest
This guest blog post is the latest in a series from Christina Puntel and Geoffrey Winikur.
As a way to talk back to the Inquirer’s “Assault on Learning” series, many teachers wanted to describe teaching and learning through a different lens. As teachers, we are often conditioned to view our students through misleading quantifiable measures, and thus often become complicit in deficit-based thinking about students and our profession.
What does it take to re-imagine school as a site for reciprocal teaching and learning? At Parkway Northwest, Geoff, Christina, and a group of colleagues participated in the act of re-thinking what a college preparatory curriculum looks like.
The experience at Parkway Northwest revealed a desire among teachers to teach social justice content in ways that gave students a balance of autonomy and companionship. This dialog, and the resulting SHARE project that developed at the school, turns deficit talk upside down and reclaims school as a fertile site for teacher and student learning.
Reform from within: Discourse and dialog
Parkway Northwest High School for Peace and Social Justice is a small high school that complements the School District curriculum with opportunities for students to learn about peacemaking, social justice, and leadership. Two years ago we were forced to confront the fact that many of our students were not prepared to complete a substantive research paper. This led us to grapple with the question of what it really means to prepare our students for higher education.
We began to meet, both formally and informally, in order to think about what skills students need to both enroll in college and graduate. This process was very complex and involved many lengthy discussions between teachers about what we were doing well and where we needed more professional development. As a language teacher, Christina noted her overreliance on worksheets and prepackaged curriculum. In college, language study was about culture, politics, and identity. Other discussions among teachers focused on implementing basic-skills instruction versus valuing multi-disciplinary projects that engaged students in both skills and process learning.
Given our school’s mission, we decided that the best way to develop school-wide research skills was to design interdisciplinary projects based on research about social justice issues. Although Geoff’s experiences at Gratz provided some insight into how this could be done, we also visited a few other project-based schools and quickly realized that we could not replicate another model; rather, we had to reform from within.
Reform from within: Teacher collaboration
The process that we used as teachers to develop ourselves into learners turned out to be a generative one. Teachers who enjoyed a degree of flexibility in their classes decided to build on many of the sustainable reform models that our principal, Ethyl McGee, was already implementing. They created a collaborative research project for all 9th graders that also included a group of upperclassmen across five content areas: Spanish, history, art, research, and English (SHARE).
Teachers chose curriculum around a question, “What is culture?” We deliberately chose content that we felt connected to, that we knew well. Initially, as a teacher collaboration team, we agreed on four ideas that we wanted to see to fruition in the work:
- We wanted to plan our teaching around content we knew well, and around ideas that energized us. We thought that if we had deep connections to the content we were teaching, students would engage at a high level along with us in the learning process.
- We wanted to differentiate instruction for all learners, those with IEPs, late arrivers, non-attendees, students who were on their game all the time, students who worked well in groups, students who worked best alone, etc. We thought we could do this by making explicit the teaching strategies in our lessons and by building in a cooperative approach. We also thought we could do this by providing exciting content, to give all students the desire to know more, to question, to re-read, to re-imagine.
- We wanted students to work on challenging projects together with a nice mix of freedom and support. We thought that if we provided just the right mix, students would rise to the occasion, work interdependently and show us what they learned in authentic ways.
- We wanted students to research some aspect of the content we were exploring together to present to the school during schoolwide teach-ins, or workshop days. We thought that learning in order to teach others would make the work meaningful. We also knew we had to model the teaching strategies we thought students could use during their teach-ins, and encourage them to think about teachers from their past in order to practice some of the methods that worked for them.
As a collaboration team, we learned a great deal about teacher learning, student learning, and research. In the next post, we will share more about what actually happened during the project when students studied social justice issues like violence and anti-violence through different scholarly approaches.
It’s official: Majority of Americans think women are just as competent as men, if not more so
Good news, ladies: Americans now think women are just as smart and just as competent as men.
And it gets better: Among the 25% of respondents who did perceive a gender difference in smarts, most said that women were more intelligent and competent than men.
So says a scientific study published Thursday in the journal American Psychologist that examines Americans’ perceptions of women over the past 70 years.
“It’s a pretty dramatic shift,” said Alice Eagly, a social psychologist at Northwestern University in Illinois who led the work. “If you think women are still seen as less capable than men, then forget it. That is not the case.”
Eagly and her co-authors analyzed 16 public opinion polls spanning from 1946 to 2018 to see how gender stereotypes have evolved over time. Specifically, they looked at three clusters of personality traits that they define as competence, communion and agency.
Competence traits include being organized, intelligent and capable.
The communion cluster includes traits generally associated with good social skills — warmth, compassion, expressiveness, generosity and altruistic impulses.
Agency traits are more self-oriented and include assertiveness, decisiveness and even aggression.
The polling data, collected by different groups over seven decades, were not uniform and took a fair amount of finessing to get into usable order, Eagly said.
For example, one poll might ask a respondent who he or she thought was more likely to be compassionate:
A) Men
B) Women
C) Men and women are equally compassionate
Another poll might ask a similar question about kindness.
To assemble enough data to be statistically significant, three researchers categorized each of the questions to see which cluster of traits they best fit. Responses to a question about whether men or women were more organized were put in the “competence” category. A question about who was more likely to stay calm in an emergency went in the “agency” category.
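The categorization step described above can be illustrated with a toy sketch. The keyword lists below are invented for illustration and are not the study's actual codebook; they simply mirror the trait clusters the article defines.

```python
# Map each poll item's trait word to one of the three clusters defined above.
# These keyword sets are assumptions for illustration only.
CLUSTERS = {
    "competence": {"organized", "intelligent", "capable"},
    "communion":  {"compassionate", "kind", "warm", "generous"},
    "agency":     {"assertive", "decisive", "aggressive", "calm in an emergency"},
}

def classify(trait):
    """Return the cluster a poll question's trait word belongs to."""
    for cluster, words in CLUSTERS.items():
        if trait in words:
            return cluster
    return "uncategorized"

print(classify("compassionate"))         # → communion
print(classify("calm in an emergency"))  # → agency
```

Once every question is tagged this way, responses can be pooled by cluster, which is what let the researchers combine non-uniform polls into statistically meaningful trends.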
Eventually, clear trends emerged.
Eagly said it was not much of a surprise that the perceived competence of women steadily increased over time. As more women entered the workforce in the second half of the 20th century, more Americans had an opportunity to observe women in roles that require organization, intelligence and ability, she said.
“Back in the ’40s, the public didn’t see women working as journalists or professors, and they were not famous in the sciences,” she said. “We didn’t see them doing the wonderful brilliant things that we now see women doing.”
In addition, women now earn more bachelor’s, master’s and doctoral degrees than men — another shift from decades ago.
The bigger surprise was the discovery that over the same period, women were increasingly likely to be seen as more compassionate and socially skilled than men.
In the 1940s, just a smidge over 50% of respondents thought women had better people skills and were more compassionate and kind than men. By 2018, that percentage had grown to about 75%.
The researchers also found that there has been little change in perceptions of agency. In the mid-20th century, most Americans believed that men were more likely to be assertive than women — and that remains the case today as well.
All this may seem counterintuitive. How is it that as women play an increasingly larger role in the workforce, they are also seen as increasingly altruistic?
Eagly chalks it up to the way women and men are segregated within fields, with women often filling roles that require skills associated more with communion than agency.
For example, while there are far more women doctors now than in the 20th century, women are more likely to be pediatricians or internists, which require people skills, she said. In business, women are more likely to take on leadership roles in human resources and public relations, which also depend on good interpersonal communication.
“Stereotypes form automatically based on all the things we observe directly and indirectly,” she said.
So as the American public increasingly saw women taking jobs that required traits associated with communion, they increasingly saw women as having those traits.
Peter Glick, a social psychologist specializing in gender discrimination at Lawrence University in Wisconsin, said the work is important because it tracks how stereotypes about men and women have changed over time.
However, he cautioned that while the “female competence advantage” may feel like a win for women, it doesn’t mean discrimination is dead.
“Discrimination is a bit like that ‘whack-a-mole’ game,” he said. “Gains in one area get offset by other routes toward inequality popping up.”
Even though women are now perceived as just as competent as men, they aren’t seen as assertive enough to be promoted into leadership roles.
“Despite all the social changes, women remain the ‘nurturers’ and men still dominate leadership positions in business, politics, etc.,” he said.
Stanford University sociologist Cecilia Ridgeway agreed. She said that while women’s gains in education and the labor force have reduced the status gender gap, it has clearly not eroded it completely. Today, she said, “men’s status advantage over women rests increasingly on their believed advantage in forceful agency.”
Could that change? Might there come a day when women are seen as just as assertive, decisive and goal oriented as men?
“That’s the big question,” Eagly said.
Social psychologists are still debating whether gender stereotypes that paint women as more nurturing and men as more assertive are innate or learned.
However, one thing seems clear: If we see more women being stubborn, arrogant, ambitious and confident, this lingering stereotype could change too.
“It’s a dilemma,” Eagly said. “I hate being with men who are very dominating and talk over you. Sometimes I think, ‘I could speak more too, but I don’t want to talk like they do.’”
Deconstructing Martin Luther King, Jr.'s Dream
"We want all of our rights!" Martin Luther King, Jr. told a throng of people gathered in and around Detroit's Cobo Arena on June 23, 1963. He was speaking at what he called the largest and greatest demonstration for freedom ever held in the United States. "We want them here, and we want them now!" he said.
Fifty years ago this August, King gave the iconic "I Have A Dream" speech at the March on Washington. But the first time King spoke of his dream of equality and brotherhood between the races was earlier that summer in Detroit.
Parts of King's Detroit speech may sound familiar to those who have heard the address he gave at the March on Washington. But the Detroit speech was tailored especially for a city with a long history of Civil Rights activism.
"I have a dream this afternoon, that one day, right here in Detroit," King said, "Negroes will be able to buy a house or rent a house anywhere that their money will carry them, and they will be able to get a job."
Organizers of "The Walk to Freedom" wanted to speak out against the brutality that Civil Rights activists faced in the South. They also wanted to address the inequities in jobs, housing and education faced by blacks in the North. King's speech dealt with it all.
"I have a dream this afternoon ... that one day little white children and little Negro children will be able to join hands as brothers and sisters," King said.
King gave his Detroit speech just two weeks after NAACP field secretary Medgar Evers was assassinated. His speech also came on the heels of protests in Birmingham, Ala., where police chief Bull Connor ordered police to use fire hoses and dogs to break up demonstrations.
"Before the victory is won, some like Medgar Evers may have to face physical death," King told the Michigan crowd. "But if physical death is the price that some must pay to free their children and white brothers from an eternal psychological death, then nothing can be more redemptive."
Grace Lee Boggs, 98, a longtime Detroit resident and activist who helped to organize the "Walk to Freedom," attended King's speech afterward.
Boggs said King's address spiked the Civil Rights movement in the city. People there remember it not so much for the speeches, but for the organization and the way it brought the whole state together.
"It really changed conditions in the city," Boggs said.
Gloria Mills was just 14 in June 1963, but she was there the day more than 125,000 people streamed down Woodward Avenue.
"It was just thrilling," Mills recalled, adding that they piped King's voice out of Cobo Arena all around the river area so the people outside could hear. "I can still feel the chills from listening to that man speak because of the timing and then his leadership.
"We just knew we were on the precipice of something very, very important."
Lonnie Bunch, director of the Smithsonian's National Museum of African American History and Culture, said King often tried out themes for his speeches. Not only did he give his Detroit speech in June, but he also gave a similar one in Newark, N.J., in January 1963. Bunch says King knew it was critical to have the right tone. And by the time it came to the March on Washington, King knew exactly the rhythm he wanted and the story he wanted to tell.
"While there's something wonderful about imagining it as if it sprung from the head of Zeus, it really is part of long months of work and study just to get it right," Bunch said.
At the University of Michigan in Ann Arbor, Stephen Ward, a professor of African-American history, said people shouldn't think of King's speech in Detroit as a preview of his words at the March on Washington. Ward, who specializes in the Black Power movement of the 1960s and '70s, says the Detroit and Washington speeches are not the same.
"I think he was giving a speech for each context," Ward said. "In (Washington) D.C., he's speaking to a national audience, in the nation's capital, trying to push the civil rights legislation."
Like many preachers, and Civil Rights activists, Ward says King would use some of the same language and ideas in different places. And he believes the way most people think about the March on Washington — and the speech — is limiting.
"It was his March on Washington speech," Ward said, "which included the 'I Have a Dream' oration and idea which he gave at least in Detroit and probably a few other places, because those are the things he was thinking about and he was communicating and sharing."
The NAACP in Detroit and UAW-Ford sponsored a 50th anniversary celebration of the "Walk to Freedom."
Detroit NAACP President Rev. Wendell Anthony said the march and a series of workshops were about trying to continue King's work against violence, racism, voter suppression and the inequities in employment that continue to plague his city.
"Dr. King was about bringing folk together," Anthony said. "So the fact that we leave him on the mountaintop dreaming is our fault. He wouldn't want to be there. He wants to be back down in the valley working."
Copyright 2020 NPR. To see more, visit https://www.npr.org. | https://www.tpr.org/2013-06-23/deconstructing-martin-luther-king-jr-s-dream |
This is part 2 of my snowboard series on tuning your edges, if you haven’t detuned your edges properly, go read my detuning guide first.
Alright, so you’ve detuned your edges appropriately and assuming that you aren’t riding boxes and rails all day, you now need to sharpen and bevel your remaining edges. Obviously, don’t sharpen and bevel any places you’ve detuned, that would put you back at square one again.
Bevelling your edges
What is bevelling?
In simple speak, this means you're going to adjust the angle of your edge away from square (90 degrees). The diagram below should make this a little clearer for you.
This diagram shows you different bevels with your snowboard placed upside down, base facing up:
What does base bevel do?
Base bevel lifts your bottom edge away from the snow, so there's less edge in contact with the snow, which results in better gliding and speed. You'll catch edges less easily, but your board will also be a little less stable and precise when turning.
What does side bevel do?
Side bevel increases the grip of your edge. This is especially good for cutting into icy surfaces, but it also means you’ll need more precise control over your edges to avoid gripping into snow when you don’t want to.
What bevel angles do I want?
As you can see in the diagram above, adding a side bevel decreases the total angle of your edge and adding a base bevel increases the angle of your edge. The trick is getting the right balance between the two so that it matches the type of riding you want to do.
Warning: You can’t really un-do a bevel. You can add more bevel, but it’s really hard to make a 3 degree bevel back into a 2 degree bevel. Make sure you want that bevel angle before going too far.
Here is a good overall guide of what bevel angles you can use for what sort of riding:
Beginner – 1 to 2 degree base, 0 to 1 degree side
Intermediate – 1 degree base, 1 degree side
Freerider – 1 degree base, 1 to 2 side
Spinner – 2 degree base, 0 degree side
Boardercross – 0 to 1 degree base, 1 to 2 degree side
Halfpipe – 1 degree base, 1 degree side
Slalom Race – 0 to 0.5 degree base, 3 to 4 degree side
GS Race – 0.5 to 0.75 degree base, 2 to 3 degree side
Super G – 1 degree base, 2 to 3 degree side
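Following the rule stated above (a side bevel decreases the total edge angle, a base bevel increases it), the included angle for any tune in the list can be worked out with simple arithmetic. A quick sketch:

```python
def edge_angle(base_bevel_deg, side_bevel_deg):
    """Total included edge angle, per the rule above:
    side bevel sharpens (decreases) the angle,
    base bevel blunts (increases) it, starting from a square 90 degrees."""
    return 90.0 - side_bevel_deg + base_bevel_deg

# Recommended all-round tune: 1 degree base, 1 degree side
print(edge_angle(1, 1))    # 90.0 — same total angle, but the edge is lifted off the snow
# Slalom race tune: 0.5 degree base, 4 degree side
print(edge_angle(0.5, 4))  # 86.5 — a noticeably sharper edge for gripping ice
```

This makes the trade-off concrete: a racer's 86.5-degree edge bites harder than the square 90-degree all-round tune, at the cost of being less forgiving.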
What bevel angle I’d recommend
Go with a 1 degree base, 1 degree side or a 2 degree base, 2 degree side if your tool only does 2 degree bevels. Overall a 1 degree bevel on your base and side is the most useful and well rounded bevel for riding all sorts of terrain. You can’t undo bevels, so there’s no point going further unless you intend to use this snowboard for only a specific type of riding.
Most other bevels are meant for halfpipe or racing specialists, so the only time I’d recommend another bevel angle is if you’re intending to use your snowboard as a jib/rail/box specific snowboard. If you’re intending to only hit rails and boxes exclusively, make your base bevel a 3 degree bevel to make it harder to catch edges on the rails and boxes.
How do I add a bevel to my snowboard?
It’s as simple as sharpening your snowboard like normal, but instead of leaving it at a 90 degree angle, set your edge sharpening tool to the bevel you want. See below for a guide to sharpening your edges.
Most tools will have either the number 88 or 89 on one side of the tool and 90 on the other side. 88 means that using that side of the tool cuts a 2 degree bevel, 89 cuts a 1 degree bevel, and 90 means that side doesn't add any bevel.
You can get slightly more expensive tools that let you adjust the exact bevel angle to the degree you want, but they are a little more expensive and usually unnecessary if you’re just wanting a standard 1 or 2 degree bevel on each side.
Sharpening your snowboard’s edges
Why sharpen your edges?
Nice sharp edges help you to cut into any ice and hard snow when you turn. This means you can grip into the snow better and hold your edge while turning.
How to sharpen your edges
It’s really simple, even easier than detuning your edges. You’ll want a special edge tuning tool built to hold a file in place while you run it along your edges, as well as a gummi/diamond stone for smoothing out any really rough spots left by the file.
The tool and gummi/diamond stone will cost about $10-20 each at any ski or snowboard shop.
Here’s a video on how to use the tool to sharpen your edge:
Remember that you have to sharpen the edge that runs along the base of your snowboard first before you sharpen the edge that’s on the side of your snowboard. Also, don’t forget to check the bevel angle of your tool before using it!
Hope this guide helped you out. Thanks again to Beau for submitting this question topic! | http://snomie.com/snowboards-edges-bevel-sharpen/ |
Wispy clouds of gas and a strange "superbubble" dominate the view of a new Hubble Space Telescope image.
The view stars a nebula, or gas cloud, known as N44, that is located in a nearby galaxy called the Large Magellanic Cloud. In the newly released image, you can see hydrogen gas glowing in the dark, along with dark dust lanes and stars of all ages, in a complex structure roughly 170,000 light-years from Earth.
NASA said the "superbubble," which appears in the upper central part of the gas cloud, is of particular interest because scientists are trying to figure out how the 250-light-year wide structure formed.
Related: The best Hubble Space Telescope images of all time!
"Its presence is still something of a mystery," agency personnel wrote in a statement, explaining there are two leading hypotheses. One is that huge stars blew away the gas with stellar winds, but the wind velocities measured there are "inconsistent" with what the models suggest, according to the statement.
Another possibility is perhaps a dying star's explosion, known as a supernova, caused the hollow in the gas. Lending credence to the supernova theory is evidence of at least one supernova remnant near the superbubble.
Astronomers spotted a 5-million-year-old difference between stars within the superbubble and stars at the rim of the superbubble. NASA said this age difference suggests "multiple, chain-reaction star-forming events" and pointed to a zone of intense star formation at the lower right of the superbubble, which appears in deep blue in the Hubble Space Telescope image.
The glowing gas of N44 pegs it as an emission nebula, a type of gas cloud whose molecules are energized by radiation from nearby stars. The gas emits light as it begins cooling, producing the glowing effect.
Follow Elizabeth Howell on Twitter @howellspace. Follow us on Twitter @Spacedotcom and on Facebook.
An organization is responsible for personal information under its control and shall designate an individual or individuals who are accountable for the organization's compliance with the following principles.
The knowledge and consent of the individual are required for the collection, use, or disclosure of personal information, except when inappropriate.
However, in consideration of Golf Québec’s services, including tournaments, Golf Québec reserves the right to use photographs and names of participants, without limitation, for television, radio, motion picture, print advertising, Internet and other related media communications related to such activities. For example, unless asked otherwise by a participant and in writing, it is possible that the complete list of scores and photo gallery of a tournament would be posted on Golf Québec's Facebook Page.
Personal information shall be as accurate, complete and up-to-date as necessary for the purposes for which it is to be used.
An individual shall be able to address a challenge concerning compliance with the above principles to the designated individual or individuals accountable for the organization's compliance.
• To comply with legal and regulatory requirements.
Personal information may also be used for other purposes, subject to obtaining your prior consent for such use.
Personal information will not be released to third parties other than for the mailing of partnership golf publications and materials. There are no circumstances under which we will provide or sell personal information, including your email address, to third parties.
Consent to use personal information may be obtained in various ways. We may obtain your express consent or we may determine that consent has been implied by the circumstances. Express consent could be in writing (for example, in a signed consent, e-mail or application form), or verbally in person or over the telephone. When we receive personal information from you that enables us to provide you with requested services, your consent to allow us to deal with that personal information in a reasonable manner would be implied. If you need to provide personal information about other individuals (such as employees, dependants, etc.), you must obtain their consent for these purposes prior to your disclosure to us.
Providing us with your personal information is always your choice. When you request services from us, we ask that you provide information that enables us to respond to your request. In doing so, you consent to our collection, use and disclosure of such personal information for these purposes. You also authorize us to use and retain this personal information for as long as it may be required for the purposes described above. Your consent remains valid even after the termination of our relationship with you, unless you provide us with written notice that such consent is withdrawn. By withdrawing your consent, or not providing it in the first place, you may limit or even prevent us from being able to provide you or an authorized third party (such as an employer) with the service desired.
In certain circumstances, consent cannot be withdrawn. There are legal exceptions where we will not need to obtain consent or explain the purposes for collection, use or disclosure of personal information. For example, this exception would apply if there is an emergency that threatens the life, health or security of an individual, or if we must comply with a court order.
Keeping information accurate and complete is essential. Having accurate information about you enables us to give you the best possible service. You have the right to access, verify and amend the information we have about you. We rely on you to keep us informed of any changes, such as a change of address, telephone number or any other circumstances - simply contact our office. Despite our best efforts, errors sometimes do occur. If you identify any personal information that is out of date, incorrect or incomplete, please let us know and we will make the corrections promptly and use every reasonable effort to communicate these changes to other parties who may have inadvertently received incorrect or out of date information from us.
© 2016 www.golfquebec.org. All Rights Reserved. | http://www.golfquebec.org/en/pages.asp?id=63 |
The use of the same obverse die by multiple cities in the Roman provinces has been much discussed from a numismatic standpoint, with particular focus on its implications for the system of coin production and for control of minting in the Roman provinces. Instead of considering die sharing as evidence to aid our understanding of the coinage, this article switches the focus onto the practice itself and examines it as a historical process in its own right. It examines how and why the practice came to be so widely adopted in the first half of the third century CE, using the spread of die sharing as a proxy for the spread of ideas. It reveals that the practice spread in a series of regional fits and starts, as the idea was experimented with by groups of cities, before being discarded and then often taken up again at a later point in time. In conclusion, I suggest that the explosion in use of shared dies in third-century CE Asia Minor can be better explained by connections between cities that were conducive to the spread of ideas than by any inherent benefit arising from the practice itself. | https://www.ajaonline.org/article/4243 |
Understanding that the Amped Wireless TAP-R2 is only an AC750 device, comparing it against our other Wireless-AC suite of routers might seem a bit unfair. We still wanted to get a measure of how well the TAP-R2 performs when connected to a very fast 802.11ac device.
We connected the Amped TAP-R2 to our desktop PC running Windows 8.1 Pro 64-bit and configured the router using Automatic settings for our 5GHz band, naming the SSID LEGIT. We then took our Alienware M17XR4 laptop to use as a client and connected it to a Netgear R7000 router. Next, we configured the Netgear R7000 in Bridge Mode and paired it with each of our 802.11ac routers, including the Amped Wireless TAP-R2. The R7000 is one of the latest wireless-AC routers on the market and works as an excellent bridge.
We took our Alienware M17XR4/R7000 bridge and didn't tell the client anything more than the SSID name for each router tested, letting it automatically choose the cleanest channel to connect to. For the fastest possible data throughput, the routers were set to Unsecure Mode with WMM turned ON. After connecting, we ran the application LAN Speed Test (LST) to measure file transfer and network speeds. LST builds a file in memory and then transfers the packet without the effects of Windows file caching. It then reports the time and calculates the network speed.
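The speed figure LST reports boils down to payload size over transfer time. A minimal sketch of that calculation (the 8.7-second transfer time below is a made-up illustration, not a measured value):

```python
def throughput_mbps(payload_bytes, seconds):
    """Network speed as a LAN speed tester reports it:
    payload size in bits divided by transfer time, in decimal megabits/s."""
    return (payload_bytes * 8) / seconds / 1_000_000

# e.g. a 100 MB write that takes 8.7 seconds:
print(round(throughput_mbps(100 * 1_000_000, 8.7), 1))  # → 92.0 Mbps
```

Working backwards the same way, the ~92 Mbps write speeds reported below imply the 100 MB test file completed in under nine seconds.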
We repeated the test 2 more times, rotating the router 90 degrees after each test, to make sure that the router's speed wasn't adversely affected by its orientation. We took speed measurements at three different locations in the Legit Bunker: in the Same Room at exactly 5.5 feet away from the test router; in a 2nd Room about 20 feet away from the router through a wooden door; and in a Far Room, one level below (wood-framed floor) where the client router/bridge sat. Doing this should give us a nice taste of how our routers perform in the real world.
Benchmark Results: Looking at the 1MB data in the same room should give us an indication of the maximum throughput our routers can achieve. With an Average Read Speed of 91.7 Mbps and an Average Write Speed of 93.40 Mbps, the Amped Wireless TAP-R2 is clearly outmatched by our suite of routers using multiple antennas over the 5GHz band.
Benchmark Results: We look at 100 MB sized data packets because it is a good indicator of what multimedia streaming performance should be. The throughput test results for the Amped Wireless TAP-R2 are consistent with what we saw at 1MB. Average Write Speeds are around 92 Mbps while Average Read Speeds drop slightly to around 80 Mbps.
HIP29096 is the reference name for the star in the Hipparcos Star Catalogue.
AE Columbae has alternative name(s): AE Col.
More details on objects' alternative names can be found at Star Names .
The location of the variable star in the night sky is determined by the Right Ascension (R.A.) and Declination (Dec.); these are equivalent to longitude and latitude on the Earth. The Right Ascension is how far, expressed in time (hh:mm:ss), the star is along the celestial equator. If the R.A. is positive then it's eastwards. The Declination is how far north or south the object is compared to the celestial equator and is expressed in degrees. For AE Columbae, the location is 06h 08m 14.54 and -28° 59' 41.8.
All stars, like planets, orbit round a central point; in the case of planets, it's the central star, such as the Sun, and in the case of a star, it's the galactic centre. The constellations that we see today will be different than they were 50,000 years ago or will be 50,000 years from now. Proper motion details the movements of these stars and is measured in milliarcseconds. The star is moving 109.89 ± 2.61 milliarcseconds/year towards the north and -80.34 ± 3.27 milliarcseconds/year east; a negative value simply means motion in the opposite direction (south or west). It's nothing to fear, as the stars are so far apart they won't collide in our lifetime, if ever.
Luminosity is the amount of energy that a star pumps out, expressed relative to the amount that our star, the Sun, gives out. The figure of 0.06 that I have given is based on the value in the Simbad Hipparcos Extended Catalogue at the University of Strasbourg from 2012.
The source of the information, where the star has a Hip I.D., is Simbad, the astronomical database based at the University of Strasbourg, France. Hipparcos was an E.S.A. satellite operation launched in 1989 that ran for four years. The items in red are values that I've calculated, so they could well be wrong. Information regarding Metallicity and/or Mass is from the E.U. Exoplanets database. The information was obtained as of 12th Feb 2017.
| Primary / Proper / Traditional Name | AE Columbae |
| Alternative Names | HIP 29096, AE Col |
| Constellation's Main Star | No |
| Multiple Star System | No / Unknown |
| Star Type | Variable Star |
| Galaxy | Milky Way |
| Constellation | Columba |
| Absolute Magnitude | 9.11 / 8.70 |
| Visual / Apparent Magnitude | 11.85 |
| Naked Eye Visible | Requires a 4.5 - 6 Inch Telescope |
| Right Ascension (R.A.) | 06h 08m 14.54 |
| Declination (Dec.) | -28° 59` 41.8 |
| Galactic Latitude | -21.52 degrees |
| Galactic Longitude | 235.48 degrees |
| 1997 Distance from Earth | 28.35 Parallax (milliarcseconds); 115.05 Light Years; 35.27 Parsecs |
| 2007 Distance from Earth | 23.43 Parallax (milliarcseconds); 139.21 Light Years; 42.68 Parsecs; 8,803,320.59 Astronomical Units |
| Galacto-Centric Distance | 24,207.84 Light Years / 7,422.00 Parsecs |
| Proper Motion Dec. | 109.89 ± 2.61 milliarcseconds/year |
| Proper Motion RA. | -80.34 ± 3.27 milliarcseconds/year |
| B-V Index | 1.40 |
| Stellar Luminosity (Lsun) | 0.06 |
| Exoplanet Count | None/Unaware |
| Mean Variability Period in Days | 0.194 |
| Variable Magnitude Range (Brighter - Dimmer) | 11.693 - 11.986 |
| SIMBAD Source | Link |
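As an illustrative aside (my own sketch, not part of the catalogue data), the distance figures above follow directly from the parallax: distance in parsecs is 1000 divided by the parallax in milliarcseconds, and one parsec is roughly 3.26 light years:

```python
# Convert a Hipparcos parallax in milliarcseconds to a distance.
LY_PER_PARSEC = 3.26156  # light years per parsec (approximate)

def parallax_to_distance(parallax_mas):
    """Return (parsecs, light_years) for a given parallax in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas
    return parsecs, parsecs * LY_PER_PARSEC

# The 2007 parallax for AE Columbae quoted above.
pc, ly = parallax_to_distance(23.43)
print(round(pc, 2), round(ly, 1))  # close to the 42.68 parsecs / 139.21 light years quoted
```

Small differences from the quoted light-year figure come down to the precision of the parsec-to-light-year constant used.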
7 Tasting Notes
I bought the Oolong sample set from Teavivre and had been discouraged after trying a couple of them. I thought maybe I should just stick to puerh teas and that oolong wasn’t for me! This tea changed all that for me, however. It has a smooth flavor with none of the bitterness I experienced from the other samples, even in the early steeps. The later steeps became sweeter, with some vanilla notes that were very pleasant. I can’t wait to give it another try.
Flavors: Floral, Grass, Menthol, Pleasantly Sour, Sweet, Vanilla
This tea may be for somebody, but it wasn’t for me. It has a strong rice flavor that seemed to mask all of the other flavors even in later steeps. After the fourth steep, some bitterness started to creep out. If you like the rice flavors in your tea, this is the tea for you. If you like a more traditional tasting pu-erh, I would shy away from this or stick to a sample.
Flavors: Rice, Tobacco, Wet Earth, Wood
I was so excited to try this tea when the 25g sample came in and I haven’t had time to brew it gong fu style yet, so I’ve only tried it western style so far. Regardless, it is one of the best white teas I have tried. Usually my biggest turn off to white teas is an overpowering floral taste but this one has a very nice balance of a smooth creamy taste within the floral taste. I will definitely be trying this gong fu style soon and following up. I would definitely reorder.
Flavors: Creamy, Floral, Hay, Honey, Vanilla
I love this tea on a cold winter’s morning to get me started. It reminds me a lot of a sweet chai which I really enjoy. It’s a little strong for a daily drinker, but excellent as an occasional pick-me-up.
World War II had just begun in Europe.
The Wizard of Oz captivated moviegoers’ imaginations with its revolutionary cinematography, portraying mysterious lands where strange rules applied.
At Princeton, another kind of magic was about to begin: a complete rethinking of time and reality, led by two extraordinary physicists, John Wheeler and his student Richard Feynman. Their mission: to rescue quantum physics from its mathematical contradictions and establish the fundamental ingredients of the universe.
Wheeler pondered the idea that all particles could be built from electrons, perhaps even from a single electron zigzagging backward and forward in time. Feynman, in turn, developed a new way of framing quantum mechanics as a medley of contradictory timelines, which Wheeler dubbed “sum over histories.” Time would no longer be a steady stream, representing a single progression of events, but rather a labyrinth of multiple paths, somehow all taken at once. Feynman later modified this idea to become the core of his Nobel-Prize-winning contributions to quantum electrodynamics.
After the physics community adopted Feynman’s radical notion, illustrated by special diagrams he designed, Wheeler tried to apply it to the universe itself. Wheeler’s goal was to identify an elusive quantum theory of gravitation. In the process, he proposed peculiar spatial shortcuts called wormholes, and other kinds of exotic entities called geons and spacetime foam. However, there was a catch. We, as observers, are part of the universe. Therefore a complete quantum theory of the universe needs to include us as well.
Enter another of Wheeler’s brilliant students, Hugh Everett, who proposed what became known as the Many Worlds Interpretation of Quantum Mechanics. In Everett’s model, human observers “split” into multiple copies in parallel universes each time they take a quantum measurement. Not only is time a labyrinth; it is a mirror maze reflecting multiple versions of each one of us!
Step into the quantum labyrinth and discover a realm of fascinating scientific history, with many unexpected twists and turns!
The Quantum Labyrinth: How Richard Feynman and John Wheeler Revolutionized Time and Reality, published by Basic Books, tells the story of how the two eminent physicists engaged in a lifelong exchange of ideas, resulting in many of the innovations of late 20th century physics. While outwardly they seemed very different, they shared a deep bond. The soft-spoken Wheeler, though conservative in appearance, was a raging nonconformist full of wild ideas about the universe. The boisterous Feynman was a cautious physicist who believed only what could be tested. Yet they were complementary spirits.
Feynman and Wheeler’s collaboration led to a complete rethinking of the nature of time and reality. It enabled Feynman to show how quantum reality is a combination of alternative, contradictory possibilities, and inspired Wheeler to develop his landmark concept of wormholes, shortcuts through space. Feynman received a Nobel Prize for his research, and became a renowned popularizer of science. Wheeler went on to pioneer the concept of black holes. Together, Feynman and Wheeler made sure that quantum physics would never be the same again.
Paul Halpern is a professor of physics at the University of the Sciences in Philadelphia, and the author of fifteen popular science books, including Einstein’s Dice and Schrödinger’s Cat. He is the recipient of a Guggenheim Fellowship, a Fulbright Scholarship, and an Athenaeum Literary Award. Halpern has appeared on numerous radio and television shows including “Future Quest,” “Radio Times,” several shows on the History Channel, and “The Simpsons 20th Anniversary Special.” He has contributed opinion pieces for the Philadelphia Inquirer, blogs frequently on Medium, and was a regular contributor to NOVA’s “The Nature of Reality” physics blog. He lives in the suburbs of Philadelphia.
Writing an essay is always about choosing the right words. Strong conclusion – the conclusion restates the thesis and uses parallel construction to give the essay a sense of importance and finality. In the previous lessons, students analyzed the structure of the model literary essay using the Painted Essay structure and wrote their introductory paragraphs. They build on these foundations in this lesson.
An expository essay is used to inform, describe, or explain a topic, using important facts to teach the reader about the matter. It is mostly written in the third person, using “it”, “he”, “she”, or “they”. An expository essay uses formal language to discuss someone or something. Examples of expository essays include: a medical or biological condition, a social or technological process, or the life or character of a famous person. Writing an expository essay usually consists of the following steps: organizing thoughts (brainstorming), researching the topic, developing a thesis statement, writing the introduction, writing the body of the essay, and writing the conclusion. Expository essays are often assigned as part of the SAT and other standardized tests, or as homework for high school and college students.
Now, just as with every other aspect of syntax we’re discussing here in this article, merely stating that this sentence is declarative or this sentence is exclamatory won’t garner you any extra points on the AP® English Literature exam. It would be just as absurd to write that this sentence is long, whereas this one is short. Assume your test grader is an intelligent person who can see these things for themselves.
A literary analysis is the process of reading a literary work very closely to figure out how the author gets their main points across. Begin by taking notes on the text and reading it very carefully, then develop and outline your argument. Write the analysis according to your outline, and proofread it carefully before turning it in or sending it on.
Show students that close reading and gathering evidence doesn’t have to be a mundane, one-dimensional process. Provide the context for the topic sentence, whether it relates to a quote, a particular incident in society, or something else. Offer evidence on the who, what, where, when, why, and how.
1. Offers a new fact or piece of evidence rather than analysis. Although it is possible to present two pieces of evidence together and analyze them in relation to each other, simply offering another piece of evidence as a stand-in for analysis weakens the argument. Telling the reader what happens next, or adding another new fact, is not analysis.
Getting around the Internet, you have definitely seen a literary analysis example or two, but they didn’t exactly look like a chapter from Tom Sawyer, right? The thing is, literary essay writing is a different game to play from the perspective of document creation. Maze-like ideas with numerous deeply creative descriptions are not what this type of essay is about. We’ll explain why right below.
A reflective essay is an analytical piece of writing in which the writer describes a real or imaginary scene, event, interaction, passing thought, memory, or form — adding a personal reflection on the meaning of the subject in the writer’s life. Thus, the focus is not merely descriptive. The writer does not simply describe the situation, but revisits the scene with more detail and emotion to examine what went well, or to reveal a need for further learning — and may relate what transpired to the rest of the writer’s life.
Start your introduction with a grabber. In a literary analysis essay, an effective grabber can be a short quote from the text you’re analyzing that encapsulates some aspect of your interpretation. Other good grabbers are quotes from the book’s author regarding your paper’s topic or another aspect relevant to the text and how you interpreted it. Place the quote in quotation marks as the first sentence of the introductory paragraph. Your next sentence should identify the speaker and context of the quotation, as well as briefly describe how the quote relates to your literary analysis.
Candidates are expected to write well-organized essays in clear and precise prose. The essay section is scored by faculty at the institution that requests it and is still administered in paper-and-pencil format. There is an additional fee for taking this section, payable to the institution that administers the exam.
From a general view, literary analysis delves into the why and tries to understand the obvious and hidden meanings that lurk beneath the main plot. It causes one not only to reflect on the story itself but to appreciate the bigger picture of history, the human condition, and so on. Reading analysis, on the other hand, is the act of reading to extract information.
Do not assume that as long as you address one of these points, your paper will be interesting. As mentioned in step 2, you’ll want to address these big topics in a complex manner. Avoid going into a topic with a preconceived notion of what you will find. Be prepared to challenge your own ideas about what gender, race, or class mean in a particular text.
Known as Gektograf, Gennadi Blokhin is a professional photographer from St. Petersburg, Russia. His series of black-and-white photographs of urban landscapes resembles paintings: paintings of a gloomy, hopeless St. Petersburg, with its poverty, misery, despair, illness, and death. This is St. Petersburg through the eyes of Rodion Raskolnikov, the main character of Dostoevsky’s “Crime and Punishment”. The rainy city of stone sphinxes, damp courtyard-wells, and gloomy ceremonial stones inspires photographers, artists, and poets, in particular Joseph Brodsky. According to him, there is a mystery to St. Petersburg: it really affects your soul; it forms it. A person who grew up there, or at least spent his youth there, is difficult to confuse with other people.
The Data Warehouse Engineer is tasked with overseeing the full life-cycle of back-end development of the business’s data warehouse. The Data Warehouse Engineer is responsible for the development of ETL processes, cube development for database and performance administration, and dimensional design of the table structure.
The Data Warehouse Engineer works closely with the data analysts, data scientists, product management, and senior data engineering teams in order to power insight and avail meaningful data products for the business and enable consistently informed management decisions.
Management: The Data Warehouse Engineer plays a managerial role in which he provides day-to-day support of the data warehouse and troubleshoots existing procedures and processes. He defines and promotes the department’s best practices and design principles for data warehousing techniques and architecture. The Data Warehouse Engineer additionally strives to improve data organization and accuracy. In this capacity, he monitors and troubleshoots performance issues on data warehouse servers and assists in the development of business intelligence, business data standards, and processes.
The Data Warehouse Engineer also plays a key role in technological decision-making for the business’s future data, analysis, and reporting needs. He supports the business’s daily operations, including troubleshooting of the business’s data intelligence warehouse environment and job monitoring. It is also the role of the Data Warehouse Engineer to guide the business in identifying any new data needs, to deliver mechanisms for acquiring and reporting such information, and to address the actual needs.
Design/Strategy: The Data Warehouse Engineer designs and supports the business’s database and table schemas for new and existing data sources for the data warehouse. He additionally creates and supports the ETL processes that load data into the warehouse, using SSIS and other technologies. In this capacity, the Data Warehouse Engineer designs and develops systems for the maintenance of the business’s data warehouse, ETL processes, and business intelligence.
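To make the dimensional-design idea concrete, here is a minimal, hypothetical sketch of a star schema with a toy ETL step. It uses Python’s built-in sqlite3 purely for illustration (a real warehouse would use SQL Server, SSIS, or similar), and all table and column names are invented for the example:

```python
import sqlite3

# A toy star schema: one fact table referencing one dimension table.
# Table and column names here are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        product_name TEXT NOT NULL
    );
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        sale_date TEXT,
        amount REAL
    );
""")

# "Extract" from a source system (here, a plain list), then "Load".
source_rows = [("Widget", "2017-02-12", 19.99), ("Widget", "2017-02-13", 24.50)]
for name, date, amount in source_rows:
    cur = conn.execute(
        "SELECT product_key FROM dim_product WHERE product_name = ?", (name,)
    )
    row = cur.fetchone()
    if row is None:  # "Transform": resolve the surrogate key, inserting if new
        cur = conn.execute(
            "INSERT INTO dim_product (product_name) VALUES (?)", (name,)
        )
        key = cur.lastrowid
    else:
        key = row[0]
    conn.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", (key, date, amount))

# A reporting query joins the fact table back to its dimension.
total = conn.execute("""
    SELECT d.product_name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.product_name
""").fetchone()
print(total)  # the product with its summed sales amount
```

The pattern shown, resolving a surrogate key in the dimension table before loading each fact row, is the essence of the transform step in many ETL pipelines.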
Collaboration: The Data Warehouse Engineer’s role is highly collaborative and, as such, he works closely with data analysts, data scientists, and other data consumers within the business to gather requirements and populate data warehouse table structures optimized for reporting. The Data Warehouse Engineer also works closely with other disciplines, departments, and teams across the business in coming up with simple, functional, and elegant solutions that balance data needs across the business.
The Data Warehouse Engineer partners with the senior data analytics management and senior data warehouse engineering in an attempt to refine the business’s data requirements, which must be met for building and maintaining data warehouses.
Analytics: The Data Warehouse Engineer plays an analytical role, quickly and thoroughly analyzing business requirements for reporting and analysis and then translating the results into good technical data designs. In this capacity, the Data Warehouse Engineer establishes documentation standards for reports and develops and maintains technical specification documentation for all reports and processes.
Knowledge: The Data Warehouse Engineer is also tasked with gathering best practices for big data stacking and sharing them across the business. The Data Warehouse Engineer provides expertise to the business in the areas of data analysis, reporting, data warehousing, and business intelligence. It is also the Data Warehouse Engineer’s duty to provide the business with technical expertise on business intelligence data architecture and on structured approaches for transitioning manual applications and reports.
Other Duties: The Data Warehouse Engineer performs other duties as he deems fit for the proper execution of his role, as well as duties delegated by the Senior Data Warehouse Engineer, Head of Analytics, Director of Analytics, Chief Data Officer, or the Employer.
Education: The Data Warehouse Engineer has to have a bachelor’s degree in Computer Science, Data Science, Information Technology, Information Systems, Statistics, or any other related field. An equivalent of the same in working experience is also acceptable for the position.
Experience: A candidate for this position must have at least 2 years of experience in SQL Server coding and SQL Server database administration. The candidate must have additional experience working with SQL Server Integration Services or similar ETL tools, and must also demonstrate a strong understanding of dimensional modeling as well as other data warehousing techniques.
A suitable candidate for this position will also have had experience in data warehouse development and architecture and hands-on physical and logical database designing. The candidate will additionally have experience with Teradata, coupled with experience working on projects within a collaborative setting composed of cross-functional, technical, and non-technical personnel. The candidate will further have had experience working with Tableau or any other business analytics platforms, for example, SpagoBI.
Communication Skills: Communication skills are a must have for the Data Warehouse Engineer. These skills will be necessary in facilitating the clear conveyance of technical communications in cross-functional settings. Communication skills will also be necessary in the drafting of clear and understandable data designs that will be reviewed by senior data warehouse engineers as well as the clear articulation of documentation and reporting processes that will apply across the business. The adherence to these processes and their maintenance will be highly dependent on the clarity with which they are described and conveyed by the Data Warehouse Engineer.
Computer Skills/MS Office/Software: The Data Warehouse Engineer must possess excellent computer skills and be highly proficient in the use of MS Word, PowerPoint, MS Excel, MS Project, MS Visio, and MS Team Foundation Server, which will all be necessary in the creation of visually and verbally engaging data designs and tables as well as the communication of documentation and reporting processes for use across all departments in the business. The candidate’s proficiency in data visualization tools will also make him better suited to this role.
Analytics: A candidate for this position must have a deep passion for data analytics technologies as well as analytical and dimensional modeling. The candidate must be extensively familiar with ETL (Extraction, Transformation & Load), data warehousing, and business intelligence tools such as Qlikview. The candidate must have the skill to draft, analyze, and debug SQL queries and also be proficient in a scripting language, for example, Java, Python, C Sharp, Perl, R, and so forth. The candidate must also have vast knowledge of database design and modeling in the context of data warehousing. He will additionally be skilled in diagnosing complex data warehouse ETL processes, business logic failures, and data flows, in order to quickly resolve issues.
Interpersonal Skills: The Data Warehouse Engineer has to be an individual with a positive can-do attitude, be open and welcoming to change, be a self-starter and be self-motivated, have an insatiable thirst for knowledge, be proactive and go beyond the call of duty, take accountability for business performance, have innovative problem solving skills, be a creative and strategic thinker, and have an ability to remain calm and composed in times of stress and uncertainty.
People Skills: The Data Warehouse Engineer must have an ability to establish strong, meaningful, and lasting relationships with others. He will be approachable and likable, inspiring trust and a feeling of dependability in others, hence giving more credibility to his insights and directives in the view of collaborating personnel, senior management, and his colleagues.
It has been suggested that the primary motor cortex plays a substantial role in the neural circuitry that controls swallowing. Although its role in the voluntary oral phase of swallowing is undisputed, its precise role in motor control of the more reflexive, pharyngeal phase of swallowing is unclear. The contribution of the primary motor cortex to the pharyngeal phase of swallowing was examined using transcranial magnetic stimulation (TMS) to evoke motor evoked potentials (MEPs) in the anterior hyomandibular muscle group during either volitional submental muscle contraction or contraction during the pharyngeal phase of both volitionally, and reflexively, initiated swallowing. For each subject, in all three conditions, TMS was triggered when submental surface EMG (sEMG) reached 75% of the mean maximal submental sEMG amplitude measured during 10 volitional swallows. MEPs recorded during volitional submental muscle contraction were elicited in 22 of the 35 healthy subjects examined (63%). Only 16 of these 22 subjects (45.7%) also displayed MEPs recorded during volitional swallowing, but their MEP amplitudes were larger when triggered by submental muscle contraction than when triggered by volitional swallowing. Additionally, only 7 subjects (of 19 tested) showed MEPs triggered by submental muscle contraction during a reflexively triggered pharyngeal swallow. These differences indicate differing levels of net M1 excitability during execution of the investigated tasks, possibly brought about by task-dependent changes in the balance of excitatory and inhibitory neural activity.
Beyond object play.
Facilitating generative play
More recent initiatives
Supporting Play with Peers
Children with disabilities engage in more solitary play, are likely to exhibit challenging behaviors without structure or supports, and either do not initiate interactions with other children or initiate them inappropriately.
Fun and Friendships
Teaching classroom personnel and peers how to support play increases interactions and engagement while decreasing challenging behavior
Autism Spectrum Disorder (ASD)
Increasing incidence
Highly variable characteristics
Early identification leads to earlier intervention (18 months rather than 4 years)
Why we need more research on play
Core deficits in ASD
Children with autism spectrum disorders (ASD) display core deficits in:
Symbolic play
Social interaction
Language (DSM-IV-TR; APA, 2000)
It is generally agreed that one or more types of play are delayed (e.g. symbolic, imaginative)
Early Intervention in ASD: Multiple high-quality studies
Outcomes
Increased SP and JA skills post play intervention
Generalized skills across persons
Increased vocabulary
Active engagement
Initiations of communication
Limitations
Variable use of standardized measures
More clinic based than classroom
Families or teachers not interventionist
Short term with little follow up
What happens if play is the independent variable? One example to consider: Symbolic Play Intervention
Multiple baseline design across participants
Setting: Children's ECSE classrooms
4 days/week, 18 min/session, 8-12 weeks total duration
Measures: Play (symbolic and nonsymbolic)
Language
On floor, near pretend play area in classroom
10 min/child (5 explicit, 5 supported)
During free-play time (observation)
Toys in all environments comparable and matched to developmental level and child interests
Play environments
Key findings
Children increased rates and levels/complexity of play (from NSP to SP)
Children increased total expressive language scores from baseline to intervention
Possible relation between language and play observed
Repetitive behaviors decreased in similar trend to increases in play signifying a possible relation
Independent play was observed in group times and areas
So what?
Professional development: play can be in the classroom; it doesn't have to be a reward
Doesn't need intensive special toys or times
Promotes competence and independence in key learning domains
Engaged parents and teachers in perspective changing
What do we need to know about play for children with ASD and other disabilities? Theory that accounts for the underlying play differences
Types/categories of play deficits (it would help if we had consistent terminology)
Relationships between play, language, social interaction and repetitive behaviors
Processes of play such as flexibility and diversity
Interventions including those that support parent and peer participation
What is next for the fields of EI and ECSE?
References

Honey, E., Leekam, S., Turner, M., & McConachie, H. (2007). Repetitive behavior and play in typically developing children and children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 37, 1107-1115.

Kasari, C., Freeman, S., & Paparella, T. (2006). Joint attention and symbolic play in young children with autism: A randomized controlled intervention study. Journal of Child Psychology and Psychiatry, 47, 611-620.

Kasari, C., Paparella, T., Freeman, S., & Jahromi, L.B. (2008). Language outcome in autism: Randomized comparison of joint attention and play interventions. Journal of Consulting and Clinical Psychology, 76, 125-137.

Lifter, K. (2000). Linking assessment to intervention for children with developmental disabilities or at-risk for developmental delay: The Developmental Play Assessment (DPA) instrument. In K. Gitlin-Weiner, A. Sandgrund, & C. Schafer (Eds.), Play diagnosis and assessment (2nd ed., pp. 228-261). New York: Wiley.

Lifter, K., & Bloom, L. (1989). Object knowledge and the emergence of language. Infant Behavior and Development, 12, 395-423.

Lifter, K., Ellis, J., Cannon, B., & Anderson, S.R. (2005). Developmental specificity in targeting and teaching play activities to children with pervasive developmental disorders. Journal of Early Intervention, 27, 247-267.

Lifter, K., Sulzer-Azaroff, B., Anderson, S., & Cowdery, G. (1993). Teaching play activities to preschool children with disabilities: The importance of developmental considerations. Journal of Early Intervention, 17(2), 139-159.

Luze, G.J., Linebarger, D.L., Greenwood, C.R., Carta, J.J., Walker, D., Leitschuh, C., et al. (2001). Developing a general outcome measure of growth in expressive communication of infants and toddlers. School Psychology Review, 30(3), 383-406.

Rubin, K., Fein, G., & Vandenberg, B. (1983). Play. In E.M. Hetherington (Ed.), Handbook of child psychology (Vol. 4): Socialization, personality, social development. New York: Wiley.

Ungerer, J.A., & Sigman, M. (1981). Symbolic play and language comprehension in autistic children. American Academy of Child Psychology, 20, 318-337.

Westby, C. (1980). Assessment of cognition and language abilities through play. Language, Speech, and Hearing Services in Schools, 11, 154-168.

Wing, L., & Gould, J. (1979). Severe impairments of social interaction and associated abnormalities in children: Epidemiology and classification. Journal of Autism and Developmental Disorders, 9, 11-29.
"BEAUTIFUL, SHIMMERING AND WONDERFUL"
LAUREN LAVERNE \\ BBC 6 MUSIC
"REAL, EFFORTLESS SIMPLICITY"
JAMIE CULLUM \\ BBC RADIO 2
Originally from the East Sussex coast, Isobel's music weaves a tapestry of breathtaking songwriting, detailed, electronic infused production and immersive field recording. An impressive discography of four self-released, self-produced solo albums and over 25 million Spotify streams has secured Isobel's place as one of the UK DIY music scene's most prolific and independent artists.
Alongside this, Isobel's sonic arts practice, cultivated during her time studying for her PhD at Belfast's renowned Sonic Arts Research Centre, reveals a truly distinctive exploration into sound, identity and place. This work has been archived in The British Library and The British Music Collection and performed internationally.
Isobel is currently working on a new collection of songs, sounds and visuals exploring themes of alienation, disability and disgust. More news on this coming soon.
I have been writing and editing for more than 35 years. I hold a degree in Journalism from Temple University in Philadelphia, where I studied writing for print and broadcast. After graduation, I spent five years with two major HR consulting firms, creating employee communications and editing complex consulting reports. I also designed and led in-house workshops for consultants on writing and self-editing. Following that, I wrote successful grant proposals for numerous educational and literacy nonprofits. Most recently, I have been working with an educational publisher on student and teacher materials, ensuring their accuracy and consistency.
In addition to polishing your text, I can help you distill your writing into reader-friendly, approachable prose, no matter how complex the material. I work with New York Times, AP and Chicago style guides, as you choose. As part of my service, I create a customized style guide to ensure consistency across all your communications. I can also use your existing style guide if you have one.
Discover the daily job duties, required skills, and common qualifications to become a front-end web developer in Australia. Explore statistics on the profession including salary potential, weekly hours, and main industries of employment.
Overview
Front-end web developers are responsible for designing and implementing the visual elements that users view and interact with in a web application. They are typically supported by back-end web developers – those who focus on the server side of web development.
Common job duties include analyzing and debugging website code and providing technical solutions to ensure the quality and efficiency of applications. Front-end developers typically work in team settings with other developers, web designers, or network specialists, communicating any web-related issues such as security, backup and disaster recovery.
Employment for web developers is expected to grow very strongly in the coming years – from 15,000 in 2018 to 18,000 in 2023.
Key Skills
- Excellent written comprehension for documenting website code, along with commenting or reporting on specific elements
- A team-player mentality with an ability to meet deadlines and collectively work towards specific requirements
- High-level knowledge of multiple programming languages
- Adaptability to new types and new versions of software
- Sharp programming skills for creating (and debugging) maintainable, efficient, and error-free code
- Advanced knowledge of Office applications, databases, and networks
Quick Facts
Front-end web developers can find work in most regions of Australia, with Victoria currently holding the largest share of employment. Weekly salary potential is higher than the all-jobs average, with plenty of opportunity for full-time work.
Salary Range
$50,000 – $108,000 (Median: $72,000)*
Average Weekly Hours
42 hours (vs. all jobs average of 44 hours)*
Main Industries
Professional, Scientific, and Technical Services; Information Media and Telecommunications; and Education and Training
Most Common Qualification Level
47.8% hold a Bachelor’s Degree*
STUDY PATHWAYS
Getting the skills you need is simple by studying one of our information technology courses below.
- ICT50615 – Diploma of Website Development
- ICT50118 – Diploma of Information Technology
- Certified Cyber Security Professional
*Sources: payscale.com.au and joboutlook.gov.au – All information is to be used as a guide only and was accurate at the time of publication.
Industry Partnerships
A BALANCED APPROACH TO TRAINING WITH DDLS
DDLS is Australia’s largest provider of corporate IT and process training, with the widest portfolio of strategic partners and courses in Australia. For more than 25 years, DDLS has been an award-winning non-accredited training provider, spanning across Australia and into Asia.
DDLS partners with world-class companies to help organisations and individuals in the IT industry remain up-to-date with new processes, technology and platforms to reduce risk and enable efficient business practices.
GAIN A COMPETITIVE EDGE WITH MICROSOFT AZURE
Azure Dev Tools for Teaching provides the developer tools and learning resources that students need to build their cloud-based skills. It also allows students to build, deploy, and manage applications with comprehensive Azure cloud services. Whether you’re a student getting started, or just interested in building cloud-based skills in your community, this platform has the cloud development resources you need. Azure-based tools and environments are used extensively during the AIICT learning cycle and help students become familiar with the Microsoft Azure environment used in the IT industry.
Specifically designed around building your expertise in today’s most popular technologies, the Microsoft Imagine Academy is focused on helping you to gain Microsoft Certification, a globally recognised credential that will give you a distinctive edge in the competitive job market.
SUPPLEMENT YOUR QUALIFICATION WITH LINKEDIN LEARNING
As an AIICT student, you will gain unlimited access to the global leader in online video training, LinkedIn Learning, an on-demand library of high-quality instructional videos covering a vast range of software, business and creative skills. LinkedIn Learning combines industry-leading content from Lynda.com with personalized course recommendations based on insights from LinkedIn’s network. | https://aiict.edu.au/careers/front-end-developer/
We work with medium-sized cities in Asia and the Pacific in shaping an inclusive, livable and sustainable future for all.
We help bridge the infrastructure–financing gap.
We use a demand-driven approach.
We provide solutions that are responsive to the development needs of each city we work with.
We strive for impact and sustainability.
By providing infrastructure project preparation and capacity development support, we aim to make a positive and lasting contribution to poverty alleviation, environmental improvement, climate change adaptation and mitigation, and governance. | http://cdia.asia/
Essay sources on The Great Gatsby:
- 13 May 2013: "But can I write a convincing high school essay about the novel after watching the film? ... Tobey Maguire as Nick Carraway in The Great Gatsby."
- Desire in The Great Gatsby — Anthropoetics XXI, no. 1, Fall 2015 (3 Oct 2018): "In The Great Gatsby, F. Scott Fitzgerald depicts the attitudes and character of the ...... Critical Essays on Scott F. Fitzgerald's The Great Gatsby."
- Essay Questions On "The Great Gatsby" (25 Apr 2017): "List of 10 possible questions and answers on 'The Great Gatsby' essay," essaybasics.com.
- The Great Gatsby delusion — Telegraph (8 Jun 2014): "In 1931, F Scott Fitzgerald wrote an essay, 'Echoes of the Jazz Age' .... It has become a cliché that Jay Gatsby allegorizes America itself: ..."
Overview
The Great Gatsby may be the most popular classic in modern American fiction. Since its publication in 1925, Fitzgerald's masterpiece has become a touchstone for generations of readers and writers, many of whom reread it every few years as a ritual of imaginative renewal.
Close analysis of beginning of chapter III in The Great ... The Great Gatsby Analysis Essay 530 Words | 2 Pages. The Use of Symbolism in The Great Gatsby: F. Scott Fitzgerald's novel The Great Gatsby is about a man named Gatsby and his struggle to attain the American Dream in 1920s Long Island. He fights to get his dream woman, and to do so, he must first become rich. The Motif of Driving in The Great Gatsby - Classics Network. A Discussion of Driving as Symbolism in The Great Gatsby: This essay seeks to address the motif of driving in The Great Gatsby. Driving is a recurrent image in the book and seems integrally connected with one of the book's more important themes. Why 'The Great Gatsby' is the Great American Novel - The ...
Mar 18, 2016 · Sample Great Gatsby essay topics include: loneliness in The Great Gatsby; the American Dream and hope for women in The Great Gatsby by F. Scott Fitzgerald; and The Great Gatsby and today's society in American culture.
F. Scott Fitzgerald's novel "The Great Gatsby" is a classic of American fiction and a staple in the literature classroom. great gatsby - theme of success | Samples of Thesis Essays: The descriptive writing makes each setting impact the theme greatly. Gatsby looked successful on the outside, but he died without achieving happiness, and the settings help to convey this theme to the reader. Please note that this sample paper on great gatsby - theme of success is for your review only. The Great Gatsby (Analyze this Essay Online) | Edusson Blog: The essay explores the ideology of "The American Dream" through The Great Gatsby. It begins by explaining what the American dream originally meant and what it means now. The writer mentions different themes in the novel and their effects on the lives of Americans as they pursue the real American dream.
The Great Gatsby was written by F. Scott Fitzgerald in 1925. Set in a fictional town on Long Island, the novel depicts the reality of wealthy communities behind their lavish parties and luxurious houses.
Gatsby was a great man who embodied the virtues of the American Dream to such a degree that it left cracks in the novel for the realism to escape. The equality and hope of the American Dream died with Gatsby, leaving the careless people of the social elite to prosper in its absence.
View this essay on The Great Gatsby. Set in the Jazz Age, the novel's backdrop is one in which flappers, music, booze, riches and alcohol-fueled... | https://writehdo.firebaseapp.com/cofone6550bega/essay-for-the-great-gatsby-2623.html
The aim of this paper is to examine the development of a financial framework for assessing the effectiveness of interventions. The research is based on evidence from Serbia. In terms of methods applied, we used econometric and scenario analysis. We presented, as separate items, issues such as "who" (Government budget: ministry, specific program, loan, donor, etc.), "how much" (the amount spent), "where" (NUTS 2 region), and "what" (type of initiative). In our model, each intervention applied to a regional development priority is linked to, and evaluated for, its effectiveness by observing the performance of the group of indicators associated with that priority. All data obtained from 8 sectors were categorized under 4 priorities: "People, Place, Productive Capacity, and Institutional Capacity". Our recommendations for optimizing the distribution structure of regional policies across regions are determined by analyzing the performance of the indicator groups and their relative rankings per NUTS 2 region. The results are significant for further theoretical and applied research, as well as for decision-making in the field of government financial policy. Our results confirmed that calculations of funds for regional development in strategic areas appear problematic because, in the past, there was no strategic distribution based on established facts that could be measured in terms of performance.
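The evaluation logic described above (scoring each priority through its associated group of indicators, then ranking NUTS 2 regions by relative performance) can be sketched in a few lines. The region names, indicators, values, and min-max normalization below are illustrative assumptions, not the paper's actual data or weighting scheme:

```python
import pandas as pd

# Hypothetical indicator values per NUTS 2 region, grouped by priority
data = {
    "region": ["Belgrade", "Vojvodina", "Sumadija-West", "South-East"],
    # "People" priority indicators
    "employment_rate": [62.0, 58.5, 54.1, 50.3],
    "tertiary_education": [31.0, 22.4, 18.9, 16.2],
    # "Productive Capacity" priority indicators
    "gdp_per_capita": [11800, 7600, 5200, 4300],
    "export_share": [28.0, 35.0, 26.0, 19.0],
}
priorities = {
    "People": ["employment_rate", "tertiary_education"],
    "Productive Capacity": ["gdp_per_capita", "export_share"],
}

df = pd.DataFrame(data).set_index("region")
scores = pd.DataFrame(index=df.index)
for priority, cols in priorities.items():
    # Normalize each indicator to [0, 1], then average within the group
    norm = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())
    scores[priority] = norm.mean(axis=1)

# Relative ranking per priority (1 = best-performing region)
ranks = scores.rank(ascending=False).astype(int)
print(ranks)
```

Here each indicator is min-max normalized so that indicators on different scales contribute equally; a real framework would also need weights, indicator directionality (whether higher is better), and handling of missing data.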
The object of study of this paper is a regional economic system which is complex, dynamic and developable by nature. The reproduction of material wealth necessary for the region is provided in the process of functioning of the above system through the interaction between the combinations of subjective (personal) and objective (material) elements, thereby meeting regional environmental and economic needs.
Smoking is a problem that brings significant social and economic costs to Russian society. However, ratification of the World Health Organization Framework Convention on Tobacco Control makes it possible to improve Russian legislation according to international standards. I describe some measures that should be taken by the Russian authorities in the near future, and I examine their efficiency. By studying the international evidence, I analyze the impact of smoke-free areas, advertisement and sponsorship bans, tax increases, etc. on the prevalence of smoking, cigarette consumption and some other indicators. I also investigate the obstacles confronting the Russian authorities when they introduce new policy measures and the public attitude towards these measures. I conclude that there are a number of easy-to-implement anti-smoking activities that need no financial resources but only political will.
One of the most important indicators of a company's success is the increase of its value. The article investigates traditional methods of company valuation and presents evidence that applying these methods is inappropriate in the new stage of the economy. It is therefore necessary to create a new valuation method based on the main new source of a company's success: its intellectual capital. | https://publications.hse.ru/en/articles/315255616
Burlington Ward 3 candidates say what form transportation should take in the city
As part of its coverage of the fall municipal election, the Burlington Post has compiled a series of questions relating to issues specific to the City of Burlington and Halton Region for the candidates running for ward and regional councillor and mayor.
Below is the first of four questions relating to issues specific to the City of Burlington posed to the candidates and their responses.
Do you feel the focus should be more on building up active transportation to support the coming mobility hubs plans or alleviating congestion on our roads through the immediate expansion of local infrastructure (I.e. more roads, connections, etc.)?
I believe that we are going to need to do both. As we have built out Burlington, we have more citizens with cars. Until we can work out a sensible transit plan, we will not accomplish reducing the number of cars we have on our roads. We also need traffic enforcement in place where commuters are cutting through residential side streets and speeding to get to Ancaster and other outlying areas.
Alleviating congestion on our roads is the more urgent priority, as it will have a greater positive impact on the quality of life for the vast majority of residents in our city. Focusing on improving the flow of traffic in Burlington will reduce transportation costs for local businesses, lower greenhouse gas emissions and leave you with more money in your pocket and more time to spend with your family and friends enjoying our community.
To keep Burlington moving we should invest in synchronizing our lights and in public transit. However, I don’t support the designation of the downtown as a mobility hub, which is providing cover for building unwanted high rises on downtown Brant Street. I want to remove the downtown as a mobility hub to focus growth around the GO stations, where active transportation will be important. We should add lanes wherever we can, but do it right.
A combined approach. It appears the city's current efforts at intersection improvements, such as providing turning lanes and removing centre medians to help to alleviate the stacking of vehicles, is working somewhat. For the long term, due to the constricted road design in this city, place focus on the mobility hub plans because in Burlington, the only way to relieve congestion is to get people out of their cars and lower all speed limits by 10 kilometres.
There is no reason we can’t do both. Improving and modernizing transit go hand-in-hand with alleviating congestion on our roads and meeting new infrastructure demands. If we focus now on increasing transit’s reliability and frequency, as well as right-sizing and electrifying our vehicles, we’ll give people more options over driving while reducing pollution; and if we work with developers on focusing growth near major transit stations, we can create more efficient traffic patterns.
Hold a series of town hall meetings. These meetings, which are open to all organizational members, provide a forum for discussing topics of common interest (e.g., diversity issues). The goal of these meetings is to build a stronger community through the open exchange of information and ideas. Town hall meetings offer several benefits including improved organizational communication, enhanced decision-making regarding diversity efforts, a greater sense of ownership and involvement on part of employees/community members, and reduced confusion and miscommunication regarding diversity goals and activities.
Author
Dr. Tyrone A. Holmes is an author, speaker, coach and consultant. He helps his clients develop the skills needed to communicate, resolve conflict, solve problems and improve performance in diverse organizational settings. | http://www.drtyroneholmes.com/blog/creating-a-climate-for-diversity-tip-14-town-hall-meetings |
This recipe is my homemade version of a Starbucks Frappuccino, which I find to be just as delicious. If you're looking for a quick and easy way to enjoy a Starbucks-style frappuccino at home, then this recipe is definitely for you! Enjoy!
Ingredients
Directions
Combine the coffee, milk, cream, sugar, and vanilla extract in a blender. Blend until smooth. Pour into glasses and top with whipped cream and chocolate sauce. Serve immediately. Enjoy! | https://www.eraofwe.com/coffee-lab/en/coffee-recipes/easy-homemade-starbucks |
The utility model relates to a shaft-current prevention structure for the rolling bearings of large- and medium-sized motors. It comprises a bearing outer ring and a bearing block, and is characterized in that an insulation layer is arranged both between the bearing outer ring and the bearing block and between the bearing end surface and the bearing cover. The insulation layer is a steel ring coated with Teflon, and the steel ring has an interference fit with the bearing block; the bearing block is heated and sleeved onto the insulation ring, the inner circle and end surface of the insulation ring are then machined, and the bearing is assembled after passing an insulation-resistance measurement. During assembly, a 0.5 mm layer of epoxy-phenolic glass-cloth laminate is placed between the bearing end surface and the bearing cover, so that the bearing is insulated from both the bearing block and the bearing cover. This insulation structure thoroughly resolves motor malfunctions caused by shaft-current damage to rolling bearings. Because the shaft-current problem is resolved, rolling bearings can be widely applied to large- and medium-sized electric motors, reducing manufacturing and application costs and providing economic benefit. | 
Our research focuses on the role of seafloor processes in ocean chemical cycles and ecology, the biogeochemical influences of hypoxia and anoxia, and new electrochemical tools for ocean observing networks.
New Project: Quantifying and Communicating the Impacts of Groundfish Bottom Trawling on Deoxygenation and Nutrient Fluxes off Oregon
Eddy Covariance
Over the last 12 years we have adopted the eddy covariance (EC) method as our main approach to studying benthic oxygen exchange, and we have applied EC to study the dynamics of fluxes on the Oregon shelf. Benthic oxygen fluxes represent the rate at which oxygen dissolved in seawater is consumed (or produced) by the biological community at the seafloor. Essential to eddy covariance measurements are reliable, fast-responding, low-noise, and fully calibrated oxygen sensors. We are working on new sensor designs and comparing sensors made in our lab to commercially available optodes and microelectrodes. In 2018 and 2019, we made repeated EC measurements at sites at 30 m and 80 m water depth adjacent to the Oregon line of the Ocean Observatories Initiative (OOI) Endurance Array to assess whether there are changes in the biological activity of sediments due to seasonal changes in organic matter input from overlying waters. High fluxes observed in winter months suggest that enough labile marine organic matter is retained in the shelf environment throughout the year to support substantial benthic fluxes of oxygen, total carbon dioxide and nutrients that are important in setting bottom-water conditions at the start of the summer upwelling season.
Image of integrated EC sensors: Nortek Vector ADV, Pyroscience fiberoptic optode and Rockland Microsquid thermistor
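The calculation at the heart of the eddy covariance method described above is the time-averaged covariance of fluctuations in vertical velocity and dissolved-oxygen concentration. The sketch below uses synthetic data and simple mean removal for the Reynolds decomposition; real EC processing (despiking, coordinate rotation, time-lag correction, spectral checks) is not reproduced here, and all variable names and numbers are illustrative:

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux <w'c'>: covariance of vertical velocity and a scalar.

    w : vertical velocity samples (m/s), e.g. from an acoustic Doppler velocimeter
    c : scalar samples (e.g. dissolved O2 in mmol/m^3) from a fast optode
    Returns the flux in (units of c) * m/s; negative means uptake at the seafloor.
    """
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    # Reynolds decomposition: fluctuations are deviations from the burst mean
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return float(np.mean(w_prime * c_prime))

# Synthetic 15-minute burst sampled at 8 Hz, with correlated fluctuations:
# here downward-moving eddies carry oxygen-rich water toward the bed.
rng = np.random.default_rng(0)
n = 15 * 60 * 8
turb = rng.normal(size=n)                                # shared turbulent signal
w = 0.002 * turb + rng.normal(scale=0.001, size=n)       # m/s
o2 = 250.0 - 5.0 * turb + rng.normal(scale=1.0, size=n)  # mmol/m^3
flux = eddy_covariance_flux(w, o2)
print(f"O2 flux: {flux * 86400:.0f} mmol m^-2 d^-1")     # negative => consumption
```

In practice the averaging window, sensor alignment, and sensor response time all matter; the fast, low-noise oxygen sensors mentioned above are what make the fluctuation term c' resolvable at all.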
Benthic Microbial Fuel Cells
Our group also continues to develop and evaluate revolutionary benthic microbial fuel cells (BMFCs) – devices designed as self-refueling power sources for fixed seafloor sensors. We have demonstrated powering acoustic modem/chemical sensor systems by unattended BMFCs at water depths ranging from 10-1000 m. We are investigating the microbial communities and chemistries that enable electron transfer in these systems. A unique discovery we made recently is finding multicellular, filamentous, sulfur-oxidizing bacteria, known as cable bacteria, attached to fibers of a carbon brush electrode serving as an anode of a benthic microbial fuel cell. This finding indicates that cable bacteria may in fact be facultative anaerobes. We are conducting follow-on work to use bioelectrochemical systems to enrich for cable bacteria and to study their mechanisms of electron transport from cell to cell and to a solid electrode. To learn more about our research please go to the links above. | https://blogs.oregonstate.edu/benthicbiogeochemistrylab/ |
As ocean levels rise, littoral communities will need the ability to act and adapt quickly. For coastal urban areas, rising seas could be particularly catastrophic, flooding cities and reshaping the shoreline and way of life. Sea-level rise affects not only infrastructure, but cultural and built heritage as well.
As historic preservation scholars Sujin Kim and Morris Hylton III note in their study on communications surrounding sea-level rise, “Beyond individual property adaptation, preservation specialists must engage in community-wide dialogues on endangered urban heritage and comprehensive resilience planning.”
Kym and Hylton examined new techniques to more accurately assess the impacts of rising seas as well as how to better communicate potential hazards to community members, researchers, and policymakers alike. They identified three primary challenges for communicating information about sea-level rise to the general, non-specialist public. First, they found that ideas about possible hazards, such as urban flooding, tend to be based on past experiences of similar events. “This experience-based assessment can often lead to an underestimation of the impacts of extreme disasters,” they note.
A second communication challenge comes from the very complexity of climate change. Kim and Hylton write that “people see sea-level rise as a seemingly surreal subject, an abstract phenomenon viewed as a long-term event, occurring in the distant future… the invisibility of the hazard can undermine people’s awareness and concerns.”
Third, communication about the effects of climate change becomes bogged down in jargon and data, failing to engage “a wide-ranging public audience.” Thus, they suggest, the problem of preparing communities to address potential threats to cultural and built heritage is not just about accurately modelling and assessing those threats, but effectively sharing that knowledge with the people who will be affected by sea-level rise.
Kym and Hylton wanted to specifically integrate sea-level rise models with impact assessments from a historical preservation point of view. To do this, they developed a tool that integrated “terrestrial laser scanning with a cultural-resource survey approach that utilizes a geographic information system (GIS),” allowing them to integrate historical data alongside climate predictions. Their research focused on communities in the United States where city life is intertwined with material history, culture, and the coast.
The first pilot project in their study centered on Cedar Key, a small island off the Gulf Coast of Florida. Cedar Key’s downtown, a National Register Historic District, features buildings from the mid-nineteenth through the early twentieth centuries. The district has long been plagued with hurricanes and flooding, the frequency of which has increased as sea levels have risen.
At the outset of the project, a town leader and university researcher organized a series of public lectures. “Instead of immediate discussions on rising sea levels,” write the authors, “the lectures first explored a range of multidisciplinary water-related topics, from the town’s wildlife to its drinking water. They then discussed storms in the town’s history, using generic words like ‘hazard.’”
Building on the lectures, Kim and Hylton created digital and physical models of the town and the predicted sea rise, as well as videos that animated the potential changes. The models weren’t just for research; they were used to communicate the basics of sea-level rise to the community. “The models did not represent a specific projection, rather a general idea of rising sea levels,” the authors note.
Kim and Hylton shared the physical model at a public event, using the digital model to contextualize the information “by visualizing significant historic structures and primary intersections with virtual flooding.” Together, the models “helped enhance the residents’ relatability through familiar and realistic presentations. The residents were often impressed by the accurate, granular-level simulation of water behavior in the virtual town environment, saying that it ‘matched’ what they had witnessed.” People tended to compare sea-level rise to memorable disasters, using those past events to verify the information provided by the models.
Kim and Hylton write that the models helped inform the community and generated discussion on potential next steps, concluding that “the digital visualizations had been an effective method for sharing information.”
Models like those used in the Cedar Key project, essential for research, can be integral to science communication strategies. Scientific analyses are generally associated with complicated equations or difficult-to-interpret results. But the goal of the Cedar Key models, designed with cultural and architectural preservation in mind, was to encourage conversations that would lead to long-term collaborative urban planning.
Overall, the researchers found the models, combined with the public lectures, allowed residents to integrate data and their own experiences. As a result of this multi-pronged approach, “the residents and visitors contemplated the issue and discussed it with others. This engagement helped disseminate the information throughout the community and triggered discussions on the next steps.”
| https://daily.jstor.org/improving-communications-around-climate-change/